WO2022022137A1 - Imaging method, apparatus, radar system, electronic device and storage medium - Google Patents

Imaging method, apparatus, radar system, electronic device and storage medium

Info

Publication number
WO2022022137A1
Authority
WO
WIPO (PCT)
Prior art keywords: radar, images, image, target, data
Prior art date
Application number
PCT/CN2021/100389
Other languages
English (en)
French (fr)
Inventor
孟辉 (Meng Hui)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21850599.8A (EP4184210A4)
Priority to KR1020237005514A (KR20230038291A)
Publication of WO2022022137A1
Priority to US18/160,169 (US20230168370A1)


Classifications

    • G01S13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S13/87: Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S13/89: Radar or analogous systems specially adapted for mapping or imaging
    • G01S13/90: Mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9021: SAR image post-processing techniques
    • G01S13/9029: SAR image post-processing techniques specially adapted for moving target detection within a single SAR image or within multiple SAR images taken at the same time
    • G01S7/41: Using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/411: Identification of targets based on measurements of radar reflectivity
    • G01S7/412: Identification of targets based on a comparison between measured values and known or stored values
    • G01S7/415: Identification of targets based on measurements of movement associated with the target
    • G01S2013/9327: Sensor installation details (anti-collision radar for land vehicles)
    • G01S2013/93272: Sensor installation details in the back of the vehicles
    • G01S2013/93274: Sensor installation details on the side of the vehicles
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/80: Geometric correction (image enhancement or restoration)
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/10032: Satellite or aerial image; remote sensing
    • G06T2207/10044: Radar image

Definitions

  • the present application relates to the technical field of automotive fusion perception, and in particular, to an imaging method, device, radar system, electronic device and storage medium.
  • the purpose of the present application is to provide an imaging method, apparatus, radar system, electronic device and storage medium, so as to solve the problem that the resolution of radar images generated by a vehicle-mounted synthetic aperture radar system is low, which affects the positioning accuracy and driving safety of the vehicle.
  • an imaging method comprising:
  • acquiring at least two groups of original radar data, wherein the at least two groups of original radar data come from at least two radars; and synthesizing a first target image according to the at least two groups of original radar data, wherein the first target image is obtained by performing image registration and time-domain coherent superposition on the at least two groups of original radar data.
  • In the present application, the first target image is obtained by acquiring at least two groups of original radar data from different radars and performing image registration and time-domain coherent superposition on them.
  • Each group of original radar data corresponds to one radar. Since the original radar data from different radars contain different radar information, after image registration and time-domain coherent superposition are performed on them, the obtained first target image has a higher physical resolution and richer image information. The resolution of the generated radar image is therefore improved and the image information in the radar image is enriched, thereby improving the positioning accuracy and driving safety of the vehicle.
  • acquiring the at least two sets of raw radar data includes: determining at least two target radars according to preset radar configuration information; controlling the at least two target radars to respectively transmit radar beams to their corresponding target directions; and acquiring echo data corresponding to the radar beams sent by the at least two target radars as the original radar data.
  • At least two target radars are determined through the preset radar configuration information, and the original radar data are obtained through the at least two target radars. Since the radar configuration information can be adjusted and set according to specific needs, the radar that best matches the current application scenario or application requirements, that is, the target radar, can be selected from the multiple vehicle-mounted radars, thereby improving the flexibility and scope of application of the radar system.
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information; wherein the radar position information is used to represent the position of the radar, and the radar emission angle information is used to characterize the emission angle of the radar.
  • the method further includes: performing delay processing on the at least two groups of original radar data, so as to achieve phase consistency of the at least two groups of original radar data.
  • performing delay processing on the at least two sets of original radar data to achieve phase consistency of the at least two sets of original radar data includes: acquiring an initial phase of reference radar data; and correcting the phases of the other raw radar data in the at least two sets of raw radar data according to the initial phase of the reference radar data.
  • synthesizing the first target image according to the at least two groups of original radar data includes: generating at least two corresponding groups of radar images according to the at least two groups of original radar data; performing image registration on the at least two groups of radar images to obtain at least two sets of registration images corresponding to the at least two sets of radar images; and performing coherent superposition of the at least two sets of registration images in the time domain to obtain the first target image.
  • Each group of original radar data correspondingly generates a set of radar images. Since each radar image is generated from original radar data collected by radars at different positions and angles, image registration is performed on each group of radar images so that different radar images produce overlapping areas, and the registered images are then coherently superimposed in the time domain to obtain a first target image, where the first target image contains the image content common to the radar images corresponding to the different radars. Because a single radar would need an excessively wide beam to cover the target detection object, the resolution of the radar image formed after processing its data alone is too low.
  • In the present application, the target detection object is detected by the narrow radar beams emitted by multiple radars, and the radar images obtained after acquisition are superimposed in the time domain to obtain a high-resolution image. This improves the positioning accuracy of the target detection object, thereby realizing accurate positioning of the vehicle and improving the safety of vehicle autonomous driving.
  • the method further includes: acquiring preset radar configuration information; and performing geometric distortion correction on the radar images according to the radar configuration information.
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information
  • performing geometric distortion correction on the radar images according to the radar configuration information includes: obtaining radar image correction information according to the radar configuration information; and performing geometric distortion correction on the radar images according to the radar image correction information, wherein the radar image correction information is used to describe different vehicle sizes and shapes and different radar installation positions and emission angles. Different types of vehicles, or vehicles with different radar installation positions and angles, have corresponding radar image correction information.
  • Performing geometric distortion correction on the radar images obtained after acquisition and processing by different radars can improve the accuracy of the radar images in the subsequent image registration and time-domain coherent stacking, avoiding the problem of low stacking accuracy during time-domain coherent stacking caused by different vehicle sizes, radar installation positions, emission angles and the like, and improving the time-domain stacking accuracy of the radar images.
  • At least two sets of original radar data include a set of reference radar data
  • performing image registration on the at least two sets of radar images to obtain at least two sets of registration images corresponding to the at least two sets of radar images includes: obtaining the reference radar image corresponding to the reference radar data; performing image translation on the other radar images in the at least two sets of radar images according to the target element in the reference radar image, so that the target elements in the other radar images coincide with the target element in the reference radar image; and using the reference radar image and the translated other radar images as the registration images.
  • after image translation is performed on the other radar images in the at least two sets of radar images according to the target element in the reference radar image, so that the target elements in the other radar images coincide with those in the reference radar image, the method further includes: acquiring phase information in the reference radar image; registering the phases of the other radar images according to the phase information, so that the phases of the other radar images in the at least two sets of radar images are consistent with the phase of the reference radar image; and using the reference radar image and the phase-registered other radar images as the registration images.
  • The reference radar image corresponding to the reference radar data is obtained, where the reference radar data can be determined by the radar configuration information or the specific application scenario, and image translation and phase registration are performed on the other radar images according to the reference radar image to generate registration images. Since the registration images have been aligned in the time domain and in phase, the first target image can be obtained by directly superimposing them, thereby improving the accuracy and resolution of the generated first target image.
  • the registration images include a common-view area image and a non-common-view area image, wherein the common-view area images of different registration images overlap each other; performing time-domain coherent superposition on the at least two groups of registration images to obtain the first target image includes: acquiring the common-view area images of the at least two groups of registration images; and performing time-domain superposition on the common-view area images to obtain the first target image.
  • The registration image includes a common-view area image used to describe the target detection object, and the common-view area images of different registration images overlap each other; performing time-domain coherent superposition on the common-view areas of multiple registration images can generate a common-view-area superimposed image with a higher resolution, that is, the first target image.
  • The first target image is generated by superimposing multiple common-view area images; therefore, compared with a single radar image, it has better image resolution and accuracy.
  • the method further includes: obtaining a positional relationship between the registration images, and splicing the non-common-view area images of the at least two groups of registration images on both sides of the first target image according to the positional relationship between the registration images, to obtain a second target image.
  • the non-common view areas of multiple sets of registration images are spliced on both sides of the first target image to obtain the second target image.
  • The registration images are obtained by processing the data collected by different radars; therefore, the non-common-view area of each registration image contains different image information.
  • The second target image thus has a wider field of view and richer detection information, which can further improve the positioning accuracy of the vehicle and the safety of autonomous driving.
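  • A minimal sketch of this splicing idea follows (it assumes two registration images whose common-view area occupies a known column range; the two-radar layout and the indexing convention are illustrative assumptions, not taken from the application):

```python
import numpy as np

def build_second_target_image(reg_a, reg_b, common_cols):
    """reg_a, reg_b: complex registration images of equal shape from two radars.
    common_cols: (start, stop) columns of their shared common-view area."""
    start, stop = common_cols
    # First target image: coherent superposition of the overlapping common-view areas.
    first_target = np.abs(reg_a[:, start:stop] + reg_b[:, start:stop])
    # Second target image: splice the non-common-view strips on both sides.
    left_strip = np.abs(reg_a[:, :start])    # area seen only by radar A
    right_strip = np.abs(reg_b[:, stop:])    # area seen only by radar B
    return np.hstack([left_strip, first_target, right_strip])
```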
  • an imaging device comprising:
  • a data acquisition module, configured to acquire at least two groups of original radar data, wherein the at least two groups of original radar data come from at least two radars; and an image generation module, configured to synthesize a first target image according to the at least two groups of original radar data, wherein the first target image is obtained by performing image registration and time-domain coherent superposition on the at least two groups of original radar data.
  • the imaging device obtains the first target image by acquiring at least two groups of original radar data from different radars and performing image registration and time-domain coherent superposition on them.
  • Each group of original radar data corresponds to one radar. Since the original radar data from different radars contain different radar information, after image registration and time-domain coherent superposition are performed on them, the obtained first target image has a higher physical resolution and richer image information. The resolution of the generated radar image is therefore improved and the image information in the radar image is enriched, thereby improving the positioning accuracy and driving safety of the vehicle.
  • the data acquisition module is specifically configured to: determine at least two target radars according to preset radar configuration information; control the at least two target radars to respectively transmit radar beams to their corresponding target directions; and acquire echo data corresponding to the radar beams sent by the at least two target radars as the original radar data.
  • At least two target radars are determined through the preset radar configuration information, and the original radar data are obtained through the at least two target radars. Since the radar configuration information can be adjusted and set according to specific needs, the radar that best matches the current application scenario or application requirements, that is, the target radar, can be selected from the multiple vehicle-mounted radars, thereby improving the flexibility and scope of application of the radar system.
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information; wherein the radar position information is used to represent the position of the radar, and the radar emission angle information is used to characterize the emission angle of the radar.
  • the imaging device further includes: a delay processing module configured to perform delay processing on the at least two groups of original radar data, so as to achieve phase consistency of the at least two groups of original radar data.
  • the delay processing module is specifically configured to: acquire the initial phase of the reference radar data; and correct the phases of the other raw radar data in the at least two groups of original radar data according to the initial phase of the reference radar data.
  • the image generating module is specifically configured to: generate at least two sets of radar images corresponding to the at least two sets of original radar data; perform image registration on the at least two sets of radar images to obtain at least two sets of registration images corresponding to the at least two sets of radar images; and perform coherent superposition of the at least two sets of registration images in the time domain to obtain the first target image.
  • Each group of original radar data correspondingly generates a set of radar images. Since each radar image is generated from original radar data collected by radars at different positions and angles, image registration is performed on each group of radar images so that different radar images produce overlapping areas, and the registered images are then coherently superimposed in the time domain to obtain a first target image, where the first target image contains the image content common to the radar images corresponding to the different radars. Because a single radar would need an excessively wide beam to cover the target detection object, the resolution of the radar image formed after processing its data alone is too low.
  • In the present application, the target detection object is detected by the narrow radar beams emitted by multiple radars, and the radar images obtained after acquisition are superimposed in the time domain to obtain a high-resolution image. This improves the positioning accuracy of the target detection object, thereby realizing accurate positioning of the vehicle and improving the safety of vehicle autonomous driving.
  • the imaging device further includes a distortion correction module for: acquiring preset radar configuration information; and performing geometric distortion correction on the radar image according to the radar configuration information.
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information
  • when the distortion correction module performs geometric distortion correction on the radar images according to the radar configuration information, it is specifically configured to: obtain radar image correction information according to the radar configuration information; and perform geometric distortion correction on the radar images according to the radar image correction information.
  • Performing geometric distortion correction on the radar images obtained after acquisition and processing by different radars can improve the accuracy of the radar images in the subsequent image registration and time-domain coherent stacking, avoiding the problem of low stacking accuracy during time-domain coherent stacking caused by different vehicle sizes, radar installation positions, emission angles and the like, and improving the time-domain stacking accuracy of the radar images.
  • the at least two sets of original radar data include a set of reference radar data
  • when the image generation module performs image registration on the at least two sets of radar images to obtain the at least two sets of registration images corresponding to the at least two sets of radar images, it is specifically configured to: acquire the reference radar image corresponding to the reference radar data; perform image translation on the other radar images according to the target element in the reference radar image, so that the target elements in the other radar images coincide with the target element in the reference radar image; and use the reference radar image and the translated other radar images as the registration images.
  • after the image generation module performs image translation on the other radar images in the at least two sets of radar images according to the target element in the reference radar image, so that the target elements in the other radar images coincide with the target element in the reference radar image, it is further specifically configured to: acquire phase information in the reference radar image; register the phases of the other radar images according to the phase information, so that the phases of the other radar images in the at least two sets of radar images are consistent with the phase of the reference radar image; and use the reference radar image and the phase-registered other radar images as the registration images.
  • The reference radar image corresponding to the reference radar data is obtained, where the reference radar data can be determined by the radar configuration information or the specific application scenario, and image translation and phase registration are performed on the other radar images according to the reference radar image to generate registration images. Since the registration images have been aligned in the time domain and in phase, the first target image can be obtained by directly superimposing them, thereby improving the accuracy and resolution of the generated first target image.
  • the registration image includes a common-view area image and a non-common-view area image, wherein the common-view area images of different registration images overlap each other; when the image generation module performs time-domain coherent superposition on the at least two groups of registration images to obtain the first target image, it is specifically configured to: acquire the common-view area images of the at least two groups of registration images; and perform time-domain superposition on the common-view area images to obtain the first target image.
  • The registration image includes a common-view area image used to describe the target detection object, and the common-view area images of different registration images overlap each other; performing time-domain coherent superposition on the common-view areas of multiple registration images can generate a common-view-area superimposed image with a higher resolution, that is, the first target image.
  • The first target image is generated by superimposing multiple common-view area images; therefore, compared with a single radar image, it has better image resolution and accuracy.
  • the image generation module is further specifically configured to: obtain the positional relationship between the registration images, and splice the non-common-view area images of the at least two sets of registration images on both sides of the first target image according to the positional relationship between the registration images, to obtain a second target image.
  • the non-common view areas of multiple sets of registration images are spliced on both sides of the first target image to obtain the second target image.
  • The registration images are obtained by processing the data collected by different radars; therefore, the non-common-view area of each registration image contains different image information.
  • The second target image thus has a wider field of view and richer detection information, which can further improve the positioning accuracy of the vehicle and the safety of autonomous driving.
  • the present application discloses a radar system.
  • the radar system includes: a processor, a memory, and at least two synthetic aperture radars, wherein the processor is used to control the synthetic aperture radars to send and receive signals; the memory is used to store a computer program; and the processor is further configured to call and run the computer program stored in the memory, so that the radar system executes the method provided by any one of the implementation manners of the first aspect above.
  • the present application discloses an electronic device, comprising: a processor, a memory, and a transceiver;
  • the processor is used to control the transceiver to send and receive signals; the memory is used to store a computer program; the processor is further used to call and run the computer program stored in the memory, so that the electronic device executes the method provided by any one of the implementation manners of the first aspect above.
  • the present application discloses a computer-readable storage medium, including computer code, which, when executed on a computer, causes the computer to execute the method provided by any one of the implementations of the above first aspect.
  • the present application discloses a computer program product, including program code; when the computer program product is run on a computer, the program code causes the computer to execute the method provided by any one of the implementation manners of the first aspect above.
  • the present application discloses a chip including a processor.
  • the processor is configured to call and run the computer program stored in the memory, so as to perform the corresponding operations and/or processes performed in the imaging method of the embodiments of the present application.
  • the chip further includes a memory, the memory is connected to the processor through a circuit or a wire, and the processor is used for reading and executing the computer program in the memory.
  • the chip further includes a communication interface, and the processor is connected to the communication interface.
  • the communication interface is used to receive data and/or information to be processed, and the processor acquires the data and/or information from the communication interface and processes the data and/or information.
  • the communication interface may be an input-output interface.
  • an embodiment of the present application provides a terminal, where the terminal may be an unmanned aerial vehicle, an unmanned transport vehicle, a vehicle, an aircraft, or a robot, or the like.
  • the terminal includes the radar system provided in the third aspect above, and can use the radar system to execute the imaging method provided in any implementation manner of the first aspect above; in one design, the terminal includes the computer-readable storage medium provided in the fifth aspect above.
  • the present application obtains a first target image by acquiring at least two groups of original radar data from different radars and performing image registration and time-domain coherent stacking. Each group of original radar data corresponds to one radar. Since the original radar data from different radars contain different radar information, after image registration and time-domain coherent superposition are performed on them, the obtained first target image has a higher physical resolution and richer image information. The resolution of the generated radar image is therefore improved and the image information in the radar image is enriched, thereby improving the positioning accuracy and driving safety of the vehicle.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the operation of a synthetic aperture radar system on a vehicle-mounted platform in the related art
  • FIG. 3 is a schematic flowchart of an imaging method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the setting position of a synthetic aperture radar according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of another imaging method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a target radar provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of adjusting a target radar according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a target radar transmitting a radar beam to a target direction according to an embodiment of the present application
  • FIG. 9 is a schematic diagram of a comparison between a dynamic target radar provided by an embodiment of the present application and a fixed target radar in the prior art
  • Fig. 10 is a flowchart of an implementation manner of step S204 in the embodiment shown in Fig. 5;
  • FIG. 11 is a schematic flowchart of another imaging method provided by an embodiment of the present application.
  • Fig. 12 is a flowchart of an implementation manner of step S307 in the embodiment shown in Fig. 11;
  • FIG. 13 is a schematic diagram of an image translation coincidence calibration provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of performing temporal coherent stacking of registered images according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another kind of temporal coherent stacking of registered images according to an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of an imaging device according to an embodiment of the present application.
  • FIG. 17 is a schematic block diagram of another imaging apparatus provided by an embodiment of the present application.
  • FIG. 18 is a schematic block diagram of still another imaging apparatus provided by an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of the structure of a radar system according to an embodiment of the present application.
  • FIG. 20 is a schematic structural block diagram of a vehicle fusion perception system provided by an embodiment of the application.
  • FIG. 21 is a schematic block diagram of the structure of an electronic device provided by an embodiment of the application.
  • FIG. 22 is a schematic block diagram of the structure of a radar according to an embodiment of the present application.
  • Synthetic aperture radar is a high-resolution imaging radar. It can detect in a variety of complex environments by actively transmitting microwaves, and generates high-resolution radar images similar to optical photography. Synthetic aperture radar has numerous advantages, such as small size, light weight, high imaging accuracy and insusceptibility to environmental influences. For the explanation of synthetic aperture radar, reference can be made to the description of the prior art.
  • Correspondence may refer to an association relationship or binding relationship, and A corresponds to B refers to an association relationship or binding relationship between A and B.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the imaging method provided by the embodiment of the present application can be applied to a radar system 11.
  • the radar system 11 is installed on a target vehicle and is used to realize positioning and obstacle recognition for the target vehicle.
  • the radar system on the target vehicle includes a plurality of distributed synthetic aperture radars 111.
  • the target vehicle can detect and perceive surrounding obstacles through the radar system, and adjust the driving route according to the detection and perception results to effectively avoid obstacles, or plan and adjust a reasonable driving route, so as to realize the automatic driving function of the target vehicle.
  • the imaging method provided by the embodiments of the present application may also be applied to the vehicle fusion perception system 12 , or a controller or chip in the vehicle fusion perception system 12 .
  • the vehicle fusion perception system 12 is set in the target vehicle; the vehicle fusion perception system 12 obtains the original radar data through the radar system 11 installed on the target vehicle, and processes and analyzes the original radar data to obtain information used to describe the position of the target vehicle.
  • FIG. 2 is a schematic diagram of the operation of a synthetic aperture radar system on a vehicle-mounted platform in the related art. As shown in Figure 2, if the vehicle-mounted synthetic aperture radar system 11 is to realize real-time imaging, it must complete data acquisition and data processing in a relatively short period of time. Since the imaging scene of the vehicle-mounted synthetic aperture radar system is relatively close to the synthetic aperture radar 111, real-time imaging requires the length of the synthetic aperture to be short, which limits the beam width in the azimuth direction and thus reduces the image resolution in the azimuth direction. Further, if the image resolution is too low, it will affect the accuracy of positioning and obstacle recognition based on the image in the later stage, resulting in reduced accuracy and safety when the vehicle controller performs vehicle path planning and automatic driving control.
  • In the present application, the raw radar data collected by multiple synthetic aperture radars are acquired and processed to generate multiple radar images, and image registration and time-domain coherent superposition are performed on the multiple radar images, so that imaging results obtained with a smaller beam width also have a higher image resolution, improving the accuracy of vehicle positioning and obstacle identification.
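  • For background only (standard strip-map SAR geometry, not part of the present disclosure), the usual relation between synthetic-aperture length and azimuth resolution illustrates why a short real-time aperture yields a coarse image:

```latex
% Azimuth (cross-range) resolution for a synthetic aperture of length L_{sa},
% a target at slant range R, and wavelength \lambda:
\rho_a \approx \frac{\lambda R}{2 L_{sa}}
```

  • Under this relation, a shorter collection time (smaller L_{sa}) coarsens \rho_a; loosely speaking, coherently combining the views of several radars plays a role similar to enlarging the usable aperture, which is consistent with the resolution improvement described in this application.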
  • FIG. 3 is a schematic flowchart of an imaging method provided by an embodiment of the present application.
  • the execution subject of the imaging method provided by this embodiment may be a radar system. As shown in FIG. 3, the method includes:
  • Step S101: acquiring at least two groups of original radar data, wherein the at least two groups of original radar data come from at least two radars.
  • the radar system includes at least two synthetic aperture radars, and the multiple synthetic aperture radars are distributed on the target vehicle.
  • FIG. 4 is a schematic diagram of the arrangement position of a synthetic aperture radar provided by an embodiment of the present application.
  • the synthetic aperture radars may be arranged on one outer side of the vehicle, or on two adjacent outer sides of the vehicle; the arrangement can be set according to specific needs, which is not limited here.
  • Synthetic aperture radar detects target objects by sending radar beams and receiving echo beams. The specific principles will not be repeated here.
  • the radar beams sent by different synthetic aperture radars will generate echoes after encountering effective obstacles.
  • Different synthetic aperture radars receive the echo signals corresponding to the radar beams sent by them, which are the original radar data.
  • the radar system, or a similar radar module, apparatus or device, can use multiple synthetic aperture radars to acquire the raw radar data according to the above process.
  • when the execution subject of the imaging method provided in this embodiment is a vehicle fusion perception system, a vehicle-machine system, a controller or chip in such a system, a cloud server, or another system, device or module that cannot directly obtain the original radar data, that system, device or module communicates with the radar system (or a similar radar module, apparatus or device) and obtains the original radar data through it; the process is not repeated here.
  • Step S102: synthesizing a first target image according to the at least two groups of original radar data, where the first target image is obtained by performing image registration and time-domain coherent superposition on the at least two groups of original radar data.
  • each group of original radar data is processed separately to generate a radar image corresponding to that group of original radar data; the radar image describes information such as the spatial location, structure and form of the detected object.
  • each group of radar images corresponds to a radar source respectively.
  • the radar beam emitted by each radar will cover an area, thereby forming multiple coverage areas corresponding to different radars.
  • the overlapping area is used as a benchmark to register different radar images, so that the overlapping areas of different radar images are aligned, and then temporal coherent superposition is performed to form the first target image.
  • the first target image can realize the image expression of the measured target, and at the same time has a higher resolution than the radar image formed by a single radar.
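  • As a rough illustration of the time-domain coherent superposition described above (a minimal sketch only; it assumes the registered radar images are complex-valued arrays on a common grid, which is an assumption for illustration rather than a statement from the application):

```python
import numpy as np

def coherent_superposition(registered_images):
    """Time-domain coherent superposition of complex radar images that have
    already been registered onto a common reference grid (all same shape)."""
    stack = np.stack([np.asarray(img, dtype=np.complex128) for img in registered_images])
    summed = stack.sum(axis=0)   # complex (coherent) sum: common content adds constructively
    return np.abs(summed)        # magnitude of the first target image

# Toy example with two noisy complex views of the same synthetic scene.
rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
views = [scene + 0.3 * (rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64)))
         for _ in range(2)]
first_target_image = coherent_superposition(views)
```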
  • In this embodiment, the first target image is obtained by acquiring at least two groups of original radar data from different radars and performing image registration and time-domain coherent superposition. Each group of original radar data corresponds to one radar. Since the original radar data from different radars contain different radar information, after image registration and time-domain coherent superposition are performed on them, the obtained first target image has a higher physical resolution and richer image information. The resolution of the generated radar image is therefore improved and the image information in the radar image is enriched, thereby improving the positioning accuracy and driving safety of the vehicle.
  • This solution can further be used to enhance the capability of autonomous driving or advanced driver assistance systems (ADAS), and can be applied to the Internet of Vehicles, for example vehicle-to-everything (V2X), long term evolution for vehicles (LTE-V), vehicle-to-vehicle (V2V) communication, and the like.
  • FIG. 5 is a schematic flowchart of another imaging method provided by an embodiment of the present application. As shown in FIG. 5, the imaging method provided by this embodiment is based on the imaging method provided by the embodiment shown in FIG. 3, with step S101 further refined and a delay processing step added after step S101. The method includes:
  • Step S201 at least two target radars are determined according to preset radar configuration information.
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information; wherein the radar position information is used to characterize the position of the radar, and the radar emission angle information is used to characterize the emission angle of the radar.
  • the radar configuration information includes radar position information and radar emission angle information.
  • For example, the imaging method provided by the embodiment of the present application is executed by a radar system; the radar system determines, from the multiple preset synthetic aperture radars and according to the radar position information or radar emission angle information, the target radars suitable for the current application scenario or application requirements.
  • FIG. 6 is a schematic diagram of a target radar provided by an embodiment of the application. As shown in FIG. 6, multiple synthetic aperture radars are arranged on the vehicle, including R1, R2, R3, R4 and R5 on the right side of the vehicle, L1, L2, L3, L4 and L5 on the left side of the vehicle, and M1 on the rear side of the vehicle; when the symbols of these exemplary synthetic aperture radars appear in the following figures, they are not described repeatedly.
  • Through the radar configuration information, the radar system determines the synthetic aperture radars R1 and R3, whose radar positions and emission angles more closely match the current application scenario, as the target radars.
  • the radar configuration information includes radar position information and radar emission angle information
  • For example, the imaging method provided by the embodiment of the present application is executed by a radar system; based on the radar position information and/or radar emission angle information in the radar configuration information, the radar system adjusts the positions and/or emission angles of some or all of the preset multiple synthetic aperture radars, and determines the adjusted synthetic aperture radars as the target radars.
  • FIG. 7 is a schematic diagram of target radar adjustment provided by an embodiment of the present application. As shown in FIG. 7, by adjusting the position and/or emission angle of a synthetic aperture radar, the same synthetic aperture radar (for example, R2 and R3 in FIG. 7) can serve different application scenarios or application requirements, reducing the number of synthetic aperture radars installed on the vehicle and reducing the cost of the radar system.
  • the radar configuration information includes radar identification information
  • the radar identification information is used to represent the mapping relationship between the radar identification and different radar positions and radar emission angles.
  • Table 1 is a schematic diagram of radar identification information provided by an embodiment of the application. As shown in Table 1, under different application scenarios or application requirements, the corresponding radar identification can be determined according to the required radar position and radar emission angle, and the target radar corresponding to that radar identification is then determined.
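  • A minimal sketch of such radar identification information as a lookup table follows; the identifiers, positions, angles and selection rule below are hypothetical placeholders, since Table 1 itself is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class RadarConfig:
    radar_id: str          # radar identification information
    position: str          # radar position information (mounting location)
    emission_angle: float  # radar emission angle information, in degrees

# Hypothetical configuration table in the spirit of Table 1.
RADAR_TABLE = [
    RadarConfig("R1", "right-front", 30.0),
    RadarConfig("R3", "right-rear", 30.0),
    RadarConfig("L1", "left-front", 30.0),
    RadarConfig("M1", "rear-center", 0.0),
]

def select_target_radars(required_side, required_angle, tolerance=5.0):
    """Determine the target radars whose position and emission angle match the scenario."""
    return [cfg for cfg in RADAR_TABLE
            if cfg.position.startswith(required_side)
            and abs(cfg.emission_angle - required_angle) <= tolerance]

# e.g. a right-side scenario could select R1 and R3 as the target radars.
targets = select_target_radars("right", 30.0)
```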
  • Step S202 controlling the at least two target radars to transmit radar beams to respective corresponding target directions.
  • the target directions of the at least two target radars may be the same or different, or, there may be multiple target radars in the at least two target radars, whose target directions are different.
  • Step S203 acquiring echo data corresponding to radar beams sent by at least two target radars as original radar data.
  • FIG. 8 is a schematic diagram of a target radar transmitting a radar beam to a target direction according to an embodiment of the present application.
  • the radar system includes two synthetic aperture radars (R1 and R2), which are respectively arranged at the front and rear positions of the vehicle; the two synthetic aperture radars transmit radar beams to their respective target directions according to their respective emission angles.
  • the target vehicle on which the radar system is installed moves in a certain direction while the radar beams are continuously transmitted; after being reflected by obstacles, echo signals are generated, and each synthetic aperture radar receives the echo signals and obtains echo data. The set of echo data corresponding to the different target radars is the original radar data. It can be seen that the original radar data contain the environmental information collected by each target radar for perceiving objects in the target area; compared with the radar information collected by a single radar, the amount of information is more abundant and the accuracy is better.
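  • Organised as code, the acquisition loop might look as follows; the objects with transmit() and read_echo() methods are a purely hypothetical stand-in for whatever control interface the actual radar hardware exposes:

```python
def acquire_raw_radar_data(target_radars, target_directions, num_pulses):
    """Collect one group of echo data per target radar.

    target_radars: sequence of objects exposing hypothetical transmit(direction)
                   and read_echo() methods.
    Returns a list containing one group of raw radar data (echo samples) per radar.
    """
    raw_groups = []
    for radar, direction in zip(target_radars, target_directions):
        echoes = []
        for _ in range(num_pulses):
            radar.transmit(direction)         # send a radar beam toward the target direction
            echoes.append(radar.read_echo())  # echo produced by obstacles in that direction
        raw_groups.append(echoes)
    return raw_groups
```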
  • FIG. 9 is a schematic diagram of the comparison between the dynamic target radar provided by the embodiment of the present application and the fixed target radar in the prior art.
  • the target radars, for example R2 and R3, are preset in advance.
  • the fixed radar beam emission angle will form a blind spot for radar detection.
  • the step of determining the target radars can dynamically adjust the target radars (e.g., R1 and R3) according to the preset radar configuration information and the relationship between the radars and the obstacles, so that the emitted radar beams of the target radars match the obstacles, cover the area where the obstacles are located to the greatest extent, and improve the detection effect of the radar.
  • the radar configuration information can be adjusted and set according to specific needs. For example, different radar configuration information is used when the vehicle is driving at high speed and when the vehicle is being parked. In a possible design, after the vehicle-machine system installed in the vehicle receives a user operation instruction, the radar configuration information may be set to the configuration matching that operation instruction.
  • the radar system or control device set on the vehicle can automatically set the radar configuration information dynamically according to the detected obstacles, so that the radar can detect with different position parameters and emission angle parameters, thereby improving the Radar detection effect.
  • the above-mentioned control device may be any processor, chip system or device in the vehicle that can be used to perform control functions.
  • the specific method of acquiring radar configuration information is not limited, and can be set according to specific usage scenarios and needs.
  • At least two target radars are determined according to the preset radar configuration information, and the original radar data are obtained through the at least two target radars. Since the radar configuration information can be adjusted and set according to specific needs, the radar that best matches the current application scenario or application requirements, that is, the target radar, can be selected from the multiple vehicle-mounted radars, thereby improving the flexibility and scope of application of the radar system.
  • Step S204 performing delay processing on the at least two groups of original radar data, so as to achieve phase consistency of the at least two groups of original radar data.
  • the echo data generated by the target detection object at different target radars have phase differences; therefore, there are initial phase differences between the collected groups of raw radar data. In order for the radar beams emitted by the multiple target radars to form a seamless wide beam, it is necessary to phase-align the original radar data collected by the multiple target radars.
  • S204 includes two specific implementation steps of S2041 and S2042:
  • Step S2041 acquiring the initial phase of the reference radar data.
  • Step S2042 correcting the phases of other original radar data in the at least two groups of original radar data according to the initial phase of the reference radar data.
  • a radar beam is a periodic beam signal with a specific frequency. Based on one full period of the beam signal, the initial phase of the reference radar data can be obtained. Usually, the initial phases of different target radars are different, but the difference between the initial phases can be a fixed value, so the original radar data of the non-reference radars can be phase-shifted, so that the initial phases of multiple target radars are consistent.
  • the reference radar data is determined according to the preset configuration information. For example, the reference radar data among the multiple groups of original radar data is determined according to the radar configuration information; compared with the other original radar data, the reference radar data can better represent the characteristics of the target detection object, so the reference radar data can be understood as the radar data with the better detection effect.
  • Correcting the phases of the other original radar data in the at least two sets of original radar data with respect to the reference can achieve a better phase correction effect, so that when the first target image is later generated from the original radar data corresponding to each target radar, better image precision and accuracy can be obtained.
  • the phases of multiple sets of original radar data are aligned, and when images are superimposed subsequently, better imaging results can be obtained, and positioning accuracy can be improved.
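  • A minimal sketch of this delay/phase alignment follows, assuming each group of raw radar data is a complex baseband array and that the initial phase offset between radars is approximately constant (as suggested above); estimating the offset from the first sample is a simplification for illustration only:

```python
import numpy as np

def align_initial_phases(raw_groups, ref_index=0):
    """Rotate the non-reference groups so all groups share the reference's initial phase."""
    reference = np.asarray(raw_groups[ref_index], dtype=np.complex128)
    ref_phase = np.angle(reference.flat[0])             # initial phase of the reference data
    aligned = []
    for i, group in enumerate(raw_groups):
        data = np.asarray(group, dtype=np.complex128)
        if i == ref_index:
            aligned.append(data)
            continue
        offset = np.angle(data.flat[0]) - ref_phase      # assumed fixed phase offset
        aligned.append(data * np.exp(-1j * offset))      # phase-shift toward the reference
    return aligned
```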
  • Step S205 synthesizing a first target image according to at least two groups of original radar data, wherein the first target image is obtained by performing image registration and time-domain coherent superposition on at least two groups of original radar data.
  • the implementation of S205 is the same as the implementation of S102 in the embodiment shown in FIG. 3 of the present invention, and details are not repeated here.
  • the execution subject of the imaging method provided in this embodiment may be the above-mentioned radar system or a similar radar module, apparatus or device, or may be a vehicle fusion perception system, a vehicle-machine system, a controller or chip in such a system, a cloud server, or another system, device or module; after obtaining the original radar data, the execution subject performs a similar imaging process, and the specific process is not repeated here.
  • FIG. 11 is a schematic flowchart of another imaging method provided by an embodiment of the present application. As shown in FIG. 11, the imaging method provided by this embodiment is based on the imaging method provided by the embodiment shown in FIG. 5, with S205 further detailed. The method includes:
  • Step S301 at least two target radars are determined according to preset radar configuration information.
  • Step S302 controlling the at least two target radars to transmit radar beams to respective target directions respectively.
  • Step S303 acquiring echo data corresponding to radar beams sent by at least two target radars as original radar data.
  • Step S304 performing delay processing on at least two sets of original radar data to achieve phase consistency of at least two sets of original radar data.
  • Step S305 according to at least two sets of original radar data, generate corresponding at least two sets of radar images.
  • each original radar data can be image processed to obtain a radar image corresponding to the original radar data.
  • the specific method for generating a radar image from data is in the prior art, which will not be repeated here.
  • Step S306 performing geometric distortion correction on the radar image according to the preset radar configuration information.
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information.
  • in this embodiment, performing geometric distortion correction on the radar images according to the radar configuration information includes: obtaining radar image correction information according to the radar configuration information; and performing geometric distortion correction on the radar images according to the radar image correction information.
  • Because of factors such as the size and shape of the target vehicle, the installation positions of the synthetic aperture radars on the vehicle also differ. When locating the target vehicle, the center point of the target vehicle is generally used as the reference point.
  • When the body of the target vehicle is long, or a target radar is far from the center of the target vehicle, the radar beam emission angle causes the obtained radar image to be distorted. Therefore, target radars at different positions and with different emission angles produce distortion that depends on their location and emission angle.
  • Radar image correction information describes different vehicle sizes and shapes as well as radar installation positions and emission angles; different types of vehicles, or vehicles with different radar installation positions and angles, have corresponding radar image correction information.
  • The radar image correction information can be used to correct the radar images of target radars at different positions and with different emission angles, eliminating the geometric distortion caused by the radar position and emission angle and making the image information collected by the different target radars consistent. This enables accurate superposition of the subsequent image information and improves the accuracy of the first target image.
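  • A minimal sketch of such a correction step is shown below, assuming the correction information has been reduced to a per-radar affine resampling of a real-valued (magnitude) radar image; the correction table, its numeric values, and the radar identifiers are hypothetical placeholders:

```python
import numpy as np
from scipy import ndimage

# Hypothetical radar image correction information keyed by radar identifier; in
# practice these values would be derived from the vehicle size/shape and each
# radar's installation position and emission angle.
RADAR_IMAGE_CORRECTION_INFO = {
    "#1": {"matrix": np.array([[1.0, 0.05], [0.0, 1.0]]), "offset": (0.0, -2.0)},
    "#3": {"matrix": np.array([[1.0, -0.05], [0.0, 1.0]]), "offset": (0.0, 2.0)},
}

def correct_geometric_distortion(image: np.ndarray, radar_id: str) -> np.ndarray:
    """Resample a magnitude radar image with the affine correction for `radar_id`."""
    info = RADAR_IMAGE_CORRECTION_INFO[radar_id]
    # affine_transform maps each output coordinate back into the input image.
    return ndimage.affine_transform(image, info["matrix"], offset=info["offset"], order=1)
```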
  • Step S307: perform image registration on the at least two sets of radar images to obtain at least two corresponding sets of registered images.
  • In one possible implementation, S307 includes two specific steps, S3071 and S3072:
  • Step S3071: perform translation and coincidence calibration on the at least two sets of radar images.
  • First, the reference radar image corresponding to the reference radar data is obtained. Among the original radar data collected by the multiple target radars, the reference radar data is determined according to preset configuration information; for example, the radar configuration information identifies the reference set among the multiple sets of original radar data, and, compared with the other sets, the reference data better represents the characteristics of the target detection object, so it can be understood as the radar data with the better detection effect.
  • By performing the radar image transformation on the reference radar data, the corresponding reference radar image is obtained. More specifically, if, for example, the synthetic aperture radar at the head of the target vehicle is set as the reference target radar, then the original radar data collected by that radar is the reference radar data, and its corresponding radar image is the reference radar image.
  • image translation is performed on other radar images in at least two sets of radar images, so that the target elements in the other radar images coincide with the target elements in the reference radar image.
  • FIG. 13 is a schematic diagram of an image translation and coincidence calibration provided by an embodiment of the present application.
  • As shown in FIG. 13, two target radars are taken as an example: target radar A generates the reference radar image, and target radar B generates the other radar image.
  • the reference radar image and other radar images have a common viewing area, that is, the reference radar image and the other radar images include an overlapping image area, and the target detection object is located in the overlapping image area.
  • There is a target element in the common viewing area, such as the bicycle a shown in FIG. 13. Taking bicycle a as the benchmark, image translation is performed on the other radar image so that bicycle a coincides in all the radar images; the reference radar image and the translated other radar image are then used as the registered images.
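  • One common way to estimate such a translation is phase correlation over the common viewing area; the sketch below assumes the radar images are complex NumPy arrays of equal size and that an integer pixel shift is sufficient (names are illustrative only):

```python
import numpy as np

def estimate_shift(reference, other):
    """Estimate the integer (row, col) shift that aligns `other` with `reference`
    using phase correlation."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(other))
    cross_power /= np.abs(cross_power) + 1e-12              # keep only the phase
    corr = np.fft.ifft2(cross_power)
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def translate(image, shift):
    """Apply the estimated shift by circularly rolling the image."""
    return np.roll(image, shift, axis=(0, 1))

# aligned_other = translate(other_image, estimate_shift(reference_image, other_image))
```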
  • Step S3072: perform phase coincidence calibration on the at least two sets of radar images.
  • the phase information in the reference radar image is obtained.
  • The phase information characterizes the image phase in the reference radar image. To obtain a better superposition effect when the images are coherently superimposed, the image phases in the common viewing areas of the different radar images must be aligned. After phase alignment, coherent superposition adds peaks to peaks and troughs to troughs in the radar images, thereby improving the image signal-to-noise ratio and the image accuracy. Further, a radar image contains phase information, and the corresponding phase information can be obtained by analyzing the digitized radar image, which is not repeated here.
  • Next, according to the phase information, the phases of the other radar images are registered so that the other radar images in the at least two sets are consistent in phase with the reference radar image, and the reference radar image together with the phase-registered other radar images is used as the registered images.
  • the phase difference between the reference radar image and other radar images can be obtained, and then the other radar images are phase adjusted according to the phase difference, so that the phases of the other radar images are consistent with the phases of the reference radar images.
  • Specifically, the radar image is formed by processing the radar beam (echo), and the phase in the radar beam is determined through steps such as the Fourier transform.
  • The phase difference is a specific value; taking the phase of the reference radar image as the benchmark, the phases of the other radar images are shifted by that phase difference, which aligns the other radar images in phase with the reference radar image.
  • The adjusted radar images are the registered images.
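  • A minimal sketch of this phase registration, assuming complex-valued radar images and a common-view region specified by array slices (both assumptions made for illustration), could be:

```python
import numpy as np

def register_phase(reference, other, overlap):
    """Rotate the phase of `other` so that, inside the common-view `overlap`
    region (a pair of slices), its phase agrees with `reference`."""
    # Amplitude-weighted mean phase difference over the overlapping area.
    phase_diff = np.angle(np.sum(reference[overlap] * np.conj(other[overlap])))
    return other * np.exp(1j * phase_diff)

# e.g. registered_b = register_phase(image_a, image_b, (slice(0, 256), slice(100, 356)))
```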
  • The execution subject of this phase registration process is the same as the execution subject of the imaging method provided by the embodiments of the present application; it may be, for example, a vehicle fusion perception system, a controller or chip in a vehicle fusion perception system, or a radar system.
  • Of course, in non-vehicle scenarios, the execution subject may also be a terminal such as an unmanned aerial vehicle, an unmanned vehicle, an aircraft, or a robot, or a controller or chip provided in such a terminal.
  • In this application, the reference radar image corresponding to the reference radar data is obtained, where the reference radar data can be determined from the radar configuration information or the specific application scenario, and the other radar images are translated and phase-registered against the reference radar image to generate the registered images. Because the registered images are already aligned in the time domain and in phase, the first target image can be obtained by direct superposition, which improves the accuracy and resolution of the generated first target image.
  • Step S308: coherently superimpose the at least two sets of registered images in the time domain to obtain the first target image.
  • the registered images include common viewing area images and non-common viewing area images, wherein the common viewing area images of different registration images are coincident with each other.
  • In one possible implementation, performing time-domain coherent superposition on the at least two sets of registered images includes: obtaining the common-viewing-area images of the at least two sets of registered images, and superimposing these common-viewing-area images in the time domain to obtain the first target image.
  • FIG. 14 is a schematic diagram of time-domain coherent superposition of registered images provided by an embodiment of the present application.
  • As shown in FIG. 14, taking two registered images as an example, registered image A and registered image B have a common viewing area. Because images A and B have already been registered in the above steps, their phases and the positions of their common viewing areas are aligned; therefore, the common viewing areas of A and B can be directly merged and superimposed in the time domain to obtain the first target image.
  • The first target image contains the target detection object. Because of the coherent superposition, the resolution of the first target image is higher than that of a radar image generated by a traditional single radar.
  • At the same time, because the radar signals of the multiple radar sources are mutually calibrated, the generated first target image is more accurate, which improves the safety and stability of the target vehicle in application scenarios such as automated driving.
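  • For registered complex images, the superposition itself reduces to a complex sum; the sketch below assumes the common-view crops have already been extracted as equally sized NumPy arrays:

```python
import numpy as np

def coherent_superposition(registered_common_views):
    """Time-domain coherent superposition of registered complex images of the
    common-view area: with position and phase aligned, a plain complex sum adds
    peaks to peaks and troughs to troughs, raising the SNR over any single image."""
    return np.sum(np.stack(registered_common_views, axis=0), axis=0)

# first_target_image = np.abs(coherent_superposition([common_view_a, common_view_b]))
```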
  • In another possible implementation, performing time-domain coherent superposition on the at least two sets of registered images further includes:
  • obtaining the positional relationship between the registered images and, according to that positional relationship, splicing the non-common-viewing-area images of the at least two sets of registered images onto both sides of the first target image to obtain a second target image.
  • FIG. 15 is a schematic diagram of another example of time-domain coherent superposition of registered images provided by an embodiment of the present application.
  • As shown in FIG. 15, taking two registered images as an example, registered image A and registered image B have a common viewing area and non-common viewing areas. After the first target image is obtained by coherently superimposing the common viewing area, the non-common-viewing-area images of A and B are spliced onto both sides of the first target image.
  • Because each registered image is obtained from data collected by a different radar, the non-common viewing area of each registered image contains different image information. By splicing the non-common-viewing-area images of the different registered images, the resulting second target image has a wider field of view and richer detection information, which can further improve the safety and stability of the target vehicle in application scenarios such as automated driving.
  • It can be understood that, when there are more than two sets of registered images, multiple common viewing areas and non-common viewing areas formed by different registered images appear. In that case, each common viewing area can be processed according to the method of the embodiment shown in FIG. 14, and the non-common viewing areas can be spliced on both sides of the different common viewing areas according to the method of the embodiment shown in FIG. 15 to form the second target image.
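  • For two radars whose non-common strips sit to the left and right of the shared view, the splicing step is essentially a horizontal concatenation; the sketch assumes all strips share the same number of rows (real images would need resampling to satisfy this):

```python
import numpy as np

def build_second_target_image(left_strip, first_target_image, right_strip):
    """Splice the non-common-view strips of the registered images onto the two
    sides of the first target image, following the radars' relative positions."""
    return np.concatenate([left_strip, first_target_image, right_strip], axis=1)
```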
  • the implementation manner of S301-S304 is the same as the implementation manner of S201-S204 in the embodiment shown in FIG. 5 of the present invention, and details are not repeated here.
  • FIG. 16 is a schematic block diagram of an imaging apparatus according to an embodiment of the present application.
  • the imaging device 4 in this embodiment of the present application may be a radar system, a vehicle-machine system, or a vehicle fusion perception system in the above method embodiments, or may be one or more chips in a radar system, a vehicle-machine system, or a vehicle fusion perception system.
  • the imaging device 4 can be used to perform part or all of the functions of the imaging methods in the above method embodiments.
  • the imaging device 4 may include the following modules.
  • the data acquisition module 41 is configured to acquire at least two groups of original radar data, wherein the at least two groups of original radar data are from at least two radars.
  • the data acquisition module 41 may execute step S101 of the method shown in FIG. 3 .
  • the image generation module 42 is configured to synthesize a first target image according to at least two groups of original radar data, wherein the first target image is obtained by performing image registration and time-domain coherent superposition of at least two groups of original radar data.
  • the image generation module 42 may perform step S102 of the method shown in FIG. 3 , or may perform step S205 of the method shown in FIG. 5 .
  • the imaging device 4 of the embodiment shown in FIG. 16 can be used to implement the technical solution of the embodiment shown in FIG. 3 in the above method, and its implementation principle and technical effect are similar, and details are not repeated here.
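  • Purely as a structural sketch of how these modules could be organized in software (the radar interface and the simplified imaging step below are assumptions, not the claimed implementation):

```python
import numpy as np

class DataAcquisitionModule:
    """Plays the role of module 41: returns at least two sets of raw radar data."""
    def __init__(self, radars):
        self.radars = radars                          # objects exposing collect() (assumed)

    def acquire(self):
        data = [radar.collect() for radar in self.radars]
        assert len(data) >= 2, "at least two sets of original radar data are required"
        return data

class ImageGenerationModule:
    """Plays the role of module 42: registration plus time-domain coherent superposition."""
    def synthesize_first_target_image(self, raw_data_sets):
        images = [np.asarray(raw) for raw in raw_data_sets]   # imaging step omitted here
        # Registration (translation and phase) would be applied before the sum.
        return np.abs(sum(images))

class ImagingDevice:
    """Top-level wrapper corresponding to imaging device 4 in FIG. 16."""
    def __init__(self, radars):
        self.data_acquisition = DataAcquisitionModule(radars)
        self.image_generation = ImageGenerationModule()

    def run(self):
        return self.image_generation.synthesize_first_target_image(
            self.data_acquisition.acquire())
```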
  • FIG. 17 is a schematic block diagram of another imaging apparatus provided by an embodiment of the present application. As shown in FIG. 17, on the basis of the imaging apparatus 4 shown in FIG. 16, the imaging apparatus 5 provided by this embodiment adds a delay processing module 51, where:
  • the data acquisition module 41 is specifically configured to: determine at least two target radars according to preset radar configuration information; control the at least two target radars to transmit radar beams toward their respective target directions; and acquire, as the original radar data, the echo data corresponding to the radar beams transmitted by the at least two target radars.
  • the data acquisition module 41 may perform steps S201 to S203 of the method shown in FIG. 5 , or perform steps S301 to S303 of the method shown in FIG. 11 .
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information; wherein the radar position information is used to characterize the position of the radar, and the radar emission angle information is used to characterize the emission angle of the radar.
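  • Following the example radar identification table given in the description (radar ID, installation position, emission angle), such configuration information could be encoded, for instance, as a simple lookup structure; the dictionary layout and helper below are illustrative only:

```python
# Radar configuration information modeled on Table 1 of the description.
RADAR_CONFIG = {
    "#1": {"position": "left, 40 cm",    "emission_angle_deg": 30},
    "#2": {"position": "left, 2200 cm",  "emission_angle_deg": 150},
    "#3": {"position": "right, 40 cm",   "emission_angle_deg": 30},
    "#4": {"position": "right, 2200 cm", "emission_angle_deg": 150},
}

def select_target_radars(config, wanted_ids):
    """Return the configuration entries of the radars chosen as target radars."""
    return {radar_id: config[radar_id] for radar_id in wanted_ids if radar_id in config}

# e.g. select_target_radars(RADAR_CONFIG, ["#1", "#3"])
```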
  • the imaging apparatus further includes: a delay processing module 51, configured to perform delay processing on at least two sets of original radar data, so as to achieve phase consistency of the at least two sets of original radar data.
  • the delay processing module 51 may perform step 204 of the method shown in FIG. 5 , or perform step 304 of the method shown in FIG. 11 .
  • the delay processing module 51 is specifically configured to: obtain the initial phase of the reference radar data; and, according to the initial phase of the reference radar data, correct the phases of other original radar data in the at least two groups of original radar data. At this time, the delay processing module 51 may execute steps S2041 to S2042 of the method shown in FIG. 10 .
  • the imaging device 5 in the embodiment shown in FIG. 17 can be used to execute the technical solution of any one of the embodiments shown in FIG. 3 or FIG. 5 in the foregoing method, and the implementation principle and technical effect thereof are similar, and are not repeated here.
  • this embodiment does not depend on whether the embodiment shown in FIG. 16 is implemented, and this embodiment can be implemented independently.
  • FIG. 18 is a schematic block diagram of another imaging apparatus provided by an embodiment of the present application. As shown in FIG. 18, on the basis of the imaging apparatus 5 shown in FIG. 17, the imaging apparatus 6 provided by this embodiment adds a distortion correction module 61, where:
  • the image generating module 42 is specifically configured to: generate corresponding at least two sets of radar images according to at least two sets of original radar data; perform image registration on the at least two sets of radar images to obtain images corresponding to the at least two sets of radar images. at least two sets of registration images; and at least two sets of registration images are coherently superimposed in time domain to obtain a first target image.
  • the image generation module 42 may execute step S305, step S307-step S308 of the method shown in FIG. 11 .
  • the distortion correction module 61 is configured to: acquire preset radar configuration information; and perform geometric distortion correction on the radar image according to the radar configuration information. At this time, the distortion correction module 61 may perform step S306 of the method shown in FIG. 11 .
  • the radar configuration information includes at least one of radar identification information, radar position information, and radar emission angle information.
  • When the distortion correction module 61 performs geometric distortion correction on the radar images according to the radar configuration information, it is specifically configured to: obtain radar image correction information according to the radar configuration information, and perform geometric distortion correction on the radar images according to the radar image correction information.
  • the distortion correction module 61 may perform step S306 of the method shown in FIG. 11 .
  • The at least two sets of original radar data include one set of reference radar data. When the image generation module 42 performs image registration on the at least two sets of radar images to obtain the at least two corresponding sets of registered images, it is specifically configured to: obtain the reference radar image corresponding to the reference radar data; perform image translation on the other radar images in the at least two sets of radar images according to the target element in the reference radar image, so that the target element in the other radar images coincides with the target element in the reference radar image; and use the reference radar image and the translated other radar images as the registered images.
  • the image generation module 42 may perform step S3071 of the method shown in FIG. 11 .
  • After the image generation module 42 performs image translation on the other radar images according to the target element in the reference radar image so that the target elements coincide, it is further configured to: acquire the phase information in the reference radar image; register the phases of the other radar images according to that phase information, so that the other radar images in the at least two sets are consistent in phase with the reference radar image; and use the reference radar image and the phase-registered other radar images as the registered images. At this time, the image generation module 42 may perform step S3072 of the method shown in FIG. 11.
  • The registered images include common-viewing-area images and non-common-viewing-area images, where the common-viewing-area images of different registered images overlap each other. When the image generation module 42 performs time-domain coherent superposition on the at least two sets of registered images to obtain the first target image, it is specifically configured to: obtain the common-viewing-area images of the at least two sets of registered images, and superimpose them in the time domain to obtain the first target image. At this time, the image generation module 42 may execute step S308 of the method shown in FIG. 11.
  • After obtaining the first target image, the image generation module 42 is further configured to: obtain the positional relationship between the registered images and, according to that positional relationship, splice the non-common-viewing-area images of the at least two sets of registered images onto both sides of the first target image to obtain a second target image.
  • the image generation module 42 may execute step S308 of the method shown in FIG. 11 .
  • the imaging device 6 in the embodiment shown in FIG. 18 can be used to implement the technical solution of any one of the embodiments shown in FIG. 3 or FIG. 5 or FIG. 11 in the above method, and its implementation principles and technical effects are similar, and will not be repeated here.
  • this embodiment does not depend on whether the embodiment shown in FIG. 17 is implemented, and this embodiment can be implemented independently.
  • FIG. 19 is a schematic block diagram of the structure of a radar system according to an embodiment of the present application.
  • The radar system 7 includes a processor 71, a memory 72, and at least two synthetic aperture radars 73. The processor 71 is configured to control the synthetic aperture radars 73 to transmit and receive signals; the memory 72 is configured to store a computer program; and the processor 71 is further configured to invoke and run the computer program stored in the memory 72, so that the radar system 7 executes the steps of the method shown in FIG. 3, FIG. 5, or FIG. 11.
  • The processor 71 may also be used to implement the modules of FIG. 16 to FIG. 18.
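  • As a brief organizational sketch only (the transmit_and_receive() interface of the SAR objects below is an assumed placeholder rather than a defined API):

```python
class RadarSystem:
    """Structural sketch of radar system 7: a controlling processor (ordinary code
    here), stored configuration standing in for memory 72, and at least two SARs."""
    def __init__(self, sars, stored_config):
        if len(sars) < 2:
            raise ValueError("the radar system requires at least two synthetic aperture radars")
        self.sars = sars
        self.stored_config = stored_config

    def acquire_raw_data(self):
        # The processor controls each SAR to transmit and receive; the returned
        # echo data sets are the original radar data used in steps S304-S308.
        return [sar.transmit_and_receive() for sar in self.sars]
```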
  • FIG. 20 is a schematic structural block diagram of a vehicle fusion perception system provided by an embodiment of the application.
  • the vehicle fusion perception system 8 includes a transmitter 81 , a receiver 82 and a processor 83 .
  • the processor 83 is configured to execute the steps in FIG. 3 , or the processor 83 is configured to execute the steps in FIG. 5 , or the processor 83 is configured to execute the steps in FIG. 11 .
  • the processor 83 is used to implement each module in FIG. 16 , FIG. 17 , and FIG. 18 .
  • The vehicle fusion perception system 8 of the embodiment shown in FIG. 20 can be used to execute the technical solutions of the above method embodiments, or to run the programs of the modules of the embodiments shown in FIG. 16, FIG. 17, and FIG. 18; the processor 83 invokes the program to execute the operations of the above method embodiments, thereby implementing the modules shown in FIG. 16, FIG. 17, and FIG. 18.
  • the processor 83 may also be a controller, which is represented as “controller/processor 83” in FIG. 20 .
  • The transmitter 81 and the receiver 82 are used to support the transmission and reception of information between the vehicle fusion perception system 8 and the devices in the target vehicle of the above embodiments, and to support communication between the vehicle fusion perception system 8 and those devices.
  • the network device may further include a memory 84 for storing program codes and data of the vehicle fusion perception system 8 .
  • the vehicle fusion perception system may further include a communication interface 85 .
  • The processor 83 is, for example, a central processing unit (CPU), and may also be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASIC), or one or more microprocessors (digital signal processor, DSP), or one or more field-programmable gate arrays (FPGA), etc.
  • the memory 84 may be a single memory, or may be a collective term for a plurality of storage elements.
  • It should be noted that, in the vehicle fusion perception system 8 of FIG. 20 provided by this embodiment of the application, the transmitter 81 can perform the sending actions, the processor 83 can perform the processing actions, and the receiver 82 can perform the receiving actions corresponding to the foregoing method embodiments. For details, refer to the foregoing method embodiments.
  • FIG. 21 is a schematic structural block diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 9 provided by this embodiment includes a transceiver 91 , a memory 92 , a processor 93 and a computer program.
  • the processor 93 is used to control the transceiver 91 to send and receive signals, and the computer program is stored in the memory 92 and is configured to be executed by the processor 93 to implement the method provided by any of the implementation manners corresponding to FIG. 3 to FIG. 15 of the present invention .
  • the transceiver 91 , the memory 92 , and the processor 93 are connected through a bus 94 .
  • FIG. 22 is a schematic block diagram of the structure of a radar provided by an embodiment of the present application.
  • The radar 10 provided by this embodiment includes a transceiver antenna 101 and a controller 102, where the transceiver antenna 101 is used for receiving and transmitting radar signals, and the controller 102 is used for signal processing or control, for example, communicating with at least one other radar 10, or controlling the transceiver antenna to receive and/or transmit radar signals.
  • the radar 10 may execute the technical solutions of the above-mentioned method embodiments corresponding to FIG. 3 to FIG. 15 , or the programs of each module of the embodiments shown in FIG. 16 , FIG. 17 , and FIG. 18 .
  • the program is invoked by the controller 102 to execute the operations of the above method embodiments, so as to realize each module shown in FIG. 16 , FIG. 17 , and FIG. 18 .
  • the transceiver antenna may include at least one independent receiving antenna and at least one transmitting antenna, and may also include an antenna array;
  • the controller may include at least one processor, and the explanation of the processor may refer to the explanation of the processor 83 above.
  • the radar may also include other circuit structures, such as at least one of an oscillator, a mixer, and the like.
  • Embodiments of the present application further provide a computer-readable storage medium, including computer code, which, when run on a computer, enables the computer to execute the method provided in any of the implementation manners corresponding to FIG. 3 to FIG. 15 .
  • Embodiments of the present application further provide a computer program product, including program code.
  • the program code executes the method provided in any of the implementation manners corresponding to FIG. 3 to FIG. 15 .
  • An embodiment of the present application further provides a chip including a processor.
  • the processor is used to call and run the computer program stored in the memory to execute the corresponding operations and/or processes performed by the radar system in the imaging method provided by any one of the implementations corresponding to FIG. 3 to FIG. 15 .
  • Optionally, the chip further includes a memory, the memory is connected to the processor through a circuit or a wire, and the processor is configured to read and execute the computer program in the memory.
  • the chip further includes a communication interface, and the processor is connected to the communication interface.
  • the communication interface is used to receive data and/or information to be processed, and the processor acquires the data and/or information from the communication interface and processes the data and/or information.
  • the communication interface may be an input-output interface.
  • a computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, or microwave).
  • a computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSD)), and the like.
  • An embodiment of the present application further provides a terminal, which may be a drone, an unmanned transport vehicle, a vehicle, an aircraft, or a robot, etc.
  • In one design, the terminal includes the modules shown in FIG. 16 to FIG. 18, or includes an apparatus capable of implementing the modules shown in FIG. 16 to FIG. 18, or includes the radar system provided by the embodiment shown in FIG. 19.
  • The terminal can use the modules shown in FIG. 16 to FIG. 18, or the apparatus capable of implementing those modules, or the radar system provided by the embodiment shown in FIG. 19, to perform the imaging method provided by any of the implementations corresponding to FIG. 3 to FIG. 15.
  • the terminal includes the above-mentioned computer-readable storage medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)
  • Geometry (AREA)

Abstract

An imaging method, an apparatus, a radar system, an electronic device, and a storage medium: at least two sets of original radar data from different radars are acquired, and image registration and time-domain coherent superposition are performed on them to obtain a first target image. Each set of original radar data corresponds to one radar; because the original radar data from different radars contain different radar information, the first target image obtained after image registration and time-domain coherent superposition has a higher physical resolution. This improves the resolution of the generated radar image and enriches the image information in it, thereby improving the positioning accuracy and driving safety of the vehicle. The solution can further be used to enhance autonomous driving or advanced driver assistance system (ADAS) capabilities, and can be applied to the Internet of Vehicles, for example vehicle-to-everything (V2X), long term evolution for vehicles (LTE-V), and vehicle-to-vehicle (V2V) communication.

Description

成像方法、装置、雷达系统、电子设备和存储介质
本申请要求于2020年7月30日提交中国专利局、申请号为202010753975.7、申请名称为“成像方法、装置、雷达系统、电子设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及汽车融合感知技术领域,尤其涉及一种成像方法、装置、雷达系统、电子设备和存储介质。
背景技术
随着社会的发展,智能运输设备、智能家居设备、机器人等智能终端正在逐步进入人们的日常生活中。以智能终端为智能运输设备为例,随着汽车自动驾驶产业的兴起,智能驾驶时代随之来临。具有全天时、全天候进行二维高分辨率成像的合成孔径雷达(Synthetic Aperture Radar,SAR)技术,越来越受到重视和关注。
现有技术中,对于具有自动驾驶或自动泊车功能的汽车,通常是采用例如摄像头、超声波雷达等传统传感设备实现车辆的定位和行驶引导,相关技术中,也可以通过合成孔径雷达实现上述目的,然而,现有的车载合成孔径雷达系统,生成的雷达图像存在分辨率低的问题,影响车辆的定位准确性和行驶安全性。
发明内容
本申请的目的在于提供一种成像方法、装置、雷达系统、电子设备和存储介质。用于解决车载合成孔径雷达系统生成的雷达图像分辨率低,影响车辆的定位准确性和行驶安全性的问题。
第一方面,本申请公开了一种成像方法,包括:
获取至少两组原始雷达数据,其中,所述至少两组原始雷达数据来自至少两个雷达;根据所述至少两组原始雷达数据合成第一目标图像,其中,所述第一目标图像是对所述至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。
基于上述技术内容,通过获取至少两组来着不同雷达的原始雷达数据,进行图像配准和时域相干叠加,得到第一目标图像。其中,每一组原始雷达数据分别对应一个雷达,由于来自不同雷达的原始雷达数据中,包含不同的雷达信息,对其进行图像配准和时域相干叠加后,得到的第一目标图像具有更高的物理分辨率,也更丰富的图像信息,因此提高了生成的雷达图像的分辨率,丰富了雷达图像中的图像信息,进而提高车辆的定位准确性和行驶安全性。
在一种可能的实现方式中,获取至少两组原始雷达数据,包括:
根据预设的雷达配置信息,确定至少两个目标雷达;控制所述至少两个目标雷达分别向各自对应的目标方向发射雷达波束;获取所述至少两个目标雷达发送的雷达波束对应的 回波数据作为所述原始雷达数据。
基于上述技术内容,通过预设的雷达配置信息,确定至少两个目标雷达,并通过该至少两个目标雷达获取原始雷达数据,由于雷达配置信息可以根据具体需要进行调整和设置,因此,可以实现从多个车载雷达中,选取出匹配当前应用场景或应用需求的最佳雷达,即目标雷达,从而提高雷达系统的使用灵活性和适用范围。
在一种可能的实现方式中,雷达配置信息包括雷达标识信息、雷达位置信息、发射角信息中的至少一个;其中,所述雷达位置信息用于表征雷达的位置,所述雷达发射角信息用于表征雷达的发射角度。
基于上述技术内容,通过在配置信息中设置雷达位置信息、发射角信息,可以实现对雷达发射的波束覆盖范围的调整,实现在不同应用场景和应用需要下,雷达波束对应的最佳覆盖范围,提高使用灵活性和适用范围。
在一种可能的实现方式中,在获取至少两组原始雷达数据之后,还包括:对所述至少两组原始雷达数据执行延迟处理,以实现所述至少两组原始雷达数据的相位一致。
在一种可能的实现方式中,对至少两组原始雷达数据执行延迟处理,以实现至少两组原始雷达数据的相位一致,包括:获取基准雷达数据的初始相位;根据所述基准雷达数据的初始相位,对至少两组原始雷达数据中的其他原始雷达数据的相位进行修正。
基于上述技术内容,通过对多组原始雷达数据进行延迟处理,使多组原始雷达数据的相位对齐,后续进行图像叠加时,可以获得更好的成像结果,提高定位精度。
在一种可能的实现方式中,根据至少两组原始雷达数据合成第一目标图像,包括:根据至少两组原始雷达数据,生成对应的至少两组雷达图像;对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像;将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像。
基于上述技术内容,根据至少两组原始雷达数据,每一租原始雷达数据分别对应生成一组雷达图像,由于各雷达图像是通过不同位置、角度的雷达采集的原始雷达数据生成的,因此,对每一组雷达图像进行图像配准,使不同雷达图像产生重合区域,再对配准后的图像进行时域相干叠加,得到第一目标图像,其中,第一目标图像中包含不同雷达对应的雷达图像中的共有图像。由于单一的雷达采集的能够覆盖目标检测物体的原始雷达数据经带宽过大,因此处理后形成的雷达图像,存在分辨率过低的问题。通过多个雷达发射的窄雷达波束对目标检测物体进行探测,并将采集后处理得到的雷达图像进行时域叠加,得到高分辨率的图像,提高目标检测物体的定位精度,进而实现车辆的准确定位,提高车辆自动驾驶的安全性。
在一种可能实现方式中,在根据所述至少两组原始雷达数据,生成对应的最少两组雷达图像之后,还包括:获取预设的雷达配置信息;根据所述雷达配置信息,对雷达图像进行几何失真矫正。
在一种可能实现方式中,雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个,根据所述雷达配置信息,对所述雷达图像进行几何失真矫正,包括:根据所述雷达配置信息,得到雷达图像矫正信息;根据该雷达图像矫正信息对所述雷达图像进行几何失真矫正,其中,雷达图像矫正信息是用于描述不同车辆尺寸、外形以及雷达安装位置、发射角度的信息,不同类型的车辆,或者雷达设置位置、角度不同的车辆,具 有相应的雷达图像矫正信息。
基于上述技术内容,通过预设的雷达配置信息,对不同雷达采集处理后得到的雷达图像进行几何失真矫正,可以提高雷达图像在后续进行图像配准和时域相干叠加时的精准度,避免由于车辆的尺寸不同、雷达设置位置不同、发射角度不同等原因,造成的雷达图像进行时域相干叠加时出现的叠加精准度低的问题,提高雷达图像的时域叠加精度。
在一种可能实现方式中,至少两组原始雷达数据中,包括一组基准雷达数据,对至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像,包括:获取基准雷达数据对应的基准雷达图像;根据基准雷达图像中的目标元素,对至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合;将所述基准雷达图像与图像平移后的其他雷达图像作为配准图像。
在一种可能实现方式中,在根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合之后,还包括:获取所述基准雷达图像中的相位信息;根据所述相位信息,对其他雷达图像的相位进行配准,使所述至少两组雷达图像中的其他雷达图像与所述基准雷达图像的相位一致;将所述基准雷达图像与相位配准后的其他雷达图像作为配准图像。
基于上述技术内容,通过获取基准雷达数据对应的基准雷达图像,其中,基准雷达数据可以通过雷达配置信息或具体的应用场景进行确定,并根据基准雷达图像,对其他的雷达图像进行图像平移和相位配准,生成配准图像。配准图像之间由于已经在时域和相位上对其,因此可以直接进行叠加而获得第一目标图像,提高生成的第一目标图像的精确度和分辨率。
在一种可能实现方式中,所述配准图像包括共视区图像和非共视区图像,其中,不同配准图像的共视区图像相互重合;将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像,包括:获取至少两组配准图像的共视区图像;对所述至少两组配准图像的共视区图像进行时域叠加,得到所述第一目标图像。
基于上述技术内容,配准图像中包括用于描述目标检测物体的共视区图像,不同配准图像的共视区图像相互重合,通过多个配准图像的共视区进行时域相干叠加,能够生成具有更高分辨率的共视区叠加图像,即第一目标图像,第一目标图像由于是通过多个共视区图像叠加而生成的,因此,相交一张雷达图像,具有更好的图像分辨率和准确性。
在一种可能实现方式中,在得到所述第一目标图像之后,还包括:获取配准图像之间的位置关系,根据所述配准图像之间的位置关系,将所述至少两组配准图像的非共视区图像拼接在所述第一目标图像的两侧,得到第二目标图像。
基于上述技术内容,根据不同配准图像之间的位置关系,将多组配准图像的非共视区拼接在第一目标图像的两侧,得到第二目标图像,由于各个配准图像是由不同的雷达采集数据经处理后得到的,因此,各配准图像的非共视区中,包含了不同的图像信息,将不同的配准图像的非共视区图像进行拼接,得到的第二目标图像具有更宽的视野,以及更加丰富的检测信息,可以进一步的提高车辆的定位精度和自动驾驶安全性。
第二方面,本申请公开了一种成像装置,包括:
数据获取模块,用于获取至少两组原始雷达数据,其中,所述至少两组原始雷达数据来自至少两个雷达;图像生成模块,用于根据所述至少两组原始雷达数据合成第一目标图 像,其中,所述第一目标图像是对所述至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。
基于上述技术内容,成像装置通过获取至少两组来着不同雷达的原始雷达数据,进行图像配准和时域相干叠加,得到第一目标图像。其中,每一组原始雷达数据分别对应一个雷达,由于来自不同雷达的原始雷达数据中,包含不同的雷达信息,对其进行图像配准和时域相干叠加后,得到的第一目标图像具有更高的物理分辨率,也更丰富的图像信息,因此提高了生成的雷达图像的分辨率,丰富了雷达图像中的图像信息,进而提高车辆的定位准确性和行驶安全性。
在一种可能的实现方式中,所述数据获取模块,具体用于:根据预设的雷达配置信息,确定至少两个目标雷达;控制所述至少两个目标雷达分别向各自对应的目标方向发射雷达波束;获取所述至少两个目标雷达发送的雷达波束对应的回波数据作为所述原始雷达数据。
基于上述技术内容,通过预设的雷达配置信息,确定至少两个目标雷达,并通过该至少两个目标雷达获取原始雷达数据,由于雷达配置信息可以根据具体需要进行调整和设置,因此,可以实现从多个车载雷达中,选取出匹配当前应用场景或应用需求的最佳雷达,即目标雷达,从而提高雷达系统的使用灵活性和适用范围。
在一种可能的实现方式中,所述雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个;其中,所述雷达位置信息用于表征雷达的位置,所述雷达发射角信息用于表征雷达的发射角度。
基于上述技术内容,通过在配置信息中设置雷达位置信息、发射角信息,可以实现对雷达发射的波束覆盖范围的调整,实现在不同应用场景和应用需要下,雷达波束对应的最佳覆盖范围,提高使用灵活性和适用范围。
在一种可能的实现方式中,成像装置还包括:延迟处理模块,用于对所述至少两组原始雷达数据执行延迟处理,以实现所述至少两组原始雷达数据的相位一致。
在一种可能的实现方式中,所述延迟处理模块,具体用于:获取所述基准雷达数据的初始相位;根据所述基准雷达数据的初始相位,对所述至少两组原始雷达数据中的其他原始雷达数据的相位进行修正。
基于上述技术内容,通过对多组原始雷达数据进行延迟处理,使多组原始雷达数据的相位对齐,后续进行图像叠加时,可以获得更好的成像结果,提高定位精度。
在一种可能的实现方式中,所述图像生成模块,具体用于:根据所述至少两组原始雷达数据,生成对应的至少两组雷达图像;对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像;将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像。
基于上述技术内容,根据至少两组原始雷达数据,每一租原始雷达数据分别对应生成一组雷达图像,由于各雷达图像是通过不同位置、角度的雷达采集的原始雷达数据生成的,因此,对每一组雷达图像进行图像配准,使不同雷达图像产生重合区域,再对配准后的图像进行时域相干叠加,得到第一目标图像,其中,第一目标图像中包含不同雷达对应的雷达图像中的共有图像。由于单一的雷达采集的能够覆盖目标检测物体的原始雷达数据经带宽过大,因此处理后形成的雷达图像,存在分辨率过低的问题。通过多个雷达发射的窄雷达波束对目标检测物体进行检测,并将采集后处理得到的雷达图像进行时域叠加,得到高 分辨率的图像,提高目标检测物体的定位精度,进而实现车辆的准确定位,提高车辆自动驾驶的安全性。
在一种可能实现方式中,成像装置还包括失真矫正模块,用于:获取预设的雷达配置信息;根据所述雷达配置信息,对雷达图像进行几何失真矫正。
在一种可能实现方式中,所述雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个,所述失真矫正模块在根据所述雷达配置信息,对雷达图像进行几何失真矫正时,具体用于:根据所述雷达配置信息,得到雷达图像矫正信息;根据所述雷达图像矫正信息对所述雷达图像进行几何失真矫正。
基于上述技术内容,通过预设的雷达配置信息,对不同雷达采集处理后得到的雷达图像进行几何失真矫正,可以提高雷达图像在后续进行图像配准和时域相干叠加时的精准度,避免由于车辆的尺寸不同、雷达设置位置不同、发射角度不同等原因,造成的雷达图像进行时域相干叠加时出现的叠加精准度低的问题,提高雷达图像的时域叠加精度。
在一种可能实现方式中,所述至少两组原始雷达数据中,包括一组基准雷达数据,所述图像生成模块在对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像时,具体用于:获取所述基准雷达数据对应的基准雷达图像;根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使所述其他雷达图像中的目标元素与基准雷达图像中的目标元素重合;将所述基准雷达图像与图像平移后的其他雷达图像作为配准图像。
在一种可能实现方式中,所述图像生成模块在根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合之后,具体用于:获取所述基准雷达图像中的相位信息;根据所述相位信息,对其他雷达图像的相位进行配准,使所述至少两组雷达图像中的其他雷达图像与所述基准雷达图像的相位一致;将所述基准雷达图像与相位配准后的其他雷达图像作为配准图像。
基于上述技术内容,通过获取基准雷达数据对应的基准雷达图像,其中,基准雷达数据可以通过雷达配置信息或具体的应用场景进行确定,并根据基准雷达图像,对其他的雷达图像进行图像平移和相位配准,生成配准图像。配准图像之间由于已经在时域和相位上对其,因此可以直接进行叠加而获得第一目标图像,提高生成的第一目标图像的精确度和分辨率。
在一种可能实现方式中,所述配准图像包括共视区图像和非共视区图像,其中,不同配准图像的共视区图像相互重合;所述图像生成模块在将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像时,具体用于:获取至少两组配准图像的共视区图像;对所述至少两组配准图像的共视区图像进行时域叠加,得到所述第一目标图像。
基于上述技术内容,配准图像中包括用于描述目标检测物体的共视区图像,不同配准图像的共视区图像相互重合,通过多个配准图像的共视区进行时域相干叠加,能够生成具有更高分辨率的共视区叠加图像,即第一目标图像,第一目标图像由于是通过多个共视区图像叠加而生成的,因此,相交一张雷达图像,具有更好的图像分辨率和准确性。
在一种可能实现方式中,所述图像生成模块在得到所述第一目标图像之后,具体用于:获取配准图像之间的位置关系,根据所述配准图像之间的位置关系,将所述至少两组配准 图像的非共视区图像拼接在所述第一目标图像的两侧,得到第二目标图像。
基于上述技术内容,根据不同配准图像之间的位置关系,将多组配准图像的非共视区拼接在第一目标图像的两侧,得到第二目标图像,由于各个配准图像是由不同的雷达采集数据经处理后得到的,因此,各配准图像的非共视区中,包含了不同的图像信息,将不同的配准图像的非共视区图像进行拼接,得到的第二目标图像具有更宽的视野,以及更加丰富的检测信息,可以进一步的提高车辆的定位精度和自动驾驶安全性。
第三方面,本申请公开了一种雷达系统,雷达系统包括:处理器、存储器和至少两个合成孔径雷达,其中,所述处理器用于控制所述合成孔径雷达收发信号;所述存储器用于存储计算机程序;所述处理器还用于调用并运行所述存储器中存储的计算机程序,使得所述雷达系统执行以上第一方面的任一实现方式提供的方法。
第四方面,本申请公开了一种电子设备,包括:处理器、存储器和收发器;
处理器用于控制收发器收发信号;存储器用于存储计算机程序;处理器还用于调用并运行存储器中存储的计算机程序,使得该电子设备执行以上第一方面的任一实现方式提供的方法。
第五方面,本申请公开了一种计算机可读存储介质,包括计算机代码,当其在计算机上运行时,使得计算机执行以上第一方面的任一实现方式提供的方法。
第六方面,本申请公开了一种计算机程序产品,包括程序代码,当计算机运行计算机程序产品时,该程序代码执行以上第一方面的任一实现方式提供的方法。
第七方面,本申请公开了一种芯片,包括处理器。该处理器用于调用并运行存储器中存储的计算机程序,以执行本申请实施例的成像方法中执行的相应操作和/或流程。可选地,该芯片还包括存储器,该存储器与该处理器通过电路或电线与存储器连接,处理器用于读取并执行该存储器中的计算机程序。进一步可选地,该芯片还包括通信接口,处理器与该通信接口连接。通信接口用于接收需要处理的数据和/或信息,处理器从该通信接口获取该数据和/或信息,并对该数据和/或信息进行处理。该通信接口可以是输入输出接口。
第八方面,本申请实施例提供一种终端,所述终端可以为无人机、无人运输车、车辆、飞行器或者机器人等。一种设计中,该终端包括上述第三方面提供的雷达系统,并能够通过该雷达系统执行以上第一方面的任一实现方式提供的成像方法;一种设计中,该终端包括上述第五方面提供的计算机可读存储介质。
结合上述技术方案,本申请通过获取至少两组来着不同雷达的原始雷达数据,进行图像配准和时域相干叠加,得到第一目标图像。其中,每一组原始雷达数据分别对应一个雷达,由于来自不同雷达的原始雷达数据中,包含不同的雷达信息,对其进行图像配准和时域相干叠加后,得到的第一目标图像具有更高的物理分辨率,也更丰富的图像信息,因此提高了生成的雷达图像的分辨率,丰富了雷达图像中的图像信息,进而提高车辆的定位准确性和行驶安全性。
附图说明
图1为本申请实施例提供的一种应用场景示意图;
图2为相关技术中车载平台上合成孔径雷达系统工作示意图;
图3为本申请实施例提供的一种成像方法的流程示意图;
图4为本申请实施例提供的一种合成孔径雷达设置位置示意图;
图5为本申请实施例提供的另一种成像方法的流程示意图;
图6为本申请实施例提供的一种目标雷达示意图;
图7为本申请实施例提供的一种调整目标雷达的示意图;
图8为本申请实施例提供的一种目标雷达向目标方向发射雷达波束的示意图;
图9为本申请实施例提供的动态目标雷达和现有技术中固定目标雷达的对照示意图;
图10为图5所示实施例中步骤S204的一种实施方式的流程图;
图11为本申请实施例提供的又一种成像方法的流程示意图;
图12为图11所示实施例中步骤S307的一种实施方式的流程图;
图13为本申请实施例提供的一种图像平移重合校准的示意图;
图14为本申请实施例提供的一种配准图像进行时域相干叠加的示意图;
图15为本申请实施例提供的另一种配准图像进行时域相干叠加的示意图;
图16为本申请实施例提供的一种成像装置的示意性框图;
图17为本申请实施例提供的另一种成像装置的示意性框图;
图18为本申请实施例提供的又一种成像装置的示意性框图;
图19为本申请实施例提供的一种雷达系统的结构示意性框图;
图20为本申请实施例提供的一种车辆融合感知系统的结构示意性框图;
图21为本申请实施例提供的一种电子设备的结构示意性框图;
图22为本申请实施例提供的一种雷达的结构示意性框图。
具体实施方式
以下对本申请中的部分用语进行解释说明,以便于本领域技术人员理解。需要说明的是,当本申请实施例的方案应用于5G系统、或者现有的系统、或未来可能出现的其他系统时,设备的名称可能发生变化,但这并不影响本申请实施例方案的实施。
1)合成孔径雷达,是一种高分辨力成像雷达,通过主动发射微波,在多种复杂环境下实现探测,生成类似光学照相的高分辨雷达图像,合成孔径雷达具有设备体积小、质量轻、成像精度高和不易受环境影响的众多优点。关于合成孔径雷达的解释可以参见现有技术的阐述。
2)“多个”是指两个或两个以上,其它量词与之类似。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
3)“对应”可以指的是一种关联关系或绑定关系,A与B相对应指的是A与B之间是一种关联关系或绑定关系。
需要指出的是,本申请实施例中涉及的名词或术语可以相互参考,不再赘述。
图1为本申请实施例提供的一种应用场景示意图,如图1所示,本申请实施例提供的成像方法可以应用于雷达系统11,雷达系统11安装在目标车辆上,并用于实现目标车辆的定位和障碍物识别等功能。具体地,目标车辆上的雷达系统包括多个分布式的合成孔径雷达111,目标车辆可以通过雷达系统对周围的障碍物进行检测和感知,并根据检测和感 知结果,调整行驶路线,有效地躲避障碍物,或者规划、调整合理行驶路线,进而实现目标车辆的自动驾驶功能。
示例性地,本申请实施例提供的成像方法还可以应用于车辆融合感知系统12,或者车载融合感知系统12内的控制器或芯片。车辆融合感知系统12设置在目标车辆内,车辆融合感知系统12通过安装在目标车辆上的雷达系统11,获得原始雷达数据,并根据该原始雷达数据进行处理和分析,得到用户描述目标车辆位置和周围环境的图像信息,并根据该图像信息,进行车辆行驶路线规划、调整、自动泊车等自动驾驶功能。
当前,由于合成孔径雷达的优良特性,其被广泛应用于卫星雷达成像及飞机雷达成像,即星载合成孔径雷达系统和机载合成孔径雷达系统。现有的星载合成孔径雷达系统和机载合成孔径雷达系统,在其使用场景下,通常是采集几十秒的数据采集后,进行离线处理,以满足数据存储、散热及实时性等方面的要求。
然而在车载平台上,对于自动驾驶等场景,首先需要满足的就是实时性的问题。图2为相关技术中车载平台上合成孔径雷达系统工作示意图,如图2所示,而车载的孔径雷达系统11,若想实现实时成像,则必须在较短的时间内完成数据获取和数据处理。由于车载孔径雷达系统成像场景距离合成孔径雷达111较近,若实现实时成像,则要求合成孔径长度较短,从而限制了方位向的波束宽度,进而导致方位向的图像分辨率降低。进一步地,图像分辨率过低,会影响后期根据图像进行定位和障碍物识别的准确性,造成车辆控制器在进行车辆路径规划、自动行驶控制时,出现准确性和安全性降低的问题。
为了解决上述问题,本申请实施例中,通过获取多个合成孔径雷达采集的原始雷达数据,进行处理生成多个雷达图像,并对多个雷达图像进行图像配准和时域相干叠加,从而使较小波束带宽的成像结果也具有较高的图像分辨率,提高利用车辆进行定位和障碍物识别的准确性。
图3为本申请实施例提供的一种成像方法的流程示意图,本实施例提供的成像方法的执行主体可以为雷达系统,如图3所示,该方法包括:
S101、获取至少两组原始雷达数据,其中,至少两组原始雷达数据来自至少两个雷达。
示例性地,雷达系统中包括至少合两个合成孔径雷达,多个合成孔径雷达呈分布式设置于目标车辆上。可选的,图4为本申请实施例提供的一种合成孔径雷达设置位置示意图,如图4所示,合成孔径雷达可以设置在车辆外部的一侧,也可以设置在车辆外部相邻的两个边缘,可以根据具体需要进行设置,此处不进行限定。
合成孔径雷达通过发送雷达波束,接收回波波束,实现对目标物体的探测,具体原理此处不再赘述。不同的合成孔径雷达发出的雷达波束,在遇到有效障碍物后,会产生回波,不同的合成孔径雷达接收各自发送的雷达波束对应的回波信号,即为原始雷达数据。
可以理解的是,当本实施例提供的成像方法的执行主体为雷达系统或类似的雷达模块、装置、设备时,则雷达系统或类似的雷达模块、装置、设备可以通过多个合成孔径雷达按照上述过程实现原始雷达数据的获取。当本实施例提供的成像方法的执行主体为车辆融合感知系统、车机系统、车辆融合感知系统或车机系统内的控制器或芯片、云端服务器等无法直接获取原始雷达数据的系统、设备或模块时,则该系统、设备或模块与雷达系统或类似的雷达模块、装置、设备进行通讯,通过雷达系统或类似的雷达模块、装置、设备获取该原始雷达数据,此处不再对该过程进行赘述。
S102、根据至少两组原始雷达数据合成第一目标图像,其中,第一目标图像是对至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。
示例性地,以执行主体为雷达系统为例,雷达系统通过多个合成孔径雷达对应的获得多个原始雷达数据后,根据原始雷达数据中电磁波的波形、振幅强度和时间的变化特征中的至少一个,对每一组原始雷达数据分别进行处理,生成与每一组原始雷达数据对应的雷达图像,该雷达图像为对原始雷达数据进行图像变化后生成的图像信息,用于描述被测目标的空间位置、结构、形态等信息。
其中,每一组雷达图像,分别对应一个雷达源。随着承载雷达系统的车辆移动,每一雷达发射的雷达波束会覆盖一个区域,进而形成与不同雷达对应的多个覆盖区域。该多个覆盖区域中存在重合区域,重合区域中存在被测目标,例如为障碍物。为了实现更高的图像分辨率,以该重合区域作为基准,对不同的雷达图像进行配准,使不同雷达图像的重合区域对齐,之后进行时域相干叠加,形成第一目标图像。该第一目标图像能够实现对被测目标的图像表达,同时相比单一雷达形成的雷达图像,具有更高的分辨率。
本申请中,通过获取至少两组来着不同雷达的原始雷达数据,进行图像配准和时域相干叠加,得到第一目标图像。其中,每一组原始雷达数据分别对应一个雷达,由于来自不同雷达的原始雷达数据中,包含不同的雷达信息,对其进行图像配准和时域相干叠加后,得到的第一目标图像具有更高的物理分辨率,也更丰富的图像信息,因此提高了生成的雷达图像的分辨率,丰富了雷达图像中的图像信息,进而提高车辆的定位准确性和行驶安全性。该方案进一步可用于提升自动驾驶或高级驾驶辅助系统ADAS能力,可应用于车联网,例如车辆外联V2X、车间通信长期演进技术LTE-V、车辆-车辆V2V等。
图5为本申请实施例提供的另一种成像方法的流程示意图,如图5所示,本实施例提供的成像方法在图3所示实施例提供的成像方法的基础上,对步骤S101进一步细化,并在步骤S101之后,增加了延迟处理的步骤,该方法包括:
步骤S201,根据预设的雷达配置信息,确定至少两个目标雷达。
示例性地,雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个;其中,雷达位置信息用于表征雷达的位置,雷达发射角信息用于表征雷达的发射角度。
具体地,在一种可能的实现方式中,雷达配置信息包括雷达位置信息、雷达发射角信息,本申请实施例提供的成像方法的执行主体例如为雷达系统,雷达系统根据雷达配置信息中的雷达位置信息或雷达发射角信息,从多个预设的合成孔径雷达中,确定适合当前应用场景或应用需求的目标雷达。图6为本申请实施例提供的一种目标雷达示意图,如图6所示,目标车辆中,设置有多个合成孔径雷达(包括车辆右侧的R1、R2、R3、R4、R5,车辆左侧的L1、L2、L3、L4、L5,以及车辆正后侧的M1,后图中涉及相同示例性的合成空间雷达的标识时,不再赘述),雷达系统,通过雷达配置信息,将其中雷达位置、雷达发射角更加匹配当前应用场景的合成孔径雷达R1、R3,确定为目标雷达。
在另一种可能的实现方式中,雷达配置信息包括雷达位置信息、雷达发射角信息,本申请实施例提供的成像方法的执行主体例如为雷达系统,雷达系统根据雷达配置信息中的雷达位置信息或雷达发射角信息,对预设多个合成孔径雷达中的部分或全部雷达的位置,和/或发射角进行调整,将调整后的合成孔径雷达,确定为目标雷达。图7为本申请实施例 提供的一种调整目标雷达的示意图,如图7所示,通过对合成孔径雷达位置,和/或发射角进行调整,可以使同一个合成孔径雷达(例如图7中的R2和R3)在不同的应用场景或应用需求下发挥作用,减少车辆上设置合成孔径雷达的数量,降低雷达系统的成本。
在又一种可能的实现方式中,雷达配置信息包括雷达标识信息,雷达标识信息用于表征雷达标识与不同的雷达位置、雷达发射角之间的映射关系。示例性地,表1为本申请实施例提供的一种雷达标识信息的示意图,如表1所示,在不同的应用场景或应用需求下,根据所需要的雷达位置和雷达发射角,可以确定雷达标识,进而确定与雷达标识对应的目标雷达。
表1
雷达标识 雷达位置 雷达发射角
#1 左侧40cm 30度
#2 左侧2200cm 150度
#3 右侧40cm 30度
#4 右侧2200cm 150度
本申请中,通过在雷达配置信息中设置雷达位置信息、发射角信息,可以实现对雷达发射的波束覆盖范围的调整,实现在不同应用场景和应用需要下,雷达波束对应的最佳覆盖范围,提高使用灵活性和适用范围。
步骤S202,控制至少两个目标雷达分别向各自对应的目标方向发射雷达波束。可选的,至少两个目标雷达的目标方向可以相同或者不同,或者,至少两个目标雷达中可以存在多个目标雷达,其目标方向不同。
步骤S203,获取至少两个目标雷达发送的雷达波束对应的回波数据作为原始雷达数据。
示例性地,至少两个目标雷达具有特定的雷达位置和雷达发射角,每一目标雷达在各自的发送位置,按照对应的雷达发射角向各自的目标方向发射雷达波束。图8为本申请实施例提供的一种目标雷达向目标方向发射雷达波束的示意图,如图8所示,本实施例中,雷达系统包括两个合成孔径雷达(R1和R2),分别设置在车辆一侧的车头位置和车尾位置,两个合成孔径雷达分别按照各自的雷达发射角,向各自的目标方向发射雷达波束。同时,根据合成孔径雷达的成像原理,安装有雷达系统的目标车辆沿某一方向移动,雷达波束不断发出,并经障碍物反射后,产生回波信号,合成孔径雷达接收该回波信息,得到回波数据,不同目标雷达对应的回波数据的集合,即为原始雷达数据。由此可知,原始雷达数据中,包含了每一目标雷达所采集的用于感知目标区域内物体的环境信息,相对于单一雷达采集的雷达信息,信息量更丰富,准确性更好。
进一步地,图9为本申请实施例提供的动态目标雷达和现有技术中固定目标雷达的对照示意图,如图9所示,现有技术中,目标雷达(例如R2和R3)是提前预设好的,当车辆移动经过障碍物附近时,由于障碍物和车辆的角度、距离关系不确定,因此,固定的雷达波束发射角,会形成雷达探测的盲区。而本申请实施例提供的确定目标雷达的步骤,能够根据预设的雷达配置信息,根据雷达与障碍物之间的关系,动态调整目标雷达(例如R1和R3),使目标雷达的雷达波束发射角,能够与障碍物匹配,最大限度的覆盖障碍物所在区域,提高雷达探测效果。具体地,雷达配置信息可以根据具体需要进行调整和设置,例如,车辆在高速行驶过程中,和车辆在停车入库的过程中,对应使用不同的雷达配置信息。 在一种可能的设计中,该雷达配置信息可以由车辆内设置的车机系统接收用户的操作指令后,对应设置与操作指令匹配的雷达配置信息。在另一种可能的设计中,车辆上设置的雷达系统或者控制装置,自动根据探测到的障碍物情况,动态设置雷达配置信息,使雷达以不同的位置参数和发射角参数进行探测,从而提高雷达探测效果。需要说明的是,上述控制装置可以是车内任何可以用于执行控制功能的处理器、芯片系统或者设备。此处,不对获取雷达配置信息的具体方式进行限定,可以根据具体使用场景和需要进行设置。
本申请中,通过预设的雷达配置信息,确定至少两个目标雷达,并通过该至少两个目标雷达获取原始雷达数据,由于雷达配置信息可以根据具体需要进行调整和设置,因此,可以实现从多个车载雷达中,选取出匹配当前应用场景或应用需求的最佳雷达,即目标雷达,从而提高雷达系统的使用灵活性和适用范围。
步骤S204,对至少两组原始雷达数据执行延迟处理,以实现至少两组原始雷达数据的相位一致。
具体地,由于多个目标雷达的雷达位置和/或雷达发射角不同,不同目标雷达发射的雷达波束经目标检测物体的反射后,产生的回波数据存在相位差,因此,采集的原始雷达数据之间,存在初始相位差,为了多个目标雷达发射的雷达波束可以形成无缝衔接的宽波束,需要对多个目标雷达采集的原始雷达数据进行相位对齐。
示例性地,如图10所示,在一种可能的实施方式中,S204包括S2041、S2042两个具体的实现步骤:
步骤S2041,获取基准雷达数据的初始相位。
步骤S2042,根据基准雷达数据的初始相位,对至少两组原始雷达数据中的其他原始雷达数据的相位进行修正。
具体地,雷达波束是具有特定频率的周期性波束信号。以该波束信号的一个整周期为基准,可以获得基准雷达数据的初始相位。通常不同目标雷达的初始相位是不同的,然而初始相位之差可以是一个固定值,因此可以将其中的非基准雷达的原始雷达数据进行移相处理,使得多个目标雷达的初始相位保持一致。其中,需要说明的是,基准雷达数据是根据预设的配置信息确定的,例如,根据雷达配置信息,确定多个原始雷达数据中的基准雷达数据,该基准雷达数据,相比其他原始雷达数据,能够更好的表现目标检测物体的特征,因此基准雷达数据可以理解为检测效果较好的雷达数据。也因此,根据基准雷达数据的初始相位,调整至少两组原始雷达数据中的其他原始雷达数据的相位进行修正,能够实现更好地相位修正效果,在后续通过各目标雷达对应的原始雷达数据生成第一目标图像时,能够得到更好的图像精度和准确度。
本申请中,通过对多组原始雷达数据进行延迟处理,使多组原始雷达数据的相位对齐,后续进行图像叠加时,可以获得更好的成像结果,提高定位精度。
步骤S205,根据至少两组原始雷达数据合成第一目标图像,其中,第一目标图像是对至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。
本实施例中,S205的实现方式与本发明图3所示实施例中的S102的实现方式相同,在此不再一一赘述。
需要说明的是,本实施例提供的成像方法的执行主体可以为上述的雷达系统或类似的雷达模块、装置、设备,也可以为车辆融合感知系统、车机系统、车辆融合感知系统或车 机系统内的控制器或芯片、云端服务器等系统、设备或模块,上述执行主体在获得原始雷达数据后,进行的成像过程类似,此处不再对具体过程进行一一赘述。
图11为本申请实施例提供的又一种成像方法的流程示意图,如图11所示,本实施例提供的成像方法在图5所示实施例提供的成像方法的基础上,对S205进一步细化,该方法包括:
步骤S301,根据预设的雷达配置信息,确定至少两个目标雷达。
步骤S302,控制至少两个目标雷达分别向各自对应的目标方向发射雷达波束。
步骤S303,获取至少两个目标雷达发送的雷达波束对应的回波数据作为原始雷达数据。
步骤S304,对至少两组原始雷达数据执行延迟处理,以实现至少两组原始雷达数据的相位一致。
步骤S305,根据至少两组原始雷达数据,生成对应的至少两组雷达图像。
具体地,通过至少两个合成孔径雷达,向目标方向发射雷达波束并接收对应的回波信号,得到原始雷达数据后,该原始雷达数据是以二维波形数据的形式存储的,其中,每个合成孔径雷达均可以独立采集原始雷达数据,因此,根据现有的合成孔径雷达的图像合成方法,可以对每个原始雷达数据进行图像处理,得到与原始雷达数据对应的雷达图像,其中,通过雷达数据生成雷达图像的具体方法为现有技术,此处不进行赘述。
步骤S306,根据预设的雷达配置信息,对雷达图像进行几何失真矫正。
示例性地,雷达配置信息中包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个。在一种可能的实现方式中,对雷达图像进行几何失真矫正,包括:
根据雷达配置信息,得到雷达图像矫正信息;根据雷达图像矫正信息对雷达图像进行几何失真矫正。
具体地,由于目标车辆的大小、外形等因素,合成孔径雷达在目标车辆上的设置位置,也存在差异。以对目标车辆进行定位时,一般以目标车辆的中心点的作为基准点。当目标车辆的车身长度较长,或者目标雷达距离目标车辆的中心的较远时,由于雷达波束发射角的影响,会使得到的雷达图像产生畸变,因此,不同位置、发射角的目标雷达,会产生与其所在位置、发射角相关的畸变。雷达图像矫正信息是用于描述不同车辆尺寸、外形以及雷达安装位置、发射角度的信息,不同类型的车辆,或者雷达设置位置、角度不同的车辆,具有相应的雷达图像矫正信息。雷达图像矫正信息可以为不同位置、发射角的目标雷达对应的雷达图像进行矫正,消除雷达图像中由于雷达位置和发射角的因素带来的几何畸变,使不同目标雷达采集到的图像信息一致,从而实现后续图像信息的精准叠加,提高第一目标图像的精确度。
步骤S307,对至少两组雷达图像进行图像配准,得到与至少两组雷达图像对应的至少两组配准图像。
示例性地,如图12所示,在一种可能的实施方式中,S307包括S3071、S3072两个具体的实现步骤:
步骤S3071,对至少两组雷达图像进行平移重合校准。
示例性地,具体步骤包括:
首先,获取基准雷达数据对应的基准雷达图像。基准雷达数据为多个目标雷达所采集的原始雷达数据中,基准雷达数据是根据预设的配置信息确定的,例如,根据雷达配置信 息,确定多个原始雷达数据中的基准雷达数据,该基准雷达数据,相比其他原始雷达数据,能够更好的表现目标检测物体的特征,因此基准雷达数据可以理解为检测效果较好的雷达数据。对基准雷达数据进行雷达图像变换,可以得到对应的基准雷达图像。更加具体地,例如,设置目标车辆车头位置的合成孔径雷达为目标雷达,对应地,目标车辆车头位置的合成孔径雷达所采集的原始雷达数据,为基准雷达数据,其对应的雷达图像为基准雷达图像。
其次,根据基准雷达图像中的目标元素,对至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合。
示例性地,图13为本申请实施例提供的一种图像平移重合校准的示意图,如图13所示,以两个目标雷达为例,其中,目标雷达A对应生成的为基准雷达图像,目标雷达B对应生成的为其他雷达图像。基准雷达图像和其他雷达图像中具有一个共视区,即在基准雷达图像中和其他雷达图像中,包括一个重合的图像区域,目标检测物体处于该重合的图像区域内。共视区内具有一个目标元素,例如图13中所示的自行车a,以自行车a为基准,对其他雷达图像进行图像平移,使所有雷达图像中的自行车a重合,实现进而,将基准雷达图像与图像平移后的其他雷达图像作为配准图像。
步骤S3072,对至少两组雷达图像进行相位重合校准。
示例性地,具体步骤包括:
首先,获取基准雷达图像中的相位信息。相位信息用于表征基准雷达图像中的图像相位,为了使图像在进行相干叠加时,得到更好的叠加效果,需要对不同雷达图像的共视区内的图像相位进行对齐。相位对其后,在相干叠加时,能够使雷达图像中的波峰和波峰叠加,波谷和波谷叠加,进而提高图像信噪比和图像的准确度。进一步地,雷达图像中包括相位信息,通过解析数字化的雷达图像,即可得到对应的相位信息,此处不再赘述。
其次,根据相位信息,对其他雷达图像的相位进行配准,使至少两组雷达图像中的其他雷达图像与基准雷达图像的相位一致,并将基准雷达图像与相位配准后的其他雷达图像作为配准图像。
示例性地,根据相位信息,可以得到基准雷达图像和其他雷达图像的相位差,进而根据相位差对其他雷达图像进行相位调整,使其他雷达图像的相位和基准雷达图像的相位一致。具体地,雷达图像是由雷达波束处理后形成的,雷达波束中的相位,经傅里叶变化等步骤确定。相位差为一个具体的数值,以基准雷达图像的相位为基准,对其他雷达图像的相位平移相位差的距离,即实现其他雷达图像与基准雷达图像之间的相位对齐。调整后的雷达图像,即为配准图像。其中,执行相位配准过程的执行主体与执行本申请实施例提供的成像方法的执行主体一致,例如可以为车辆融合感知系统,或者车载融合感知系统内的控制器或芯片,也可以是雷达系统,当然,在非车辆使用场景下,上述执行主体还可以是无人机、无人车、飞行器、机器人等终端以及终端内设置的控制器或芯片。
本申请实施例中,本申请中,通过获取基准雷达数据对应的基准雷达图像,其中,基准雷达数据可以通过雷达配置信息或具体的应用场景进行确定,并根据基准雷达图像,对其他的雷达图像进行图像平移和相位配准,生成配准图像。配准图像之间由于已经在时域和相位上对其,因此可以直接进行叠加而获得第一目标图像,提高生成的第一目标图像的精确度和分辨率。
步骤S308,将至少两组配准图像进行时域相干叠加,得到第一目标图像。
示例性地,配准图像包括共视区图像和非共视区图像,其中,不同配准图像的共视区图像相互重合。
在一种可能的实现方式中,对至少两组配准图像进行时域相干叠加,包括:
获取至少两组配准图像的共视区图像;对至少两组配准图像的共视区图像进行时域叠加,得到第一目标图像。
图14为本申请实施例提供的一种配准图像进行时域相干叠加的示意图,如图14所示,以两组配准图像为例,配准图像A和配准图像B具有共视区,由于配准图像A和配准图像B已经经过上述步骤的配准,相位和共视区位置已经对齐,因此,直接将配准图像A和配准图像B的共视区在时域进行合并叠加,即可得到第一目标图像。第一目标图像中包含有目标检测物体,由于进行了相干叠加,因此,第一目标图像的分辨率交传统的单雷达生成的雷达图像具有更高的分辨率,同时由于多个雷达源之间的雷达信号相互校准过程,使生成的第一目标图像具有更好的准确性,提高目标车辆在自动驾驶等应用场景下的安全性和稳定性。
在另一种可能的实现方式中,对至少两组配准图像进行时域相干叠加,包括:
获取配准图像之间的位置关系,根据配准图像之间的位置关系,将至少两组配准图像的非共视区图像拼接在第一目标图像的两侧,得到第二目标图像。
图15为本申请实施例提供的另一种配准图像进行时域相干叠加的示意图,如图15所示,以两组配准图像为例,配准图像A和配准图像B具有共视区和非共视区,在通过对共视区进行相干叠加,得到第一目标图像后,将配准图像A和配准图像B的非共视区图像拼接在第一目标图像两侧。由于各个配准图像是由不同的雷达采集数据经处理后得到的,因此,各配准图像的非共视区中,包含了不同的图像信息,将不同的配准图像的非共视区图像进行拼接,得到的第二目标图像具有更宽的视野,以及更加丰富的探测信息,可以进一步的提高目标车辆在自动驾驶等应用场景下的安全性和稳定性。
当然,可以理解的是,当配准图像大于两组的情况下,会出现多个不同配准图像构成的共视区和非共视区,此情况下,对于共视区可以根据图14所示实施例中的方法分别进行处理,对于非共视区,则可以根据图15所示实施例中的方法,将非共视区拼接在不同共视区的两侧,形成第二目标图像。
本实施例中,S301-S304的实现方式与本发明图5所示实施例中的S201-S204的实现方式相同,在此不再一一赘述。
上文中详细描述了本申请实施例的成像方法,下面将描述本申请实施例的成像装置。
在一个示例中,图16为本申请实施例提供的一种成像装置的示意性框图。本申请实施例的成像装置4可以是上述方法实施例中的雷达系统、车机系统、车辆融合感知系统,也可以是雷达系统、车机系统、车辆融合感知系统内的一个或多个芯片。该成像装置4可以用于执行上述方法实施例中的成像方法的部分或全部功能。该成像装置4可以包括下述模块。
数据获取模块41,用于获取至少两组原始雷达数据,其中,至少两组原始雷达数据来自至少两个雷达。数据获取模块41可以执行图3所示方法的步骤S101。
图像生成模块42,用于根据至少两组原始雷达数据合成第一目标图像,其中,第一目 标图像是对至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。图像生成模块42可以执行图3所示方法的步骤S102,或者可以执行图5所示方法的步骤S205。
图16所示实施例的成像装置4可用于执行上述方法中图3所示实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
在一个示例中,图17为本申请实施例提供的另一种成像装置的示意性框图,如图17所示,在图16所示成像装置4的基础上,本申请实施例提供的成像装置5增加了延迟处理模块51,其中:
数据获取模块41,具体用于:根据预设的雷达配置信息,确定至少两个目标雷达;控制至少两个目标雷达分别向各自对应的目标方向发射雷达波束;获取至少两个目标雷达发送的雷达波束对应的回波数据作为原始雷达数据。此时,数据获取模块41可以执行图5所示方法的步骤S201-步骤S203,或者,执行图11所示方法的步骤S301-步骤S303。
示例性地,雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个;其中,雷达位置信息用于表征雷达的位置,雷达发射角信息用于表征雷达的发射角度。
示例性地,成像装置还包括:延迟处理模块51,用于对至少两组原始雷达数据执行延迟处理,以实现至少两组原始雷达数据的相位一致。此时,延迟处理模块51可以执行图5所示方法的步骤204,或者,执行图11所示方法的步骤304。
示例性地,延迟处理模块51,具体用于:获取基准雷达数据的初始相位;根据基准雷达数据的初始相位,对至少两组原始雷达数据中的其他原始雷达数据的相位进行修正。此时,延迟处理模块51可以执行图10所示方法的步骤S2041-步骤S2042。
图17所示实施例的成像装置5可用于执行上述方法中图3或图5所示实施例中任一项的技术方案,其实现原理和技术效果类似,此处不再赘述。
并且,本实施例的实施不依赖于图16所示的实施例是否实施,本实施例可以独立实施。
在一个示例中,图18为本申请实施例提供的又一种成像装置的示意性框图,如图18所示,在图17所示装置5的基础上,本申请实施例提供的成像装置6增加了失真矫正模块61,其中:
示例性地,图像生成模块42,具体用于:根据至少两组原始雷达数据,生成对应的至少两组雷达图像;对至少两组雷达图像进行图像配准,得到与至少两组雷达图像对应的至少两组配准图像;将至少两组配准图像进行时域相干叠加,得到第一目标图像。此时,图像生成模块42可以执行图11所示方法的步骤S305、步骤S307-步骤S308。
示例性地,失真矫正模块61用于:获取预设的雷达配置信息;根据雷达配置信息,对雷达图像进行几何失真矫正。此时,失真矫正模块61可以执行图11所示的方法的步骤S306。
示例性地,雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个,失真矫正模块61在根据雷达配置信息,对雷达图像进行几何失真矫正时,具体用于:根据雷达配置信息,得到雷达图像矫正信息;根据雷达图像矫正信息对雷达图像进行几何失真矫正。此时,失真矫正模块61可以执行图11所示的方法的步骤S306。
示例性地,至少两组原始雷达数据中,包括一组基准雷达数据,图像生成模块42在 对至少两组雷达图像进行图像配准,得到与至少两组雷达图像对应的至少两组配准图像时,具体用于:获取基准雷达数据对应的基准雷达图像;根据基准雷达图像中的目标元素,对至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合;将基准雷达图像与图像平移后的其他雷达图像作为配准图像。此时,图像生成模块42可以执行图11所示的方法的步骤S3071。
示例性地,图像生成模块42在根据基准雷达图像中的目标元素,对至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合之后,具体用于:获取基准雷达图像中的相位信息;根据相位信息,对其他雷达图像的相位进行配准,使至少两组雷达图像中的其他雷达图像与基准雷达图像的相位一致;将基准雷达图像与相位配准后的其他雷达图像作为配准图像。此时,图像生成模块42可以执行图11所示的方法的步骤S3072。
示例性地,配准图像包括共视区图像和非共视区图像,其中,不同配准图像的共视区图像相互重合;图像生成模块42在将至少两组配准图像进行时域相干叠加,得到第一目标图像时,具体用于:获取至少两组配准图像的共视区图像;对至少两组配准图像的共视区图像进行时域叠加,得到第一目标图像。此时,图像生成模块42可以执行图11所示的方法的步骤S308。
示例性地,图像生成模块42在得到第一目标图像之后,具体用于:获取配准图像之间的位置关系,根据配准图像之间的位置关系,将至少两组配准图像的非共视区图像拼接在第一目标图像的两侧,得到第二目标图像。此时,图像生成模块42可以执行图11所示的方法的步骤S308。
图18所示实施例的成像装置6可用于执行上述方法中图3或图5或图11所示实施例中任一项的技术方案,其实现原理和技术效果类似,此处不再赘述。
并且,本实施例的实施不依赖于图17所示的实施例是否实施,本实施例可以独立实施。
图19为本申请实施例提供的一种雷达系统的结构示意性框图。如图19所示,该雷达系统7包括处理器71、存储器72和至少两个合成孔径雷达73,其中,处理器71用于控制合成孔径雷达73收发信号;存储器72用于存储计算机程序;处理器71还用于调用并运行存储器72中存储的计算机程序,使得雷达系统7执行图3所示方法的各步骤,或者,执行图5所示方法的各步骤,或者,执行图11所示方法的各步骤。处理器17还可以用于实现图16-图18的各模块。
图20为本申请实施例提供的一种车辆融合感知系统的结构示意性框图。如图20所示,该车辆融合感知系统8包括发送器81、接收器82和处理器83。
其中,处理器83用于执行图3的各步骤,或者,处理器83用于执行图5的各步骤,或者,处理器83用于执行图11的各步骤。处理器83用于实现图16、图17、图18的各模块。
图20所示实施例的车辆融合感知系统8可用于执行上述方法实施例的技术方案,或者图16、图17、图18所示实施例各个模块的程序,处理器83调用该程序,执行以上方法实施例的操作,以实现图16、图17、图18所示的各个模块。
其中,处理器83也可以为控制器,图20中表示为“控制器/处理器83”。发送器81和 接收器82用于支持车辆融合感知系统8与上述实施例中的目标车辆中的各设备之间收发信息,以及支持车辆融合感知系统8与上述实施例中的目标车辆中的各设备之间进行通信。
进一步的,网络设备还可以包括存储器84,存储器84用于存储车辆融合感知系统8的程序代码和数据。进一步的,车辆融合感知系统还可以包括通信接口85。
处理器83例如中央处理器(Central Processing Unit,CPU),还可以是被配置成实施以上方法的一个或多个集成电路,例如:一个或多个特定集成电路(Application Specific Integrated Circuit,ASIC),或,一个或多个微处理器(digital singnal processor,DSP),或,一个或者多个现场可编程门阵列(Field Programmable Gate Array,FPGA)等。存储器84可以是一个存储器,也可以是多个存储元件的统称。
需要说明的是,本申请实施例提供的图20的车辆融合感知系统8所包含的发送器81对应前述方法实施例中可以执行发送动作,处理器83执行处理动作,接收器82可以执行接收动作。具体可参考前述方法实施例。
图21为本申请实施例提供的一种电子设备的结构示意性框图,如图21所示,本实施例提供的电子设备9包括:收发器91,存储器92,处理器93以及计算机程序。
其中,处理器93用于控制收发器91收发信号,计算机程序存储在存储器92中,并被配置为由处理器93执行以实现本发明图3-图15所对应的任一实现方式提供的方法。
其中,收发器91,存储器92,处理器93通过总线94连接。
相关说明可以对应参见图3-图15所对应的实施例中的步骤所对应的相关描述和效果进行理解,此处不做过多赘述。
图22为本申请实施例提供的一种雷达的结构示意性框图,如图22所示,本实施例提供的雷达10包括:收发天线101和控制器102,其中,收发天线101用于接收和发送雷达信号;控制器102用于执行信号处理或控制,例如与其他至少一个雷达10通信,又如控制收发天线进行雷达信号的接收和/或发送。雷达10可以执行上述图3-图15对应方法实施例的技术方案,或者图16、图17、图18所示实施例各个模块的程序。具体通过控制器102调用该程序,执行以上方法实施例的操作,以实现图16、图17、图18所示的各个模块。具体的,收发天线可以包括独立的至少一个接收天线和至少一个发射天线,也可以包括天线阵列;控制器可以包含至少一个处理器,处理器的解释可以参照上文对处理器83的解释。从雷达的硬件结构上来看,雷达还可以包括其他电路结构,例如振荡器、混频器等中的至少一个。
相关说明可以对应参见图3-图15所对应的实施例中的步骤所对应的相关描述和效果进行理解,此处不做过多赘述。
本申请实施例还提供一种计算机可读存储介质,包括计算机代码,当其在计算机上运行时,使得计算机执行如图3-图15所对应的任一实现方式提供的方法。
本申请实施例还提供一种计算机程序产品,包括程序代码,当计算机运行计算机程序产品时,该程序代码执行如图3-图15所对应的任一实现方式提供的方法。
本申请实施例还提供一种芯片,包括处理器。该处理器用于调用并运行存储器中存储的计算机程序,以执行如图3-图15所对应的任一实现方式提供的成像方法中由雷达系统执行的相应操作和/或流程。可选地,该芯片还包括存储器,该存储器与该处理器通过电路或电线与存储器连接,处理器用于读取并执行该存储器中的计算机程序。进一步可选地, 该芯片还包括通信接口,处理器与该通信接口连接。通信接口用于接收需要处理的数据和/或信息,处理器从该通信接口获取该数据和/或信息,并对该数据和/或信息进行处理。该通信接口可以是输入输出接口。在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如,同轴电缆、光纤、数字用户线(Digital Subscriber Line,DSL))或无线(例如,红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(Solid State Disk,SSD))等。
本申请实施例还提供一种终端,终端可以为无人机、无人运输车、车辆、飞行器或者机器人等,该终端包括上述如图16-图18所示的各模块,或者包括能够实现上述如图16-图18所示的各模块的装置,或者包括如图19所示实施例提供的雷达系统。该终端能够通过如图16-图18所示的各模块,或者包括能够实现上述如图16-图18所示的各模块的装置,或者包括如图19所示实施例提供的雷达系统执行如图3-图15所对应的任一实现方式提供的成像方法。在一种可能的实现方式中,该终端包括上述计算机可读存储介质。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。

Claims (30)

  1. 一种成像方法,其特征在于,所述方法包括:
    获取至少两组原始雷达数据,其中,所述至少两组原始雷达数据来自至少两个雷达;
    根据所述至少两组原始雷达数据合成第一目标图像,其中,所述第一目标图像是对所述至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。
  2. 根据权利要求1所述的方法,其特征在于,获取至少两组原始雷达数据,包括:
    根据预设的雷达配置信息,确定至少两个目标雷达;
    控制所述至少两个目标雷达分别向各自对应的目标方向发射雷达波束;
    获取所述至少两个目标雷达发送的雷达波束对应的回波数据作为所述原始雷达数据。
  3. 根据权利要求2所述的方法,其特征在于,所述雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个;其中,所述雷达位置信息用于表征雷达的位置,所述雷达发射角信息用于表征雷达的发射角度。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,在获取至少两组原始雷达数据之后,还包括:
    对所述至少两组原始雷达数据执行延迟处理,以实现所述至少两组原始雷达数据的相位一致。
  5. 根据权利要求4所述的方法,其特征在于,所述至少两组原始雷达数据中,包括一组基准雷达数据,对所述至少两组原始雷达数据执行延迟处理,以实现所述至少两组原始雷达数据的相位一致,包括:
    获取所述基准雷达数据的初始相位;
    根据所述基准雷达数据的初始相位,对所述至少两组原始雷达数据中的其他原始雷达数据的相位进行修正。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,根据所述至少两组原始雷达数据合成第一目标图像,包括:
    根据所述至少两组原始雷达数据,生成对应的至少两组雷达图像;
    对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像;
    将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像。
  7. 根据权利要求6所述的方法,其特征在于,在根据所述至少两组原始雷达数据,生成对应的最少两组雷达图像之后,还包括:
    获取预设的雷达配置信息;
    根据所述雷达配置信息,对雷达图像进行几何失真矫正。
  8. 根据权利要求7所述的方法,其特征在于,所述雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个,根据所述雷达配置信息,对所述雷达图像进行几何失真矫正,包括:
    根据所述雷达配置信息,得到雷达图像矫正信息;
    根据所述雷达图像矫正信息对所述雷达图像进行几何失真矫正。
  9. 根据权利要求6所述的方法,其特征在于,所述至少两组原始雷达数据中,包括一组基准雷达数据,对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图 像对应的至少两组配准图像,包括:
    获取所述基准雷达数据对应的基准雷达图像;
    根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使所述其他雷达图像中的目标元素与基准雷达图像中的目标元素重合;
    将所述基准雷达图像与图像平移后的其他雷达图像作为配准图像。
  10. 根据权利要求9所述的方法,其特征在于,在根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合之后,还包括:
    获取所述基准雷达图像中的相位信息;
    根据所述相位信息,对其他雷达图像的相位进行配准,使所述至少两组雷达图像中的其他雷达图像与所述基准雷达图像的相位一致;
    将所述基准雷达图像与相位配准后的其他雷达图像作为配准图像。
  11. 根据权利要求6所述的方法,其特征在于,所述配准图像包括共视区图像和非共视区图像,其中,不同配准图像的共视区图像相互重合;将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像,包括:
    获取至少两组配准图像的共视区图像;
    对所述至少两组配准图像的共视区图像进行时域叠加,得到所述第一目标图像。
  12. 根据权利要求11所述的方法,其特征在于,在得到所述第一目标图像之后,还包括:
    获取配准图像之间的位置关系,
    根据所述配准图像之间的位置关系,将所述至少两组配准图像的非共视区图像拼接在所述第一目标图像的两侧,得到第二目标图像。
  13. 一种成像装置,其特征在于,所述装置包括:
    数据获取模块,用于获取至少两组原始雷达数据,其中,所述至少两组原始雷达数据来自至少两个雷达;
    图像生成模块,用于根据所述至少两组原始雷达数据合成第一目标图像,其中,所述第一目标图像是对所述至少两组原始雷达数据进行图像配准以及时域相干叠加得到的。
  14. 根据权利要求13所述的装置,其特征在于,所述数据获取模块,具体用于:
    根据预设的雷达配置信息,确定至少两个目标雷达;
    控制所述至少两个目标雷达分别向各自对应的目标方向发射雷达波束;
    获取所述至少两个目标雷达发送的雷达波束对应的回波数据作为所述原始雷达数据。
  15. 根据权利要求14所述的装置,其特征在于,所述雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个;其中,所述雷达位置信息用于表征雷达的位置,所述雷达发射角信息用于表征雷达的发射角度。
  16. 根据权利要求13-15任一项所述的装置,其特征在于,还包括:
    延迟处理模块,用于对所述至少两组原始雷达数据执行延迟处理,以实现所述至少两组原始雷达数据的相位一致。
  17. 根据权利要求16所述的装置,其特征在于,所述至少两组原始雷达数据中,包括一组基准雷达数据,所述延迟处理模块,具体用于:
    获取所述基准雷达数据的初始相位;
    根据所述基准雷达数据的初始相位,对所述至少两组原始雷达数据中的其他原始雷达数据的相位进行修正。
  18. 根据权利要求13-17任一项所述的装置,其特征在于,所述图像生成模块,具体用于:
    根据所述至少两组原始雷达数据,生成对应的至少两组雷达图像;
    对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像;
    将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像。
  19. 根据权利要求18所述的装置,其特征在于,还包括失真矫正模块,用于:
    获取预设的雷达配置信息;
    根据所述雷达配置信息,对雷达图像进行几何失真矫正。
  20. 根据权利要求19所述的装置,其特征在于,所述雷达配置信息包括雷达标识信息、雷达位置信息、雷达发射角信息中的至少一个,所述失真矫正模块在根据所述雷达配置信息,对雷达图像进行几何失真矫正时,具体用于:
    根据所述雷达配置信息,得到雷达图像矫正信息;
    根据所述雷达图像矫正信息对所述雷达图像进行几何失真矫正。
  21. 根据权利要求18所述的装置,其特征在于,所述至少两组原始雷达数据中,包括一组基准雷达数据,所述图像生成模块在对所述至少两组雷达图像进行图像配准,得到与所述至少两组雷达图像对应的至少两组配准图像时,具体用于:
    获取所述基准雷达数据对应的基准雷达图像;
    根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使所述其他雷达图像中的目标元素与基准雷达图像中的目标元素重合;
    将所述基准雷达图像与图像平移后的其他雷达图像作为配准图像。
  22. 根据权利要求21所述的装置,其特征在于,所述图像生成模块在根据所述基准雷达图像中的目标元素,对所述至少两组雷达图像中的其他雷达图像进行图像平移,使其他雷达图像中的目标元素与基准雷达图像中的目标元素重合之后,具体用于:
    获取所述基准雷达图像中的相位信息;
    根据所述相位信息,对其他雷达图像的相位进行配准,使所述至少两组雷达图像中的其他雷达图像与所述基准雷达图像的相位一致;
    将所述基准雷达图像与相位配准后的其他雷达图像作为配准图像。
  23. 根据权利要求18所述的装置,其特征在于,所述配准图像包括共视区图像和非共视区图像,其中,不同配准图像的共视区图像相互重合;所述图像生成模块在将所述至少两组配准图像进行时域相干叠加,得到所述第一目标图像时,具体用于:
    获取至少两组配准图像的共视区图像;
    对所述至少两组配准图像的共视区图像进行时域叠加,得到所述第一目标图像。
  24. 根据权利要求23所述的装置,其特征在于,所述图像生成模块在得到所述第一目标图像之后,具体用于:
    获取配准图像之间的位置关系,
    根据所述配准图像之间的位置关系,将所述至少两组配准图像的非共视区图像拼接在所述第一目标图像的两侧,得到第二目标图像。
  25. 一种雷达系统,其特征在于,所述雷达系统包括:至少一个处理器、存储器和至少两个合成孔径雷达,其中,
    所述至少一个处理器用于控制所述合成孔径雷达收发信号;
    所述存储器用于存储计算机程序;
    所述至少一个处理器还用于调用并运行所述存储器中存储的计算机程序,使得所述雷达系统执行所述权利要求1至12中任一项方法。
  26. 一种电子设备,其特征在于,包括至少一个处理器,所述处理器用于执行计算机程序,以执行所述权利要求1至12中任一项所述的方法;所述电子装置还包括通信接口;所述至少一个处理器与所述通信接口连接。
  27. 一种计算机可读存储介质,其特征在于,包括计算机代码,当其在计算机上运行时,使得计算机执行所述权利要求1至12中任一项所述的方法。
  28. 一种计算机程序产品,其特征在于,包括程序代码,当计算机运行所述计算机程序产品时,所述程序代码执行所述权利要求1至12中任一项所述的方法。
  29. 一种芯片,其特征在于,包括处理器,所述处理器用于调用并运行存储器中存储的计算机程序,以执行所述权利要求1至12中任一项所述的方法。
  30. 一种终端,包含如权利要求25所述的雷达系统、或者如权利要求27所述的计算机可读存储介质。
PCT/CN2021/100389 2020-07-30 2021-06-16 成像方法、装置、雷达系统、电子设备和存储介质 WO2022022137A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21850599.8A EP4184210A4 (en) 2020-07-30 2021-06-16 IMAGING METHOD AND DEVICE AS WELL AS RADAR SYSTEM, ELECTRONIC DEVICE AND STORAGE MEDIUM
KR1020237005514A KR20230038291A (ko) 2020-07-30 2021-06-16 이미징 방법 및 장치, 레이더 시스템, 전자 디바이스, 및 저장 매체
US18/160,169 US20230168370A1 (en) 2020-07-30 2023-01-26 Imaging method and apparatus, radar system, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010753975.7A CN114063076A (zh) 2020-07-30 2020-07-30 Imaging method and apparatus, radar system, electronic device, and storage medium
CN202010753975.7 2020-07-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/160,169 Continuation US20230168370A1 (en) 2020-07-30 2023-01-26 Imaging method and apparatus, radar system, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022022137A1 true WO2022022137A1 (zh) 2022-02-03

Family

ID=80037161

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/100389 WO2022022137A1 (zh) 2020-07-30 2021-06-16 Imaging method and apparatus, radar system, electronic device, and storage medium

Country Status (5)

Country Link
US (1) US20230168370A1 (zh)
EP (1) EP4184210A4 (zh)
KR (1) KR20230038291A (zh)
CN (1) CN114063076A (zh)
WO (1) WO2022022137A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118311569B (zh) * 2024-06-11 2024-09-13 杭州计算机外部设备研究所(中国电子科技集团公司第五十二研究所) 一种多雷达光电一体机协同防护和雷达去重拼接方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10035223A1 (de) * 2000-07-20 2002-01-31 Daimler Chrysler Ag Vorrichtung und Verfahren zur Überwachung der Umgebung eines Objekts
CN106680804A (zh) * 2017-01-03 2017-05-17 郑州云海信息技术有限公司 一种大型设备多点微位移测量方法
DE102018204829A1 (de) * 2017-04-12 2018-10-18 Ford Global Technologies, Llc Verfahren und Vorrichtung zur Analyse einer Fahrzeugumgebung sowie Fahrzeug mit einer solchen Vorrichtung
US11448754B2 (en) * 2018-11-20 2022-09-20 KMB Telematics, Inc. Object sensing from a potentially moving frame of reference with virtual apertures formed from sparse antenna arrays

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013124912A (ja) * 2011-12-14 2013-06-24 Toshiba Corp レーダ情報処理装置及びレーダ情報処理プログラム
CN103308923A (zh) * 2012-03-15 2013-09-18 通用汽车环球科技运作有限责任公司 来自多个激光雷达的距离图像配准方法
CN109507670A (zh) * 2017-09-14 2019-03-22 三星电子株式会社 雷达图像处理方法、装置和系统
CN111344597A (zh) * 2017-12-14 2020-06-26 康蒂-特米克微电子有限公司 通过雷达系统检测周围信息的方法
CN109541597A (zh) * 2018-12-12 2019-03-29 中国人民解放军国防科技大学 一种多站雷达isar图像配准方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP4184210A4
YANG JIE: "Study on Location of Spaceborne Synthetic Aperture Radar Image and Extraction Digital Elevation Model from Spaceborne Interferometric Synthetic Aperture Radar Image", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, UNIVERSITY OF CHINESE ACADEMY OF SCIENCES, CN, no. 4, 30 April 2004 (2004-04-30), CN , XP055891006, ISSN: 1674-022X *

Also Published As

Publication number Publication date
EP4184210A1 (en) 2023-05-24
KR20230038291A (ko) 2023-03-17
US20230168370A1 (en) 2023-06-01
EP4184210A4 (en) 2023-12-13
CN114063076A (zh) 2022-02-18

Similar Documents

Publication Publication Date Title
US11899099B2 (en) Early fusion of camera and radar frames
EP3540464B1 (en) Ranging method based on laser radar system, device and readable storage medium
US20220083841A1 (en) Method and system for automatically labeling radar data
TW202028778A (zh) 雷達深度學習
US10928479B2 (en) Apparatus and method for determining a distance to an object
WO2021129581A1 (zh) 一种信号处理方法及装置
US20210354708A1 (en) Online perception performance evaluation for autonomous and semi-autonomous vehicles
CN115144825A (zh) 一种车载雷达的外参标定方法与装置
WO2022077455A1 (zh) 通信方法和装置
US20200401825A1 (en) Object detection device, object detection method, and storage medium
EP3394633B1 (en) Device in a car for communicating with at least one neighboring device, corresponding method and computer program product.
US20210082148A1 (en) 2d to 3d line-based registration with unknown associations
WO2022183408A1 (zh) 车道线检测方法和车道线检测装置
WO2021203868A1 (zh) 数据处理的方法和装置
US20230168370A1 (en) Imaging method and apparatus, radar system, electronic device, and storage medium
WO2021057324A1 (zh) 数据处理方法、装置、芯片系统及介质
WO2019223515A1 (zh) 信息测量方法及信息测量装置
CN116359908A (zh) 点云数据增强方法、装置、计算机设备、系统及存储介质
WO2020172859A1 (zh) 毫米波雷达的测角方法、设备及存储介质
CN111208502A (zh) 一种无人驾驶物流车辆的定位方法及系统
WO2021196983A1 (zh) 一种自运动估计的方法及装置
EP4446702A1 (en) Positioning device and positioning method
US20240230367A1 (en) Methods and systems for relative localization for operating connected vehicles
WO2022037184A1 (zh) 信号处理的方法和装置
US12117536B2 (en) Systems and methods for transforming autonomous aerial vehicle sensor data between platforms

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21850599

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20237005514

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021850599

Country of ref document: EP

Effective date: 20230214

NENP Non-entry into the national phase

Ref country code: DE