WO2022183369A1 - Target detection method and device - Google Patents

Target detection method and device

Info

Publication number
WO2022183369A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
data
data corresponding
radar
point spread
Prior art date
Application number
PCT/CN2021/078669
Other languages
English (en)
French (fr)
Inventor
张慧
马莎
林永兵
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2021/078669 (WO2022183369A1)
Priority to CN202180000482.9A (CN113167886B)
Publication of WO2022183369A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 Radar or analogous systems specially adapted for mapping or imaging
    • G01S 13/90 Radar or analogous systems for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S 13/9021 SAR image post-processing techniques
    • G01S 13/66 Radar-tracking systems; Analogous systems
    • G01S 13/93 Radar or analogous systems specially adapted for anti-collision purposes
    • G01S 13/931 Radar or analogous systems for anti-collision purposes of land vehicles
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 Details of systems according to group G01S13/00
    • G01S 7/41 Details using analysis of echo signal for target characterisation; Target signature; Target cross-section

Definitions

  • the present application relates to the field of intelligent driving or automatic driving, and in particular, to a target detection method and device.
  • Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring devices, and global positioning systems to allow motor vehicles to drive autonomously without the need for active human operation.
  • Autonomous vehicles use various computing systems to help transport passengers from one location to another. Some autonomous vehicles may require some initial or continuous input from an operator, such as a pilot, driver, or passenger.
  • An autonomous vehicle permits the operator to switch from a manual mode of operation to an autonomous driving mode or a mode in between. Since autonomous driving technology does not require humans to drive motor vehicles, it can theoretically effectively avoid human driving errors, reduce the occurrence of traffic accidents, and improve the efficiency of highway transportation. Therefore, autonomous driving technology has received more and more attention.
  • Object detection is an important research topic in autonomous driving, and radar in autonomous vehicles can be used to detect and track objects.
  • the data obtained by the radar can be analyzed to achieve tracking of moving targets and detection of obstacles.
  • the autonomous vehicle realizes functions such as adaptive cruise control (ACC) based on the tracking of moving targets; and functions such as lane keeping assist (LKA) based on the detection of obstacles.
  • the embodiments of the present application provide a target detection method and device, which relate to the field of intelligent driving and automatic driving. Based on different application scenarios, radar can be used to achieve more functions, thereby better assisting driving.
  • an embodiment of the present application provides a target detection method, including: acquiring source data obtained based on radar processing; obtaining data corresponding to at least one first target according to the source data, wherein the at least one first target is a target whose speed is less than a first threshold; and performing imaging processing on the data corresponding to the at least one first target to obtain an imaging result of the at least one first target.
  • In this way, the imaging function of radar in automatic driving scenarios can be realized, breaking through the limitation of radar-based imaging to static scenarios; more functions of the radar can be realized based on different scenarios, and the radar can then be used to better assist automatic or intelligent driving.
  • the at least one first target may also be a target whose speed is less than or equal to the first threshold.
  • the signal strength of the first target is less than the second threshold
  • obtaining data corresponding to at least one first target according to the source data includes: performing target detection according to the source data to obtain at least one second target and at least one third target, wherein the at least one second target is a target whose speed is greater than or equal to the first threshold and the at least one third target is a target whose signal strength is greater than or equal to the second threshold; and removing the data corresponding to the second target and the data corresponding to the third target from the source data to obtain the data corresponding to the at least one first target.
  • In this way, the separation and utilization of radar data can be achieved through the first threshold and the second threshold; more functions of the radar can be realized based on different scenes, thereby better assisting automatic driving or intelligent driving.
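As an illustrative sketch (not part of the patent text), the threshold-based separation described above — "second targets" selected by the speed threshold, "third targets" by the signal-strength threshold, and the remaining "first targets" — could look as follows; the detection tuples and threshold values are hypothetical:

```python
def separate_targets(detections, speed_threshold, strength_threshold):
    """Partition (speed, strength) detections into the three target
    classes of the claims. The tuple layout is illustrative only."""
    moving = []         # "second targets": speed >= first threshold
    strong_static = []  # "third targets": slow, but strength >= second threshold
    weak_static = []    # "first targets": slow and weak
    for speed, strength in detections:
        if abs(speed) >= speed_threshold:
            moving.append((speed, strength))
        elif strength >= strength_threshold:
            strong_static.append((speed, strength))
        else:
            weak_static.append((speed, strength))
    return moving, strong_static, weak_static

# Hypothetical detections: a moving car, a guardrail, a lane-line return.
dets = [(12.0, 30.0), (0.1, 40.0), (0.05, 5.0)]
moving, strong_static, weak_static = separate_targets(dets, 0.5, 20.0)
```

Only the weak-static partition would then be passed on to the imaging stage described in the claims.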
  • removing the data corresponding to the second target and the data corresponding to the third target from the source data includes: obtaining a first point spread function corresponding to the at least one second target according to the data corresponding to the at least one second target; obtaining a second point spread function corresponding to the at least one third target according to the data corresponding to the at least one third target; and removing the data corresponding to the first point spread function and the data corresponding to the second point spread function from the source data; wherein the first point spread function includes main lobe data of the second target and side lobe data of the second target, and the second point spread function includes main lobe data of the third target and side lobe data of the third target.
  • the removal process can better realize data separation, and can obtain more accurate imaging results based on the data corresponding to the first target, thereby realizing various functions of the radar in different application scenarios.
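The point-spread-function removal described above can be sketched as follows. This is an illustrative approximation only: a separable sinc-shaped PSF (main lobe plus side lobes) is assumed, and the estimated contribution of each detected strong target is subtracted from the range-velocity map; the patent does not specify this particular PSF model.

```python
import numpy as np

def remove_point_spread(spectrum, peaks, half_width=3):
    """Subtract an estimated point spread function (main lobe plus side
    lobes) around each detected peak of a range-velocity map."""
    cleaned = spectrum.astype(float).copy()
    n = np.arange(-half_width, half_width + 1)
    psf = np.sinc(n / 2.0)  # assumed 1-D PSF profile; peak value 1 at n = 0
    for r, v in peaks:
        amp = spectrum[r, v]
        for dr, wr in zip(n, psf):
            for dv, wv in zip(n, psf):
                rr, vv = r + dr, v + dv
                if 0 <= rr < cleaned.shape[0] and 0 <= vv < cleaned.shape[1]:
                    cleaned[rr, vv] -= amp * wr * wv  # main + side lobes
    return cleaned

# A single strong target at cell (4, 4) of a toy RV map.
spec = np.zeros((9, 9))
spec[4, 4] = 10.0
cleaned = remove_point_spread(spec, [(4, 4)])
```

After removal, the main-lobe cell of the strong target is cancelled, leaving the residual map for weak-target imaging.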
  • obtaining data corresponding to at least one first target according to the source data includes: performing target detection according to the source data to obtain at least one second target, wherein the at least one second target is a target whose speed is greater than or equal to the first threshold; and removing the data corresponding to the second target from the source data to obtain the data corresponding to the at least one first target.
  • removing the data corresponding to the at least one second target from the source data includes: obtaining a first point spread function corresponding to the at least one second target according to the data corresponding to the at least one second target, where the first point spread function includes main lobe data of the second target and side lobe data of the second target; and removing the data corresponding to the first point spread function from the source data.
  • the removal process can better realize data separation, and can obtain the imaging result of the stationary target based on the data corresponding to the first target, thereby realizing various functions of the radar in different application scenarios.
  • the data corresponding to the first target includes a range-velocity (RV) spectrum
  • imaging processing is performed on the data corresponding to the at least one first target to obtain the imaging result of the at least one first target, including: recovering the data corresponding to the at least one first target and stitching it along the slow-time dimension to obtain stitched data; and performing imaging processing on the stitched data to obtain an imaging result of the at least one first target.
  • the stitching processing can be used to obtain an image with higher resolution during imaging, so that a more accurate imaging result of the first target can be obtained.
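The recovery-and-stitching step can be illustrated with a toy sketch: per-frame range-velocity spectra are inverse-transformed over the velocity axis to recover slow-time samples, which are then concatenated along the slow-time dimension. The inverse-FFT recovery model is an assumption for illustration:

```python
import numpy as np

def stitch_slow_time(rv_frames):
    """Recover slow-time samples from per-frame range-velocity spectra
    (inverse FFT over the velocity axis — an assumed recovery model) and
    concatenate the frames along the slow-time dimension."""
    recovered = [np.fft.ifft(frame, axis=1) for frame in rv_frames]
    return np.concatenate(recovered, axis=1)

# Two toy frames: 4 range bins x 8 Doppler bins each.
frames = [np.ones((4, 8), dtype=complex), np.zeros((4, 8), dtype=complex)]
stitched = stitch_slow_time(frames)
```

Concatenating frames lengthens the effective slow-time record, which is what yields the higher azimuth resolution mentioned above.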
  • performing imaging processing on the stitched data includes: performing synthetic aperture radar imaging processing on the stitched data. In this way, based on the synthetic aperture radar imaging processing, a more accurate imaging result of the first target is obtained.
  • performing imaging processing on the stitched data to obtain an imaging result of at least one first target includes: using Doppler parameter estimation, Doppler center compensation and/or range walk correction to process the stitched data to obtain processed data; performing first-order motion compensation and second-order motion compensation on the processed data to obtain compensation data; and performing azimuth compression on the compensation data to obtain an imaging result of the at least one first target.
  • SAR imaging can be applied to radar-based moving scenes, which breaks through the limitations of radar imaging scenes for stationary scenes, and can obtain more accurate imaging results of the first target based on SAR imaging.
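A minimal sketch of the azimuth-processing chain (Doppler centre compensation followed by azimuth compression) is shown below. Doppler parameter estimation, range walk correction, and the first- and second-order motion compensation steps are deliberately omitted, and all parameter names are illustrative:

```python
import numpy as np

def azimuth_compress(data, doppler_centroid, prf):
    """Toy azimuth chain: shift the Doppler centroid to zero (Doppler
    centre compensation), then compress the azimuth dimension with an
    FFT. Motion compensation is omitted for brevity."""
    n = data.shape[1]
    t = np.arange(n) / prf  # slow-time axis, seconds
    compensated = data * np.exp(-2j * np.pi * doppler_centroid * t)
    return np.fft.fft(compensated, axis=1)

# One range bin whose echo carries a 125 Hz Doppler centroid.
prf, fd, n = 1000.0, 125.0, 64
t = np.arange(n) / prf
echo = np.exp(2j * np.pi * fd * t)[None, :]
image = azimuth_compress(echo, doppler_centroid=fd, prf=prf)
```

After compensation the target energy focuses at the zero-Doppler bin, which is the effect azimuth compression relies on.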
  • the method is applied to a radar, and the method further includes: sending an imaging result of at least one first target, at least one second target and/or at least one third target to the target device.
  • the method further includes: sending an imaging result of at least one first target, at least one second target and/or at least one third target to the target device.
  • the method is applied to the target device to obtain source data based on radar processing, including: receiving source data from the radar.
  • different functions can be implemented in different application scenarios based on the received source data of the radar, and the usage scenarios of the data in the radar can be enriched.
  • the at least one first target includes at least one lane line
  • the method further includes: determining an automatic driving strategy according to the imaging result of the at least one first target; and/or updating lane lines in a high-precision map according to the imaging result of the at least one first target. In this way, the imaging result of the first target can be used to better assist automatic driving and obtain a more accurate high-precision map.
  • an embodiment of the present application provides a target detection device, which is applied to a radar, and includes: a processing unit, configured to acquire source data obtained based on radar processing; the processing unit is further configured to obtain data corresponding to at least one first target, wherein the at least one first target is a target whose speed is less than the first threshold; and the processing unit is further configured to perform imaging processing on the data corresponding to the at least one first target to obtain an imaging result of the at least one first target.
  • the signal strength of the first target is less than the second threshold
  • the processing unit is specifically configured to: perform target detection according to the source data to obtain at least one second target and at least one third target, wherein the at least one second target is a target whose speed is greater than or equal to the first threshold and the at least one third target is a target whose signal strength is greater than or equal to the second threshold; and remove the data corresponding to the second target and the data corresponding to the third target from the source data to obtain the data corresponding to the at least one first target.
  • the processing unit is specifically configured to: obtain a first point spread function corresponding to at least one second target according to data corresponding to at least one second target; obtain at least one point spread function corresponding to at least one third target according to data corresponding to at least one third target. the second point spread function corresponding to the third target; remove the data corresponding to the first point spread function and the data corresponding to the second point spread function in the source data; wherein, the first point spread function includes the main lobe data of the second target and the side lobe data of the second target; the second point spread function includes the main lobe data of the third target and the side lobe data of the third target.
  • the processing unit is specifically configured to: perform target detection according to the source data to obtain at least one second target, wherein the at least one second target is a target whose speed is greater than or equal to the first threshold; and remove the data corresponding to the second target from the source data to obtain the data corresponding to the at least one first target.
  • the processing unit is specifically configured to: obtain a first point spread function corresponding to at least one second target according to data corresponding to at least one second target; the first point spread function includes the main lobe of the second target. data and the side lobe data of the second target; remove the data corresponding to the first point spread function in the source data.
  • the data corresponding to the first target includes a range-velocity (RV) spectrum
  • the processing unit is specifically used for: recovering the data corresponding to the at least one first target and stitching it along the slow-time dimension to obtain stitched data; and performing imaging processing on the stitched data to obtain an imaging result of the at least one first target.
  • the processing unit is specifically configured to: perform synthetic aperture radar imaging processing on the stitched data.
  • the processing unit is specifically used for: using Doppler parameter estimation, Doppler center compensation and/or range walk correction to process the stitched data to obtain processed data; performing first-order motion compensation and second-order motion compensation on the processed data to obtain compensation data; and performing azimuth compression on the compensation data to obtain an imaging result of the at least one first target.
  • the communication unit is specifically configured to: send the imaging result of at least one first target, at least one second target and/or at least one third target to the target device.
  • the communication unit is specifically configured to: receive source data from the radar.
  • the at least one first target includes at least one lane line
  • the processing unit is specifically configured to: determine the automatic driving strategy according to the imaging result of the at least one first target; and/or, according to the imaging result of the at least one first target The result updates the lane lines in the HD map.
  • embodiments of the present application provide a computer-readable storage medium, where a computer program or instruction is stored in the computer-readable storage medium, and when the computer program or instruction is run on a computer, the computer is made to execute the target detection method described in any implementation manner of the first aspect.
  • an embodiment of the present application provides a computer program product including instructions, which, when the instructions are run on a computer, cause the computer to execute the target detection method described in any implementation manner of the first aspect.
  • an embodiment of the present application provides a terminal, where the terminal includes the target detection apparatus described in the second aspect and various possible implementation manners of the second aspect.
  • the terminal may include a vehicle or a robot, etc.
  • the vehicle may implement the target detection method described in the embodiments of the present application through the target detection device.
  • the target detection device described in the above embodiments includes but is not limited to: a vehicle controller, a vehicle-mounted module, a vehicle-mounted component, a vehicle-mounted chip, a vehicle-mounted unit, a vehicle-mounted radar, or other sensors.
  • an embodiment of the present application provides a target detection device, the device includes a processor and a storage medium, the storage medium stores instructions, and when the instructions are executed by the processor, the target as described in any implementation manner of the first aspect is achieved. Detection method.
  • the present application provides a chip or a chip system, the chip or chip system includes at least one processor and a communication interface, the communication interface and the at least one processor are interconnected through a line, and the at least one processor is used for running a computer program or instruction, The target detection method described in any one of the implementation manners of the first aspect is performed.
  • the communication interface in the chip may be an input/output interface, a pin, a circuit, or the like.
  • the chip or chip system described above in this application further includes at least one memory, where instructions are stored in the at least one memory.
  • the memory may be a storage unit inside the chip, such as a register or a cache, or a storage unit outside the chip (e.g., a read-only memory, a random access memory, etc.).
  • FIG. 1 is a schematic diagram of a scenario of automatic driving according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a scene in which an independent computing device is used for target detection according to an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a target detection method provided by an embodiment of the present application.
  • FIG. 4 is an interactive schematic diagram of a target detection method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of another target detection method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another target detection method provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a target detection apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a hardware structure of a target detection device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • the first value and the second value are only used to distinguish different values, and do not limit their order.
  • the words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • At least one means one or more
  • plural means two or more.
  • "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural.
  • the character "/" generally indicates that the associated objects are in an "or" relationship.
  • "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • For example, "at least one of a, b, or c" can represent: a, b, c, a and b, a and c, b and c, or a, b and c, where a, b, and c may be singular or plural.
  • Radar is an indispensable sensor in the field of intelligent driving or autonomous driving. Radar can support two application scenarios: moving scenarios and stationary scenarios. In moving scenes, such as autonomous driving scenes, radar can be used to detect moving objects such as moving vehicles and moving pedestrians, as well as more prominent stationary targets such as obstacles, road guardrails, or road signs; in stationary scenes, such as parking-lot scenes, radar can be used to detect stationary targets such as stationary vehicles or obstacles.
  • In a moving scene, methods such as two-dimensional fast Fourier transform (FFT), constant false alarm rate (CFAR) detection, and spatial-dimension FFT processing can be used to obtain the distance, speed, and angle information of the target, so as to complete the detection of the target in the moving scene.
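The 2-D FFT and CFAR steps mentioned above can be sketched as follows; the guard/training cell counts and the scaling factor are illustrative choices, not values from the patent:

```python
import numpy as np

def range_doppler_map(raw):
    """2-D FFT over fast time (range) and slow time (Doppler), with the
    Doppler axis centred via fftshift."""
    return np.fft.fftshift(np.fft.fft2(raw), axes=1)

def ca_cfar_1d(power, guard=2, train=4, scale=5.0):
    """Minimal 1-D cell-averaging CFAR: flag cells exceeding the scaled
    mean of the surrounding training cells (guard cells excluded)."""
    hits = []
    for i in range(train + guard, len(power) - train - guard):
        left = power[i - train - guard:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if power[i] > scale * noise:
            hits.append(i)
    return hits

# Flat noise floor with one strong return at cell 15.
power = np.ones(32)
power[15] = 100.0
hits = ca_cfar_1d(power)
rdm = range_doppler_map(np.ones((8, 8)))
```

In practice the CFAR sweep would run over each row of the range-Doppler map, and an angle FFT across antenna channels would follow.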
  • autonomous vehicles can realize functions such as adaptive cruise based on the tracking of objects such as vehicles or pedestrians.
  • In a stationary scene, the synthetic aperture radar (SAR) imaging method can be used to perform parameter estimation, motion compensation, and compression on the dechirped echo signal received by the radar, so as to obtain an image of the target after SAR imaging.
  • a vehicle can implement functions such as autonomous valet parking (AVP) based on the imaging of stationary objects such as vehicles or obstacles in a parking lot.
  • In general implementations, the data of stationary targets or weak stationary targets is usually filtered out as clutter data.
  • the weak echo data received by the radar is not fully utilized.
  • the imaging principle of SAR imaging is to use a small antenna on the radar as a single radiating unit; this unit moves continuously along a straight line, receiving the echo signals of the same target object at different positions and processing them.
  • the above-mentioned small antenna can be combined into an equivalent "large antenna" by moving, and then a higher-resolution image of the target object can be obtained.
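The resolution gain from synthesizing a "large antenna" can be illustrated with the textbook formulas: a real aperture of length D has cross-range resolution of about λR/D at range R, while a stripmap synthetic aperture approaches D/2 independent of range. The 77 GHz carrier, 4 cm antenna, and 50 m range below are assumed example values, not figures from the patent:

```python
# Textbook cross-range (azimuth) resolution comparison.
c = 299_792_458.0              # speed of light, m/s
wavelength = c / 77e9          # ~3.9 mm at a 77 GHz carrier (assumed)
antenna = 0.04                 # physical antenna length D, metres (assumed)
rng = 50.0                     # target range R, metres (assumed)

real_res = wavelength * rng / antenna  # real-beam resolution ~ lambda*R/D
sar_res = antenna / 2                  # stripmap SAR limit ~ D/2
```

With these numbers the real beam resolves only metre-scale detail at 50 m, while the synthetic aperture reaches centimetre scale, which is the "higher-resolution image" referred to above.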
  • the scene in which SAR imaging is realized is that the radar is moving and the target is stationary. Therefore, in general implementation, SAR imaging cannot be applied to scenes containing moving targets, and the imaging scene is limited to static scenes, so the application of SAR imaging technology in moving scenes such as autonomous driving scenes is limited.
  • In other words, when the radar is used for target detection in a moving scene, the weak echo data received by the radar cannot be fully utilized; and when the radar is used for SAR imaging, it is limited to stationary scenes. As a result, in different application scenarios the functions that the radar can achieve are limited, and the radar cannot well assist intelligent driving and automatic driving.
  • In view of this, the embodiments of the present application provide a target detection method and device, in which, in a motion scene such as automatic driving or intelligent driving, source data obtained by radar processing is acquired, a first threshold is used to process the source data to different degrees to obtain the data corresponding to the first target suitable for the application scene, and imaging processing is performed on the data corresponding to the first target, so as to realize the imaging function of the radar in the automatic driving scene, breaking through the limitation of radar-based imaging to static scenes.
  • the target detection method provided in the embodiment of the present application may be applied to an automatic driving scenario.
  • a radar on an autonomous vehicle can detect objects such as obstacles based on the target detection method of the embodiment of the present application, and formulate an autonomous driving strategy or update elements in a high-precision map based on the target detection results.
  • the target detection method provided by the embodiments of the present application may be applied to a digital signal processing (digital signal processing, DSP) unit of a radar.
  • the target detection method provided by the embodiments of the present application may also be applied to other devices, which may include: an electronic control unit (ECU) on a vehicle, a multi-domain controller (MDC), or stand-alone computing devices such as servers.
  • the radar may perform preliminary processing (e.g., two-dimensional FFT processing, etc.) on the acquired data, and send the preliminarily processed data to the MDC for subsequent processing.
  • FIG. 1 is a schematic diagram of an automatic driving scenario provided by an embodiment of the present application.
  • the autonomous vehicle 101 and the autonomous vehicle 102 are traveling in different lanes.
  • the autonomous vehicle 101 and the autonomous vehicle 102 can detect surrounding objects based on radars in the vehicles.
  • autonomous vehicle 101 may detect other objects such as autonomous vehicle 102 , road barriers 103 , pavement markings 104 , lane lines 105 , and lane lines 106 around its vehicle.
  • the autonomous vehicle 101 may acquire echo data of objects around the road based on the radar, and the radar in the autonomous vehicle 101 may use the target detection method provided in the embodiment of the present application to process the received echo data: the speed threshold is used to obtain the data corresponding to moving targets in the processed echo data, and the data corresponding to the moving targets is then removed from the processed echo data to obtain the data corresponding to strong stationary targets.
  • the data corresponding to the strongly stationary target is subjected to imaging processing to obtain the imaging result corresponding to the strongly stationary target.
  • the radar in the autonomous vehicle 101 can detect strong stationary objects such as the roadside guardrail 103 .
  • In addition, the radar in the self-driving vehicle 101 can also use the speed threshold to obtain the data corresponding to moving targets in the processed echo data, and use the signal strength (or amplitude) threshold to obtain the data corresponding to strong stationary targets in the processed echo data; the data corresponding to the moving targets and the data corresponding to the strong stationary targets are then removed from the processed echo data to obtain the data corresponding to weak stationary targets, and imaging processing is performed on the data corresponding to the weak stationary targets to obtain the imaging results corresponding to the weak stationary targets. As shown in FIG. 1, the radar in the autonomous vehicle 101 can detect weak stationary objects such as road markings 104, lane lines 105, and lane lines 106.
  • Subsequently, the autonomous vehicle 101 can plan an automatic driving route according to the detected targets and other automatic driving data such as lane line data, so as to ensure the normal driving of the autonomous vehicle 101.
  • FIG. 2 is a schematic diagram of a scene in which an independent computing device is used for target detection according to an embodiment of the present application.
  • the scenario may include: an autonomous vehicle 101 , a wireless wide area network (wide area network, WAN) 202 , a communication network 203 and a server 204 .
  • the autonomous driving vehicle 101 may include one or more devices such as wireless transceivers.
  • the wireless transceiver in the autonomous vehicle 101 is able to exchange data and communicate as needed with the wireless WAN 202 in the scenario.
  • the autonomous driving system in the autonomous vehicle 101 may transmit the echo data received by the radar in the vehicle, or data received by other sensors, via the wireless WAN 202 and one or more communication networks 203 (e.g., the Internet) to the server 204 for processing.
  • the server 204 then transmits the processed data to the automatic driving system of the autonomous vehicle 101 for guiding the automatic driving of the vehicle; alternatively, the server 204 can also transmit the processed data for updating elements in the high-precision map.
  • the server 204 may be one or more servers.
  • The above application scenarios of the target detection method provided by the embodiments of the present application are merely examples and do not limit the application scenarios of the embodiments of the present application.
  • a target detection method provided by the embodiments of the present application is not limited to the application scenario of vehicle-mounted radar, but can also be applied to airborne radar or other platforms, which is not limited in the embodiments of the present application.
  • the target detection method provided in the embodiments of the present application can be used for a linear frequency-modulated continuous wave (LFMCW) radar, and the radar signal scheme can be extended to digitally modulated radars; for example, the signal scheme can be phase-modulated continuous wave (PMCW).
  • a target detection method provided in this embodiment of the present application can be used for a millimeter wave radar.
  • millimeter wave refers to electromagnetic waves with a wavelength between 1 mm and 10 mm, corresponding to a frequency range of 30 GHz-300 GHz. In this frequency range, the characteristics of millimeter waves include: easy miniaturization, large bandwidth, short wavelength, high radar resolution, and strong penetration. These characteristics make millimeter waves suitable for use in the automotive field.
  • Millimeter-wave radar is more capable of penetrating smoke, dust or fog, making it possible to work around the clock. Therefore, millimeter-wave radar can be widely used in vehicles.
  • a target detection method provided by the embodiments of the present application can also be used for radars or other sensors in other frequency bands, such as ultrasonic radars, lidars, and other sensors.
  • the terminal devices include, but are not limited to, mobile stations (MS) and mobile terminals. For example, a terminal device can be a mobile phone (or a "cellular" phone) or a computer with wireless communication functions; a terminal device can also be a computer with wireless transceiver functions, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical care, a wireless terminal in a smart grid or smart manufacturing, a wireless terminal in transportation safety, a drone, a wireless terminal in a smart city, a smart home device, or another wireless terminal. Terminals may be called by different names in different networks, for example: user equipment, mobile station, subscriber unit, station, cell phone, personal digital assistant, wireless modem, wireless communication device, handheld device, laptop, cordless phone, wireless local loop station, and so on.
  • the strong stationary target described in this embodiment of the present application may be a stationary target whose speed is less than a certain threshold and whose echo signal strength is greater than a certain value.
  • the strong stationary target may be a stationary target such as a metal guardrail, a stationary vehicle or an obstacle.
  • a strong stationary target may be a target for which the signal transmitted by the radar, after being reflected by the target, produces an echo signal whose echo intensity (or radar cross-sectional area, reflection intensity, etc.) received by the radar is relatively large.
  • the magnitude of the echo intensity is related to factors such as the material of the target, the roughness of the target surface, and/or the energy of the radar transmit signal.
  • the weak stationary target described in this embodiment of the present application may be a stationary target whose speed is less than a certain threshold and whose echo signal strength is smaller than a certain value.
  • the weak stationary target may be a stationary target such as a lane line.
  • the weak stationary targets may include targets with low echo strength received by the radar.
  • the SAR imaging described in the embodiments of the application may include forming a large virtual aperture by using the relative motion of the radar and the target, thereby breaking the limitation of the antenna aperture and realizing high-resolution imaging of the target.
  • SAR imaging technology is applied to the scene where the radar is moving and the target is stationary.
  • the slow time dimension (or azimuth dimension) described in the embodiments of the present application may include a dimension along the pulse repetition period.
  • the slow time can be used to mark the time between different pulses, and a pulse can be regarded as a sample of the slow time.
  • the fast time dimension (or range dimension) described in the embodiments of the present application is the dimension along the samples within one pulse; the fast time dimension reflects the intra-pulse time.
  • the radar sends a pulse and obtains the echo signal corresponding to this pulse.
  • the time of the above-mentioned pulse sampling is the fast time.
  • fast time can reflect distance.
  • the point spread function (PSF) described in the embodiments of the present application may include the impulse response of a system, which can be used to measure the resolution of an imaged image.
  • the main lobe and side lobes of the point spread function described in the embodiments of this application are the main lobe and side lobes of the point spread function (for example, a sinc-type function) formed after signal compression. The main lobe refers to the data between the first zero-crossings on either side of the maximum of the point spread function; the side lobes refer to the remaining data of the point spread function outside the main lobe.
  • the high definition map (HD Map) described in the embodiments of the present application may include machine-oriented map data for use by autonomous vehicles. It describes road traffic information elements more accurately and reflects the actual situation of the road more realistically. High-definition maps can provide high-precision positioning, road-level and lane-level planning, and lane-level guidance capabilities.
  • FIG. 3 is a schematic flowchart of a target detection method provided by an embodiment of the present application. As shown in FIG. 3 , the method includes:
  • the source data may include a range-velocity RV spectrum obtained by processing an echo signal received by the radar.
  • the RV spectrum may include a distance velocity map obtained by processing the echo signal.
  • a possible way to obtain the source data obtained by radar processing may include the following steps:
  • the process of generating the echo signal may include: an oscillator in the radar generates a chirp signal over time; a part of the chirp signal is sent out through the radar's transmitting antenna, and a part of the chirp signal is input into the radar's mixer as a local oscillator signal. The signal sent by the radar's transmitting antenna is reflected when it encounters a target object.
  • the radar's receiving antenna receives the echo signal reflected by the target object.
  • the echo signal received by the radar's receiving antenna and the local oscillator signal are mixed in the mixer; after the mixing operation, the de-frequency-modulated echo data (or intermediate frequency signal) is obtained.
  • the mixed signal is converted into digital form after passing through a low-pass filter and ADC (analog-to-digital converter).
  • S3012 Divide the de-frequency modulated echo data into azimuth sub-blocks in the slow time dimension.
  • the de-frequency modulated echo data may be divided into blocks in the slow time dimension.
  • the size of the azimuth sub-block may be selected as one frame. It can be understood that the azimuth sub-block can be divided into different sizes according to the actual scene, which is not limited in this embodiment of the present application.
  • the intermediate frequency signal can be converted into a digital signal through an analog-to-digital converter, and then enters a signal processor for processing, so as to obtain the information of the target object.
  • the signal processing method includes: fast Fourier transform and/or spectrum analysis, etc.
  • two-dimensional FFT processing is performed on the above-mentioned intermediate frequency signal divided into azimuth sub-blocks.
  • the two-dimensional FFT includes fast time FFT and slow time FFT.
  • the range data corresponding to the azimuth sub-block can be obtained via the fast-time FFT, and the velocity data corresponding to the azimuth sub-block can be obtained via the slow-time FFT, thereby obtaining the source data produced by radar processing.
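The two-dimensional FFT step above can be sketched as follows. This is a minimal simulation with hypothetical LFMCW parameters (the carrier, chirp rate, sampling rate, and target values are illustrative, not values from the application), showing how the fast-time FFT yields range and the slow-time FFT yields velocity for a single point target.

```python
import numpy as np

# Hypothetical LFMCW parameters (illustrative 77 GHz radar).
c = 3e8
fc = 77e9                 # carrier frequency
lam = c / fc              # wavelength
Kr = 30e12                # chirp (frequency modulation) rate, Hz/s
Fs = 10e6                 # fast-time sampling rate
PRI = 50e-6               # chirp repetition interval
M, N = 256, 128           # fast-time / slow-time sample counts

r, v = 30.0, 5.0          # point target: 30 m range, 5 m/s radial speed
m = np.arange(M)[:, None] # fast-time index
n = np.arange(N)[None, :] # slow-time index

# De-chirped (beat) signal of one scatterer: beat frequency 2*Kr*r/c,
# Doppler frequency 2*v/lam across chirps (simplified model).
fb = 2 * Kr * r / c
fd = 2 * v / lam
sig = np.exp(1j * 2 * np.pi * (fb * m / Fs + fd * n * PRI))

# RV spectrum: fast-time FFT gives range, slow-time FFT gives velocity.
rv = np.fft.fftshift(np.fft.fft2(sig), axes=1)  # center the velocity axis
qm, qn = np.unravel_index(np.abs(rv).argmax(), rv.shape)

# Map the peak indices back to physical range and velocity.
r_est = qm / M * Fs * c / (2 * Kr)
v_est = (qn - N // 2) / (N * PRI) * lam / 2
print(r_est, v_est)
```

The recovered peak position lands within one resolution cell of the simulated 30 m / 5 m/s target, which is the behavior the two-dimensional FFT step relies on.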
  • the at least one first target may include one or more targets, among the multiple targets contained in the source data obtained by performing target detection on the source data, whose speed is less than (or less than or equal to) the first threshold. The data corresponding to the at least one first target may include the RV spectrum, obtained from the RV spectrum of the source data, of the one or more targets whose velocity is less than (or less than or equal to) the first threshold.
  • the first threshold value may include a speed threshold value, the first threshold value may include a numerical value, or may include a range formed by numerical values.
  • for example, the first threshold may be 0, or the first threshold may be the range of 0 m/s to 2 m/s. It can be understood that when the speed of the first target is in the range of 0 m/s to 2 m/s, since the movement range of the first target is small, the first target can also be considered a stationary target.
  • the first target may include a target whose speed is less than (or less than or equal to) a first threshold.
  • the first target may comprise a stationary target with a velocity less than (or less than or equal to) a first threshold.
  • the stationary objects may include stationary vehicles, obstacles, road signs, and guardrails.
  • a possible implementation of obtaining data corresponding to a first target according to source data is to obtain data corresponding to a stationary target according to the RV spectrum of the source data.
  • for example, target detection can be performed on the RV spectrum of the source data to obtain multiple targets in the autonomous driving scene, such as a moving car, a moving pedestrian, and a stationary vehicle. The above multiple targets are further screened using the speed threshold to obtain a stationary target among them; the stationary target is the first target, for example a stationary vehicle.
  • a possible implementation of obtaining data corresponding to a plurality of first targets according to source data is to obtain data corresponding to a plurality of stationary targets according to the RV spectrum of the source data.
  • for example, target detection can be performed on the RV spectrum of the source data to obtain multiple targets, including: moving cars, moving pedestrians, stationary vehicles, obstacles, and guardrails. The above targets are further screened using the speed threshold to obtain the stationary targets among them; the stationary targets are the first targets, such as stationary vehicles, obstacles, and guardrails.
  • obtaining data corresponding to the at least one first target from the source data may include the following implementations:
  • the data corresponding to the target whose speed is less than the first threshold may be filtered out from the source data.
  • filtering can be understood as taking out the required data from more data.
  • for example, when the first threshold is a speed value close to 0, the first threshold can be used to filter out, from the source data, the data corresponding to stationary targets whose speed is less than the first threshold.
  • alternatively, the data corresponding to the targets whose speed is greater than or equal to the first threshold can be obtained and removed from the source data, leaving the data corresponding to the targets whose speed is less than the first threshold. For example, when the first threshold is a speed value close to 0, the first threshold can be used to obtain from the source data the data corresponding to moving targets whose speed is greater than or equal to the first threshold; removing the data corresponding to the moving targets from the source data then yields the data corresponding to stationary targets whose speed is less than the first threshold.
  • obtaining data corresponding to the at least one first target from the source data may include the following implementations:
  • the data corresponding to the target whose speed is less than or equal to the first threshold may be filtered out from the source data. For example, when the first threshold value is equal to 0 in speed, the data corresponding to the stationary target whose speed is less than or equal to the first threshold value can be filtered out from the source data by using the first threshold value.
  • alternatively, the data corresponding to the targets whose speed is greater than the first threshold can be obtained and removed from the source data, leaving the data corresponding to the targets whose speed is less than or equal to the first threshold. For example, when the first threshold is equal to 0 in speed, the first threshold can be used to obtain from the source data the data corresponding to moving targets whose speed is greater than the first threshold; removing the data corresponding to the moving targets from the source data then yields the data corresponding to the stationary targets.
  • the method for obtaining data corresponding to at least one first target by using the source data may include other contents according to the actual scenario, which is not limited in this embodiment of the present application.
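The two filtering implementations above can be sketched on a toy RV spectrum. The velocity grid, threshold value, and target placements below are illustrative assumptions, not values from the application; the point is that filtering out sub-threshold cells and removing super-threshold cells leave the same stationary-target data.

```python
import numpy as np

# Toy RV magnitude spectrum: rows = range bins, columns = velocity bins.
# The velocity axis is assumed centered on 0 m/s (hypothetical grid).
vel_axis = np.linspace(-10, 10, 21)          # m/s, one value per column
rv = np.zeros((8, 21))
rv[2, 10] = 5.0   # stationary target at v = 0 m/s
rv[5, 16] = 9.0   # moving target at v = +6 m/s

v_thresh = 2.0    # first threshold: |v| < 2 m/s counts as stationary

# Implementation 1: filter out the data of targets below the threshold.
stationary_mask = np.abs(vel_axis) < v_thresh
stationary_data = rv * stationary_mask[None, :]

# Implementation 2: remove the data of targets at/above the threshold;
# the residual is the same stationary-target data.
moving_mask = np.abs(vel_axis) >= v_thresh
residual = rv - rv * moving_mask[None, :]

assert np.array_equal(stationary_data, residual)
print(stationary_data[2, 10], stationary_data[5, 16])
```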
  • S303 Perform imaging processing on data corresponding to at least one first target to obtain an imaging result of at least one first target.
  • imaging may include processing data into the image corresponding to that data. Imaging processing may include the real-time process of turning data into the corresponding image; imaging processing methods include SAR imaging, polarization imaging, or other methods such as remote-sensing imaging. The imaging result of the at least one first target may include the image corresponding to a first target, or the images corresponding to multiple first targets, obtained by performing imaging processing on the data corresponding to the first target(s).
  • the data corresponding to the first target may include the RV spectrum. One implementation of performing imaging processing on the data corresponding to the at least one first target is to perform data recovery on the data corresponding to the first target, splice the recovered data along the slow time dimension, and perform motion compensation and imaging processing on the spliced data to obtain the imaging result of the first target.
  • the above imaging processing process is used to process the data into a recognizable image.
  • the image can be used to determine the automatic driving strategy; in terms of high-precision maps, the image can also be used to update elements in the high-precision map, thereby obtaining a more comprehensive and accurate high-precision map.
  • in the embodiments of the present application, the source data obtained by radar processing is acquired, and the first threshold is used to process and utilize the source data to different degrees, so as to obtain the data corresponding to the target suitable for the application scenario. Imaging processing is then performed on the data corresponding to the target, so that the imaging function of the radar in the automatic driving scene can be realized, and the radar can better assist autonomous or intelligent driving.
  • S302 may include the following steps:
  • S3021 Perform target detection according to the source data to obtain at least one second target and at least one third target.
  • the at least one second target may include one or more targets, among the multiple targets contained in the source data obtained by performing target detection on the source data, whose speed is greater than (or greater than or equal to) the first threshold. The at least one third target may include one or more targets, among the multiple targets contained in the source data, whose signal amplitude is greater than (or greater than or equal to) the second threshold. The data corresponding to the at least one second target may include the RV spectrum, obtained from the RV spectrum of the source data, of the one or more targets whose speed is greater than (or greater than or equal to) the first threshold; the data corresponding to the at least one third target may include the RV spectrum, obtained from the RV spectrum of the source data, of the one or more targets whose signal amplitude is greater than (or greater than or equal to) the second threshold.
  • the second threshold may include a numerical value, or a range formed by numerical values. Its usage is similar to that of the first threshold, and details are not repeated here.
  • one implementation of performing target detection according to the source data to obtain at least one second target and at least one third target is that target detection can be performed on the RV spectrum of the source data to obtain at least one moving target and at least one strong stationary target.
  • the moving target is a target whose speed is greater than 0; the strong stationary target is a target whose signal strength is greater than a preset signal strength.
  • autonomous driving scenarios can include objects such as moving cars, moving pedestrians, stationary vehicles, obstacles, guardrails, or lane lines.
  • Target detection can be performed on the RV spectrum containing the source data of multiple targets in the above scenes, and moving targets such as moving cars or moving pedestrians, as well as stationary targets such as obstacles, guardrails or lane lines can be obtained.
  • target detection is used to separate a target from other featureless content, extract the target, and determine the position of the target.
  • object detection can be used to separate the second and third objects from other data in the RV spectrum.
  • the target detection method may include: a neural network method, or other target detection methods such as CFAR detection. It can be understood that the target detection method may include other contents according to the actual scene, which is not limited in this embodiment of the present application.
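As one concrete possibility for the CFAR detection mentioned above, a one-dimensional cell-averaging CFAR can be sketched as follows. The guard/training cell counts, scale factor, and injected targets are illustrative assumptions, not parameters from the application; a real detector would operate on the two-dimensional RV spectrum.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=8.0):
    """1-D cell-averaging CFAR: a cell is declared a detection when its
    power exceeds `scale` times the mean of the training cells on both
    sides (guard cells excluded). Simplified sketch; edge cells skipped."""
    n = len(power)
    hits = []
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if power[i] > scale * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
power = rng.exponential(1.0, 200)  # noise-like background
power[60] = 60.0                   # injected strong target
power[140] = 45.0                  # injected strong target
print(ca_cfar(power))
```

Because the threshold adapts to the local noise estimate, the two injected targets are detected regardless of the absolute noise level.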
  • the obtained second target and third target can also be used to realize other functions of the radar. For example, target clustering or target tracking processing for the second target can be used to realize functions such as vehicle-mounted adaptive cruise; processing such as guardrail detection or road edge detection for the third target can be used to implement functions such as lane keeping assistance.
  • distance data and speed data of the at least one second target, and distance data and speed data of the at least one third target, can be obtained.
  • the angle data of the second target and the angle data of the third target can also be obtained.
  • the angle data of the second target and the angle data of the third target can be obtained by using inter-channel FFT (or angle dimension FFT), or spectral estimation like direction of arrival (DOA) estimation.
  • the combination of one transmit antenna and one receive antenna may be understood as one channel, and the inter-channel FFT may be understood as performing FFT processing based on RV spectra of multiple channels. It can be understood that the angle data of the target can be used to obtain a more accurate position of the target.
  • S3022 Remove data corresponding to the second target and data corresponding to the third target from the source data, to obtain data corresponding to at least one first target.
  • the data corresponding to the second target or the data corresponding to the third target may include an RV spectrum.
  • an implementation of removing the data corresponding to the second target and the data corresponding to the third target from the source data may be as follows: the point spread function corresponding to the second target and the point spread function corresponding to the third target are recovered, and the point data contained in the point spread function corresponding to the second target and the point data contained in the point spread function corresponding to the third target are removed from the RV spectrum of the source data. The remaining data of the source data may include the data corresponding to the first target.
  • the autonomous driving scene may include objects such as moving cars, moving pedestrians, stationary vehicles, obstacles, guardrails, or lane lines.
  • multiple targets can be obtained using the target detection method, and moving targets and strong stationary targets can be screened out using the speed threshold (for example, 0 in the speed dimension) and the signal strength threshold: moving targets such as moving cars or moving pedestrians, and strong stationary targets such as stationary vehicles, obstacles, or guardrails.
  • the point spread function corresponding to the moving target and the point spread function corresponding to the strong stationary target are removed from the RV spectrum of the source data, and the data corresponding to the remaining weak stationary targets in the RV spectrum can be obtained, such as the data corresponding to the lane lines.
  • in the embodiments of the present application, the separation and utilization of radar data can be achieved through the first threshold and the second threshold, so that more radar functions are realized in this scene, which in turn better assists autonomous or intelligent driving.
  • S3022 may include the following steps:
  • the first point spread function corresponding to the at least one second target may include a two-dimensional sinc function corresponding to the scattering points of the at least one second target.
  • the response function may be used to obtain the first point spread function corresponding to the second target.
  • the response function can be understood as the sum of the responses of multiple scattering points (for example, p scattering points) in the RV spectrum. In the response function: p represents the index of the p-th scattering point; a_p represents the amplitude in the RV spectrum corresponding to the p-th scattering point; λ represents the signal wavelength; r_p and v_p represent the distance and velocity corresponding to the p-th scattering point; F_s represents the sampling rate of the signal; K_r represents the frequency modulation rate of the signal; PRI represents the repetition interval of the chirp signal; M and N are the numbers of data points in the fast time and slow time dimensions, respectively; and q_m and q_n are the sampling point indices corresponding to the fast time dimension and the slow time dimension of the RV spectrum, respectively.
  • an implementation of obtaining the first point spread function corresponding to the at least one second target according to the data corresponding to the at least one second target is: obtain the sampling point indices corresponding to the second target whose speed exceeds the first threshold, and use the sampling point indices corresponding to the second target, the amplitude corresponding to the second target, and the response function to recover the two-dimensional sinc function corresponding to the second target.
  • optionally, the amplitude and phase error of the point spread function of the second target can also be estimated by the least squares method, and the point spread function of the second target can be reconstructed according to the above response function; in the reconstruction, the effect of the window function on the main lobe width and side lobe intensity of the sinc-type range envelope is taken into account.
  • the second point spread function corresponding to the at least one third target may include a two-dimensional sinc function corresponding to the scattering points of the at least one third target. An implementation of obtaining the second point spread function corresponding to the at least one third target according to the data corresponding to the at least one third target is: obtain the sampling point indices corresponding to the third target whose signal strength exceeds the second threshold, and use the sampling point indices corresponding to the third target, the data corresponding to the third target, and the response function to recover the two-dimensional sinc function corresponding to the third target.
  • the method for obtaining the point spread function may include other contents according to the actual scene, which is not limited in this embodiment of the present application.
  • the data corresponding to the first point spread function includes the main lobe data and the side lobe data of the second target contained in the first point spread function; the data corresponding to the second point spread function includes the main lobe data and the side lobe data of the third target contained in the second point spread function.
  • in the embodiments of the present application, the main lobe data and side lobe data of the first point spread function, as well as the main lobe data and side lobe data of the second point spread function, can be removed from the source data obtained by radar processing; the source data remaining after the influence of the second target and the third target has been removed may contain the data of the first target.
  • the removal process can better realize data separation, and can obtain more accurate imaging results based on the data corresponding to the first target, thereby realizing various functions of the radar in different application scenarios.
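The removal of main lobe and side lobe data described above can be sketched in one dimension for brevity. The sinc-type response model, the bin positions and amplitudes, and the reuse of the true target parameters in place of a real estimation step are all simplifying assumptions of this sketch, not the method of the application itself.

```python
import numpy as np

M = 128
q = np.arange(M)

def psf(center, amp):
    """Sinc-type point response of one scatterer after compression
    (a simplified 1-D stand-in for the 2-D response function)."""
    sig = amp * np.exp(1j * 2 * np.pi * center * q / M)
    return np.fft.fft(sig) / M   # Dirichlet (periodic sinc) kernel

# Source spectrum: one strong target (second/third target) plus one
# weak target (first target); off-grid positions make side lobes visible.
strong = psf(40.3, 10.0)
weak = psf(90.6, 0.5)
source = strong + weak

# Detect the strong peak, reconstruct its full PSF (main AND side
# lobes), and subtract it from the source data.
k = int(np.abs(source).argmax())
# Hypothetical fine-estimation step: the true parameters are reused
# here to keep the sketch short; a real system would estimate them.
reconstructed = psf(40.3, 10.0)
residual = source - reconstructed

# The weak target now dominates the residual spectrum.
print(k, np.abs(residual).argmax())
```

Subtracting only the main lobe would leave the strong target's side lobes masking the weak target; reconstructing the full response is what makes the weak target recoverable.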
  • S302 may include the following steps:
  • S4021 is the same as the process of performing target detection according to the source data to obtain at least one second target in S3021, which is not repeated here.
  • S4022 is the same as the process of removing the data corresponding to the second target from the source data in the above-mentioned S3022, and obtaining the data corresponding to at least one first target, which is not repeated here.
  • S4022 may include the following steps:
  • the process of obtaining the first point spread function corresponding to the at least one second target according to the data corresponding to the at least one second target in S40221 and S30221 is the same, which is not repeated here.
  • the first point spread function includes main lobe data of the second target and side lobe data of the second target.
  • S40222 is the same as the process of removing the data corresponding to the first point spread function in the source data in S30222, which is not repeated here.
  • the source data remaining after the influence of the second target has been removed may contain the data of the first target.
  • the removal process can better realize data separation, and can obtain the imaging result of the stationary target based on the data corresponding to the first target, thereby realizing various functions of the radar in different application scenarios.
  • in a possible implementation, the data corresponding to the first target includes the range-velocity RV spectrum, and S303 may include the following steps:
  • the restoration may include restoring the data corresponding to the first target obtained after certain processing to the data before processing, for example, restoring the data corresponding to the first target to an intermediate frequency signal corresponding to the first target.
  • the data recovery method may include a two-dimensional inverse fast Fourier transform (IFFT).
  • the source data includes data corresponding to the first target.
  • the splicing can be understood as splicing a plurality of sub-data into high-resolution data according to certain conditions.
  • since the intermediate frequency signal has been divided into a plurality of azimuth sub-blocks along the slow time dimension, the splicing here can be performed along the slow time dimension on the multiple processed azimuth sub-blocks (which can be understood as the data corresponding to the first target) to obtain the spliced data.
  • for example, if the azimuth sub-blocks are divided frame by frame along the slow time dimension, then in the splicing process they can be spliced frame by frame along the slow time dimension.
  • the multiple azimuth sub-blocks obtained along the slow time dimension may include overlapping parts, and the splicing of the multiple azimuth sub-blocks is used to obtain a high-resolution image during imaging.
  • the length to be spliced when splicing the data corresponding to the first target needs to meet the azimuth resolution requirement in the imaging process.
  • the azimuth resolution can be the resolution of the azimuth dimension, which is the minimum azimuth distance at which two targets can be identified.
  • the spliced data includes data corresponding to the first target
  • the imaging method in S3032 is the same as the process in S303 of performing imaging processing on the data corresponding to at least one first target to obtain the imaging result of the at least one first target, and will not be repeated here.
  • the stitching processing can be used to obtain an image with higher resolution during imaging, so that a more accurate imaging result of the first target can be obtained.
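The splicing of overlapping azimuth sub-blocks along slow time can be sketched as follows. The block length, hop size, and the use of a simple index array in place of real pulse data are illustrative assumptions; the point is that overlapping samples are kept only once so the spliced record is continuous.

```python
import numpy as np

# Hypothetical recovered IF data: a slow-time record of 1000 pulses,
# previously divided into sub-blocks of 300 pulses that overlap their
# neighbours by 100 pulses (hop of 200 pulses between block starts).
full = np.arange(1000.0)          # stand-in for the pulses (slow time)
block_len, hop = 300, 200         # overlap = block_len - hop = 100

blocks = [full[s:s + block_len]
          for s in range(0, len(full) - block_len + 1, hop)]

def splice(blocks, hop):
    """Concatenate overlapping sub-blocks along slow time, keeping the
    overlapping samples only once."""
    out = [blocks[0]]
    for b in blocks[1:]:
        out.append(b[len(b) - hop:])  # drop the part already covered
    return np.concatenate(out)

spliced = splice(blocks, hop)
print(len(spliced))
```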
  • S3032 may include: performing synthetic aperture radar imaging processing on the stitched data.
  • the length of the splicing can meet the azimuth resolution requirement of the synthetic aperture radar imaging processing.
  • for example, the splice length T needs to satisfy T ≥ Va/(Ka·ρa), where Va is the speed of the vehicle platform, Ka is the Doppler modulation frequency, and ρa is the azimuth resolution requirement.
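Assuming the splice-length requirement takes the form T ≥ Va/(Ka·ρa), with Ka = 2Va²/(λR) for a broadside geometry at reference range R (both reconstructed from the stated definitions rather than formulas quoted from the application), a minimal numeric check with hypothetical 77 GHz parameters:

```python
# All parameter values below are illustrative assumptions.
c = 3e8
lam = c / 77e9          # wavelength, ~3.9 mm
Va = 20.0               # vehicle platform speed, m/s
R = 50.0                # reference range, m
rho_a = 0.5             # required azimuth resolution, m

Ka = 2 * Va**2 / (lam * R)      # Doppler modulation frequency, Hz/s
T = Va / (Ka * rho_a)           # minimum splice length, s

# Equivalent classic SAR form: rho_a = lam * R / (2 * Va * T)
T_alt = lam * R / (2 * Va * rho_a)
print(T, T_alt)
```

Both expressions give the same splice length (on the order of 10 ms for these assumed values), confirming the two forms are consistent.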
  • the process of performing synthetic aperture radar (or SAR) imaging processing on the stitched data may include the following steps:
  • for example, the squint data in the spliced data is processed into equivalent broadside (front side-looking) data. It can be understood that the method for processing the spliced data may also include other contents, which is not limited in this embodiment of the present application.
  • the Doppler parameter estimation may include estimation of Doppler center and Doppler modulation frequency, both of which may be estimated according to echo data. Radar motion error estimation and subsequent motion compensation can be performed according to the estimated results of Doppler parameters.
  • Doppler center compensation and range walk correction can make squint data equivalent to broadside data for imaging processing; among them, range walk correction can be used to eliminate the linear range walk component introduced by squint, and Doppler center compensation can compensate the non-zero Doppler center caused by squint to zero.
  • envelope compensation and phase compensation can be performed on the processed data for all range cells by using the trajectory error at the reference range, so as to complete the first-order motion compensation and obtain first-order compensated data.
  • perform azimuth FFT processing on the first-order compensated data; multiply the azimuth-FFT-processed data by the phase function corresponding to the frequency scaling factor to complete frequency scaling, and perform range FFT processing on the frequency-scaled data
  • perform residual video phase (RVP) correction and range IFFT processing; multiply the range-IFFT-processed data by the compensation factor to complete inverse frequency scaling
  • perform range cell migration (RCM) correction, second range compression (SRC), range FFT processing, etc.
  • the azimuth direction represents the moving direction of the radar (or vehicle)
  • the distance direction represents the direction perpendicular to the azimuth direction.
  • the frequency scaling factor can be calculated directly from the system parameters, and after the calculation, frequency scaling can be completed by multiplying the data by the phase function determined from it. It can be understood that frequency scaling is completed by multiplying the azimuth-FFT-processed data by that phase function.
  • the range unit of the radar corresponds to the resolution distance of the radar.
  • the distance unit refers to the sampling unit of the distance dimension;
  • the reference distance unit refers to the distance unit corresponding to the shortest slant distance from the center of the swath (irradiated area).
  • the azimuth phase error in the compensated data is corrected according to the Doppler modulation frequency estimate, the corrected data is multiplied by the azimuth matched filter function, and the multiplied data undergoes azimuth IFFT processing to complete azimuth compression; a two-dimensional SAR image containing stationary targets can then be obtained.
  • the obtained two-dimensional SAR image in the slant range plane is converted into the SAR image of the ground range plane, and the imaging result of the first target is obtained.
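The azimuth-compression step described above can be sketched as matched filtering with the Doppler chirp. This is a simplified, hedged illustration only: motion compensation, frequency scaling and range processing are omitted, and all parameter values (PRF, Doppler rate, sample count) are hypothetical.

```python
import numpy as np

def azimuth_compress(data, ka, prf):
    """Compress the azimuth (slow-time) dimension of range-processed
    SAR data by correlating each range line with the Doppler chirp
    exp(j*pi*Ka*t^2). data: 2-D array (range bins x azimuth samples)."""
    n = data.shape[1]
    t = (np.arange(n) - n / 2) / prf                  # slow-time axis (s)
    ref = np.exp(1j * np.pi * ka * t ** 2)            # replica of the target chirp
    # circular cross-correlation via FFT; the peak marks the target azimuth
    return np.fft.ifft(np.fft.fft(data, axis=1) * np.conj(np.fft.fft(ref)), axis=1)

# A single stationary point target produces a Doppler chirp along azimuth;
# after compression its energy focuses into a narrow peak.
prf, ka, n = 1000.0, -800.0, 1024
t = (np.arange(n) - n / 2) / prf
echo = np.exp(1j * np.pi * ka * t ** 2)[np.newaxis, :]  # one range bin
img = np.abs(azimuth_compress(echo, ka, prf))
print(img.max() > 10 * img.mean())
```

The compression gain here is roughly the chirp's time-bandwidth product, which is why the focused peak stands far above the sidelobe floor.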
  • SAR imaging can be applied to radar-based moving scenarios, breaking through the restriction of radar imaging to stationary scenes, and a more accurate imaging result of the first target can be obtained based on SAR imaging.
  • an imaging result of at least one first target, at least one second target and/or at least one third target is sent to the target device.
  • the target device may determine an automatic driving strategy and/or update targets in the high-precision map based on the imaging result of the first target; alternatively, the target device may perform processing such as target clustering or target tracking on the second target to realize functions such as vehicle-mounted adaptive cruise; the target device may perform processing such as guardrail detection or road edge detection on the third target to realize functions such as lane keeping assist.
  • FIG. 4 is an interaction diagram of a target detection method provided by an embodiment of the present application, as shown in FIG. 4.
  • the target device receives source data from the radar.
  • the target device may include a device on a car, such as an ECU or an MDC, or the target device may also include a server configured outside the car, such as an independent computing device.
  • S402. Determine an automatic driving strategy according to the imaging result of at least one first target; and/or, update the lane line in the high-precision map according to the imaging result of at least one first target.
  • the autonomous driving strategy is determined according to the imaging results of the first target.
  • the autonomous driving strategy may include directing the manner in which the autonomous vehicle operates.
  • autonomous driving strategies may include: guiding the autonomous vehicle to turn, change lanes, change gears, yield to other vehicles or pedestrians, and other autonomous driving strategies.
  • when the autonomous vehicle detects in the current scene that the imaging result of the first target shows an image of a turn sign while driving along the lane line, the autonomous vehicle may select an appropriate lane to drive in based on the imaging result of the lane lines and the autonomous driving route. For example, when the autonomous driving route indicates going straight and the current lane shows an image of a left turn ahead, the autonomous vehicle can prepare to change lanes and continue driving straight; when the autonomous driving route indicates a left turn and the current lane shows an image of a left turn ahead, the autonomous vehicle can continue driving within the lines of the current lane.
  • the lane line in the high-precision map is updated according to the imaging result of the first target.
  • the high-precision map can not only describe the road but also reflect the conditions of each road, more realistically reflecting the actual state of the road during driving. The high-precision map can serve as an important reference for determining the automatic driving strategy during autonomous driving.
  • the position of the lane line in the imaging result of the first target in the high-precision map can be determined, and the lane line of the position can be compared with the position information in the high-precision map.
  • the lane line obtained in the embodiment of the present application can be used to update the lane line of the position in the high-precision map.
  • the imaging result of the first target can be used to better assist automatic driving and obtain a more accurate high-precision map.
  • FIG. 5 is a schematic flowchart of another target detection method provided by an embodiment of the present application. As shown in FIG. 5 , the target detection process may include the following steps:
  • the speed threshold can be used to screen out moving targets, which can further be used for target tracking and target detection; the amplitude (or signal strength) threshold is used to screen out strong stationary targets, which can be used for guardrail detection.
  • angle estimation may also be performed on the target to obtain angle information corresponding to the target.
  • data separation can be achieved through the speed threshold and the signal strength threshold, and the different data can be used to realize more radar functions; the restriction of radar-based SAR imaging to stationary scenes is relaxed, SAR imaging is applied in autonomous driving scenarios, and autonomous driving is then better assisted based on the SAR imaging results.
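The speed/amplitude data separation described above can be sketched on a toy range-velocity (RV) magnitude map. The thresholds, array shape and values here are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def separate_targets(rv_map, velocities, v_thresh, a_thresh):
    """Split an RV magnitude map into three boolean masks:
    moving targets (|v| >= v_thresh), strong stationary targets
    (|v| < v_thresh and amplitude >= a_thresh), and the remaining
    weak stationary returns kept for SAR imaging."""
    moving = np.broadcast_to(np.abs(velocities) >= v_thresh, rv_map.shape)
    strong_static = ~moving & (rv_map >= a_thresh)
    weak_static = ~moving & ~strong_static
    return moving, strong_static, weak_static

# Toy map: 4 range bins x 5 velocity bins
v = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])   # velocity axis (m/s)
rv = np.zeros((4, 5))
rv[1, 4] = 50.0   # fast mover -> moving target
rv[2, 2] = 40.0   # strong return at zero velocity -> e.g. a guardrail
rv[3, 2] = 3.0    # weak return at zero velocity -> e.g. a lane line
moving, strong, weak = separate_targets(rv, v, v_thresh=4.0, a_thresh=20.0)
print(moving[1, 4], strong[2, 2], weak[3, 2])
```

The `weak_static` mask corresponds to the data kept for imaging the first target, after the second-target (moving) and third-target (strong stationary) cells have been set aside.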
  • FIG. 6 is a schematic flowchart of another target detection method provided by an embodiment of the present application. As shown in FIG. 6 , the target detection process may include the following steps:
  • S601-S603 are the same as the steps shown in the above-mentioned S501-S503, and are not repeated here.
  • S605. Determine whether to filter out strong stationary targets. When it is determined to filter out the strong stationary targets, the step shown in S606 may be performed; when it is determined not to filter them out, the step shown in S610 or S613 may be performed.
  • the method for restoring the original data is two-dimensional FFT processing.
  • S608-S609 are the same as the steps shown in the above-mentioned S506-S507, and are not repeated here.
  • data separation can be achieved through the speed threshold and the signal strength threshold, and the different data can be used to realize more radar functions; the restriction of radar-based SAR imaging to stationary scenes is relaxed, SAR imaging is applied in autonomous driving scenarios, and autonomous driving is then better assisted based on the SAR imaging results.
  • FIG. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the terminal 70 includes a target detection apparatus 80 .
  • the terminal may be a terminal device such as a vehicle or a robot. It can be understood that other components or other devices included in the terminal may be set according to actual application scenarios, which are not limited in this embodiment of the present application.
  • the above-mentioned terminal may execute the method described in the above-mentioned embodiment through the target detection device 80, or the above-mentioned terminal may execute the method described in the above-mentioned embodiment through the target detection device 80 with the assistance of a radar. It can be understood that the manner in which the terminal controls the target detection apparatus 80 or the radar may be set according to an actual application scenario, and is not specifically limited in this embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a target detection apparatus 80 provided by an embodiment of the present application, as shown in FIG. 8,
  • the target detection apparatus 80 may be used in communication equipment, circuits, hardware components or chips, and the target detection apparatus includes: a processing unit 801 and a communication unit 802 .
  • the processing unit 801 is used for supporting the target detection apparatus to perform the steps of information processing;
  • the communication unit 802 is used for supporting the target detection apparatus to perform the steps of data transmission or reception.
  • the processing unit 801 is configured to acquire source data obtained based on radar processing; the processing unit 801 is further configured to obtain data corresponding to at least one first target according to the source data, wherein the at least one first target is a target whose speed is less than the first threshold; the processing unit 801 is further configured to perform imaging processing on the data corresponding to the at least one first target to obtain the imaging result of the at least one first target.
  • when the signal strength of the first target is less than the second threshold, the processing unit 801 is specifically configured to: perform target detection according to the source data to obtain at least one second target and at least one third target;
  • the second target is a target whose speed is greater than or equal to the first threshold
  • at least one third target is a target whose signal strength is greater than or equal to the second threshold; the data corresponding to the second target and the data corresponding to the third target are removed from the source data to obtain the data corresponding to the at least one first target.
  • the processing unit 801 is specifically configured to: obtain a first point spread function corresponding to the at least one second target according to the data corresponding to the at least one second target; obtain a second point spread function corresponding to the at least one third target according to the data corresponding to the at least one third target; and remove the data corresponding to the first point spread function and the data corresponding to the second point spread function from the source data; wherein the first point spread function includes the main lobe data of the second target and the side lobe data of the second target, and the second point spread function includes the main lobe data of the third target and the side lobe data of the third target.
  • the processing unit 801 is specifically configured to: perform target detection according to the source data to obtain at least one second target; wherein, the at least one second target is a target whose speed is greater than or equal to the first threshold; in the source data The data corresponding to the second target is removed to obtain data corresponding to at least one first target.
  • the processing unit 801 is specifically configured to: obtain a first point spread function corresponding to the at least one second target according to the data corresponding to the at least one second target, where the first point spread function includes the main lobe data of the second target and the side lobe data of the second target; and remove the data corresponding to the first point spread function from the source data.
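A minimal one-dimensional analogue of this point-spread-function removal is sketched below. A real system would operate on the 2-D range-velocity spectrum, typically with windowing; the unit-amplitude, on-grid tone model here is a simplifying assumption for illustration:

```python
import numpy as np

def remove_point_target(spectrum, peak_idx):
    """Remove one detected target (main lobe plus side lobes) from a
    1-D complex spectrum by subtracting its point spread function:
    the spectrum of a unit-amplitude tone at the same bin, scaled by
    the measured complex peak value."""
    n = spectrum.size
    tone = np.exp(2j * np.pi * peak_idx * np.arange(n) / n)
    psf = np.fft.fft(tone) / n        # unit-peak PSF centred at peak_idx
    return spectrum - spectrum[peak_idx] * psf

# A tone that falls exactly on bin 7 is removed (almost) completely;
# an off-grid tone would leave a small residual instead.
n = 64
sig = 3.0 * np.exp(2j * np.pi * 7 * np.arange(n) / n)
spec = np.fft.fft(sig)
residual = remove_point_target(spec, 7)
print(np.abs(residual).max() < 1e-9)
```

Subtracting the scaled PSF, rather than just zeroing the peak bin, is what removes the side lobes along with the main lobe, so the weaker first-target returns underneath are not masked.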
  • the data corresponding to the first target includes the distance velocity RV spectrum
  • the processing unit 801 is specifically used for: recovering the data corresponding to the at least one first target and splicing it along the slow time dimension to obtain spliced data; and performing imaging processing on the spliced data to obtain the imaging result of the at least one first target.
  • the processing unit 801 is specifically configured to: perform synthetic aperture radar imaging processing on the stitched data.
  • the processing unit 801 is specifically configured to: use Doppler parameter estimation, Doppler center compensation and/or range walk correction to process the spliced data to obtain processed data; perform first-order motion compensation and second-order motion compensation on the processed data to obtain compensated data; and perform azimuth compression on the compensated data to obtain the imaging result of the at least one first target.
  • the communication unit 802 is specifically configured to: send the imaging result of at least one first target, at least one second target and/or at least one third target to the target device.
  • the communication unit 802 is specifically configured to: receive source data from the radar.
  • the at least one first target includes at least one lane line
  • the processing unit 801 is specifically configured to: determine the automatic driving strategy according to the imaging result of the at least one first target; and/or update the lane lines in the high-precision map according to the imaging result of the at least one first target.
  • the target detection apparatus may further include: a storage unit 803 .
  • the processing unit 801 and the storage unit 803 are connected through a communication line.
  • the storage unit 803 may include one or more memories, and the memories may be devices in one or more devices or circuits for storing programs or data.
  • the storage unit 803 may exist independently, and is connected to the processing unit 801 of the target detection device through a communication line.
  • the storage unit 803 may also be integrated with the processing unit 801 .
  • the communication unit 802 may be an input or output interface, a pin or a circuit, or the like.
  • the storage unit 803 may store computer-executed instructions of the radar or target device method, so that the processing unit 801 executes the radar or target device method in the above embodiments.
  • the storage unit 803 may be a register, a cache or a RAM, etc., and the storage unit 803 may be integrated with the processing unit 801 .
  • the storage unit 803 may be a ROM or other types of static storage devices that may store static information and instructions, and the storage unit 803 may be independent of the processing unit 801 .
  • An embodiment of the present application provides a target detection device, which includes one or more modules for implementing the methods in the steps included in FIG. 3 to FIG. 6 above; the one or more modules may correspond to the steps of the methods in the steps included in FIG. 3 to FIG. 6.
  • a module that controls or processes the action of the target detection device may be referred to as a processing module.
  • FIG. 9 is a schematic diagram of the hardware structure of a control device provided by an embodiment of the application.
  • the control device includes a processor 901, a communication line 904 and at least one communication interface (in FIG. 9, the communication interface 903 is used as an example for description).
  • the processor 901 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
  • Communication lines 904 may include circuitry to communicate information between the components described above.
  • the communication interface 903, which may use any device such as a transceiver, is used to communicate with other devices or communication networks, such as Ethernet, wireless local area networks (WLAN) and the like.
  • control device may also include a memory 902 .
  • Memory 902 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, or a random access memory (RAM) or other type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, without limitation.
  • the memory may exist independently and be connected to the processor through communication line 904 .
  • the memory can also be integrated with the processor.
  • the memory 902 is used for storing computer-executed instructions for executing the solution of the present application, and the execution is controlled by the processor 901 .
  • the processor 901 is configured to execute the computer-executed instructions stored in the memory 902, so as to implement the target detection method provided by the embodiments of the present application.
  • the computer-executed instructions in the embodiments of the present application may also be referred to as application code, which is not specifically limited in the embodiments of the present application.
  • the processor 901 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 9 .
  • control device may include multiple processors, for example, the processor 901 and the processor 905 in FIG. 9 .
  • processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the processor 901 is configured to read the program in the memory 902 and execute the method flow in S301-S303 shown in FIG. 3, the method flow in S401-S402 shown in FIG. 4, the method flow in S501-S507 shown in FIG. 5, or the method flow in S601-S613 shown in FIG. 6.
  • FIG. 10 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • the chip 100 includes one or more (including two) processors 1010 and a communication interface 1030.
  • memory 1040 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set of them.
  • the memory 1040 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1010 .
  • a portion of memory 1040 may also include non-volatile random access memory (NVRAM).
  • the processor 1010, the communication interface 1030, and the memory 1040 are coupled together through the bus system 1020.
  • the bus system 1020 may also include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus.
  • various buses are designated as bus system 1020 in FIG. 10 .
  • the methods described in the foregoing embodiments of the present application may be applied to the processor 1010 or implemented by the processor 1010 .
  • the processor 1010 may be an integrated circuit chip with signal processing capability.
  • each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 1010 or an instruction in the form of software.
  • the above-mentioned processor 1010 may be a general-purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate, a transistor logic device or a discrete hardware component; the processor 1010 can implement or execute the methods, steps and logic block diagrams disclosed in the embodiments of the present invention.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the field, such as a random access memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable read-only memory (EEPROM).
  • the storage medium is located in the memory 1040, and the processor 1010 reads the information in the memory 1040, and completes the steps of the above method in combination with its hardware.
  • the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product.
  • the computer program product may be written in the memory in advance, or downloaded and installed in the memory in the form of software.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media.
  • available media may include magnetic media (e.g., floppy disks, hard disks or tapes), optical media (e.g., digital versatile disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), etc.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • Computer-readable media can include both computer storage media and communication media, and also include any medium that can transfer a computer program from one place to another.
  • the storage medium can be any target medium that can be accessed by a computer.
  • the computer-readable medium may include compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM or other optical disc storage; the computer-readable medium may include magnetic disk storage or other magnetic disk storage devices.
  • any connection line is properly termed a computer-readable medium.
  • Disk and disc, as used here, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.


Abstract

Embodiments of this application provide a target detection method and apparatus, relating to the fields of intelligent driving and autonomous driving. The method includes: acquiring source data obtained based on radar processing; obtaining, according to the source data, data corresponding to at least one first target, where the at least one first target is a target whose speed is less than a first threshold; and performing imaging processing on the data corresponding to the at least one first target to obtain an imaging result of the at least one first target. In this way, the imaging function of the radar can be realized in autonomous driving scenarios, breaking through the restriction of radar-based imaging to stationary scenes; more radar functions can be realized in different scenarios, and the radar can better assist autonomous or intelligent driving.

Description

Target Detection Method and Apparatus — Technical Field
This application relates to the field of intelligent driving or autonomous driving, and in particular to a target detection method and apparatus.
Background
The development of information technology has brought much convenience to people's lives, and autonomous driving technology, led by artificial intelligence and the automotive industry, has gradually become a focus of the industry. Autonomous driving technology relies on the cooperation of computer vision, radar, monitoring devices, global positioning systems and the like, so that motor vehicles can drive automatically without active human operation. Autonomous vehicles use various computing systems to help transport passengers from one location to another. Some autonomous vehicles may require some initial or continuous input from an operator (such as a pilot, driver, or passenger). An autonomous vehicle permits the operator to switch from a manual operation mode to an autonomous driving mode or a mode in between. Since autonomous driving technology does not require a human to drive the motor vehicle, it can theoretically avoid human driving errors, reduce the occurrence of traffic accidents, and improve highway transportation efficiency; autonomous driving technology has therefore received increasing attention. Target detection is an important research topic in autonomous driving, and the radar in an autonomous vehicle can be used for target detection and tracking.
Typically, when radar is used for target detection and tracking, the data acquired by the radar can be analyzed to track moving targets and detect obstacles. The autonomous vehicle then implements functions such as adaptive cruise control (ACC) based on the tracking of moving targets, and functions such as lane keeping assist (LKA) based on the detection of obstacles.
However, in the above implementation, the functions of the radar are limited and cannot assist autonomous driving well.
Summary
Embodiments of this application provide a target detection method and apparatus, relating to the fields of intelligent driving and autonomous driving, which can use radar to implement more functions in different application scenarios and thus better assist driving.
In a first aspect, an embodiment of this application provides a target detection method, including: acquiring source data obtained based on radar processing; obtaining, according to the source data, data corresponding to at least one first target, where the at least one first target is a target whose speed is less than a first threshold; and performing imaging processing on the data corresponding to the at least one first target to obtain an imaging result of the at least one first target. In this way, the imaging function of the radar can be realized in autonomous driving scenarios, breaking through the restriction of radar-based imaging to stationary scenes; more radar functions can be realized in different scenarios, and the radar can better assist autonomous or intelligent driving. Alternatively, the at least one first target may also be a target whose speed is less than or equal to the first threshold.
In a possible implementation, the signal strength of the first target is less than a second threshold, and obtaining the data corresponding to the at least one first target according to the source data includes: performing target detection according to the source data to obtain at least one second target and at least one third target, where the at least one second target is a target whose speed is greater than or equal to the first threshold, and the at least one third target is a target whose signal strength is greater than or equal to the second threshold; and removing the data corresponding to the second target and the data corresponding to the third target from the source data to obtain the data corresponding to the at least one first target. In this way, in moving scenarios such as autonomous or intelligent driving, the data acquired by the radar can be separated and utilized through the first threshold and the second threshold; the restriction of radar-based imaging to stationary scenes is overcome, more radar functions can be realized in different scenarios, and autonomous or intelligent driving is better assisted.
In a possible implementation, removing the data corresponding to the second target and the data corresponding to the third target from the source data includes: obtaining, according to the data corresponding to the at least one second target, a first point spread function corresponding to the at least one second target; obtaining, according to the data corresponding to the at least one third target, a second point spread function corresponding to the at least one third target; and removing the data corresponding to the first point spread function and the data corresponding to the second point spread function from the source data, where the first point spread function includes main lobe data and side lobe data of the second target, and the second point spread function includes main lobe data and side lobe data of the third target. In this way, the removal process can better achieve data separation, a more accurate imaging result can be obtained based on the data corresponding to the first target, and multiple radar functions can thus be realized in different application scenarios.
In a possible implementation, obtaining the data corresponding to the at least one first target according to the source data includes: performing target detection according to the source data to obtain at least one second target, where the at least one second target is a target whose speed is greater than or equal to the first threshold; and removing the data corresponding to the second target from the source data to obtain the data corresponding to the at least one first target. In this way, in moving scenarios such as autonomous or intelligent driving, the data acquired by the radar can be separated and utilized through the first threshold; the restriction of radar-based imaging to stationary scenes is overcome, more radar functions can be realized in different scenarios, and autonomous or intelligent driving is better assisted.
In a possible implementation, removing the data corresponding to the at least one second target from the source data includes: obtaining, according to the data corresponding to the at least one second target, a first point spread function corresponding to the at least one second target, where the first point spread function includes main lobe data and side lobe data of the second target; and removing the data corresponding to the first point spread function from the source data. In this way, the removal process can better achieve data separation, an imaging result of stationary targets can be obtained based on the data corresponding to the first target, and multiple radar functions can thus be realized in different application scenarios.
In a possible implementation, the data corresponding to the first target includes a range-velocity (RV) spectrum, and performing imaging processing on the data corresponding to the at least one first target to obtain the imaging result of the at least one first target includes: recovering the data corresponding to the at least one first target and splicing it along the slow-time dimension to obtain spliced data; and performing imaging processing on the spliced data to obtain the imaging result of the at least one first target. In this way, since the data corresponding to the at least one first target is spliced, and the splicing can be used to obtain a higher-resolution image during imaging, a more accurate imaging result of the first target can be obtained.
In a possible implementation, performing imaging processing on the spliced data includes: performing synthetic aperture radar imaging processing on the spliced data. In this way, a more accurate imaging result of the first target is obtained based on synthetic aperture radar imaging processing.
In a possible implementation, performing imaging processing on the spliced data to obtain the imaging result of the at least one first target includes: processing the spliced data using Doppler parameter estimation, Doppler center compensation and/or range walk correction to obtain processed data; performing first-order motion compensation and second-order motion compensation on the processed data to obtain compensated data; and performing azimuth compression on the compensated data to obtain the imaging result of the at least one first target. In this way, SAR imaging can be applied to radar-based moving scenarios, breaking through the restriction of radar imaging to stationary scenes, and a more accurate imaging result of the first target can be obtained based on SAR imaging.
In a possible implementation, the method is applied to a radar, and the method further includes: sending the imaging result of the at least one first target, the at least one second target and/or the at least one third target to a target device. In this way, different functions can be realized in different application scenarios based on the different targets sent by the radar, enriching the usage scenarios of the data in the radar.
In a possible implementation, the method is applied to a target device, and acquiring the source data obtained based on radar processing includes: receiving the source data from the radar. In this way, different functions can be realized in different application scenarios based on the received radar source data, enriching the usage scenarios of the data in the radar.
In a possible implementation, the at least one first target includes at least one lane line, and the method further includes: determining an autonomous driving strategy according to the imaging result of the at least one first target; and/or updating the lane lines in the high-precision map according to the imaging result of the at least one first target. In this way, the imaging result of the first target can be used to better assist autonomous driving and to obtain a more accurate high-precision map.
In a second aspect, an embodiment of this application provides a target detection apparatus, applied to a radar, including: a processing unit configured to acquire source data obtained based on radar processing; the processing unit being further configured to obtain, according to the source data, data corresponding to at least one first target, where the at least one first target is a target whose speed is less than a first threshold; and the processing unit being further configured to perform imaging processing on the data corresponding to the at least one first target to obtain an imaging result of the at least one first target.
In a possible implementation, the signal strength of the first target is less than a second threshold, and the processing unit is specifically configured to: perform target detection according to the source data to obtain at least one second target and at least one third target, where the at least one second target is a target whose speed is greater than or equal to the first threshold, and the at least one third target is a target whose signal strength is greater than or equal to the second threshold; and remove the data corresponding to the second target and the data corresponding to the third target from the source data to obtain the data corresponding to the at least one first target.
In a possible implementation, the processing unit is specifically configured to: obtain, according to the data corresponding to the at least one second target, a first point spread function corresponding to the at least one second target; obtain, according to the data corresponding to the at least one third target, a second point spread function corresponding to the at least one third target; and remove the data corresponding to the first point spread function and the data corresponding to the second point spread function from the source data, where the first point spread function includes main lobe data and side lobe data of the second target, and the second point spread function includes main lobe data and side lobe data of the third target.
In a possible implementation, the processing unit is specifically configured to: perform target detection according to the source data to obtain at least one second target, where the at least one second target is a target whose speed is greater than or equal to the first threshold; and remove the data corresponding to the second target from the source data to obtain the data corresponding to the at least one first target.
In a possible implementation, the processing unit is specifically configured to: obtain, according to the data corresponding to the at least one second target, a first point spread function corresponding to the at least one second target, where the first point spread function includes main lobe data and side lobe data of the second target; and remove the data corresponding to the first point spread function from the source data.
In a possible implementation, the data corresponding to the first target includes a range-velocity (RV) spectrum, and the processing unit is specifically configured to: recover the data corresponding to the at least one first target and splice it along the slow-time dimension to obtain spliced data; and perform imaging processing on the spliced data to obtain the imaging result of the at least one first target.
In a possible implementation, the processing unit is specifically configured to: perform synthetic aperture radar imaging processing on the spliced data.
In a possible implementation, the processing unit is specifically configured to: process the spliced data using Doppler parameter estimation, Doppler center compensation and/or range walk correction to obtain processed data; perform first-order motion compensation and second-order motion compensation on the processed data to obtain compensated data; and perform azimuth compression on the compensated data to obtain the imaging result of the at least one first target.
In a possible implementation, the communication unit is specifically configured to: send the imaging result of the at least one first target, the at least one second target and/or the at least one third target to the target device.
In a possible implementation, the communication unit is specifically configured to: receive the source data from the radar.
In a possible implementation, the at least one first target includes at least one lane line, and the processing unit is specifically configured to: determine an autonomous driving strategy according to the imaging result of the at least one first target; and/or update the lane lines in the high-precision map according to the imaging result of the at least one first target.
In a third aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program or instructions which, when run on a computer, cause the computer to perform the target detection method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a computer program product including instructions which, when run on a computer, cause the computer to perform the target detection method described in any implementation of the first aspect.
In a fifth aspect, an embodiment of this application provides a terminal including the target detection apparatus described in the second aspect and its various possible implementations.
In a possible implementation, the terminal may include a vehicle, a robot or the like, and the vehicle may implement the target detection method described in the embodiments of this application through the target detection apparatus. The target detection apparatus described in the above embodiments includes, but is not limited to: an on-board controller, an on-board module, an on-board component, an on-board chip, an on-board unit, an on-board radar, or other sensors.
In a sixth aspect, an embodiment of this application provides a target detection apparatus including a processor and a storage medium, the storage medium storing instructions which, when run by the processor, implement the target detection method described in any implementation of the first aspect.
In a seventh aspect, this application provides a chip or chip system including at least one processor and a communication interface, the communication interface and the at least one processor being interconnected through a line, and the at least one processor being configured to run a computer program or instructions to perform the target detection method described in any implementation of the first aspect. The communication interface in the chip may be an input/output interface, a pin, a circuit, or the like.
In a possible implementation, the chip or chip system described above further includes at least one memory storing instructions. The memory may be a storage unit inside the chip, such as a register or a cache, or may be a storage unit of the chip (for example, a read-only memory or a random access memory).
It should be understood that the second to seventh aspects of this application correspond to the technical solution of the first aspect; the beneficial effects obtained by each aspect and its corresponding feasible implementations are similar and will not be repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an autonomous driving scenario provided by an embodiment of this application;
FIG. 2 is a schematic diagram of a scenario of target detection using an independent computing device provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of a target detection method provided by an embodiment of this application;
FIG. 4 is an interaction diagram of a target detection method provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of another target detection method provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of yet another target detection method provided by an embodiment of this application;
FIG. 7 is a schematic structural diagram of a terminal provided by an embodiment of this application;
FIG. 8 is a schematic structural diagram of a target detection apparatus provided by an embodiment of this application;
FIG. 9 is a schematic diagram of the hardware structure of a target detection device provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of a chip provided by an embodiment of this application.
Detailed Description
To clearly describe the technical solutions of the embodiments of this application, words such as "first" and "second" are used in the embodiments of this application to distinguish between identical or similar items with substantially the same functions and effects. For example, a first value and a second value are merely intended to distinguish different values, without limiting their order. Those skilled in the art will understand that words such as "first" and "second" do not limit quantity or execution order, and that items labeled "first" and "second" are not necessarily different.
It should be noted that in this application, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" in this application should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
In this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or multiple items. For example, at least one of a, b, or c may indicate: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be single or multiple.
Radar is an indispensable sensor in the field of intelligent or autonomous driving. Radar can support two application scenarios: moving scenarios and stationary scenarios. In a moving scenario, such as an autonomous driving scenario, radar can be used to detect moving targets such as moving vehicles and pedestrians, as well as prominent stationary targets such as obstacles, road guardrails or road signs; in a stationary scenario, such as a parking lot, radar can be used to detect stationary targets such as parked vehicles or obstacles.
For example, in a moving scenario, based on radar detection of moving targets, the dechirped echo signal received by the radar can be processed by methods such as two-dimensional fast Fourier transform (FFT), constant false-alarm rate (CFAR) detection and spatial-dimension FFT to obtain the range, velocity and angle information of the target, thereby completing target detection in the moving scenario. The autonomous vehicle can then implement functions such as adaptive cruise control based on tracking targets such as vehicles or pedestrians.
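The CFAR step of the pipeline just described can be illustrated with a minimal one-dimensional cell-averaging CFAR detector; the guard/training cell counts and the scale factor below are illustrative assumptions, not the values used in the embodiments:

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=5.0):
    """1-D cell-averaging CFAR: a cell is declared a detection when its
    power exceeds `scale` times the mean power of the training cells on
    both sides, excluding guard cells around the cell under test."""
    n = power.size
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train: i - guard]
        right = power[i + guard + 1: i + guard + 1 + train]
        noise = np.concatenate([left, right]).mean()
        hits[i] = power[i] > scale * noise
    return hits

rng = np.random.default_rng(0)
p = rng.exponential(1.0, 200)   # noise-like power profile
p[100] = 60.0                   # injected strong target
hits = ca_cfar(p)
print(bool(hits[100]))
```

Because the threshold adapts to the locally estimated noise level, the false-alarm rate stays roughly constant across range cells, which is what the "constant false-alarm rate" name refers to.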
例如,在静止场景中,基于雷达的成像场景,可以利用合成孔径雷达(synthetic aperture radar,SAR)成像方法对雷达接收到的去调频后的回波信号进行参数估计、运动补偿和压缩等方法,得到SAR成像后的目标。例如,车辆可以基于对停车场中的车辆或障碍物等静止目标的成像,实现如自主代客泊车(autonomous valet parking,AVP)等功能。
然而,在运动场景中,在对运动目标进行目标检测中,为了避免杂波对目标检测结果的影响,通常会将静止目标或较弱的静止目标数据作为杂波数据滤除,因此在对目标检测的过程中,没有对雷达接收到的较弱的回波数据进行充分利用。
在静止场景中,由于SAR成像的成像原理为,用雷达上的小天线作为单个辐射单元,将此单元沿直线不断移动,并在不同位置上接收同一目标物体的回波信号并进行处理。上述小天线通过移动的方式可以合成一个等效的“大天线”,进而可以得到目标物体的较高分辨率的图像。可以理解的是,实现SAR成像的场景为雷达运动,目标静止。因此,通常的实现中,SAR成像不能应用于含有运动目标的场景中,其成像场景限制为静止场景,因此SAR成像技术在如自动驾驶场景等运动场景中的应用受限。
综上所述,基于运动场景中雷达进行目标检测时,无法对雷达接收到的较弱的回波数据进行充分利用,或者,基于雷达进行SAR成像时对于静止场景的限制,导致基于不同的应用场景,雷达可以实现的功能是有限的,无法很好的辅助智能驾驶以及自动驾驶。
有鉴于此,本申请实施例提供一种目标检测方法和装置,在自动驾驶或智能驾驶等运动场景中,获取基于雷达处理得到的源数据,并利用第一阈值对源数据进行不同程度的处理和利用,得到与其应用场景相适应的第一目标对应的数据,对该第一目标对应的数据进行成像处理,从而可以实现雷达在自动驾驶场景中的成像功能,突破了基于雷达的成像场景中对于静止场景的限制,可以基于不同的场景实现雷达更多的功能,进而可以利用雷达更好的辅助自动驾驶或智能驾驶。
为了更好的理解本申请实施例的方法,下面首先对本申请实施例适用的应用场景进行描述。
可能的实现方式中,本申请实施例提供的目标检测方法可以应用于自动驾驶场景中。例如,自动驾驶车辆上的雷达可以基于本申请实施例的目标检测方法,实现对于障碍物等目标的检测,并基于目标检测结果制定自动驾驶策略或更新高精地图中的元素等。示例性的,本申请实施例提供的目标检测方法可以应用在雷达的数字信号处理(digital signal processing,DSP)单元。
示例性的,本申请实施例提供的目标检测方法也可以应用于其他设备,其他设备可以包括:车辆上的电子控制单元(electronic control unit,ECU)、多域控制器(multi domain controller,MDC)或独立的计算设备如服务器等设备。示例性的,当其他设备为MDC时,可能的实现方式中,雷达可以将获取的数据进行初步处理(例如二维FFT处理等),并将该初步处理后的数据发送至MDC进行后续处理。
示例性的,图1为本申请实施例提供的一种自动驾驶的场景示意图。
如图1所示,自动驾驶车辆101和自动驾驶车辆102在不同的车道线内行驶。自动驾驶车辆101和自动驾驶车辆102可以根据车辆中的雷达探测周围的物体。例如,自动驾驶车辆101可以检测其车辆周围的自动驾驶车辆102、道路护栏103、路面标识104、车道线105和车道线106等其他目标。
示例性的,在自动驾驶场景中,自动驾驶车辆101可以基于雷达获取道路周围的物体的回波数据,自动驾驶车辆101中的雷达可以利用本申请实施例提供的目标检测方法,对接收到的回波数据进行处理,并利用速度阈值获取处理后的回波数据中运动目标对应的数据,进而从处理后的回波数据中去除该运动目标对应的数据,得到强静止目标对应的数据,对该强静止目标对应的数据进行成像处理,可以得到强静止目标对应的成像结果。如图1所示,自动驾驶车辆101中的雷达可以检测出路边护栏103等强静止目标。
示例性的,自动驾驶车辆101中的雷达也可以利用速度阈值获取处理后的回波数据中运动目标对应的数据,以及利用信号强度(或称幅度)阈值获取处理后的回波数据中强静止目标对应的数据,进而从处理后的回波数据中去除该运动目标对应的数据和该强静止目标对应的数据,得到弱静止目标对应的数据,对该弱静止目标对应的数据进行成像处理,可以得到弱静止目标对应的成像结果。如图1所示,自动驾驶车辆101中的雷达可以检测出路面标识104、车道线105和车道线106等弱静止目标。
后续自动驾驶车辆101可以根据检测到的目标和车道线数据等其他自动驾驶数据,规划自动驾驶路线,进而保证自动驾驶车辆101的正常驾驶。示例性的,图2为本申请实施例提供的一种利用独立的计算设备进行目标检测的场景示意图。
示例性的,如图2所示,以独立的计算设备为服务器为例,该场景中可以包含:自动驾驶车辆101、无线广域网(wide area network,WAN)202、通信网络203和服务器204。
其中,自动驾驶车辆101中可以包含一个或多个无线收发器等设备。自动驾驶车辆101中的无线收发器,能够与该场景中的无线WAN202交换数据并根据需要进行通信。示例性的,自动驾驶车辆101中的自动驾驶系统可以使用无线WAN202经由一个或多个通信网络203(如因特网),将自动驾驶车辆中雷达接收到的回波数据,或其他传感器接收到的其他数据传输到服务器204中进行处理。服务器204再将处理后的数据传输到自动驾驶车辆101的自动驾驶系统中,用于指导车辆的自动驾驶;或者,服务器204也可以再将处理后的数据传输到高精地图中,用于更新高精地图中的元素。其中,服务器204可以为一个或多个服务器。
可以理解的是,本申请实施例提供的目标检测方法的应用场景(如图1或图2),可以作为一种示例,并不作为本申请实施例的应用场景的限定。
可以理解的是,本申请实施例提供的一种目标检测方法不限于车载雷达应用场景,还可以应用到机载雷达或其他平台,本申请实施例中对此不做限定。
可以理解的是,本申请实施例提供的一种目标检测方法可以用于线性调频连续波(linear frequency modulated continuous wave,LFMCW)雷达,其雷达信号体制可以扩展到数字调制雷达,例如其信号体制可以为相位调制连续波(phase modulated continuous wave,PMCW)。
可以理解的是,本申请实施例提供的一种目标检测方法可以用于毫米波雷达。其中,毫米波是指波长在1mm-10mm之间的电磁波,毫米波对应的频率范围为30GHz-300GHz,在30GHz-300GHz的频率范围内,毫米波的特性可以包括:易于小型化实现,带宽大、波长短、雷达分辨率高以及穿透强,毫米波的特性使其适合应用于车载领域。毫米波雷达更加具有穿透烟、灰尘或雾的能力,使得毫米波雷达可以全天候工作。因此,毫米波雷达可以广泛应用于车辆中。此外,本申请实施例提供的一种目标检测方法也可以用于其他频段的雷达或其它传感器,例如超声波雷达、激光雷达等传感器。
可以理解的是,本申请实施例提供的一种目标检测方法可以用于车载雷达,也可以用于其他终端设备(terminal device),终端设备包括但不限于移动台(mobile station,MS)、移动终端(mobile terminal),例如,终端设备可以是移动电话(或称为“蜂窝”电话)、具有无线通信功能的计算机等,终端设备还可以是带无线收发功能的电脑、虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备、工业控制(industrial control)中的无线终端、无人驾驶(self driving)中的无线终端、远程医疗(remote medical)中的无线终端、智能电网(smart grid)或智能制造中的无线终端、运输安全(transportation safety)中的无线终端、无人机、智慧城市(smart city)中的无线终端、智慧家庭(smart home) 中的智能家居或其他无线终端等等。在不同的网络中终端可以叫做不同的名称,例如:用户设备,移动台,用户单元,站台,蜂窝电话,个人数字助理,无线调制解调器,无线通信设备,手持设备,膝上型电脑,无绳电话,无线本地环路台等。
下面对本申请实施例中所描述的词汇进行说明。可以理解,该说明是为更加清楚的解释本申请实施例,并不必然构成对本申请实施例的限定。
本申请实施例所描述的强静止目标可以为速度小于一定阈值,且回波信号强度大于一定值的静止目标。例如,该强静止目标可以为金属护栏、静止车辆或障碍物等静止目标。该强静止目标可以包括雷达接收到的回波强度(或雷达散射截面积、反射强度等)较大的目标:雷达发射信号,该信号遇到目标发生反射并产生回波信号,该回波信号被雷达接收后强度较大。其中,该回波强度的大小与目标的材质、目标表面的粗糙度和/或雷达发射信号的能量大小等因素相关。
本申请实施例所描述的弱静止目标可以为速度小于一定阈值,且回波信号强度小于一定值的静止目标。例如,该弱静止目标可以为车道线等静止目标。该弱静止目标可以包括,雷达接收到的回波强度较小的目标。
本申请实施例所描述的SAR成像可以包括,利用雷达和目标的相对运动形成大的虚拟孔径,从而突破天线孔径的限制,实现目标的高分辨率成像。其中,SAR成像技术应用于雷达运动、目标静止的场景中。
本申请实施例所描述的慢时间维(或称方位维)可以包括,沿着脉冲重复周期的维度。例如,雷达周期性的发送脉冲信号(或称脉冲)在多个脉冲进行处理时,慢时间可以用于标记不同脉冲之间的时间,可以把一个脉冲看做是慢时间的一次采样。
本申请实施例所描述的快时间维(或称距离维)可以包括,沿着一个脉冲采样的维度可以理解为快时间维,快时间维可以反映脉内时间。例如,雷达发送一次脉冲,并获取这一次脉冲对应的回波信号,上述脉冲采样的时间为快时间。其中,快时间可以反映距离。
本申请实施例所描述的点扩散函数(point spread function,PSF)(或称点扩展函数)可以包括,一个系统的脉冲响应,可以用来衡量成像的图像的分辨率。
本申请实施例所描述的点扩散函数的主瓣和旁瓣(或称为副瓣)可以为,信号压缩后形成的点扩散函数(例如sinc型函数)的主瓣和旁瓣,其中,主瓣指点扩散函数的最大值两侧第一过零点之间数据,副瓣指点扩散函数上除主瓣之外的其它数据。
本申请实施例所描述的高精地图(high definition map,HD Map)可以包括:面向机器的供自动驾驶汽车使用的地图数据。其可以更加精准的描绘道路交通信息元素,更加真实的反映出道路的实际情况。高精地图能够实现高精度的定位位置功能、道路级和车道级的规划能力、以及车道级的引导能力等能力。
下面以具体地实施例对本申请的技术方案以及本申请的技术方案如何解决上述技术问题进行详细说明。下面这几个具体的实施例可以独立实现,也可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。
示例性的,图3为本申请实施例提供的一种目标检测方法的流程示意图,如图3所示,该方法包括:
S301、获取基于雷达处理得到的源数据。
本申请实施例中,源数据可以包括对雷达接收到的回波信号进行处理得到的距离速度RV谱。其中,RV谱可以包括对回波信号进行处理得到的距离速度图。
示例性的,获取基于雷达处理得到的源数据的一种可能的方式,可以包括如下步骤:
S3011、获取雷达接收到的去调频后的回波数据。
示例性的,该回波信号产生的过程可以包括,雷达中的振荡器随时间产生线性调频信号,一部分线性调频信号经过雷达的发射天线发出去,一部分线性调频信号输入至雷达的混频器中作为本振信号,由雷达的发射天线发出的信号遇到目标物体会产生反射,雷达的接收天线接收目标物体反射回来的回波信号,该雷达的接收天线接收到的回波信号与本振信号在混频器中进行混频操作,经过混频操作处理后,得到去调频后的回波数据(或称中频信号)。混频后的信号通过低通滤波器、ADC(模数转换器)后转化为数字形式。
S3012、将去调频后的回波数据在慢时间维分成方位子块。
示例性的,可以将去调频后的回波数据在慢时间维进行分块。其中,该方位子块的大小可以选为一帧。可以理解的是,该方位子块可以根据实际场景分成不同的大小,本申请实施例中对此不做限定。
S3013、对该方位子块进行二维FFT处理,得到源数据。
本申请实施例中,中频信号可以通过模数转换器转化为数字信号,并输入信号处理器进行处理,可以得到目标物体的信息。其中,该信号处理的方法包括:快速傅里叶变换和/或频谱分析等。
示例性的,对上述分成方位子块的中频信号进行二维FFT处理。其中,该二维FFT包括快时间FFT和慢时间FFT。沿快时间FFT可以得到该方位子块对应的距离数据,沿慢时间FFT可以得到该方位子块对应的速度数据,进而得到基于雷达处理得到的源数据。
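示例性的,上述对方位子块做二维FFT得到RV谱的过程,可用如下Python片段示意(基于NumPy;其中的回波数据、矩阵维度与峰值位置均为假设值,仅为流程示意,并非本申请方法的唯一实现):

```python
import numpy as np

def rv_spectrum(beat_block: np.ndarray) -> np.ndarray:
    """对一个方位子块的去调频回波做二维FFT,得到距离-速度(RV)谱的幅度。

    beat_block: 形状为 (N_slow, M_fast) 的中频采样,
                行对应慢时间(脉冲序号),列对应快时间(脉内采样)。
    返回: 幅度RV谱,行对应速度(多普勒)单元,列对应距离单元。
    """
    range_fft = np.fft.fft(beat_block, axis=1)                  # 快时间FFT -> 距离维
    rv = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # 慢时间FFT -> 速度维,零速移到中心
    return np.abs(rv)

# 构造一个假设的单散射点回波:距离单元20、多普勒为0(静止目标)
N, M = 64, 128
m = np.arange(M)[None, :]                      # 快时间采样序号
beat = np.exp(2j * np.pi * 20 * m / M)         # 仅含距离拍频,多普勒为0
spec = rv_spectrum(np.broadcast_to(beat, (N, M)))
# 峰值应出现在零多普勒行(fftshift后为第N//2行)、距离单元20处
peak = np.unravel_index(np.argmax(spec), spec.shape)
```

该片段仅演示两次FFT的维度顺序与fftshift约定;实际实现中的窗函数、标定与通道处理此处从略。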
S302、根据源数据,得到至少一个第一目标对应的数据。
本申请实施例中,至少一个第一目标可以包括,对源数据进行目标检测后得到的源数据包含的多个目标中,速度小于(或小于等于)第一阈值的一个或多个目标;至少一个第一目标对应的数据可以包括,从源数据的RV谱中获取的,速度小于(或小于等于)第一阈值的一个目标的RV谱或多个目标的RV谱。该第一阈值可以包括速度阈值,该第一阈值可以包括数值,也可以包括数值构成的范围。例如,该第一阈值可以为0,或者该第一阈值可以为0m/s~2m/s的范围。可以理解的是,当第一目标的速度处于0m/s~2m/s的范围时,由于第一目标的运动幅度较小,也可以认为该第一目标为静止目标。
本申请实施例中,该第一目标可以包括速度小于(或小于等于)第一阈值的目标。例如,该第一目标可以包括速度小于(或小于等于)第一阈值的静止目标。该静止目标可以包括静止车辆、障碍物、道路标识和护栏等目标。
本申请实施例中,根据源数据,得到一个第一目标对应的数据的一种可能实现为,根据源数据的RV谱,得到一个静止目标对应的数据。例如,在自动驾驶场景中,若该 场景中包含行驶的汽车、运动的行人和静止车辆,可以对源数据的RV谱进行目标检测,经过目标检测可以得到该自动驾驶场景中的多个目标,包括:行驶的汽车的目标、运动的行人的目标和静止车辆的目标,利用速度阈值对上述多个目标进行进一步的筛选,可以得到多个目标中的一个静止目标,该静止目标为第一目标,如静止车辆。
本申请实施例中,根据源数据,得到多个第一目标对应的数据的一种可能实现为,根据源数据的RV谱,得到多个静止目标对应的数据。例如,在自动驾驶场景中,若场景中包含行驶的汽车、运动的行人、静止车辆、障碍物和护栏,可以对源数据的RV谱进行目标检测,经过目标检测可以得到该自动驾驶场景中的多个目标,包括:行驶的汽车的目标、运动的行人的目标、静止车辆的目标、障碍物的目标和护栏的目标,利用速度阈值对上述多个目标进行进一步的筛选,可以得到多个目标中的多个静止目标,该静止目标为第一目标,如静止车辆、障碍物和护栏。
示例性的,当至少一个第一目标为速度小于第一阈值的目标时,从源数据中得到至少一个第一目标对应的数据,可以包括如下实现方式:
一种实现中,可以从源数据中筛选出速度小于第一阈值的目标对应的数据。其中,筛选可以理解为从较多数据中取出需要的数据。例如,当第一阈值为接近于0的速度值时,可以利用该第一阈值从源数据中筛选出速度小于第一阈值的静止目标对应的数据。
另一种实现中,可以得到速度大于或等于第一阈值的目标对应的数据,并从源数据中去除速度大于或等于第一阈值的目标对应的数据,进而得到速度小于第一阈值的目标对应的数据。例如,当第一阈值为接近于0的速度值时,可以利用该第一阈值从源数据中得到速度大于或等于第一阈值的运动目标对应的数据,并从源数据中去除该运动目标对应的数据,进而得到速度小于第一阈值的静止目标对应的数据。
示例性的,当至少一个第一目标为速度小于或等于第一阈值的目标时,从源数据中得到至少一个第一目标对应的数据,可以包括如下实现方式:
一种实现中,可以从源数据中筛选出速度小于或等于第一阈值的目标对应的数据。例如,当第一阈值为速度等于0时,可以利用该第一阈值从源数据中筛选出速度小于或等于第一阈值的静止目标对应的数据。
另一种实现中,可以得到速度大于第一阈值的目标对应的数据,并从源数据中去除速度大于第一阈值的目标对应的数据,进而得到速度小于或等于第一阈值的目标对应的数据。例如,当第一阈值为速度等于0时,可以利用该第一阈值从源数据中得到速度大于第一阈值的运动目标对应的数据,并从源数据中去除该运动目标对应的数据,进而得到速度小于或等于第一阈值的静止目标对应的数据。
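示例性的,上述利用第一阈值(速度阈值)把源数据分成“第一目标对应的数据”与“速度超过阈值的目标对应的数据”的做法,可用如下片段示意(速度轴、阈值与RV谱均为假设值):

```python
import numpy as np

def split_by_speed(rv: np.ndarray, velocity_axis: np.ndarray, v_thresh: float):
    """按速度阈值把RV谱分成静止目标数据与运动目标数据。

    rv: RV谱,行对应velocity_axis中的速度单元。
    返回 (static_part, moving_part),二者与rv同形,未选中的单元置零。
    """
    static_rows = np.abs(velocity_axis) <= v_thresh        # 速度小于等于阈值的行
    static_part = np.where(static_rows[:, None], rv, 0.0)  # 第一目标对应的数据
    moving_part = rv - static_part                         # 从源数据中去除静止数据后的剩余
    return static_part, moving_part

v_axis = np.linspace(-10, 10, 21)    # 假设的速度轴, m/s,步长1
rv = np.ones((21, 8))                # 假设的RV谱幅度
static, moving = split_by_speed(rv, v_axis, v_thresh=0.5)
# 仅速度为0的那一行保留在static中
```

实际实现中阈值既可取单一数值,也可取一个范围,与正文对第一阈值的描述一致。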
可以理解的是,利用源数据得到至少一个第一目标对应的数据的方法,可以根据实际场景包括其他内容,本申请实施例中对此不做限定。
S303、对至少一个第一目标对应的数据进行成像处理,得到至少一个第一目标的成像结果。
本申请实施例中,成像可以包括,将数据处理成其数据对应的图像;成像处理可以包括,将数据处理成其对应的图像的实时过程,该成像处理的方法可以包括SAR成像方法、极化成像或遥感成像方法等其他成像方法;至少一个第一目标的成像结果可以包括,对第一目标对应的数据进行成像处理,得到一个第一目标对应的图像或多个第一目标对应的图像。
示例性的,当该第一目标对应的数据可以包括RV谱时,对至少一个第一目标对应的数据进行成像处理的一种实现为,对第一目标对应的数据进行数据恢复,对数据恢复后的数据沿慢时间维进行拼接,并对拼接后的数据进行运动补偿和成像处理等,得到第一目标的成像结果。上述成像处理过程,用于将数据处理成可识别的图像。在自动驾驶方面,该图像可以用于确定自动驾驶策略;在高精地图方面,该图像也可以用于更新高精地图中的元素,进而得到更全面和更准确的高精地图。
综上所述,在自动驾驶或智能驾驶等运动场景中,获取基于雷达处理得到的源数据,并利用第一阈值对源数据进行不同程度的处理和利用,得到与其应用场景相适应的第一目标对应的数据,对该目标对应的数据进行成像处理,从而可以实现雷达在自动驾驶场景中的成像功能,突破了基于雷达的成像场景中对于静止场景的限制,可以基于不同的场景实现雷达更多的功能,进而可以利用雷达更好的辅助自动驾驶或智能驾驶。
在图3对应的实施例的基础上,可能的实现方式中,S302可以包括如下步骤:
S3021、根据源数据进行目标检测,得到至少一个第二目标和至少一个第三目标。
本申请实施例中,至少一个第二目标可以包括,对源数据进行目标检测后得到的源数据包含的多个目标中,速度大于(或大于等于)第一阈值的一个或多个目标;至少一个第三目标可以包括源数据包含的多个目标中,信号幅度大于(或大于等于)第二阈值的一个或多个目标;至少一个第二目标对应的数据可以包括,从源数据的RV谱中获取的,速度大于(或大于等于)第一阈值的一个目标的RV谱或多个目标的RV谱;至少一个第三目标对应的数据可以包括,从源数据的RV谱中获取的,信号幅度大于(或大于等于)第二阈值的一个目标的RV谱或多个目标的RV谱。其中,该第二阈值可以包括数值,也可以包括数值构成的范围。其用法与第一阈值类似,在此不再赘述。
示例性的,根据源数据进行目标检测,得到至少一个第二目标和至少一个第三目标的一种实现为,可以对源数据的RV谱进行目标检测,得到至少一个运动目标和至少一个强静止目标。其中,运动目标为速度大于0的目标;强静止目标为信号强度大于预设的信号强度的目标。例如,自动驾驶场景中可以包括:行驶的汽车、运动的行人、静止车辆、障碍物、护栏或车道线等目标。可以对包含上述场景中多个目标的源数据的RV谱进行目标检测,得到行驶的汽车或运动的行人等运动目标,以及障碍物、护栏或车道线等静止目标。
本申请实施例中,该目标检测用于将目标与其他没有特征的内容分开,提取目标并确定目标的位置。例如,在源数据的RV谱中,可以利用目标检测将第二目标以及第三目标,与RV谱中的其他数据分开。其中,该目标检测的方法可以包括:神经网络方法,或CFAR检测等其他目标检测方法。可以理解的是,该目标检测方法可以根据实际场景包括其他内容,本申请实施例中对此不做限定。
可选的,得到的第二目标和第三目标也可以用于实现雷达的其他功能,例如对该第二目标进行目标聚类或目标跟踪等处理,可以用于实现如车载自适应巡航等功能;对该第三目标进行护栏检测或道路边缘检测等处理,可以用于实现车道保持辅助等功能。可以理解的是,本申请实施例可以对雷达接收到的数据进行充分利用,并基于不同的应用场景,实现雷达的多种功能。
可选的,根据源数据进行目标检测,可以得到至少一个第二目标的距离数据和速度数据,以及至少一个第三目标的距离数据和速度数据。进一步的,也可以获取第二目标的角度数据,以及第三目标的角度数据。例如,可以利用通道间FFT(或称角度维FFT),或谱估计类波达方向(direction of arrival,DOA)估计等方法,得到该第二目标的角度数据和第三目标的角度数据。其中,一个发射天线与一个接收天线的组合可以理解为一个通道,通道间FFT可以理解为基于多个通道的RV谱进行FFT处理。可以理解的是,该目标的角度数据可以用于获取目标更准确的位置。
S3022、在源数据中去除第二目标对应的数据和第三目标对应的数据,得到至少一个第一目标对应的数据。
本申请实施例中,该第二目标对应的数据或第三目标对应的数据可以包括RV谱。从源数据中去除第二目标对应的数据和第三目标对应的数据的一种实现可以为:恢复出第二目标对应的点扩散函数(或称点扩展函数)和第三目标对应的点扩散函数,并从源数据的RV谱中去除该第二目标对应的点扩散函数所包含的点数据,以及去除第三目标对应的点扩散函数所包含的点数据,得到源数据中剩余的数据,例如可以包括第一目标对应的数据。
示例性的,自动驾驶场景中可以包括:行驶的汽车、运动的行人、静止车辆、障碍物、护栏或车道线等目标。可以利用目标检测方法得到多个目标,并利用速度阈值(例如速度为0)和信号强度阈值筛选出运动目标和强静止目标;具体的,运动目标,例如:行驶的汽车或运动的行人等目标;强静止目标,例如:静止车辆、障碍物或护栏等目标。则从源数据的RV谱中去除运动目标对应的点扩散函数和强静止目标对应的点扩散函数,可以得到RV谱中剩余的弱静止目标对应的数据,例如车道线对应的数据等。
基于此,在自动驾驶或智能驾驶等运动场景中,可以通过第一阈值和第二阈值实现雷达获取数据的分离和利用;并突破了基于雷达的成像场景中对于静止场景的限制,可以基于不同的场景实现雷达更多的功能,进而更好的辅助自动驾驶或智能驾驶。
在图3对应的实施例的基础上,可能的实现方式中,S3022可以包括如下步骤:
S30221、根据至少一个第二目标对应的数据,得到至少一个第二目标对应的第一点扩散函数。
本申请实施例中,该至少一个第二目标对应的第一点扩散函数可以包括,至少一个第二目标的散射点对应的二维辛格(sinc)函数。
一种实现中,可以利用响应函数得到第二目标对应的第一点扩散函数。该响应函数可以理解为RV谱中多个散射点(例如p个散射点)的响应之和,该响应函数可以为:
s(q_m,q_n)=∑_p A_p·sinc[q_m−(2K_r·r_p/c)·(M/F_s)]·sinc[q_n−(2v_p/λ)·N·PRI]
其中,sinc(x)=sin(πx)/(πx);p表示第p个散射点的编号,A_p表示第p个散射点对应的RV谱中的幅度,λ表示信号波长,r_p和v_p表示第p个散射点对应的距离和速度,c表示光速,F_s代表信号采样率,K_r代表信号的调频率,PRI表示线性调频信号的重复时间,M、N分别为快时间和慢时间的数据点数,q_m、q_n分别为RV谱中快时间维和慢时间维对应的采样点序号。
示例性的,根据至少一个第二目标对应的数据,得到至少一个第二目标对应的第一点扩散函数的一种实现为,获取速度超过第一阈值的第二目标对应的采样点序号,利用该第二目标对应的采样点序号、第二目标对应的幅值以及响应函数,恢复出第二目标对应的二维辛格函数。
可以理解的是,不同的散射点对应的点扩散函数,相当于对同一种信号形式的平移和幅相变换。
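示例性的,“恢复散射点对应的点扩散函数并从RV谱中去除”的思路可用如下片段示意(此处以可分离的二维辛格函数近似点扩散函数;散射点的峰值位置与幅度均为假设值):

```python
import numpy as np

def sinc_psf_2d(shape, peak, amp):
    """以二维辛格(sinc)函数近似一个散射点在RV谱中的点扩散函数。

    shape: RV谱大小 (N, M);peak: 峰值所在的 (速度单元, 距离单元),可为小数;
    amp: 峰值幅度。np.sinc(x)=sin(pi*x)/(pi*x),包含主瓣与旁瓣。
    """
    N, M = shape
    qn = np.arange(N)[:, None] - peak[0]
    qm = np.arange(M)[None, :] - peak[1]
    return amp * np.sinc(qn) * np.sinc(qm)

def remove_target(rv, peak, amp):
    """在RV谱中去除该散射点点扩散函数对应的数据(主瓣与旁瓣一并去除)。"""
    return rv - sinc_psf_2d(rv.shape, peak, amp)

rv = np.zeros((32, 32))
rv += sinc_psf_2d(rv.shape, (8.3, 8.4), 5.0)   # 假设的强目标(非整点,带旁瓣)
rv += sinc_psf_2d(rv.shape, (20, 20), 0.2)     # 假设的弱目标
residual = remove_target(rv, (8.3, 8.4), 5.0)  # 去除强目标后仅剩弱目标
```

实际实现中散射点的位置、幅度与相位需由检测结果估计(如正文所述的最小二乘方法),并需考虑加窗对主瓣宽度和旁瓣强度的影响。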
另一种实现中,也可以通过最小二乘方法估计第二目标的点扩散函数的幅度和相位误差,并根据上述响应函数重建第二目标的点扩散函数。
可选的,若在实现本申请实施例提供的目标检测的过程中有加窗处理,则在恢复第一点扩散函数和第二点扩散函数的过程中,需考虑窗函数对sinc型距离包络主瓣宽度及旁瓣强度的影响。
S30222、根据至少一个第三目标对应的数据,得到至少一个第三目标对应的第二点扩散函数。
本申请实施例中,该至少一个第三目标对应的第二点扩散函数可以包括,至少一个第三目标的散射点对应的二维辛格函数。
示例性的,根据至少一个第三目标对应的数据,得到至少一个第三目标对应的第二点扩散函数的一种实现为,获取信号强度超过第二阈值的第三目标对应的采样点序号,利用该第三目标对应的采样点序号、第三目标对应的数据以及响应函数,恢复出第三目标对应的二维辛格函数。
可以理解的是,获取点扩散函数的方法可以根据实际场景包括其他内容,本申请实施例中对此不做限定。
S30223、在源数据中去除第一点扩散函数对应的数据和第二点扩散函数对应的数据。
本申请实施例中,第一点扩散函数对应的数据包括,第一点扩散函数中包括的第二目标的主瓣数据以及第二目标的旁瓣数据;第二点扩散函数对应的数据包括,第二点扩散函数中包括的第三目标的主瓣数据以及第三目标的旁瓣数据。
基于此,在自动驾驶或智能驾驶等运动场景中,基于雷达的成像处理过程中,由于可以从基于雷达处理得到的源数据中去除第一点扩散函数的主瓣数据和旁瓣数据,以及去除第二点扩散函数的主瓣数据和旁瓣数据,因此去除掉第二目标和第三目标的更多的影响后的源数据,可以包含第一目标数据。该去除过程可以更好的实现数据分离,并可以基于该第一目标对应的数据获得更准确的成像结果,进而实现不同应用场景下,雷达的多种功能。
在图3对应的实施例的基础上,可能的实现方式中,S302可以包括如下步骤:
S4021、根据源数据进行目标检测,得到至少一个第二目标。
本申请实施例中,S4021与上述S3021中根据源数据进行目标检测,得到至少一个第二目标的过程相同,在此不再赘述。
S4022、在源数据中去除第二目标对应的数据,得到至少一个第一目标对应的数据。
本申请实施例中,S4022与上述S3022中在源数据中去除第二目标对应的数据,得到至少一个第一目标对应的数据的过程相同,在此不再赘述。
在图3对应的实施例的基础上,可能的实现方式中,S4022可以包括如下步骤:
S40221、根据至少一个第二目标对应的数据,得到至少一个第二目标对应的第一点扩散函数。
本申请实施例中,S40221与上述S30221中根据至少一个第二目标对应的数据,得到至少一个第二目标对应的第一点扩散函数的过程相同,在此不再赘述。
S40222、在源数据中去除第一点扩散函数对应的数据。
本申请实施例中,第一点扩散函数中包括第二目标的主瓣数据以及第二目标的旁瓣数据。
本申请实施例中,S40222与上述S30222中在源数据中去除第一点扩散函数对应的数据的过程相同,在此不再赘述。
基于此,在自动驾驶或智能驾驶等运动场景中,基于雷达的成像处理过程中,由于可以从基于雷达处理得到的源数据中去除第一点扩散函数的主瓣数据和旁瓣数据,因此去除掉第二目标的影响后的源数据可以包含第一目标数据。该去除过程可以更好的实现数据分离,并可以基于该第一目标对应的数据获得静止目标的成像结果,进而实现不同应用场景下,雷达的多种功能。
在图3对应的实施例的基础上,可能的实现方式中,第一目标对应的数据包括距离速度RV谱,S303可以包括如下步骤:
S3031、对至少一个第一目标对应的数据进行恢复并沿慢时间维进行拼接,得到拼接数据。
本申请实施例中,该恢复可以包括将经过一定处理得到的第一目标对应的数据恢复成处理前的数据,例如,将第一目标对应的数据恢复成第一目标对应的中频信号。示例性的,由于在S3013所示的步骤中,已经对分成方位子块的中频信号进行二维FFT处理,得到源数据,因此此处的数据恢复的方法可以包括二维快速傅里叶逆变换(inverse fast Fourier transform,IFFT)。其中,源数据包括第一目标对应的数据。
本申请实施例中,该拼接可以理解为,根据一定条件将多个子数据拼成高分辨率的数据。示例性的,由于在S3012所示的步骤中,已经沿慢时间维将中频信号分成了多个方位子块,因此此处的拼接,可以沿慢时间维对多个经过处理后的方位子块(或可以理解为第一目标对应的数据)进行拼接,得到拼接数据。例如,当方位子块沿慢时间维按一帧进行分块时,在拼接处理时,可以沿慢时间维按一帧进行拼接。例如,本申请实施例中沿慢时间维获取的多个方位子块,该多个方位子块中可以包含重叠的部分,该多个方位子块的拼接处理,用于在成像时得到高分辨率的图像。示例性的,对第一目标对应的数据进行拼接时所要拼接的长度,需要满足成像过程中对于方位分辨率的要求。例如,根据成像时方位分辨率的要求,获取与该方位分辨率相对应的方位子块的数量。其中,方位分辨率(azimuth resolution)可以为方位维度的分辨率,为两个目标可以被辨识的最小方位距离。
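示例性的,“对第一目标对应的RV谱做二维IFFT恢复,再沿慢时间维拼接”的过程可用如下片段示意(子块数量与大小均为假设值):

```python
import numpy as np

def recover_and_stitch(rv_blocks):
    """对每个方位子块的第一目标RV谱做二维IFFT恢复回波,再沿慢时间维拼接。

    rv_blocks: 列表,每个元素是一个方位子块的RV谱(假设速度维已做过fftshift)。
    返回: 形状为 (总慢时间点数, M) 的拼接回波。
    """
    recovered = []
    for rv in rv_blocks:
        # 先撤销速度维的fftshift,再做二维IFFT恢复去调频回波
        recovered.append(np.fft.ifft2(np.fft.ifftshift(rv, axes=0)))
    return np.concatenate(recovered, axis=0)  # 沿慢时间维(行)拼接

# 3个假设的方位子块,每块16个脉冲、32个距离单元
blocks = [np.random.default_rng(i).standard_normal((16, 32)) for i in range(3)]
echo = recover_and_stitch(blocks)
```

该片段仅演示恢复与拼接的维度关系;子块重叠部分的处理以及拼接长度需按正文的方位分辨率要求确定。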
S3032、对拼接数据进行成像处理,得到至少一个第一目标的成像结果。
本申请实施例中,该拼接数据包括第一目标对应的数据,S3032中的成像方法与S303中对至少一个第一目标对应的数据进行成像处理,得到至少一个第一目标的成像结果的过程相同,在此不再赘述。
基于此,由于对至少一个第一目标对应的数据进行拼接处理,该拼接处理可以用于在成像时获得分辨率较高的图像,因此可以获得更加准确的第一目标的成像结果。
在图3对应的实施例的基础上,可能的实现方式中,S3032可以包括:对拼接数据进行合成孔径雷达成像处理。
本申请实施例中,合成孔径雷达成像处理时,拼接的长度可以满足合成孔径雷达成像处理的方位分辨率要求。
例如,拼接长度T需要满足:
T≥V_a/(K_a·ρ_a)
其中,V_a为车载平台速度,K_a为多普勒调频率,ρ_a为方位分辨率要求。
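示例性的,由方位多普勒带宽B_a=K_a·T与方位分辨率ρ_a≈V_a/B_a的关系,可得最小拼接长度T≥V_a/(K_a·ρ_a);按假设的参数可估算如下(各参数取值仅为示意):

```python
# 参数均为假设值,仅用于说明拼接长度与方位分辨率要求的关系
V_a = 20.0     # 车载平台速度, m/s
K_a = 400.0    # 多普勒调频率, Hz/s
rho_a = 0.5    # 方位分辨率要求, m
T_min = V_a / (K_a * rho_a)  # 最小拼接长度, 单位: s
```

即在该组假设参数下,沿慢时间维至少需要拼接0.1s的数据才能满足0.5m的方位分辨率要求。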
示例性的,对拼接数据进行合成孔径雷达(或称SAR)成像处理的过程可以包括如下步骤:
S30321、利用多普勒参数估计、多普勒中心补偿和/或走动校正,对拼接数据进行处理,得到处理后的数据。
示例性的,利用多普勒参数估计、多普勒中心补偿和/或走动校正,将拼接数据中的斜视数据处理为等效的正侧视数据。可以理解的是,对拼接数据进行处理的方法也可以包括其他内容,本申请实施例中对此不做限定。
其中,多普勒参数估计可以包括多普勒中心和多普勒调频率的估计,二者均可根据回波数据进行估计。根据多普勒参数的估计结果可以进行雷达运动误差估计,以及后面的运动补偿。
多普勒中心补偿和走动校正可以将斜视数据等效为正侧视数据进行成像处理;其中,走动校正可以用于消除由于斜视引入的距离的线性走动分量;多普勒中心补偿可以使得由于斜视造成的非零多普勒中心补偿为零。
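示例性的,多普勒中心补偿的核心是乘以相位项exp(−j2π·f_dc·t),把由斜视造成的非零多普勒中心搬移到零频,可用如下片段示意(f_dc与PRI均为假设值):

```python
import numpy as np

def doppler_center_compensate(data, f_dc, PRI):
    """把非零多普勒中心 f_dc 补偿为零:逐脉冲乘以 exp(-j*2*pi*f_dc*t)。"""
    N = data.shape[0]
    t = np.arange(N)[:, None] * PRI   # 方位慢时间
    return data * np.exp(-2j * np.pi * f_dc * t)

N, PRI, f_dc = 128, 1e-3, 50.0
sig = np.exp(2j * np.pi * f_dc * np.arange(N)[:, None] * PRI)  # 多普勒中心位于50Hz的假设回波
out = doppler_center_compensate(sig, f_dc, PRI)
# 补偿后方位谱峰应位于零多普勒
peak_bin = int(np.argmax(np.abs(np.fft.fft(out[:, 0]))))
```

实际实现中f_dc需由多普勒参数估计得到,此处直接给定仅为演示补偿的形式。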
S30322、对处理后的数据进行一阶运动补偿和二阶运动补偿,得到补偿数据。
示例性的,可以利用参考距离处的轨迹误差,在所有距离向对处理后的数据进行包络补偿和相位补偿,完成一阶运动补偿,得到一阶补偿后的数据。对一阶补偿后的数据进行方位向FFT处理,将方位FFT处理后的数据与频率变标因子对应的相位函数相乘完成频率变标,对频率变标后的数据进行距离向FFT处理、残余视频相位(residual video phase,RVP)校正以及距离向IFFT处理,将距离向IFFT处理后的数据与补偿因子相乘完成逆频率变标,对逆频率变标后的数据进行距离单元徙动(range cell migration,RCM)校正、二次距离压缩(second range compression,SRC)和距离向FFT处理等处理,得到距离压缩后的数据。其中,方位向表示雷达(或车辆)运动方向,距离向表示垂直于方位向的方向。
频率变标因子可以根据系统参数直接计算出来,计算出来后通过乘以其决定的相位函数完成频率变标。可以理解为,方位FFT后的数据与相位函数相乘就可以完成频率变标。
通过补偿其他距离单元与参考距离单元之间的距离差所导致的相位差,对上述距离压缩后的数据进行二阶运动补偿,得到补偿数据。其中,雷达的距离单元对应于雷达的分辨距离;距离单元指距离维的采样单元;参考距离单元指测绘带(照射区域)中心的最短斜距对应的距离单元。
S30323、对补偿数据进行方位压缩,得到至少一个第一目标的成像结果。
示例性的,根据多普勒调频率估计对补偿数据中的方位相位误差进行校正,将调整后的数据与方位匹配滤波函数相乘,并对相乘后的数据进行方位IFFT处理,完成方位压缩,可以得到包含静止目标的二维SAR图像。
利用几何形变校正和坐标转换,将得到的斜距平面内的二维SAR图像转换为地距平面的SAR图像,得到第一目标的成像结果。
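示例性的,S30323中“与方位匹配滤波函数相乘并做方位IFFT”的方位压缩,可用如下片段示意(方位参考信号按假设的线性调频相位构造,参数均为假设值):

```python
import numpy as np

def azimuth_compress(data, Ka, PRI):
    """频域匹配滤波完成方位压缩:方位FFT后乘以参考方位chirp的共轭谱,再做方位IFFT。"""
    N = data.shape[0]
    t = (np.arange(N) - N // 2) * PRI            # 方位慢时间
    ref = np.exp(1j * np.pi * Ka * t**2)         # 假设的方位向线性调频参考信号
    H = np.conj(np.fft.fft(ref))                 # 方位匹配滤波函数(频域)
    return np.fft.ifft(np.fft.fft(data, axis=0) * H[:, None], axis=0)

N, Ka, PRI = 256, 2000.0, 1e-3
t = (np.arange(N) - N // 2) * PRI
echo = np.exp(1j * np.pi * Ka * t**2)[:, None]   # 单个点目标的方位回波
img = np.abs(azimuth_compress(echo, Ka, PRI))
# 压缩后能量集中为一个峰
```

该片段只演示匹配滤波的形式;实际实现中还需按正文所述先校正方位相位误差,并完成几何形变校正与坐标转换。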
基于此,可以将SAR成像应用于基于雷达的运动场景中,突破了雷达的成像场景中对于静止场景的限制,并可以基于SAR成像获得更加准确的第一目标的成像结果。
在图3对应的实施例的基础上,可能的实现方式中,向目标设备发送至少一个第一目标的成像结果、至少一个第二目标和/或至少一个第三目标。
本申请实施例中,目标设备可以基于第一目标的成像结果制定自动驾驶策略和/或更新高精地图中的目标;或者,目标设备也可以对该第二目标进行目标聚类或目标跟踪等处理,实现如车载自适应巡航等功能;目标设备还可以对该第三目标进行护栏检测或道路边缘检测等处理,实现如车道保持辅助等功能。
基于此,可以基于雷达发送的不同的目标,在不同的应用场景下,实现不同的功能,丰富雷达中的数据的使用场景。
示例性的,图4为本申请实施例提供的一种目标检测方法的交互示意图,如图4所示。
S401、目标设备接收来自雷达的源数据。
本申请实施例中,该目标设备可以包括汽车上的设备,例如:ECU或MDC等,或者,该目标设备也可以包括配置在汽车外的服务器,例如独立的计算设备等。
S402、根据至少一个第一目标的成像结果确定自动驾驶策略;和/或,根据至少一个第一目标的成像结果更新高精地图中的车道线。
一种实现中,根据第一目标的成像结果确定自动驾驶策略。该自动驾驶策略可以包括,指导自动驾驶车辆运行的方式。例如,自动驾驶策略可以包括:指导自动驾驶车辆拐弯、变道、变速、为其他车辆或行人让行等其他自动驾驶策略。
示例性的,当自动驾驶车辆检测到当前场景中第一目标的成像结果显示沿车道线继续行驶的转向标识图像时,自动驾驶车辆可以基于对车道线的成像结果以及自动驾驶的路线,选择合适的车道线行驶。例如,当自动驾驶的路线指示直行,且当前车道线显示沿前方左转的图像时,自动驾驶车辆可以准备变道,更换车道线继续直行行驶;当自动驾驶的路线指示左转,且当前车道线显示沿前方左转的图像时,自动驾驶车辆可以在本车道线内继续行驶。
另一种实现中,根据第一目标的成像结果更新高精地图中的车道线。该高精地图不仅可以描绘道路,还能够反映出每个道路中包含的车辆情况,能够更真实的反映驾驶过程中道路的实际样式。其中,高精地图可以作为自动驾驶环节中确定自动驾驶策略的重要参考依据。
示例性的,可以确定第一目标的成像结果中的车道线在高精地图中的位置,并将该位置的车道线与高精地图中的位置信息相对照,若高精地图中该位置的目标与本申请实施例得到的车道线不同时,可以用本申请实施例得到的车道线,更新高精地图中该位置的车道线。
基于此,该第一目标的成像结果可以用于更好的辅助自动驾驶,以及获得更加准确的高精地图。
基于上述实施例中所描述的内容,为了更好的理解本申请各实施例,下面以从源数据中去除运动目标和强静止目标为例,详细描述本申请实施例提供的目标检测方法的一种实现过程。示例性的,图5为本申请实施例提供的另一种目标检测方法的流程示意图,如图5所示,该目标检测过程可以包括以下步骤:
S501、沿慢时间维将去调频后的回波数据分为多个方位子块。
S502、对该方位子块进行二维FFT处理,得到RV谱。
S503、对该RV谱进行CFAR目标检测,得到多个目标。
示例性的,可以利用速度阈值筛选出运动目标,该运动目标可以进一步用于目标跟踪和目标检测;利用幅度(或称信号强度)阈值筛选出强静止目标,该强静止目标可以用于护栏检测。
可选的,也可以对该目标进行角度估计,得到该目标对应的角度信息。
S504、从RV谱中滤除运动目标对应的数据和强静止目标对应的数据,得到弱静止目标对应的数据。
S505、对弱静止目标对应的数据进行二维IFFT处理,恢复得到弱静止目标的回波数据。
S506、沿慢时间维对弱静止目标的回波数据进行拼接,得到拼接数据。
S507、对拼接数据进行SAR成像处理,得到弱目标的成像结果,例如车道线。
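示例性的,图5中S501~S507的主干流程(分块、二维FFT、滤除运动目标、IFFT恢复、拼接)可串接为如下极简框架(为便于示意,用“保留近零速度单元”代替CFAR检测与逐目标滤除,SAR成像步骤从略,各参数均为假设值):

```python
import numpy as np

def extract_static_echo(beat, block_len, v_thresh_bins=1):
    """S501-S506的极简串接:分块 -> 二维FFT -> 保留近零速单元 -> 二维IFFT恢复 -> 拼接。"""
    blocks = [beat[i:i + block_len] for i in range(0, beat.shape[0], block_len)]
    stitched = []
    for b in blocks:
        rv = np.fft.fftshift(np.fft.fft2(b), axes=0)   # S502: 方位子块的RV谱
        center = rv.shape[0] // 2                       # fftshift后零速所在行
        mask = np.zeros_like(rv)
        sl = slice(center - v_thresh_bins, center + v_thresh_bins + 1)
        mask[sl] = rv[sl]                               # 近零速度单元视为静止目标数据
        stitched.append(np.fft.ifft2(np.fft.ifftshift(mask, axes=0)))  # S505: 恢复回波
    return np.concatenate(stitched, axis=0)             # S506: 沿慢时间维拼接

beat = np.random.default_rng(0).standard_normal((32, 16))  # 假设的去调频回波
static_echo = extract_static_echo(beat, block_len=8)
# 后续可对static_echo做S507的SAR成像处理
```

该框架只体现各步骤的数据流向;实际的CFAR检测、点扩散函数滤除与SAR成像需按前文各节实现。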
基于此,可以通过速度阈值和信号强度阈值实现数据分离,利用不同的数据实现雷达更多的功能;并扩展了基于雷达的SAR成像中对于静止场景的限制,将SAR成像应用于自动驾驶场景中,进而基于SAR成像结果更好的辅助自动驾驶。
下面以从源数据中去除运动目标为例,详细描述本申请实施例提供的目标检测方法的另一种实现过程。示例性的,图6为本申请实施例提供的又一种目标检测方法的流程示意图,如图6所示,该目标检测过程可以包括以下步骤:
S601-S603与上述S501-S503所示的步骤相同,在此不再赘述。
S604、从RV谱中滤除运动目标对应的数据,得到静止目标对应的数据。
S605、判断是否进行强静止目标滤除,当确定滤除强静止目标时,可以执行S606所示的步骤;当确定不滤除强静止目标时,可以执行S610或S613所示的步骤。
S606、从RV谱中滤除强静止目标对应的数据,得到弱静止目标对应的数据。
S607、对弱静止目标对应的数据进行原始数据恢复,恢复得到弱静止目标的回波数据。
示例性的,该原始数据恢复的方法为二维IFFT处理。
S608-S609与上述S506-S507所示的步骤相同,在此不再赘述。
S610、对强静止目标对应的数据进行原始数据恢复,恢复得到强静止目标的回波数据。
S611、沿慢时间维对强静止目标的回波数据进行拼接,得到拼接数据。
S612、对拼接数据进行SAR成像处理,得到强静止目标的成像结果,例如护栏或静止的车辆等。
S613、获得强静止目标的点云图像。
基于此,可以通过速度阈值和信号强度阈值实现数据分离,利用不同的数据实现雷达更多的功能;并扩展了基于雷达的SAR成像中对于静止场景的限制,将SAR成像应用于自动驾驶场景中,进而基于SAR成像结果更好的辅助自动驾驶。
上面结合图3-图6,对本申请实施例提供的方法进行了说明,下面对本申请实施例提供的执行上述方法的装置进行描述。
示例性的,图7为本申请实施例提供的一种终端的结构示意图,如图7所示,终端70包括目标检测装置80。其中,该终端可以为车辆或机器人等终端设备。可以理解,终端所包括的其他部件或其他装置可以根据实际应用场景设定,本申请实施例不作限定。
本申请实施例中,上述终端可以通过目标检测装置80执行上述实施例所描述的方法,或者,上述终端可以在雷达的辅助下,通过目标检测装置80执行上述实施例所描述的方法。可以理解,终端对目标检测装置80或雷达进行控制的实现方式,可以根据实际应用场景设定,本申请实施例不作具体限定。
在图7所示的终端的基础上,为了更好地描述目标检测装置80,示例性的,图8为本申请实施例提供的一种目标检测装置80的结构示意图,如图8所示,目标检测装置80可以用于通信设备、电路、硬件组件或者芯片中,该目标检测装置包括:处理单元801和通信单元802。其中,处理单元801用于支持目标检测装置执行信息处理的步骤;通信单元802用于支持目标检测装置执行数据发送或接收的步骤。
具体的,处理单元801,用于获取基于雷达处理得到的源数据;处理单元801,还用于根据源数据,得到至少一个第一目标对应的数据;其中,至少一个第一目标为速度小于第一阈值的目标;处理单元801,还用于对至少一个第一目标对应的数据进行成像处理,得到至少一个第一目标的成像结果。
可能的实现方式中,第一目标的信号强度小于第二阈值,处理单元801,具体用于:根据源数据进行目标检测,得到至少一个第二目标和至少一个第三目标;其中,至少一个第二目标为速度大于或等于第一阈值的目标,至少一个第三目标为信号强度大于或等于第二阈值的目标;在源数据中去除第二目标对应的数据和第三目标对应的数据,得到至少一 个第一目标对应的数据。
可能的实现方式中,处理单元801,具体用于:根据至少一个第二目标对应的数据,得到至少一个第二目标对应的第一点扩散函数;根据至少一个第三目标对应的数据,得到至少一个第三目标对应的第二点扩散函数;在源数据中去除第一点扩散函数对应的数据和第二点扩散函数对应的数据;其中,第一点扩散函数中包括第二目标的主瓣数据以及第二目标的旁瓣数据;第二点扩散函数中包括第三目标的主瓣数据以及第三目标的旁瓣数据。
可能的实现方式中,处理单元801,具体用于:根据源数据进行目标检测,得到至少一个第二目标;其中,至少一个第二目标为速度大于或等于第一阈值的目标;在源数据中去除第二目标对应的数据,得到至少一个第一目标对应的数据。
可能的实现方式中,处理单元801,具体用于:根据至少一个第二目标对应的数据,得到至少一个第二目标对应的第一点扩散函数;第一点扩散函数中包括第二目标的主瓣数据以及第二目标的旁瓣数据;在源数据中去除第一点扩散函数对应的数据。
可能的实现方式中,第一目标对应的数据包括距离速度RV谱,处理单元801,具体用于:对至少一个第一目标对应的数据进行恢复并沿慢时间维进行拼接,得到拼接数据;对拼接数据进行成像处理,得到至少一个第一目标的成像结果。
可能的实现方式中,处理单元801,具体用于:对拼接数据进行合成孔径雷达成像处理。
可能的实现方式中,处理单元801,具体用于:利用多普勒参数估计、多普勒中心补偿和/或走动校正,对拼接数据进行处理,得到处理后的数据;对处理后的数据进行一阶运动补偿和二阶运动补偿,得到补偿数据;对补偿数据进行方位压缩,得到至少一个第一目标的成像结果。
可能的实现方式中,通信单元802,具体用于:向目标设备发送至少一个第一目标的成像结果、至少一个第二目标和/或至少一个第三目标。
可能的实现方式中,通信单元802,具体用于:接收来自雷达的源数据。
可能的实现方式中,至少一个第一目标包括至少一个车道线,处理单元801,具体用于:根据至少一个第一目标的成像结果确定自动驾驶策略;和/或,根据至少一个第一目标的成像结果更新高精地图中的车道线。
在一种可能的实施例中,目标检测装置还可以包括:存储单元803。处理单元801、存储单元803通过通信线路相连。
存储单元803可以包括一个或者多个存储器,存储器可以是一个或者多个设备、电路中用于存储程序或者数据的器件。
存储单元803可以独立存在,通过通信线路与目标检测装置具有的处理单元801相连。存储单元803也可以和处理单元801集成在一起。
其中,则通信单元802可以是输入或者输出接口、管脚或者电路等。示例性的,存储单元803可以存储雷达或目标设备的方法的计算机执行指令,以使处理单元801执行上述实施例中雷达或目标设备的方法。存储单元803可以是寄存器、缓存或者RAM等,存储单元803可以和处理单元801集成在一起。存储单元803可以是ROM或者可存储静态信息和指令的其他类型的静态存储设备,存储单元803可以与处理单元801相独立。
本申请实施例提供了一种目标检测装置,该目标检测装置包括一个或者多个模块,用于实现上述图3-图6中所包含的步骤中的方法,该一个或者多个模块可以与上述图3-图6中所包含的步骤中的方法的步骤相对应。例如,对于执行对该目标检测装置的动作进行控制或处理的模块可以称为处理模块。
示例性的,图9为本申请实施例提供的一种控制设备的硬件结构示意图,如图9所示,该控制设备包括处理器901,通信线路904以及至少一个通信接口(图9中示例性的以通信接口903为例进行说明)。
处理器901可以是一个通用中央处理器(central processing unit,CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制本申请方案程序执行的集成电路。
通信线路904可包括在上述组件之间传送信息的电路。
通信接口903,使用任何收发器一类的装置,用于与其他设备或通信网络通信,如以太网,无线局域网(wireless local area networks,WLAN)等。
可能的,该控制设备还可以包括存储器902。
存储器902可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过通信线路904与处理器相连接。存储器也可以和处理器集成在一起。
其中,存储器902用于存储执行本申请方案的计算机执行指令,并由处理器901来控制执行。处理器901用于执行存储器902中存储的计算机执行指令,从而实现本申请实施例所提供的目标检测方法。
可能的,本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
在具体实现中,作为一种实施例,处理器901可以包括一个或多个CPU,例如图9中的CPU0和CPU1。
在具体实现中,作为一种实施例,控制设备可以包括多个处理器,例如图9中的处理器901和处理器905。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。这里的处理器可以指一个或多个设备、电路、和/或用于处理数据(例如计算机程序指令)的处理核。
本申请实施例一种可选的方式,所述处理器901用于读取存储器902中的程序并执行如图3所示的S301-S303中的方法流程,如图4所示的S401-S402中的方法流程,如图5所示的S501-S507中的方法流程,或如图6所示的S601-S613中的方法流程。
示例性的,图10为本申请实施例提供的一种芯片的结构示意图。芯片100包括一个或两个以上(包括两个)处理器1010和通信接口1030。
在一些实施方式中,存储器1040存储了如下的元素:可执行模块或者数据结构,或者他们的子集,或者他们的扩展集。
本申请实施例中,存储器1040可以包括只读存储器和随机存取存储器,并向处理器1010提供指令和数据。存储器1040的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。
本申请实施例中,处理器1010、通信接口1030以及存储器1040通过总线系统1020耦合在一起。其中,总线系统1020除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。为了便于描述,在图10中将各种总线都标为总线系统1020。
上述本申请实施例描述的方法可以应用于处理器1010中,或者由处理器1010实现。处理器1010可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1010中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1010可以是通用处理器(例如,微处理器或常规处理器)、数字信号处理器(digital signal processing,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门、晶体管逻辑器件或分立硬件组件,处理器1010可以实现或者执行本发明实施例中公开的各方法、步骤及逻辑框图。
结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。其中,软件模块可以位于随机存储器、只读存储器、可编程只读存储器或带电可擦写可编程存储器(electrically erasable programmable read only memory,EEPROM)等本领域成熟的存储介质中。该存储介质位于存储器1040,处理器1010读取存储器1040中的信息,结合其硬件完成上述方法的步骤。
在上述实施例中,存储器存储的供处理器执行的指令可以以计算机程序产品的形式实现。其中,计算机程序产品可以是事先写入在存储器中,也可以是以软件形式下载并安装在存储器中。
计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包括一个或多个可用介质集成的服务器、数据中心等数据存储设备。例如,可用介质可以包括磁性介质(例如,软盘、硬盘或磁带)、光介质(例如,数字通用光盘(digital versatile disc,DVD))、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本申请实施例还提供了一种计算机可读存储介质。上述实施例中描述的方法可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。计算机可读介质可以包 括计算机存储介质和通信介质,还可以包括任何可以将计算机程序从一个地方传送到另一个地方的介质。存储介质可以是可由计算机访问的任何目标介质。
作为一种可能的设计,计算机可读介质可以包括紧凑型光盘只读储存器(compact disc read-only memory,CD-ROM)、RAM、ROM、EEPROM或其它光盘存储器;计算机可读介质可以包括磁盘存储器或其它磁盘存储设备。而且,任何连接线也可以被适当地称为计算机可读介质。例如,如果使用同轴电缆,光纤电缆,双绞线,DSL或无线技术(如红外,无线电和微波)从网站,服务器或其它远程源传输软件,则同轴电缆,光纤电缆,双绞线,DSL或诸如红外,无线电和微波之类的无线技术包括在介质的定义中。如本文所使用的磁盘和光盘包括光盘(CD),激光盘,光盘,数字通用光盘(digital versatile disc,DVD),软盘和蓝光盘,其中磁盘通常以磁性方式再现数据,而光盘利用激光光学地再现数据。
上述的组合也应包括在计算机可读介质的范围内。以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以权利要求的保护范围为准。

Claims (26)

  1. 一种目标检测方法,其特征在于,应用于雷达,包括:
    获取基于雷达处理得到的源数据;
    根据所述源数据,得到至少一个第一目标对应的数据;其中,所述至少一个第一目标为速度小于第一阈值的目标;
    对所述至少一个第一目标对应的数据进行成像处理,得到所述至少一个第一目标的成像结果。
  2. 根据权利要求1所述的方法,其特征在于,所述第一目标的信号强度小于第二阈值,所述根据所述源数据,得到至少一个第一目标对应的数据,包括:
    根据所述源数据进行目标检测,得到至少一个第二目标和至少一个第三目标;其中,所述至少一个第二目标为速度大于或等于所述第一阈值的目标,所述至少一个第三目标为信号强度大于或等于所述第二阈值的目标;
    在所述源数据中去除所述第二目标对应的数据和所述第三目标对应的数据,得到所述至少一个第一目标对应的数据。
  3. 根据权利要求2所述的方法,其特征在于,所述在所述源数据中去除所述第二目标对应的数据和所述第三目标对应的数据,包括:
    根据所述至少一个第二目标对应的数据,得到所述至少一个第二目标对应的第一点扩散函数;
    根据所述至少一个第三目标对应的数据,得到所述至少一个第三目标对应的第二点扩散函数;
    在所述源数据中去除所述第一点扩散函数对应的数据和所述第二点扩散函数对应的数据;
    其中,所述第一点扩散函数中包括第二目标的主瓣数据以及所述第二目标的旁瓣数据;所述第二点扩散函数中包括第三目标的主瓣数据以及所述第三目标的旁瓣数据。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述源数据,得到至少一个第一目标对应的数据,包括:
    根据所述源数据进行目标检测,得到至少一个第二目标;其中,所述至少一个第二目标为速度大于或等于所述第一阈值的目标;
    在所述源数据中去除所述第二目标对应的数据,得到所述至少一个第一目标对应的数据。
  5. 根据权利要求4所述的方法,其特征在于,所述在所述源数据中去除所述至少一个第二目标对应的数据,包括:
    根据所述至少一个第二目标对应的数据,得到所述至少一个第二目标对应的第一点扩散函数;所述第一点扩散函数中包括第二目标的主瓣数据以及所述第二目标的旁瓣数据;
    在所述源数据中去除所述第一点扩散函数对应的数据。
  6. 根据权利要求1-5任一项所述的方法,其特征在于,所述第一目标对应的数据包括距离速度RV谱,所述对所述至少一个第一目标对应的数据进行成像处理,得到所述至少一个第一目标的成像结果,包括:
    对所述至少一个第一目标对应的数据进行恢复并沿慢时间维进行拼接,得到拼接数据;
    对所述拼接数据进行成像处理,得到所述至少一个第一目标的成像结果。
  7. 根据权利要求6所述的方法,其特征在于,所述对所述拼接数据进行成像处理,包括:
    对所述拼接数据进行合成孔径雷达成像处理。
  8. 根据权利要求6或7所述的方法,其特征在于,所述对所述拼接数据进行成像处理,得到所述至少一个第一目标的成像结果,包括:
    利用多普勒参数估计、多普勒中心补偿和/或走动校正,对所述拼接数据进行处理,得到处理后的数据;
    对所述处理后的数据进行一阶运动补偿和二阶运动补偿,得到补偿数据;
    对所述补偿数据进行方位压缩,得到所述至少一个第一目标的成像结果。
  9. 根据权利要求8所述的方法,其特征在于,所述方法应用于雷达,所述方法还包括:
    向目标设备发送所述至少一个第一目标的成像结果、所述至少一个第二目标和/或所述至少一个第三目标。
  10. 根据权利要求8所述的方法,其特征在于,所述方法应用于目标设备,所述获取基于雷达处理得到的源数据,包括:
    接收来自所述雷达的所述源数据。
  11. 根据权利要求10所述的方法,其特征在于,所述至少一个第一目标包括至少一个车道线,所述方法还包括:
    根据所述至少一个第一目标的成像结果确定自动驾驶策略;和/或,
    根据所述至少一个第一目标的成像结果更新高精地图中的车道线。
  12. 一种目标检测装置,其特征在于,所述装置包括:
    处理单元,用于获取基于雷达处理得到的源数据;
    所述处理单元,还用于根据所述源数据,得到至少一个第一目标对应的数据;其中,所述至少一个第一目标为速度小于第一阈值的目标;
    所述处理单元,还用于对所述至少一个第一目标对应的数据进行成像处理,得到所述至少一个第一目标的成像结果。
  13. 根据权利要求12所述的装置,其特征在于,所述第一目标的信号强度小于第二阈值,所述处理单元,具体用于:
    根据所述源数据进行目标检测,得到至少一个第二目标和至少一个第三目标;其中,所述至少一个第二目标为速度大于或等于所述第一阈值的目标,所述至少一个第三目标为信号强度大于或等于所述第二阈值的目标;
    在所述源数据中去除所述第二目标对应的数据和所述第三目标对应的数据,得到所述至少一个第一目标对应的数据。
  14. 根据权利要求13所述的装置,其特征在于,所述处理单元,具体用于:
    根据所述至少一个第二目标对应的数据,得到所述至少一个第二目标对应的第一点扩散函数;
    根据所述至少一个第三目标对应的数据,得到所述至少一个第三目标对应的第二点扩散函数;
    在所述源数据中去除所述第一点扩散函数对应的数据和所述第二点扩散函数对应的数据;
    其中,所述第一点扩散函数中包括第二目标的主瓣数据以及所述第二目标的旁瓣数据;所述第二点扩散函数中包括第三目标的主瓣数据以及所述第三目标的旁瓣数据。
  15. 根据权利要求12所述的装置,其特征在于,所述处理单元,具体用于:
    根据所述源数据进行目标检测,得到至少一个第二目标;其中,所述至少一个第二目标为速度大于或等于所述第一阈值的目标;
    在所述源数据中去除所述第二目标对应的数据,得到所述至少一个第一目标对应的数据。
  16. 根据权利要求15所述的装置,其特征在于,所述处理单元,具体用于:
    根据所述至少一个第二目标对应的数据,得到所述至少一个第二目标对应的第一点扩散函数;所述第一点扩散函数中包括第二目标的主瓣数据以及所述第二目标的旁瓣数据;
    在所述源数据中去除所述第一点扩散函数对应的数据。
  17. 根据权利要求12-16任一项所述的装置,其特征在于,所述第一目标对应的数据包括距离速度RV谱,所述处理单元,具体用于:
    对所述至少一个第一目标对应的数据进行恢复并沿慢时间维进行拼接,得到拼接数据;
    对所述拼接数据进行成像处理,得到所述至少一个第一目标的成像结果。
  18. 根据权利要求17所述的装置,其特征在于,所述处理单元,具体用于:
    对所述拼接数据进行合成孔径雷达成像处理。
  19. 根据权利要求17或18所述的装置,其特征在于,所述处理单元,具体用于:
    利用多普勒参数估计、多普勒中心补偿和/或走动校正,对所述拼接数据进行处理,得到处理后的数据;
    对所述处理后的数据进行一阶运动补偿和二阶运动补偿,得到补偿数据;
    对所述补偿数据进行方位压缩,得到所述至少一个第一目标的成像结果。
  20. 根据权利要求19所述的装置,其特征在于,通信单元,具体用于:
    向目标设备发送所述至少一个第一目标的成像结果、所述至少一个第二目标和/或所述至少一个第三目标。
  21. 根据权利要求19所述的装置,其特征在于,所述通信单元,具体用于:
    接收来自所述雷达的所述源数据。
  22. 根据权利要求21所述的装置,其特征在于,所述至少一个第一目标包括至少一个车道线,所述处理单元,具体用于:
    根据所述至少一个第一目标的成像结果确定自动驾驶策略;和/或,
    根据所述至少一个第一目标的成像结果更新高精地图中的车道线。
  23. 一种芯片,其特征在于,所述芯片包括至少一个处理器和通信接口,所述通信接口和所述至少一个处理器耦合,所述至少一个处理器用于运行计算机程序或指令,以实现如权利要求1-11中任一项所述的目标检测方法。
  24. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,当所述指令被运行时,实现如权利要求1-11中任一项所述的目标检测方法。
  25. 一种终端,其特征在于,所述终端包括如权利要求12-22任一项所述的目标检测装置。
  26. 根据权利要求25所述的终端,其特征在于,所述终端为车辆或机器人。
PCT/CN2021/078669 2021-03-02 2021-03-02 目标检测方法和装置 WO2022183369A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/078669 WO2022183369A1 (zh) 2021-03-02 2021-03-02 目标检测方法和装置
CN202180000482.9A CN113167886B (zh) 2021-03-02 2021-03-02 目标检测方法和装置


Publications (1)

Publication Number Publication Date
WO2022183369A1 true WO2022183369A1 (zh) 2022-09-09

Family

ID=76875964


Country Status (2)

Country Link
CN (1) CN113167886B (zh)
WO (1) WO2022183369A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022183408A1 (zh) * 2021-03-03 2022-09-09 华为技术有限公司 车道线检测方法和车道线检测装置

Citations (6)

Publication number Priority date Publication date Assignee Title
CN106991389A (zh) * 2017-03-29 2017-07-28 蔚来汽车有限公司 确定道路边沿的装置和方法
CN110058239A (zh) * 2019-04-29 2019-07-26 上海保隆汽车科技股份有限公司 一种车载毫米波雷达装置及目标探测方法
CN111289980A (zh) * 2020-03-06 2020-06-16 成都纳雷科技有限公司 基于车载毫米波雷达的路边静止物的检测方法及系统
CN111712731A (zh) * 2019-07-25 2020-09-25 深圳市大疆创新科技有限公司 目标检测方法、系统及可移动平台
US20210003693A1 (en) * 2018-04-12 2021-01-07 FLIR Belgium BVBA Adaptive doppler radar systems and methods
CN112313539A (zh) * 2019-11-26 2021-02-02 深圳市大疆创新科技有限公司 护栏检测方法及设备、存储介质和可移动平台

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP4678945B2 (ja) * 2000-12-28 2011-04-27 富士通テン株式会社 スキャン式レーダの静止物検知方法
CN111781608B (zh) * 2020-07-03 2023-04-25 浙江光珀智能科技有限公司 一种基于fmcw激光雷达的运动目标检测方法及系统


Also Published As

Publication number Publication date
CN113167886B (zh) 2022-05-31
CN113167886A (zh) 2021-07-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21928460; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21928460; Country of ref document: EP; Kind code of ref document: A1)