CN113167885B - Lane line detection method and lane line detection device - Google Patents


Info

Publication number
CN113167885B
CN113167885B (application CN202180000475.9A)
Authority
CN
China
Prior art keywords
data
radar
lane line
echo
imaging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202180000475.9A
Other languages
Chinese (zh)
Other versions
CN113167885A (en)
Inventor
张慧
马莎
宋思达
吕笑宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN113167885A
Application granted
Publication of CN113167885B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90 Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/904 SAR modes
    • G01S13/9041 Squint mode
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90 Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9004 SAR image acquisition techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a lane line detection method and a lane line detection device, which can be applied to the field of automatic driving or intelligent driving. The method includes: acquiring echo data of the ground by using a radar, wherein the ground includes a lane line; preprocessing the echo data to obtain first data; filtering the first data to obtain second data, wherein the second data is the data whose amplitude is smaller than a first threshold; and performing imaging processing on the second data to obtain a synthetic aperture radar (SAR) image of the lane line. The lane line detection method and device can acquire a high-resolution SAR image of the lane line and effectively improve the accuracy of lane line detection.

Description

Lane line detection method and lane line detection device
Technical Field
The present disclosure relates to the field of assisted driving or automatic driving, and in particular, to a lane line detection method and a lane line detection device.
Background
The detection of lane lines is important in many advanced driver-assistance and automatic driving applications such as lane departure warning (LDW) and lane keeping assistance (LKA). At present, the main sources of lane line information are optical images obtained by a camera and point cloud images obtained by a laser radar (lidar). However, acquiring lane line information with a camera fails in dark scenes, and the result lacks depth information or the depth information is of insufficient accuracy. The detection capability of lidar is limited in rain, snow, haze, road dust and similar conditions. Radar (or millimeter wave radar) has all-day, all-weather working capability, but the angular resolution of the point cloud data it obtains is limited by the antenna aperture (or virtual aperture) length, the point cloud is sparse, and it is difficult to complete lane line detection with it.
Synthetic aperture radar (SAR) imaging technology can form a large synthetic aperture by using the relative motion between the radar and a target, breaking through the limitation of the real antenna aperture and realizing high-resolution imaging. One application of SAR imaging technology is to perform SAR imaging of a static parking lot or roadside parking area with a vehicle-mounted radar, so that a high-resolution image can be acquired for vacant parking space detection.
At present, there is no research on detecting lane lines using SAR imaging technology. Therefore, it is desirable to provide a lane line detection method based on SAR imaging technology to obtain a high-resolution lane line image.
Disclosure of Invention
The application provides a lane line detection method and a lane line detection device, which can acquire a high-resolution SAR image of a lane line and effectively improve the accuracy of lane line detection.
In a first aspect, a lane line detection method is provided, and the method includes: acquiring echo data of the ground by using a radar, wherein the ground comprises a lane line; preprocessing the echo data to obtain first data; filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value; and imaging the second data to obtain an SAR image of the lane line.
According to the lane line detection method provided in the embodiments of this application, the echo data acquired by the radar is preprocessed to obtain the first data; the first data is filtered using the first threshold to obtain the second data, whose amplitude is smaller than the first threshold; and imaging processing is performed on the second data to obtain the SAR image of the lane line. Because the influence of strong scattering targets is filtered out before imaging, the lane line can be displayed in the SAR image. This realizes lane line detection using SAR imaging technology, facilitates acquiring a high-resolution SAR image of the lane line, and improves the accuracy of lane line detection.
It should be understood that the echo data of the ground acquired by the radar may include echo data of a lane line and echo data of other targets around the lane line.
It should be understood that, in the application scenario of the vehicle-mounted radar, the echo data acquired by the vehicle-mounted radar includes the echo data of the lane line and the echo data of other strong scattering targets such as guardrails and vehicles. That is, the first data includes not only the data of the lane line to be detected but also the data of these other strong scattering targets.
Alternatively, the filtering process may be performed according to a preset first threshold to obtain the second data with the amplitude smaller than the first threshold.
Illustratively, the first data is a set of a plurality of sample point data, wherein each sample point data has a corresponding magnitude.
It should also be understood that the first data or the second data in the embodiments of this application may be data in the form of a two-dimensional array. Illustratively, the two-dimensional array corresponds to a two-dimensional coordinate system, where the abscissa may represent the range direction (i.e., fast time) and the ordinate may represent the azimuth direction (i.e., slow time). In the application scenario of the vehicle-mounted radar, the azimuth direction is the moving direction of the vehicle (i.e., the moving direction of the vehicle-mounted radar), and the range direction is perpendicular to the moving direction of the vehicle.
With reference to the first aspect, in some implementations of the first aspect, the filtering the first data to obtain second data includes: removing the data whose amplitude is greater than or equal to the first threshold from the first data to obtain the second data.
This way of acquiring the second data is simple and easy to implement, effectively reduces the computational load on the data processing device, and improves lane line detection efficiency.
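As an illustrative sketch (not the patent's implementation; the function name, the zeroing convention, and the array layout are assumptions), the threshold filtering described above can be expressed on a two-dimensional complex data array as follows:

```python
import numpy as np

def filter_strong_scatterers(first_data: np.ndarray, first_threshold: float) -> np.ndarray:
    """Hypothetical sketch of the filtering step: samples whose amplitude is
    greater than or equal to the first threshold (strong scattering targets)
    are removed, i.e. zeroed, leaving only the weak lane-line echoes.

    first_data: 2-D complex array, rows = azimuth (slow time),
                columns = range (fast time).
    """
    second_data = first_data.copy()
    second_data[np.abs(second_data) >= first_threshold] = 0.0  # remove strong echoes
    return second_data
```

Zeroing (rather than excising) the strong samples keeps the array shape intact for the subsequent imaging steps; whether the patent zeroes or excises these samples is not specified.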
With reference to the first aspect, in some implementation manners of the first aspect, the filtering the first data to obtain second data includes: obtaining a point spread function of the data according to the data of which the amplitude is greater than or equal to a first threshold value in the first data; and removing the data corresponding to the point spread function from the first data to obtain the second data.
It should be understood that the data having the amplitude greater than or equal to the first threshold may be referred to as main lobe data, and the data corresponding to the point spread function may be referred to as side lobe data.
It should be appreciated that the point spread function describes the response of an imaging system to a point target and is used to estimate the minimum spatially resolvable distance of the imaging system.
It will also be appreciated that the data processing device may obtain the point spread function of the data whose amplitude is greater than or equal to the first threshold in two different ways, and that the point spread functions corresponding to different data are equivalent to translated and amplitude-phase-transformed versions of the same signal form.
Optionally, if the data processing device performs windowing during the imaging processing, the influence of the window function on the main lobe width and side lobe intensity of the range envelope of the point spread function needs to be considered during data filtering.
In this embodiment of the application, the point spread function is obtained from the data whose amplitude is greater than or equal to the first threshold, so that while the main lobe data is removed, the side lobe data corresponding to its point spread function is also filtered out, which improves the accuracy of lane line detection more effectively.
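A minimal one-dimensional sketch of this side-lobe-removal idea (a CLEAN-style subtraction; the names and the 1-D simplification are assumptions, since the patent does not give a concrete algorithm):

```python
import numpy as np

def remove_psf_response(data: np.ndarray, psf: np.ndarray, first_threshold: float) -> np.ndarray:
    """For every sample whose amplitude reaches the first threshold (main lobe
    data), subtract a copy of the point spread function scaled by that sample,
    removing both the main lobe and the associated side lobe data.

    psf is assumed to be centered (peak at len(psf) // 2) and peak-normalized.
    """
    cleaned = data.astype(complex).copy()
    half = len(psf) // 2
    for idx in np.flatnonzero(np.abs(data) >= first_threshold):
        lo = max(idx - half, 0)
        hi = min(idx - half + len(psf), len(data))
        cleaned[lo:hi] -= data[idx] * psf[lo - (idx - half):hi - (idx - half)]
    return cleaned
```

In this toy model, a strong scatterer and its side lobes cancel exactly; in practice the subtraction is only approximate, which is why the threshold-based removal of the main lobe remains the core step.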
With reference to the first aspect, in some implementations of the first aspect, the preprocessing the echo data to obtain first data includes: performing Doppler parameter estimation, motion compensation and range-direction compression on the echo data to obtain the first data.
It should be understood that, if the echo data obtained by the radar is forward squint echo data (that is, the radar operates in the forward squint mode) or backward squint echo data (that is, the radar operates in the backward squint mode), then after Doppler parameter estimation and motion parameter estimation are performed on the echo data, range linear walk compensation and Doppler centroid correction are further required, so that the processed squint data becomes equivalent to front-side-view (broadside) data, after which subsequent processing such as motion compensation is performed.
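Of the preprocessing steps listed above, the range-direction compression step can be sketched as a frequency-domain matched filter applied along fast time (an illustrative sketch assuming a known transmitted chirp replica; this is a standard pulse-compression technique, not the patent's specific algorithm):

```python
import numpy as np

def range_compress(echo: np.ndarray, replica: np.ndarray) -> np.ndarray:
    """Pulse (range) compression: correlate each fast-time row of the echo
    with the transmitted waveform replica via FFT-based matched filtering."""
    n = echo.shape[-1]
    matched = np.conj(np.fft.fft(replica, n))   # matched filter = conjugated replica spectrum
    return np.fft.ifft(np.fft.fft(echo, n, axis=-1) * matched, axis=-1)
```

For a linear FM chirp replica, a point target collapses to a narrow sinc-like peak whose width sets the range resolution c/(2B) discussed later in this document.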
With reference to the first aspect, in some implementations of the first aspect, the performing imaging processing on the second data to obtain an SAR image of the lane line includes: performing quantization, Doppler modulation frequency estimation, azimuth phase error correction and azimuth compression on the second data to obtain the SAR image.
It should be understood that the above quantization process may be performed in any step after the above filtering process, or may be performed after the SAR image is acquired, which is not limited in this embodiment of the application.
With reference to the first aspect, in certain implementations of the first aspect, the SAR image is an image in the slant range plane, and the method further includes: performing geometric deformation correction and coordinate transformation on the SAR image to obtain an image in the ground plane.
In this embodiment of the application, by converting the image in the slant range plane into an image in the ground range plane, the obtained lane line image is made closer to the lane line on the real road surface, which improves the accuracy of lane line detection more effectively.
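Under a flat-ground assumption, the slant-range-to-ground-range part of this correction can be sketched as follows (a simplified illustration; the patent does not spell out its geometric deformation correction, and the function name is hypothetical):

```python
import math

def slant_to_ground_range(slant_range_m: float, install_height_m: float) -> float:
    """Project a slant range measured by a radar mounted at height H onto the
    ground plane: ground = sqrt(slant^2 - H^2) (flat-ground assumption)."""
    if slant_range_m < install_height_m:
        raise ValueError("slant range cannot be shorter than the radar height")
    return math.sqrt(slant_range_m**2 - install_height_m**2)
```

Applying this per range sample stretches near-range pixels more than far-range pixels, which is exactly the geometric deformation the correction removes.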
With reference to the first aspect, in certain implementations of the first aspect, the transmission signal bandwidth B, the downward view angle θ, and the squint angle β of the radar satisfy:
c / (2B · sinθ · cosβ) ≤ Δx,
wherein Δx is the width of the lane line, and c is the propagation velocity of the electromagnetic wave.
Optionally, in actual calculation, c = 3 × 10^8 m/s.
Optionally, to ensure that the lane lines are within the imaging area during normal driving, the width ΔX of the single-side swath (i.e., the width of the imaging area) should satisfy ΔX ≥ L1 − L2, where L1 is the lane width and L2 is the vehicle width.
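These design constraints can be checked numerically. The sketch below assumes the standard SAR ground-range-resolution expression c/(2B·sinθ·cosβ) for the bandwidth/angle constraint (an assumption; the patent's exact formula may differ), together with the swath condition ΔX ≥ L1 − L2; all names and the example values are illustrative:

```python
import math

C = 3e8  # propagation velocity of the electromagnetic wave, m/s

def ground_range_resolution(bandwidth_hz: float, down_view_angle_rad: float,
                            squint_angle_rad: float) -> float:
    """Assumed resolution model: slant-range resolution c / (2B) projected
    onto the ground through downward view angle theta and squint angle beta."""
    return C / (2 * bandwidth_hz
                * math.sin(down_view_angle_rad) * math.cos(squint_angle_rad))

def radar_parameters_ok(bandwidth_hz: float, down_view_angle_rad: float,
                        squint_angle_rad: float, lane_line_width_m: float,
                        lane_width_m: float, vehicle_width_m: float,
                        swath_width_m: float) -> bool:
    """True if the lane line is resolvable and stays inside the imaged swath."""
    resolvable = ground_range_resolution(
        bandwidth_hz, down_view_angle_rad, squint_angle_rad) <= lane_line_width_m
    in_swath = swath_width_m >= lane_width_m - vehicle_width_m  # delta X >= L1 - L2
    return resolvable and in_swath
```

For example, with B = 4 GHz, θ = 30° and β = 0°, the assumed model gives a 7.5 cm ground-range resolution, finer than a typical 15 cm lane line.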
Optionally, the installation height H, the squint angle β and the downward view angle θ of the radar are set so that, as far as possible, the beam is not reflected by the vehicle body.
With reference to the first aspect, in certain implementations of the first aspect, the radar operates in a front-side-view, forward squint or backward squint mode, and the imaging area of the radar is the road surface including a lane line on one side or both sides of the vehicle.
It should be understood that when the radar operates in the front-side-view mode, the squint angle β of the radar is 0°; when the radar operates in the backward squint mode, the squint angle β of the radar takes a negative value.
According to the lane line detection method, in the application scenario of the vehicle-mounted radar, when the vehicle-mounted radar adopts the forward squint working mode, lane line information ahead of the vehicle can be obtained in advance, so that the vehicle or the driver can learn the road condition ahead in advance and flexibly adjust the driving route accordingly. When the vehicle-mounted radar adopts the front-side-view or backward squint working mode, the method is suitable for applications such as map drawing.
With reference to the first aspect, in certain implementations of the first aspect, the radar is a millimeter wave radar.
It should be appreciated that millimeter wave radar can penetrate smoke, dust and fog, so that it can work all day and in all weather. In the application scenario of the vehicle-mounted radar, using millimeter wave radar as the vehicle-mounted radar therefore better supports assisted driving or automatic driving.
It should also be understood that the lane line detection method provided by the embodiment of the present application is also applicable to radars in other frequency bands.
In a second aspect, a lane marking detection apparatus is provided for performing the method of any one of the possible implementations of the first aspect. In particular, the apparatus comprises means for performing the method of any one of the possible implementations of the first aspect described above.
In a third aspect, there is provided another lane line detection apparatus, including a processor, coupled to a memory, and configured to execute instructions in the memory to implement the method in any one of the possible implementations of the first aspect. Optionally, the apparatus further comprises a memory. Optionally, the apparatus further comprises a communication interface, the processor being coupled to the communication interface.
In one implementation, the lane line detection apparatus is a data processing device. When the lane line detection apparatus is a data processing device, the communication interface may be a transceiver, or an input/output interface.
In another implementation manner, the lane line detection device is a chip configured in the server. When the lane line detection device is a chip configured in a server, the communication interface may be an input/output interface.
In a fourth aspect, a processor is provided, comprising: input circuit, output circuit and processing circuit. The processing circuit is configured to receive a signal via the input circuit and transmit a signal via the output circuit, so that the processor performs the method of any one of the possible implementations of the first aspect.
In a specific implementation process, the processor may be a chip; the input circuit may be an input pin; the output circuit may be an output pin; and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, or the like. The input signal received by the input circuit may be received and input by, for example and without limitation, a receiver; the signal output by the output circuit may be output to and transmitted by, for example and without limitation, a transmitter; and the input circuit and the output circuit may be the same circuit, which functions as the input circuit and the output circuit respectively at different times. The embodiments of this application do not limit the specific implementations of the processor and the various circuits.
In a fifth aspect, a processing apparatus is provided that includes a processor and a memory. The processor is configured to read instructions stored in the memory, and may receive signals via the receiver and transmit signals via the transmitter to perform the method of any one of the possible implementations of the first aspect.
Optionally, there are one or more processors and one or more memories.
Alternatively, the memory may be integrated with the processor, or provided separately from the processor.
In a specific implementation process, the memory may be a non-transient memory, such as a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips.
It will be appreciated that the associated data interaction process, for example sending the indication information, may be a process of outputting the indication information from the processor, and receiving the capability information may be a process of the processor receiving the input capability information. Specifically, the data output by the processor may be output to a transmitter, and the input data received by the processor may come from a receiver. The transmitter and receiver may be collectively referred to as a transceiver.
The processing device in the fifth aspect may be a chip, the processor may be implemented by hardware or software, and when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a sixth aspect, a computer program product is provided, the computer program product comprising: computer program (also called code, or instructions), which when executed, causes a computer to perform the method of any of the possible implementations of the first aspect described above.
In a seventh aspect, a computer-readable storage medium is provided, which stores a computer program (which may also be referred to as code or instructions) that, when executed on a computer, causes the computer to perform the method in any of the possible implementations of the first aspect.
In an eighth aspect, a terminal is provided, which may be a vehicle or a smart device (e.g., a smart home or a smart manufacturing device, etc.), including a drone, an unmanned vehicle, an automobile, or a robot, and the vehicle or the smart device includes the apparatus in any possible implementation manner of the second, third, or fifth aspect.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a lane line detection method provided in an embodiment of the present application;
FIG. 3 is a three-dimensional geometry of radar imaging provided by embodiments of the present application;
FIG. 4 is a schematic view of an imaging region provided by an embodiment of the present application;
FIG. 5 is a schematic view of another imaging region provided by embodiments of the present application;
FIG. 6 is a schematic flow chart of an imaging algorithm provided herein;
FIG. 7 is a schematic flow chart of another imaging algorithm provided herein;
FIG. 8 is a schematic flow chart of yet another imaging algorithm provided by an embodiment of the present application;
fig. 9 is a schematic block diagram of a lane line detection apparatus according to an embodiment of the present application;
fig. 10 is a schematic block diagram of another lane line detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
For the convenience of clearly describing the technical scheme of the embodiment of the present application, the following points are explained first.
First, in the embodiments shown below, terms and English abbreviations, such as strong scattering target and point spread function, are given as illustrative examples for convenience of description and should not limit this application in any way. This application does not exclude the possibility that other terms may be defined in existing or future protocols to perform the same or similar functions.
Second, the first, second and various numerical numbering in the embodiments shown below are merely for convenience of description and are not intended to limit the scope of the embodiments of the present application. For example, to distinguish between different data.
Third, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, and c, may represent: a, or b, or c, or a and b, or a and c, or b and c, or a, b and c, wherein a, b and c can be single or multiple.
Fourth, words such as "exemplary" or "e.g.," mean an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
To facilitate understanding of the embodiments of the present application, the following description will be made of terms related to the present application.
1. All-day working capability: the ability to work both in daytime and at night.
2. All-weather working capability: the ability to work in sunny, rainy, foggy, snowy and other weather conditions.
3. Resolution: the minimum separation at which two targets can be distinguished. It is typically taken as the 3 decibel (dB) resolution, i.e., the half-power-point resolution, or as the Rayleigh resolution, which is half the null-to-null width of the main lobe.
4. Radar: an electronic device that detects targets using electromagnetic waves. The radar irradiates a target with a transmitted electromagnetic wave and receives the target's echo, thereby obtaining information such as the distance from the target to the transmitting point, the rate of change of that distance (radial velocity), the azimuth and the altitude.
In operation, the radar periodically transmits a pulse signal and samples the echo signal over the duration of the pulse. Although the sampling interval of the echo signal and the pulse repetition interval (i.e., the period of the pulse signal) lie on the same time axis, they differ greatly in magnitude.
5. Main lobe and side lobes: side lobes may also be referred to as secondary lobes. In radar target detection, the main lobe is the region around the direction of maximum radiation, generally the region within 3 dB of the main beam peak, and is the main working direction of the radar. It can be understood that the main lobe contains the radar waves with the strongest radiation intensity. The lobes other than the main lobe are side lobes, smaller beams radiating around the main beam. It can be appreciated that the presence of side lobes may reduce the radar's ability to detect targets.
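The masking effect of side lobes can be made concrete: an unweighted (rectangular) aperture produces a sinc-shaped response whose first side lobe is only about 13 dB below the main lobe, so the side lobes of a strong scatterer can exceed the main lobe of a weak lane-line echo. An illustrative computation (not from the patent; the aperture size is arbitrary):

```python
import numpy as np

# Response of an unweighted 64-element aperture, finely sampled via a zero-padded FFT.
n_elements, n_fft = 64, 4096
response = np.abs(np.fft.fftshift(np.fft.fft(np.ones(n_elements), n_fft)))
response /= response.max()

# Walk outward from the main-lobe peak to the first null, then take the
# highest side lobe beyond it: the peak side lobe level.
i = n_fft // 2
while response[i + 1] < response[i]:
    i += 1
pslr_db = -20 * np.log10(response[i:].max())
# pslr_db is roughly 13-13.5 dB for a rectangular aperture
```

A lane-line echo more than ~13 dB weaker than a nearby guardrail echo would thus sit below the guardrail's first side lobe, motivating the filtering step of this application.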
In many advanced driver-assistance and automatic driving applications such as lane departure warning and lane keeping assistance, the detection of lane lines is important for ensuring the normal driving of the vehicle. At present, in the process of lane line detection, the main sources of lane line information are optical images obtained by a camera and point cloud images obtained by a lidar.
One existing lane line detection method is as follows: first, lane line detection and parameter estimation are performed based on visual detection; then, boundary line detection is performed using lidar detection data; finally, the lane line detection results and the boundary line results are fused to obtain lane detection parameters. Another lane line detection method is as follows: lane lines captured by a camera are detected using a machine vision method; static guardrails on both sides of the road are detected by a millimeter wave radar to obtain road boundary information; current road information is then obtained using a low-precision Global Positioning System (GPS); and the lane line and road edge information are fused to realize lane-level positioning.
However, the above methods have two disadvantages. On the one hand, camera-based acquisition of lane line information fails in dark scenes and lacks depth information, or the depth information is of insufficient accuracy. On the other hand, the radar performs road edge detection (e.g., of guardrails) rather than lane line detection; the road edge obtained by the radar is low-resolution point cloud data, and no high-resolution radar image containing a lane line is obtained.
It should be understood that radar point cloud data is generally obtained by multiple-input multiple-output (MIMO) technology. The angular resolution of the point cloud data obtained in this way is limited by the antenna aperture (or virtual aperture) length, and the point cloud is sparse and of low resolution. Therefore, it is difficult to detect lane lines from the point cloud image acquired by the radar.
SAR imaging technology can form a large synthetic aperture by using the relative motion between the radar and a target, breaking through the limitation of the real antenna aperture and realizing high-resolution imaging. The principle of the SAR imaging technique is as follows: a small antenna on the radar is used as a single radiating element; the element is moved continuously along a straight line, and echo signals of the same target are received and processed at different positions, so that a higher-resolution image of the target can be obtained. In other words, by moving, the small antenna is combined into an equivalent "large antenna". Therefore, SAR imaging is performed under the condition that the radar moves and the target is stationary, which ensures relative motion between the radar and the target. That is, in the application scenario of the vehicle-mounted radar, the vehicle-mounted radar is in motion, i.e., the vehicle carrying the vehicle-mounted radar is driving.
One application of SAR imaging technology is to perform SAR imaging of a static parking lot or roadside parking area with a vehicle-mounted radar, so that a high-resolution image can be acquired for vacant parking space detection. At present, there is no research or report on detecting lane lines using SAR imaging technology. Therefore, it is desirable to provide a lane line detection method based on SAR imaging technology to obtain a high-resolution lane line image.
Fig. 1 illustrates an application scenario 100 provided in an embodiment of the present application. As shown in fig. 1, a vehicle 110 equipped with a vehicle-mounted radar travels in lane 1. During driving, surrounding targets, such as the speed limit sign 120, the road surface sign 130, the lane line 140 and the lane line 150 around the vehicle 110, can be detected by the vehicle-mounted radar on the vehicle 110. The lane line 140 is used to separate vehicles traveling in the same direction and indicates that vehicles cannot cross it to change lanes; the lane line 150 is used to separate vehicles traveling in the same direction and indicates that vehicles can cross it to change lanes when it is safe to do so.
Optionally, the vehicle 110 in the scenario 100 may be a vehicle configured with an advanced driving assistance system, or the vehicle 110 may also be a vehicle configured with an intelligent driving system, which is not limited in this embodiment of the application.
It should be understood that the application scenario 100 provided in the embodiment of the present application is only an example, and does not set any limit to the application scenario of the embodiment of the present application. Optionally, the application scenario 100 may further include other vehicles, roadside barriers, and other objects not shown in fig. 1, which is not limited in this embodiment of the present application.
In the application scenario shown in fig. 1, the vehicle-mounted radar of the vehicle 110 transmits a signal, which is reflected when it encounters any of the above-mentioned targets (the speed limit sign 120, the road surface sign 130, the lane line 140, or the lane line 150), generating an echo signal. Since the surroundings of the vehicle 110 include multiple targets of different types, the reflected echo signals received by the vehicle-mounted radar also include multiple echo signals of different intensities.
It should be understood that the intensity of the echo signal is related to the material of the target, the roughness of the target surface, and the energy of the radar transmit signal. For example, echo signals reflected by targets such as metal guardrails, stationary vehicles, or obstacles have high intensity, while echo signals reflected by targets such as lane lines have low intensity. For convenience of description, a target corresponding to a higher-intensity echo signal is referred to as a strong scattering target (e.g., a guardrail or a vehicle), and a target corresponding to a lower-intensity echo signal is referred to as a weak scattering target (e.g., a lane line).
Because the echo signals include multiple signals of different intensities, the echo signals of strong scattering targets usually submerge those of weak scattering targets. For example, the echo signal of the speed limit sign 120 may submerge the echo signals of the road surface sign 130, the lane line 140, or the lane line 150. Therefore, when the SAR imaging technique is used to acquire an SAR image, even if a lane line exists in the current scene, the amplitude of the lane line is submerged by the side lobes of strong scattering targets such as roadside guardrails or vehicles, and the lane line information on the ground is difficult to see in the acquired SAR image.
In view of this, the present application provides a lane line detection method and a lane line detection apparatus: echo data acquired by a radar is preprocessed to obtain first data; the first data is filtered to obtain second data whose amplitude is smaller than a first threshold; and the second data is imaged to obtain an SAR image of the lane line. By filtering out the influence of strong scattering targets during imaging, the lane line can be displayed in the SAR image, lane line detection using SAR imaging technology is realized, a high-resolution SAR image of the lane line can be acquired, and the accuracy of lane line detection is improved.
It should be understood that the lane line detection method provided in the embodiment of the present application may be applied to a radar of a Linear Frequency Modulated Continuous Wave (LFMCW) system. The radar signal regime may also be extended to digitally modulated radars, for example, the radar signal regime may be Phase Modulated Continuous Wave (PMCW).
It should also be understood that the method of the embodiment of the present application may be executed by a data processing device, and may also be executed by a chip in the data processing device, which is not limited in this embodiment of the present application. The embodiments of the present application are described taking a data processing apparatus as an example. It should be understood that, in an application scenario of the vehicle-mounted radar, the data processing device may be the vehicle-mounted radar, and may also be other devices installed on the vehicle together with the vehicle-mounted radar, which is not limited in this application.
The following describes the lane line detection method 200 provided in the embodiment of the present application in detail with reference to fig. 2.
Fig. 2 is a schematic flowchart of a lane line detection method 200 according to an embodiment of the present disclosure. It should be understood that the method 200 may be applied to the application scenario 100 shown in fig. 1, and may also be applied to an airborne radar or other scenarios, which is not limited by the embodiment of the present application. As shown in fig. 2, the method 200 includes:
s201, acquiring echo data of the ground by using a radar, wherein the ground comprises a lane line.
It should be understood that the echo data of the ground acquired by the radar may include echo data of a lane line and echo data of other targets around the lane line.
As known from the radar operating principle, the echo sampling interval and the pulse repetition interval (i.e., the period of the pulse signal) lie on the same time axis but differ greatly in magnitude. Thus, to facilitate processing of the resulting sampled data, the data processing device may separate the echo sampling interval and the pulse repetition period into two dimensions, referred to as fast time and slow time respectively. For example, the data processing device may place the echo samples of each pulse interval in one row and store the resulting sample data (i.e., the echo data described above) as a two-dimensional array. Illustratively, the two-dimensional array may correspond to a two-dimensional coordinate system with the horizontal axis representing fast time (i.e., the echo sampling interval) and the vertical axis representing slow time (i.e., the pulse repetition period).
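As a minimal illustration of this fast-time/slow-time arrangement (the pulse and sample counts below are arbitrary assumptions, not values from the text), a one-dimensional stream of echo samples can be folded into such a two-dimensional array:

```python
import numpy as np

# Assumed parameters: 8 pulses (slow time), 16 samples per pulse (fast time)
num_pulses = 8          # slow-time dimension: one row per pulse repetition interval
samples_per_pulse = 16  # fast-time dimension: samples within one pulse interval

# Placeholder 1-D stream of complex echo samples, as an ADC might deliver them
stream = np.arange(num_pulses * samples_per_pulse, dtype=np.complex64)

# Fold the stream into a 2-D array: rows = slow time, columns = fast time
echo = stream.reshape(num_pulses, samples_per_pulse)

print(echo.shape)  # (8, 16)
```

Each row then holds the samples of one pulse repetition interval, which is the layout the later range (fast-time) and azimuth (slow-time) processing steps operate on.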
S202, preprocessing the echo data to obtain first data.
For example, the data processing device may perform a series of preprocessing such as doppler parameter estimation, motion compensation, and range compression on the echo data, and convert the echo data acquired in S201 into image data (i.e., the first data).
It should be understood that in an application scenario of the vehicle-mounted radar, the echo data acquired by the vehicle-mounted radar includes echo data of a lane line and echo data of other strong scattering targets such as guardrails and vehicles. That is, the first data includes not only the data of the lane line to be detected, but also the data of other strong scattering targets other than the data of the lane line.
S203, filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value.
Since the first data aliases the data of strong scattering targets with the data of weak scattering targets such as lane lines, the data processing device can, through filtering processing, filter out the data of strong scattering targets that affects lane line imaging in the first data, thereby obtaining data that includes the weak scattering targets (i.e., the second data).
Alternatively, the filtering process may be performed according to a preset first threshold to obtain the second data with the amplitude smaller than the first threshold.
Illustratively, the first data may be a set of a plurality of sample point data, wherein each sample point data corresponds to a magnitude value.
It should also be understood that the first data or the second data in the embodiments of the present application may be data in the form of a two-dimensional array. For example, the two-dimensional array may correspond to a two-dimensional coordinate system, and the abscissa of the two-dimensional coordinate system may represent the distance direction (i.e., fast time) and the ordinate of the two-dimensional coordinate system may represent the azimuth direction (i.e., slow time). In the application scenario of the vehicle-mounted radar, the azimuth direction represents the moving direction of the vehicle (i.e. the moving direction of the vehicle-mounted radar), and the range direction represents the direction perpendicular to the movement of the vehicle.
And S204, imaging the second data to obtain an SAR image of the lane line.
According to the lane line detection method provided by the embodiment of the application, the first data is obtained by preprocessing the echo data acquired by the radar, the first data is filtered by using the first threshold value, the second data smaller than the first threshold value is obtained, and the second data is subjected to imaging processing, so that the SAR image of the lane line is obtained. The method and the device filter the influence of the strong scattering target in the imaging process, so that the lane line can be displayed in the SAR image, the detection of the lane line by utilizing the SAR imaging technology is realized, the SAR image of the lane line with high resolution is favorably acquired, and the accuracy of lane line detection is improved.
In the embodiment of the present application, the data processing device may obtain the second data in two possible ways.
In one possible implementation manner, the data processing device may delete data of which the amplitude is greater than or equal to the first threshold value in the first data, and obtain the second data.
This way of acquiring the second data is simple to implement, effectively reduces the computational load on the data processing device, and improves lane line detection efficiency.
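A minimal sketch of this first implementation, assuming the first data is held as a complex NumPy array (the array contents and threshold value are illustrative assumptions). "Deleting" a sample is realized here by zeroing it, a design choice that preserves the range/azimuth geometry needed for later imaging:

```python
import numpy as np

def filter_strong_scatterers(first_data, threshold):
    """Suppress samples whose amplitude is >= the first threshold (strong
    scatterers), keeping only data whose amplitude is below the threshold."""
    second_data = first_data.copy()
    second_data[np.abs(second_data) >= threshold] = 0
    return second_data

# Toy profile: one strong scatterer (amplitude 10) among weak lane-line returns
first = np.array([0.2 + 0.1j, 10.0 + 0.0j, 0.3 - 0.2j], dtype=np.complex64)
second = filter_strong_scatterers(first, threshold=1.0)
print(np.abs(second).max() < 1.0)  # True
```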
In another possible implementation manner, the data processing device may obtain a point spread function of the data with the amplitude greater than or equal to the first threshold according to the data with the amplitude greater than or equal to the first threshold in the first data, and remove the data corresponding to the point spread function from the first data to obtain the second data.
It should be understood that data between two first zero-crossing points on both sides of the maximum peak of the point spread function may be referred to as main lobe data, and data other than the main lobe data corresponding to the point spread function may be referred to as side lobe data.
This way of acquiring the second data removes the main lobe data of the point spread function corresponding to the data and additionally filters out its side lobe data, eliminating the influence of both the main lobe and the side lobes of the strong scattering target and effectively improving the accuracy of lane line detection.
It should be appreciated that the point spread function is a function used to estimate the minimum spatially resolved distance of an imaging system.
It should also be understood that the data processing device may obtain the point spread function of the data with the amplitude greater than or equal to the first threshold value in two different ways, which is not limited in the embodiment of the present application.
In mode 1, the data processing device may obtain the sampling point serial numbers corresponding to the data whose amplitude is greater than or equal to the first threshold, and input those serial numbers together with the amplitudes of that data into an objective function (the type of the objective function may be, for example, a sinc-type function) to obtain the point spread function of the data whose amplitude is greater than or equal to the first threshold. Illustratively, the embodiment of the present application refers to the determined function as the target sinc-type function.
It should be understood that the point spread functions corresponding to different data share the same signal form and differ only by a translation and an amplitude/phase transformation.
For example, taking any data whose amplitude is greater than or equal to the first threshold, with sampling point serial number i, the data processing device may determine the signal amplitude A_i of the data from the amplitude in the range unit where the peak of the data is located, and input the sampling point serial number i and the amplitude A_i into the target sinc-type function to obtain the point spread function of the data, from which the corresponding amplitudes of the data at the other range units can be obtained. The data processing device may further subtract the corresponding amplitude of the data's point spread function in each range unit from the data of the corresponding range profile, thereby filtering the side lobes of the data and obtaining the second data.
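The subtraction described above can be sketched for a single range profile. This is a simplified, real-valued illustration under stated assumptions: the target sinc-type function is taken as a unit-spaced `np.sinc`, the strong scatterer's peak is bin-aligned, and the threshold and amplitudes are arbitrary:

```python
import numpy as np

def sinc_psf(num_bins, peak_bin, amplitude):
    """Assumed point spread function: an amplitude-scaled sinc centred on the
    strong scatterer's peak range bin (np.sinc(x) = sin(pi*x)/(pi*x))."""
    return amplitude * np.sinc(np.arange(num_bins) - peak_bin)

def remove_strong_scatterers(profile, threshold):
    """Reconstruct and subtract the PSF (main lobe plus side lobes) of every
    sample whose amplitude is >= the threshold."""
    cleaned = profile.astype(float)
    for peak_bin in np.flatnonzero(np.abs(profile) >= threshold):
        cleaned -= sinc_psf(len(profile), peak_bin, profile[peak_bin])
    return cleaned

# Weak lane-line return (0.3) everywhere, one strong scatterer added at bin 3
profile = 0.3 * np.ones(8)
profile[3] += 8.0
cleaned = remove_strong_scatterers(profile, threshold=1.0)
print(np.abs(cleaned).max() <= 0.3 + 1e-9)  # True
```

Note that the weak return co-located with the strong scatterer is also removed, consistent with deleting all data corresponding to the point spread function; in real data the peak generally falls between bins, so the side lobes are nonzero at the sample positions and the subtraction matters over the whole profile.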
It will be appreciated that the objective function employed to obtain the point spread function described above is different under different SAR imaging algorithms, i.e. the data processing device may determine the objective function in accordance with the SAR imaging processing algorithm employed. Exemplarily, the SAR imaging algorithm may include a Frequency Scaling (FS) imaging algorithm, a Range Doppler (RD) imaging algorithm, and the like.
It should also be appreciated that windowing is typically performed during imaging to increase the peak-to-side-lobe ratio and reduce side-lobe effects. Therefore, whether windowing is performed during imaging, and which window function is selected, both affect the form of the objective function.
In the following, taking as an example the case where no windowing is performed when SAR imaging is applied to the echo signal acquired by a radar under the FMCW system, how the target sinc-type function corresponding to the SAR imaging algorithm is determined is described in detail. Illustratively, after range compression is completed, the expression of the SAR signal in the azimuth time domain is as follows:

$$ s(\tau,\zeta)=A_i\,p_r\!\left(\tau-\frac{2R_i(\zeta)}{c}\right)\omega_a(\zeta-\zeta_c)\exp\!\left(-j\,\frac{4\pi R_i(\zeta)}{\lambda}\right) \qquad (1) $$

where τ represents the fast time in the range direction, ζ represents the slow time in the azimuth direction, ζ_c denotes the beam-center offset time, A_i is the gain associated with the backscatter coefficient of the target (the gain of the i-th target), R_i(ζ) is the instantaneous slant range of the i-th target, and λ is the wavelength. The range envelope p_r(·) is of the sinc type and contains the target range migration, which varies with azimuth; the latter two terms give the azimuth gain and phase, which are independent of range.
The expression of the range-compressed data in the azimuth frequency domain is as follows:

$$ S(\tau,f_\zeta)=A_i\,p_r\!\left(\tau-\frac{2R_i(f_\zeta)}{c}\right)W_a(f_\zeta-f_{\zeta_c})\exp\!\left(-j\,\frac{4\pi R_{0i}}{\lambda}\right)\exp\!\left(j\,\frac{\pi f_\zeta^{2}}{K_a}\right) \qquad (2) $$

where f_ζ denotes the azimuth frequency (also called the Doppler frequency), W_a(f_ζ − f_{ζ_c}) is the frequency-domain form of the azimuth antenna pattern ω_a(ζ − ζ_c), K_a denotes the azimuth FM rate, which is determined by parameters such as the wavelength, the radar speed, and the target distance, and R_{0i} denotes the closest slant range of the i-th target. Similar to equation (1), the last three terms in the above equation give the azimuth gain and phase, which are independent of range, while the sinc-type range envelope is unchanged.
For the range-compressed azimuth time-domain and azimuth frequency-domain data, the discrete form of the range-direction target sinc function can be expressed as:

$$ s_i(m,n)=\alpha\,\operatorname{sinc}\bigl\{\,b\,[\,a\,m-R_i(n)\,]\,\bigr\} \qquad (3) $$

where m represents the range sampling-unit number, n represents the azimuth sampling-unit number, and R_i(n) represents the distance from the i-th target to the radar at azimuth moment n. The parameter α represents the amplitude, the parameter a relates the range sampling points to the actual distance, and the parameter b relates to the main-lobe/side-lobe width of the sinc-type function; in general, the parameters a and b can be determined from the radar system parameters or from the sampling points of the range envelope function.
Range compression is performed on the dechirped echo signal of a radar under the FMCW system; ideally, the sinc-type range envelope is expressed as:

$$ p_r(m)=\operatorname{sinc}\!\left(m-\frac{2K_r R\,M}{c\,F_s}\right) \qquad (4) $$

where M represents the number of range-dimension samples, K_r represents the chirp rate of the transmitted signal, R represents the slant range of the target, c represents the speed of light, and F_s represents the signal sampling rate.
Formula (3) is the target sinc-type function corresponding to the FS imaging algorithm. It should be understood that, in mode 1 above, when obtaining the point spread function of data whose amplitude is greater than or equal to the first threshold, the distance R_i(n) corresponding to the data's range sampling-unit number m and azimuth sampling-unit number n, together with the amplitude α of the data, may be substituted into formula (3) to obtain the point spread function of the data.
It should be understood that the sampling point number i of the data in the above example includes the distance direction sampling unit number m and the azimuth direction sampling unit number n of the data.
In mode 2, the data processing apparatus may estimate, by a least square method, an amplitude and a phase error of a point spread function of data whose amplitude is greater than or equal to a first threshold, and obtain the point spread function of the data from an objective function (e.g., a target sinc-type function).
Illustratively, the point spread function is obtained by estimating the parameters α, a, and b in formula (3) according to the least squares principle, based on the data in the first data whose amplitude is greater than or equal to the first threshold together with L data values (L being an integer greater than or equal to 0) on each side of that data along the range-dimension profile.
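A coarse sketch of this estimation under stated assumptions: the model follows the sinc form of formula (3), the target distance and sample values are synthetic, and a grid search over (a, b) with a closed-form least-squares solution for α stands in for whatever solver the device would actually use:

```python
import numpy as np

R = 5.0  # assumed known target distance R_i(n) at this azimuth moment

def sinc_model(m, alpha, a, b):
    """Assumed sinc-type PSF model: alpha * sinc(b * (a*m - R))."""
    return alpha * np.sinc(b * (a * m - R))

def estimate_psf_params(m, samples):
    """Least-squares fit of (alpha, a, b): for each (a, b) on a grid, alpha
    is solved in closed form; the triple with the smallest residual wins."""
    best, best_resid = None, np.inf
    for a in np.linspace(0.8, 1.2, 41):       # step 0.01
        for b in np.linspace(0.8, 1.6, 81):   # step 0.01
            basis = np.sinc(b * (a * m - R))
            alpha = (basis @ samples) / (basis @ basis)
            resid = np.sum((samples - alpha * basis) ** 2)
            if resid < best_resid:
                best, best_resid = (alpha, a, b), resid
    return best

# Peak sample (m = 5) plus L = 3 samples on each side, generated noiselessly
m = np.arange(2.0, 9.0)
samples = sinc_model(m, 8.0, 1.0, 1.2)
alpha, a, b = estimate_psf_params(m, samples)
print(round(alpha, 3), round(a, 3), round(b, 3))  # 8.0 1.0 1.2
```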
It should be understood that the determination method of the objective function in the mode 2 is similar to the mode 1, and the detailed description is omitted here.
Optionally, if the data processing device performs windowing in the imaging processing process, the influence of the window function on the distance envelope main lobe width and the side lobe intensity of the point spread function needs to be considered during data filtering.
As an alternative embodiment, in step S202, the preprocessing the echo data to obtain first data includes: and performing Doppler parameter estimation, motion compensation and range direction compression on the echo data to obtain the first data. Accordingly, in step S204, the imaging processing is performed on the second data to obtain an SAR image of the lane line, including: and quantizing the second data, estimating Doppler modulation frequency, correcting azimuth phase error and compressing azimuth to obtain the SAR image of the lane line.
It is to be understood that the above motion compensation may include first order motion compensation and second order motion compensation.
As an alternative embodiment, in step S202, the preprocessing the echo data to obtain first data includes: and performing Doppler parameter estimation, motion parameter estimation, first-order motion compensation and range direction compression on echo data acquired by the radar to obtain the first data. Accordingly, in step S204, the imaging processing is performed on the second data to obtain the SAR image of the lane line, including: and quantizing the second data, performing second-order motion compensation, Doppler frequency modulation estimation, azimuth phase error correction and azimuth compression to obtain the SAR image of the lane line.
It should be understood that, if the echo data acquired by the radar is front-oblique-view echo data (i.e., the radar operates in a front oblique view) or rear-oblique-view echo data (i.e., the radar operates in a rear oblique view), then after Doppler parameter estimation and motion parameter estimation are performed on the echo data, range linear-walk compensation and Doppler center correction must additionally be performed, so that the processed squint data is equivalent to front-side-view data before subsequent processing such as first-order motion compensation is performed.
It should also be understood that the above quantization process may be performed in any step after the second data is obtained (i.e., after S203), or may be performed after the SAR image is obtained, which is not limited in this embodiment of the application.
As an alternative embodiment, when the obtained SAR image of the lane line is an image in the slant range plane, the method 200 may further include: performing geometric deformation correction and coordinate transformation on the SAR image of the lane line to obtain an image in the ground range plane.
In the embodiment of the present application, by converting the image in the slant range plane into an image in the ground range plane, the obtained lane line image can be made closer to the lane lines on the real road surface, which more effectively improves the accuracy of lane line detection.
The lane line images in the slant range plane and the ground range plane are explained in detail below with reference to fig. 3. Fig. 3 shows the three-dimensional geometry of radar imaging provided by embodiments of the present application. As shown in fig. 3, the y-axis of the coordinate system represents the vehicle movement direction (i.e., the movement direction of the radar, referred to as the azimuth direction), the x-axis represents the direction perpendicular to the vehicle movement (referred to as the range direction), and the z-axis represents the direction vertically upward from the xy plane. H is the height of the radar above the ground. Taking target P as an example, R_0 is the shortest slant range, θ is the downward viewing angle corresponding to the shortest distance, β is the squint angle, R is the slant range, X_0 is the ground distance corresponding to P, and X_min and X_max are the closest and farthest ground distances of the imaging area.
The image in the ground range plane has an imaging width from X_min to X_max, and the image in the slant range plane has an imaging width from W_1 to W_2.
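Using the geometry of fig. 3, the coordinate transformation from slant range to ground distance can be sketched as follows, assuming a flat ground plane and zero squint (the numeric values are arbitrary):

```python
import math

def slant_to_ground(R, H):
    """Project a slant range R to the ground distance X, assuming a flat
    ground plane and a radar mounted at height H above the ground
    (right-triangle geometry of fig. 3: X = sqrt(R^2 - H^2))."""
    if R < H:
        raise ValueError("slant range cannot be smaller than the radar height")
    return math.sqrt(R * R - H * H)

# Assumed values: radar mounted 0.6 m above ground, target at 2.0 m slant range
print(round(slant_to_ground(2.0, 0.6), 3))  # 1.908
```

Applying this mapping column by column to a slant-range image is one simple way to realize the coordinate-transformation step; the geometric deformation correction additionally resamples the unevenly spaced ground distances onto a uniform grid.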
As an optional embodiment, the transmit signal bandwidth B, the downward viewing angle θ, and the squint angle β of the radar satisfy a constraint involving the lane line width Δx and the propagation velocity c of the electromagnetic wave. [The constraint is given in the original as equation images that cannot be recovered here.]

Optionally, in actual calculation, c = 3 × 10^8 m/s.
Optionally, to ensure that the lane line remains within the imaging area during normal driving, the width ΔX of the single-side swath (i.e., the width of the imaging area) should satisfy ΔX ≥ L_1 − L_2, where L_1 is the lane width and L_2 is the vehicle width.
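The swath-width condition above can be checked with a trivial helper (the lane and vehicle widths below are assumed example values, not from the text):

```python
def swath_covers_lane(delta_x, lane_width, vehicle_width):
    """Condition from the text: the single-side swath width must satisfy
    delta_x >= L1 - L2 (lane width minus vehicle width) so the lane line
    stays inside the imaging area during normal driving."""
    return delta_x >= lane_width - vehicle_width

# Assumed example: 3.75 m lane, 1.8 m wide vehicle -> swath must be >= 1.95 m
print(swath_covers_lane(2.0, 3.75, 1.8))  # True
print(swath_covers_lane(1.5, 3.75, 1.8))  # False
```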
Optionally, the installation height H, the squint angle β and the downward view angle θ of the radar are set to ensure that the beam is not reflected by the vehicle body as much as possible.
As an alternative embodiment, the radar works in a front-side view, a front-oblique view or a rear-oblique view, and the imaging area of the radar is a road surface containing a lane line on one side or two sides of the vehicle.
It should be understood that when the radar operates in a front side view mode, the squint angle β of the radar is 0°; when the radar operates in a rear oblique view mode, the squint angle β of the radar takes a negative value.
According to the lane line detection method provided by the embodiment of the present application, in the application scenario of the vehicle-mounted radar, operating the vehicle-mounted radar in a front oblique view allows lane line information ahead of the vehicle to be obtained in advance, so that the vehicle or the driver can learn the road condition ahead in advance and flexibly adjust the driving route accordingly. Operating the vehicle-mounted radar in a front side view or a rear oblique view is suitable for applications such as map drawing.
Fig. 4 and 5 are schematic views of two different imaging modes of an embodiment of the present application.
Fig. 4 is a schematic diagram of an imaging area provided in an embodiment of the present application. As shown in fig. 4, 1, 2, and 3 in the figure represent possible installation positions (only for left and right illustration, not representing heights) of the vehicle-mounted radar, and in a state where the vehicle 410 is in normal driving, the data processing device on the vehicle 410 may image a lane line 420 on the right side or a lane line 430 on the left side of the vehicle 410. Fig. 4 shows the imaging range of the lane line 420 on the right side of the vehicle 410, i.e., the shaded area on the right side of the vehicle 410.
Fig. 5 is a schematic view of another imaging area provided in the embodiments of the present application. As shown in fig. 5, in which 1 and 2 represent possible installation positions of the vehicle-mounted radar (for left and right illustration only, not representing the height), the data processing apparatus on the vehicle 510 can image lane lines 520 and 530 on both sides of the vehicle 510 in a state where the vehicle 510 is in normal driving. Fig. 5 shows the lane lines 520 and the imaging ranges of the lane lines 530 on the left and right sides of the vehicle 510, i.e., the shaded areas on both sides of the vehicle 510.
As an alternative embodiment, the radar is a millimeter wave radar.
It should be understood that millimeter waves are electromagnetic waves with a wavelength between 1 mm and 10 mm, corresponding to a frequency range of 30 GHz to 300 GHz. Characteristics of millimeter waves include large bandwidth, short wavelength, high resolution, and strong penetration, which make them well suited to vehicle-mounted applications. In particular, millimeter-wave radar can penetrate smoke, dust, and fog, and can therefore operate around the clock in all weather conditions.
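The wavelength/frequency correspondence quoted above can be verified directly from λ = c / f (using c = 3 × 10^8 m/s as elsewhere in the text):

```python
def wavelength_mm(freq_ghz):
    """Free-space wavelength in millimetres for a frequency given in GHz."""
    c = 3e8  # propagation velocity of the electromagnetic wave, m/s
    return c / (freq_ghz * 1e9) * 1e3

# 30 GHz -> 10 mm and 300 GHz -> 1 mm, matching the stated millimetre band
print(round(wavelength_mm(30.0), 6), round(wavelength_mm(300.0), 6))  # 10.0 1.0
```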
In the application scene of the vehicle-mounted radar, the millimeter wave radar is adopted as the vehicle-mounted radar, so that the vehicle-mounted radar can better assist in driving or automatically drive.
It should also be understood that the lane line detection method provided by the embodiment of the present application is also applicable to radars in other frequency bands.
Based on the content described in the foregoing embodiments, and in order to better understand the embodiments of the present application, the following takes the FS imaging algorithm as an example and describes the lane line detection method provided by the embodiments of the present application in detail with reference to fig. 6 to 8. The FS imaging algorithm may include six processes: data preprocessing, motion compensation, distance compression, strong scattering point filtering, azimuth compression, and data post-processing.
Fig. 6 is a schematic flow chart of an imaging algorithm 600 provided herein. As shown in fig. 6, the specific steps of the algorithm are as follows:
S601, performing Doppler parameter estimation on the dechirped (frequency-modulation-removed) echo data to obtain first echo data.
And S602, performing motion parameter estimation on the first echo data to obtain second echo data.
S603, linear walking item compensation and Doppler center correction are completed on the second echo data in a distance time domain and an azimuth time domain, and echo data of an equivalent front side view is obtained.
S604, selecting a reference distance, and performing envelope compensation and phase compensation on all distance directions of the echo data equivalent to the front side and the side by using the track error at the reference distance to complete first-order motion compensation to obtain first-order compensated echo data.
S605, performing Fast Fourier Transform (FFT) processing on the first-order compensated echo data to obtain echo data after the FFT processing in the azimuth direction.
And S606, multiplying the echo data after the azimuth FFT processing by a frequency scaling factor to perform frequency scaling to obtain the echo data after the frequency scaling.
S607, distance direction FFT processing is carried out on the echo data after frequency scaling, and the echo data after distance direction FFT processing is obtained.
S608, Residual Video Phase (RVP) correction is performed on the echo data after the distance-to-FFT processing, so as to obtain corrected echo data.
S609, perform an Inverse Fast Fourier Transform (IFFT) process on the corrected echo data to obtain inverse transformed echo data.
S610, multiplying the inverse-transformed echo data by a compensation factor to complete inverse frequency scaling, range cell migration (RCM) correction, and secondary distance compression (SRC), obtaining echo data after secondary distance compression.
S611, distance direction FFT processing is carried out on the echo data after the secondary distance compression, distance compression is completed, and data after the distance compression are obtained.
S612, compensating the distance compressed data for a phase difference caused by a distance difference between another distance unit and the reference distance unit, and completing second-order motion compensation to obtain second-order compensated data (i.e. the first data).
And S613, judging whether data with amplitude exceeding a threshold eta exists in the data after the second-order compensation. If data with the amplitude exceeding the threshold eta exists, executing S614 and S615; if there is no data with amplitude exceeding the threshold η, S616 is executed.
And S614, filtering the data exceeding the threshold to obtain the filtered data (namely the second data).
And S615, carrying out quantization processing on the filtered data to obtain processed data.
In the filtering process, when the side lobes of the data exceeding the threshold η are small and their influence on the lane line echo is negligible, the data and the range-gate data of the surrounding range units can be removed directly. However, the side lobes of the sinc function formed after compression of the data exceeding the threshold η extend over the entire range region. Therefore, when the far side lobes of the data also affect the lane line echo, the main lobe and the side-lobe waveform of the data need to be reconstructed, and the main-lobe and side-lobe influence of the data exceeding the threshold η is filtered out of the range-compressed data.
And S616, performing Doppler frequency modulation rate estimation on the data without amplitude exceeding a threshold eta or the processed data obtained in S615 to obtain data after frequency modulation rate estimation.
And S617, performing azimuth phase error correction on the data after the frequency modulation estimation to obtain corrected data.
And S618, multiplying the corrected data by the azimuth matching filter function to obtain data subjected to azimuth matching processing.
S619, an azimuth-direction IFFT is performed on the azimuth-matched data, completing the azimuth compression and obtaining a SAR image containing lane line information (i.e., the SAR image in the slant-range plane described above).
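Steps S618 and S619 — multiplying by the azimuth matched filter and taking the azimuth IFFT — can be sketched for a single point target as follows (the PRF, the FM rate, and the matched-filter form exp(+jπf²/Ka) are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch of azimuth compression for one point target: the
# azimuth signal is a linear FM chirp with Doppler FM rate Ka; multiplying
# its azimuth spectrum by the matched filter exp(+j*pi*f**2/Ka) and taking
# the azimuth IFFT focuses the target into a sharp peak.
prf = 1000.0                          # azimuth sampling rate (Hz)
n = 512
ka = -1500.0                          # Doppler FM rate (Hz/s), assumption
t = (np.arange(n) - n / 2) / prf      # azimuth slow time, centered

chirp = np.exp(1j * np.pi * ka * t ** 2)    # azimuth chirp of one target
spec = np.fft.fft(chirp)                    # azimuth spectrum
f = np.fft.fftfreq(n, d=1.0 / prf)
h = np.exp(1j * np.pi * f ** 2 / ka)        # azimuth matched filter
focused = np.fft.ifft(spec * h)             # azimuth IFFT -> focused target

peak = np.abs(focused).max()
mean = np.abs(focused).mean()
```

After compression the energy that was spread across the whole azimuth chirp is concentrated into a narrow peak, so the peak amplitude is far above the mean level.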
S620, geometric deformation correction and coordinate transformation are performed on the SAR image containing lane line information, converting the SAR image in the slant-range plane into a SAR image in the ground-range plane.
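The coordinate conversion of S620 can be sketched as follows under a flat-ground assumption (the radar height and sample spacing are illustrative assumptions, not parameters from this application):

```python
import numpy as np

# Hedged sketch of the slant-range -> ground-range conversion in the
# post-processing step: under a flat-ground assumption with radar height H,
# a slant-range sample at range R maps to ground range sqrt(R**2 - H**2),
# and the image row is resampled onto a uniform ground-range grid.
h = 1.0                                   # radar height above ground (m)
slant = np.linspace(2.0, 12.0, 101)       # slant ranges of the image samples
row = np.sin(slant)                       # one image row (stand-in values)

ground = np.sqrt(slant ** 2 - h ** 2)     # ground range of each sample
grid = np.linspace(ground[0], ground[-1], ground.size)
row_ground = np.interp(grid, ground, row) # resample to a uniform ground grid
```

Because the slant-to-ground mapping is monotonic, a simple one-dimensional interpolation per image row is sufficient for this sketch.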
According to the lane line detection method provided in this embodiment of the application, the second-order compensated data are filtered so that only data below the threshold η are retained, and the imaging result of the lane line is finally obtained. The method facilitates acquiring a high-resolution SAR image of the lane line and effectively improves the accuracy of lane line detection.
It should be understood that S601 to S603 above are the specific procedure of data preprocessing, S605 to S611 are the specific procedure of range compression, S604, S612, S616 and S617 are the specific procedure of motion compensation, S613 to S615 are the specific procedure of strong-scattering-point filtering, S618 and S619 are the specific procedure of azimuth compression, and S620 is the specific procedure of data post-processing.
It should be understood that the strong-scattering-point filtering step may be performed at any step, after range compression, in the azimuth time domain or the azimuth frequency domain. Illustratively, the strong-scattering-point filtering step may also be performed after the RVP correction, or after the second range-direction FFT, which is not limited in this embodiment of the application.
Fig. 7 is a schematic flowchart of another imaging algorithm 700 provided herein. It differs from the algorithm flow shown in Fig. 6 in that the algorithm flow shown in Fig. 7 performs the strong-scattering-point filtering step after the RVP correction. As shown in Fig. 7, the specific steps of the algorithm are as follows:
S701 to S708 are the same as S601 to S608 described above, and are not described again here.
S709, it is determined whether the corrected echo data (i.e., the first data) contain data whose amplitude exceeds the threshold η. If such data exist, S710 and S711 are executed; if not, S712 is executed.
S710 and S711 are the same as S614 and S615 described above, and are not described herein again.
S712, a range-direction IFFT is performed on the data containing no amplitude exceeding the threshold η, or on the processed data obtained in S711, to obtain inverse-transformed echo data.
S713 to S715 are the same as S610 to S612 described above, and are not described herein again.
S716, Doppler frequency modulation rate estimation is performed on the second-order compensated data to obtain data after frequency modulation rate estimation.
S717 to S720 are the same as S617 to S620, and are not described herein again.
According to the lane line detection method provided in this embodiment of the application, the corrected data are filtered so that only data below the threshold η are retained, and the imaging result of the lane line is finally obtained. The method facilitates acquiring a high-resolution SAR image of the lane line and effectively improves the accuracy of lane line detection.
It should be understood that S701 to S703 are the specific procedure of data preprocessing, S705 to S708, S713 and S714 are the specific procedure of range compression, S704 and S715 to S717 are the specific procedure of motion compensation, S709 to S711 are the specific procedure of strong-scattering-point filtering, S718 and S719 are the specific procedure of azimuth compression, and S720 is the specific procedure of data post-processing.
Fig. 8 is a schematic flowchart of yet another imaging algorithm 800 provided herein. It differs from the algorithm flow shown in Fig. 6 in that the algorithm flow shown in Fig. 8 performs the strong-scattering-point filtering step after the second range-direction FFT. As shown in Fig. 8, the specific steps of the algorithm are as follows:
S801 to S811 are the same as S601 to S611 described above, and are not described again here.
S812, it is determined whether the range-compressed data contain data whose amplitude exceeds the threshold η. If such data exist, S813 and S814 are executed; if not, S815 is executed.
S813 to S814 are the same as S614 to S615, and are not described again here.
S815, second-order motion compensation is performed on the data containing no amplitude exceeding the threshold η, or on the processed data obtained in S814, to obtain second-order compensated data.
S816, Doppler frequency modulation rate estimation is performed on the second-order compensated data to obtain data after frequency modulation rate estimation.
S817 to S820 are the same as S617 to S620 described above, and are not described herein again.
It should be understood that the strong-scattering-point filtering step may also be performed after the SAR image is acquired (i.e., after S620, S720 or S820). In the application scenario of a vehicle-mounted radar, an image of the lane line may be obtained through image processing techniques after the SAR image is acquired.
According to the lane line detection method provided in this embodiment of the application, the range-compressed data are filtered so that only data below the threshold η are retained, and the imaging result of the lane line is finally obtained. The method facilitates acquiring a high-resolution SAR image of the lane line and effectively improves the accuracy of lane line detection.
It should be understood that S801 to S803 are the specific procedure of data preprocessing, S805 to S811 are the specific procedure of range compression, S804 and S815 to S817 are the specific procedure of motion compensation, S812 to S814 are the specific procedure of strong-scattering-point filtering, S818 and S819 are the specific procedure of azimuth compression, and S820 is the specific procedure of data post-processing.
Alternatively, if other imaging algorithms are used, such as the range-Doppler (RD) algorithm or the back-projection (BP) algorithm, the strong-scattering-point filtering step may be performed after range compression.
The lane line detection method according to the embodiment of the present application is described in detail above with reference to fig. 2 to 8, and the lane line detection device according to the embodiment of the present application is described in detail below with reference to fig. 9 and 10.
It should be understood that the lane line detection device in this embodiment of the application may be a processor chip or integrated circuit in the radar itself, or may be a device mounted on the vehicle separately from the radar, or may be a processor chip or integrated circuit in that device; this embodiment of the application is not limited thereto.
Fig. 9 shows a lane line detection apparatus 900 according to an embodiment of the present application, where the apparatus 900 includes: an acquisition module 910 and a processing module 920.
The acquiring module 910 is configured to acquire echo data of a ground by using a radar, where the ground includes a lane line; a processing module 920, configured to pre-process the echo data to obtain first data; filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value; and imaging the second data to obtain an SAR image of the lane line.
It should be understood that the radar obtains the echo data of the ground through a receiving module and a transmitting module. The transmitting module may specifically be a transmitting antenna or transmitting antenna array of the radar, and the receiving module may specifically be a receiving antenna or receiving antenna array of the radar.
If the apparatus 900 is the radar itself, the apparatus further includes a transceiver module configured to acquire echo data of the ground by receiving and/or transmitting signals and to send the echo data to the acquiring module 910.
If the apparatus 900 is a device independent of the radar, the radar may receive and/or transmit signals using its own transceiver module to acquire echo data of the ground, and then send the echo data to the apparatus 900. Thus, the acquiring module 910 is specifically configured to acquire the echo data from the radar.
Optionally, the processing module 920 is further configured to: and removing the data with the amplitude value larger than or equal to the first threshold value from the first data to obtain the second data.
Optionally, the processing module 920 is further configured to: obtaining a point spread function of the data according to the data of which the amplitude is greater than or equal to the first threshold in the first data; and removing the data corresponding to the point spread function from the first data to obtain the second data.
Optionally, the processing module 920 is further configured to: and performing Doppler parameter estimation, motion compensation and range direction compression on the echo data to obtain the first data.
Optionally, the processing module 920 is further configured to: quantize the second data, and perform Doppler frequency modulation rate estimation, azimuth phase error correction and azimuth compression to obtain the SAR image.
Optionally, the processing module 920 is further configured to: perform geometric deformation correction and coordinate transformation on the SAR image to obtain an image in the ground-range plane.
Optionally, a transmission signal bandwidth B, a downward view angle θ, and an oblique view angle β of the radar satisfy the relation shown in Figure BDA0002976458670000141, where Δx is the width of the lane line, the quantity shown in Figure BDA0002976458670000142 satisfies the condition shown in Figure BDA0002976458670000143, and c is the propagation velocity of the electromagnetic wave.
Optionally, the radar operates in a front-side view, a front-oblique view or a rear-oblique view, and the imaging area of the radar is a road surface including a lane line on one side or two sides of the vehicle.
Optionally, the radar is a millimeter wave radar.
It should be appreciated that the apparatus 900 herein is embodied in the form of functional modules. The term module herein may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an optional example, it may be understood by those skilled in the art that the apparatus 900 may be embodied as a data processing device or a chip in the data processing device in the foregoing embodiment, or a function of the data processing device or a function of the chip in the data processing device in the foregoing embodiment may be integrated in the apparatus 900, and the apparatus 900 may be configured to execute each procedure and/or step corresponding to the data processing device in the foregoing method embodiment, and is not described herein again to avoid repetition.
The device 900 has functions of implementing corresponding steps executed by data processing equipment in the method; the above functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. For example, the obtaining module 910 may be a communication interface, such as a transceiver interface.
In an embodiment of the present application, the apparatus 900 in fig. 9 may also be a chip or a chip system, for example: system on chip (SoC). Correspondingly, the obtaining module 910 may be a transceiver circuit of the chip, and the application is not limited herein.
Fig. 10 shows another lane line detection apparatus 1000 according to an embodiment of the present application. The apparatus 1000 includes a processor 1010, a transceiver 1020, and a memory 1030. Wherein the processor 1010, the transceiver 1020 and the memory 1030 are in communication with each other via an internal connection path, the memory 1030 is configured to store instructions, and the processor 1010 is configured to execute the instructions stored in the memory 1030 to control the transceiver 1020 to transmit and/or receive signals.
It should be understood that the lane line detection apparatus 1000 may be a radar itself, or may be an apparatus mounted on the vehicle together with the radar, independently of the radar.
If the apparatus 1000 is a radar itself, the transceiver 1020 is configured to: acquiring echo data of the ground by receiving and/or transmitting signals; the processor 1010 is configured to: preprocessing the echo data to obtain first data; filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value; and imaging the second data to obtain an SAR image of the lane line.
If the device 1000 is a radar-independent device, the radar can receive and/or transmit signals using its own transceiver, obtain echo data from the ground, and send the echo data to the device 1000. Thus, the processor 1010 is configured to: receiving echo data from a radar through a transceiver 1020, and preprocessing the echo data to obtain first data; filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value; and imaging the second data to obtain an SAR image of the lane line.
It should be understood that the apparatus 1000 may be embodied as the data processing device or a chip in the data processing device in the foregoing embodiments, or the functions of the data processing device or of the chip in the data processing device in the foregoing embodiments may be integrated in the apparatus 1000, and the apparatus 1000 may be configured to perform each step and/or flow corresponding to the data processing device or the chip in the data processing device in the foregoing method embodiments. Optionally, the memory 1030 may include a read-only memory and a random access memory, and provide instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory. For example, the memory may also store device type information. The processor 1010 may be configured to execute the instructions stored in the memory, and when the processor executes the instructions, the processor may perform the steps and/or flows corresponding to the data processing device in the above method embodiments.
It should be understood that, in the embodiments of the present application, the processor may be a Central Processing Unit (CPU), and the processor may also be other general processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In implementation, the steps of the method 200 may be performed by integrated hardware logic circuits in a processor or by instructions in the form of software. The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being performed by a hardware processor, or by a combination of hardware and software modules in a processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory, and the processor executes the instructions in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, details are not described here again.
The embodiment of the present application provides a computer-readable storage medium, where the computer storage medium is used to store a computer program, and the computer program is used to implement methods corresponding to various possible implementation manners in the foregoing embodiments.
The present application provides a computer program product, which includes a computer program (also referred to as code or instructions), and when the computer program runs on a computer, the computer can execute the methods corresponding to the various possible implementations of the foregoing embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A lane line detection method is characterized by comprising the following steps:
acquiring echo data of the ground by using a radar, wherein the ground comprises lane lines, the radar works in a front side view, a front oblique view or a rear oblique view mode, and an imaging area of the radar is a road surface containing the lane lines on one side or two sides of a vehicle;
preprocessing the echo data to obtain first data;
filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value;
and imaging the second data to obtain a Synthetic Aperture Radar (SAR) image of the lane line.
2. The method of claim 1, wherein filtering the first data to obtain second data comprises:
and removing the data with the amplitude value larger than or equal to the first threshold value from the first data to obtain the second data.
3. The method of claim 1, wherein filtering the first data to obtain second data comprises:
obtaining a point spread function of the data according to the data of which the amplitude is greater than or equal to the first threshold in the first data;
and removing the data corresponding to the point spread function from the first data to obtain the second data.
4. The method of any of claims 1-3, wherein the preprocessing the echo data to obtain first data comprises:
and performing Doppler parameter estimation, motion compensation and range direction compression on the echo data to obtain the first data.
5. The method of claim 4, wherein the imaging the second data to obtain the SAR image of the lane line comprises:
and quantizing the second data, estimating Doppler modulation frequency, correcting azimuth phase errors and compressing the azimuth to obtain the SAR image.
6. The method of any of claims 1-3, 5, wherein the SAR image is an image in a slant-range plane, the method further comprising:
performing geometric deformation correction and coordinate transformation on the SAR image to obtain an image in a ground-range plane.
7. The method according to any one of claims 1-3 and 5, wherein the transmission signal bandwidth B, the downward view angle θ and the oblique view angle β of the radar satisfy the relation shown in Figure FDA0003497658720000011, wherein Δx is the width of the lane line, the quantity shown in Figure FDA0003497658720000012 satisfies the condition shown in Figure FDA0003497658720000013, and c is the propagation velocity of the electromagnetic wave.
8. The method of any of claims 1-3, 5, wherein the radar is a millimeter wave radar.
9. A lane line detection apparatus, comprising:
an acquisition module, configured to acquire echo data of the ground by using a radar, wherein the ground comprises lane lines, the working mode of the radar is front side view, front oblique view or rear oblique view, and the imaging area of the radar is a road surface containing the lane lines on one side or two sides of a vehicle;
the processing module is used for preprocessing the echo data to obtain first data; filtering the first data to obtain second data, wherein the second data is data with amplitude smaller than a first threshold value; and imaging the second data to obtain a Synthetic Aperture Radar (SAR) image of the lane line.
10. The apparatus of claim 9, wherein the processing module is further configured to:
and removing the data with the amplitude value larger than or equal to the first threshold value from the first data to obtain the second data.
11. The apparatus of claim 9, wherein the processing module is further configured to:
obtaining a point spread function of the data according to the data of which the amplitude is greater than or equal to the first threshold in the first data;
and removing the data corresponding to the point spread function from the first data to obtain the second data.
12. The apparatus of any of claims 9-11, wherein the processing module is further configured to:
and performing Doppler parameter estimation, motion compensation and range direction compression on the echo data to obtain the first data.
13. The apparatus of claim 12, wherein the processing module is further configured to:
and quantizing the second data, estimating Doppler modulation frequency, correcting azimuth phase errors and compressing the azimuth to obtain the SAR image.
14. The apparatus of any of claims 9-11, 13, wherein the processing module is further configured to:
perform geometric deformation correction and coordinate transformation on the SAR image to obtain an image in a ground-range plane.
15. The apparatus of any one of claims 9-11, 13, wherein the transmission signal bandwidth B, the downward view angle θ and the oblique view angle β of the radar satisfy the relation shown in Figure FDA0003497658720000021, wherein Δx is the width of the lane line, the quantity shown in Figure FDA0003497658720000022 satisfies the condition shown in Figure FDA0003497658720000023, and c is the propagation velocity of the electromagnetic wave.
16. The apparatus of any of claims 9-11, 13, wherein the radar is a millimeter wave radar.
17. A lane line detection apparatus, comprising: a processor coupled with a memory for storing a computer program that, when invoked by the processor, causes the apparatus to perform the method of any of claims 1 to 8.
18. A terminal, characterized in that it comprises an apparatus according to any one of claims 9-17.
19. A computer-readable storage medium for storing a computer program comprising instructions for implementing the method of any one of claims 1 to 8.
20. A computer program product comprising computer program code which, when run on a computer, causes the computer to implement the method of any one of claims 1 to 8.
CN202180000475.9A 2021-03-03 2021-03-03 Lane line detection method and lane line detection device Active CN113167885B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/078907 WO2022183408A1 (en) 2021-03-03 2021-03-03 Lane line detection method and lane line detection apparatus

Publications (2)

Publication Number Publication Date
CN113167885A CN113167885A (en) 2021-07-23
CN113167885B true CN113167885B (en) 2022-05-31

Family

ID=76875956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180000475.9A Active CN113167885B (en) 2021-03-03 2021-03-03 Lane line detection method and lane line detection device

Country Status (2)

Country Link
CN (1) CN113167885B (en)
WO (1) WO2022183408A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100844A (en) * 2022-05-07 2022-09-23 深圳汇辰软件有限公司 Emergency lane occupation behavior recognition system and method and terminal equipment
CN115272182B (en) * 2022-06-23 2023-05-26 禾多科技(北京)有限公司 Lane line detection method, lane line detection device, electronic equipment and computer readable medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005085900A1 (en) * 2004-03-01 2005-09-15 Gamma Remote Sensing Research And Consulting Ag Method for identifying radar point targets
CN107507417B (en) * 2017-08-03 2019-10-18 北京信息科技大学 A kind of smartway partitioning method and device based on microwave radar echo-signal
CN109426800B (en) * 2017-08-22 2021-08-13 北京图森未来科技有限公司 Lane line detection method and device
CN109840463B (en) * 2017-11-27 2021-03-30 北京图森未来科技有限公司 Lane line identification method and device
CN108860016B (en) * 2018-07-04 2020-05-05 广东奎创科技股份有限公司 Intelligent robot coach auxiliary driving system
CN110609268B (en) * 2018-11-01 2022-04-29 驭势科技(北京)有限公司 Laser radar calibration method, device and system and storage medium
CN112433203B (en) * 2020-10-29 2023-06-20 同济大学 Lane linearity detection method based on millimeter wave radar data
CN113167886B (en) * 2021-03-02 2022-05-31 华为技术有限公司 Target detection method and device

Also Published As

Publication number Publication date
WO2022183408A1 (en) 2022-09-09
CN113167885A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
US11276189B2 (en) Radar-aided single image three-dimensional depth reconstruction
US11340342B2 (en) Automotive radar using 3D printed luneburg lens
CN113167885B (en) Lane line detection method and lane line detection device
CN111679266B (en) Automobile millimeter wave radar sparse array grating lobe false target identification method and system
EP3617736A1 (en) Determining material category based on the polarization of received signals
KR101456185B1 (en) Method and apparatus for yielding radar imaging
EP3460515B1 (en) Mapping for autonomous robotic devices
CN111025256A (en) Method and system for detecting weak vital sign signals of airborne radar
EP3508869B1 (en) Light-weight radar system
CN113406639A (en) FOD detection method, system and medium based on vehicle-mounted mobile radar
JP5035782B2 (en) Split beam synthetic aperture radar
WO2020133041A1 (en) Vehicle speed calculation method, system and device, and storage medium
EP4166984A1 (en) Distributed microwave radar imaging method and apparatus
Gao et al. Static background removal in vehicular radar: Filtering in azimuth-elevation-doppler domain
CN115015925A (en) Airborne array radar super-resolution forward-looking imaging method and device based on improved matching pursuit
AU2020279716B2 (en) Multi-timescale doppler processing and associated systems and methods
CN116359908A (en) Point cloud data enhancement method, device, computer equipment, system and storage medium
CN113406643A (en) Detection method and system of FOD detection device based on vehicle-mounted distributed aperture radar
KR101857132B1 (en) Apparatus Detecting Target Using W-Band Millimeter Wave Seeker and Image Seeker Based on Predicted Radar Cross Section
CN113490863B (en) Radar-assisted single image three-dimensional depth reconstruction
TWI834771B (en) Early fusion of camera and radar frames
CN117538856A (en) Target detection method, device, radar and medium
Rajender Designing of Synthetic Aperture Radar Based Control Algorithms for the Autonomous Vehicles
CN116203577A (en) Target detection method and device based on multi-signal fusion
CN116125488A (en) Target tracking method, signal fusion method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant