CN113196098A - Echo data-based speed estimation method and device

Info

Publication number: CN113196098A
Authority: CN (China)
Prior art keywords: processed, image, speed, echo, determining
Legal status: Granted
Application number: CN202180001251.XA
Other languages: Chinese (zh)
Other versions: CN113196098B (en)
Inventors: 黄磊, 王犇, 赵博, 李强, 陈佳民, 易程博
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN113196098A
Application granted
Publication of CN113196098B
Current legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 - Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50 - Systems of measurement based on relative movement of target
    • G01S13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/89 - Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90 - Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9021 - SAR image post-processing techniques
    • G01S13/9029 - SAR image post-processing techniques specially adapted for moving target detection within a single SAR image or within multiple SAR images taken at the same time
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415 - Identification of targets based on measurements of movement associated with the target

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application provides a speed estimation method and device based on echo data, which are applicable to the radar field, can be used for automatic driving or intelligent driving, and can complete speed estimation without relying on an inertial sensor. The method comprises the following steps: determining a local area from the imaging area; performing imaging processing on the echo sub-data in the local area according to a plurality of first to-be-processed speeds to obtain a plurality of first to-be-processed images, wherein the plurality of first to-be-processed speeds are obtained according to a preset speed range, the echo sub-data is obtained by dividing the echo data, and each first to-be-processed image corresponds to one first to-be-processed speed; determining a target to-be-processed image from the plurality of first to-be-processed images; and determining the first to-be-processed speed corresponding to the target to-be-processed image as the estimated speed of the echo sub-data, from which the estimated speed of the echo data is finally obtained. The method can be applied to the Internet of Vehicles, for example vehicle-to-everything (V2X), the inter-vehicle communication long term evolution technology LTE-V, vehicle-to-vehicle (V2V), and the like.

Description

Echo data-based speed estimation method and device
Technical Field
The embodiment of the application relates to the technical field of radars, in particular to a speed estimation method and device based on echo data.
Background
Synthetic Aperture Radar (SAR) is a modern all-day, all-weather remote sensing imaging radar with long-range, high-resolution detection capability, and is widely used in fields such as remote sensing and mapping, area surveillance, geological exploration, and disaster relief. In the broader civil field, the vehicle-mounted SAR platform for intelligent driving has become a new research hotspot. During data recording, the SAR must move continuously to form a synthetic aperture, and the echo properties are closely related to the motion speed of the radar. In the imaging process, the relative motion speed between the radar and the target is one of the key parameters required in the data processing of a SAR imaging algorithm. Whether the recorded echo can be accurately matched with the current radar speed directly affects the imaging quality. When a vehicle serves as the working platform of the radar, it is difficult for the vehicle to maintain a stable speed for a long time in a road environment, which increases the imaging difficulty of vehicle-mounted SAR. Therefore, obtaining the speed information of the radar while the echo is recorded is an indispensable part of obtaining high-quality imaging results.
At present, SAR imaging systems mainly work on platforms such as airplanes or satellites, and high-precision inertial sensor systems are installed in the working platforms, so that accurate speed information can be provided for the imaging systems.
However, on a vehicle platform, applying such an inertial sensor system for imaging incurs a high hardware cost, increases the complexity of the system, and introduces errors under vibration, so a speed estimation method that does not rely on an inertial sensor is needed.
Disclosure of Invention
The embodiment of the application provides a speed estimation method and device based on echo data, which can finish speed estimation without depending on an inertial sensor and reduce hardware cost.
A first aspect of the embodiments of the present application provides a speed estimation method based on echo data. In the method, a local area is determined from an imaging area, a plurality of first to-be-processed speeds are obtained according to a preset speed range, and the echo data is divided to obtain a plurality of echo sub-data. One echo sub-data of the plurality of echo sub-data is imaged in the local area according to the plurality of first to-be-processed speeds to obtain a plurality of first to-be-processed images, each first to-be-processed image corresponding to one first to-be-processed speed. Based on this, a target to-be-processed image is determined from the plurality of first to-be-processed images. Because each first to-be-processed image corresponds to one first to-be-processed speed, the target to-be-processed image also corresponds to one first to-be-processed speed; this first to-be-processed speed can be determined as the estimated speed of the echo sub-data, and the estimated speed of the echo sub-data is used to obtain the estimated speed of the echo data.
In the embodiment, the estimated speed of the echo sub-data can be determined based on the echo data, and the estimated speed of the echo data can be obtained through the estimated speed of the echo sub-data, so that the speed estimation can be finished without depending on an inertial sensor, and the hardware cost is reduced.
With reference to the first aspect of the embodiment of the present application, in a first implementation manner of the first aspect of the embodiment of the present application, before the echo sub-data is subjected to imaging processing in the local area according to the plurality of first to-be-processed speeds to obtain the plurality of first to-be-processed images, the preset speed range is divided at a first preset speed interval to obtain the plurality of first to-be-processed speeds. For example, if the preset speed range is 0 to 10 meters per second (m/s), the first preset speed interval can be determined as 1 m/s; dividing 0 to 10 m/s at 1 m/s then yields 10 first to-be-processed speeds, namely 1 m/s, 2 m/s, 3 m/s, 4 m/s, 5 m/s, 6 m/s, 7 m/s, 8 m/s, 9 m/s and 10 m/s.
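As an illustration of this division step, a minimal Python sketch is given below; the function name, the use of numpy, and the decision to skip the lower endpoint are assumptions made for the example above, not part of the claimed method.

```python
import numpy as np

def candidate_speeds(v_min, v_max, step):
    # Divide the preset speed range [v_min, v_max] at a fixed speed interval.
    # With v_min = 0, v_max = 10 and step = 1 this yields 1, 2, ..., 10 m/s,
    # matching the example above (the endpoint v_min = 0 itself is skipped).
    return np.arange(v_min + step, v_max + step / 2, step)

# Example: preset speed range 0 to 10 m/s, first preset speed interval 1 m/s.
first_speeds = candidate_speeds(0.0, 10.0, 1.0)   # array([1., 2., ..., 10.])
```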
In this embodiment, the preset speed range is divided at the first preset speed interval, so that a larger speed range is reduced to a set of candidate speeds; the speed estimation can then be performed more accurately within a smaller speed range, which improves the accuracy of the speed estimation.
With reference to the first implementation manner of the first aspect of the embodiment of the present application, in a second implementation manner of the first aspect of the embodiment of the present application, specifically, a plurality of first to-be-processed direction dimensions of each first to-be-processed image are superimposed along the same direction to obtain a plurality of first superimposed direction dimensions of each first to-be-processed image, then a first superimposed direction dimension with a largest value in the plurality of first superimposed direction dimensions of each first to-be-processed image is determined as a first quality assessment index of each first to-be-processed image, and finally a first to-be-processed image with a largest value in the first quality assessment indexes of the plurality of first to-be-processed images is determined as a target to-be-processed image.
In this embodiment, when the first to-be-processed speed corresponding to a first to-be-processed image is closer to the real speed, the line in the first to-be-processed image is closer to being perpendicular to the distance dimension, that is, the line is relatively concentrated in the direction dimension. Therefore, when the first to-be-processed speed is accurate, values are obtained by accumulating along the direction dimension, and the maximum of these values is then taken along the distance dimension; this maximum is larger than the maximum obtained in the same way when the first to-be-processed speed is inaccurate, so this value can be used as the quality evaluation index. The first to-be-processed image with the largest first quality evaluation index therefore has its first to-be-processed direction dimensions concentrated on a line close to a straight line, so the first to-be-processed speed corresponding to the target to-be-processed image determined in this way is closer to the real speed.
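One plausible reading of this quality evaluation index is sketched below in Python: the pixel magnitudes of each first to-be-processed image are accumulated along the direction dimension for every distance bin, the largest accumulated value is taken as the index of that image, and the image with the largest index is selected. The axis convention, array layout, and function names are assumptions for illustration, not the patent's reference implementation.

```python
import numpy as np

def quality_index(image):
    # image: 2-D complex or real array, axis 0 = distance dimension,
    # axis 1 = direction dimension (an assumed convention).
    # Accumulate magnitudes along the direction dimension for each distance bin,
    # then take the maximum over the distance dimension as the index.
    collapsed = np.abs(image).sum(axis=1)
    return collapsed.max()

def select_target_image(images):
    # images: list of first to-be-processed images, one per candidate speed.
    indices = [quality_index(img) for img in images]
    best = int(np.argmax(indices))
    return best, indices[best]
```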
With reference to the second implementation manner of the first aspect of the embodiment of the present application, in a third implementation manner of the first aspect of the embodiment of the present application, it is also possible to determine the first to-be-processed image with the largest value among the first quality evaluation indexes of the plurality of first to-be-processed images as a first pre-estimated image and to obtain a plurality of second to-be-processed speeds according to a first speed range, then perform imaging processing on the first pre-estimated image according to the plurality of second to-be-processed speeds, in a manner similar to that of the first aspect of the embodiment of the present application, to obtain a plurality of second to-be-processed images, where each second to-be-processed image corresponds to one second to-be-processed speed, and determine the target to-be-processed image from the plurality of second to-be-processed images.
In this embodiment, the first to-be-processed image with the largest value among the first quality evaluation indexes of the plurality of first to-be-processed images is determined as the first pre-estimated image (that is, the target to-be-processed image described above). At this point, the estimated speed of the echo sub-data is not yet determined; instead, the speed estimation interval (that is, the first speed range) is further narrowed, imaging processing is performed for each second to-be-processed speed, and a target to-be-processed image that better matches the actual situation is determined within this smaller speed estimation interval. This step may be repeated several times to improve the accuracy of the speed estimation.
With reference to the third implementation manner of the first aspect of the embodiment of the present application, in a fourth implementation manner of the first aspect of the embodiment of the present application, before the first pre-estimated image is subjected to imaging processing according to the plurality of second to-be-processed speeds to obtain the plurality of second to-be-processed images, the first speed range is determined based on the first to-be-processed speed corresponding to the first pre-estimated image, and the first speed range is divided at a second preset speed interval to obtain the plurality of second to-be-processed speeds. For example, if the first to-be-processed speed corresponding to the first pre-estimated image is 3 m/s, the first speed range may be 2.5 to 3.5 m/s. Based on this, the first speed range is divided at the second preset speed interval; for example, the second preset speed interval may be determined as 0.1 m/s, that is, 2.5 to 3.5 m/s is divided at 0.1 m/s to obtain 11 second to-be-processed speeds, namely 2.5 m/s, 2.6 m/s, 2.7 m/s, 2.8 m/s, 2.9 m/s, 3.0 m/s, 3.1 m/s, 3.2 m/s, 3.3 m/s, 3.4 m/s and 3.5 m/s.
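The refined candidate grid can be built around the coarse estimate in the same way as the first grid; the sketch below reproduces the numbers of this example (3 m/s coarse estimate, a range of plus or minus 0.5 m/s, 0.1 m/s interval). The function name and defaults are illustrative assumptions.

```python
import numpy as np

def refine_grid(coarse_speed, half_range=0.5, step=0.1):
    # First speed range around the coarse estimate, divided at the
    # second preset speed interval.
    lo = coarse_speed - half_range
    hi = coarse_speed + half_range
    return np.round(np.arange(lo, hi + step / 2, step), 3)

second_speeds = refine_grid(3.0)   # 2.5, 2.6, ..., 3.5 m/s (11 candidates)
```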
In this embodiment, the speed range can be further reduced by dividing the first speed range by the second preset speed interval, so that the speed estimation can be more accurately performed in a smaller speed range, thereby improving the accuracy of the speed estimation.
With reference to the fourth implementation manner of the first aspect of the embodiment of the present application, in a fifth implementation manner of the first aspect of the embodiment of the present application, a plurality of second to-be-processed direction dimensions of each second to-be-processed image are superimposed along the same direction to obtain a plurality of second superimposed direction dimensions of each second to-be-processed image, a second superimposed direction dimension with a largest numerical value in the plurality of second superimposed direction dimensions of each second to-be-processed image is determined as the second quality assessment index of each second to-be-processed image, and a second to-be-processed image with a largest numerical value in the second quality assessment indexes of the plurality of second to-be-processed images is determined as the target to-be-processed image.
In this embodiment, when the second to-be-processed speed corresponding to a second to-be-processed image is closer to the real speed, the line in the second to-be-processed image is closer to being perpendicular to the distance dimension. Therefore, when the second to-be-processed speed is accurate, values are obtained by accumulating along the direction dimension, and the maximum of these values is then taken along the distance dimension; this maximum is larger than the maximum obtained in the same way when the second to-be-processed speed is inaccurate, so this value can be used as the quality evaluation index. The second to-be-processed direction dimensions of the second to-be-processed image with the largest second quality evaluation index are therefore concentrated on a line close to a straight line, so the second to-be-processed speed corresponding to the target to-be-processed image determined at this point is closer to the real speed.
With reference to any one of the first aspect of the embodiment of the present application to the fifth implementation manner of the first aspect of the embodiment of the present application, in the sixth implementation manner of the first aspect of the embodiment of the present application, echo data also needs to be obtained, and the echo data is divided by a preset step size to obtain a plurality of echo sub data.
In this embodiment, the echo data is divided to obtain the plurality of echo sub data, so that the speed estimation method described in the above embodiment can be performed on each echo sub data, thereby improving the feasibility of the present solution.
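A minimal sketch of dividing the echo data into echo sub-data at a preset step size (in slow-time pulses) might look as follows; the array layout (pulses along axis 0) and the names are assumptions used only for illustration.

```python
def split_echo(echo, step):
    # echo: 2-D array of raw echo data, axis 0 = slow time (pulses),
    # axis 1 = fast time (range samples); an assumed layout.
    # Each block of `step` consecutive pulses forms one echo sub-data.
    n_sub = echo.shape[0] // step
    return [echo[i * step:(i + 1) * step] for i in range(n_sub)]
```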
With reference to the sixth implementation manner of the first aspect of the embodiment of the present application, in the seventh implementation manner of the first aspect of the embodiment of the present application, the estimated speed of the echo data is obtained by performing filtering processing on the estimated speeds of the multiple echo sub-data.
In this embodiment, the velocity estimation error is further reduced by the filtering process, thereby improving the velocity estimation accuracy.
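The embodiment does not specify which filter is used; as one possible choice, a simple moving-average filter over the per-sub-data estimates is sketched below (a median filter would be another option). The names and the window length are assumptions.

```python
import numpy as np

def smooth_speeds(sub_speeds, window=3):
    # sub_speeds: estimated speed of each echo sub-data, in slow-time order.
    # A moving average is only one possible filtering choice; the embodiment
    # merely states that a filtering process is applied.
    v = np.asarray(sub_speeds, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(v, kernel, mode="same")
```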
With reference to any one of the first aspect of the embodiment of the present application to the seventh implementation manner of the first aspect of the embodiment of the present application, in an eighth implementation manner of the first aspect of the embodiment of the present application, a direction dimension of the imaging region is greater than a direction dimension of the local region, and a distance dimension of the imaging region is greater than a distance dimension of the local region.
In this embodiment, the estimated speed of the echo sub-data is determined using a local area within the imaging area, which reduces the amount of computation while improving the accuracy of the speed estimation, thereby improving the efficiency of the speed estimation.
A second aspect of an embodiment of the present application provides a speed estimation apparatus, including:
a determination module for determining a local region from the imaging region;
the processing module is used for imaging the echo subdata in the local area according to a plurality of first speeds to be processed to obtain a plurality of first images to be processed, wherein the plurality of first speeds to be processed are obtained according to a preset speed range, the echo subdata is obtained by dividing echo data, and each first image to be processed corresponds to one first speed to be processed;
the determining module is further used for determining a target image to be processed from the plurality of first images to be processed;
the determining module is further configured to determine a first to-be-processed speed corresponding to the target to-be-processed image as an estimated speed of the echo sub-data, where the estimated speed of the echo sub-data is used to obtain an estimated speed of the echo data.
In an optional embodiment of the present application, the speed estimation apparatus further comprises a dividing module;
the dividing module is used for dividing the preset speed range at a first preset speed interval to obtain the plurality of first to-be-processed speeds before the processing module performs imaging processing on the echo sub-data in the local area according to the plurality of first to-be-processed speeds to obtain the plurality of first images to be processed.
In an optional implementation manner of the present application, the determining module is specifically configured to:
superposing the plurality of first to-be-processed direction dimensions of each first to-be-processed image along the same direction to obtain a plurality of first superposed direction dimensions of each first to-be-processed image;
determining a first superposition direction dimension with the largest value in the plurality of first superposition direction dimensions of each first image to be processed as a first quality evaluation index of each first image to be processed;
and determining the first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as the target to-be-processed image.
In an optional implementation manner of the present application, the determining module is specifically configured to:
determining a first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as a first pre-estimated image;
imaging the first estimated image according to a plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images, wherein the plurality of second to-be-processed speeds are obtained according to a first speed range, and each second to-be-processed image corresponds to one second to-be-processed speed;
and determining a target image to be processed from the plurality of second images to be processed.
In an optional implementation manner of the present application, the determining module is further configured to determine a first speed range based on a first to-be-processed speed corresponding to the first estimated image before the processing module performs imaging processing on the first estimated image according to the plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images;
and the dividing module is also used for dividing the first speed range at a second preset speed interval to obtain a plurality of second speeds to be processed.
In an optional implementation manner of the present application, the determining module is specifically configured to:
superposing a plurality of second to-be-processed direction dimensions of each second to-be-processed image along the same direction to obtain a plurality of second superposed direction dimensions of each second to-be-processed image;
determining a second superposition direction dimension with the largest value in the plurality of second superposition direction dimensions of each second image to be processed as a second quality evaluation index of each second image to be processed;
and determining the second image to be processed with the largest numerical value in the second quality evaluation indexes of the plurality of second images to be processed as the target image to be processed.
In an optional implementation manner of the present application, the speed estimation apparatus further includes an obtaining module;
the acquisition module is used for acquiring echo data;
the dividing module is further used for dividing the echo data by a preset step length to obtain a plurality of echo subdata.
In an optional implementation manner of the present application, the processing module is further configured to perform filtering processing on the estimated speed of the multiple echo sub-data to obtain the estimated speed of the echo data.
In an alternative embodiment of the present application, the direction dimension of the imaging region is larger than the direction dimension of the local region, and the distance dimension of the imaging region is larger than the distance dimension of the local region.
In a third aspect, a vehicle is provided that includes a speed estimation device that performs any one of the possible implementations of the second aspect.
In a fourth aspect, a processor is provided, comprising: input circuit, output circuit and processing circuit. The processing circuit is configured to receive a signal through the input circuit and transmit a signal through the output circuit, so that the processor performs the method in any one of the possible implementations of the first aspect.
In a specific implementation process, the processor may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, and the like. The input signal received by the input circuit may be received and input by, for example and without limitation, a receiver, the signal output by the output circuit may be output to and transmitted by a transmitter, for example and without limitation, and the input circuit and the output circuit may be the same circuit that functions as the input circuit and the output circuit, respectively, at different times. The embodiment of the present application does not limit the specific implementation manner of the processor and various circuits.
In a fifth aspect, a speed estimation apparatus is provided that includes a communication interface and a processor. The communication interface is coupled with the processor. The communication interface is used for inputting and/or outputting information. The information includes at least one of instructions and data. The processor is configured to execute a computer program to cause the speed estimation apparatus to perform the method of any of the possible implementations of the first aspect.
Optionally, the number of the processors is one or more, and the number of the memories is one or more.
In a sixth aspect, a speed estimation apparatus is provided that includes a processor and a memory. The processor is configured to read instructions stored in the memory and to receive signals via the receiver and transmit signals via the transmitter, so that the apparatus performs the method of any of the possible implementations of the first aspect.
Optionally, the number of the processors is one or more, and the number of the memories is one or more.
Alternatively, the memory may be integral to the processor or provided separately from the processor.
In a specific implementation process, the memory may be a non-transient memory, such as a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips.
It will be appreciated that the relevant information interaction process, e.g., sending a message, may be the process of outputting a message from the processor, and receiving a message may be the process of inputting a received message to the processor. In particular, the information output by the processor may be output to a transmitter and the input information received by the processor may be from a receiver. The transmitter and receiver may be collectively referred to as a transceiver, among others.
The speed estimation device in the fifth aspect and the sixth aspect may be a chip, the processor may be implemented by hardware or may be implemented by software, and when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a seventh aspect, a computer program product is provided, the computer program product comprising: computer program (also called code, or instructions), which when executed, causes a computer to perform the method of any of the possible implementations of the first aspect described above.
In an eighth aspect, a computer-readable storage medium is provided, which stores a computer program (which may also be referred to as code or instructions) that, when executed on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a ninth aspect, the present application provides a chip system, which includes a processor and an interface, where the interface is used for acquiring a program or an instruction, and the processor is used for calling the program or the instruction to implement, or to support the speed estimation apparatus in implementing, the functions related to the first aspect.
In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the speed estimation device. The chip system may be formed by a chip, or may include a chip and other discrete devices.
It should be noted that beneficial effects brought by the embodiments of the second aspect to the ninth aspect of the present application can be understood by referring to the embodiments of the first aspect, and therefore, repeated descriptions are omitted.
Drawings
FIG. 1 is a schematic diagram of an echo data acquisition scenario provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of a velocity estimation method based on echo data according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of determining a local region from an imaging region in an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of imaging a local region in an embodiment of the present application;
FIG. 5 is a schematic flowchart of a rear projection imaging algorithm in an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of a plurality of first to-be-processed images in an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of a plurality of first to-be-processed images in the embodiment of the present application;
FIG. 8 is a flowchart illustrating a process of determining a target to-be-processed image from a plurality of first to-be-processed images according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of a speed estimation result in the embodiment of the present application;
FIG. 10 is a schematic flow chart illustrating a method for velocity estimation based on echo data according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a speed estimation device in an embodiment of the present application.
Detailed Description
In order to make the above objects, technical solutions and advantages of the present application more comprehensible, detailed descriptions are provided below. The detailed description sets forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Since these block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within these block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or any combination thereof. The terms "first," "second," "third," "fourth," and the like in the description and in the claims and drawings of the present application, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to better understand the echo data-based speed estimation method and apparatus disclosed in the embodiments of the present application, the system architecture used in the embodiments of the present application is described below. Referring to fig. 1, fig. 1 is a schematic view of an echo data acquisition scene provided in an embodiment of the present application. As shown in fig. 1, echo data is mainly acquired by a synthetic aperture radar 120, which is disposed on the top of a mobile carrier; the synthetic aperture radar 120 may also be disposed on other parts of the mobile carrier, which is not limited herein. The mobile carrier may be, for example, a vehicle 100, an unmanned aerial vehicle, a network device, and the like, and the vehicle 100 may be a car, a truck, a motorcycle, a bus, an amusement ride, a playground vehicle, construction equipment, a train, and the like, which is not particularly limited in the embodiments of the present application.
Based on this, the synthetic aperture radar needs to move continuously during data recording to form the synthetic aperture, and the echo properties are closely related to the radar's motion speed. Therefore, in the imaging process, the relative motion speed between the synthetic aperture radar and the target is one of the key parameters required in the data processing of the SAR imaging algorithm. Whether the recorded echo can be accurately matched with the current radar speed directly affects the imaging quality. When a vehicle serves as the working platform of the radar, it is difficult for the vehicle to maintain a stable speed for a long time in a road environment, which increases the imaging difficulty of vehicle-mounted SAR. Therefore, obtaining the speed information of the radar while the echo is recorded is an indispensable part of obtaining high-quality imaging results. At present, SAR imaging systems mainly work on platforms such as airplanes or satellites, in which high-precision inertial navigation systems are installed, so that accurate speed information can be provided for the imaging systems. However, on a vehicle platform, applying such an inertial navigation system for imaging incurs a high hardware cost, increases the complexity of the system, and the inertial sensor system introduces errors under vibration, so a speed estimation method that does not rely on an inertial sensor is needed.
In order to solve the above problem, embodiments of the present application provide a velocity estimation method based on echo data, which can complete velocity estimation without depending on an inertial sensor, and reduce hardware cost. Referring to fig. 2, fig. 2 is a schematic flow chart of a speed estimation method based on echo data according to an embodiment of the present application, and as shown in fig. 2, specific steps of the speed estimation method based on echo data are as follows.
S201, determining a local area from the imaging area.
In the present embodiment, the speed estimation device needs to determine a local region from the imaging region. Specifically, the direction dimension of the imaging region is larger than the direction dimension of the local region, and the distance dimension of the imaging region is larger than the distance dimension of the local region. For example, if the distance dimension of the imaging region is 0 to 20 meters and the direction dimension is 0 to 20 meters, the distance dimension of the local region should be less than 20 meters and the direction dimension should be less than 20 meters.
For convenience of understanding, taking as an example an imaging area whose distance dimension is 0 to 20 meters and whose direction dimension is 0 to 20 meters, please refer to fig. 3, which is a schematic diagram of an embodiment of determining a local area from the imaging area in the embodiment of the present application. As shown in fig. 3, the horizontal axis in fig. 3 is the distance dimension and the vertical axis is the direction dimension. Based on this, the distance dimension of the imaging region 301 is 0 to 20 meters and the direction dimension of the imaging region 301 is 0 to 20 meters, and a local region 302 is determined from the imaging region 301, the distance dimension of the local region 302 being 0 to 4 meters and the direction dimension being 0 to 6 meters. Based on fig. 3, fig. 4 is a schematic diagram of an embodiment of imaging in a local area in the embodiment of the present application. As shown in fig. 4, the imaging result 401 obtained after imaging the imaging area has a distance dimension of 0 to 20 meters and a direction dimension of 0 to 20 meters, and the imaging result 402 obtained after imaging the local area has a distance dimension of 0 to 4 meters and a direction dimension of 0 to 6 meters.
It should be understood that the foregoing examples are only used for understanding the present solution, and in practical applications, the local region may be located at other positions of the imaging region, and the specific distance dimension and the specific direction dimension of the local region need to be determined by performing experiments and/or statistics based on a large amount of data, so the foregoing examples should not be construed as limiting the present solution.
S202, imaging the echo sub-data in the local area according to the first to-be-processed speeds to obtain a plurality of first to-be-processed images.
In this embodiment, based on the scenario shown in fig. 1, after the speed estimation device acquires the echo data through the synthetic aperture radar, the echo data is divided to obtain a plurality of echo sub data. Based on this, the present embodiment will first describe how to perform velocity estimation on any echo sub data of the multiple echo sub data.
Firstly, the speed estimation device obtains a plurality of first to-be-processed speeds according to a preset speed range. The preset speed range depends on the kind of mobile carrier on which the speed estimation device is arranged: if the mobile carrier is a vehicle, the preset speed range is the driving speed of the vehicle, and because the driving speed of each vehicle is different, the specific preset speed range also needs to be determined flexibly according to the actual situation. Specifically, the speed estimation device divides the preset speed range at a first preset speed interval to obtain the plurality of first to-be-processed speeds. For example, if the driving speed range of the vehicle is 0 to 36 kilometers per hour (km/h), the preset speed range may be determined as 0 to 36 km/h, which corresponds to 0 to 10 meters per second (m/s); based on this, the first preset speed interval can be determined as 1 m/s, that is, dividing 0 to 10 m/s at 1 m/s yields 10 first to-be-processed speeds, namely 1 m/s, 2 m/s, 3 m/s, 4 m/s, 5 m/s, 6 m/s, 7 m/s, 8 m/s, 9 m/s and 10 m/s.
Based on this, the speed estimation apparatus performs imaging processing on the echo sub-data in the local area determined in step S201 according to the plurality of first to-be-processed speeds, to obtain a first to-be-processed image corresponding to each first to-be-processed speed. Specifically, in this embodiment, a Back Projection (BP) imaging algorithm is used to perform the imaging processing on the echo sub-data in the local area; that is, in dual-station (bistatic) synthetic aperture radar, the BP imaging algorithm performs imaging by back-projecting the echo sub-data onto each pixel of the local area and computing, for each pixel, the delay corresponding to the distance between the radar antenna and that image pixel in the echo sub-data, so as to obtain a first to-be-processed image.
For convenience of understanding, please refer to fig. 5, which is a schematic flowchart of the back projection imaging algorithm in the embodiment of the present application. As shown in fig. 5, in step S501, the acquired echo sub-data is first input. Then, in step S502, the echo sub-data is demodulated (Dechirp) to complete the distance-dimension pulse compression, so as to obtain a high-resolution range profile. Further, in step S503, the sampling intervals of the distance dimension and the direction dimension are selected according to the resolution, such that the sampling interval in each dimension is no greater than the resolution of that dimension. Based on the above steps, in step S504, an interval [-T_S/2, T_S/2] is taken to the left and right of the range line on which each pixel is located, where T_S is the complete synthetic aperture time of the target position, that is, the time taken for the target to be swept by a complete beam; taking T_S/2 on each side of the range line of the pixel means that the scene position corresponding to the pixel is illuminated by the beam during this period of time. Then, in step S5051, the time delays from each direction-dimension sampling point to all pixel points are calculated and the corresponding range gates are determined, and in step S5052, the position of the synthetic aperture radar is calculated and the corresponding direction-dimension units are determined. Based on the results of step S5051 and step S5052, in step S506, the compensation phase factor exp(j4πR_n/λ) is determined, where R_n is the distance from the radar at the current position point to each pixel and λ is the wavelength. In step S507, the signals on the accumulation curve are compensated with the phase factor exp(j4πR_n/λ) and coherently superimposed, thereby generating a first to-be-processed image in step S508. It should be understood that, in practical applications, the imaging algorithms applicable to SAR imaging may further include, but are not limited to, the Range Migration (RM) imaging algorithm, the Range-Doppler (RD) imaging algorithm, and the like, for performing imaging processing on the echo sub-data in the local region; all possibilities are not described in detail here.
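To make the back-projection step concrete, a simplified monostatic, straight-line-trajectory sketch in Python is given below. It assumes range-compressed echo sub-data (one range profile per pulse), a constant candidate speed, and broadside geometry; interpolation, the beam-limiting over the synthetic aperture time T_S, and motion-error terms are omitted. All names, the geometry, and the data layout are illustrative assumptions rather than the exact implementation of the embodiment.

```python
import numpy as np

def bp_image(rc_data, v_candidate, prf, fc, r_axis, x_grid, y_grid):
    """Back-project range-compressed echo sub-data onto a local image grid.

    rc_data : (n_pulses, n_range) complex range-compressed data
    v_candidate : assumed platform speed (m/s), i.e. one first to-be-processed speed
    prf : pulse repetition frequency (Hz)
    fc : carrier frequency (Hz)
    r_axis : range value (m) of each fast-time sample
    x_grid, y_grid : direction-dimension and distance-dimension coordinates (m)
                     of the local-area pixels
    """
    c = 3e8
    lam = c / fc
    n_pulses = rc_data.shape[0]
    # Radar positions along the direction dimension under the candidate speed.
    radar_x = v_candidate * np.arange(n_pulses) / prf

    image = np.zeros((len(y_grid), len(x_grid)), dtype=complex)
    for p in range(n_pulses):
        for iy, y in enumerate(y_grid):
            for ix, x in enumerate(x_grid):
                # Distance R_n from the radar at its current position to the pixel.
                r_n = np.hypot(x - radar_x[p], y)
                # Nearest range gate (interpolation omitted for brevity).
                k = int(np.argmin(np.abs(r_axis - r_n)))
                # Compensate the phase factor exp(j*4*pi*R_n/lambda) and accumulate.
                image[iy, ix] += rc_data[p, k] * np.exp(1j * 4 * np.pi * r_n / lam)
    return image
```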
Therefore, based on the above algorithm, the echo sub-data can be subjected to imaging processing in the local region for each first to-be-processed speed, so as to obtain a first to-be-processed image corresponding to each first to-be-processed speed. To facilitate understanding, please refer to fig. 6, which is a diagram illustrating an embodiment of a plurality of first to-be-processed images in an embodiment of the present application. As shown in fig. 6, (A) in fig. 6 illustrates a first to-be-processed image A obtained by performing imaging processing on the echo sub-data in the local area according to a first to-be-processed speed A; similarly, (B) in fig. 6 illustrates a first to-be-processed image B obtained by performing imaging processing on the echo sub-data in the local area according to a first to-be-processed speed B, and (C) in fig. 6 illustrates a first to-be-processed image C obtained by performing imaging processing on the echo sub-data in the local area according to a first to-be-processed speed C. It should be understood that the foregoing examples are only used for understanding the present solution; in practical applications, the specific first to-be-processed image obtained for a given first to-be-processed speed also depends on the imaging algorithm used.
S203, determining a target image to be processed from the plurality of first images to be processed.
In the present embodiment, the speed estimation apparatus determines the target to-be-processed image among the plurality of first to-be-processed images obtained in step S202. Specifically, when the plurality of first to-be-processed images are obtained with the BP imaging algorithm, if the first to-be-processed speed corresponding to a first to-be-processed image is closer to the real speed, a nearly straight line appears in the first to-be-processed image after the imaging processing; conversely, if the difference between the first to-be-processed speed corresponding to the first to-be-processed image and the real speed is larger, a deformed line appears in the first to-be-processed image after the imaging processing.
Describing this further on the basis of fig. 6, fig. 7 is a schematic diagram of another embodiment of a plurality of first to-be-processed images in the embodiment of the present application. As shown in fig. 7, a line 701 can be seen in diagram (A) of fig. 7; the line 701 is deformed toward the right in the direction dimension, so it can be determined that the first to-be-processed speed corresponding to the first to-be-processed image shown in diagram (A) of fig. 7 is smaller than the real speed. A line 702 can be seen in diagram (B) of fig. 7; the line 702 is substantially perpendicular to the distance dimension, so it can be determined that the first to-be-processed speed corresponding to the first to-be-processed image shown in diagram (B) of fig. 7 is close to the real speed. Next, a line 703 can be seen in diagram (C) of fig. 7; the line 703 is deformed toward the left in the direction dimension, so it can be determined that the first to-be-processed speed corresponding to the first to-be-processed image shown in diagram (C) of fig. 7 is greater than the real speed.
Specifically, when the first to-be-processed speed corresponding to a first to-be-processed image is closer to the real speed, the line in the first to-be-processed image is closer to being perpendicular to the distance dimension, that is, the line is relatively concentrated in the direction dimension. Therefore, when the first to-be-processed speed is accurate, values are obtained by accumulating along the direction dimension, and the maximum of these values is then taken along the distance dimension; this maximum is larger than the maximum obtained in the same way when the first to-be-processed speed is inaccurate, so this value can be used as the quality evaluation index. Based on this, the speed estimation apparatus can superimpose the plurality of first to-be-processed direction dimensions of each first to-be-processed image along the same direction to obtain the plurality of first superimposed direction dimensions of each first to-be-processed image, then determine the first superimposed direction dimension with the largest value among the plurality of first superimposed direction dimensions of each first to-be-processed image as the first quality evaluation index of that first to-be-processed image, and then determine the first to-be-processed image with the largest value among the first quality evaluation indexes of the plurality of first to-be-processed images as the target to-be-processed image.
For easy understanding, referring to fig. 8, fig. 8 is a schematic flow chart illustrating the process of determining a target to-be-processed image from a plurality of first to-be-processed images according to an embodiment of the present application. As shown in fig. 8, first, N first to-be-processed speeds can be obtained by the method described in step S202, where the N first to-be-processed speeds include a first to-be-processed speed A, a first to-be-processed speed B, and so on up to a first to-be-processed speed N. Then, by performing the imaging processing in the local area for the first to-be-processed speed A through the first to-be-processed speed N by the method described in step S202, the first to-be-processed image corresponding to each first to-be-processed speed can be obtained, for example, the first to-be-processed image A corresponding to the first to-be-processed speed A, the first to-be-processed image B corresponding to the first to-be-processed speed B, and the first to-be-processed image N corresponding to the first to-be-processed speed N.
Further, superimposing the first to-be-processed direction dimensions of each first to-be-processed image along the same direction yields the plurality of first superimposed direction dimensions of each first to-be-processed image; for example, the first to-be-processed image A yields a plurality of first superimposed direction dimensions A, the first to-be-processed image B yields a plurality of first superimposed direction dimensions B, and so on. Then, the first quality evaluation index is determined from the plurality of first superimposed direction dimensions of each first to-be-processed image; specifically, the first superimposed direction dimension with the largest value among the plurality of first superimposed direction dimensions is determined as the first quality evaluation index. For example, if the plurality of first superimposed direction dimensions A includes the values 3.0, 4.6, 6.4, 7.2, and 8.8, the largest value is 8.8, so the first quality evaluation index A is 8.8. Similarly, the other first quality evaluation indexes, such as the first quality evaluation index B and the first quality evaluation index N, can be obtained.
Based on the above, the values of the first quality evaluation index A through the first quality evaluation index N are compared, and the first to-be-processed image corresponding to the first quality evaluation index with the largest value is determined as the target to-be-processed image. For example, if the first quality evaluation index A is 8.8, the first quality evaluation index B is 9.8, the first quality evaluation index C is 8.6, and the first quality evaluation index N is 3.8, the first quality evaluation index B is the maximum value of the N first quality evaluation indexes, so the first to-be-processed image B can be determined as the target to-be-processed image.
Specifically, the first to-be-processed image includes a plurality of pixel points, and each pixel point has a corresponding direction dimension and distance dimension. Superimposing each first to-be-processed direction dimension of the first to-be-processed image along the same direction therefore actually means superimposing the direction dimensions of the pixel points that have the same distance dimension in the first to-be-processed image, so as to obtain the plurality of first superimposed direction dimensions. For example, assume the first to-be-processed image includes a pixel point 1, a pixel point 2, a pixel point 3, a pixel point 4, a pixel point 5, and a pixel point 6, where the coordinates of pixel point 1 are (3.8, 1.2), the coordinates of pixel point 2 are (2.8, 1.8), the coordinates of pixel point 3 are (2.0, 2.0), the coordinates of pixel point 4 are (1.6, 3.2), the coordinates of pixel point 5 are (0.8, 3.8), and the coordinates of pixel point 6 are (0.8, 4.2). Here, (3.8, 1.2) indicates that the distance dimension of pixel point 1 is 3.8 and its direction dimension is 1.2, and the distance dimension and direction dimension of the other pixel points can be read in the same way, which is not illustrated one by one here. Based on this, the distance-dimension distribution of the pixel points in the first to-be-processed image includes 3.8, 2.8, 2.0, 1.6 and 0.8; there is only one pixel point at each of the distance dimensions 3.8, 2.8, 2.0 and 1.6, while the distance dimension 0.8 includes pixel point 5 and pixel point 6. Superimposing the direction dimensions of the pixel points with the same distance dimension therefore means that the direction dimensions of pixel point 5 and pixel point 6 need to be superimposed, so 5 first superimposed direction dimensions can be obtained, with values of 1.2, 1.8, 2.0, 3.2 and 8.0 (3.8 + 4.2 = 8.0), respectively; the first quality evaluation index of this first to-be-processed image can therefore be determined to be 8.0. It should be understood that the foregoing examples are for the purpose of understanding the present solution and are not to be construed as limiting it.
S204, determining a first to-be-processed speed corresponding to the target to-be-processed image as an estimated speed of the echo sub-data.
In this embodiment, since the step S203 can determine the target to-be-processed image and the first to-be-processed speed corresponding to the target to-be-processed image is the closest to the real speed among the plurality of first to-be-processed speeds, the speed estimation apparatus can determine the first to-be-processed speed corresponding to the target to-be-processed image as the estimated speed of the echo sub-data. And the estimated speed of the echo data can be obtained based on the estimated speeds of the echo sub data.
Further, in this embodiment, the speed estimation interval can be narrowed based on the estimated speed of the echo sub-data determined in step S204, and the speed estimation is further performed, that is, the similar methods shown in step S202 to step S204 are repeatedly performed, so as to further improve the speed estimation accuracy.
Specifically, through the process shown in step S203, the speed estimation apparatus determines the first to-be-processed image with the largest value among the first quality evaluation indexes of the plurality of first to-be-processed images as the first pre-estimated image (that is, the target to-be-processed image introduced in step S203). At this time, the speed estimation device does not perform step S204 to determine the estimated speed of the echo sub-data, but instead determines a first speed range based on the first to-be-processed speed corresponding to the first pre-estimated image. For example, if the first to-be-processed speed corresponding to the first pre-estimated image is 3 m/s, the first speed range may be 2.5 to 3.5 m/s; it should be understood that, in practical applications, the first speed range may also be 2.4 to 3.4 m/s or 2.6 to 3.6 m/s, and the specific way of setting the first speed range is not limited here, as long as the first speed range includes the first to-be-processed speed. Based on this, the first speed range is divided at a second preset speed interval to obtain a plurality of second to-be-processed speeds; for example, the second preset speed interval can be determined as 0.1 m/s, that is, 2.5 to 3.5 m/s is divided at 0.1 m/s to obtain 11 second to-be-processed speeds, namely 2.5 m/s, 2.6 m/s, 2.7 m/s, 2.8 m/s, 2.9 m/s, 3.0 m/s, 3.1 m/s, 3.2 m/s, 3.3 m/s, 3.4 m/s and 3.5 m/s.
Further, the velocity estimation apparatus superimposes the plurality of second to-be-processed direction dimensions of each second to-be-processed image along the same direction by a method similar to that in the step S203 to obtain a plurality of second superimposed direction dimensions of each second to-be-processed image, determines the second superimposed direction dimension with the largest value in the plurality of second superimposed direction dimensions of each second to-be-processed image as the second quality evaluation index of each second to-be-processed image, and then determines the second to-be-processed image with the largest value in the second quality evaluation indexes of the plurality of second to-be-processed images as the target to-be-processed image. The detailed manner is similar to step S203, and is not described herein again.
At this point, the speed estimation apparatus has performed methods similar to those shown in step S202 to step S204 one additional time; to further improve the speed estimation accuracy, methods similar to those shown in step S202 to step S204 can be performed yet again. Balancing accuracy against power consumption in actual execution, the speed estimation device in this embodiment executes steps S202 to S204 a total of 3 times to determine the target to-be-processed image; in practical applications, the number of repetitions can be flexibly determined according to requirements and is not limited here.
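For ease of understanding only, the following minimal Python sketch outlines the coarse-to-fine search over candidate speeds that results from repeating methods similar to steps S202 to S204. It relies on a hypothetical helper that the present application does not define, image_at(echo_sub, v), assumed to perform the imaging processing of the local region at the candidate speed v and return the resulting pixel points, together with quality_index as sketched earlier; the interval schedule and the half-width used to narrow the speed range are likewise illustrative assumptions.

    import numpy as np

    def estimate_speed(echo_sub, v_min, v_max, intervals=(1.0, 0.1, 0.01)):
        """Coarse-to-fine search over candidate speeds for one piece of echo sub-data."""
        best_v = v_min
        for i, step in enumerate(intervals):
            candidates = np.arange(v_min, v_max + step / 2, step)
            # image the local region at every candidate speed and score each image
            scores = [quality_index(image_at(echo_sub, v)) for v in candidates]
            best_v = float(candidates[int(np.argmax(scores))])
            if i + 1 < len(intervals):
                # narrow the speed range around the current best, e.g. 3 m/s -> 2.5 to 3.5 m/s
                half = 5 * intervals[i + 1]
                v_min, v_max = best_v - half, best_v + half
        return best_v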
For ease of understanding, the following description takes as an example a speed estimation device provided in a vehicle, with a 77 GHz millimeter-wave radar adopted as the signal transmitting and receiving device, and with the speed estimation device executing methods similar to those shown in steps S202 to S204 3 times. Referring to fig. 9, fig. 9 is a schematic view of an embodiment of a speed estimation result in an embodiment of the present application. As shown in fig. 9, (A) in fig. 9 shows the images obtained by performing imaging processing on the echo sub-data after the methods shown in steps S202 to S204 are performed for the first time, including an image 901 obtained by performing imaging processing on the imaging region and an image 902 obtained by performing imaging processing on the local region. (B) in fig. 9 shows the images obtained after the methods shown in steps S202 to S204 are performed for the second time, including an image 903 obtained by performing imaging processing on the imaging region and an image 904 obtained by performing imaging processing on the local region. (C) in fig. 9 shows the images obtained after the methods shown in steps S202 to S204 are performed for the third time, including an image 905 obtained by performing imaging processing on the imaging region and an image 906 obtained by performing imaging processing on the local region.
Based on this, as shown in (A) of fig. 9, after the methods shown in step S202 to step S204 are performed for the first time, since a rough speed estimation is performed over a large speed range, the line 9021 included in the image 902 cannot be focused normally and shows large deformation, which is even more obvious in the image 901; at this time, the determined estimated speed still differs from the real speed. Next, as shown in (B) of fig. 9, after the methods similar to steps S202 to S204 are performed for the second time, the deformation of the lines in the image 903 and the image 904 is greatly improved, and the estimated speed determined at this time substantially coincides with the real speed. Again, as shown in (C) of fig. 9, after the methods similar to steps S202 to S204 are performed for the third time, the result is difficult to distinguish from the image shown in (B) of fig. 9 with the naked eye alone; the improvement is reflected in the data: in the image shown in (B) of fig. 9 the quality evaluation index obtained in the local region is 2.97E6, whereas in the image shown in (C) of fig. 9 the quality evaluation index obtained in the local region is 4.21E6. It can thus be seen that the speed estimation accuracy can be further improved by repeating the foregoing steps.
Therefore, in the method shown in the embodiment of fig. 2, the estimated speed of the echo sub-data can be determined based on the echo data, and the estimated speed of the echo data can be obtained from the estimated speeds of the echo sub-data, so that speed estimation can be completed without depending on an inertial sensor, and the hardware cost can be reduced. Moreover, since the estimated speed of the echo sub-data is determined based on the local region within the imaging region, the amount of calculation can be reduced while the speed estimation accuracy is improved, thereby improving the speed estimation efficiency.
The embodiment of fig. 2 mainly illustrates how the estimated speed of the echo sub-data is determined; how the estimated speed of the echo data is determined from the estimated speeds of the echo sub-data is described below. Please refer to fig. 10, which is another schematic flow diagram of the echo data-based speed estimation method in the embodiment of the present application. As shown in fig. 10, the specific steps of the echo data-based speed estimation method are as follows.
S1001, acquiring echo data.
In this embodiment, based on the scenario shown in fig. 1, the speed estimation apparatus can acquire echo data by using a synthetic aperture radar. The specific manner is similar to that described in step S202, and is not described herein again.
S1002, dividing the echo data by a preset step length to obtain a plurality of echo subdata.
In this embodiment, the speed estimation device divides the echo data by a preset step length to obtain a plurality of pieces of echo sub-data. Specifically, the preset step length means that the echo sub-data used by the current frame is advanced by a fixed length compared with the echo sub-data used by the previous frame, while the pieces of echo sub-data remain the same size. That the pieces of echo sub-data are the same size specifically means that the number of sampling points included in each frame of image is the same. For example, assuming that each frame of image is obtained from 500 azimuth sampling points and the preset step length is 100, the range of the echo sub-data used by the previous frame is 500 to 1000, and the range of the echo sub-data used by the current frame is 600 to 1100. The echo data is thus processed in a stepping segmentation manner, and a plurality of pieces of echo sub-data can be obtained.
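For ease of understanding only, the following minimal Python sketch shows the stepping segmentation described above, assuming 500 azimuth sampling points per frame and a preset step length of 100 as in the example; the function and parameter names are illustrative.

    def split_echo(echo, window=500, step=100):
        """echo: sequence of azimuth sampling points; returns equally sized pieces of echo sub-data."""
        subs = []
        start = 0
        while start + window <= len(echo):
            # consecutive frames overlap by (window - step) sampling points
            subs.append(echo[start:start + window])
            start += step
        return subs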
S1003, filtering the estimated speed of the echo sub-data to obtain the estimated speed of the echo data.
In this embodiment, the speed estimation device performs filtering processing on the estimated speed of each echo sub-data, so as to further reduce the estimation error. Specifically, the filtering method may include, but is not limited to, Kalman Filtering (KF), Extended Kalman Filtering (EKF), Sigma Point Kalman Filtering (SPKF), and the like, and is not limited herein.
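For ease of understanding only, the following minimal Python sketch applies a scalar Kalman filter to the sequence of per-sub-data speed estimates; the noise variances q and r, and the choice of a plain Kalman filter rather than EKF or SPKF, are illustrative assumptions rather than values prescribed by the present application.

    def kalman_smooth(speeds, q=1e-3, r=1e-1):
        """speeds: per-sub-data speed estimates; returns the filtered sequence."""
        x, p = speeds[0], 1.0          # initial state and covariance
        filtered = []
        for z in speeds:
            p = p + q                  # predict: speed assumed roughly constant between frames
            k = p / (p + r)            # Kalman gain
            x = x + k * (z - x)        # update with the new per-frame estimate
            p = (1.0 - k) * p
            filtered.append(x)
        return filtered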
The solutions provided in the embodiments of the present application have been described above mainly from the perspective of the method. It can be understood that, to implement the foregoing functions, the speed estimation device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation should not be considered to go beyond the scope of the present application.
The embodiment of the present application may perform division of function modules on the speed estimation device based on the above method example, for example, each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation.
Based on the foregoing, the speed estimation device in the present application is described in detail below. Please refer to fig. 11, which is a schematic structural diagram of the speed estimation device in the embodiment of the present application. As shown in the figure, the speed estimation device 1100 includes:
a determining module 1101 for determining a local region from the imaging region;
the processing module 1102 is configured to perform imaging processing on the echo sub-data in the local area according to a plurality of first speeds to be processed to obtain a plurality of first images to be processed, where the plurality of first speeds to be processed are obtained according to a preset speed range, the echo sub-data is obtained by dividing echo data, and each first image to be processed corresponds to one first speed to be processed;
a determining module 1101, further configured to determine a target image to be processed from the plurality of first images to be processed;
the determining module 1101 is further configured to determine a first to-be-processed speed corresponding to the target to-be-processed image as an estimated speed of the echo sub-data, where the estimated speed of the echo sub-data is used to obtain an estimated speed of the echo data.
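For ease of understanding only, the following minimal Python sketch shows one possible way of wiring together the modules described above, reusing the split_echo, estimate_speed and kalman_smooth helpers sketched earlier; the class name, method name and default parameters are illustrative assumptions rather than the implementation of the present application.

    class SpeedEstimationDevice:
        def __init__(self, window=500, step=100, speed_range=(0.0, 10.0)):
            self.window, self.step, self.speed_range = window, step, speed_range

        def run(self, echo):
            subs = split_echo(echo, self.window, self.step)                   # dividing module
            estimates = [estimate_speed(s, *self.speed_range) for s in subs]  # processing + determining modules
            return kalman_smooth(estimates)                                   # estimated speed of the echo data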
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the speed estimation device 1100 further comprises a dividing module 1103;
the dividing module 1103 is configured to divide the estimated speed range at a first preset speed interval to obtain a plurality of first speeds to be processed before the processing module performs imaging processing on the echo sub-data in the local area according to the plurality of first speeds to be processed to obtain a plurality of first images to be processed.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the determining module 1101 is specifically configured to:
superposing the plurality of first to-be-processed direction dimensions of each first to-be-processed image along the same direction to obtain a plurality of first superposed direction dimensions of each first to-be-processed image;
determining a first superposition direction dimension with the largest value in the plurality of first superposition direction dimensions of each first image to be processed as a first quality evaluation index of each first image to be processed;
and determining the first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as the target to-be-processed image.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the determining module 1101 is specifically configured to:
determining a first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as a first pre-estimated image;
imaging the first estimated image according to a plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images, wherein the plurality of second to-be-processed speeds are obtained according to a first speed range, and each second to-be-processed image corresponds to one second to-be-processed speed;
and determining a target image to be processed from the plurality of second images to be processed.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the determining module 1101 is further configured to determine a first speed range based on a first to-be-processed speed corresponding to the first estimated image before the processing module performs imaging processing on the first estimated image according to the plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images;
the dividing module 1103 is further configured to divide the first speed range at a second preset speed interval to obtain a plurality of second speeds to be processed.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the determining module 1101 is specifically configured to:
superposing a plurality of second to-be-processed direction dimensions of each second to-be-processed image along the same direction to obtain a plurality of second superposed direction dimensions of each second to-be-processed image;
determining a second superposition direction dimension with the largest value in the plurality of second superposition direction dimensions of each second image to be processed as a second quality evaluation index of each second image to be processed;
and determining the second image to be processed with the largest numerical value in the second quality evaluation indexes of the plurality of second images to be processed as the target image to be processed.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the speed estimation device 1100 also includes an acquisition module 1104;
an obtaining module 1104, configured to obtain echo data;
the dividing module 1103 is further configured to divide the echo data by a preset step length to obtain a plurality of echo sub-data.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the speed estimation device 1100 provided by the embodiment of the present application,
the processing module 1102 is further configured to perform filtering processing on the estimated speed of the multiple echo sub-data to obtain the estimated speed of the echo data.
In an alternative implementation manner, on the basis of the embodiment corresponding to fig. 11, in another embodiment of the velocity estimation apparatus 1100 provided in the embodiment of the present application, a direction dimension of the imaging region is greater than a direction dimension of the local region, and a distance dimension of the imaging region is greater than a distance dimension of the local region.
The present application further provides a speed estimation apparatus comprising at least one processor configured to execute a computer program stored in a memory to cause the speed estimation apparatus to perform a method performed by the speed estimation apparatus in any of the above method embodiments.
It should be understood that the speed estimation device described above may be one or more chips. For example, the speed estimation device may be a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a system on chip (SoC), a Central Processing Unit (CPU), a Network Processor (NP), a digital signal processing circuit (DSP), a Microcontroller (MCU), a Programmable Logic Device (PLD), or other integrated chips.
The embodiment of the application also provides a speed estimation device which comprises a processor and a communication interface. The communication interface is coupled with the processor. The communication interface is used for inputting and/or outputting information. The information includes at least one of instructions and data. The processor is configured to execute a computer program to cause the speed estimation apparatus to perform the method performed by the speed estimation apparatus in any of the above method embodiments.
The embodiment of the application also provides a speed estimation device which comprises a processor and a memory. The memory is used for storing a computer program, and the processor is used for calling and running the computer program from the memory so as to enable the speed estimation device to execute the method executed by the speed estimation device in any method embodiment.
In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of the method disclosed in connection with the embodiments of the present application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here again.
According to the method provided by the embodiment of the application, the application also provides a vehicle which comprises a speed estimation device for executing the embodiment shown in the figures 2, 5, 8 and 10.
According to the method provided by the embodiment of the present application, the present application further provides a computer program product, which includes: computer program code which, when run on a computer, causes the computer to perform the methods performed by the respective units in the embodiments shown in fig. 2, 5, 8 and 10.
According to the method provided by the embodiment of the application, the application also provides a computer readable storage medium, which stores program codes, and when the program codes are run on a computer, the computer is caused to execute the method executed by each unit in the embodiments shown in fig. 2, fig. 5, fig. 8 and fig. 10.
The modules in the above-mentioned device embodiments and the units in the method embodiments completely correspond to each other, and the corresponding steps are executed by the corresponding modules or units, for example, the communication unit (transceiver) executes the steps of receiving or transmitting in the method embodiments, and other steps besides transmitting and receiving may be executed by the processing unit (processor). The functions of the specific elements may be referred to in the respective method embodiments. The number of the processors may be one or more.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A method of velocity estimation based on echo data, the method comprising:
determining a local region from the imaging region;
imaging echo subdata in the local area according to a plurality of first speeds to be processed to obtain a plurality of first images to be processed, wherein the plurality of first speeds to be processed are obtained according to a preset speed range, the echo subdata is obtained by dividing echo data, and each first image to be processed corresponds to one first speed to be processed;
determining a target image to be processed from the plurality of first images to be processed;
and determining a first to-be-processed speed corresponding to the target to-be-processed image as the estimated speed of the echo subdata, wherein the estimated speed of the echo subdata is used for obtaining the estimated speed of the echo data.
2. The method of claim 1, wherein before the imaging processing the echo sub-data in the local region according to the first plurality of to-be-processed speeds to obtain a first plurality of to-be-processed images, the method further comprises:
and dividing the estimated speed range by a first preset speed interval to obtain a plurality of first speeds to be processed.
3. The method according to claim 2, wherein the determining a target to-be-processed image from the plurality of first to-be-processed images comprises:
superposing the plurality of first to-be-processed direction dimensions of each first to-be-processed image along the same direction to obtain a plurality of first superposed direction dimensions of each first to-be-processed image;
determining a first superposition direction dimension with the largest value in the plurality of first superposition direction dimensions of each first image to be processed as a first quality evaluation index of each first image to be processed;
and determining the first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as the target to-be-processed image.
4. The method according to claim 3, wherein the determining the first to-be-processed image with the largest value in the first quality assessment indexes of the plurality of first to-be-processed images as the target to-be-processed image comprises:
determining a first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as a first pre-estimated image;
imaging the first estimated image according to a plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images, wherein the plurality of second to-be-processed speeds are obtained according to the first speed range, and each second to-be-processed image corresponds to one second to-be-processed speed;
determining the target image to be processed from the plurality of second images to be processed.
5. The method of claim 4, wherein before the imaging the first pre-estimated image according to the second plurality of speeds to be processed to obtain a second plurality of images to be processed, the method further comprises:
determining a first speed range based on a first to-be-processed speed corresponding to the first pre-estimated image;
and dividing the first speed range at a second preset speed interval to obtain a plurality of second speeds to be processed.
6. The method of claim 5, wherein determining the target to-be-processed image from the second plurality of to-be-processed images comprises:
superposing a plurality of second to-be-processed direction dimensions of each second to-be-processed image along the same direction to obtain a plurality of second superposed direction dimensions of each second to-be-processed image;
determining a second superposition direction dimension with the largest value in the plurality of second superposition direction dimensions of each second image to be processed as a second quality evaluation index of each second image to be processed;
and determining the second image to be processed with the largest numerical value in the second quality evaluation indexes of the plurality of second images to be processed as the target image to be processed.
7. The method according to any one of claims 1 to 6, further comprising:
acquiring the echo data;
and dividing the echo data by a preset step length to obtain a plurality of echo subdata.
8. The method of claim 7, further comprising:
and filtering the estimated speeds of the echo sub-data to obtain the estimated speed of the echo data.
9. The method of any one of claims 1 to 8, wherein a direction dimension of the imaging region is greater than a direction dimension of the local region, and a distance dimension of the imaging region is greater than a distance dimension of the local region.
10. A speed estimation device, characterized by comprising:
a determination module for determining a local region from the imaging region;
the processing module is used for performing imaging processing on the echo subdata in the local area according to a plurality of first speeds to be processed to obtain a plurality of first images to be processed, wherein the plurality of first speeds to be processed are obtained according to a preset speed range, the echo subdata is obtained by dividing echo data, and each first image to be processed corresponds to one first speed to be processed;
the determining module is further configured to determine a target image to be processed from the plurality of first images to be processed;
the determining module is further configured to determine a first to-be-processed speed corresponding to the target to-be-processed image as the estimated speed of the echo sub-data, where the estimated speed of the echo sub-data is used to obtain the estimated speed of the echo data.
11. The speed estimation device according to claim 10, characterized in that the speed estimation device further comprises a dividing module;
the dividing module is configured to divide the estimated speed range at a first preset speed interval to obtain a plurality of first speeds to be processed before the processing module performs imaging processing on the echo sub-data in the local area according to the plurality of first speeds to be processed to obtain a plurality of first images to be processed.
12. The speed estimation device according to claim 11, characterized in that the determination module is specifically configured to:
superposing the plurality of first to-be-processed direction dimensions of each first to-be-processed image along the same direction to obtain a plurality of first superposed direction dimensions of each first to-be-processed image;
determining a first superposition direction dimension with the largest value in the plurality of first superposition direction dimensions of each first image to be processed as a first quality evaluation index of each first image to be processed;
and determining the first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as the target to-be-processed image.
13. The speed estimation device according to claim 12, characterized in that the determination module is specifically configured to:
determining a first to-be-processed image with the largest value in the first quality evaluation indexes of the plurality of first to-be-processed images as a first pre-estimated image;
imaging the first estimated image according to a plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images, wherein the plurality of second to-be-processed speeds are obtained according to the first speed range, and each second to-be-processed image corresponds to one second to-be-processed speed;
determining the target image to be processed from the plurality of second images to be processed.
14. The speed estimation device according to claim 13, wherein the determining module is further configured to determine a first speed range based on a first to-be-processed speed corresponding to the first pre-estimated image before the processing module performs imaging processing on the first pre-estimated image according to a plurality of second to-be-processed speeds to obtain a plurality of second to-be-processed images;
the dividing module is further configured to divide the first speed range at a second preset speed interval to obtain a plurality of second speeds to be processed.
15. The speed estimation device according to claim 14, characterized in that the determination module is specifically configured to:
superposing a plurality of second to-be-processed direction dimensions of each second to-be-processed image along the same direction to obtain a plurality of second superposed direction dimensions of each second to-be-processed image;
determining a second superposition direction dimension with the largest value in the plurality of second superposition direction dimensions of each second image to be processed as a second quality evaluation index of each second image to be processed;
and determining the second image to be processed with the largest numerical value in the second quality evaluation indexes of the plurality of second images to be processed as the target image to be processed.
16. The speed estimation device according to any one of claims 10 to 15, characterized in that the speed estimation device further comprises an acquisition module;
the acquisition module is used for acquiring the echo data;
the dividing module is further configured to divide the echo data by a preset step length to obtain a plurality of echo subdata.
17. The speed estimation device of claim 16, wherein the processing module is further configured to filter the estimated speeds of the echo sub-data to obtain the estimated speed of the echo data.
18. The velocity estimation apparatus according to any one of claims 10 to 17, characterized in that a direction dimension of the imaging region is larger than a direction dimension of the local region, and a distance dimension of the imaging region is larger than a distance dimension of the local region.
19. A vehicle characterized by comprising a speed estimation device according to any one of claims 10 to 18.
20. A chip comprising at least one processor communicatively coupled to at least one memory, the at least one memory having instructions stored therein; the instructions are to be executed by the at least one processor to perform the method of any one of claims 1 to 9.
21. A computer readable storage medium having stored therein instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1 to 9.
CN202180001251.XA 2021-03-25 2021-03-25 Echo data-based speed estimation method and device Active CN113196098B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/083004 WO2022198566A1 (en) 2021-03-25 2021-03-25 Speed estimation method and apparatus based on echo data

Publications (2)

Publication Number Publication Date
CN113196098A true CN113196098A (en) 2021-07-30
CN113196098B CN113196098B (en) 2022-05-17

Family

ID=76976963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180001251.XA Active CN113196098B (en) 2021-03-25 2021-03-25 Echo data-based speed estimation method and device

Country Status (2)

Country Link
CN (1) CN113196098B (en)
WO (1) WO2022198566A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2304306A1 (en) * 1999-12-17 2001-06-17 Sicom Systems, Ltd. Multi-channel moving target radar detection and imaging apparatus and method
JP2010230373A (en) * 2009-03-26 2010-10-14 Mitsubishi Space Software Kk Device, method and program for calculating vehicle speed
CN103472448A (en) * 2013-07-15 2013-12-25 中国科学院电子学研究所 SAR imaging method, device and system
CN108020834A (en) * 2017-11-14 2018-05-11 石家庄铁道大学 Based on moving target detecting method, device and the electronic equipment for improving EDPCA
CN110554385A (en) * 2019-07-02 2019-12-10 中国航空工业集团公司雷华电子技术研究所 Self-focusing imaging method and device for maneuvering trajectory synthetic aperture radar and radar system
CN111007512A (en) * 2019-12-27 2020-04-14 北京行易道科技有限公司 Vehicle-mounted radar imaging method and device and electronic equipment
CN111670381A (en) * 2018-01-23 2020-09-15 西门子交通有限公司 Method and device for determining the speed of a vehicle
CN112505692A (en) * 2020-10-21 2021-03-16 中山大学 Multiple-input multiple-output inverse synthetic aperture radar imaging method, system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7038612B2 (en) * 2003-08-05 2006-05-02 Raytheon Company Method for SAR processing without INS data
DE102015120659A1 (en) * 2015-11-27 2017-06-14 Ice Gateway Gmbh Classify one or more reflection objects

Also Published As

Publication number Publication date
WO2022198566A1 (en) 2022-09-29
CN113196098B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
US11927668B2 (en) Radar deep learning
CN107328411B (en) Vehicle-mounted positioning system and automatic driving vehicle
CN112558023B (en) Calibration method and device of sensor
US11506776B2 (en) Method and device with improved radar resolution
CA2579898C (en) Method for the processing and representing of ground images obtained by synthetic aperture radar systems (sar)
EP3825728A1 (en) Method and device to improve radar data using reference data background
CN111436216A (en) Method and system for color point cloud generation
US11467001B2 (en) Adjustment value calculation method
CN114636993A (en) External parameter calibration method, device and equipment for laser radar and IMU
EP3999876B1 (en) Method and device for detecting an environment
CN114111775B (en) Multi-sensor fusion positioning method and device, storage medium and electronic equipment
Lundquist et al. Joint ego-motion and road geometry estimation
JP2011017599A (en) Autonomous positioning program, autonomous positioning device and autonomous positioning method
Hu et al. Automotive squint-forward-looking SAR: High resolution and early warning
CN113240813B (en) Three-dimensional point cloud information determining method and device
CN113196098B (en) Echo data-based speed estimation method and device
WO2020133041A1 (en) Vehicle speed calculation method, system and device, and storage medium
CN113312403B (en) Map acquisition method and device, electronic equipment and storage medium
Higuchi et al. Monitoring live parking availability by vision-based vehicular crowdsensing
CN115376365B (en) Vehicle control method, device, electronic equipment and computer readable medium
CN116736322B (en) Speed prediction method integrating camera image and airborne laser radar point cloud data
EP4202473A1 (en) Radar sensor processing chain
US20230393257A1 (en) Fractalet radar processing
EP4270048A1 (en) Use of camera information for radar beamforming
EP4198457A1 (en) Vehicle surroundings image displaying device and vehicle surroundings image displaying method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant