CN112213692B - Method and system for classifying a received signal from a radar system - Google Patents


Info

Publication number
CN112213692B
Authority
CN
China
Prior art keywords
guard
initial
neighborhood
threshold
doppler
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010661121.6A
Other languages
Chinese (zh)
Other versions
CN112213692A (en)
Inventor
E.里特伯格
B.因德尔曼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GM Global Technology Operations LLC
Original Assignee
GM Global Technology Operations LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GM Global Technology Operations LLC
Publication of CN112213692A
Application granted
Publication of CN112213692B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/41: Details of systems according to group G01S 13/00 using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S 7/414: Discriminating targets with respect to background clutter
    • G01S 7/021: Auxiliary means for detecting or identifying radar signals or the like, e.g. radar jamming signals
    • G01S 13/89: Radar or analogous systems specially adapted for mapping or imaging
    • G01S 13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles


Abstract

A system for classifying a received signal from a radar system as noise or a detection includes a source of a radar energy map and a memory storing an integral image data structure for calculating an integral image. The system includes a processor in communication with the source and the memory, the processor programmed to: generate an initial image comprising initial cells based on the radar energy map, each initial cell having an energy value; calculate an integral image based on the initial image; determine the coordinate position of an initial cell; determine the coordinate positions of indices related to the corners of a neighborhood around the initial cell; determine an energy sum of the neighborhood based on the indices and the values from the corresponding cells of the integral image; determine an estimated noise associated with the initial cell based on the energy sum; and determine whether the initial cell indicates a detection of an object.

Description

Method and system for classifying a received signal from a radar system
Technical Field
The technical field relates generally to methods and systems for controlling a vehicle, and more particularly to methods and systems for classifying received signals from a radar system associated with a vehicle as noise or as detections used to control the vehicle.
Background
Autonomous and semi-autonomous vehicles may rely on sensors, such as radar systems, to control the motion of the vehicle or to provide alerts regarding objects near the vehicle. Typically, to ensure that the radar system has detected an object, the data received by the radar detector is analyzed to determine whether an object is detected or whether the data contains noise. This analysis generally requires a large number of computations to determine whether each cell in the data contains noise or is indicative of a detection. Because of the number of calculations involved, current radar systems may be limited in the number of detections they can process, which may result in missed detections.
Accordingly, it is desirable to provide improved methods and systems for classifying received signals from a radar system associated with a vehicle as noise or as detections for controlling the vehicle. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
Disclosure of Invention
According to various embodiments, a system for classifying a received signal from a radar system as noise or a detection is provided. The system includes: a source of a radar energy map containing the received signal; and a memory storing an integral image data structure for calculating an integral image having a plurality of cells with coordinate positions. The system includes a processor in communication with the source and the memory, the processor programmed to: generate an initial image comprising a plurality of initial cells based on the radar energy map, each initial cell having an energy value and a coordinate position; calculate an integral image based on the initial image, wherein each of the plurality of cells of the integral image contains a value calculated based on the energy values of associated ones of the plurality of initial cells of the initial image, and store the calculated integral image in the integral image data structure; determine, based on the initial image, a coordinate position of an initial cell of the plurality of initial cells; determine coordinate positions of indices related to the corners of a neighborhood surrounding the initial cell based on the coordinate position of the initial cell and a neighborhood threshold for a number of initial cells associated with the neighborhood; determine an energy sum of the neighborhood based on the coordinate positions of the indices and the values from the cells of the integral image associated with those coordinate positions; determine an estimated noise associated with the initial cell based on the energy sum; and determine whether the initial cell indicates a detection of an object based on the estimated noise and the energy value of the initial cell from the initial image.
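The corner-index lookup that these processor steps rely on can be sketched in Python. This is an illustrative sketch rather than the patented implementation; `integral_image` and `rect_sum` are hypothetical names. The key property is that the energy sum of any rectangular neighborhood of the initial image reduces to at most four reads of the integral image, regardless of neighborhood size.

```python
def integral_image(img):
    """Build the integral image: ii[r][c] is the sum of img over
    rows 0..r and columns 0..c (inclusive)."""
    ii = [[0.0] * len(img[0]) for _ in img]
    for r, row in enumerate(img):
        running = 0.0
        for c, energy in enumerate(row):
            running += energy
            ii[r][c] = running + (ii[r - 1][c] if r > 0 else 0.0)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Energy sum of the inclusive rectangle (r0, c0)..(r1, c1),
    computed from at most four corner lookups of the integral image."""
    total = ii[r1][c1]
    if r0 > 0:
        total -= ii[r0 - 1][c1]
    if c0 > 0:
        total -= ii[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1][c0 - 1]
    return total
```

Because `rect_sum` costs a constant number of lookups, summing a large neighborhood is no more expensive than summing a small one, which is where the reduction in computation comes from.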
The processor is further programmed to determine whether to wrap the neighborhood around the initial image based on the coordinate position of the initial cell and the neighborhood threshold. Based on a determination to wrap the neighborhood around the initial image, the processor is further programmed to divide the neighborhood into additional portions to reach the neighborhood threshold for the number of initial cells associated with the neighborhood, and to determine the indices of the neighborhood based on the indices associated with the corners of each portion. The processor is further programmed to construct a guard around the initial cell within the neighborhood based on the coordinate position of the initial cell in the initial image and a guard threshold for a number of initial cells associated with the guard, and to determine guard indices based on the guard. The processor is further programmed to determine whether to wrap the guard around the initial image based on the coordinate position of the initial cell and the guard threshold for the number of initial cells associated with the guard. Based on a determination to wrap the guard around the initial image, the processor is further programmed to divide the guard into additional guard portions to reach the guard threshold for the number of initial cells associated with the guard, and to determine the guard indices based on the guard indices associated with the corners of each guard portion. The processor is programmed to determine a guard energy sum of the guard based on the coordinate positions of the guard indices and the values of the respective cells of the integral image associated with those coordinate positions. The processor is further programmed to determine a total energy sum associated with the initial cell based on the energy sum and the guard energy sum, and to determine the estimated noise based on the total energy sum.
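A minimal end-to-end sketch of the noise test, assuming a cell-averaging (CA-CFAR-style) estimate: the guard energy sum is subtracted from the neighborhood energy sum, the remainder is averaged into a per-cell noise estimate, and the cell under test is compared against a scaled version of that estimate. The half-widths and the threshold factor are illustrative assumptions, and this sketch clamps windows at the image edges rather than wrapping them.

```python
def integral_image(img):
    # ii[r][c] = sum of img over rows 0..r and columns 0..c (inclusive)
    ii = [[0.0] * len(img[0]) for _ in img]
    for r, row in enumerate(img):
        running = 0.0
        for c, energy in enumerate(row):
            running += energy
            ii[r][c] = running + (ii[r - 1][c] if r > 0 else 0.0)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    # Four-corner lookup on the integral image (inclusive bounds).
    total = ii[r1][c1]
    if r0 > 0:
        total -= ii[r0 - 1][c1]
    if c0 > 0:
        total -= ii[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1][c0 - 1]
    return total

def is_detection(img, ii, r, c, half_nb=2, half_gd=1, scale=3.0):
    """Classify cell (r, c) as a detection (True) or noise (False).
    half_nb / half_gd are neighborhood / guard half-widths and scale is
    the threshold factor (illustrative parameters, not from the patent)."""
    rows, cols = len(img), len(img[0])
    # Clamp both windows to the image edges (no wrap-around in this sketch).
    nr0, nc0 = max(0, r - half_nb), max(0, c - half_nb)
    nr1, nc1 = min(rows - 1, r + half_nb), min(cols - 1, c + half_nb)
    gr0, gc0 = max(0, r - half_gd), max(0, c - half_gd)
    gr1, gc1 = min(rows - 1, r + half_gd), min(cols - 1, c + half_gd)
    neigh = rect_sum(ii, nr0, nc0, nr1, nc1)  # neighborhood energy sum
    guard = rect_sum(ii, gr0, gc0, gr1, gc1)  # guard energy sum
    n_cells = ((nr1 - nr0 + 1) * (nc1 - nc0 + 1)
               - (gr1 - gr0 + 1) * (gc1 - gc0 + 1))
    noise = (neigh - guard) / n_cells         # estimated noise per cell
    return img[r][c] > scale * noise
```

With the integral image computed once per frame, each cell's test costs a constant eight lookups (two rectangles) rather than re-summing the whole neighborhood for every cell.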
According to various embodiments, a method for classifying a received signal from a radar system as noise or a detection is also provided. The method includes the following steps: providing a memory storing an integral image data structure for calculating an integral image having a plurality of cells; receiving, by a processor, a radar energy map containing the received signal from a source; generating, by the processor, an initial image having a plurality of initial cells based on the radar energy map, each initial cell containing an energy value and a coordinate position; calculating, by the processor, an integral image based on the initial image, wherein each of the plurality of cells of the integral image contains a value calculated based on the energy values of associated ones of the plurality of initial cells of the initial image, and storing the calculated integral image in the integral image data structure; determining, by the processor, based on the initial image, a coordinate position of an initial cell of the plurality of initial cells having an energy value greater than a threshold; determining, by the processor, coordinate positions of indices related to the corners of a neighborhood surrounding the initial cell based on the coordinate position of the initial cell and a neighborhood threshold for a number of initial cells associated with the neighborhood; determining, by the processor, an energy sum of the neighborhood based on the coordinate positions of the indices and the values from the cells of the integral image associated with those coordinate positions; determining, by the processor, an estimated noise associated with the initial cell based on the energy sum; and determining, by the processor, whether the initial cell indicates a detection of an object based on the estimated noise and the energy value of the initial cell from the initial image.
The method further includes the following steps: determining, by the processor, to wrap the neighborhood around the initial image based on the coordinate position of the initial cell and the neighborhood threshold; dividing, by the processor, the neighborhood into additional portions to reach the neighborhood threshold for the number of initial cells associated with the neighborhood; and determining, by the processor, the indices of the neighborhood based on the indices associated with the corners of each portion. The method further includes: constructing, by the processor, a guard around the initial cell within the neighborhood based on the coordinate position of the initial cell in the initial image and a guard threshold for a number of initial cells associated with the guard; and determining, by the processor, guard indices based on the guard, the guard indices being related to the corners of the guard. The method further includes: determining to wrap the guard around the initial image based on the coordinate position of the initial cell and the guard threshold for the number of initial cells associated with the guard; dividing, by the processor, the guard into additional guard portions to reach the guard threshold for the number of initial cells associated with the guard; and determining, by the processor, the guard indices based on the indices associated with the corners of each guard portion. The method also includes determining, by the processor, a guard energy sum of the guard based on the coordinate positions of the guard indices and the values of the respective cells of the integral image associated with those coordinate positions. The method further includes: determining, by the processor, a total energy sum associated with the initial cell based on the energy sum and the guard energy sum; and determining, by the processor, the estimated noise based on the total energy sum.
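The wrap-around handling described above can be sketched as a span-splitting step. When a neighborhood (or guard) window runs past an edge of the initial image, for example along the Doppler axis, the window's span is split into in-bounds portions, and each portion is then summed from the integral image through its own corner indices. The modulo-wrapping convention and the function name here are assumptions for illustration:

```python
def split_wrapped_span(lo, hi, size):
    """Split the inclusive span [lo, hi] (which may run past either edge)
    into in-bounds pieces after wrapping modulo size. Returns a list of
    (start, stop) pairs, each fully inside [0, size - 1]."""
    if hi - lo + 1 >= size:
        return [(0, size - 1)]                # window covers the whole axis
    lo_m, hi_m = lo % size, hi % size
    if lo_m <= hi_m:
        return [(lo_m, hi_m)]                 # no edge crossed: one piece
    return [(lo_m, size - 1), (0, hi_m)]      # crossed the edge: two pieces
```

Each returned piece contributes one four-corner lookup, so even a wrapped neighborhood still costs a small, fixed number of integral image reads.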
According to various embodiments, a vehicle is also provided. The vehicle includes a source of a radar energy map and a memory storing an integral image data structure for calculating an integral image having a plurality of cells. The vehicle includes a processor in communication with the source and the memory, the processor programmed to: generate an initial image comprising a plurality of initial cells based on the radar energy map, each initial cell having an energy value and a coordinate position; calculate an integral image based on the initial image, wherein each of the plurality of cells of the integral image contains a value calculated based on the energy values of associated ones of the plurality of initial cells of the initial image, and store the calculated integral image in the integral image data structure; determine, based on the initial image, a coordinate position of an initial cell of the plurality of initial cells having an energy value greater than a threshold; determine coordinate positions of indices related to the corners of a neighborhood surrounding the initial cell based on the coordinate position of the initial cell and a neighborhood threshold for a number of initial cells associated with the neighborhood; determine an energy sum of the neighborhood based on the coordinate positions of the indices and the values from the cells of the integral image associated with those coordinate positions; determine coordinate positions of guard indices related to the corners of a guard around the initial cell based on the coordinate position of the initial cell and a guard threshold for a number of initial cells associated with the guard; determine a guard energy sum of the guard based on the coordinate positions of the guard indices and the values of the respective cells of the integral image associated with those coordinate positions; determine an estimated noise associated with the initial cell based on the energy sum and the guard energy sum; and determine whether the initial cell indicates a detection of an object based on the estimated noise and the energy value of the initial cell from the initial image.
The processor is further programmed to determine whether to wrap the neighborhood around the initial image based on the coordinate position of the initial cell and the neighborhood threshold. Based on a determination to wrap the neighborhood around the initial image, the processor is further programmed to divide the neighborhood into additional portions to reach the neighborhood threshold for the number of initial cells associated with the neighborhood, and to determine the indices of the neighborhood based on the indices associated with the corners of each portion. The processor is further programmed to determine whether to wrap the guard around the initial image based on the coordinate position of the initial cell and the guard threshold for the number of initial cells associated with the guard. Based on a determination to wrap the guard around the initial image, the processor is further programmed to divide the guard into additional guard portions to reach the guard threshold for the number of initial cells associated with the guard, and to determine the guard indices based on the guard indices associated with the corners of each guard portion. The processor is further programmed to control the vehicle based on the detection of the object.
Drawings
Exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
FIG. 1 is a functional block diagram illustrating an autonomous vehicle including a radar system with a classification and detection system in accordance with various embodiments;
FIGS. 2 and 3 are data flow diagrams illustrating an autonomous driving system including a classification and detection system of an autonomous vehicle, in accordance with various embodiments;
FIGS. 4 and 4A are examples of energy values of a portion of an initial image of range-Doppler cells generated by a receiver of a radar system based on a radar energy map received by a receive antenna of the radar system associated with an autonomous vehicle, where FIG. 4A is a continuation of FIG. 4, in accordance with various embodiments;
FIG. 4B is an example of energy values of a portion of an initial image of range-Doppler cells generated by a receiver of the radar system based on a radar energy map received by a receive antenna of the radar system associated with an autonomous vehicle, in accordance with various embodiments;
FIGS. 5 and 5A are examples of values of cells of an integral image generated by the classification and detection system from the energy values of the range-Doppler cells in the portion of the initial image of FIGS. 4 and 4A, where FIG. 5A is a continuation of FIG. 5, in accordance with various embodiments;
FIG. 5B is an example of values of cells of an integral image generated by the classification and detection system from the energy values of the range-Doppler cells in the portion of the initial image of FIG. 4B, in accordance with various embodiments;
FIG. 6 is an example of a radar energy map received by a receive antenna of the radar system, indicating a dedicated region for removal to generate an initial image, in accordance with various embodiments;
FIG. 7 is an example of an initial image generated based on the removal of the dedicated region from the radar energy map of FIG. 6, in accordance with various embodiments;
FIG. 8 is an example of an integral image generated based on the integral image data structure and the initial image of FIG. 7, in accordance with various embodiments; and
FIGS. 9-13A are flowcharts illustrating methods that may be performed by the classification and detection system, in accordance with various embodiments.
Detailed Description
The following detailed description is merely exemplary in nature and is not intended to limit applications and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding description, brief summary or the following detailed description. As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, alone or in any combination, including, but not limited to: an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Embodiments of the present disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be implemented by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, embodiments of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure may be practiced in conjunction with any number of systems and that the vehicle system described herein is merely an exemplary embodiment of the present disclosure.
For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the disclosure.
Referring to FIG. 1, a classification and detection system, shown generally at 100, is associated with a vehicle 10 in accordance with various embodiments. In general, the classification and detection system 100 classifies the received signal from the radar system 40a associated with the vehicle 10 as noise or two-dimensional detection with significantly reduced computation, which ensures that the data received by the radar system 40a is effectively processed to intelligently control the vehicle 10 based thereon.
As shown in FIG. 1, an autonomous vehicle 10 generally includes a chassis 12, a body 14, front wheels 16, and rear wheels 18. The body 14 is disposed on the chassis 12 and substantially encloses the components of the vehicle 10. The body 14 and chassis 12 may together form a frame. Wheels 16 and 18 are each rotatably coupled to chassis 12 near a respective corner of body 14.
In various embodiments, the vehicle 10 may be an autonomous vehicle, and the classification and detection system 100 is incorporated into the vehicle 10 (hereinafter referred to as the autonomous vehicle 10). The autonomous vehicle 10 is, for example, a vehicle that is automatically controlled to carry passengers from one location to another. The autonomous vehicle 10 is described in the illustrated embodiment as a passenger vehicle, but it should be understood that any other vehicle may be used, including motorcycles, trucks, sport utility vehicles (SUVs), recreational vehicles (RVs), boats, aircraft, and the like. In the exemplary embodiment, the autonomous vehicle 10 is a so-called Level Four or Level Five automation system. A Level Four system indicates "high automation," referring to the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. A Level Five system indicates "full automation," referring to the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver. It should be noted that the classification and detection system 100 may also be used with other, lower-level automation systems. For example, the classification and detection system 100 may be used with a lower-level system to enable an alert or notification to be output when an object near the vehicle is detected.
As shown, the autonomous vehicle 10 generally includes a propulsion system 20, a transmission system 22, a steering system 24, a braking system 26, a sensor system 28, an actuator system 30, at least one data storage device 32, at least one controller 34, and a communication system 36. In various embodiments, the propulsion system 20 may include an internal combustion engine, an electric machine such as a traction motor, and/or a fuel cell propulsion system. The transmission system 22 is configured to transfer power from the propulsion system 20 to the wheels 16-18 according to selectable speed ratios. According to various embodiments, the transmission system 22 may include a step-ratio automatic transmission, a continuously variable transmission, or other suitable transmission. The braking system 26 is configured to provide braking torque to the wheels 16-18. In various embodiments, the braking system 26 may include friction brakes, brake-by-wire, a regenerative braking system such as an electric machine, and/or other suitable braking systems. The steering system 24 influences the position of the wheels 16-18. Although depicted as including a steering wheel for illustrative purposes, in some embodiments contemplated within the scope of the present disclosure, the steering system 24 may not include a steering wheel.
Sensor system 28 includes one or more sensing devices 40a-40n that sense observable conditions of the external environment and/or the internal environment of the autonomous vehicle 10. Sensing devices 40a-40n may include, but are not limited to, radars, lidars, global positioning systems, optical cameras, thermal cameras, ultrasonic sensors, inertial measurement units, and/or other sensors. In general, radar system 40a includes a transceiver module 41, one or more transmit antennas 43, and one or more receive antennas 45. The transceiver module 41 communicates with the transmit antenna 43 and the receive antenna 45. The transmit antenna 43 radiates radio frequency signals, and the receive antenna 45 detects any reflections from potential objects. The transceiver module 41 receives a control signal from the controller 34 to radiate the radio frequency signals via the transmit antenna 43, and transmits the received signals from the receive antenna 45 to the controller 34. Based on the received signals, the controller 34 determines whether an object has been detected or whether the signals are indicative of noise, as will be discussed. It should be noted that the location of the radar system 40a is merely exemplary, as the radar system 40a may be positioned at any desired location around the autonomous vehicle 10, and, in addition, the autonomous vehicle 10 may include more than one radar system 40a.
The actuator system 30 includes one or more actuator devices 42a-42n that control one or more vehicle features such as, but not limited to, the propulsion system 20, the transmission system 22, the steering system 24, and the braking system 26. In various embodiments, the vehicle features may also include interior and/or exterior vehicle features such as, but not limited to, doors, trunk and cabin features such as ventilation, music, lighting, etc. (not numbered).
The communication system 36 is configured to wirelessly communicate with other entities 48 such as, but not limited to, other vehicles ("V2V" communication), infrastructure ("V2I" communication), remote systems, and/or personal devices (described in more detail with respect to fig. 2). In the exemplary embodiment, communication system 36 is a wireless communication system that is configured to communicate via a Wireless Local Area Network (WLAN) using the IEEE 802.11 standard or by using cellular data communications. However, additional or alternative communication methods, such as Dedicated Short Range Communication (DSRC) channels, are also contemplated within the scope of the present disclosure. A DSRC channel refers to a one-way or two-way short-to-medium range wireless communication channel specifically designed for automotive use, as well as a corresponding set of protocols and standards.
The data storage device 32 stores data for automatically controlling the vehicle 10. In various embodiments, the data storage device 32 stores a defined map of the navigable environment. In various embodiments, the defined map may be predefined by and obtained from a remote system. For example, the defined map may be assembled by a remote system and transmitted to the autonomous vehicle 10 (wirelessly and/or in a wired manner) and stored in the data storage device 32. It is to be appreciated that the data storage device 32 can be part of the controller 34, separate from the controller 34, or part of the controller 34 and a separate system.
The controller 34 includes at least one processor 44 and a computer-readable storage device or medium 46. Processor 44 may be any custom made or commercially available processor, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an auxiliary processor among a plurality of processors associated with controller 34, a semiconductor-based microprocessor (in the form of a microchip or chip set), a macroprocessor, any combination thereof, or generally any device for executing instructions. For example, computer-readable storage devices or media 46 may include volatile and nonvolatile storage in Read Only Memory (ROM), Random Access Memory (RAM), and Keep Alive Memory (KAM). KAM is persistent or non-volatile memory that may be used to store various operating variables when processor 44 is powered down. The computer readable storage device or medium 46 may be implemented using any of a number of known storage devices, such as a PROM (programmable read only memory), EPROM (electrically PROM), EEPROM (electrically erasable PROM), flash memory, or any other electrical, magnetic, optical, or combination storage device capable of storing data, some of which represent executable instructions for use by the controller 34 in controlling the autonomous vehicle 10.
The instructions may include one or more separate programs, each comprising an ordered listing of executable instructions for implementing logical functions. When executed by processor 44, the instructions receive and process signals from sensor system 28, perform logic, calculations, methods, and/or algorithms for automatically controlling components of autonomous vehicle 10, and generate control signals to actuator system 30 based on the logic, calculations, methods, and/or algorithms to automatically control the components of autonomous vehicle 10. Although only one controller 34 is shown in fig. 1, embodiments of the autonomous vehicle 10 may include any number of controllers 34 that communicate over any suitable communication medium or combination of communication media and cooperate to process sensor signals, perform logic, calculations, methods, and/or algorithms, and generate control signals to automatically control features of the autonomous vehicle 10.
In various embodiments, one or more instructions of the controller 34 are embodied in the classification and detection system 100 and, when executed by the processor 44, determine whether the received signal from the receive antenna 45 contains noise or detection. In general, the classification and detection system 100 may identify signals as noise or detection with significantly fewer computations by using the integrated image data structure, as will be discussed further herein.
It is to be appreciated that the subject matter disclosed herein provides certain enhanced features and functionality for a vehicle that may be considered a standard or benchmark autonomous vehicle 10. To this end, the autonomous vehicle may be modified, enhanced, or supplemented to provide additional features that will be described in detail below.
According to various embodiments, controller 34 implements an Autonomous Driving System (ADS) 70 as shown in fig. 2. That is, suitable software and/or hardware components of the controller 34 (e.g., the processor 44 and the computer-readable storage device 46) are used to provide the autonomous driving system 70 for use with the autonomous vehicle 10.
In various embodiments, the instructions of autonomous driving system 70 may be organized by function, module, or system. For example, as shown in fig. 2, autonomous driving system 70 may include a computer vision system 74, a positioning system 76, a guidance system 78, and a vehicle control system 80. It is to be appreciated that in various embodiments, instructions may be organized into any number of systems (e.g., combined, further partitioned, etc.), as the present disclosure is not limited to this example.
In various embodiments, the computer vision system 74 synthesizes and processes the sensor data and predicts the presence, location, classification, and/or path of objects and features of the environment of the autonomous vehicle 10. In various embodiments, computer vision system 74 may incorporate information from a plurality of sensors including, but not limited to, cameras, lidar, radar system 40a, and/or any number of other types of sensors. It should be noted that in certain embodiments, computer vision system 74 may also include a signal processing system that processes signals received from a plurality of sensors, including but not limited to cameras, lidar, radar system 40a, and/or any number of other types of sensors.
The positioning system 76 processes the sensor data along with other data to determine the location of the autonomous vehicle 10 relative to the environment (e.g., local location relative to a map, precise location of lanes relative to a road, vehicle heading, speed, etc.). Guidance system 78 processes the sensor data along with other data to determine a path to be followed by autonomous vehicle 10. The vehicle control system 80 generates control signals for controlling the autonomous vehicle 10 according to the determined path.
In various embodiments, the controller 34 implements machine learning techniques to assist the functions of the controller 34, such as feature detection/classification, obstacle mitigation, route traversal, mapping, sensor integration, ground truth determination, and the like.
As briefly mentioned above, the classification and detection system 100 of fig. 1 is included within the autonomous driving system 70 and, for example, is embedded within or associated with the computer vision system 74. In some embodiments, the classification and detection system 100 may be embedded within or associated with a signal processing system. In this example, classification and detection system 100 processes the signal received by receive antenna 45 (fig. 1) in sensor system 28 and determines whether radar system 40a has detected an object or whether the signal received by receive antenna 45 contains noise. By processing the sensor data from the sensor system 28, the radar system 40a, in this example, using the classification and detection system 100, can efficiently process the received signal in two dimensions, which provides improved detection accuracy and reduced computation time. Based on the detection of the object by classification and detection system 100, guidance system 78 may alter the path of autonomous vehicle 10 (to navigate around the object), or vehicle control system 80 may generate control signals for controlling autonomous vehicle 10 based on the detection of the object, including controlling one or more actuators 49a-49n to actuate braking system 26 (to slow or stop autonomous vehicle 10); control steering system 24 (to steer around the object); etc.
For example, as shown in greater detail with respect to fig. 3 and with continued reference to fig. 1 and 2, a dataflow diagram illustrates various embodiments of the classification and detection system 100 that may be embedded within the controller 34. Various embodiments of a classification and detection system 100 according to the present disclosure may include any number of sub-modules embedded within the controller 34. It will be appreciated that the sub-modules shown in fig. 3 may be combined and/or further partitioned to similarly determine whether the received signal from the receive antenna 45 contains noise or object detection for controlling the autonomous vehicle 10. Inputs to classification and detection system 100 may be received from radar system 40a (fig. 1), received from other control modules (not shown) associated with autonomous vehicle 10, and/or determined/modeled by other sub-modules (not shown) within controller 34. In various embodiments, classification and detection system 100 includes an integral image data store 102, an initial image module 104, a detection data store 106, a threshold data store 108, a threshold energy determination module 110, a neighborhood determination module 112, and an energy determination module 114.
The integral image data store 102 stores an integral image data structure 116 for generating integral images from signals received by the receive antenna 45. In one example, the integral image data structure 116 includes a plurality of cells, each cell pre-filled with an equation for calculating an integral image based on values associated with signals received from the receive antenna 45 (fig. 1). In this example, each cell included in the integral image data structure 116 contains the following equation:
ii(x,y)=i(x,y)+ii(x-1,y)+ii(x,y-1)-ii(x-1,y-1) (1)
where ii represents a cell of the integral image; i represents a cell of the initial image; x is the Doppler coordinate value; and y is the range coordinate value. Typically, the integral image includes a plurality of cells, each cell having a value determined based on equation (1) and the energy values from the initial image. As an example, referring to figs. 4 and 4A, a portion of an initial image 200 is shown with the energy value associated with each particular cell 202 provided in decibels (dB). Fig. 4B also shows a portion of the initial image 200 with the energy value associated with each particular cell provided in decibels (dB). Typically, Doppler 198 is along the x-axis and range 197 is along the y-axis. As will be discussed, using the integral image data structure 116 stored in the integral image data store 102 and the initial image 200 received by the radar system 40a (fig. 1), the initial image module 104 populates the integral image 300 with a corresponding plurality of cells 302, as shown in figs. 5 and 5A. In figs. 5 and 5A, the integral image 300 is generated based on the energy value of each cell 202 in the initial image 200 of figs. 4 and 4A and equation (1). In fig. 5B, the integral image 300 is generated based on the energy value of each cell 202 in the initial image 200 of fig. 4B and equation (1). As an example, referring to figs. 5 and 5A, the value of cell 302a is calculated as follows:
38 (cell 302a) = 10 (cell 202a) + 15 (cell 302b) + 25 (cell 302c) - 12 (cell 302d)
Thus, referring back to fig. 3, by providing the integral image data structure 116 with a plurality of cells, each pre-filled with equation (1), the integral image data structure 116 enables efficient calculation of the energy values of surrounding cells. As will be discussed, the use of the integral image data structure 116 enables the energy sum of a unit under test to be determined in about 8 to about 32 scalar operations instead of about 38,000 scalar operations, which enables the controller 34 to efficiently process all units under test in the radar energy map and reduces the likelihood of missed detections.
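As an illustration only (the patent does not specify an implementation language; Python with NumPy, the function name integral_image, and the 2x2 sample values are assumptions here), the recurrence of equation (1) can be sketched as:

```python
import numpy as np

def integral_image(initial: np.ndarray) -> np.ndarray:
    """Fill each cell ii(x, y) with i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1),
    treating out-of-bounds neighbors as zero, per equation (1)."""
    rows, cols = initial.shape
    ii = np.zeros_like(initial)
    for y in range(rows):          # range axis
        for x in range(cols):      # Doppler axis
            ii[y, x] = (initial[y, x]
                        + (ii[y, x - 1] if x > 0 else 0)
                        + (ii[y - 1, x] if y > 0 else 0)
                        - (ii[y - 1, x - 1] if x > 0 and y > 0 else 0))
    return ii

# Small hypothetical 2x2 initial image; each output cell holds the sum of all
# initial-image energies above and to the left of it (inclusive).
img = np.array([[12, 25],
                [15, 10]])
print(integral_image(img))
# [[12 37]
#  [27 62]]
```

Computing the integral image once in a single sweep allows any rectangular energy sum to be read back later with only a handful of lookups.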
The initial image module 104 receives as input a received signal 118 received by the receive antenna 45 from the transceiver module 41 of the radar system 40a (fig. 1). The received signal 118 provides a radar energy map 600, as shown in fig. 6. In fig. 6, range (in meters) is provided on the y-axis 602 and Doppler (in meters per second) is provided on the x-axis 604. Each cell (Doppler, range) has an energy value in decibels (dB). The scale 606 provides a reference for the energy value associated with each coordinate cell (Doppler, range).
In general, the radar energy map 600 is calculated by another control module associated with the controller 34. In an example, each transmit antenna 43 transmits at a different time in a Time Division Multiple Access (TDMA) manner, such that each transmit antenna 43 transmits sequentially. The receive antennas 45 detect each transmission simultaneously and sample a predefined or predetermined number of times. For each cycle of transmitting through the transmit antennas 43 and receiving through the receive antennas 45, there is a predefined or predetermined number of channels. This cycle of transmitting through the transmit antennas 43 and receiving through the receive antennas 45 is repeated a predefined or predetermined number of times.
A range Fast Fourier Transform (FFT) is performed on the signal by a processor of another control module associated with the controller 34, followed by a Doppler FFT. After the range FFT and the Doppler FFT, the processor of the other control module generates a radar cube. The processor of the other control module compresses the radar cube into the radar energy map 600 by calculating the norm of each cell and summing all cells along the channel axis. This results in the radar energy map 600, reducing the 3D radar cube to a 2D grid map.
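As a sketch of this compression step (Python/NumPy assumed; the cube dimensions, the axis ordering, and the use of the complex magnitude as the per-cell norm are illustrative assumptions, not specifics from the text):

```python
import numpy as np

# Hypothetical radar cube of post-FFT values: (channels, range bins, Doppler bins).
rng = np.random.default_rng(1)
cube = rng.normal(size=(12, 512, 128)) + 1j * rng.normal(size=(12, 512, 128))

# Compress: take the norm (magnitude) of every cell, then sum along the channel
# axis, reducing the 3D radar cube to a 2D (range x Doppler) energy map.
energy_map = np.abs(cube).sum(axis=0)
print(energy_map.shape)   # (512, 128): range on one axis, Doppler on the other
```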
Each target is stored in a cell of the radar energy map (2D grid) according to its radial distance (range) and its radial velocity (Doppler). Thus, each cell has a coordinate position (x, y) based on the (Doppler, range) associated with the cell. Typically, the energy of a target leaks into adjacent cells. The classification and detection system 100 processes the radar energy map 600 to determine the cells containing the highest-energy targets rather than leakage.
Referring to figs. 2 and 6, upon receipt of the received signal 118, the initial image module 104 removes a dedicated region 608 from the radar energy map 600. The dedicated region 608 is a region of high energy, caused by signals transmitted by the transmit antennas 43 (fig. 1) reflecting off stationary objects and received by the receive antennas 45. The initial image module 104 removes the dedicated region 608 from the initial radar energy map 600 to form an initial image 700, as shown in fig. 7. (A 2D grid of energy values for a portion of an exemplary initial image is shown in figs. 4, 4A, and 4B.) In an example, the initial image module 104 removes the dedicated region 608 by comparing the energy values of cells in the radar energy map 600 to a threshold value, for example, stored in the threshold data store 108, and removes the dedicated region 608 based on the energy values of the cells being greater than the threshold value. In fig. 7, the initial image 700 has no high-energy region and includes range (in meters) on the y-axis 702 and Doppler (in meters per second) on the x-axis 704. The scale 706 provides a reference for the energy value associated with each cell (Doppler, range). Referring back to fig. 2, the initial image module 104 provides the initial image 120 to the threshold energy determination module 110 and the neighborhood determination module 112. The initial image 120 is the radar energy map that remains after the dedicated region is removed from the radar energy map provided by the received signal 118. In other words, in this example, the initial image 120 is the initial image 700 (fig. 7) that is generated by removing the dedicated region 608 (fig. 6) from the radar energy map 600 (fig. 6) provided by the received signal 118. The initial image 120 includes a plurality of initial cells in a 2D grid, each having an energy value from the radar energy map.
Based on the initial image 120, the initial image module 104 generates an integral image 124. In one example, the initial image module 104 retrieves the integral image data structure 116 from the integral image data store 102. The initial image module 104 populates each cell of the integral image data structure 116 with the energy value of the associated cell of the initial image. Because equation (1) is stored in each cell of the integral image data structure 116, as the energy values from the initial image are filled into the integral image data structure 116, the value of each relevant cell of the integral image 124 is determined using the values from the initial image 120 and equation (1), as described above. The initial image module 104 provides the integral image 124 to the energy determination module 114.
In an example, referring to fig. 8, an integral image 800 is shown. Using the integral image data structure 116 (fig. 2), an integral image 800 is generated based on the energy values in the cells associated with the initial image 700 of fig. 7. Thus, the energy values in the cells of the initial image 700 (fig. 7) are used to calculate a value in each corresponding cell of the integral image 800 using the stored integral image data structure 116 (fig. 2), wherein each cell of the integral image 800 is calculated based on equation (1) and the energy values from the associated cells of the initial image 700 (fig. 7).
Referring back to FIG. 2, the detection data store 106 stores data for units under test 126 that are determined to be likely detections. In one example, the detection data store 106 stores the coordinate values of the units under test 126 in the initial image 120 identified by the threshold energy determination module 110. Thus, the detection data store 106 may be populated by the threshold energy determination module 110. In various embodiments, for each unit under test 126 stored in the detection data store 106, the unit under test 126 may have an associated qualification 128. As will be discussed, the energy determination module 114 may qualify each unit under test 126, associate the qualification (noise or detection) with the corresponding unit under test 126, and store this in the detection data store 106. Thus, in various embodiments, the energy determination module 114 also populates the detection data store 106. Other modules associated with the autonomous driving system 70, such as the guidance system 78, may access the detection data store 106 to retrieve the units qualified as detections. In other embodiments, a module associated with the computer vision system 74 may access the detection data store 106 and retrieve the detections for further processing. For example, the detections may be communicated to the guidance system 78 to determine which objects are in close proximity to the vehicle 10, and the guidance system 78 may then determine the direction and speed for controlling the vehicle 10 based on the objects.
The threshold data store 108 stores data for thresholds associated with the classification and detection system 100. In this example, the threshold data store 108 stores an energy threshold 130, a neighborhood threshold 132, a guard threshold 134, a noise threshold 135, a Doppler limit threshold 137, and a range limit threshold 139. The energy threshold 130, the neighborhood threshold 132, the guard threshold 134, and the noise threshold 135 are each predefined values, default values, or factory settings. The energy threshold 130 is a threshold for the energy values of cells in the initial image 120. In one example, the energy threshold 130 is about 30 to about 38 decibels (dB). The neighborhood threshold 132 is a threshold for the number of cells in a window or neighborhood of the initial image 120. In one example, the neighborhood threshold 132 is about 7 to about 10 cells. The guard threshold 134 is a threshold for the number of cells in the guard within the neighborhood of the initial image 120. In one example, the guard threshold 134 is about 0 to about 3 cells. The noise threshold 135 is a threshold for the error margin associated with the estimated noise. In one example, the noise threshold 135 is a predetermined value of the error margin. In this example, the noise threshold 135 is about 25 decibels (dB). The Doppler limit threshold 137 is a predetermined value for the limit of Doppler values of the radar energy map 600. In one example, the Doppler limit threshold 137 is about 128. The range limit threshold 139 is a predetermined value for the limit of range values of the radar energy map 600. In one example, the range limit threshold 139 is about 512.
The threshold energy determination module 110 receives as input the initial image 120. The threshold energy determination module 110 retrieves the energy threshold 130 from the threshold data store 108. The threshold energy determination module 110 compares each cell in the initial image 120 to the energy threshold 130. If the energy value of a particular cell is greater than the energy threshold 130, the threshold energy determination module 110 stores the cell's coordinate location (Doppler, range) as a unit under test 126 in the detection data store 106. If the energy value of a particular cell is less than or equal to the energy threshold 130, the threshold energy determination module 110 proceeds to the next cell. The threshold energy determination module 110 repeats this comparison of the energy of each cell with the energy threshold 130 until all cells of the initial image 120 have been compared with the energy threshold 130. It should be noted that, alternatively, the classification and detection system 100 may treat each cell as a unit under test without an initial comparison with the energy threshold 130.
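A minimal sketch of this thresholding pass might look as follows (Python/NumPy assumed; the function name and the 30 dB value, one point within the stated 30 to 38 dB range, are illustrative):

```python
import numpy as np

def cells_under_test(initial_image: np.ndarray, energy_threshold: float = 30.0):
    """Return the (Doppler, range) coordinates of every cell whose energy
    exceeds the threshold; cells at or below the threshold are skipped."""
    range_idx, doppler_idx = np.nonzero(initial_image > energy_threshold)
    return list(zip(doppler_idx.tolist(), range_idx.tolist()))

# Hypothetical 2x3 initial image (rows = range, columns = Doppler), energies in dB.
img = np.array([[10.0, 35.0, 12.0],
                [31.0,  5.0, 29.0]])
print(cells_under_test(img))   # [(1, 0), (0, 1)] -> two candidate detections
```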
Neighborhood determination module 112 queries detection data store 106 and retrieves the first of the units under test. Neighborhood determination module 112 queries threshold data store 108 and retrieves neighborhood threshold 132, doppler limit threshold 137, and range limit threshold 139. Based on the coordinate locations (doppler, range) of the units under test 126 in the initial image 120, the neighborhood determination module 112 builds a neighborhood or window around the coordinate locations (doppler, range) of the units under test 126. In this example, neighborhood determination module 112 builds a neighborhood based on the number of cells contained in neighborhood threshold 132, doppler limit threshold 137, and range limit threshold 139. For example, based on the neighborhood threshold 132 and the coordinate locations (Doppler, range) of the cells under test 126, the neighborhood determination module 112 determines whether there are enough cells in the initial image 120 to surround the cells under test 126 to provide a neighborhood. If there are not enough cells around the cell under test 126, the neighborhood determination module 112 divides the neighborhood into smaller rectangles, squares, or portions. In other words, the neighborhood determination module 112 wraps around the initial image 120 to borrow cells from: top (if there are not enough cells in the initial image 120 below the cell under test 126); bottom (if there are not enough cells in the initial image 120 above the cell under test 126); left (if there are not enough cells in the initial image 120 to the right of the cell under test 126); and/or right side (if there are not enough cells in the initial image 120 to the left of the cell under test 126) to form additional rectangles, squares, or portions to define a neighborhood. 
The neighborhood determination module 112 borrows multiple cells by wrapping around the initial image 120 to ensure that the number of cells in the neighborhood around the cell under test 126 matches or corresponds to the number of cells in the neighborhood threshold 132.
For example, referring to fig. 7, a unit under test 710 is shown. In the example of a neighborhood threshold 132 of about 10, the number of cells surrounding the unit under test 710 is insufficient to form a neighborhood 712. In this example, the neighborhood determination module 112 decomposes the neighborhood 712 into a first portion 714 and a second portion 716. Thus, the neighborhood determination module 112 wraps around to the bottom of the initial image 700 to borrow the number of cells needed around the unit under test 710 to form the neighborhood 712 that matches the number of cells in the neighborhood threshold 132. As another example, a unit under test 720 is shown. The unit under test 720 is surrounded by enough cells to form a neighborhood 722 that matches or corresponds to the neighborhood threshold 132.
Typically, the neighborhoods 712, 722 are rectangular or square, such that, based on the neighborhood threshold 132 of 10, the neighborhood determination module 112 forms a neighborhood by taking the 10 cells above the unit under test 710, 720; the 10 cells below the unit under test 710, 720; the 10 cells to the right of the unit under test 710, 720; and the 10 cells to the left of the unit under test 710, 720 (wrapping around cells as needed, as discussed with respect to the unit under test 710). The neighborhood determination module 112 forms the neighborhood 712, 722 as a rectangle or square around the cells defined by the neighborhood threshold 132.
In this regard, as another example, referring to figs. 4 and 4A, a unit under test 210 is shown in a 2D grid. In the example of figs. 4 and 4A, the classification and detection system 100 treats each cell as a unit under test without performing an initial comparison of the energy value to the energy threshold 130. The energy value of the unit under test 210 is 3. Based on a neighborhood threshold 132 of 7, for example, the neighborhood determination module 112 identifies the 7 cells above, below, to the right of, and to the left of the unit under test 210 to form a neighborhood 212. In this example, the neighborhood determination module 112 decomposes the neighborhood 212 into two portions 214, 216 and wraps around the initial image 200. The neighborhood 212 includes all cells contained in the rectangles 218 (for portion 214) and 220 (for portion 216) defined by the 7 cells above, below, to the right, and to the left.
As another example, referring to fig. 4B, a unit under test 250 is shown. In this example, the classification and detection system 100 performs the initial thresholding, and the energy value of the unit under test 250 is 38, which is greater than the energy threshold 130. Based on the neighborhood threshold 132 of 7, the neighborhood determination module 112 identifies the 7 cells above, below, to the right of, and to the left of the unit under test 250 to form a neighborhood 252. In this example, there are enough cells around the unit under test 250 to form the neighborhood 252 without wrapping around the initial image 200. The neighborhood 252 includes all cells contained in the rectangle defined by the 7 cells above, below, to the right of, and to the left of the unit under test 250.
Referring back to fig. 2, with the neighborhood constructed, the neighborhood determination module 112 determines the indices (Doppler, range) of the four corners of the rectangle, square, or portion associated with the neighborhood. In other words, the neighborhood determination module 112 determines the coordinate locations (Doppler, range) of the cells associated with the four corners of the rectangle, square, or portion defining the neighborhood. Where the neighborhood has been decomposed into additional portions or has been wrapped around the initial image 120, the neighborhood determination module 112 determines indices for each portion (or rectangle) of the neighborhood. The neighborhood determination module 112 provides the coordinate locations (Doppler, range) of the indexed cells as window data 136 to the energy determination module 114. The window data 136 includes the coordinate locations (Doppler, range) of the indices (the cells associated with the corners of the rectangle or square) associated with the neighborhood, which may include multiple indices depending on whether the neighborhood is broken into multiple portions or wrapped around the initial image 120.
In one example, neighborhood determination module 112 determines the index by starting with the coordinate location (Doppler, range) of the cell under test and neighborhood threshold 132. The neighborhood determination module 112 determines a Range_Start value, a Range_End value, a Doppler_Start value, and a Doppler_End value for each portion of the neighborhood. In the event that the neighborhood is not wrapped around the initial image 120, the neighborhood determination module 112 determines these values based on the following equation:
Range_Start=RangeCUT-ThresholdW (2)
Range_End= RangeCUT+ThresholdW (3)
Doppler_Start=DopplerCUT-ThresholdW (4)
Doppler_End=DopplerCUT+ThresholdW (5)
where Range_Start is the starting range value of the neighborhood; Range_End is the ending range value of the neighborhood; Doppler_Start is the starting Doppler value of the neighborhood; Doppler_End is the ending Doppler value of the neighborhood; DopplerCUT is the Doppler value of the unit under test; RangeCUT is the range value of the unit under test; and ThresholdW is the neighborhood threshold 132. In the case where the neighborhood has been divided into a plurality of portions, the neighborhood determination module 112 determines the Range_Start value, the Range_End value, the Doppler_Start value, and the Doppler_End value for each portion based on the neighborhood threshold 132, the Doppler limit threshold 137, and the range limit threshold 139.
In one example, the neighborhood determination module 112 determines the Range_End value based on equation (3) or on whether the sum of the range of the unit under test and the neighborhood threshold 132 is greater than the range limit threshold 139. If the sum of the range of the unit under test and the neighborhood threshold 132 is greater than the range limit threshold 139, Range_End is that sum minus the range limit threshold 139. The neighborhood determination module 112 determines the Range_Start value based on equation (2) or on whether the range of the unit under test minus the neighborhood threshold 132 is less than 1. If the range of the unit under test minus the neighborhood threshold 132 is less than 1, Range_Start is the sum of the range limit threshold 139 and the difference between the range of the unit under test and the neighborhood threshold 132. Similarly, the neighborhood determination module 112 determines the Doppler_End value based on equation (5) or on whether the sum of the Doppler of the unit under test and the neighborhood threshold 132 is greater than the Doppler limit threshold 137. If the sum of the Doppler of the unit under test and the neighborhood threshold 132 is greater than the Doppler limit threshold 137, Doppler_End is that sum minus the Doppler limit threshold 137. The neighborhood determination module 112 determines the Doppler_Start value based on equation (4) or on whether the Doppler of the unit under test minus the neighborhood threshold 132 is less than 1. If the Doppler of the unit under test minus the neighborhood threshold 132 is less than 1, Doppler_Start is the sum of the Doppler limit threshold 137 and the difference between the Doppler of the unit under test and the neighborhood threshold 132. Referring to figs. 4 and 4A, for the portion 216, Range_Start is 1, Range_End is 14, Doppler_Start is 4, and Doppler_End is 18. For the portion 214, Range_Start is 43, Range_End is 43, Doppler_Start is 4, and Doppler_End is 18.
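Equations (2)-(5), together with the wrap-around adjustments described above, might be sketched as follows (Python assumed; the function name and the 1-based coordinate convention of the figures are assumptions):

```python
def neighborhood_bounds(doppler_cut, range_cut, threshold_w,
                        doppler_limit=128, range_limit=512):
    """Apply equations (2)-(5); any bound falling outside [1, limit]
    wraps around the initial image (1-based coordinates assumed)."""
    range_start = range_cut - threshold_w              # equation (2)
    range_end = range_cut + threshold_w                # equation (3)
    doppler_start = doppler_cut - threshold_w          # equation (4)
    doppler_end = doppler_cut + threshold_w            # equation (5)
    if range_start < 1:                                # borrow cells from the far edge
        range_start = range_limit + (range_cut - threshold_w)
    if range_end > range_limit:
        range_end = (range_cut + threshold_w) - range_limit
    if doppler_start < 1:
        doppler_start = doppler_limit + (doppler_cut - threshold_w)
    if doppler_end > doppler_limit:
        doppler_end = (doppler_cut + threshold_w) - doppler_limit
    return range_start, range_end, doppler_start, doppler_end

# Unit under test at (Doppler 11, range 15) with a neighborhood threshold of 7
# fits entirely within the image, so no bound wraps:
print(neighborhood_bounds(11, 15, 7))   # (8, 22, 4, 18)
```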
The neighborhood determination module 112 determines a first index of the neighborhood as (Doppler_End, Range_End) and retrieves the value of the first index from the integral image. As will be discussed, the energy determination module 114 sets the energy sum of the portion to the value of the first index. The neighborhood determination module 112 determines whether Range_Start minus one is greater than zero. If true, the neighborhood determination module 112 determines the second index as (Doppler_End, Range_Start-1). The energy determination module 114 retrieves the value of the second index from the integral image and subtracts the value of the second index (from the integral image) from the energy sum of the portion. If Range_Start minus one is not greater than zero, the neighborhood determination module 112 sets the sum of the diagonals to false to indicate that the neighborhood wraps around the initial image, and does not calculate the second index.
The neighborhood determination module 112 determines whether Doppler_Start minus one is greater than zero. If true, the neighborhood determination module 112 determines the third index as (Doppler_Start-1, range_End). As will be discussed, the energy determination module 114 retrieves the value of the third index from the integral image and subtracts the value of the third index (from the integral image) from the energy sum of the portion. If Doppler_Start minus one is not greater than zero, then neighborhood determination module 112 sets the sum of the diagonals to false to indicate that the neighborhood wraps around the initial image and does not calculate the third index.
The neighborhood determination module 112 determines whether the sum of the diagonals is true, whether Range_Start minus one is greater than zero, and whether Doppler_Start minus one is greater than zero. The sum of the diagonals is true if it has not been set to false. If all are true, the neighborhood determination module 112 determines the fourth index as (Doppler_Start-1, Range_Start-1). As will be discussed, the energy determination module 114 retrieves the value of the fourth index from the integral image and adds the value of the fourth index (from the integral image) to the energy sum of the portion. If the sum of the diagonals is false, if Range_Start minus one is not greater than zero, or if Doppler_Start minus one is not greater than zero, the neighborhood determination module 112 does not calculate the fourth index.
Typically, the lower-right corner index (Doppler_End, Range_End) is inside the neighborhood (and inside each portion of the neighborhood when the neighborhood wraps around the initial image), while the remaining indices are outside the neighborhood. In the event that the neighborhood has been wrapped around the initial image, the neighborhood determination module 112 calculates those indices that are available for each of the portions that wrap around the initial image.
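For the no-wrap case, the four-index lookup described above can be sketched as follows (Python/NumPy assumed, with 0-based array indexing rather than the 1-based figure coordinates; box_energy is a hypothetical helper name):

```python
import numpy as np

def box_energy(ii: np.ndarray, d_start, d_end, r_start, r_end):
    """Sum of initial-image energies over [d_start..d_end] x [r_start..r_end]
    using four integral-image lookups (0-based, inclusive bounds, no wrap):
    the first index, minus the second and third, plus the fourth."""
    total = ii[r_end, d_end]                              # first index
    if r_start - 1 >= 0:
        total -= ii[r_start - 1, d_end]                   # second index
    if d_start - 1 >= 0:
        total -= ii[r_end, d_start - 1]                   # third index
    if r_start - 1 >= 0 and d_start - 1 >= 0:
        total += ii[r_start - 1, d_start - 1]             # fourth index
    return total

# Check against a brute-force sum over a random initial image.
rng = np.random.default_rng(2)
img = rng.integers(0, 40, (20, 20))
ii = img.cumsum(0).cumsum(1)                              # integral image
assert box_energy(ii, 4, 18, 8, 14) == img[8:15, 4:19].sum()
```

Regardless of the neighborhood size, the energy sum costs at most four lookups, which is the source of the scalar-operation savings noted earlier.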
Referring back to fig. 4B, in the example of the neighborhood 252, the neighborhood determination module 112 calculates the indices based on the unit under test 250, which has a Doppler of 11 and a range of 15, and the neighborhood threshold 132 of 7. Because the neighborhood 252 is not divided into multiple portions, using equations (2)-(5) with the coordinate location of the unit under test (Doppler 11, range 15) and the neighborhood threshold 132 of 7: Range_Start is 8; Range_End is 22; Doppler_Start is 4; and Doppler_End is 18. The first index 260 is (Doppler 18, range 22). Range_Start minus one is greater than zero, and the second index 258 is (Doppler 18, range 7). Doppler_Start minus one is greater than zero, and the third index 256 is (Doppler 3, range 22). Since the sum of the diagonals is true, Range_Start minus one is greater than zero, and Doppler_Start minus one is greater than zero, the fourth index 254 is (Doppler 3, range 7).
Referring back to fig. 4 and 4A, in the example of neighborhood 212, the neighborhood determination module 112 calculates the indices of portions 214, 216 based on the unit under test 210, which has a doppler of 11 and a range of 7, and the neighborhood threshold 132 of 7. Based on the coordinate location (Doppler 11, Range 7) of the unit under test and the neighborhood threshold 132 of 7, the neighborhood determination module 112 determines that the neighborhood wraps around the initial image and divides the neighborhood into portions 214, 216. For each portion, the neighborhood determination module 112 determines the indices. For portion 216, Range_Start is 1; Range_End is 14; Doppler_Start is 4; Doppler_End is 18. The first index 228 is (Doppler_End, Range_End) or (Doppler 18, Range 14). Range_Start minus one is not greater than zero, and the diagonal flag is set to false. Thus, for portion 216, the neighborhood determination module 112 does not calculate the second index. Doppler_Start minus one is greater than zero, and the third index 226 is (Doppler_Start-1, Range_End) or (Doppler 3, Range 14). Since the diagonal flag has been set to false, the neighborhood determination module 112 does not calculate the fourth index.
For portion 214, Range_Start is 43, Range_End is 43, Doppler_Start is 4, and Doppler_End is 18. The first index 231 is (Doppler_End, Range_End) or (Doppler 18, Range 43). Range_Start minus one is greater than zero, and the second index 232 is (Doppler_End, Range_Start-1) or (Doppler 18, Range 42). Doppler_Start minus one is greater than zero, and the third index 233 is (Doppler_Start-1, Range_End) or (Doppler 3, Range 43). Since the diagonal flag is true, Range_Start minus one is greater than zero, and Doppler_Start minus one is greater than zero, the fourth index 235 is (Doppler_Start-1, Range_Start-1) or (Doppler 3, Range 42).
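The per-portion index rules described above reduce to a short routine. The following Python sketch is illustrative only (the function and key names are assumptions, not the patent's implementation); it reproduces the indices computed for portions 216 and 214:

```python
def portion_indices(doppler_start, doppler_end, range_start, range_end):
    """Return up to four (doppler, range) corner indices for one portion.

    The first index always exists; the second and third are skipped when the
    portion wraps past the image edge, which also clears the diagonal flag
    so the fourth index is skipped as well.
    """
    diagonal = True
    indices = {"first": (doppler_end, range_end)}          # always computed
    if range_start - 1 > 0:
        indices["second"] = (doppler_end, range_start - 1)
    else:
        diagonal = False                                   # portion wraps in range
    if doppler_start - 1 > 0:
        indices["third"] = (doppler_start - 1, range_end)
    else:
        diagonal = False                                   # portion wraps in doppler
    if diagonal:
        indices["fourth"] = (doppler_start - 1, range_start - 1)
    return indices

# Portion 216 of neighborhood 212: only the first and third indices exist.
print(portion_indices(4, 18, 1, 14))   # {'first': (18, 14), 'third': (3, 14)}
# Portion 214: all four indices exist.
print(portion_indices(4, 18, 43, 43))  # {'first': (18, 43), 'second': (18, 42), 'third': (3, 43), 'fourth': (3, 42)}
```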
Based on the neighborhood, the neighborhood determination module 112 also builds a guard around the unit under test 126 within the neighborhood. The neighborhood determination module 112 queries the threshold data store 108 and retrieves the guard threshold 134, the doppler limit threshold 137, and the range limit threshold 139. Based on the coordinate location (doppler, range) of the unit under test 126 in the initial image 120, the neighborhood determination module 112 builds a guard around the coordinate location (doppler, range) of the unit under test 126. In this example, the neighborhood determination module 112 builds the guard based on the number of cells defined by the guard threshold 134, the doppler limit threshold 137, and the range limit threshold 139. For example, based on the guard threshold 134 and the coordinate location (doppler, range) of the unit under test 126, the neighborhood determination module 112 determines whether there are enough cells in the initial image 120 surrounding the unit under test 126 to provide the guard. If there are not enough cells around the unit under test 126, the neighborhood determination module 112 divides the guard into smaller rectangles. In other words, the neighborhood determination module 112 wraps around the initial image 120 to borrow cells from the top (if there are not enough cells in the initial image 120 below the unit under test 126); the bottom (if there are not enough cells in the initial image 120 above the unit under test 126); the left (if there are not enough cells in the initial image 120 to the right of the unit under test 126); and/or the right (if there are not enough cells in the initial image 120 to the left of the unit under test 126) to form additional rectangles that define the guard.
The neighborhood determination module 112 borrows these cells by wrapping around the initial image 120 to ensure that the number of cells in the guard around the unit under test 126 matches or corresponds to the number of cells defined by the guard threshold 134.
For example, referring to fig. 7, a unit under test 710 is shown. In this example, with a guard threshold 134 of 3, the number of cells around the unit under test 710 is insufficient to form guard 730. The neighborhood determination module 112 therefore divides guard 730 into a first guard portion 732 and a second guard portion 734; that is, the neighborhood determination module 112 wraps around to the bottom of the initial image 700 to borrow the number of cells needed around the unit under test 710 so that guard 730 has a number of cells that matches the guard threshold 134. As another example, the unit under test 720 is surrounded by enough cells to form guard 740, which matches or corresponds to the guard threshold 134.
Typically, guards 730, 740 are rectangular or square. Starting from the unit under test, the neighborhood determination module 112 forms guards 730, 740 based on the guard threshold 134 by taking the 3 cells above the units under test 710, 720; the 3 cells below the units under test 710, 720; the 3 cells to the right of the units under test 710, 720; and the 3 cells to the left of the units under test 710, 720 (wrapping around cells as needed, as discussed with respect to the unit under test 710). The neighborhood determination module 112 forms the guards 730, 740 as rectangles or squares around the cells defined by the guard threshold 134.
As another example, referring to fig. 4 and 4A, based on the guard threshold 134 of 3, the neighborhood determination module 112 identifies the 3 cells above, below, to the right of, and to the left of the unit under test 210 to form guard 236. In this example, the neighborhood determination module 112 does not divide the guard into multiple portions because there are enough cells to form guard 236. Guard 236 includes all cells contained within rectangle 238, which is defined by the 3 cells above, below, to the right of, and to the left of the unit under test 210. As another example, as shown in fig. 4B, based on the guard threshold 134 of 3, the neighborhood determination module 112 identifies the 3 cells above, below, to the right of, and to the left of the unit under test 250 to form guard 264. In this example, the neighborhood determination module 112 does not divide the guard into multiple portions because there are enough cells to form guard 264. Guard 264 includes all cells contained within the rectangle defined by the 3 cells above, below, to the right of, and to the left of the unit under test 250.
Referring back to fig. 2, with the guard constructed, the neighborhood determination module 112 determines the indices (doppler, range) of the four corners of the rectangle, square, or portion associated with the guard. In other words, the neighborhood determination module 112 determines the coordinate locations (doppler, range) of the cells associated with the four corners of the rectangle or square defining the guard. Where the guard has been divided into additional portions or has wrapped around the initial image 120, the neighborhood determination module 112 determines the indices for each portion (or rectangle) of the guard. The neighborhood determination module 112 sets the coordinate locations (doppler, range) of the cells of the guard indices as the guard data 138 for the energy determination module 114. The guard data 138 includes the coordinate locations (doppler, range) of the indices (the cells associated with the corners of the rectangle, square, or portion) associated with the guard, which may include multiple sets of indices depending on whether the guard is divided or wrapped around the initial image 120.
In one example, the neighborhood determination module 112 determines the indices by starting with the coordinate location (doppler, range) of the unit under test and the guard threshold 134. The neighborhood determination module 112 determines a Range_Start Guard value, a Range_End Guard value, a Doppler_Start Guard value, and a Doppler_End Guard value for each portion of the guard. In the event that the guard does not wrap around the initial image 120, in one example, the neighborhood determination module 112 determines these values based on the following equations:
Range_Start Guard = RangeCUT - ThresholdG (6)
Range_End Guard = RangeCUT + ThresholdG (7)
Doppler_Start Guard = DopplerCUT - ThresholdG (8)
Doppler_End Guard = DopplerCUT + ThresholdG (9)
wherein Range_Start Guard is the initial range value of the guard; Range_End Guard is the end range value of the guard; Doppler_Start Guard is the initial doppler value of the guard; Doppler_End Guard is the end doppler value of the guard; DopplerCUT is the doppler value of the unit under test; RangeCUT is the range value of the unit under test; and ThresholdG is the guard threshold 134.
In one example, the neighborhood determination module 112 determines the Range_End Guard value based on equation (7) or on whether the sum of the range of the unit under test and the guard threshold 134 is greater than the range limit threshold 139. If the sum of the range of the unit under test and the guard threshold 134 is greater than the range limit threshold 139, Range_End Guard is that sum minus the range limit threshold 139. The neighborhood determination module 112 determines the Range_Start Guard value based on equation (6) or on whether the range of the unit under test minus the guard threshold 134 is less than 1. If the range of the unit under test minus the guard threshold 134 is less than 1, Range_Start Guard is the sum of the range limit threshold 139 and the difference between the range of the unit under test and the guard threshold 134. In one example, the neighborhood determination module 112 determines the Doppler_End Guard value based on equation (9) or on whether the sum of the doppler of the unit under test and the guard threshold 134 is greater than the doppler limit threshold 137. If the sum of the doppler of the unit under test and the guard threshold 134 is greater than the doppler limit threshold 137, Doppler_End Guard is that sum minus the doppler limit threshold 137. The neighborhood determination module 112 determines the Doppler_Start Guard value based on equation (8) or on whether the doppler of the unit under test minus the guard threshold 134 is less than 1. If the doppler of the unit under test minus the guard threshold 134 is less than 1, Doppler_Start Guard is the sum of the doppler limit threshold 137 and the difference between the doppler of the unit under test and the guard threshold 134.
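Equations (6)-(9), together with the wrap-around handling just described, can be sketched as follows. This Python fragment is illustrative (the function name is an assumption, and the doppler limit of 64 in the example calls is hypothetical; the range limit of 43 follows the worked figures):

```python
def guard_bounds(doppler_cut, range_cut, threshold_g, doppler_limit, range_limit):
    """Compute Doppler_Start/End Guard and Range_Start/End Guard per
    equations (6)-(9), wrapping past the image edges when needed."""
    range_end = range_cut + threshold_g        # equation (7)
    if range_end > range_limit:
        range_end -= range_limit               # wraps past the far range edge
    range_start = range_cut - threshold_g      # equation (6)
    if range_start < 1:
        range_start += range_limit             # wraps past the near range edge
    doppler_end = doppler_cut + threshold_g    # equation (9)
    if doppler_end > doppler_limit:
        doppler_end -= doppler_limit
    doppler_start = doppler_cut - threshold_g  # equation (8)
    if doppler_start < 1:
        doppler_start += doppler_limit
    return doppler_start, doppler_end, range_start, range_end

# Unit under test 210 at (Doppler 11, Range 7), guard threshold 3, no wrap:
print(guard_bounds(11, 7, 3, 64, 43))   # (8, 14, 4, 10)
# A cell near the range edge: the guard end wraps to Range 2.
print(guard_bounds(11, 42, 3, 64, 43))  # (8, 14, 39, 2)
```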
The neighborhood determination module 112 determines a first guard index for the guard as (Doppler_End Guard, Range_End Guard). As will be discussed, the energy determination module 114 retrieves the value of the first guard index from the integral image and sets that value as the guard energy sum for the portion. The neighborhood determination module 112 determines whether Range_Start Guard minus one is greater than zero. If true, the neighborhood determination module 112 determines the second guard index as (Doppler_End Guard, Range_Start Guard-1). The energy determination module 114 retrieves the value of the second guard index from the integral image and subtracts that value from the guard energy sum of the portion. If Range_Start Guard minus one is not greater than zero, the neighborhood determination module 112 sets the diagonal flag to false to indicate that the guard wraps around the initial image, and does not calculate a second guard index.
The neighborhood determination module 112 determines whether Doppler_Start Guard minus one is greater than zero. If true, the neighborhood determination module 112 determines the third guard index as (Doppler_Start Guard-1, Range_End Guard). As will be discussed, the energy determination module 114 retrieves the value of the third guard index from the integral image and subtracts that value from the guard energy sum of the portion. If Doppler_Start Guard minus one is not greater than zero, the neighborhood determination module 112 sets the diagonal flag to false to indicate that the guard wraps around the initial image, and does not calculate a third guard index.
The neighborhood determination module 112 determines whether the diagonal flag is true, whether Range_Start Guard minus one is greater than zero, and whether Doppler_Start Guard minus one is greater than zero. (The diagonal flag is true if it has not been set to false.) If all three conditions hold, the neighborhood determination module 112 determines the fourth guard index as (Doppler_Start Guard-1, Range_Start Guard-1). As will be discussed, the energy determination module 114 retrieves the value of the fourth guard index from the integral image and adds that value to the guard energy sum of the portion. If the diagonal flag is false, Range_Start Guard minus one is not greater than zero, or Doppler_Start Guard minus one is not greater than zero, the neighborhood determination module 112 does not calculate the fourth guard index.
Referring back to fig. 4 and 4A, in the example of guard 236, the neighborhood determination module 112 calculates the indices based on the unit under test 210, which has a doppler of 11 and a range of 7, and the guard threshold 134 of 3. Based on the coordinate location (Doppler 11, Range 7) of the unit under test and the guard threshold 134 of 3, and using equations (6)-(9), Range_Start Guard is 4; Range_End Guard is 10; Doppler_Start Guard is 8; Doppler_End Guard is 14. The first guard index 246 is (Doppler 14, Range 10). Range_Start Guard minus one is greater than zero, and the second guard index 244 is (Doppler 14, Range 3). Doppler_Start Guard minus one is greater than zero, and the third guard index 242 is (Doppler 7, Range 10). Since the diagonal flag is true, Range_Start Guard minus one is greater than zero, and Doppler_Start Guard minus one is greater than zero, the fourth guard index 240 is (Doppler 7, Range 3).
Referring back to fig. 4B, in the example of guard 264, the neighborhood determination module 112 calculates the indices based on the unit under test 250, which has a doppler of 11 and a range of 15, and the guard threshold 134 of 3. Based on the coordinate location (Doppler 11, Range 15) of the unit under test and the guard threshold 134 of 3, and using equations (6)-(9), Range_Start Guard is 12; Range_End Guard is 18; Doppler_Start Guard is 8; Doppler_End Guard is 14. The first guard index 272 is (Doppler 14, Range 18). Range_Start Guard minus one is greater than zero, and the second guard index 270 is (Doppler 14, Range 11). Doppler_Start Guard minus one is greater than zero, and the third guard index 268 is (Doppler 7, Range 18). Since the diagonal flag is true, Range_Start Guard minus one is greater than zero, and Doppler_Start Guard minus one is greater than zero, the fourth guard index 266 is (Doppler 7, Range 11).
Referring back to FIG. 2, the energy determination module 114 queries the detection data store 106 and retrieves the first one of the units under test 126. The energy determination module 114 receives as input the integral image 124 and the window data 136. Based on the coordinate locations (doppler, range) of the cells contained in the window data 136, the energy determination module 114 retrieves the values of the cells at the relevant coordinate locations (doppler, range) from the integral image 124. In other words, for each index in the window data 136, the energy determination module 114 retrieves the value of the cell in the integral image at the coordinate position (doppler, range) that matches the coordinate position (doppler, range) of the cell of the respective window index. The energy determination module 114 calculates the energy of the neighborhood based on the following equation:
E=Index1-Index2-Index3+Index4 (10)
where E is the energy sum of the neighborhood; Index1 is the value of the integral image at the first index of the neighborhood (Doppler_End, Range_End); Index2 is the value of the integral image at the second index of the neighborhood (Doppler_End, Range_Start-1); Index3 is the value of the integral image at the third index of the neighborhood (Doppler_Start-1, Range_End); and Index4 is the value of the integral image at the fourth index of the neighborhood (Doppler_Start-1, Range_Start-1). It should be noted that the energy determination module 114 repeats this calculation for each set of indices, such that for a neighborhood that has been divided into multiple portions, the energy determination module 114 calculates multiple energy sums (one for each portion) that are added together to obtain the energy sum for the neighborhood. In one example, referring to FIG. 8, Index1 is labeled 810, Index2 is labeled 814, Index3 is labeled 812, and Index4 is labeled 816.
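Equation (10) is the classic integral-image (summed-area-table) rectangle sum. The following Python sketch is illustrative only (the names and the 1-indexed layout are assumptions, not the patent's code); it checks equation (10) against a brute-force sum over a small random initial image:

```python
import numpy as np

def region_energy(integral, doppler_start, doppler_end, range_start, range_end):
    """Energy sum of one rectangular portion via equation (10): up to four
    corner lookups in the integral image, skipping corners past the edge."""
    energy = integral[doppler_end][range_end]                    # Index1
    if range_start - 1 > 0:
        energy -= integral[doppler_end][range_start - 1]         # Index2
    if doppler_start - 1 > 0:
        energy -= integral[doppler_start - 1][range_end]         # Index3
    if range_start - 1 > 0 and doppler_start - 1 > 0:
        energy += integral[doppler_start - 1][range_start - 1]   # Index4
    return energy

# Build a 1-indexed integral image (row/column 0 are zero padding) from a
# small random initial image, then check equation (10) against a direct sum.
rng = np.random.default_rng(0)
initial = rng.integers(0, 10, size=(8, 8))
integral = np.zeros((9, 9), dtype=int)
integral[1:, 1:] = initial.cumsum(axis=0).cumsum(axis=1)

assert region_energy(integral, 2, 5, 3, 7) == initial[1:5, 2:7].sum()
assert region_energy(integral, 1, 8, 1, 8) == initial.sum()
```

The constant-time lookup is the point of the integral image: the cost of the sum does not depend on the size of the neighborhood or guard.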
Referring to fig. 4, 4A, 5 and 5A, for portion 216, based on the window data 136 of the first index 228 (Doppler 18, Range 14) and the third index 226 (Doppler 3, Range 14), the energy determination module 114 retrieves the corresponding values from the integral image in figs. 5 and 5A. In figs. 5 and 5A, Index1 is labeled 520 and has a value of 1406 for portion 216 of neighborhood 212 of figs. 4 and 4A. Index3 is labeled 522 and has a value of 227. The energy sum of portion 216 (figs. 4 and 4A) is 1179.
For portion 214 of neighborhood 212, based on the window data 136 of the first index 231 (Doppler 18, Range 43); the second index 232 (Doppler 18, Range 42); the third index 233 (Doppler 3, Range 43); and the fourth index 235 (Doppler 3, Range 42), the energy determination module 114 retrieves the corresponding values from the integral image in figs. 5 and 5A. In figs. 5 and 5A, Index1 is labeled 524 and has a value of 4329 for portion 214 of neighborhood 212 of figs. 4 and 4A. Index2 is labeled 525 and has a value of 4228. Index3 is labeled 526 and has a value of 746. Index4 is labeled 527 and has a value of 724. The energy sum of portion 214 (figs. 4 and 4A) is 79. The energy sum of neighborhood 212 is 1258 (the sum of the energy sums of portions 214, 216).
Referring to fig. 4B and 5B, based on the window data 136 of the first index 260 (Doppler 18, Range 22); the second index 258 (Doppler 18, Range 7); the third index 256 (Doppler 3, Range 22); and the fourth index 254 (Doppler 3, Range 7), the energy determination module 114 retrieves the corresponding values from the integral image in fig. 5B. In fig. 5B, Index1 is labeled 530 and has a value of 20169 for neighborhood 252 of fig. 4B. Index2 is labeled 532 and has a value of 6662. Index3 is labeled 534 and has a value of 3051. Index4 is labeled 536 and has a value of 881. The energy sum of neighborhood 252 is 11337.
The energy determination module 114 receives as input the guard data 138. Based on the coordinate locations (doppler, range) of the cells contained in the guard data 138, the energy determination module 114 retrieves the values of the cells at the relevant coordinate locations (doppler, range) from the integral image 124. In other words, for each index in the guard data 138, the energy determination module 114 retrieves the value of the cell in the integral image at the coordinate position (doppler, range) that matches the coordinate position (doppler, range) of the cell of the corresponding guard index. The energy determination module 114 calculates the energy of the guard based on the following equation:
GE=GIndex1-GIndex2-GIndex3+GIndex4 (11)
wherein GE is the guard energy sum; GIndex1 is the value of the integral image at the first guard index of the guard (Doppler_End Guard, Range_End Guard); GIndex2 is the value of the integral image at the second guard index of the guard (Doppler_End Guard, Range_Start Guard-1); GIndex3 is the value of the integral image at the third guard index of the guard (Doppler_Start Guard-1, Range_End Guard); and GIndex4 is the value of the integral image at the fourth guard index of the guard (Doppler_Start Guard-1, Range_Start Guard-1). It should be noted that the energy determination module 114 repeats this calculation for each set of guard indices, such that for a guard that has been divided into multiple portions, the energy determination module 114 calculates multiple energy sums (one for each guard portion) that are added together to obtain the energy sum for the guard. In one example, referring to fig. 8, GIndex1 is labeled 820, GIndex2 is labeled 824, GIndex3 is labeled 822, and GIndex4 is labeled 826. As shown, the unit under test 830 is surrounded by the guard indices and the neighborhood indices.
Referring to fig. 4, 4A, 5, and 5A, based on the guard data 138 of the first guard index 246 (Doppler 14, Range 10); the second guard index 244 (Doppler 14, Range 3); the third guard index 242 (Doppler 7, Range 10); and the fourth guard index 240 (Doppler 7, Range 3), the energy determination module 114 retrieves the corresponding values from the integral image of figs. 5 and 5A. For guard 236, GIndex1 is labeled 550 in figs. 5 and 5A and has a value of 772. GIndex2 is labeled 552 and has a value of 260. GIndex3 is labeled 554 and has a value of 364. GIndex4 is labeled 556 and has a value of 128. The energy sum of guard 236 is 276.
Referring to fig. 4B and 5B, based on the guard data 138 of the first guard index 272 (Doppler 14, Range 18); the second guard index 270 (Doppler 14, Range 11); the third guard index 268 (Doppler 7, Range 18); and the fourth guard index 266 (Doppler 7, Range 11), the energy determination module 114 retrieves the corresponding values from the integral image of fig. 5B. For guard 264, GIndex1 is labeled 560 in fig. 5B and has a value of 13038. GIndex2 is labeled 562 and has a value of 8004. GIndex3 is labeled 564 and has a value of 6184. GIndex4 is labeled 566 and has a value of 3712. The energy sum of guard 264 is 2562.
The energy determination module 114 subtracts the energy of the guard (determined in equation (11)) from the energy of the neighborhood (determined in equation (10)) to determine the total energy sum. The energy determination module 114 subtracts the number of cells in the guard (derived from the guard threshold 134) from the number of cells in the neighborhood (derived from the neighborhood threshold 132). The energy determination module 114 divides the total energy sum by the difference between the number of cells in the neighborhood and the number of cells in the guard to derive the estimated noise for the unit under test 126.
For figs. 4 and 4A, the total energy sum is 982 (1258 minus 276), and the difference between the number of cells in neighborhood 212 (196; twice the neighborhood threshold 132 times twice the neighborhood threshold 132) and the number of cells in guard 236 (36; twice the guard threshold 134 times twice the guard threshold 134) is 160. The estimated noise of the unit under test 210 is 6.14 (982 divided by 160). For FIG. 4B, the total energy sum is 8775 (11337 minus 2562), and the difference between the number of cells in neighborhood 252 (196) and the number of cells in guard 264 (36) is 160. The estimated noise of the unit under test 250 is 54.84 (8775 divided by 160).
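The noise-estimation arithmetic above fits in a few lines. This is a minimal illustrative sketch (the names are assumptions); the cell counts follow the "twice the threshold, squared" rule stated above:

```python
def estimated_noise(neighborhood_energy, guard_energy, threshold_w, threshold_g):
    """Average energy per neighborhood-only cell: total energy divided by
    the count of cells in the neighborhood but outside the guard."""
    total_energy = neighborhood_energy - guard_energy
    neighborhood_cells = (2 * threshold_w) * (2 * threshold_w)  # 196 for threshold 7
    guard_cells = (2 * threshold_g) * (2 * threshold_g)         # 36 for threshold 3
    return total_energy / (neighborhood_cells - guard_cells)

print(estimated_noise(1258, 276, 7, 3))    # 6.1375  (figs. 4 and 4A)
print(estimated_noise(11337, 2562, 7, 3))  # 54.84375 (fig. 4B)
```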
The energy determination module 114 queries the threshold data store 108 and retrieves the noise threshold 135. The energy determination module 114 retrieves the energy value of the unit under test 126 from the initial image 120. The energy determination module 114 multiplies the estimated noise of the unit under test 126 by the noise threshold 135 and compares the energy value of the unit under test 126 from the initial image 120 with the product of the estimated noise and the noise threshold 135. If the energy value of the unit under test 126 from the initial image 120 is greater than the product of the estimated noise and the noise threshold 135, the energy determination module 114 identifies the unit under test 126 as a detection and associates the identification 128 of the detection with the unit under test 126. If the energy value of the unit under test 126 from the initial image 120 is less than or equal to the product of the estimated noise of the unit under test 126 and the noise threshold 135, the energy determination module 114 identifies the unit under test 126 as noise and associates the identification 128 of noise with the unit under test 126. In either case, the energy determination module 114 stores the identification 128 associated with the unit under test 126 in the detection data store 106.
For the unit under test 210 (figs. 4 and 4A), the energy value from the initial image is 3. The product of the estimated noise and the noise threshold 135 (6.14 x 25) is 153.5, which is greater than 3. Thus, the energy determination module 114 identifies the unit under test 210 as noise and associates the identification 128 of noise with the unit under test 210. For the unit under test 250 (fig. 4B), the energy value from the initial image is 38. The product of the estimated noise and the noise threshold 135 (54.84 x 25) is 1371, which is greater than 38. Thus, the energy determination module 114 identifies the unit under test 250 as noise and associates the identification 128 of noise with the unit under test 250.
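The final comparison step can be sketched as follows, assuming a noise threshold 135 of 25 as in the worked examples above (the function and label names are illustrative, not the patent's implementation):

```python
def classify(cell_energy, noise_estimate, noise_threshold=25):
    """Label a unit under test 'detection' if its energy exceeds the
    estimated noise scaled by the noise threshold, else 'noise'."""
    if cell_energy > noise_estimate * noise_threshold:
        return "detection"
    return "noise"

print(classify(3, 6.14))    # noise (unit under test 210: 3 <= 153.5)
print(classify(38, 54.84))  # noise (unit under test 250: 38 <= 1371.0)
```

Scaling a locally estimated noise level by a fixed multiplier before thresholding is what keeps the false-alarm behavior roughly constant as the background energy varies across the image.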
The neighborhood determination module 112 repeatedly determines the window data 136 and the guard data 138 for each unit under test 126, and the energy determination module 114 repeatedly classifies each unit under test 126, until each unit under test 126 in the detection data store 106 has an associated identification 128.
Referring now to fig. 9-13 with continued reference to fig. 1-3, a flowchart illustrates a method 900 that may be performed by the classification and detection system 100 of fig. 1 according to the present disclosure. In one example, the method 900 is performed by the processor 44 of the controller 34. It will be appreciated from the present disclosure that the order of operations within the method is not limited to sequential execution as shown in fig. 9-13, but may be performed in one or more varying orders as applicable and in accordance with the present disclosure. In various embodiments, method 900 may be scheduled to run based on one or more predetermined events and/or may run continuously during operation of autonomous vehicle 10.
Referring to fig. 9, the method begins at 902. At 904, the method receives a radar energy map (based on the received signal 118 from the receive antenna 45 of the transceiver module 41 of the radar system 40a) from another control module associated with the controller 34. At 906, the method removes the dedicated field 608 (fig. 6) from the radar energy map to generate an initial image 700 (fig. 7). At 908, the method queries the integral image data store 102, retrieves the integral image data structure 116, and calculates an integral image based on the energy values in the initial image and equation (1) using the integral image data structure 116. At 910, the method queries the threshold data store 108 and retrieves the energy threshold 130. For each cell in the initial image, the method compares the energy value of the cell in the initial image to the energy threshold 130. At 912, the method optionally determines whether the energy value in the cell is greater than the energy threshold 130. If true, the method optionally identifies the cell as a possible detection at 914 and stores the cell in the detection data store 106. From 914, the method proceeds to 916.
Otherwise, if false at 912, the method proceeds to 916. At 916, the method determines whether the energy values of all cells in the initial image have been compared to the energy threshold 130. If true, the method proceeds to 918. If false, the method loops to 912. If an initial thresholding is not performed (blocks 910-916), the method treats each cell as a unit under test and proceeds to 918.
At 918, the method receives a first one of the units under test 126 from the test data store 106. At 920, the method queries threshold data store 108 and retrieves neighborhood threshold 132. The method constructs a neighborhood around the unit under test 126 based on the coordinate location (doppler, range) of the unit under test 126 and a neighborhood threshold 132. Referring to FIG. 9A, at 922, the method determines whether the neighborhood needs to wrap around the initial image to provide the number of cells needed to define the neighborhood. If false, at 924 the method proceeds to FIG. 10. Otherwise, if true, at 926, the method divides the neighborhood into additional smaller rectangles, squares, or portions to arrive at a number of cells that matches or corresponds to the neighborhood threshold 132. At 924, the method proceeds to FIG. 10 to determine a neighborhood energy sum.
Referring to fig. 10-10A, the method begins at 1002. At 1003, the method sets the neighborhood energy sum to zero. At 1004, the method sets the energy sum for the portion equal to zero and sets the diagonal flag for the portion equal to true. At 1006, the method determines the Range_Start value, Range_End value, Doppler_Start value, and Doppler_End value for the portion. In other words, if the neighborhood is not divided or wrapped around the initial image, the method treats the entire neighborhood as the portion and determines the Range_Start, Range_End, Doppler_Start, and Doppler_End values for the neighborhood at 1006. If the neighborhood is divided into portions or wrapped around the initial image, the method determines the Range_Start, Range_End, Doppler_Start, and Doppler_End values for a first one of the portions at 1006. In one example, referring to FIGS. 12-12A, the method begins at 1202 with determining the Range_Start, Range_End, Doppler_Start, and Doppler_End values for the portion. At 1204, the method sets END equal to the sum of the range of the unit under test (RangeCUT) and the neighborhood threshold 132. At 1206, the method determines whether END is greater than the range limit threshold 139. If true, the method proceeds to 1208. Otherwise, at 1210, the method sets Range_End equal to END, which is the sum of the range of the unit under test and the neighborhood threshold 132 (Range_End = RangeCUT + ThresholdW).
At 1208, the method sets Range_End to END minus the range limit threshold 139, or the sum of the range of the unit under test and the neighborhood threshold 132 minus the range limit threshold 139 (Range_End = (RangeCUT + ThresholdW) - Range Limit Threshold). At 1212, the method sets START to the range of the unit under test minus the neighborhood threshold 132. At 1214, the method determines whether START is less than 1. If true, the method proceeds to 1216. Otherwise, at 1218, the method sets Range_Start equal to START, which is the range of the unit under test minus the neighborhood threshold 132 (Range_Start = RangeCUT - ThresholdW).
At 1216, the method sets Range_Start equal to the sum of the range limit threshold 139 and START, or the sum of the range limit threshold 139 and the range of the unit under test minus the neighborhood threshold 132 (Range_Start = Range Limit Threshold + (RangeCUT - ThresholdW)). At 1220, the method sets END equal to the sum of the doppler of the unit under test (DopplerCUT) and the neighborhood threshold 132. Referring to FIG. 12A, at 1222, the method determines whether END is greater than the doppler limit threshold 137. If true, the method proceeds to 1224. Otherwise, at 1226, the method sets Doppler_End equal to END, which is the sum of the doppler of the unit under test and the neighborhood threshold 132 (Doppler_End = DopplerCUT + ThresholdW).
At 1224, the method sets Doppler_End to END minus the doppler limit threshold 137, or the sum of the doppler of the unit under test and the neighborhood threshold 132 minus the doppler limit threshold 137 (Doppler_End = (DopplerCUT + ThresholdW) - Doppler Limit Threshold). At 1228, the method sets START to the doppler of the unit under test minus the neighborhood threshold 132. At 1230, the method determines whether START is less than 1. If true, the method proceeds to 1232. Otherwise, at 1234, the method sets Doppler_Start equal to START, which is the doppler of the unit under test minus the neighborhood threshold 132 (Doppler_Start = DopplerCUT - ThresholdW). At 1232, the method sets Doppler_Start equal to the doppler limit threshold 137 plus START, or the sum of the doppler limit threshold 137 and the doppler of the unit under test minus the neighborhood threshold 132 (Doppler_Start = Doppler Limit Threshold + (DopplerCUT - ThresholdW)). With Range_Start, Range_End, Doppler_Start, and Doppler_End determined for the portion, the method sets or stores those values for the portion at 1236. At 1238, the method determines whether all portions have been evaluated. If true, the method returns to 1008 of FIG. 10. Otherwise, the method loops to 1204.
Referring back to FIG. 10, with Range_Start, Range_End, Doppler_Start, and Doppler_End determined for each portion based on 1202-1238 discussed with respect to FIGS. 12-12A, at 1008 the method retrieves the value of the first index (Doppler_End, Range_End) from the integral image and sets that value as the energy sum of the portion. At 1010, the method determines whether Range_Start minus one is greater than zero. If true, the method proceeds to 1012. If false, the method proceeds to 1014, sets the diagonal to false, and then proceeds to 1016. At 1012, the method retrieves the value of the second index (Doppler_End, Range_Start - 1) from the integral image and subtracts that value from the energy sum of the portion.
Referring to FIG. 10A, at 1016, the method determines whether Doppler_Start minus one is greater than zero. If true, at 1018 the method retrieves the value of the third index (Doppler_Start - 1, Range_End) from the integral image and subtracts that value from the energy sum of the portion. Otherwise, if false, the method sets the diagonal to false at 1020.
At 1022, the method determines whether the diagonal is true, whether Range_Start minus one is greater than zero, and whether Doppler_Start minus one is greater than zero. If false, the method proceeds to 1024. Otherwise, if true, at 1026 the method retrieves the value of the fourth index (Doppler_Start - 1, Range_Start - 1) from the integral image and adds that value to the energy sum of the portion. At 1024, the method adds the portion's energy sum to the neighborhood energy sum. At 1028, the method determines whether all portions of the neighborhood have been evaluated based on the neighborhood constructed in FIG. 9. If true, the method proceeds to 927 of FIG. 9A. Otherwise, the method loops to 1004 and determines an energy sum for the next portion of the neighborhood.
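Steps 1008-1026 are the classic four-corner lookup on a summed-area (integral) image: the energy inside any rectangle is recovered from at most four stored values instead of summing every cell. A minimal sketch, assuming 0-based NumPy indexing rather than the 1-based indexing and diagonal flag used in the flowcharts; the function names are illustrative, not from the patent:

```python
import numpy as np

def integral_image(initial):
    """Summed-area table of the energy map: entry (d, r) holds the sum of
    all energies with Doppler index <= d and range index <= r."""
    return initial.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, d_start, d_end, r_start, r_end):
    """Energy sum of the inclusive rectangle [d_start..d_end] x
    [r_start..r_end] via the four corner lookups of steps 1008-1026."""
    s = ii[d_end, r_end]                    # first index (1008)
    if r_start > 0:
        s -= ii[d_end, r_start - 1]         # second index (1012)
    if d_start > 0:
        s -= ii[d_start - 1, r_end]         # third index (1018)
    if d_start > 0 and r_start > 0:
        s += ii[d_start - 1, r_start - 1]   # fourth, diagonal index (1026)
    return s
```

Each portion of the neighborhood (or guard) costs a constant four lookups regardless of its size, which is the source of the reduced scalar-operation count the description later notes.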
Referring back to fig. 9A, at 927, the method queries the threshold data store 108 and retrieves the guard threshold 134. The method constructs a guard around the unit under test 126 based on the coordinate location (doppler, range) of the unit under test 126 and the guard threshold 134. At 928, the method determines whether the guard needs to wrap around the initial image to provide the number of units needed to define the guard. If false, at 930, the method proceeds to FIGS. 11-11A. Otherwise, if true, at 932, the method divides the guard into additional smaller rectangles, squares, or portions to achieve a number of cells that matches or corresponds to the guard threshold 134. At 930, the method proceeds to FIGS. 11-11A to determine a guard energy sum.
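The wrap decision at 928 reduces to checking whether the window around the unit under test spills past either edge of the map. A hypothetical sketch with assumed names and 1-based bin indices:

```python
def needs_wrap(cut, half_width, limit):
    """Return True when a window of +/- half_width bins centered on the
    cell under test (cut) extends past either edge of a 1..limit axis,
    so the window must be split into wrapped portions (step 928)."""
    return (cut - half_width) < 1 or (cut + half_width) > limit
```

In practice the check would be applied once per axis (range and Doppler) with the guard threshold 134 as the half width.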
Referring to fig. 11, the method begins at 1102. At 1103, the method sets the guard energy sum equal to zero. At 1104, the method sets the energy sum for the portion equal to zero and the diagonal equal to true. At 1106, the method determines the Range_Start Guard, Range_End Guard, Doppler_Start Guard, and Doppler_End Guard values for the portion. In other words, if the guard is not broken into portions or wrapped around the initial image, the method determines at 1106 the Range_Start Guard, Range_End Guard, Doppler_Start Guard, and Doppler_End Guard values with the entire guard as the portion. If the guard is divided into portions or wrapped around the initial image, the method determines at 1106 the Range_Start Guard, Range_End Guard, Doppler_Start Guard, and Doppler_End Guard values for a first one of the portions. In one example, referring to FIGS. 13-13A, the method begins at 1302 to determine the Range_Start Guard, Range_End Guard, Doppler_Start Guard, and Doppler_End Guard values for the portion. At 1304, the method sets END equal to the sum of the Range of the unit under test (Range CUT) and the guard threshold 134. At 1306, the method determines whether END is greater than the range limit threshold 139. If true, the method proceeds to 1308. Otherwise, at 1310, the method sets Range_End Guard to END, which is equal to the sum of the Range of the unit under test and the guard threshold 134 (Range_End Guard = Range CUT + ThresholdG).
At 1308, the method sets Range_End Guard to the range limit threshold 139 minus END, that is, the range limit threshold 139 minus the sum of the Range of the unit under test and the guard threshold 134 (Range_End Guard = Range Limit Threshold - (Range CUT + ThresholdG)). At 1312, the method sets START to the Range of the unit under test minus the guard threshold 134. At 1314, the method determines whether START is less than 1. If true, the method proceeds to 1316. Otherwise, at 1318, the method sets Range_Start Guard equal to START, which is the Range of the unit under test minus the guard threshold 134 (Range_Start Guard = Range CUT - ThresholdG).
At 1316, the method sets Range_Start Guard equal to the sum of the range limit threshold 139 and START, that is, the range limit threshold 139 plus the Range of the unit under test minus the guard threshold 134 (Range_Start Guard = Range Limit Threshold + (Range CUT - ThresholdG)). At 1320, the method sets END equal to the sum of the Doppler of the unit under test (Doppler CUT) and the guard threshold 134. Referring to FIG. 13A, at 1322, the method determines whether END is greater than the Doppler limit threshold 137. If true, the method proceeds to 1324. Otherwise, at 1326, the method sets Doppler_End Guard equal to END, which is the sum of the Doppler of the unit under test and the guard threshold 134 (Doppler_End Guard = Doppler CUT + ThresholdG).
At 1324, the method sets Doppler_End Guard to the Doppler limit threshold 137 minus END, that is, the Doppler limit threshold 137 minus the sum of the Doppler of the unit under test and the guard threshold 134 (Doppler_End Guard = Doppler Limit Threshold - (Doppler CUT + ThresholdG)). At 1328, the method sets START to the Doppler of the unit under test minus the guard threshold 134. At 1330, the method determines whether START is less than 1. If true, the method proceeds to 1332. Otherwise, at 1334, the method sets Doppler_Start Guard equal to START, which is the Doppler of the unit under test minus the guard threshold 134 (Doppler_Start Guard = Doppler CUT - ThresholdG). At 1332, the method sets Doppler_Start Guard equal to the sum of the Doppler limit threshold 137 and START, that is, the Doppler limit threshold 137 plus the Doppler of the unit under test minus the guard threshold 134 (Doppler_Start Guard = Doppler Limit Threshold + (Doppler CUT - ThresholdG)). With Range_Start Guard, Range_End Guard, Doppler_Start Guard, and Doppler_End Guard determined for the portion, the method sets or stores these values for the portion at 1336. At 1338, the method determines whether all portions have been evaluated. If true, the method returns to 1108 of FIG. 11. Otherwise, the method loops to 1304.
Referring back to FIG. 11, with Range_Start Guard, Range_End Guard, Doppler_Start Guard, and Doppler_End Guard determined for each portion based on 1302-1338 discussed with respect to FIGS. 13-13A, at 1108 the method retrieves the value of the first index (Doppler_End Guard, Range_End Guard) from the integral image and sets that value as the guard energy sum of the portion. At 1110, the method determines whether Range_Start Guard minus one is greater than zero. If true, the method proceeds to 1112. If false, the method proceeds to 1114, sets the diagonal to false, and then proceeds to 1116. At 1112, the method retrieves the value of the second index (Doppler_End Guard, Range_Start Guard - 1) from the integral image and subtracts that value from the guard energy sum of the portion.
Referring to FIG. 11A, at 1116, the method determines whether Doppler_Start Guard minus one is greater than zero. If true, at 1118 the method retrieves the value of the third index (Doppler_Start Guard - 1, Range_End Guard) from the integral image and subtracts that value from the guard energy sum of the portion. Otherwise, if false, the method sets the diagonal to false at 1120.
At 1122, the method determines whether the diagonal is true, whether Range_Start Guard minus one is greater than zero, and whether Doppler_Start Guard minus one is greater than zero. If false, the method proceeds to 1124. Otherwise, if true, at 1126 the method retrieves the value of the fourth index (Doppler_Start Guard - 1, Range_Start Guard - 1) from the integral image and adds that value to the guard energy sum of the portion. At 1124, the method adds the portion's guard energy sum to the guard energy sum. At 1128, the method determines whether all portions of the guard have been evaluated based on the guard constructed in FIG. 9A. If true, the method proceeds to 938 of FIG. 9A. Otherwise, the method loops to 1104 and determines the energy sum for the next portion of the guard.
Referring to fig. 9A, at 938, the method subtracts the guard energy sum from the neighborhood energy sum to calculate a total energy sum. At 940, the method subtracts the number of cells in the guard from the number of cells in the neighborhood to derive a number of cells. The method divides the total energy sum by this number of cells (the number of cells in the window minus the number of cells in the guard) to calculate the estimated noise for the unit under test 126. Referring to FIG. 9B, at 942, the method queries the threshold data store 108 and retrieves the noise threshold 135. The method retrieves the energy value of the unit under test 126 from the initial image 120. The method multiplies the estimated noise of the unit under test 126 by the noise threshold 135 and compares the energy value of the unit under test 126 from the initial image 120 with that product. The method determines whether the energy value of the unit under test 126 from the initial image 120 is greater than the product of the estimated noise and the noise threshold 135. If true, the method proceeds to 944 and identifies the unit under test 126 as a detection. If false, the method proceeds to 946 and identifies the unit under test 126 as noise.
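Steps 938-946 amount to a cell-averaging CFAR style decision: the noise estimate is the mean energy of the neighborhood ring left after removing the guard cells, and the unit under test is declared a detection when its energy exceeds that estimate scaled by the noise threshold 135. A compact sketch with illustrative names (the sums and cell counts would come from the integral-image lookups described above):

```python
def classify_cut(cut_energy, neighborhood_sum, guard_sum,
                 n_neighborhood, n_guard, noise_threshold):
    """Classify the cell under test as 'detection' or 'noise' per
    steps 938-946: estimated noise is the average energy of the
    neighborhood cells outside the guard."""
    total_energy = neighborhood_sum - guard_sum         # step 938
    n_cells = n_neighborhood - n_guard                  # step 940
    estimated_noise = total_energy / n_cells
    if cut_energy > estimated_noise * noise_threshold:  # step 942
        return "detection"                              # step 944
    return "noise"                                      # step 946
```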
At 948, the method associates the identification (noise or detection) with the unit under test and stores the association in the detection data store 106. At 950, the method determines whether all cells in the detection data store 106 have been classified, that is, associated with an identification (noise or detection). If true, the method ends at 952. Otherwise, if false, the method proceeds to 918. Thus, the classification and detection system 100 classifies units under test with a significantly reduced number of scalar operations, which reduces the processing time of the controller 34 and also reduces missed detections. In addition, by evaluating the unit under test in two dimensions (Doppler and range), the accuracy of detection can be improved.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and the legal equivalents thereof.

Claims (10)

1. A system for classifying a received signal from a radar system as noise or detection, comprising:
a source of a radar energy map containing received signals;
a memory storing an integral image data structure for calculating an integral image having a plurality of cells with coordinate locations;
a processor in communication with the source and the memory, the processor programmed to:
generate an initial image comprising a plurality of initial cells based on the radar energy map, each initial cell having an energy value and a coordinate location;
calculate an integral image based on the initial image, wherein each of the plurality of cells of the integral image contains a value calculated based on the energy values of associated cells of the plurality of initial cells of the initial image, and store the calculated integral image in the integral image data structure;
determine, based on the initial image, a coordinate position of an initial cell of the plurality of initial cells having an energy value greater than a threshold;
determine a coordinate position of an index related to a corner of a neighborhood surrounding the initial cell based on the coordinate position of the initial cell and a neighborhood threshold for a number of initial cells related to the neighborhood;
determine an energy sum of the neighborhood based on the coordinate position of the index and a value from each of the plurality of cells of the integral image associated with the coordinate position of the index;
determine an estimated noise associated with the initial cell based on the energy sum; and
determine whether the initial cell indicates detection of an object based on the estimated noise and the energy value of the initial cell from the initial image.
2. The system of claim 1, wherein the processor is further programmed to determine whether to wrap the neighborhood around the initial image based on the coordinate position of the initial cell and the neighborhood threshold, and, based on a determination to wrap the neighborhood around the initial image, the processor is further programmed to divide the neighborhood into additional portions to reach the neighborhood threshold for the number of initial cells associated with the neighborhood, and to determine the index of the neighborhood based on the indices associated with the corners of each portion.
3. The system of claim 1, wherein the processor is further programmed to construct a guard around the initial cell within the neighborhood based on the coordinate position of the initial cell in the initial image and a guard threshold for a number of initial cells associated with the guard, and to determine a guard index based on the guard.
4. The system of claim 3, wherein the processor is further programmed to determine whether to wrap the guard around the initial image based on the coordinate position of the initial cell and the guard threshold for the number of initial cells associated with the guard, and, based on a determination to wrap the guard around the initial image, the processor is further programmed to divide the guard into additional guard portions to reach the guard threshold for the number of initial cells associated with the guard, and to determine the guard index based on the guard index associated with the corner of each guard portion.
5. The system of claim 4, wherein the processor is programmed to determine the guard energy sum of the guard based on the coordinate position of the guard index and values of respective ones of the plurality of cells from the integral image associated with the coordinate position of the guard index.
6. The system of claim 5, wherein the processor is further programmed to determine a total energy sum associated with the initial cell based on the energy sum and the guard energy sum, and the processor is programmed to determine the estimated noise based on the total energy sum.
7. A method for classifying a received signal from a radar system as noise or detection, comprising:
providing a memory storing an integral image data structure for calculating an integral image having a plurality of cells;
receiving, by a processor, a radar energy map containing received signals from a source;
generating, by the processor, an initial image having a plurality of initial cells based on the radar energy map, each initial cell containing an energy value and a coordinate location;
calculating, by the processor, an integral image based on the initial image, wherein each of the plurality of cells of the integral image contains a value calculated based on the energy values of associated cells of the plurality of initial cells of the initial image, and storing the calculated integral image in the integral image data structure;
determining, by the processor, based on the initial image, a coordinate position of an initial cell of the plurality of initial cells having an energy value greater than a threshold;
determining, by the processor, a coordinate position of an index related to a corner of a neighborhood surrounding the initial cell based on the coordinate position of the initial cell and a neighborhood threshold for a number of initial cells related to the neighborhood;
determining, by the processor, an energy sum of the neighborhood based on the coordinate position of the index and a value from each of the plurality of cells of the integral image associated with the coordinate position of the index;
determining, by the processor, an estimated noise associated with the initial cell based on the energy sum; and
determining, by the processor, whether the initial cell indicates detection of an object based on the estimated noise and the energy value of the initial cell from the initial image.
8. The method of claim 7, further comprising:
determining, by the processor, to wrap the neighborhood around the initial image based on the coordinate location of the initial cell and the neighborhood threshold;
dividing, by the processor, the neighborhood into additional portions to reach the neighborhood threshold for the number of initial cells associated with the neighborhood; and
determining, by the processor, the index of the neighborhood based on the index associated with the corner of each portion.
9. The method of claim 7, further comprising:
constructing, by the processor, a guard around the initial cell within the neighborhood based on the coordinate position of the initial cell in the initial image and a guard threshold for a number of initial cells associated with the guard; and
determining, by the processor, a guard index based on the guard, the guard index being related to a corner of the guard.
10. The method of claim 9, further comprising:
determining, by the processor, a guard energy sum of the guard based on the coordinate position of the guard index and values of respective cells of the plurality of cells of the integral image associated with the coordinate position of the guard index;
determining, by the processor, a total energy sum associated with the initial cell based on the energy sum and the guard energy sum; and
determining, by the processor, the estimated noise based on the total energy sum.
CN202010661121.6A 2019-07-11 2020-07-10 Method and system for classifying a received signal from a radar system Active CN112213692B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/508,701 US11143747B2 (en) 2019-07-11 2019-07-11 Methods and systems for classifying received signals from radar system
US16/508,701 2019-07-11

Publications (2)

Publication Number Publication Date
CN112213692A CN112213692A (en) 2021-01-12
CN112213692B true CN112213692B (en) 2024-05-10

Family ID: 74059262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010661121.6A Active CN112213692B (en) 2019-07-11 2020-07-10 Method and system for classifying a received signal from a radar system

Country Status (2)

Country Link
US (1) US11143747B2 (en)
CN (1) CN112213692B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9746549B1 (en) * 2014-07-11 2017-08-29 Altera Corporation Constant false alarm rate circuitry in adaptive target detection of radar systems
CN108802761A (en) * 2017-05-03 2018-11-13 通用汽车环球科技运作有限责任公司 Method and system for laser radar point cloud exception
CN109416397A (en) * 2016-06-14 2019-03-01 雷神加拿大有限公司 For radar and the detection of the impact noise of communication system and remove

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4450446A (en) * 1981-05-29 1984-05-22 Westinghouse Electric Corp. Method and system for tracking targets in a pulse doppler radar system
IT1199170B (en) * 1984-07-27 1988-12-30 Selenia Ind Elettroniche DIGITAL RADAR SIGNAL PROCESSOR ABLE TO PERFORM THE ADAPTIVE CANCELLATION OF THE CLUTTER BY PARAMETRIC ESTIMATOR
US4680588A (en) * 1985-12-05 1987-07-14 Raytheon Company Radar system with incremental automatic gain control
US5539412A (en) * 1994-04-29 1996-07-23 Litton Systems, Inc. Radar system with adaptive clutter suppression
US5465095A (en) * 1994-08-05 1995-11-07 The United States Of America As Represented By The Secretary Of The Air Force Time efficient method for processing adaptive target detection thresholds in doppler radar systems
US5760734A (en) * 1996-11-18 1998-06-02 Lockheed Martin Corp. Radar clutter removal by matrix processing
US6252540B1 (en) * 1999-12-21 2001-06-26 The United States Of America As Represented By The Secretary Of The Air Force Apparatus and method for two stage hybrid space-time adaptive processing in radar and communication systems
GB0019825D0 (en) * 2000-08-12 2000-09-27 Secr Defence Signal processing
US7295154B2 (en) * 2002-01-17 2007-11-13 The Ohio State University Vehicle obstacle warning radar
JP2005520160A (en) * 2002-03-13 2005-07-07 レイセオン・カナダ・リミテッド System and method for radar spectrum generation
AU2003213762A1 (en) * 2002-03-13 2003-09-29 Raytheon Canada Limited An adaptive system and method for radar detection
US6937185B1 (en) * 2003-08-14 2005-08-30 Lockheed Martin Corporation Rain versus target discrimination for doppler radars
US7298316B2 (en) * 2005-12-19 2007-11-20 Chung Shan Institute Of Science And Technology, Armaments Bureau M.N.D. Apparatus and method for instantly automatic detecting clutter blocks and interference source and for dynamically establishing clutter map
US8294881B2 (en) * 2008-08-26 2012-10-23 Honeywell International Inc. Security system using LADAR-based sensors
US8013781B2 (en) * 2008-09-24 2011-09-06 Lockheed Martin Corporation Method and apparatus for radar surveillance and detection of sea targets
US8305261B2 (en) * 2010-04-02 2012-11-06 Raytheon Company Adaptive mainlobe clutter method for range-Doppler maps
CA2774377C (en) * 2012-02-02 2017-05-02 Raytheon Canada Limited Knowledge aided detector
DE102012024999A1 (en) * 2012-12-19 2014-06-26 Valeo Schalter Und Sensoren Gmbh Method for setting a detection threshold for a received signal of a frequency modulation continuous wave radar sensor of a motor vehicle depending on the noise level, radar sensor and motor vehicle
US8976059B2 (en) * 2012-12-21 2015-03-10 Raytheon Canada Limited Identification and removal of a false detection in a radar system
US9310480B2 (en) * 2013-05-08 2016-04-12 Jorn Sierwald Method and arrangement for removing ground clutter
US9594159B2 (en) * 2013-07-15 2017-03-14 Texas Instruments Incorporated 2-D object detection in radar applications
JP6384018B2 (en) * 2014-03-25 2018-09-05 日本無線株式会社 Automotive radar equipment
US9772402B2 (en) * 2014-06-09 2017-09-26 Src, Inc. Multiplatform GMTI radar with adaptive clutter suppression
US10627480B2 (en) * 2014-07-17 2020-04-21 Texas Instruments Incorporated Distributed radar signal processing in a radar system
DE102015211490A1 (en) * 2015-06-22 2016-12-22 Robert Bosch Gmbh Method for operating a radar device
DE102015008403B3 (en) * 2015-07-01 2016-08-11 Airbus Ds Electronics And Border Security Gmbh Method for automatic classification of radar objects
US9952312B2 (en) * 2015-07-06 2018-04-24 Navico Holding As Radar interference mitigation
EP3382419A1 (en) * 2017-03-27 2018-10-03 Melexis Technologies SA Method and apparatus for echo detection
US10712437B2 (en) * 2017-07-07 2020-07-14 Veoneer Us, Inc. Radar systems and methods utilizing composite waveforms for customization of resolution requirements
US10852419B2 (en) * 2017-10-20 2020-12-01 Texas Instruments Incorporated System and method for camera radar fusion
US9971027B1 (en) * 2017-11-10 2018-05-15 Helios Remote Sensing Systems, Inc. Methods and systems for suppressing clutter in radar systems
US11353577B2 (en) * 2018-09-28 2022-06-07 Zoox, Inc. Radar spatial estimation
KR20200067629A (en) * 2018-12-04 2020-06-12 삼성전자주식회사 Method and device to process radar data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A track-before-detect method for maneuvering dim targets in airborne pulse Doppler radar based on the parabolic randomized Hough transform; Yu Hongbo et al.; Acta Armamentarii; Vol. 36, No. 10, pp. 1924-1932 *

Also Published As

Publication number Publication date
US20210011124A1 (en) 2021-01-14
CN112213692A (en) 2021-01-12
US11143747B2 (en) 2021-10-12

Similar Documents

Publication Publication Date Title
US11125567B2 (en) Methods and systems for mapping and localization for a vehicle
US8731742B2 (en) Target vehicle movement classification
US10168425B2 (en) Centralized vehicle radar methods and systems
US10859673B2 (en) Method for disambiguating ambiguous detections in sensor fusion systems
US10733420B2 (en) Systems and methods for free space inference to break apart clustered objects in vehicle perception systems
US10935652B2 (en) Systems and methods for using road understanding to constrain radar tracks
WO2012002251A1 (en) Signal processing apparatus, radar apparatus, vehicle control system, and signal processing method
CN110488295B (en) DBSCAN parameters configured from sensor suite
US11433886B2 (en) System, vehicle and method for adapting a driving condition of a vehicle upon detecting an event in an environment of the vehicle
CN110857983B (en) Object velocity vector estimation using multiple radars with different observation angles
US20200278423A1 (en) Removing false alarms at the beamforming stage for sensing radars using a deep neural network
US11042160B2 (en) Autonomous driving trajectory determination device
US20150102955A1 (en) Measurement association in vehicles
US20190339376A1 (en) Differential phase-based detector
CN113196362B (en) Detection device, mobile body system, and detection method
US20220146667A1 (en) Apparatus, system and method of radar tracking
CN115524666A (en) Method and system for detecting and mitigating automotive radar interference
CN112213692B (en) Method and system for classifying a received signal from a radar system
CN111599166B (en) Method and system for interpreting traffic signals and negotiating signalized intersections
CN115635963B (en) Target object screening method, target object screening device, electronic device, storage medium and vehicle
US11292487B2 (en) Methods and systems for controlling automated driving features of a vehicle
CN112069867B (en) Learning association of multi-objective tracking with multi-sensory data and missing modalities
US11989893B2 (en) Methods and systems for camera to lidar alignment using road poles
US20230311858A1 (en) Systems and methods for combining detected objects
EP4004668B1 (en) A method for forming a travelling path for a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant