WO2021121247A1 - Method and apparatus for determining target object tracking threshold


Info

Publication number
WO2021121247A1
Authority
WO
WIPO (PCT)
Prior art keywords
target, point cloud, frame, threshold, tracking
Prior art date
Application number
PCT/CN2020/136718
Other languages
French (fr)
Chinese (zh)
Inventor
崔天翔
刘兴业
康文武
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2021121247A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S 13/06 Systems determining position data of a target
    • G01S 13/66 Radar-tracking systems; Analogous systems
    • G01S 13/72 Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles

Definitions

  • This application relates to the field of automatic driving technology, and in particular to a method and device for determining a target object tracking threshold.
  • Target classification technology can provide a vehicle with information about surrounding objects and help the vehicle's subsequent driving decisions; therefore, target recognition plays a vital role in the vehicle's perception of its surroundings.
  • Target recognition may also be called target classification; it refers to determining that a target object is a certain type of object, that is, distinguishing the target object from other objects.
  • One solution is to use a clustering algorithm to cluster the detection points on the radar point cloud to obtain clusters. Then, the detection points in the same cluster are classified as a target object.
  • This solution has low accuracy: especially when the detection points in the radar point cloud are relatively sparse (for example, millimeter wave radar has low resolution, so its detection points are relatively sparse), the classification accuracy for target objects is low and there are many false detections.
  • In addition, clustering the detection points of a radar point cloud with a clustering algorithm often results in the detection points corresponding to a single object being split into multiple clusters, or in the detection points corresponding to multiple objects being merged into a single cluster.
  • Another solution is to use tracking algorithms such as the Kalman filter and the particle filter to frame the detection points with a fixed threshold frame.
  • The detection points framed by one threshold frame are regarded as one target object for classification. It can be understood that different objects have different sizes, so the ranges over which their detection points are distributed are different.
  • If a single, uniform tracking threshold frame is used for framing, detection points corresponding to different objects are often framed into the same threshold frame, which makes the tracks of different objects interfere with each other and produces many false detections.
  • In view of this, the embodiments of the present application provide a method and device for determining the tracking threshold of a target object, which can determine the category of the target object, thereby improving the accuracy of the target object's tracking threshold, effectively reducing the interference between the tracks of different target objects, and reducing the false detection rate.
  • In a first aspect, a method for determining a tracking threshold of a target object includes: determining at least one frame of radar point cloud, where the at least one frame of radar point cloud is a point data set obtained by a radar measuring the target object and includes a first frame of radar point cloud; determining N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold; determining K confidence levels according to the point cloud data within the first threshold, where the K confidence levels correspond one-to-one to K target categories (the other frames in the at least one frame of radar point cloud can be processed in the same way as the first frame to obtain their K confidence levels); determining a first target category at least according to the K confidence levels of the first frame of radar point cloud; and determining, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
  • With the solution of this application, the confidence that the target object belongs to each of several target categories can be determined; the category of the target object, that is, the first target category, is then determined according to these confidence levels; and a tracking threshold suitable for the target object is determined according to that category and used as the target threshold for tracking the target object. Determining the category of the target object in this comprehensive way can improve the accuracy of classification and thereby optimize the determination of the target threshold.
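  • As a non-authoritative illustration of this flow, the following Python sketch shows how the K confidence levels obtained under the N tracking thresholds could be combined to pick a target threshold; the helper functions frame_points and threshold_for_category and the classifier interface are assumptions, not names from the patent.

```python
def choose_target_threshold(point_cloud, tracking_thresholds, classifier):
    """Hypothetical sketch: pick the tracking threshold used to track one target object.

    tracking_thresholds: the N candidate threshold frames (size/speed limits).
    classifier: maps the point cloud data inside one threshold to K confidence levels.
    frame_points / threshold_for_category are assumed helpers, not defined in the patent.
    """
    totals = None
    for gate in tracking_thresholds:
        points_in_gate = frame_points(point_cloud, gate)   # point cloud data within this threshold
        confidences = classifier(points_in_gate)           # K confidence levels
        totals = confidences if totals is None else [t + c for t, c in zip(totals, confidences)]
    first_target_category = max(range(len(totals)), key=totals.__getitem__)
    return threshold_for_category(first_target_category, tracking_thresholds)
```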
  • The K confidence levels of the first frame of radar point cloud include a first confidence level corresponding to the first target category; the first confidence level characterizes the accuracy with which the point cloud data within the first threshold belongs to the first target category.
  • The other confidence levels among the K confidence levels characterize the accuracy with which the point cloud data within the first threshold belongs to the target categories corresponding to those confidence levels.
  • In this way, the accuracy with which the point cloud data within the first threshold belongs to each of the different target categories can be determined; the target category with the highest accuracy can then be taken as the category of the target object, and the tracking threshold most suitable for the target object can be determined.
  • the N tracking thresholds are determined according to preset parameter information, and are used to define the range corresponding to the target object in the radar point cloud of the first frame.
  • N tracking thresholds can be set according to preset parameter information so as to define the range corresponding to the target object in the radar point cloud to determine the type of the target object.
  • The parameter information includes geometric size information of the preset target categories and/or speed information of the preset target categories.
  • In other words, the geometric size information and/or speed information of the preset target categories can be used to set the tracking thresholds, so as to define the range corresponding to the target object in the radar point cloud and thereby determine the category of the target object and the target threshold suitable for it.
  • The first target category is determined based on K total confidence levels; the K total confidence levels include a first total confidence level, the first total confidence level is the sum of N first confidence levels, and the N first confidence levels correspond one-to-one to the N tracking thresholds.
  • In other words, the K confidence levels of the target object under each of the N tracking thresholds can be determined, and the confidence levels of the same target category under the N tracking thresholds can be added together to obtain the total confidence of that target category.
  • In this way, the total confidence of each of the different target categories can be obtained and the first target category can then be determined, which improves the accuracy of determining the target object's category.
  • The first target category is determined according to K multi-frame total confidence levels; the K multi-frame total confidence levels include a first multi-frame total confidence level, the first multi-frame total confidence level is the sum of at least one first total confidence level, and the at least one first total confidence level corresponds one-to-one to the at least one frame of radar point cloud.
  • In other words, the first target category can be determined according to the K total confidence levels over multiple frames of radar point cloud, that is, using the information of multiple frames of radar point cloud, which improves the accuracy of determining the target object's category.
  • the K target categories include two or more of pedestrians, automobiles, bicycles, and electric vehicles.
  • In this way, pedestrians, cars, bicycles, and electric vehicles can be distinguished, and each can be tracked with its own tracking threshold.
  • the at least one frame of radar point cloud is a millimeter wave radar point cloud.
  • the resolution of millimeter-wave radar is low.
  • In this way, the point data set of the target object measured by the millimeter-wave radar can be used to determine the category of the target object and the threshold for tracking it, which can reduce the interference between different tracks when the millimeter-wave radar is used to track target objects and reduce the false detection rate.
  • In a second aspect, an embodiment of the present application provides a device for determining a tracking threshold of a target object.
  • The device includes a processor and a transceiver. The transceiver is used to determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a point data set obtained by measuring the target object and includes a first frame of radar point cloud. The processor is used to determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and to determine K confidence levels according to the point cloud data within the first threshold, the K confidence levels corresponding one-to-one to K target categories. The processor is further configured to determine a first target category according to the K confidence levels, and to determine, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
  • The device may be, for example, a radar detection device, or a processing device independent of the radar device.
  • The K confidence levels of the first frame of radar point cloud include a first confidence level corresponding to the first target category; the first confidence level characterizes the accuracy with which the point cloud data within the first threshold belongs to the first target category.
  • The other confidence levels among the K confidence levels characterize the accuracy with which the point cloud data within the first threshold belongs to the target categories corresponding to those confidence levels.
  • the N tracking thresholds are determined according to preset parameter information, and are used to define the range corresponding to the target object in the radar point cloud of the first frame.
  • The parameter information includes geometric size information of the preset target categories and/or speed information of the preset target categories.
  • The first target category is determined based on K total confidence levels; the K total confidence levels include a first total confidence level, the first total confidence level is the sum of N first confidence levels, and the N first confidence levels correspond one-to-one to the N tracking thresholds.
  • The first target category is determined according to K multi-frame total confidence levels; the K multi-frame total confidence levels include a first multi-frame total confidence level, the first multi-frame total confidence level is the sum of at least one first total confidence level, and the at least one first total confidence level corresponds one-to-one to the at least one frame of radar point cloud.
  • the K target categories include two or more of pedestrians, automobiles, bicycles, and electric vehicles.
  • The device for determining the target object tracking threshold provided in the second aspect is used to implement the corresponding method provided in the first aspect. For the beneficial effects it can achieve, refer to those of the corresponding method provided in the first aspect; they are not repeated here.
  • In a third aspect, an embodiment of the present application provides a device for determining a target object tracking threshold.
  • The device includes a processing unit and a transceiver unit. The transceiver unit is used to determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a point data set obtained by measuring the target object and includes a first frame of radar point cloud. The processing unit is used to determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and to determine K confidence levels according to the point cloud data within the first threshold, the K confidence levels corresponding one-to-one to K target categories. The processing unit is further configured to determine a first target category based on the K confidence levels, and to determine, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
  • The device for determining the tracking threshold of a target object provided in the third aspect is used to execute the corresponding method provided in the first aspect. For the beneficial effects it can achieve, refer to those of the corresponding method provided in the first aspect; they are not repeated here.
  • In a fourth aspect, an embodiment of the present application provides a computer storage medium. The computer storage medium includes computer instructions that, when run on an electronic device, cause the electronic device to execute the method described in the first aspect.
  • The computer storage medium provided in the fourth aspect is used to execute the corresponding method provided in the first aspect. For the beneficial effects it can achieve, refer to those of the corresponding method provided in the first aspect; they are not repeated here.
  • In a fifth aspect, the embodiments of the present application provide a computer program product. When the program code included in the computer program product is executed by a processor in an electronic device, the method described in the first aspect is implemented.
  • The computer program product provided in the fifth aspect is used to execute the corresponding method provided in the first aspect. For the beneficial effects it can achieve, refer to those of the corresponding method provided in the first aspect; they are not repeated here.
  • In a sixth aspect, an embodiment of the present application provides a system for determining a target object tracking threshold, which is composed of a detection device and a processing device.
  • The detection device is used to determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a collection of point data obtained by the detection device measuring the target object and includes a first frame of radar point cloud.
  • The processing device is used to determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and to determine K confidence levels according to the point cloud data within the first threshold, the K confidence levels corresponding one-to-one to K target categories. The processing device is further configured to determine a first target category according to the K confidence levels, and to determine, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
  • the detection device may be a radar, such as a vehicle-mounted radar.
  • the system for determining the tracking threshold of the target object may be a smart car.
  • In a seventh aspect, an embodiment of the present application provides a chip system. The chip system includes a processor configured to execute instructions so that a device equipped with the chip system executes the method provided in the first aspect.
  • The solution provided by the embodiments of the present application can determine the category of a target object and use the tracking threshold corresponding to that category as the target threshold for tracking it. The detection points framed by the target threshold then no longer participate in clustering the detection points of other target objects or in establishing the tracks of other target objects, so the detection points within the target threshold frame do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is reduced.
  • Figure 1A shows an application scenario of target recognition
  • FIG. 1B shows a clustering result of the point cloud data corresponding to the target object in the scene shown in FIG. 1A;
  • Figure 2A shows another application scenario of target recognition
  • FIG. 2B shows the result of using a fixed tracking threshold to frame the point cloud data corresponding to the target object in the scene shown in FIG. 2A;
  • FIG. 3 is a schematic diagram of an application scenario of an embodiment of the application.
  • FIG. 4 is a schematic diagram of the hardware structure of a vehicle provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of determining an estimated category of a target object according to an embodiment of the application.
  • FIG. 6 is a schematic diagram of determining the confidence sum of a single frame according to an embodiment of the application.
  • FIG. 7 is a schematic diagram of determining an estimated category of a target object according to an embodiment of the application.
  • FIG. 8 is a flowchart of adjusting the estimated category of a target object provided by an embodiment of the application.
  • FIG. 9A is a scene diagram of an actual verification experiment provided by an embodiment of the application.
  • FIG. 9B is a schematic diagram of an actual verification result provided by an embodiment of the application.
  • FIG. 10A is a scene diagram of an actual verification experiment provided by an embodiment of the application.
  • FIG. 10B is a schematic diagram of an actual verification result provided by an embodiment of the application.
  • FIG. 11A is a scene diagram of an actual verification experiment provided by an embodiment of the application.
  • FIG. 11B is a schematic diagram of an actual verification result provided by an embodiment of the application.
  • FIG. 12 is a flowchart of determining a target object tracking threshold provided by an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of a device for determining a target object tracking threshold provided by an embodiment of the application.
  • FIG. 14 is a schematic block diagram of a device for determining a target object tracking threshold provided by an embodiment of the application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature.
  • The terms "comprising", "including", "having" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
  • the vehicle 100 may be an automobile, or other forms of motor vehicles.
  • The vehicle may be, for example, a car, a bus, a truck, a motorcycle, an agricultural vehicle, a parade float, a recreational vehicle in an amusement park, or the like.
  • the vehicle 100 may be in an automatic driving state, that is, the vehicle 100 is driven completely autonomously, without the driver's control or only a small amount of driver's control.
  • the vehicle 100 can track nearby objects, such as the vehicle 210, the pedestrian 220, etc., to provide assistance for the subsequent driving decision of the vehicle 100.
  • the vehicle 100 may interact with the control center 300 to perform automatic driving with the assistance of the control center 300.
  • FIG. 4 shows the hardware structure of the vehicle 100.
  • the vehicle 100 may include a computing system 102, an interactive system 104, a propulsion system 106, a sensor system 108, a control system 110, and a power source 112.
  • the computing system 102 may include a processor 1021, a memory 1022, and the like.
  • the interactive system 104 may include a wireless communication system 1041, a display screen 1042, a microphone 1043, a speaker 1044, and the like.
  • the propulsion system 106 may include a power component 1061, an energy component 1062, a transmission component 1063, an actuation component 1064, and the like.
  • the sensor system 108 may include a positioning component 1081, a camera 1082, an inertial measurement unit 1083, a radar 1084, and the like.
  • the control system may include a control component 1101, a throttle valve 1102, a brake component 1103, and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the vehicle 100.
  • the vehicle 100 may include more or fewer components than shown, or combine certain components, or disassemble certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The components of the vehicle 100 can be connected together through a system bus (for example, a controller area network (CAN) bus), a network, and/or other connection mechanisms, so that the components can work in an interconnected manner.
  • the processor 1021 may include one or more processing units.
  • For example, the processor 1021 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the memory 1022 may be used to store computer executable program code, where the executable program code includes instructions.
  • the memory 1022 may include a program storage area and a data storage area.
  • the storage program area can store information such as a classifier, and can also store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.).
  • the memory 1022 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 1021 can execute various automobile functions and data processing described below by running instructions stored in the memory 1022.
  • The computing system 102 can be implemented as a vehicle-mounted intelligent system or an automatic driving system, which can realize automatic driving of the vehicle 100 (while the vehicle 100 is running, it drives completely autonomously, without the driver's control or with only a small amount of driver control). It can also realize semi-autonomous driving of the vehicle 100 (while the vehicle is running, it is not fully autonomous and requires appropriate control by the driver). The driver can also drive the vehicle 100 manually (the driver fully controls the vehicle 100).
  • the computing system 102 may include a vehicle controller.
  • the vehicle controller is the core control component of the vehicle.
  • the vehicle controller is configured to complete numerous task coordination while the vehicle is running.
  • the main tasks include: communication with subsystems; collecting driver's operating signals to identify their intentions; monitoring the driving status of the vehicle, detecting and identifying vehicle faults, storing fault information, and ensuring the safe driving of the vehicle.
  • the vehicle controller also contains multiple independent motor control units, and the information exchange between the vehicle controller and the motor control unit is carried out through a bus.
  • the vehicle controller is the controller center of the vehicle. It can communicate with signal sensors, active steering controllers, and electric drive controllers through CAN bus communication to realize signal collection, control strategy decision-making, and drive signal output.
  • The vehicle controller collects and processes signals from sensors (such as the accelerator pedal and brake pedal), and is responsible for the power-on/power-off logic of its own controller and of the motor control unit. It is also responsible for torque calculation: calculating the driver's demanded torque, distributing mechanical and electric braking torque, distributing drive/brake torque between the front and rear axles, and distributing torque among the four wheel motors. It is also responsible for energy optimization management: charging control, power distribution based on motor operating efficiency, and braking energy recovery control. It is also responsible for vehicle dynamics control: vehicle state recognition, yaw control, anti-skid control, anti-lock control, anti-roll control, and active steering control. It is also responsible for monitoring and diagnosis functions: monitoring bus node transmission and reception, sensor failure diagnosis, torque monitoring, CPU monitoring and diagnosis, fault management, and safety measures upon faults (such as vehicle deceleration and speed-limit processing).
  • the vehicle controller can complete data exchange with other sub-control units (such as motor controllers, power management systems, dashboards, etc.) through CAN network communication.
  • the motor control unit receives the command distributed by the vehicle controller through the CAN bus, converts the chemical energy of the battery pack into the mechanical energy of the motor, and then transmits the power to the wheels through the transmission system to ensure the power of the vehicle.
  • the computing system 102 may also include a body controller.
  • the body controller manages modules in the field of vehicle body electronics and supports multiple functions.
  • A typical body control module is built around a microprocessor and is used to control the body electronic equipment and its functions (power windows, wipers, side mirrors, etc.).
  • ports are provided on the body controller for communication with different body control modules, instrument panels, sensors and actuators, etc.
  • the computing system 102 may include an intelligent driving controller for processing data from various sensors.
  • The wireless communication system 1041 may include one or more antennas, modems, baseband processors, etc., and may communicate with the control center 300, other automobiles, and other communication entities.
  • The wireless communication system can be configured to communicate according to one or more communication technologies, such as mobile communication technologies (2G/3G/4G/5G), wireless local area network (WLAN) technologies such as wireless fidelity (Wi-Fi), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and so on; other wireless communication technologies are not listed here.
  • the display screen 1042 is used to display images, videos, and so on.
  • the display screen 1042 includes a display panel.
  • The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the display panel may be covered with a touch panel, and when the touch panel detects a touch operation on or near it, the touch operation may be transmitted to the processor 1021 to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 1042. In other embodiments, the position of the touch panel and the display screen 1042 may be different.
  • The microphone 1043, also called a "mic", is used to convert sound signals into electrical signals.
  • When the user wants to control the vehicle 100 by voice, the user can speak close to the microphone 1043 to input a voice command into the microphone 1043.
  • the vehicle 100 may be provided with at least one microphone 1043.
  • In some embodiments, the vehicle 100 may be provided with two microphones 1043, which, in addition to collecting sound signals, can also implement a noise reduction function.
  • In some embodiments, the vehicle 100 may be provided with three, four, or more microphones 1043 to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • The speaker 1044, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • Music or prompt messages can be played in the vehicle 100 through the speaker 1044.
  • the power component 1061 may be an engine, and may be any one or a combination of a gasoline engine, an electric motor of an electric vehicle, a diesel engine, a hybrid engine, etc., or a combination of other types of engines.
  • the energy component 1062 may be a source of energy, and provides power for the power component 1061 in whole or in part. That is, the power component 1061 may be configured to convert the energy provided by the energy component 1062 into mechanical energy.
  • Energy components 1062 can provide energy including gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power.
  • the energy component 1062 may also include any combination of fuel tanks, batteries, capacitors, and/or flywheels. In some embodiments, the energy component 1062 may also provide energy for other systems of the vehicle 100.
  • the transmission component 1063 may include a gearbox, a clutch, a differential, a transmission shaft, and other components. After being configured, the transmission component 1063 can transmit mechanical energy from the power component 1061 to the actuation component 1064.
  • the actuating member 1064 may include wheels, tires, and the like.
  • the wheels can be configured in various styles, including unicycle, two-wheeler/motorcycle, tricycle, or car/truck four-wheel style.
  • a tire may be attached to a wheel, and the wheel may be attached to the transmission member 1063, and may rotate in response to the mechanical power transmitted by the transmission member 1063 to drive the vehicle 100 to move.
  • the positioning component 1081 may be configured to estimate the position of the vehicle 100.
  • the positioning component 1081 may include a transceiver configured to estimate the position of the vehicle 100 relative to the earth based on satellite positioning data.
  • the computing system 102 may be configured to use the positioning component 1081 in conjunction with map data to estimate the road on which the vehicle 100 may travel and the position of the vehicle 100 on the road.
  • The positioning component 1081 may include a global positioning system (GPS) module, a BeiDou navigation satellite system (BDS) module, a Galileo satellite navigation system module, and so on.
  • the camera 1082 may include an outside camera configured to capture the environment outside the vehicle 100, and may also include an in-vehicle camera configured to capture the environment inside the vehicle 100.
  • the camera 1082 can be a camera that detects visible light, or can detect light from other parts of the spectrum (infrared or ultraviolet, etc.).
  • the camera 1082 is used to capture two-dimensional images, and can also be used to capture depth images.
  • An inertial measurement unit (IMU) 1083 is configured as any combination of sensors that sense changes in the position and orientation of the vehicle 100 based on inertial acceleration.
  • the inertial measurement unit 1083 may include one or more accelerometers and gyroscopes.
  • the radar 1084 may include a sensor configured to use radio waves or sound waves to sense or detect objects in the environment where the vehicle 100 is located.
  • the radar 1084 may include laser radar, millimeter wave radar, or ultrasonic radar.
  • The radar 1084 may include a waveform generator, a transmitting antenna, a receiving antenna, and a signal processor. In each scan, the waveform generator generates a waveform signal and transmits it through the transmitting antenna; after being reflected by objects in the environment where the vehicle 100 is located, the signal is received by the receiving antenna. By comparing the transmitted signal with the received signal, the original detection data can be obtained.
  • The signal processor of the radar 1084 can perform constant false-alarm rate (CFAR) detection, peak grouping, and direction-of-arrival (DOA) estimation on the original detection data to obtain detection points.
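  • For illustration only, a minimal one-dimensional cell-averaging CFAR detector (one common CFAR variant; the patent does not specify which variant is used) could be sketched as follows. The window sizes and scale factor are arbitrary example values.

```python
import numpy as np

def ca_cfar_1d(power, num_train=8, num_guard=2, scale=3.0):
    """Cell-averaging CFAR over a 1-D power profile (illustrative sketch).
    A cell is declared a detection when its power exceeds `scale` times the
    average power of the surrounding training cells (guard cells excluded)."""
    detections = []
    for cut in range(num_train + num_guard, len(power) - num_train - num_guard):
        leading = power[cut - num_guard - num_train : cut - num_guard]
        trailing = power[cut + num_guard + 1 : cut + num_guard + 1 + num_train]
        noise = (np.sum(leading) + np.sum(trailing)) / (2 * num_train)
        if power[cut] > scale * noise:
            detections.append(cut)
    return detections
```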
  • the detection points obtained by the radar 1084 in one scan form a frame of radar point cloud.
  • the detection points scanned by the radar 1084 may also be referred to as a point data set.
  • When the radar 1084 is a millimeter wave radar, its resolution is low and the detection points on its radar point cloud are relatively sparse; therefore, the radar point cloud of a millimeter wave radar can be called a sparse point cloud.
  • the radar 1084 may transmit the original detection data to the computing system 102, and the computing system 102 determines to obtain the radar point cloud according to the original detection data.
  • the radar 1084 may send the original detection data to the control center 300 through the wireless communication system 1041, and the control center determines to obtain the radar point cloud based on the original detection data.
  • the detection point on the radar point cloud corresponds to the reflection point of the reflected received signal.
  • one object corresponds to multiple reflection points, that is, one object can correspond to multiple detection points on the radar point cloud.
  • The signal processor can obtain information such as the position and speed of the reflection point corresponding to a detection point based on the time difference between the received signal and the transmitted signal and on the Doppler frequency shift; that is, each detection point on the radar point cloud carries information such as position and speed.
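  • As a simple illustration of the information carried by each detection point, a detection point could be represented as below; the field names are ours, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class DetectionPoint:
    """Hypothetical container for one detection point on a radar point cloud."""
    x: float              # position, metres (range/azimuth resolved into vehicle coordinates)
    y: float
    radial_speed: float   # metres per second, derived from the Doppler frequency shift
```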
  • the signal processor of the radar 1084 may execute the method provided in the embodiments of the present application.
  • the radar 1084 may transmit the radar point cloud to the computing system 102 so that the computing system 102 can execute the method provided in the embodiments of the present application.
  • The radar 1084 can also send the radar point cloud to the control center 300 through the wireless communication system 1041, so that the control center executes the method provided in the embodiments of this application and feeds the processing result back to the vehicle 100.
  • The control component 1101 may be a component configured to adjust the direction of movement of the vehicle 100 in response to a driver's operation or a computer instruction.
  • the throttle valve 1102 may be a component configured to control the operating speed and acceleration of the power component 1061 and thereby control the speed and acceleration of the vehicle 100.
  • the brake component 1103 may be a component configured to reduce the moving speed of the vehicle 100.
  • the brake component 1103 may use friction to slow the rotation speed of the wheels in the actuating component 1064.
  • the power supply 112 may be configured to provide power to a part or all of the components of the vehicle 100.
  • the power source 112 may include a lithium ion battery or a lead storage battery that can be recharged and discharged.
  • the power source 112 may include one or more battery packs.
  • The power supply 112 and the energy component 1062 can be implemented together: the chemical energy provided by the power supply 112 can be converted into mechanical energy of the motor through the power component 1061 and transmitted to the actuating component 1064 through the transmission component 1063 to move the vehicle 100.
  • the method for determining the tracking threshold of the target object may be applied to scenarios such as automatic driving, automatic parking, or automatic cruise of the vehicle 100.
  • The method can determine the target category of a target object detected by the on-board radar of the vehicle 100 and use the tracking threshold corresponding to that target category to track the target object, which can reduce the interference of the target object's detection points with the tracking of other target objects, thereby eliminating or reducing the interference between the tracks of multiple target objects when the vehicle 100 tracks them and reducing the false detection rate.
  • the method may be implemented by a detection device, such as a radar device.
  • the radar device may be a vehicle-mounted radar device, such as the radar 1084 shown in FIG. 4.
  • This method can also be implemented by a processing device integrated in the detection device, such as the signal processor in the radar 1084 shown in FIG. 4.
  • the method can also be implemented by a processing device independent of the detection device (for example, the control center 300, the computing system 102, etc.), and the processing result is fed back to the detection device.
  • the target category can be referred to as a category for short, which can be a preset object category, for example, multiple target categories such as pedestrians, cars, bicycles, and electric vehicles can be set.
  • The tracking threshold, also known as the tracking threshold frame, refers to a limit range set according to the parameter information of a target category.
  • the tracking threshold of the target category can be set according to the size information and/or speed information of the target category.
  • the size in the size information may be the upper limit of the size.
  • the speed in the speed information can be the upper speed limit or the lower speed limit.
  • the tracking threshold corresponding to the target category of pedestrians can be set as size (1.5-2)m*(2.5-3)m and speed 4m/s.
  • the speed of 4m/s can be the upper limit of speed.
  • the tracking threshold corresponding to the target category of pedestrians can be set to a size of 1.5m*2.5m and a speed of 4m/s.
  • the tracking threshold corresponding to the target category of pedestrians can be set to a size of 2m*3m and a speed of 4m/s.
  • The tracking threshold corresponding to the target category of pedestrians can also be set to a size of 1.8m*2.6m and a speed of 4m/s, and so on.
  • the tracking threshold corresponding to the target category of pedestrians can be set based on experience or experiment.
  • The tracking threshold corresponding to the target category of cars can be set as size (3-5)m*(5-7)m and speed 10m/s.
  • the speed of 10m/s can be the lower limit of speed.
  • the tracking threshold corresponding to the target category of cars can be set to a size of 3m*5m and a speed of 10m/s.
  • the tracking threshold corresponding to the target category of cars can be set to a size of 4m*6m and a speed of 10m/s.
  • The tracking threshold corresponding to the target category of cars can also be set to a size of 5m*7m and a speed of 10m/s, and so on.
  • the tracking threshold corresponding to the target category of automobiles can be set based on experience or experiments.
  • the tracking threshold corresponding to the target category of bicycles can be set as 3m*4m in size and 7m/s in speed. Among them, the speed of 7m/s can be the upper limit of speed.
  • the tracking threshold corresponding to the target category of bicycles can be set based on experience or experiment.
  • Take the target category of electric vehicles as an example. It can be understood that, generally speaking, the size of an electric vehicle together with its rider is similar to that of a bicycle together with its rider, and their speeds are also similar.
  • Therefore, the tracking threshold corresponding to the target category of electric vehicles can be the same as the tracking threshold corresponding to bicycles.
  • the tracking threshold can also be set independently for the electric vehicle, for example, it can be 3m*4m in size and 8m/s in speed. Among them, the speed of 8m/s can be the upper limit of speed.
  • the tracking threshold corresponding to the target category of electric vehicles can be set based on experience or experiments.
  • In the embodiments of the present application, K target categories and N tracking thresholds can be set, where K and N are both positive integers greater than 1 and K ≥ N.
  • Each target category may correspond to a tracking threshold, wherein there may be two or more target categories that jointly correspond to a tracking threshold, and different tracking thresholds correspond to different target categories.
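  • Using the example sizes and speeds quoted above, the tracking thresholds could be parameterized as a simple table. The dictionary below is only an illustration with hypothetical names; the exact values are configurable.

```python
# Illustrative parameterization of the tracking thresholds (values from the examples above).
# speed_is_upper_bound: True means points faster than speed_mps are eliminated,
# False means points slower than speed_mps are eliminated.
TRACKING_THRESHOLDS = {
    "pedestrian":       {"size_m": (2.0, 3.0), "speed_mps": 4.0,  "speed_is_upper_bound": True},
    "car":              {"size_m": (5.0, 7.0), "speed_mps": 10.0, "speed_is_upper_bound": False},
    "bicycle":          {"size_m": (3.0, 4.0), "speed_mps": 7.0,  "speed_is_upper_bound": True},
    "electric_vehicle": {"size_m": (3.0, 4.0), "speed_mps": 8.0,  "speed_is_upper_bound": True},
}
```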
  • A machine learning algorithm can be used to train on the detection points (also referred to as point data sets) in multi-frame radar point clouds pre-labelled with target categories, so as to obtain a classifier for the K target categories (the classifier can also be called a recognition model).
  • the machine learning algorithm used may be the XGBoost algorithm. In the training process of the classifier, 70% of the training samples may be taken as the training set and 30% of the training samples as the test set.
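  • A minimal training sketch under these assumptions might look like the following; the feature matrix X (extracted from the labelled point data sets) and the integer labels y are assumed to exist already, and the feature design is not specified here.

```python
import xgboost as xgb
from sklearn.model_selection import train_test_split

# X: feature vectors extracted from labelled point-cloud clusters (e.g. point count,
# spatial extent, mean speed); y: integer labels for the K target categories.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = xgb.XGBClassifier(objective="multi:softprob")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Later, clf.predict_proba(features_of_one_threshold) yields the K confidence levels
# for the point cloud data framed by one tracking threshold.
```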
  • the tracking process of each target object can be divided into a track establishment stage and a track tracking stage.
  • the tracking threshold for tracking the target object can be determined in the stage of establishing the track of the target object.
  • the tracking threshold used to track the target object may also be referred to as the target threshold of the target object.
  • the target threshold of the target object can be used to track the target object.
  • A detection point that falls within the target threshold of the target object is regarded as belonging to that target object; it is used only to determine the track of that target object and no longer participates in determining other tracks.
  • a clustering algorithm can be used to cluster the detection points on the radar point cloud 1 to roughly classify the target objects detected by the radar point cloud.
  • the clustering algorithm can be K-means.
  • the clustering algorithm may be DBSCAN. Other clustering algorithms can also be used, which will not be listed here.
  • Before clustering, the detection points on the radar point cloud 1 may be filtered to exclude the stationary detection points; that is, only the moving detection points on the radar point cloud 1 are clustered.
  • Each type of cluster obtained by clustering can be regarded as a target object.
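  • A possible clustering step is sketched below in Python with scikit-learn's DBSCAN; the eps, min_samples, and speed-filter values are illustrative assumptions, not values from the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_moving_points(points, speed_threshold=0.5, eps=1.5, min_samples=2):
    """Cluster only the moving detection points of one radar frame.
    `points` is an (M, 3) array of [x, y, radial_speed]."""
    moving = points[np.abs(points[:, 2]) > speed_threshold]   # exclude stationary detection points
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(moving[:, :2])
    # Each label (except -1, the noise label) forms one cluster, i.e. one candidate target object.
    return [moving[labels == k] for k in set(labels) if k != -1]
```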
  • Consider any one of the target objects, for example, the target object S1.
  • The detection points of the target object S1 are framed by tracking threshold 1, by tracking threshold 2, ..., and by tracking threshold N; that is, the detection points of the target object S1 are framed separately by each of the N tracking thresholds.
  • Any detection point framed by a tracking threshold can be referred to as a detection point within that tracking threshold, or as point cloud data within that tracking threshold.
  • each type of cluster obtained has a cluster center.
  • Since each tracking threshold includes size information, the detection point at which the cluster center of the target object is located can be used as the center of the frame when framing the detection points (point cloud data) within each tracking threshold.
  • When a tracking threshold also includes speed information, the detection points within that tracking threshold can be further screened according to the speed information. Take the tracking threshold corresponding to the target category of pedestrians as an example, whose speed information is 4m/s: among the detection points within that tracking threshold, the detection points with a speed greater than 4m/s can be eliminated.
  • Similarly, for the tracking threshold corresponding to the target category of automobiles, whose speed information is 10m/s, the detection points with a speed less than 10m/s within that tracking threshold can be excluded.
  • the point cloud data within the N tracking thresholds corresponding to the target object S1 can be obtained.
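  • Framing the detection points with one tracking threshold, centred on the cluster center and screened by the threshold's speed information, could be sketched as follows; this builds on the illustrative TRACKING_THRESHOLDS dictionary introduced earlier and is an assumption about one possible implementation.

```python
import numpy as np

def frame_points(points, center, gate):
    """Return the point cloud data within one tracking threshold (illustrative sketch).
    `points` is an (M, 3) array of [x, y, radial_speed]; `center` is the cluster center (x, y);
    `gate` is one entry of the hypothetical TRACKING_THRESHOLDS dictionary."""
    width, length = gate["size_m"]
    inside = (np.abs(points[:, 0] - center[0]) <= width / 2) & \
             (np.abs(points[:, 1] - center[1]) <= length / 2)
    if gate["speed_is_upper_bound"]:
        inside &= np.abs(points[:, 2]) <= gate["speed_mps"]   # e.g. pedestrians: eliminate points faster than 4 m/s
    else:
        inside &= np.abs(points[:, 2]) >= gate["speed_mps"]   # e.g. cars: eliminate points slower than 10 m/s
    return points[inside]
```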
  • the point cloud data in each of the N tracking thresholds corresponding to the target object S1 is input into the above-mentioned classifier, and K confidence levels corresponding to the K target categories can be obtained.
  • A confidence level is used to characterize the accuracy with which the target object belongs to a certain target category.
  • the point cloud data within the tracking threshold 1 corresponding to the target object S1 can be input into the above-mentioned classifier to obtain K confidence levels corresponding to the target object S1 under the tracking threshold 1.
  • the point cloud data corresponding to the tracking threshold 1 is processed to obtain the confidence of each target category, and a total of K confidences are obtained.
  • the first confidence level in the K confidence levels represents the accuracy of the target object S1 belonging to the first target category corresponding to the first confidence level.
  • Likewise, the point cloud data within tracking threshold 2 corresponding to the target object S1 can be input into the aforementioned classifier to obtain K confidence levels for the target object S1 under tracking threshold 2, which also include a first confidence level; this first confidence level corresponds to tracking threshold 2, is obtained by processing the point cloud data within tracking threshold 2, and characterizes the accuracy with which the target object S1 belongs to the first target category.
  • the above-mentioned processing can be performed on all N tracking thresholds, and then K confidence levels corresponding to each tracking threshold can be obtained, that is, N*K confidence levels can be obtained.
  • the confidence levels of the target object S1 corresponding to the same target category under N tracking thresholds can be added, and the obtained sum can be used as the single frame confidence sum of the target category.
  • For example, the confidence level of category 1 for the target object S1 under tracking threshold 1, the confidence level of category 1 under tracking threshold 2, ..., and the confidence level of category 1 under tracking threshold N are added together; that is, the N first confidence levels are added, and the sum obtained can be used as the single-frame confidence sum of category 1.
  • the single-frame confidence sum of category 2,..., and category K can be obtained.
  • the sum of the single-frame confidence levels of each target category under the target object S1 may be compared to obtain the target category with the highest single-frame confidence.
  • the target category with the highest single frame confidence can be used as the estimated category of the target object S1.
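  • Putting the pieces together for one frame, the single-frame confidence sums and the estimated category could be computed as in this sketch; frame_points and the classifier interface are the same assumptions used above.

```python
import numpy as np

def single_frame_estimate(cluster_points, center, thresholds, classifier):
    """Estimate the category of one target object from one frame of radar point cloud."""
    conf_matrix = []                                    # will hold N rows of K confidence levels
    for gate in thresholds.values():
        points_in_gate = frame_points(cluster_points, center, gate)
        conf_matrix.append(classifier(points_in_gate))  # K confidence levels under this threshold
    single_frame_sums = np.sum(conf_matrix, axis=0)     # add the N confidences of each category
    estimated_category = int(np.argmax(single_frame_sums))
    return estimated_category, single_frame_sums
```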
  • The tracking threshold corresponding to the estimated category of the target object S1 is used as the tracking threshold for tracking the target object S1 in its track tracking phase, that is, as the target threshold of the target object S1.
  • The target threshold of the target object S1 can be used to frame the detection point cluster corresponding to the target object S1, so that the detection points within the target threshold of the target object S1 are used to determine the track of the target object S1.
  • The detection points within the target threshold of the target object S1 no longer participate in clustering or in establishing the tracks of other target objects, thereby reducing mutual interference between tracks and reducing the false detection rate.
  • The detection points that are not framed by any target threshold can be clustered again to obtain new clusters, and the new clusters are used to identify new target objects or to establish their tracks.
  • Using the target threshold of the target object S1 to frame the detection point cluster corresponding to the target object S1 may specifically mean framing with the detection point at which the cluster center of that detection point cluster is located as the center of the frame.
  • When multiple frames of radar point cloud are used, a clustering algorithm can be used to cluster the detection points of each frame to obtain clusters. A tracking algorithm is then used to associate the clusters of two adjacent frames of radar point cloud, so that the clusters in the two adjacent frames that can be regarded as corresponding to the same object are obtained, that is, the clusters in the two adjacent frames that can be regarded as corresponding to the target object S1.
  • the tracking algorithm may be Kalman filter algorithm, particle filter algorithm, etc.
  • the single-frame confidence sum of the target object S1 in each frame of the radar point cloud in the multi-frame radar point cloud can be calculated.
  • the calculation method of the single-frame confidence sum of each target category under each frame of radar point cloud can refer to the above introduction.
  • the single-frame confidence sum corresponding to the same target category under each frame of the radar point cloud in the multi-frame radar point cloud can be added to obtain the multi-frame confidence sum of the target category.
  • For example, the single-frame confidence sum of category 1 for the target object S1 under radar point cloud 1, ..., under radar point cloud P, and under radar point cloud P+1 can be added to obtain the multi-frame confidence sum of category 1 corresponding to S1.
  • The multi-frame confidence sum can be a simple addition of the single-frame confidence sums.
  • It may also be a weighted addition of the single-frame confidence sums. For example, a weight can be assigned to the single-frame confidence sum of each frame according to the order of the acquisition times of the frames in the multi-frame radar point cloud.
  • Taking radar point cloud L1, radar point cloud L2, and radar point cloud L3 as an example, suppose the acquisition time of L1 is earlier than that of L2, and the acquisition time of L2 is earlier than that of L3.
  • Then the weight of the single-frame confidence sum corresponding to L1 < the weight corresponding to L2 < the weight corresponding to L3.
  • For example, the weight corresponding to radar point cloud L1 can be 0.1, the weight corresponding to radar point cloud L2 can be 0.3, and the weight corresponding to radar point cloud L3 can be 0.6; a weighted-combination sketch is given below.
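  • A small sketch of the simple and weighted multi-frame combination, assuming the per-frame single-frame confidence sums are stacked into an (F, K) array (names are illustrative):

```python
import numpy as np

def multi_frame_confidence_sum(single_frame_sums, weights=None):
    # single_frame_sums: (F, K); row f holds the K single-frame confidence sums of
    # frame f, frames ordered by acquisition time (oldest first).
    s = np.asarray(single_frame_sums, dtype=float)
    if weights is None:
        return s.sum(axis=0)                      # simple addition
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    return (w * s).sum(axis=0)                    # weighted addition, e.g. weights=(0.1, 0.3, 0.6)
```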
  • The multi-frame confidence sums of the target categories for the target object S1 can be compared to obtain the target category with the largest multi-frame confidence sum.
  • The target category with the largest multi-frame confidence sum may be used as the estimated category of S1.
  • The tracking threshold corresponding to the estimated category of S1 is used as the tracking threshold for tracking S1 in the track tracking phase, that is, as the target threshold of S1.
  • The target threshold of S1 can be used to frame the detection point cluster corresponding to S1, so that the detection points within the target threshold are used to determine the track of S1.
  • The detection points within the target threshold of S1 no longer participate in the clustering or track establishment of other target objects, which reduces mutual interference between tracks and lowers the false detection rate.
  • The confidence threshold can be preset based on experience or experiment; for example, it can be set to 80%, 85%, 95%, and so on, which are not listed one by one here.
  • The confidence that the target object S1 belongs to its estimated category is obtained as follows: the detection points (point cloud data) framed by the tracking threshold corresponding to the estimated category of S1 (the target threshold) are input into the above-mentioned classifier, and the confidence corresponding to the estimated category is taken from the K confidences output by the classifier.
  • the confidence that the target object S1 belongs to its estimated category under its target threshold may be a simple average of the confidences corresponding to multiple frames of radar point clouds.
  • the confidence that the target object S1 corresponding to each frame of the radar point cloud in the multi-frame radar point cloud belongs to the estimated category under the target threshold can be calculated.
  • the specific calculation method can refer to the introduction above, and will not be repeated here.
  • The confidences, for each frame of the multi-frame radar point cloud, that the target object S1 belongs to the estimated category under the target threshold can be added to obtain a multi-frame confidence of S1 belonging to the estimated category under the target threshold.
  • This multi-frame confidence is then divided by the number of frames to obtain the simple average of the confidences corresponding to the multi-frame radar point cloud, and this average is used as the confidence that S1 belongs to its estimated category under its target threshold.
  • the confidence that the target object S1 belongs to its estimated category under its target threshold may be a weighted average of the confidences corresponding to multiple frames of radar point clouds.
  • the confidence that the target object S1 corresponding to each frame of the radar point cloud in the multi-frame radar point cloud belongs to the estimated category under the target threshold can be calculated.
  • the specific calculation method can refer to the introduction above, and will not be repeated here.
  • The confidence, for each frame of the multi-frame radar point cloud, that the target object S1 belongs to the estimated category under the target threshold can be multiplied by the corresponding weight and the products added; the resulting weighted average is used as the confidence that S1 belongs to its estimated category under its target threshold.
  • The weight of each frame in the multi-frame radar point cloud can be assigned according to the order of the acquisition times of the frames, as introduced above and not repeated here; a small averaging sketch follows.
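  • A sketch of the simple and weighted averaging and the comparison with the confidence threshold; the helper name and the default threshold value are assumptions:

```python
import numpy as np

def average_estimated_category_confidence(per_frame_conf, weights=None, confidence_threshold=0.95):
    # per_frame_conf: the confidence, for each frame, that S1 belongs to its
    # estimated category under its target threshold.
    c = np.asarray(per_frame_conf, dtype=float)
    if weights is None:
        avg = float(c.mean())                       # simple average
    else:
        w = np.asarray(weights, dtype=float)
        avg = float((w * c).sum() / w.sum())        # weighted average by acquisition order
    return avg, avg > confidence_threshold          # True: lock the estimated category
```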
  • If the confidence that the target object S1 belongs to its estimated category is greater than the confidence threshold, the estimated category of S1 is locked, and S1 is no longer classified in subsequently collected radar point clouds.
  • By locking the estimated category of S1 in this case, the computational overhead of triggering the classifier is reduced and detection efficiency is improved.
  • If the confidence that S1 belongs to its estimated category is not greater than the confidence threshold, the estimated category of S1 can be re-determined in the track tracking stage according to the radar point cloud newly acquired by the detection device.
  • After re-determining the estimated category of S1, it can again be determined whether the confidence that S1 belongs to the re-determined estimated category is greater than the confidence threshold.
  • For example, suppose the confidence of the estimated category of S1 determined during the track establishment phase is less than or equal to the confidence threshold.
  • Then radar point cloud P+2 can be used to re-determine the estimated category of S1. If the confidence of the estimated category re-determined from radar point cloud P+2 is still less than or equal to the confidence threshold, radar point cloud P+3 (not shown), whose acquisition time is later than that of radar point cloud P+2, can be used to re-determine the estimated category again. This process is repeated until the confidence of the newly determined estimated category is greater than the confidence threshold or the track tracking of S1 ends; a loop sketch is given below.
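  • The re-determination loop could be sketched as follows; classify_frame is a hypothetical callback wrapping the threshold framing and the K-class classifier:

```python
def reclassify_until_confident(frames, classify_frame, confidence_threshold=0.95):
    # frames arrive in acquisition order (radar point cloud P+2, P+3, ...).
    for frame in frames:
        category, conf = classify_frame(frame)      # re-determine the estimated category
        if conf > confidence_threshold:
            return category                         # confidence exceeds the threshold: lock the category
    return None                                     # track tracking ended without exceeding the threshold
```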
  • After the estimated category of S1 is re-determined from a frame of radar point cloud, the tracking threshold corresponding to the newly determined estimated category can be used to frame that frame, the detection points within the threshold frame are used to determine the track of S1, and those detection points are prevented from participating in the track establishment of other target objects.
  • As another example, suppose the confidence of the estimated category of S1 determined in the track establishment phase is less than or equal to the confidence threshold.
  • The tracking threshold corresponding to the estimated category determined in the track establishment stage can still be used for framing on radar point cloud P+2.
  • The tracking threshold corresponding to the estimated category re-determined from radar point cloud P+2 can then be used for framing on radar point cloud P+3.
  • The method for determining the target object tracking threshold provided by the embodiments of the present application can determine the category of a target object and use the tracking threshold corresponding to that category as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of their tracks. In this way, the detection points within the target threshold frame do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
  • FIGS. 9A and 9B show an actual verification result of the method for determining the tracking threshold of a target object provided by an embodiment of the present application.
  • the target threshold of the pedestrian is determined to be the threshold 920 by the method for determining the target object tracking threshold provided in the embodiment of the present application.
  • the threshold 920 is a preset tracking threshold corresponding to pedestrians in the embodiment of the application.
  • FIGS. 10A and 10B show another actual verification result of the method for determining the tracking threshold of a target object provided by an embodiment of the present application.
  • the target threshold of the bicycle is determined to be the threshold 1020 by the method for determining the target object tracking threshold provided by the embodiment of the present application.
  • the threshold 1020 is the preset tracking threshold corresponding to the bicycle in the embodiment of the application.
  • Figures 11A and 11B show another actual verification result of the method for determining the target object tracking threshold provided by an embodiment of the present application.
  • the target threshold of the car is determined to be the threshold 1120 through the method for determining the target object tracking threshold provided in the embodiment of the present application.
  • the threshold 1120 is the preset tracking threshold corresponding to the car in the embodiment of the application.
  • The three verification results above show that the method for determining the target object tracking threshold provided by the embodiments of the present application can accurately determine the category of a target object and track the target object with the tracking threshold corresponding to that category.
  • an embodiment of the present application provides a method for determining a target object tracking threshold.
  • the method can be implemented by a detection device, such as a radar device.
  • the radar device may be a vehicle-mounted radar device, such as the radar 1084 shown in FIG. 4.
  • This method can also be implemented by a processing device integrated in the detection device, such as the signal processor in the radar 1084 shown in FIG. 4.
  • the method can also be implemented by a processing device independent of the detection device (for example, the control center 300, the computing system 102, etc.), and the processing result is fed back to the detection device.
  • the method may include the following steps.
  • Step 1201 Determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a point data set obtained by measuring a target object, and the at least one frame of radar point cloud includes the first frame of radar point cloud.
  • For example, the detection device or another processing device may determine one or more frames of radar point cloud based on the raw detection data collected by the detection device in one scan of the target object, or in multiple scans.
  • one frame of radar point cloud corresponds to one scan of the detection device.
  • Step 1203 Determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and determine K confidence levels based on the point cloud data in the first threshold, the K confidence levels correspond to K target categories one-to-one.
  • N tracking thresholds can be preset, and point cloud data (detection points) are respectively framed on the first frame of radar point cloud, and N tracking thresholds corresponding to the first frame of radar point cloud are respectively obtained.
  • The point cloud data in any one of the N tracking thresholds corresponding to the first frame of radar point cloud, for example the point cloud data in the first threshold (that is, the point cloud data framed by the first threshold), can be input into the K-class classifier to obtain K confidences.
  • The above process can be repeated for the other frames of radar point cloud to obtain the K confidences corresponding to each of their N tracking thresholds; a sketch of this step follows.
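  • A sketch of this framing-and-classification step for one frame, assuming rectangular tracking thresholds and a classifier callable that returns K confidences (both interfaces are hypothetical):

```python
import numpy as np

def confidences_for_frame(points, gates, classifier):
    # points: (M, 2) detection points of one frame; gates: N (center, size) rectangles;
    # classifier(points) -> length-K confidence vector.
    rows = []
    for center, size in gates:
        half = np.asarray(size) / 2.0
        inside = np.all(np.abs(points - np.asarray(center)) <= half, axis=1)
        rows.append(classifier(points[inside]))     # K confidences for this tracking threshold
    return np.asarray(rows)                         # (N, K) confidence matrix
```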
  • Step 1205 Determine the first target category according to the K confidence levels.
  • Step 1207 Determine a target threshold for tracking the target object from the N tracking thresholds according to the first target category.
  • the tracking threshold corresponding to the first target category may be determined as the target threshold for tracking the target object.
  • The point cloud data used to establish the track of the target object can be framed by the target threshold, and the point cloud data framed by the target threshold does not participate in the track establishment of other target objects.
  • The K confidences include a first confidence corresponding to the first target category, and the first confidence is used to characterize the accuracy with which the point cloud data in the first threshold belongs to the first target category.
  • any one of the K confidence levels corresponding to the threshold 1 in the N tracking thresholds is used to characterize the accuracy of the point cloud data in the threshold 1 belonging to the target category corresponding to the confidence.
  • the confidence level 1 represents the accuracy of the point cloud data in the threshold 1 belonging to the target category corresponding to the confidence level 1
  • the confidence level 2 represents the accuracy of the point cloud data in the threshold 1 belonging to the target category corresponding to the confidence level 2.
  • the confidence K represents the accuracy of the point cloud data in the threshold 1 belonging to the target category corresponding to the confidence K.
  • Similarly, any one of the K confidences corresponding to threshold 2 in the N tracking thresholds is used to characterize the accuracy with which the point cloud data in threshold 2 belongs to the target category corresponding to that confidence; any one of the K confidences corresponding to threshold 3 is used to characterize the accuracy with which the point cloud data in threshold 3 belongs to the target category corresponding to that confidence; ...; and any one of the K confidences corresponding to threshold N is used to characterize the accuracy with which the point cloud data in threshold N belongs to the target category corresponding to that confidence.
  • the N tracking thresholds are determined according to preset parameter information, and are used to define a range corresponding to the target object in the radar point cloud of the first frame.
  • Different tracking thresholds may be determined according to preset parameter information corresponding to different target categories. It can be understood that target objects belonging to the same target category have similar measurement ranges or point data ranges.
  • the parameter information can be set according to the measurement range or the point data range of the target object under the same target category, and then the corresponding tracking threshold can be determined according to the parameter information.
  • the range of the target object defined by the tracking threshold may be the range of the point data of the target object or the measurement range of the target object. Defining the range or measurement range of the point data of the target object by the tracking threshold can facilitate the classifier to classify the target object.
  • the parameter information includes geometric size information of a preset target category and/or speed information of a preset target category. Target objects belonging to the same target category have similar geometric dimensions and speeds. Through the geometric size information and/or speed information of the target category, the tracking threshold corresponding to the target category can be determined.
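  • As an illustration of how such parameter information might be turned into a tracking threshold, the sketch below builds a rectangular gate from a category's typical size plus the distance it can travel between frames; the frame interval, margin, and example numbers are assumed values, not taken from the source:

```python
def gate_size_for_category(length_m, width_m, speed_mps, frame_interval_s=0.1, margin=1.2):
    # Gate = typical geometric size of the category, enlarged by the displacement
    # the target can cover between two consecutive frames, plus a safety margin.
    motion = speed_mps * frame_interval_s
    return (margin * (length_m + motion), margin * (width_m + motion))

# e.g. a pedestrian-like category (assumed numbers): about 0.6 m x 0.6 m, walking at 1.5 m/s
pedestrian_gate = gate_size_for_category(0.6, 0.6, 1.5)
```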
  • If the confidence corresponding to the first target category is the largest of the K confidences and is greater than a preset first confidence threshold (for example, 95%), the first target category can be determined as the target category of the target object.
  • The first target category may be determined according to K total confidences; the K total confidences include a first total confidence, the first total confidence is the sum of N first confidences, and the N first confidences correspond one-to-one to the N tracking thresholds.
  • The point cloud data in each of the N tracking thresholds is input into the K-class classifier to obtain K confidences, so that each tracking threshold corresponds to K confidences.
  • The first confidences among the K confidences of the N tracking thresholds can be summed to obtain the first total confidence, and the second confidences can be summed to obtain the second total confidence, and so on.
  • The first target category can then be determined according to these K total confidences: the total confidences are compared, and the target category corresponding to the largest total confidence is used as the first target category.
  • Alternatively, the first target category is determined according to K multi-frame total confidences.
  • The K multi-frame total confidences include a first multi-frame total confidence, which is the sum of at least one first total confidence, and the at least one first total confidence corresponds one-to-one to the at least one frame of radar point cloud.
  • For the first frame of radar point cloud, the corresponding first total confidence is the sum of the N first confidences, and the N first confidences correspond one-to-one to the N tracking thresholds.
  • The sums of the confidences corresponding to the first frame of radar point cloud can be determined to obtain its K total confidences.
  • In the same way, the K total confidences corresponding to each frame of the at least one frame of radar point cloud can be obtained.
  • The first total confidences among the K total confidences of each frame can be added to obtain the first multi-frame total confidence.
  • In the same way, the multi-frame total confidence corresponding to each of the K total confidences can be obtained, that is, the K multi-frame total confidences.
  • The target category corresponding to the largest of the K multi-frame total confidences may be determined as the first target category; a compact sketch of the whole computation follows.
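  • The whole multi-frame determination can be written compactly, assuming the confidences are gathered into an (F, N, K) array (an assumption made for this sketch):

```python
import numpy as np

def first_target_category(conf_tensor):
    # conf_tensor[f, n, k]: confidence that the point cloud data framed by tracking
    # threshold n in frame f belongs to target category k.
    total_per_frame = conf_tensor.sum(axis=1)         # (F, K) total confidences per frame
    multi_frame_total = total_per_frame.sum(axis=0)   # (K,) multi-frame total confidences
    return int(np.argmax(multi_frame_total))          # first target category
```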
  • the K target categories include two or more of the following:
  • pedestrians, cars, bicycles, and electric vehicles.
  • the at least one frame of radar point cloud is a millimeter wave radar point cloud.
  • The method for determining the target object tracking threshold provided by the embodiments of the present application can determine the category of a target object and use the tracking threshold corresponding to that category as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of their tracks. In this way, the detection points within the target threshold frame do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
  • an embodiment of the present application provides an apparatus for determining a tracking threshold of a target object.
  • the device may include a processor 1310 and a transceiver 1320.
  • the processor 1310 executes computer instructions to make the device execute the method shown in FIG. 12.
  • the processor 1310 may determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a point data set obtained by measuring a target object, and the at least one frame of radar point cloud includes the first frame of radar point cloud;
  • the processor 1310 may determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and determine K confidence levels based on the point cloud data in the first threshold, so The K confidence levels correspond one-to-one with the K target categories.
  • the processor 1310 may determine the first target category according to the K confidence levels.
  • the processor 1310 may determine a target threshold for tracking the target object from the N tracking thresholds according to the first target category.
  • the device further includes a memory 1330.
  • the memory 1330 may be used to store the foregoing computer instructions, and may also be used to store a classifier and the like.
  • The electronic device further includes a communication bus 1340. The processor 1310 can be connected to the transceiver 1320 and the memory 1330 through the communication bus 1340, so that, by executing the computer instructions stored in the memory 1330, the processor 1310 can correspondingly control the transceiver 1320 and other components; a structural sketch is given below.
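  • A structural sketch of how such a device might tie steps 1201 to 1207 together; the class, the method names, and the assumption that the N tracking thresholds map one-to-one onto the K categories are all illustrative, not from the source:

```python
import numpy as np

class TrackingThresholdDevice:
    def __init__(self, transceiver, classifier, gates):
        self.transceiver = transceiver   # plays the role of transceiver 1320
        self.classifier = classifier     # K-class classifier held in memory 1330
        self.gates = gates               # N preset tracking thresholds, one per category (assumed)

    def determine_target_threshold(self):
        frames = self.transceiver.receive_point_clouds()               # step 1201 (hypothetical API)
        conf = np.stack([self._frame_confidences(f) for f in frames])  # step 1203, shape (F, N, K)
        category = int(np.argmax(conf.sum(axis=1).sum(axis=0)))        # step 1205
        return self.gates[category]                                    # step 1207: threshold matching the category

    def _frame_confidences(self, points):
        rows = []
        for center, size in self.gates:
            half = np.asarray(size) / 2.0
            inside = np.all(np.abs(points - np.asarray(center)) <= half, axis=1)
            rows.append(self.classifier(points[inside]))
        return np.asarray(rows)                                        # (N, K)
```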
  • With the above device, the category of the target object can be determined, and the tracking threshold corresponding to that category can be used as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of their tracks. In this way, the detection points within the target threshold frame do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
  • The processor in the embodiments of the present application may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • the general-purpose processor may be a microprocessor or any conventional processor.
  • an embodiment of the present application provides an apparatus 1400 for determining a target object tracking threshold.
  • the device 1400 includes a processing unit 1410 and a transceiver unit 1420.
  • the transceiver unit 1420 is configured to determine at least one frame of radar point cloud, the at least one frame of radar point cloud is a point data set obtained by measuring a target object, and the at least one frame of radar point cloud includes the first frame of radar point cloud;
  • the processing unit 1410 is configured to determine N tracking thresholds corresponding to the first frame of radar point cloud, the N tracking thresholds include a first threshold, and K confidence levels are determined according to the point cloud data in the first threshold, and the K confidence levels One-to-one correspondence with K target categories;
  • the processing unit 1410 is further configured to determine the first target category according to the K confidence levels
  • the processing unit 1410 is further configured to determine a target threshold for tracking the target object from the N tracking thresholds according to the first target category.
  • The apparatus for determining the target object tracking threshold provided by the embodiment of the present application can determine the category of a target object and use the tracking threshold corresponding to that category as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of their tracks. In this way, the detection points within the target threshold frame do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
  • the method steps in the embodiments of the present application can be implemented by hardware, and can also be implemented by a processor executing software instructions.
  • Software instructions can be composed of corresponding software modules, which can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in the ASIC.
  • The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented by software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • The computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).

Abstract

Disclosed are a method and apparatus for determining a target object tracking threshold. The method comprises: determining at least one frame of radar point cloud, wherein the at least one frame of radar point cloud is a point data set obtained by measuring a target object, and the at least one frame of radar point cloud comprises a first frame of radar point cloud (1201); determining N tracking thresholds corresponding to the first frame of radar point cloud, the N tracking thresholds comprising a first threshold, and determining K confidence levels according to point cloud data in the first threshold, the K confidence levels being in one-to-one correspondence with K target categories (1203); determining a first target category according to the K confidence levels (1205); and determining, from among the N tracking thresholds and according to the first target category, a target threshold for tracking the target object (1207). By means of the method and apparatus, the category of a target object can be determined, and then the target object can be tracked with the category tracking threshold of the target object, so that interference between tracks of different target objects can be effectively reduced and the false detection rate can be reduced.

Description

一种确定目标对象跟踪门限的方法、装置Method and device for determining target object tracking threshold
本申请要求于2019年12月16日提交中国国家知识产权局、申请号为201911294731.0、申请名称为“一种确定目标对象跟踪门限的方法、装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of a Chinese patent application filed with the State Intellectual Property Office of China, the application number is 201911294731.0, and the application name is "a method and device for determining the tracking threshold of a target object" on December 16, 2019, and the entire content of it is approved The reference is incorporated in this application.
技术领域Technical field
本申请涉及自动驾驶技术领域,具体涉及一种确定目标对象跟踪门限的方法、装置。This application relates to the field of automatic driving technology, and in particular to a method and device for determining a target object tracking threshold.
背景技术Background technique
在车辆的自动驾驶过程中,目标识别(target classification)技术可以为车辆提供周围物体信息,可以车辆的后续行驶决策提供帮助。因此,目标识别对车辆对周围环境的感知具有至关重要的作用。目标识别,也可以称为目标种类,其可以是指将目标物体判定为某一种类的物体,即将一个目标物体从其他物体中区分出来。In the automatic driving process of a vehicle, target classification technology can provide information about the surrounding objects for the vehicle and help the vehicle's subsequent driving decisions. Therefore, target recognition plays a vital role in the vehicle's perception of the surrounding environment. Target recognition can also be called target type, which can refer to determining the target object as a certain type of object, that is, distinguishing a target object from other objects.
一种方案为利用聚类算法对雷达点云上的检测点进行聚类,得到类簇。然后,根据同一类簇中检测点作为一个目标对象,进行分类。该方案精度较低,特别是在雷达点云中检测点较为稀疏(例如,毫米波雷达分辨率较低,其雷达点云中检测点较为稀疏)的情况下,目标物体分类精度较低,存在大量误检。参阅图1A和图1B,利用聚类算法对雷达点云上的检测点进行聚类往往会出现单个物体对应的检测点被聚类为多个类簇。另外,利用聚类算法对雷达点云上的检测点进行聚类可以出现多个物体对应的检测点被聚为一个类簇。One solution is to use a clustering algorithm to cluster the detection points on the radar point cloud to obtain clusters. Then, the detection points in the same cluster are classified as a target object. This solution has low accuracy, especially when the detection points in the radar point cloud are relatively sparse (for example, the resolution of millimeter wave radar is low, and the detection points in the radar point cloud are relatively sparse), the classification accuracy of the target object is low, and there is A large number of false detections. Referring to Figures 1A and 1B, clustering of detection points on a radar point cloud using a clustering algorithm often results in detection points corresponding to a single object being clustered into multiple clusters. In addition, using a clustering algorithm to cluster the detection points on the radar point cloud can show that the detection points corresponding to multiple objects are clustered into a cluster.
一种方案为使用卡尔曼滤波、粒子滤波等跟踪算法,按照固定的门限框框定检测点。并将一个门限框框定的检测点视为一个目标对象,进行分类。可以理解,不同的物体尺寸大小不同,对应的检测点分布的范围不同。参阅图2A和图2B,在该方案中使用统一的跟踪门限框进行框定,往往会出现将不同物体对应的不同检测点框到一个门限框中,使得不同物体的航迹互相干扰,存在大量误检。One solution is to use tracking algorithms such as Kalman filter and particle filter to frame the detection points according to a fixed threshold frame. The detection point framed by a threshold frame is regarded as a target object for classification. It can be understood that different objects have different sizes, and the corresponding detection point distribution ranges are different. Refer to Figure 2A and Figure 2B. In this scheme, a unified tracking threshold frame is used for framing, and different detection points corresponding to different objects are often framed into a threshold frame, which makes the tracks of different objects interfere with each other, and there are a lot of errors. Check.
发明内容Summary of the invention
本申请实施例提供了一种确定目标对象跟踪门限的方法、装置,可以确定目标对象的类别,进而可以提高目标对象的跟踪门限准确度,可有效减少不同目标对象的航迹间干扰,降低误检率。The embodiments of the present application provide a method and device for determining the tracking threshold of a target object, which can determine the type of the target object, thereby improving the accuracy of the tracking threshold of the target object, effectively reducing the interference between tracks of different target objects, and reducing errors. Inspection rate.
第一方面,提供了一种确定目标对象跟踪门限的方法,包括确定至少一帧雷达点云,该至少一帧雷达点云为雷达对目标对象进行测量得到的点数据集合,该至少一帧雷达点云包括第一帧雷达点云;确定第一帧雷达点云对应的N个跟踪门限,该N个跟踪门限包括第一门限,根据第一门限中的点云数据确定K个置信度,该K个置信度与K个目标类别一一对应;至少一帧雷达点云中其他帧雷达点云可以参考第一帧雷达点云,分别得到K个置信度;可至少根据第一帧雷达点云的K个置信度确定第一目标类别;可根据第一目标类别,从N个跟踪门限中确定用于跟踪该目标对象的目标门限。In a first aspect, a method for determining a tracking threshold of a target object is provided, including determining at least one frame of radar point cloud, the at least one frame of radar point cloud is a point data set obtained by radar measuring the target object, and the at least one frame of radar point cloud The point cloud includes the first frame of radar point cloud; N tracking thresholds corresponding to the first frame of radar point cloud are determined, the N tracking thresholds include the first threshold, and K confidence levels are determined according to the point cloud data in the first threshold, the K confidence levels correspond to K target categories one-to-one; at least one frame of radar point cloud in other frames of radar point cloud can refer to the first frame of radar point cloud to obtain K confidence levels; at least according to the first frame of radar point cloud The K confidence levels of, determine the first target category; according to the first target category, the target threshold for tracking the target object can be determined from N tracking thresholds.
也就是说,本申请的方案可以确定目标对象属于不同目标类别的置信度,然后,根据属于不同目标类别的置信度,确定目标对象的类别,即第一目标类别,进而可根据目标对象的类别,确定适合目标对象的跟踪门限,作为用于跟踪目标对象的目标门限,通过对目标对象类别的综合判定可以提升分类的准确度,进而优化目标门限的确定。That is to say, the solution of this application can determine the confidence that the target object belongs to different target categories, and then, according to the confidence of belonging to different target categories, determine the category of the target object, that is, the first target category, and then according to the category of the target object , Determine the tracking threshold suitable for the target object, as the target threshold for tracking the target object, by comprehensively determining the target object category, the accuracy of classification can be improved, and then the determination of the target threshold can be optimized.
在一种可能的实现方式中,第一帧雷达点云的K个置信度包括对应该第一目标类别的第 一置信度,该第一置信度用于表征第一门限中的点云数据属于第一目标类别的准确度。参考第一置信度,K个置信度中的其他置信度用于表征第一门限中的点云数据属于该其他置信度对应的目标类别的准确度。In a possible implementation manner, the K confidence levels of the radar point cloud of the first frame include a first confidence level corresponding to the first target category, and the first confidence level is used to characterize that the point cloud data in the first threshold belongs to The accuracy of the first target category. With reference to the first confidence, the other confidences in the K confidences are used to characterize the accuracy of the point cloud data in the first threshold belonging to the target category corresponding to the other confidences.
也就是说,在该实现方式中,可以确定第一门限中的点云数据属于不同目标类别的准确度,进而可以确定准确度最高的目标类别为目标对象的类别,可以确定出最适合目标对象的跟踪门限。That is to say, in this implementation manner, the accuracy of the point cloud data in the first threshold belonging to different target categories can be determined, and then the target category with the highest accuracy can be determined as the target object category, and the most suitable target object can be determined The tracking threshold.
在一种可能的实现方式中,N个跟踪门限是根据预设的参数信息确定的,用于界定第一帧雷达点云中对应于该目标对象的范围。In a possible implementation manner, the N tracking thresholds are determined according to preset parameter information, and are used to define the range corresponding to the target object in the radar point cloud of the first frame.
也就说,在该实现方式中,可以根据预设的参数信息来设置N个跟踪门限,以便用来在雷达点云中界定目标对象对应的范围,以确定目标对象的类别。In other words, in this implementation manner, N tracking thresholds can be set according to preset parameter information so as to define the range corresponding to the target object in the radar point cloud to determine the type of the target object.
在一种可能的实现方式中,该参数信息包括该参数信息对应的预设目标类别的几何尺寸信息和/或预设目标类别的速度信息。In a possible implementation manner, the parameter information includes geometric size information of the preset target category and/or speed information of the preset target category corresponding to the parameter information.
也就是说,在该实现方式中,可以预设目标类别的几何尺寸信息和/或预设目标类别的速度信息,来设置跟踪门限,以便在雷达点云中界定目标对应的范围,以确定目标对象的类别以及适合标对象的目标门限。That is to say, in this implementation manner, the geometric size information of the target category and/or the speed information of the preset target category can be preset to set the tracking threshold, so as to define the range of the target in the radar point cloud to determine the target The category of the object and the target threshold suitable for the target.
在一种可能的实现方式中,第一目标类别是根据K个总置信度确定的,该K个总置信度包括第一总置信度,该第一总置信度为N个第一置信度的总和,N个第一置信度一一对应于N个跟踪门限。In a possible implementation manner, the first target category is determined based on K total confidence levels, the K total confidence levels include a first total confidence level, and the first total confidence level is the number of N first confidence levels. Sum, N first confidence levels correspond to N tracking thresholds one by one.
也就是说,在该实现方式中,可以确定目标对象在N个跟踪门限中每一个跟踪门限下的K个置信度,并将N个跟踪门限中每一个跟踪门限下对应同一目标类别的置信度相加,得到该同一目标类别的总置信度。参考前述方式,可以到的不同目标类别的总置信度,进而可以确定第一目标类别,提高了确定目标对象的类别的准确率。That is to say, in this implementation manner, it is possible to determine the K confidence levels of the target object under each of the N tracking thresholds, and calculate the confidence levels of the same target category under each of the N tracking thresholds. Add them together to get the total confidence of the same target category. With reference to the foregoing method, the total confidence of the different target categories can be obtained, and then the first target category can be determined, which improves the accuracy of determining the target object category.
在一种可能的实现方式中,第一目标类别是根据K个多帧总置信度确定的,该K个多帧总置信度包括第一多帧总置信度,该第一多帧总置信度为至少一个第一总置信度的总和,该至少一个第一总置信度一一对应于该至少一帧雷达点云。In a possible implementation manner, the first target category is determined according to K multi-frame total confidence levels, the K multi-frame total confidence levels include the first multi-frame total confidence level, and the first multi-frame total confidence level Is the sum of at least one first total confidence level, and the at least one first total confidence level corresponds to the at least one frame of radar point cloud one by one.
也就是说,在该实现方式中,可以根据多帧雷达点云中K个总置信度,来确定第一目标类别,即可以利用多帧雷达点云的信息来确定第一目标类别,提高了确定目标对象的类别的准确率。That is to say, in this implementation, the first target category can be determined according to the K total confidences in the multi-frame radar point cloud, that is, the first target category can be determined using the information of the multi-frame radar point cloud, which improves Determine the accuracy of the target object category.
在一种可能的实现方式中,K个目标类别包括行人、汽车、自行车、电动车中的两项或多项。In a possible implementation manner, the K target categories include two or more of pedestrians, automobiles, bicycles, and electric vehicles.
也就是说,在该实现方式中,可以区分行人、汽车、自行车、电动车,并使用不同的跟踪门限来分别跟踪行人、汽车、自行车、电动车。That is to say, in this implementation, pedestrians, cars, bicycles, and electric vehicles can be distinguished, and different tracking thresholds can be used to track pedestrians, cars, bicycles, and electric vehicles respectively.
在一种可能的实现方式中,该至少一帧雷达点云为毫米波雷达点云。In a possible implementation manner, the at least one frame of radar point cloud is a millimeter wave radar point cloud.
毫米波雷达的分辨率较低的,在该实现方式中,可以通过毫米波雷达测量的目标对象的点数据集合,确定目标对象的类别以及用于跟踪目标对象的门限,可以减少通过毫米波雷达跟踪目标对象时不同航迹间的干扰,减少误检率。The resolution of millimeter-wave radar is low. In this implementation mode, the point data collection of the target object measured by the millimeter-wave radar can be used to determine the target object category and the threshold for tracking the target object, which can reduce the use of millimeter-wave radar. Interference between different tracks when tracking the target object, reducing the false detection rate.
第二方面,本申请实施例提供了一种确定目标对象跟踪门限的装置,该装置包括处理器和收发器;其中,该收发器用于确定至少一帧雷达点云,至少一帧雷达点云为对目标对象进行测量得到的点数据集合,至少一帧雷达点云包括第一帧雷达点云;该处理器用于确定对应于第一帧雷达点云的N个跟踪门限,该N个跟踪门限包括第一门限,根据第一门限中的点云数据确定K个置信度,该K个置信度与K个目标类别一一对应;该处理器还用于根据该K个 置信度确定第一目标类别;该处理器还用于根据第一目标类别,从该N个跟踪门限中确定用于跟踪所述目标对象的目标门限。In a second aspect, an embodiment of the present application provides a device for determining a tracking threshold of a target object. The device includes a processor and a transceiver; wherein the transceiver is used to determine at least one frame of radar point cloud, and at least one frame of radar point cloud is The point data set obtained by measuring the target object, at least one frame of radar point cloud includes the first frame of radar point cloud; the processor is used to determine N tracking thresholds corresponding to the first frame of radar point cloud, and the N tracking thresholds include The first threshold is to determine K confidence levels according to the point cloud data in the first threshold, and the K confidence levels correspond to the K target categories one-to-one; the processor is further configured to determine the first target category according to the K confidence levels The processor is also used to determine the target threshold for tracking the target object from the N tracking thresholds according to the first target category.
在一种可能的实现方式中,该装置例如可以为雷达探测装置,又例如为独立于雷达装置的处理装置。In a possible implementation manner, the device may be, for example, a radar detection device, or for example, a processing device independent of the radar device.
在一种可能的实现方式中,第一帧雷达点云的K个置信度包括对应该第一目标类别的第一置信度,该第一置信度用于表征第一门限中的点云数据属于第一目标类别的准确度。参考第一置信度,K个置信度中的其他置信度用于表征第一门限中的点云数据属于该其他置信度对应的目标类别的准确度。In a possible implementation manner, the K confidence levels of the radar point cloud of the first frame include a first confidence level corresponding to the first target category, and the first confidence level is used to characterize that the point cloud data in the first threshold belongs to The accuracy of the first target category. With reference to the first confidence, the other confidences in the K confidences are used to characterize the accuracy of the point cloud data in the first threshold belonging to the target category corresponding to the other confidences.
在一种可能的实现方式中,N个跟踪门限是根据预设的参数信息确定的,用于界定第一帧雷达点云中对应于该目标对象的范围。In a possible implementation manner, the N tracking thresholds are determined according to preset parameter information, and are used to define the range corresponding to the target object in the radar point cloud of the first frame.
在一种可能的实现方式中,该参数信息包括该参数信息对应的预设目标类别的几何尺寸信息和/或预设目标类别的速度信息。In a possible implementation manner, the parameter information includes geometric size information of the preset target category and/or speed information of the preset target category corresponding to the parameter information.
在一种可能的实现方式中,第一目标类别是根据K个总置信度确定的,该K个总置信度包括第一总置信度,该第一总置信度为N个第一置信度的总和,N个第一置信度一一对应于N个跟踪门限。In a possible implementation manner, the first target category is determined based on K total confidence levels, the K total confidence levels include a first total confidence level, and the first total confidence level is the number of N first confidence levels. Sum, N first confidence levels correspond to N tracking thresholds one by one.
在一种可能的实现方式中,第一目标类别是根据K个多帧总置信度确定的,该K个多帧总置信度包括第一多帧总置信度,该第一多帧总置信度为至少一个第一总置信度的总和,该至少一个第一总置信度一一对应于该至少一帧雷达点云。In a possible implementation manner, the first target category is determined according to K multi-frame total confidence levels, the K multi-frame total confidence levels include the first multi-frame total confidence level, and the first multi-frame total confidence level Is the sum of at least one first total confidence level, and the at least one first total confidence level corresponds to the at least one frame of radar point cloud one by one.
在一种可能的实现方式中,K个目标类别包括行人、汽车、自行车、电动车中的两项或多项。In a possible implementation manner, the K target categories include two or more of pedestrians, automobiles, bicycles, and electric vehicles.
可以理解地,第二方面提供的确定目标对象跟踪门限的装置用于执行第一方面所提供的对应的方法,因此,其所能达到的有益效果可参考第一方面所提供的对应的方法中的有益效果,此处不再赘述。Understandably, the device for determining the target object tracking threshold provided in the second aspect is used to implement the corresponding method provided in the first aspect. Therefore, the beneficial effects that can be achieved can refer to the corresponding method provided in the first aspect The beneficial effects of, will not be repeated here.
第三方面,本申请实施例提供了一种确定目标对象跟踪门限的装置,该装置包括处理单元和收发单元;其中,该收发单元用于确定至少一帧雷达点云,至少一帧雷达点云为对目标对象进行测量得到的点数据集合,至少一帧雷达点云包括第一帧雷达点云;该处理单元用于确定对应于第一帧雷达点云的N个跟踪门限,该N个跟踪门限包括第一门限,根据第一门限中的点云数据确定K个置信度,该K个置信度与K个目标类别一一对应;该处理单元还用于根据该K个置信度确定第一目标类别;该处理单元还用于根据第一目标类别,从该N个跟踪门限中确定用于跟踪所述目标对象的目标门限。In a third aspect, an embodiment of the present application provides a device for determining a target object tracking threshold. The device includes a processing unit and a transceiver unit; wherein the transceiver unit is used to determine at least one frame of radar point cloud, and at least one frame of radar point cloud For the point data set obtained by measuring the target object, at least one frame of radar point cloud includes the first frame of radar point cloud; the processing unit is used to determine N tracking thresholds corresponding to the first frame of radar point cloud, and the N tracking The threshold includes a first threshold, and K confidences are determined according to the point cloud data in the first threshold, and the K confidences correspond to K target categories one-to-one; the processing unit is also used to determine the first confidence based on the K confidences. Target category; the processing unit is further configured to determine a target threshold for tracking the target object from the N tracking thresholds according to the first target category.
可以理解地,第三方面提供的确定目标对象跟踪门限的装置用于执行第一方面所提供的对应方法,因此,其所能达到的有益效果可参考第一方面所提供的对应的方法中的有益效果,此处不再赘述。Understandably, the device for determining the tracking threshold of a target object provided in the third aspect is used to execute the corresponding method provided in the first aspect. Therefore, the beneficial effects that it can achieve can refer to the corresponding method provided in the first aspect. The beneficial effects will not be repeated here.
第四方面,本申请实施例提供了一种计算机存储介质,所述计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行第一方面所述的方法。In a fourth aspect, an embodiment of the present application provides a computer storage medium, the computer storage medium includes computer instructions, when the computer instructions run on an electronic device, the electronic device is caused to execute the method described in the first aspect .
可以理解地,第四方面提供的计算机存储介质用于执行第一方面所提供的对应的方法,因此,其所能达到的有益效果可参考第一方面所提供的对应的方法中的有益效果,此处不再赘述。Understandably, the computer storage medium provided in the fourth aspect is used to execute the corresponding method provided in the first aspect. Therefore, the beneficial effects that can be achieved can refer to the beneficial effects in the corresponding method provided in the first aspect. I won't repeat them here.
第五方面,本申请实施例提供了一种计算机程序产品,所述计算机程序产品包含的程序代码被电子设备中的处理器执行时,实现第一方面所述的方法。In the fifth aspect, the embodiments of the present application provide a computer program product, and the program code included in the computer program product implements the method described in the first aspect when the program code included in the computer program product is executed by a processor in an electronic device.
可以理解地,第五方面提供的计算机程序产品用于执行第一方面所提供的对应的方法, 因此,其所能达到的有益效果可参考第一方面所提供的对应的方法中的有益效果,此处不再赘述。It is understandable that the computer program product provided in the fifth aspect is used to execute the corresponding method provided in the first aspect. Therefore, the beneficial effects it can achieve can refer to the beneficial effects in the corresponding method provided in the first aspect. I won't repeat them here.
第六方面,本申请实施例提供了一种确定目标对象跟踪门限的系统,该系统由探测装置和处理装置组成;其中,该探测装置可以用于确定至少一帧雷达点云,该至少一帧雷达点云为该探测装置对目标对象进行测量得到的点数据集合,该至少一帧雷达点云包括第一帧雷达点云;该处理装置用于确定对应于第一帧雷达点云的N个跟踪门限,该N个跟踪门限包括第一门限,根据第一门限中的点云数据确定K个置信度,该K个置信度与K个目标类别一一对应;该处理装置还用于根据该K个置信度确定第一目标类别;该处理装置还用于根据第一目标类别,从该N个跟踪门限中确定用于跟踪所述目标对象的目标门限。In a sixth aspect, an embodiment of the present application provides a system for determining a target tracking threshold, which is composed of a detection device and a processing device; wherein the detection device can be used to determine at least one frame of radar point cloud, and the at least one frame The radar point cloud is a collection of point data obtained by the detection device measuring the target object, the at least one frame of radar point cloud includes the first frame of radar point cloud; the processing device is used to determine N corresponding to the first frame of radar point cloud Tracking thresholds, the N tracking thresholds include a first threshold, and K confidence levels are determined according to the point cloud data in the first threshold, and the K confidence levels correspond to K target categories one-to-one; the processing device is also used to K confidence levels determine the first target category; the processing device is further configured to determine a target threshold for tracking the target object from the N tracking thresholds according to the first target category.
在一种可能的实现方式中,该探测装置可以为雷达,例如车载雷达。In a possible implementation manner, the detection device may be a radar, such as a vehicle-mounted radar.
在一种可能的实现方式中,该确定目标对象跟踪门限的系统可以为智能汽车。In a possible implementation manner, the system for determining the tracking threshold of the target object may be a smart car.
第七方面,本申请实施例提供了一种芯片系统,该芯片系统包括处理器,该处理器用于执行指令以使得安装有该芯片系统的装置执行第一方面所提供的方法。In a seventh aspect, an embodiment of the present application provides a chip system, the chip system includes a processor, and the processor is configured to execute instructions so that a device installed with the chip system executes the method provided in the first aspect.
本申请实施例提供的方案,可以确定目标对象的类别,并将该目标对象的类别对应的跟踪门限用于跟踪该目标对象的目标门限,使得目标门限框定的检测点不再参与其他目标对象的聚类或其他目标对象航迹的建立过程,实现了目标门限框内的检测点不影响其他航迹,消除或减少了不同航迹间的干扰,降低了误检率。The solution provided by the embodiment of the present application can determine the category of the target object, and use the tracking threshold corresponding to the category of the target object to track the target threshold of the target object, so that the detection point framed by the target threshold no longer participates in the detection of other target objects. Clustering or the process of establishing the track of other target objects realizes that the detection point in the target threshold frame does not affect other tracks, eliminates or reduces the interference between different tracks, and reduces the false detection rate.
附图说明Description of the drawings
图1A示出了一种目标识别的应用场景;Figure 1A shows an application scenario of target recognition;
图1B示出了对图1A所示的场景中目标对象对应的点云数据的聚类结果;FIG. 1B shows a clustering result of the point cloud data corresponding to the target object in the scene shown in FIG. 1A;
图2A示出了另一种目标识别的应用场景;Figure 2A shows another application scenario of target recognition;
图2B示出了使用固定跟踪门限对图2A所示的场景中目标对象对应的点云数据进行框定的结果;FIG. 2B shows the result of using a fixed tracking threshold to frame the point cloud data corresponding to the target object in the scene shown in FIG. 2A;
图3为本申请实施例的一种适用场景示意图;FIG. 3 is a schematic diagram of an application scenario of an embodiment of the application;
图4为本申请实施例提供的一种车辆的硬件结构示意图;4 is a schematic diagram of the hardware structure of a vehicle provided by an embodiment of the application;
图5为本申请实施例提供的一种确定目标对象的估计类别的示意图;FIG. 5 is a schematic diagram of determining an estimated category of a target object according to an embodiment of the application;
图6为本申请实施例提供的一种确定单帧置信度和的示意图;FIG. 6 is a schematic diagram of determining the confidence sum of a single frame according to an embodiment of the application;
图7为本申请实施例提供的一种确定目标对象的估计类别的示意图;FIG. 7 is a schematic diagram of determining an estimated category of a target object according to an embodiment of the application;
图8为本申请实施例提供的一种调整目标对象的估计类别的流程图;FIG. 8 is a flowchart of adjusting the estimated category of a target object provided by an embodiment of the application;
图9A为本申请实施例提供的一种实际验证实验场景图;FIG. 9A is a scene diagram of an actual verification experiment provided by an embodiment of the application;
图9B为本申请实施例提供的一种实际验证结果示意图;FIG. 9B is a schematic diagram of an actual verification result provided by an embodiment of the application;
图10A为本申请实施例提供的一种实际验证实验场景图;FIG. 10A is a scene diagram of an actual verification experiment provided by an embodiment of the application; FIG.
图10B为本申请实施例提供的一种实际验证结果示意图;FIG. 10B is a schematic diagram of an actual verification result provided by an embodiment of the application;
图11A为本申请实施例提供的一种实际验证实验场景图;FIG. 11A is a scene diagram of an actual verification experiment provided by an embodiment of the application;
图11B为本申请实施例提供的一种实际验证结果示意图;FIG. 11B is a schematic diagram of an actual verification result provided by an embodiment of the application;
图12为本申请实施例提供的一种确定目标对象跟踪门限的流程图;FIG. 12 is a flowchart of determining a target object tracking threshold provided by an embodiment of the application;
图13为本申请实施例提供的一种确定目标对象跟踪装置的结构示意图;FIG. 13 is a schematic structural diagram of a tracking device for determining a target object provided by an embodiment of the application;
图14为本申请实施例提供的一种确定目标对象跟踪装置的示意性框图。FIG. 14 is a schematic block diagram of a device for determining a target object tracking provided by an embodiment of the application.
具体实施方式Detailed ways
The technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of this application.
In the descriptions of this specification, "one embodiment", "some embodiments", and the like mean that a specific feature, structure, or characteristic described with reference to the embodiment is included in one or more embodiments of this application. Therefore, phrases such as "in one embodiment", "in some embodiments", "in some other embodiments", and "in still other embodiments" appearing in different places in this specification do not necessarily refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized.
In the descriptions of this specification, unless otherwise stated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, A and/or B may indicate that only A exists, both A and B exist, or only B exists. In addition, in the descriptions of the embodiments of this application, "a plurality of" means two or more.
In the descriptions of this specification, the terms "first" and "second" are used only for description purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating a quantity of the indicated technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more of the features. The terms "include", "comprise", "have", and their variants all mean "include but are not limited to", unless otherwise specifically emphasized.
FIG. 3 shows an autonomous driving scenario. The vehicle 100 may be an automobile, or may be a motor vehicle in another form. For example, the vehicle may be a car, a bus, a truck, a motorcycle, an agricultural vehicle, a parade float, an amusement-park ride vehicle, or the like.
The vehicle 100 may be in an autonomous driving state, that is, the vehicle 100 drives fully autonomously, requiring no control or only minimal control by a driver. While driving, the vehicle 100 may track nearby objects, such as the vehicle 210 and the pedestrian 220, to assist subsequent driving decisions of the vehicle 100. For example, the vehicle 100 may exchange information with the control center 300 to perform autonomous driving with the assistance of the control center 300.
FIG. 4 shows the hardware structure of the vehicle 100.
Referring to FIG. 4, the vehicle 100 may include a computing system 102, an interaction system 104, a propulsion system 106, a sensor system 108, a control system 110, and a power supply 112. The computing system 102 may include a processor 1021, a memory 1022, and the like. The interaction system 104 may include a wireless communication system 1041, a display screen 1042, a microphone 1043, a speaker 1044, and the like. The propulsion system 106 may include a power component 1061, an energy component 1062, a transmission component 1063, an actuation component 1064, and the like. The sensor system 108 may include a positioning component 1081, a camera 1082, an inertial measurement unit 1083, a radar 1084, and the like. The control system may include a steering component 1101, a throttle 1102, a brake component 1103, and the like.
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the vehicle 100. In some other embodiments of this application, the vehicle 100 may include more or fewer components than shown, or some components may be combined, or some components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The components of the vehicle 100 may be connected together through a system bus (for example, a controller area network (CAN) bus), a network, and/or another connection mechanism, so that the components can work in an interconnected manner.
The processor 1021 may include one or more processing units. For example, the processor 1021 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices, or may be integrated into one or more processors.
The memory 1022 may be used to store computer-executable program code, where the executable program code includes instructions. The memory 1022 may include a program storage area and a data storage area. The program storage area may store information such as a classifier, and may further store an operating system, an application required by at least one function (for example, a sound playback function or an image playback function), and the like. In addition, the memory 1022 may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The processor 1021 may run the instructions stored in the memory 1022 to perform the various vehicle functions and data processing described below.
For example, the computing system 102 may be implemented as an in-vehicle intelligent system or an autonomous driving system, and may implement autonomous driving of the vehicle 100 (while the vehicle 100 is driving, the vehicle 100 drives fully autonomously, requiring no control or only minimal control by the driver). It may also implement semi-autonomous driving of the vehicle 100 (while the vehicle is driving, the vehicle does not drive fully autonomously and requires moderate control by the driver). The driver may also manually drive the vehicle 100 (the driver exercises a high degree of control over the vehicle 100).
In some embodiments, the computing system 102 may include a vehicle control unit. As one of the key technologies of battery electric vehicles, the vehicle control unit is the core control component of the vehicle. The vehicle control unit is configured to coordinate numerous tasks while the vehicle is running. Its main tasks include: communicating with subsystems; collecting the driver's operation signals to identify the driver's intention; and monitoring the driving state of the vehicle, detecting and identifying vehicle faults, and storing fault information, to ensure safe driving of the vehicle. The vehicle control unit further includes a plurality of independent motor control units, and information is exchanged between the vehicle control unit and the motor control units over a bus. The vehicle control unit is the control hub of the vehicle, and may exchange information with signal sensors, an active steering controller, and an electric drive controller through CAN bus communication, to implement signal collection, control strategy decision-making, and drive signal output.
The vehicle control unit collects and processes signals from sensors (such as accelerator pedal and brake pedal information), and is responsible for the power-on/power-off logic control of its own controller and of the motor control units. It is also responsible for torque calculation: calculating the driver's demanded torque, distributing torque between mechanical braking and electric braking, allocating drive/brake torque between the front and rear axles, and distributing torque among the four wheel motors. It is also responsible for energy optimization management: charging control, power distribution based on motor operating efficiency, and braking energy recovery control. It is also responsible for vehicle dynamics control: vehicle state recognition, yaw control, anti-slip control, anti-lock braking control, anti-roll control, and active steering control. It is also responsible for monitoring and diagnosis functions: bus node transceiving monitoring, sensor failure diagnosis, torque monitoring, CPU monitoring and diagnosis, fault management, and safety measures upon faults (such as vehicle deceleration and speed limiting).
The vehicle control unit may exchange data with other sub-control units (such as the motor controllers, the power management system, and the instrument panel) through CAN network communication. A motor control unit receives commands distributed by the vehicle control unit over the CAN bus, converts the chemical energy of the battery pack into the mechanical energy of the motor, and then transmits the power to the wheels through the transmission system to provide driving power for the vehicle.
In some embodiments, the computing system 102 may further include a body control module. The body control module manages modules in the field of vehicle body electronics and supports multiple functions. A typical body control module consists of a microprocessor and is used to control functions classified as body electronics (power windows, wipers, side mirrors, and so on). In addition, ports are provided on the body control module for communication with different body control modules, the instrument panel, sensors, actuators, and the like.
In some embodiments, the computing system 102 may include an intelligent driving controller configured to process data from the various sensors.
The wireless communication system 1041 may include one or more antennas, a modem, a baseband processor, and the like, and may communicate with the control center 300, other vehicles, and other communication entities. Generally, the wireless communication system may be configured to communicate according to one or more communication technologies, for example, mobile communication technologies such as 2G/3G/4G/5G, and wireless communication technologies such as wireless local area networks (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology, as well as other communication technologies, which are not listed one by one here.
The display screen 1042 is configured to display images, videos, and the like. The display screen 1042 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
In some embodiments, the display panel may be covered with a touch panel. When the touch panel detects a touch operation on or near it, the touch operation may be transferred to the processor 1021 to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 1042. In some other embodiments, the touch panel may be located at a position different from that of the display screen 1042.
The microphone 1043, also referred to as a "mic" or "voice tube", is configured to convert a sound signal into an electrical signal. When a user wants to control the vehicle 100 by voice, the user may speak close to the microphone 1043 to input a voice command into the microphone 1043. The vehicle 100 may be provided with at least one microphone 1043. In some embodiments, the vehicle 100 may be provided with two microphones 1043, which may implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the vehicle 100 may alternatively be provided with three, four, or more microphones 1043, to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
The speaker 1044, also referred to as a "loudspeaker", is configured to convert an audio electrical signal into a sound signal. The vehicle 100 may play music or prompt information through the speaker 1044.
The power component 1061 may be an engine, and may be any one or a combination of a gasoline engine, an electric motor of an electric vehicle, a diesel engine, a hybrid engine, or the like, or may be an engine in another form.
The energy component 1062 may be a source of energy that wholly or partially powers the power component 1061. That is, the power component 1061 may be configured to convert the energy provided by the energy component 1062 into mechanical energy. Energy sources that the energy component 1062 can provide include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy component 1062 may further include any combination of a fuel tank, a battery, a capacitor, and/or a flywheel. In some embodiments, the energy component 1062 may also provide energy for other systems of the vehicle 100.
The transmission component 1063 may include a gearbox, a clutch, a differential, a drive shaft, and other components. The transmission component 1063 may be configured to transmit mechanical energy from the power component 1061 to the actuation component 1064.
The actuation component 1064 may include wheels, tires, and the like. The wheels may be configured in various styles, including a unicycle style, a two-wheeled/motorcycle style, a tricycle style, or a four-wheeled car/truck style. Tires may be attached to the wheels, and the wheels may be attached to the transmission component 1063 and may rotate in response to the mechanical power transmitted by the transmission component 1063, to drive the vehicle 100 to move.
The positioning component 1081 may be configured to estimate the position of the vehicle 100. The positioning component 1081 may include a transceiver configured to estimate the position of the vehicle 100 relative to the earth based on satellite positioning data. In some embodiments, the computing system 102 may be configured to use the positioning component 1081 in combination with map data to estimate the road on which the vehicle 100 may be traveling and the position of the vehicle 100 on the road. Specifically, the positioning component 1081 may include a global positioning system (GPS) module, a BeiDou navigation satellite system (BDS) module, a Galileo satellite navigation system module, or the like.
The camera 1082 may include an exterior camera configured to capture the environment outside the vehicle 100, and may also include an in-vehicle camera configured to capture the environment inside the vehicle 100. The camera 1082 may be a camera that detects visible light, or a camera that detects light from other parts of the spectrum (such as infrared or ultraviolet light). The camera 1082 is used to capture two-dimensional images, and may also be used to capture depth images.
The inertial measurement unit (IMU) 1083 is configured as any combination of sensors that sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In some embodiments, the inertial measurement unit 1083 may include one or more accelerometers and gyroscopes.
The radar 1084 may include a sensor configured to use radio waves or sound waves to sense or detect objects in the environment in which the vehicle 100 is located. Specifically, the radar 1084 may be a lidar, a millimeter-wave radar, an ultrasonic radar, or the like. The radar 1084 may include a waveform generator, a transmit antenna, a receive antenna, and a signal processor. In each scan, the waveform generator may generate a waveform signal and transmit it through the transmit antenna. After being reflected by objects in the environment in which the vehicle 100 is located, the waveform signal can be received by the receive antenna. By comparing the transmitted signal with the received signal, raw detection data can be obtained. In an example, the signal processor of the radar 1084 may perform constant false-alarm rate (CFAR) detection, peak grouping, and direction-of-arrival (DOA) estimation on the raw detection data to obtain detection points. The detection points obtained by the radar 1084 in one scan form one frame of radar point cloud. The detection points obtained by scanning of the radar 1084 may also be referred to as a point data set. When the radar 1084 is a millimeter-wave radar, its resolution is relatively low and the detection points on one frame of millimeter-wave radar point cloud are relatively sparse; therefore, the radar point cloud of a millimeter-wave radar may be referred to as a sparse point cloud. In an example, the radar 1084 may transfer the raw detection data to the computing system 102, and the computing system 102 determines the radar point cloud based on the raw detection data. In an example, the radar 1084 may send the raw detection data to the control center 300 through the wireless communication system 1041, and the control center determines the radar point cloud based on the raw detection data.
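As an illustration of the CFAR detection step mentioned above, the following is a minimal sketch of one-dimensional cell-averaging CFAR in Python. It is not taken from this application; the window sizes, the scaling factor, and the use of NumPy are assumptions made purely for the example.

```python
import numpy as np

def ca_cfar(power, num_train=8, num_guard=2, scale=4.0):
    """Minimal 1-D cell-averaging CFAR sketch (assumed parameters).

    power: array of received power per range cell.
    Returns the indices of cells whose power exceeds the adaptive threshold.
    """
    detections = []
    half = num_train + num_guard
    for cut in range(half, len(power) - half):
        # Training cells on both sides of the cell under test, excluding guard cells.
        left = power[cut - half:cut - num_guard]
        right = power[cut + num_guard + 1:cut + half + 1]
        noise_level = np.mean(np.concatenate((left, right)))
        if power[cut] > scale * noise_level:
            detections.append(cut)
    return detections

# Example: a noisy power profile with two strong reflections.
rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 256)
profile[60] += 30.0
profile[180] += 25.0
print(ca_cfar(profile))  # expected to report cells near 60 and 180
```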
A detection point on the radar point cloud corresponds to a reflection point that reflects the received signal. Usually one object corresponds to a plurality of reflection points, that is, one object may correspond to a plurality of detection points on the radar point cloud. The signal processor may obtain information such as the position and velocity of the reflection point corresponding to a detection point based on information such as the time difference between the received signal and the transmitted signal corresponding to the detection point and the Doppler frequency shift. In other words, a detection point on the radar point cloud carries information such as position and velocity.
In some embodiments, the signal processor of the radar 1084 may perform the method provided in the embodiments of this application.
In some embodiments, if the radar point cloud is determined by the radar 1084, the radar 1084 may transfer the radar point cloud to the computing system 102, so that the computing system 102 can perform the method provided in the embodiments of this application.
In some embodiments, if the radar point cloud is determined by the radar 1084, the radar 1084 may send the radar point cloud to the control center 300 through the wireless communication system 1041, so that the control center performs the method provided in the embodiments of this application and feeds back the processing result to the vehicle 100.
The steering component 1101 may be a component configured to adjust the heading of the vehicle 100 in response to a driver operation or a computer instruction.
The throttle 1102 may be a component configured to control the operating speed and acceleration of the power component 1061 and thereby control the speed and acceleration of the vehicle 100.
The brake component 1103 may be a component configured to reduce the speed of the vehicle 100. For example, the brake component 1103 may use friction to slow the rotation of the wheels in the actuation component 1064.
The power supply 112 may be configured to provide electric power to some or all of the components of the vehicle 100. In some embodiments, the power supply 112 may include a rechargeable lithium-ion battery or lead-acid battery. In some embodiments, the power supply 112 may include one or more battery packs. In some embodiments, the power supply 112 and the energy component 1062 may be implemented together; the chemical energy provided by the power supply 112 can be converted into the mechanical energy of the motor by the power component 1061 and transmitted to the actuation component 1064 through the transmission component 1063, to move the vehicle 100.
The method for determining a target object tracking threshold provided in the embodiments of this application may be applied in scenarios such as autonomous driving, automatic parking, or automatic cruise of the vehicle 100. In the foregoing scenarios, the method can determine the target category to which a target object detected by the on-board radar of the vehicle 100 belongs, and track the target object using the tracking gate threshold corresponding to that target category. This reduces the interference of the detection points corresponding to that target object with the tracking of other target objects, thereby eliminating or reducing the interference between the tracks of multiple target objects when the vehicle 100 tracks them, and reducing the false detection rate.
In some embodiments, the method may be implemented by a detection apparatus, for example, a radar apparatus. The radar apparatus may be a vehicle-mounted radar apparatus, for example, the radar 1084 shown in FIG. 4. The method may also be implemented by a processing apparatus integrated in the detection apparatus, for example, the signal processor in the radar 1084 shown in FIG. 4. In an example, the method may also be implemented by a processing apparatus independent of the detection apparatus (for example, the control center 300 or the computing system 102), with the processing result fed back to the detection apparatus.
A target category, which may be referred to as a category for short, may be a preset object category; for example, multiple target categories such as pedestrian, car, bicycle, and electric bicycle may be set.
A tracking threshold, also referred to as a tracking gate, is a restricted range set according to parameter information of a target category. The tracking threshold of a target category may be set according to the size information and/or speed information of the target category. The size in the size information may be an upper size limit. The speed in the speed information may be an upper speed limit or a lower speed limit.
Take the pedestrian target category as an example. It can be understood that the size of a pedestrian is generally within (1.5-2) m * (2.5-3) m, and the walking speed is generally within 4 m/s. The tracking threshold corresponding to the pedestrian category may therefore be set to a size of (1.5-2) m * (2.5-3) m and a speed of 4 m/s, where the speed of 4 m/s is an upper speed limit. In a specific example, the tracking threshold corresponding to the pedestrian category may be set to a size of 1.5 m * 2.5 m and a speed of 4 m/s. In another specific example, it may be set to a size of 2 m * 3 m and a speed of 4 m/s. In another specific example, it may be set to a size of 1.8 m * 2.6 m and a speed of 4 m/s. The foregoing is merely an example and does not constitute a limitation; in a specific implementation, the tracking threshold corresponding to the pedestrian category may be set based on experience or experiments.
Take the car target category as an example. It can be understood that the size of a car is generally within (3-5) m * (5-7) m, and the driving speed is generally above 10 m/s. The tracking threshold corresponding to the car category may therefore be set to a size of (3-5) m * (5-7) m and a speed of 10 m/s, where the speed of 10 m/s is a lower speed limit. In a specific example, the tracking threshold corresponding to the car category may be set to a size of 3 m * 5 m and a speed of 10 m/s. In another specific example, it may be set to a size of 4 m * 6 m and a speed of 10 m/s. In another specific example, it may be set to a size of 5 m * 7 m and a speed of 10 m/s. The foregoing is merely an example and does not constitute a limitation; in a specific implementation, the tracking threshold corresponding to the car category may be set based on experience or experiments.
Take the bicycle target category as an example. It can be understood that the size of a rider together with a bicycle is generally within 3 m * 4 m, and the speed is generally within 7 m/s. The tracking threshold corresponding to the bicycle category may therefore be set to a size of 3 m * 4 m and a speed of 7 m/s, where the speed of 7 m/s is an upper speed limit. The foregoing is merely an example and does not constitute a limitation; in a specific implementation, the tracking threshold corresponding to the bicycle category may be set based on experience or experiments.
Take the electric bicycle target category as an example. Generally, the size of an electric bicycle (with its rider) is similar to that of a bicycle with a rider, and the speed is also similar. In one example, the tracking threshold corresponding to the electric bicycle category is the same as that corresponding to the bicycle category. In another example, a tracking threshold may be set independently for electric bicycles, for example, a size of 3 m * 4 m and a speed of 8 m/s, where the speed of 8 m/s is an upper speed limit. The foregoing is merely an example and does not constitute a limitation; in a specific implementation, the tracking threshold corresponding to the electric bicycle category may be set based on experience or experiments.
In the embodiments of this application, it may be assumed that K target categories and N tracking thresholds are set, where K and N are both positive integers greater than 1 and K ≥ N. Each target category corresponds to one tracking threshold; two or more target categories may share one tracking threshold, and different tracking thresholds correspond to different target categories.
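As a concrete illustration of the mapping between target categories and tracking thresholds described above, the following Python sketch encodes the example pedestrian, car, bicycle, and electric bicycle thresholds as a small lookup table. The `TrackingThreshold` data structure, its field names, and the specific numbers picked from the ranges above are assumptions made for the example, not a definition given by this application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TrackingThreshold:
    width_m: float                  # upper size limit, first dimension
    length_m: float                 # upper size limit, second dimension
    max_speed_mps: Optional[float]  # upper speed limit, if any
    min_speed_mps: Optional[float]  # lower speed limit, if any

# K = 4 target categories mapped onto N = 3 tracking thresholds
# (bicycle and electric bicycle share one threshold in this example).
THRESHOLDS = {
    "pedestrian": TrackingThreshold(1.5, 2.5, max_speed_mps=4.0, min_speed_mps=None),
    "car":        TrackingThreshold(3.0, 5.0, max_speed_mps=None, min_speed_mps=10.0),
    "bicycle":    TrackingThreshold(3.0, 4.0, max_speed_mps=7.0, min_speed_mps=None),
}
THRESHOLDS["electric_bicycle"] = THRESHOLDS["bicycle"]
```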
In the embodiments of this application, a machine learning algorithm may be used, with detection points (also referred to as point data sets) in multiple frames of radar point cloud that are pre-labeled with target categories as training samples, to train a classifier (also referred to as a recognition model) for recognizing the K target categories. For example, the machine learning algorithm may be the XGBoost algorithm; during training of the classifier, 70% of the training samples may be taken as the training set and 30% as the test set.
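The following is a minimal sketch of how such a classifier could be trained with the XGBoost library and a 70%/30% split. It assumes the labeled detection points have already been converted into fixed-length feature vectors (for example, per-cluster statistics of position, velocity, and point count); the feature files, all hyperparameter values, and the use of scikit-learn for the split are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# X: one feature vector per labeled cluster of detection points (hypothetical file).
# y: integer labels 0..K-1, one per cluster (hypothetical file).
X = np.load("cluster_features.npy")
y = np.load("cluster_labels.npy")

# 70% training set, 30% test set, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    objective="multi:softprob")
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
# predict_proba returns one confidence per target category for each sample.
confidences = clf.predict_proba(X_test[:1])
```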
Next, taking the autonomous driving scenario as an example, the method for determining a target object tracking threshold provided in the embodiments of this application is described with examples.
In the autonomous driving scenario, the tracking process for each target object may be divided into a track establishment phase and a track tracking phase. In the track establishment phase of a target object, the tracking threshold used to track the target object can be determined. The tracking threshold used to track the target object may also be referred to as the target threshold of the target object. In the track tracking phase of the target object, the target threshold of the target object can be used to track the target object; a detection point that falls within the target threshold of the target object is regarded as a detection point belonging to that target object, is used only for determining the track of that target object, and no longer participates in determining any track other than that of the target object.
Next, with reference to FIG. 5, FIG. 6, and FIG. 7, the determination of the target threshold of a target object is described with examples.
Referring to FIG. 5, for radar point cloud 1 (which may be the first frame of radar point cloud after the radar is turned on or the radar track tracking function is enabled, or the n-th frame of radar point cloud after the radar is turned on or the radar track tracking function is enabled), a clustering algorithm may be used to cluster the detection points on radar point cloud 1, to roughly partition the target objects detected in the radar point cloud. In one example, the clustering algorithm may be K-means. In another example, the clustering algorithm may be DBSCAN. Other clustering algorithms may also be used, which are not listed one by one here.
For example, before the clustering algorithm is used to cluster the detection points on radar point cloud 1, the detection points on radar point cloud 1 may be filtered to exclude stationary detection points on radar point cloud 1. That is, only the moving detection points on radar point cloud 1 may be clustered.
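A minimal sketch of this step is shown below, using scikit-learn's DBSCAN on the moving detection points only. The choice of DBSCAN over K-means, the `eps`/`min_samples` values, and the speed cutoff used to filter out stationary points are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_moving_points(points_xy, speeds, min_speed=0.5,
                          eps=1.5, min_samples=3):
    """Cluster the moving detection points of one radar point cloud frame.

    points_xy: (M, 2) array of detection point positions.
    speeds:    (M,) array of detection point speeds.
    Returns a list of (cluster_points, cluster_center) pairs.
    """
    moving = speeds > min_speed            # drop stationary detection points
    pts = points_xy[moving]
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    clusters = []
    for label in set(labels):
        if label == -1:                    # -1 marks DBSCAN noise points
            continue
        members = pts[labels == label]
        clusters.append((members, members.mean(axis=0)))
    return clusters
```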
Referring to FIG. 5 and FIG. 6, each cluster obtained by the clustering can be regarded as one target object. As shown in FIG. 5, for any target object, for example, target object S1: the detection points in target object S1 are framed by tracking threshold 1, the detection points in target object S1 are framed by tracking threshold 2, ..., and the detection points in target object S1 are framed by tracking threshold N. In this way, the detection points in target object S1 can be framed separately by the N tracking thresholds. The detection points framed by any one tracking threshold may be referred to as the detection points within that tracking threshold, or the point cloud data within that tracking threshold.
For example, it can be understood that for clustering algorithms such as K-means and DBSCAN, each obtained cluster has a cluster center. When a tracking threshold includes size information, the framing may be performed with the detection point at which the cluster center of the target object is located as the framing center, to obtain the detection points (point cloud data) within each tracking threshold. When the tracking threshold further includes speed information, the detection points within the tracking threshold may be filtered according to the speed information. Take the tracking threshold corresponding to the pedestrian category, whose speed information is a speed of 4 m/s, as an example: among the detection points within the tracking threshold corresponding to the pedestrian category, detection points with a speed greater than 4 m/s can be excluded. Take the tracking threshold corresponding to the car category, whose speed information is a speed of 10 m/s, as an example: among the detection points within the tracking threshold corresponding to the car category, detection points with a speed less than 10 m/s can be excluded.
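A minimal sketch of this framing (gating) step is shown below, reusing the hypothetical `TrackingThreshold` structure from the earlier example: it keeps the points that fall inside a size box centered on the cluster center and then applies the upper or lower speed limit if one is present. Treating the size as an axis-aligned box around the center is an assumption made for the example.

```python
import numpy as np

def points_within_threshold(points_xy, speeds, center_xy, thr):
    """Return the detection points framed by one tracking threshold.

    points_xy: (M, 2) positions and speeds: (M,) speeds of one cluster,
    center_xy: cluster center used as the framing center,
    thr: a TrackingThreshold with width_m, length_m, and speed limits.
    """
    half = np.array([thr.width_m / 2.0, thr.length_m / 2.0])
    inside = np.all(np.abs(points_xy - center_xy) <= half, axis=1)
    if thr.max_speed_mps is not None:       # e.g. pedestrian: drop points above 4 m/s
        inside &= speeds <= thr.max_speed_mps
    if thr.min_speed_mps is not None:       # e.g. car: drop points below 10 m/s
        inside &= speeds >= thr.min_speed_mps
    return points_xy[inside], speeds[inside]
```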
In the above manner, the point cloud data within the N tracking thresholds corresponding to target object S1 can be obtained. Inputting the point cloud data within each of the N tracking thresholds corresponding to target object S1 into the classifier described above yields K confidences corresponding to the K target categories, where a confidence is used to characterize the accuracy with which the target object belongs to a certain target category. Specifically, the point cloud data within tracking threshold 1 corresponding to target object S1 may be input into the classifier to obtain K confidences of target object S1 under tracking threshold 1, where the K confidences are in one-to-one correspondence with the K target categories; that is, the point cloud data corresponding to tracking threshold 1 is processed to obtain a confidence for each target category, giving K confidences in total. The first confidence among the K confidences represents the accuracy with which target object S1 belongs to the first target category corresponding to that first confidence. The point cloud data within tracking threshold 2 corresponding to target object S1 may be input into the classifier to obtain K confidences of target object S1 under tracking threshold 2, which also include a first confidence; the first confidence here corresponds to tracking threshold 2, is obtained by processing the point cloud data within tracking threshold 2, and is used to characterize the accuracy with which target object S1 belongs to the first target category. Similarly, the foregoing processing can be performed for all N tracking thresholds, so that K confidences corresponding to each tracking threshold are obtained, that is, N*K confidences in total.
Continuing to refer to FIG. 6, the confidences of target object S1 under the N tracking thresholds that correspond to the same target category may be added up, and the resulting sum may be used as the single-frame confidence sum of that target category. Specifically, the confidence of target object S1 for category 1 under tracking threshold 1, the confidence of target object S1 for category 1 under tracking threshold 2, ..., and the confidence of target object S1 for category 1 under tracking threshold N are added, that is, the N first confidences are summed, and the resulting sum may be used as the single-frame confidence sum of category 1. Using a calculation similar to that for the single-frame confidence sum of category 1, the single-frame confidence sums of category 2, ..., and category K can be obtained. By summing the confidences of the first target category over the N tracking thresholds, the credibility of the target object belonging to the first target category under multiple tracking thresholds can be judged comprehensively, so that the category of the target object is evaluated more accurately.
In some embodiments, the single-frame confidence sums of the target categories for target object S1 may be compared to obtain the target category with the highest single-frame confidence sum. The target category with the highest single-frame confidence sum may be used as the estimated category of target object S1, and the tracking threshold corresponding to the estimated category of target object S1 is used as the tracking threshold for tracking target object S1 in the track tracking phase of target object S1, that is, as the target threshold of target object S1. In the track tracking phase of target object S1, the target threshold of target object S1 can be used to frame the detection point cluster corresponding to target object S1, so that the detection points within the target threshold of target object S1 are used to determine the track of target object S1. In addition, the detection points within the target threshold of target object S1 no longer participate in the clustering or the track establishment process of other target objects, which reduces mutual interference between tracks and reduces the false detection rate.
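The following sketch ties the previous steps together for a single frame: it runs the classifier on the point cloud data inside each of the N tracking thresholds, sums the per-category confidences across thresholds into single-frame confidence sums, and picks the category with the highest sum as the estimated category. The classifier interface (`predict_proba` over one feature vector per gated point set), the `extract_features` helper, and the reuse of the `points_within_threshold` gating sketch are assumptions carried over from the earlier examples.

```python
import numpy as np

def estimate_category_single_frame(cluster_xy, cluster_speeds, center_xy,
                                   thresholds, classifier, extract_features):
    """thresholds: list of N TrackingThreshold objects (hypothetical).
    classifier: trained model with predict_proba -> (1, K) confidences.
    extract_features: hypothetical helper turning gated points into a feature vector.
    Returns (estimated_category_index, single_frame_sums of shape (K,))."""
    sums = None
    for thr in thresholds:
        pts, spd = points_within_threshold(cluster_xy, cluster_speeds,
                                           center_xy, thr)
        feats = extract_features(pts, spd).reshape(1, -1)
        conf = classifier.predict_proba(feats)[0]   # K confidences for this threshold
        sums = conf if sums is None else sums + conf
    return int(np.argmax(sums)), sums
```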
For example, for any frame of radar point cloud, after framing with the target threshold of target object S1 and the target thresholds of other already determined target objects, the detection points that are not framed by any tracking threshold can be clustered again to obtain clusters, and new target objects are judged or tracks are established based on the clusters obtained by the re-clustering.
In one example, the above framing of the detection point cluster corresponding to target object S1 with the target threshold of target object S1 may specifically be framing centered on the detection point at which the cluster center of the detection point cluster corresponding to target object S1 is located.
It should be noted that, for two adjacent frames of radar point cloud (two frames whose acquisition times are adjacent), a clustering algorithm may be used to cluster the detection points on each of the two frames of radar point cloud to obtain clusters. Then, a tracking algorithm is used to associate the clusters on the two adjacent frames of radar point cloud, so that the clusters on the two adjacent frames that can be regarded as corresponding to the same object are obtained, that is, the clusters in the two adjacent frames that can be regarded as corresponding to target object S1 are obtained. The tracking algorithm may be a Kalman filter algorithm, a particle filter algorithm, or the like.
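As an illustration of the association step, the following is a minimal constant-velocity Kalman filter sketch in Python: the track predicts where the cluster center of target object S1 should be in the next frame, and the re-detected cluster whose center is nearest to the prediction (within a gate) is associated with the track and used to update it. The constant-velocity motion model, the nearest-neighbor association rule, and all noise parameters are assumptions made for the example and are not taken from this application.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter for one track (assumed parameters)."""

    def __init__(self, center_xy, dt=0.1):
        self.x = np.array([center_xy[0], center_xy[1], 0.0, 0.0])  # [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt       # motion model
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.1                                   # process noise
        self.R = np.eye(2) * 0.5                                   # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                          # predicted center

    def update(self, measured_center):
        z = np.asarray(measured_center, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def associate(predicted_center, cluster_centers, gate=2.0):
    """Return the index of the nearest cluster center within the gate, or None."""
    if len(cluster_centers) == 0:
        return None
    d = np.linalg.norm(np.asarray(cluster_centers) - predicted_center, axis=1)
    i = int(np.argmin(d))
    return i if d[i] <= gate else None
```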
In some embodiments, referring to FIG. 7, the single-frame confidence sum of each target category for target object S1 may be calculated under each of multiple frames of radar point cloud. For the calculation of the single-frame confidence sums under each frame of radar point cloud, refer to the description above. For target object S1, the single-frame confidence sums corresponding to the same target category under the frames of the multi-frame radar point cloud may be added to obtain the multi-frame confidence sum of that target category. Specifically, the single-frame confidence sum of category 1 of target object S1 under radar point cloud 1, ..., the single-frame confidence sum of category 1 of target object S1 under radar point cloud P, and the single-frame confidence sum of category 1 of target object S1 under radar point cloud P+1 may be added to obtain the multi-frame confidence sum of category 1 corresponding to target object S1. By comprehensively judging the confidences over multiple frames, the characteristics of the target object over the duration of the multiple frames can be taken into account, thereby improving the accuracy of judging the category of the target object.
In one example, the multi-frame confidence sum may be a simple addition of the single-frame confidence sums.
In another example, it may be a weighted addition of the single-frame confidence sums. For example, a weight may be assigned to the single-frame confidence sum corresponding to each frame of radar point cloud according to the order of the acquisition times of the frames in the multi-frame radar point cloud. Taking radar point cloud L1, radar point cloud L2, and radar point cloud L3 as an example, it may be assumed that the acquisition time of radar point cloud L1 is earlier than that of radar point cloud L2, and the acquisition time of radar point cloud L2 is earlier than that of radar point cloud L3; then the weight of the single-frame confidence sum corresponding to radar point cloud L1 < the weight of the single-frame confidence sum corresponding to radar point cloud L2 < the weight of the single-frame confidence sum corresponding to radar point cloud L3. For example, the weight corresponding to radar point cloud L1 may be 0.1, the weight corresponding to radar point cloud L2 may be 0.3, and the weight corresponding to radar point cloud L3 may be 0.6.
Using a calculation similar to that for the multi-frame confidence sum of category 1, the multi-frame confidence sums of category 2, ..., and category K can be obtained.
The multi-frame confidence sums of the target categories for target object S1 may be compared to obtain the target category with the highest multi-frame confidence sum. The target category with the highest multi-frame confidence sum may be used as the estimated category of target object S1, and the tracking threshold corresponding to the estimated category of target object S1 is used as the tracking threshold for tracking target object S1 in the track tracking phase of target object S1, that is, as the target threshold of target object S1. In the track tracking phase of target object S1, the target threshold of target object S1 can be used to frame the detection point cluster corresponding to target object S1, so that the detection points within the target threshold of target object S1 are used to determine the track of target object S1. In addition, the detection points within the target threshold of target object S1 no longer participate in the clustering or the track establishment process of other target objects, which reduces mutual interference between tracks and reduces the false detection rate.
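A minimal sketch of the multi-frame variant is shown below: the per-frame single-frame confidence sums (K values per frame) are combined either by simple addition or with per-frame weights that grow with acquisition time, and the category with the highest multi-frame sum is selected. The worked numbers reuse the 0.1/0.3/0.6 weights from the text; the function shape and the sample confidence values are assumptions made for the example.

```python
import numpy as np

def estimate_category_multi_frame(single_frame_sums, weights=None):
    """single_frame_sums: (num_frames, K) array, rows ordered by acquisition time.
    weights: optional per-frame weights, with later frames weighted more heavily.
    Returns (estimated_category_index, multi_frame_sums of shape (K,))."""
    sums = np.asarray(single_frame_sums, dtype=float)
    if weights is None:
        multi = sums.sum(axis=0)                 # simple addition
    else:
        w = np.asarray(weights, dtype=float).reshape(-1, 1)
        multi = (w * sums).sum(axis=0)           # weighted addition
    return int(np.argmax(multi)), multi

# Example with three frames (L1, L2, L3) and the 0.1 / 0.3 / 0.6 weights above.
per_frame = [[0.2, 0.7, 0.1], [0.3, 0.6, 0.1], [0.1, 0.8, 0.1]]
print(estimate_category_multi_frame(per_frame, weights=[0.1, 0.3, 0.6]))
```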
In some embodiments, referring to FIG. 5 and FIG. 8, it may be determined whether the confidence that target object S1 belongs to its estimated category under the tracking threshold corresponding to its estimated category (that is, the target threshold of target object S1) is greater than a preset confidence threshold. The confidence threshold may be a threshold preset based on experience or experiments; for example, it may be set to 80%, 85%, 95%, or the like, which are not listed one by one here.
For example, for the solution in which the estimated category of target object S1 is determined from a single frame of radar point cloud, the confidence that target object S1 belongs to its estimated category is specifically obtained as follows: the detection points (point cloud data) framed by the tracking threshold corresponding to the estimated category of target object S1 (the target threshold) are input into the classifier described above, and among the K confidences output by the classifier, the confidence corresponding to the estimated category is taken.
For example, for the solution in which the estimated category of target object S1 is determined from multiple frames of radar point cloud: In one example, the confidence that target object S1 belongs to its estimated category under its target threshold may be a simple average of the confidences corresponding to the multiple frames of radar point cloud. The confidence that target object S1, under the target threshold, belongs to the estimated category may be calculated for each frame of the multi-frame radar point cloud; for the specific calculation, refer to the description above, which is not repeated here. Then, the per-frame confidences that target object S1 belongs to the estimated category under the target threshold may be added to obtain the multi-frame confidence that target object S1 belongs to the estimated category under the target threshold, and the multi-frame confidence is divided by the number of frames to obtain a simple average of the confidences corresponding to the multiple frames; this average is used as the confidence that target object S1 belongs to its estimated category under its target threshold. In another example, the confidence that target object S1 belongs to its estimated category under its target threshold may be a weighted average of the confidences corresponding to the multiple frames of radar point cloud. The per-frame confidence that target object S1 belongs to the estimated category under the target threshold may be calculated for each frame of the multi-frame radar point cloud (for the specific calculation, refer to the description above), multiplied by the corresponding weight, and then added; the resulting weighted average may be used as the confidence that target object S1 belongs to its estimated category under its target threshold. The weights of the frames in the multi-frame radar point cloud may be assigned according to the order of the acquisition times of the radar point clouds; for details, refer to the description above, which is not repeated here.
For example, referring to FIG. 5 and FIG. 8, if the confidence that target object S1 belongs to its estimated category is greater than the confidence threshold, the estimated category of target object S1 is locked, and target object S1 is no longer classified in subsequently acquired radar point clouds. In this example, when the confidence that target object S1 belongs to its estimated category is greater than the confidence threshold, locking the estimated category of target object S1 reduces the computational overhead of triggering the classifier and improves detection efficiency.
For example, referring to FIG. 5 and FIG. 8, if the confidence that target object S1 belongs to its estimated category is less than or equal to the confidence threshold, the estimated category of target object S1 may be re-determined in the track tracking phase of target object S1 based on a radar point cloud newly acquired by the detection apparatus. For the manner of re-determining the estimated category of target object S1, refer to the description of the embodiments shown in FIG. 5 and FIG. 6 above, which is not repeated here. After the estimated category of target object S1 is re-determined, it may be determined whether the confidence that target object S1 belongs to the re-determined estimated category is greater than the confidence threshold. If the confidence of the re-determined estimated category is greater than the confidence threshold, the re-determined estimated category is locked, and target object S1 is no longer classified in subsequent radar point clouds. If the confidence of the re-determined estimated category is less than or equal to the confidence threshold, the estimated category of target object S1 is determined yet again based on a radar point cloud acquired again, and so on. The foregoing process is repeated until the confidence of the most recently determined estimated category of target object S1 is greater than the confidence threshold or the track tracking of target object S1 ends. In one example, referring to FIG. 5, it may be assumed that the confidence of the estimated category of target object S1 determined in the track establishment phase of target object S1 is less than or equal to the confidence threshold; the estimated category of target object S1 may then be re-determined using radar point cloud P+2. If the confidence of the estimated category of target object S1 re-determined using radar point cloud P+2 is still less than or equal to the confidence threshold, radar point cloud P+3 (not shown), whose acquisition time is after that of radar point cloud P+2, may be used to re-determine the estimated category of target object S1 again. The foregoing process is repeated until the confidence of the most recently determined estimated category of target object S1 is greater than the confidence threshold or the track tracking of target object S1 ends.
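The following control-flow sketch summarizes the lock/re-determine logic described above for one tracked object. The `estimate_fn` and `confidence_of_estimate` callables (the latter returning the classifier confidence for the current estimated category under its target threshold, averaged over frames as described above) and the 0.85 threshold are assumptions carried over from the earlier examples.

```python
def track_category_loop(frames, estimate_fn, confidence_of_estimate,
                        confidence_threshold=0.85):
    """frames: iterable of radar point cloud frames for one tracked object.
    estimate_fn(frame) -> estimated category for that frame.
    confidence_of_estimate(frame, category) -> confidence in [0, 1].
    Returns the locked category, or the last estimate if tracking ends first."""
    category = None
    for frame in frames:
        category = estimate_fn(frame)                     # (re-)determine estimated category
        if confidence_of_estimate(frame, category) > confidence_threshold:
            return category                               # lock: stop classifying this object
    return category                                       # tracking ended without locking
```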
For example, in the above process of re-determining the estimated category of the target object S1, when the track of the target object S1 is determined from each frame of radar point cloud, that frame may be framed with the tracking threshold corresponding to the most recently determined estimated category of the target object S1, the track of the target object S1 is determined from the detection points inside the threshold, and those detection points are prevented from participating in the track establishment processes of other target objects. In one example, referring to Figure 5, suppose that the confidence of the estimated category of the target object S1 determined during its track establishment stage is less than or equal to the confidence threshold. When the track of the target object S1 is determined from the radar point cloud P+2, the radar point cloud P+2 may be framed with the tracking threshold corresponding to the estimated category determined during the track establishment stage of the target object S1. When the track of the target object S1 is determined from the radar point cloud P+3 (not shown), the tracking threshold on the radar point cloud P+3 may be determined using the tracking threshold corresponding to the estimated category of the target object S1 re-determined from the radar point cloud P+2.
The method for determining a tracking threshold of a target object provided by the embodiments of this application can determine the category of the target object and use the tracking threshold corresponding to that category as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of the tracks of other target objects. As a result, the detection points inside the target threshold do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
Figures 9A and 9B show an actual verification result of the method for determining a tracking threshold of a target object provided by an embodiment of this application. As shown in Figures 9A and 9B, when a pedestrian 910 appears at the front left of the vehicle 100, the target threshold of the pedestrian is determined to be the threshold 920 by the method for determining a tracking threshold of a target object provided by the embodiment of this application. The threshold 920 is the preset tracking threshold corresponding to pedestrians in the embodiment of this application.
Figures 10A and 10B show another actual verification result of the method for determining a tracking threshold of a target object provided by an embodiment of this application. As shown in Figures 10A and 10B, when a bicycle (with a rider) 1010 appears at the front left of the vehicle 100, the target threshold of the bicycle is determined to be the threshold 1020 by the method for determining a tracking threshold of a target object provided by the embodiment of this application. The threshold 1020 is the preset tracking threshold corresponding to bicycles in the embodiment of this application.
Figures 11A and 11B show yet another actual verification result of the method for determining a tracking threshold of a target object provided by an embodiment of this application. As shown in Figures 11A and 11B, when a car 1110 appears at the front left of the vehicle 100, the target threshold of the car is determined to be the threshold 1120 by the method for determining a tracking threshold of a target object provided by the embodiment of this application. The threshold 1120 is the preset tracking threshold corresponding to cars in the embodiment of this application.
The three actual verification results shown above demonstrate that the method for determining a tracking threshold of a target object provided by the embodiments of this application can accurately determine the category of a target object and track the target object using the tracking threshold corresponding to that category.
Referring to Figure 12, an embodiment of this application provides a method for determining a tracking threshold of a target object. The method may be implemented by a detection apparatus, for example a radar apparatus. The radar apparatus may be a vehicle-mounted radar apparatus, for example the radar 1084 shown in Figure 4. The method may also be implemented by a processing apparatus integrated in the detection apparatus, for example the signal processor in the radar 1084 shown in Figure 4. In one example, the method may also be implemented by a processing apparatus independent of the detection apparatus (for example, the control center 300 or the computing system 102), with the processing result fed back to the detection apparatus.
As shown in Figure 12, the method may include the following steps.
Step 1201: Determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a set of point data obtained by measuring a target object, and the at least one frame of radar point cloud includes a first frame of radar point cloud.
For example, the detection apparatus or another processing apparatus may determine one or more frames of radar point cloud from the raw detection data collected by the detection apparatus in one scan of the target object, or from multiple sets of raw detection data collected over multiple scans, where one frame of radar point cloud corresponds to one scan of the detection apparatus. For details, reference may be made to the description above, which is not repeated here.
Step 1203: Determine N tracking thresholds corresponding to the first frame of radar point cloud, the N tracking thresholds including a first threshold, and determine K confidences according to the point cloud data in the first threshold, the K confidences being in one-to-one correspondence with K target categories.
The N preset tracking thresholds may each be used to frame point cloud data (detection points) on the first frame of radar point cloud, thereby obtaining the N tracking thresholds corresponding to the first frame of radar point cloud. The point cloud data in any one of these N tracking thresholds, for example the point cloud data in the first threshold (that is, the point cloud data framed by the first threshold), may be input into a K-class classifier to obtain K confidences.
Repeating this procedure yields the K confidences corresponding to each of the N tracking thresholds.
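As an illustration of this step, the sketch below frames a toy two-dimensional point cloud with several preset thresholds and runs a stand-in classifier on the points inside each threshold. The rectangular gating rule, the NumPy-based placeholder classifier, and the category names and sizes are assumptions for the example only, not details disclosed by the embodiment.

    import numpy as np

    def frame_points(frame, center, size):
        """Keep the detection points of one frame that fall inside a rectangular tracking threshold."""
        half_w, half_l = size[0] / 2.0, size[1] / 2.0
        return [p for p in frame
                if abs(p[0] - center[0]) <= half_w and abs(p[1] - center[1]) <= half_l]

    def classify(points, k=4):
        """Stand-in K-class classifier returning K confidences that sum to 1."""
        rng = np.random.default_rng(len(points))        # deterministic placeholder, not a real model
        scores = rng.random(k)
        return scores / scores.sum()

    frame = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8), (6.0, 7.0)]        # toy first frame of radar point cloud
    thresholds = {"pedestrian": (1.0, 1.0), "bicycle": (1.0, 2.5),  # N = 4 preset threshold sizes (w, l)
                  "car": (2.2, 5.0), "truck": (3.0, 12.0)}
    confidences_per_threshold = {
        name: classify(frame_points(frame, (1.0, 2.0), size))       # K confidences for each threshold
        for name, size in thresholds.items()
    }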
When the at least one frame of radar point cloud further includes other frames of radar point cloud, the above procedure may be repeated to obtain the K confidences corresponding to each of the N tracking thresholds of those other frames. For details, reference may be made to the description of the method embodiments shown in Figures 5 and 6 above, which is not repeated here.
Step 1205: Determine a first target category according to the K confidences.
Step 1207: Determine, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
The tracking threshold corresponding to the first target category may be determined as the target threshold for tracking the target object. When the track of the target object is tracked, the point cloud data used to establish the track of the target object may be framed by the target threshold, and the point cloud data framed by the target threshold is kept from participating in the track establishment processes of other target objects.
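The sketch below illustrates this exclusion step under the simplifying assumption that association is decided by threshold membership alone; the function names and coordinates are invented for the example.

    def split_by_threshold(points, inside):
        """Split a frame into points claimed by the target threshold and points left for other tracks."""
        claimed = [p for p in points if inside(p)]
        remaining = [p for p in points if not inside(p)]
        return claimed, remaining

    points = [(0.5, 0.5), (0.7, 0.4), (5.0, 5.0)]
    inside = lambda p: abs(p[0] - 0.6) <= 0.5 and abs(p[1] - 0.5) <= 0.5   # target threshold region
    target_points, leftover_points = split_by_threshold(points, inside)
    # target_points -> [(0.5, 0.5), (0.7, 0.4)]; only leftover_points are offered to other tracks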
In some embodiments, the K confidences include a first confidence corresponding to the first target category, and the first confidence is used to characterize the accuracy with which the point cloud data in the first threshold belongs to the first target category.
For threshold 1 of the N tracking thresholds, each of its K confidences characterizes the accuracy with which the point cloud data in threshold 1 belongs to the target category corresponding to that confidence. Specifically, confidence 1 characterizes the accuracy with which the point cloud data in threshold 1 belongs to the target category corresponding to confidence 1, confidence 2 characterizes the accuracy with which the point cloud data in threshold 1 belongs to the target category corresponding to confidence 2, ..., and confidence K characterizes the accuracy with which the point cloud data in threshold 1 belongs to the target category corresponding to confidence K.
Similarly, for threshold 2 of the N tracking thresholds, each of its K confidences characterizes the accuracy with which the point cloud data in threshold 2 belongs to the target category corresponding to that confidence; for threshold 3, each of its K confidences characterizes the accuracy with which the point cloud data in threshold 3 belongs to the target category corresponding to that confidence; ...; and for threshold N, each of its K confidences characterizes the accuracy with which the point cloud data in threshold N belongs to the target category corresponding to that confidence.
In some embodiments, the N tracking thresholds are determined according to preset parameter information and are used to delimit the range corresponding to the target object in the first frame of radar point cloud.
Different tracking thresholds may be determined according to preset parameter information corresponding to different target categories. It can be understood that target objects belonging to the same target category have similar measurement ranges or point data ranges. The parameter information may therefore be set according to the measurement range or point data range of target objects of the same category, and the corresponding tracking threshold may then be determined from that parameter information. The range of the target object delimited by the tracking threshold may be the range of the point data of the target object or the measurement range of the target object. Delimiting the point data range or measurement range of the target object by means of the tracking threshold makes it easier for the classifier to classify the target object.
In an example of this embodiment, the parameter information includes geometric size information of a preset target category and/or speed information of a preset target category. Target objects belonging to the same target category have similar geometric sizes and speeds. Through the geometric size information and/or speed information of a target category, the tracking threshold corresponding to that category can be determined.
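A minimal sketch of how a tracking threshold size could be derived from such preset parameter information is given below. The dimensions, speeds, scan interval, and the rule of padding the footprint by one scan interval of travel are assumptions for illustration, not values disclosed by the embodiment.

    PRESET_PARAMS = {                                   # assumed geometric size (m) and speed (m/s) per category
        "pedestrian": {"length": 0.6, "width": 0.6, "max_speed": 2.0},
        "bicycle":    {"length": 1.8, "width": 0.8, "max_speed": 8.0},
        "car":        {"length": 4.8, "width": 1.9, "max_speed": 40.0},
    }
    SCAN_INTERVAL = 0.1                                 # assumed time between frames, in seconds

    def threshold_size(category):
        """Threshold extent = object footprint padded by the distance it can travel in one scan."""
        p = PRESET_PARAMS[category]
        margin = p["max_speed"] * SCAN_INTERVAL
        return p["length"] + 2 * margin, p["width"] + 2 * margin

    print(threshold_size("car"))                        # -> (12.8, 9.9)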
In some embodiments, if the point cloud data in the first threshold is input into the K-class classifier and, among the resulting K confidences, the confidence corresponding to the first target category of the K target categories is the largest and is greater than a preset first confidence threshold (for example, 95%), the first target category may be determined as the target category to which the target object belongs.
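A tiny sketch of this decision rule follows: the arg-max category is accepted only when its confidence also exceeds the first confidence threshold, here assumed to be 0.95 in line with the 95% example above, with invented category names and values.

    CATEGORIES = ["pedestrian", "bicycle", "car"]       # K = 3 target categories, names for illustration
    confidences = [0.96, 0.03, 0.01]                    # classifier output for the first threshold
    best = max(range(len(CATEGORIES)), key=lambda k: confidences[k])
    target_category = CATEGORIES[best] if confidences[best] > 0.95 else None
    print(target_category)                              # -> pedestrian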
In some embodiments, the first target category is determined according to K total confidences, where the K total confidences include a first total confidence, the first total confidence is the sum of N first confidences, and the N first confidences are in one-to-one correspondence with the N tracking thresholds.
The point cloud data in each of the N tracking thresholds is input into the K-class classifier to obtain K confidences, so that each tracking threshold corresponds to K confidences. Adding the first confidences of the K confidences of each of the N tracking thresholds yields the sum of the first confidences; adding the second confidences yields the sum of the second confidences; and repeating this procedure yields the sum of each of the K confidences. The first target category may then be determined according to these K sums. For example, the K sums may be compared, and the target category corresponding to the confidence with the largest sum may be taken as the first target category. For details, reference may be made to the description of the embodiments shown in Figures 5 and 6 above.
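The per-frame selection just described can be sketched as follows; the three thresholds, three categories, and confidence values are invented for the example.

    CATEGORIES = ["pedestrian", "bicycle", "car"]            # K = 3 target categories
    confidences_per_threshold = {                            # K confidences per tracking threshold (N = 3)
        "threshold_pedestrian": [0.70, 0.20, 0.10],
        "threshold_bicycle":    [0.40, 0.45, 0.15],
        "threshold_car":        [0.30, 0.30, 0.40],
    }
    total_confidences = [sum(vec[k] for vec in confidences_per_threshold.values())
                         for k in range(len(CATEGORIES))]    # K total confidences
    first_target_category = CATEGORIES[total_confidences.index(max(total_confidences))]
    print(first_target_category, total_confidences)          # -> pedestrian, totals ~ [1.40, 0.95, 0.65]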
In some embodiments, the first target category is determined according to K multi-frame total confidences, where the K multi-frame total confidences include a first multi-frame total confidence, the first multi-frame total confidence is the sum of at least one first total confidence, and the at least one first total confidence is in one-to-one correspondence with the at least one frame of radar point cloud; the first total confidence corresponding to the first frame of radar point cloud is the sum of N first confidences, and the N first confidences are in one-to-one correspondence with the N tracking thresholds.
For a first radar point cloud in the at least one frame of radar point cloud, the sum of each of the K confidences corresponding to the first radar point cloud may be determined, yielding K total confidences; for details, reference may be made to the description above. Repeating this procedure yields the K total confidences corresponding to each frame of the at least one frame of radar point cloud. Adding the first total confidences of the K total confidences of each frame yields the first multi-frame total confidence corresponding to the first total confidence. Repeating the foregoing steps yields the multi-frame total confidence corresponding to each of the K total confidences, that is, the K multi-frame total confidences. The target category corresponding to the largest of the K multi-frame total confidences may be determined as the first target category. For details, reference may be made to the description of the embodiment shown in Figure 7 above, which is not repeated here.
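Continuing the same invented numbers as in the previous sketch, the multi-frame accumulation can be illustrated as follows; the per-frame totals are assumed values, not measured data.

    CATEGORIES = ["pedestrian", "bicycle", "car"]
    per_frame_totals = [                     # one vector of K total confidences per frame of radar point cloud
        [1.40, 0.95, 0.65],
        [1.10, 1.20, 0.70],
        [1.50, 0.80, 0.70],
    ]
    multi_frame_totals = [sum(frame[k] for frame in per_frame_totals)
                          for k in range(len(CATEGORIES))]
    first_target_category = CATEGORIES[multi_frame_totals.index(max(multi_frame_totals))]
    print(first_target_category, multi_frame_totals)         # -> pedestrian, totals ~ [4.00, 2.95, 2.05]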
In some embodiments, the K target categories include two or more of the following:
pedestrian, car, bicycle, electric vehicle.
In some embodiments, the at least one frame of radar point cloud is a millimeter-wave radar point cloud.
The method for determining a tracking threshold of a target object provided by the embodiments of this application can determine the category of the target object and use the tracking threshold corresponding to that category as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of the tracks of other target objects. As a result, the detection points inside the target threshold do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
Referring to Figure 13, an embodiment of this application provides an apparatus for determining a tracking threshold of a target object. The apparatus may include a processor 1310 and a transceiver 1320. When the apparatus runs, the processor 1310 executes computer instructions so that the apparatus performs the method shown in Figure 12. The processor 1310 may determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a set of point data obtained by measuring a target object and includes a first frame of radar point cloud. The processor 1310 may determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and determine K confidences according to the point cloud data in the first threshold, the K confidences being in one-to-one correspondence with K target categories. The processor 1310 may determine a first target category according to the K confidences. The processor 1310 may determine, from the N tracking thresholds and according to the first target category, a target threshold for tracking the target object.
In some embodiments, as shown in Figure 13, the apparatus further includes a memory 1330. The memory 1330 may be used to store the above computer instructions, and may also be used to store the classifier and the like.
In some embodiments, the electronic device further includes a communication bus 1340, through which the processor 1310 may be connected to the transceiver 1320 and the memory 1330, so that the processor 1310 can correspondingly control components such as the transceiver 1320 according to the computer-executable instructions stored in the memory 1330.
For the specific implementation of the components/devices of the electronic device in the embodiments of this application, reference may be made to the method embodiments shown in Figure 12 above, which are not repeated here.
In this way, the category of the target object can be determined, and the tracking threshold corresponding to that category can be used as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of the tracks of other target objects. As a result, the detection points inside the target threshold do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
It can be understood that the processor in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. A general-purpose processor may be a microprocessor or any conventional processor.
Referring to Figure 14, an embodiment of this application provides an apparatus 1400 for determining a tracking threshold of a target object. As shown in Figure 14, the apparatus 1400 includes a processing unit 1410 and a transceiver unit 1420.
The transceiver unit 1420 is configured to determine at least one frame of radar point cloud, where the at least one frame of radar point cloud is a set of point data obtained by measuring a target object and includes a first frame of radar point cloud.
The processing unit 1410 is configured to determine N tracking thresholds corresponding to the first frame of radar point cloud, where the N tracking thresholds include a first threshold, and to determine K confidences according to the point cloud data in the first threshold, the K confidences being in one-to-one correspondence with K target categories.
The processing unit 1410 is further configured to determine a first target category according to the K confidences.
The processing unit 1410 is further configured to determine, from the N tracking thresholds and according to the first target category, a target threshold for tracking the target object.
The functions of the functional units of the apparatus 1400 may be implemented with reference to the method embodiments shown in Figure 12 above, which are not repeated here.
The apparatus for determining a tracking threshold of a target object provided by the embodiments of this application can determine the category of the target object and use the tracking threshold corresponding to that category as the target threshold for tracking the target object, so that the detection points framed by the target threshold no longer participate in the clustering of other target objects or in the establishment of the tracks of other target objects. As a result, the detection points inside the target threshold do not affect other tracks, interference between different tracks is eliminated or reduced, and the false detection rate is lowered.
The method steps in the embodiments of this application may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted via a computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
It can be understood that the various numerals involved in the embodiments of this application are merely for ease of description and are not intended to limit the scope of the embodiments of this application.

Claims (16)

  1. A method for determining a tracking threshold of a target object, characterized in that the method comprises:
    determining at least one frame of radar point cloud, wherein the at least one frame of radar point cloud is a set of point data obtained by measuring a target object, and the at least one frame of radar point cloud comprises a first frame of radar point cloud;
    determining N tracking thresholds corresponding to the first frame of radar point cloud, wherein the N tracking thresholds comprise a first threshold, and determining K confidences according to point cloud data in the first threshold, the K confidences being in one-to-one correspondence with K target categories;
    determining a first target category according to the K confidences; and
    determining, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
  2. The method according to claim 1, characterized in that the K confidences comprise a first confidence corresponding to the first target category, and the first confidence is used to characterize the accuracy with which the point cloud data in the first threshold belongs to the first target category.
  3. The method according to claim 1, characterized in that the N tracking thresholds are determined according to preset parameter information and are used to delimit a range corresponding to the target object in the first frame of radar point cloud.
  4. The method according to claim 3, characterized in that the parameter information comprises geometric size information of a preset target category and/or speed information of a preset target category.
  5. The method according to any one of claims 1 to 4, characterized in that the first target category is determined according to K total confidences, the K total confidences comprise a first total confidence, the first total confidence is a sum of N first confidences, and the N first confidences are in one-to-one correspondence with the N tracking thresholds.
  6. The method according to any one of claims 1 to 4, characterized in that the first target category is determined according to K multi-frame total confidences;
    the K multi-frame total confidences comprise a first multi-frame total confidence, the first multi-frame total confidence is a sum of at least one first total confidence, and the at least one first total confidence is in one-to-one correspondence with the at least one frame of radar point cloud;
    wherein the first total confidence corresponding to the first frame of radar point cloud is a sum of N first confidences, and the N first confidences are in one-to-one correspondence with the N tracking thresholds.
  7. The method according to claim 1, characterized in that the K target categories comprise two or more of the following:
    pedestrian, car, bicycle, electric vehicle.
  8. An apparatus for determining a tracking threshold of a target object, characterized in that it comprises a processing unit and a transceiver unit, wherein
    the transceiver unit is configured to determine at least one frame of radar point cloud, the at least one frame of radar point cloud is a set of point data obtained by measuring a target object, and the at least one frame of radar point cloud comprises a first frame of radar point cloud;
    the processing unit is configured to determine N tracking thresholds corresponding to the first frame of radar point cloud, wherein the N tracking thresholds comprise a first threshold, and to determine K confidences according to point cloud data in the first threshold, the K confidences being in one-to-one correspondence with K target categories;
    the processing unit is further configured to determine a first target category according to the K confidences; and
    the processing unit is further configured to determine, according to the first target category, a target threshold for tracking the target object from the N tracking thresholds.
  9. The apparatus according to claim 8, characterized in that the K confidences comprise a first confidence corresponding to the first target category, and the first confidence is used to characterize the accuracy with which the point cloud data in the first threshold belongs to the first target category.
  10. The apparatus according to claim 8, characterized in that the N tracking thresholds are determined according to preset parameter information and are used to delimit a range corresponding to the target object in the first frame of radar point cloud.
  11. The apparatus according to claim 10, characterized in that the parameter information comprises geometric size information of a preset target category and/or speed information of a preset target category.
  12. The apparatus according to any one of claims 8 to 11, characterized in that the first target category is determined according to K total confidences, the K total confidences comprise a first total confidence, the first total confidence is a sum of N first confidences, and the N first confidences are in one-to-one correspondence with the N tracking thresholds.
  13. The apparatus according to any one of claims 8 to 11, characterized in that the first target category is determined according to K multi-frame total confidences;
    the K multi-frame total confidences comprise a first multi-frame total confidence, the first multi-frame total confidence is a sum of at least one first total confidence, and the at least one first total confidence is in one-to-one correspondence with the at least one frame of radar point cloud;
    wherein the first total confidence corresponding to the first frame of radar point cloud is a sum of N first confidences, and the N first confidences are in one-to-one correspondence with the N tracking thresholds.
  14. The apparatus according to claim 8, characterized in that the K target categories comprise two or more of the following:
    pedestrian, car, bicycle, electric vehicle.
  15. An apparatus for determining a tracking threshold of a target object, characterized in that it comprises a processor and a transceiver, wherein the processor executes computer instructions so that the apparatus performs the method according to claims 1 to 7.
  16. A computer-readable storage medium storing instructions, characterized in that, when the instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 7.
PCT/CN2020/136718 2019-12-16 2020-12-16 Method and apparatus for determining target object tracking threshold WO2021121247A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911294731.0 2019-12-16
CN201911294731.0A CN113064153B (en) 2019-12-16 2019-12-16 Method and device for determining target object tracking threshold

Publications (1)

Publication Number Publication Date
WO2021121247A1 true WO2021121247A1 (en) 2021-06-24

Family

ID=76477088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136718 WO2021121247A1 (en) 2019-12-16 2020-12-16 Method and apparatus for determining target object tracking threshold

Country Status (2)

Country Link
CN (1) CN113064153B (en)
WO (1) WO2021121247A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7003136B1 (en) * 2002-04-26 2006-02-21 Hewlett-Packard Development Company, L.P. Plan-view projections of depth image data for object tracking
CN105957106A (en) * 2016-04-26 2016-09-21 湖南拓视觉信息技术有限公司 Method and apparatus for tracking three-dimensional targets
CN108427112A (en) * 2018-01-22 2018-08-21 南京理工大学 A kind of improved more extension method for tracking target
CN110488815A (en) * 2019-08-01 2019-11-22 广州小鹏汽车科技有限公司 A kind of path following method and path following system of vehicle
CN110488273A (en) * 2019-08-30 2019-11-22 成都纳雷科技有限公司 A kind of vehicle tracking detection method and device based on radar

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113734046A (en) * 2021-08-17 2021-12-03 厦门星图安达科技有限公司 Method, device and equipment for detecting personnel in vehicle location partition based on radar
CN113734046B (en) * 2021-08-17 2023-09-19 江苏星图智能科技有限公司 Method, device and equipment for detecting personnel in vehicle position partition based on radar
CN114509720A (en) * 2022-01-18 2022-05-17 国网河北省电力有限公司信息通信分公司 Indoor positioning method and device for power grid equipment and terminal equipment

Also Published As

Publication number Publication date
CN113064153B (en) 2024-01-02
CN113064153A (en) 2021-07-02

Similar Documents

Publication Publication Date Title
US11815623B2 (en) Single frame 4D detection using deep fusion of camera image, imaging RADAR and LiDAR point cloud
US11821990B2 (en) Scene perception using coherent doppler LiDAR
US11726189B2 (en) Real-time online calibration of coherent doppler lidar systems on vehicles
WO2018086218A1 (en) Method and device for recovering vehicle braking energy
WO2021103511A1 (en) Operational design domain (odd) determination method and apparatus and related device
US10818110B2 (en) Methods and systems for providing a mixed autonomy vehicle trip summary
CN112512887B (en) Driving decision selection method and device
WO2021212379A1 (en) Lane line detection method and apparatus
US20210206395A1 (en) Methods and systems to enhance safety of bi-directional transition between autonomous and manual driving modes
CN112543877B (en) Positioning method and positioning device
WO2021121247A1 (en) Method and apparatus for determining target object tracking threshold
CN113525373B (en) Lane changing control system, control method and lane changing controller for vehicle
WO2021103536A1 (en) Vehicle adjustment and control method, apparatus, and electronic device
US20200130690A1 (en) Lateral adaptive cruise control
CN111311947B (en) Driving risk assessment method and device considering driver intention in internet environment
US20220073104A1 (en) Traffic accident management device and traffic accident management method
US11647164B2 (en) Methods and systems for camera sharing between autonomous driving and in-vehicle infotainment electronic control units
US20210362727A1 (en) Shared vehicle management device and management method for shared vehicle
US20230249660A1 (en) Electronic Mechanical Braking Method and Electronic Mechanical Braking Apparatus
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
WO2022000127A1 (en) Target tracking method and device therefor
CN115147796A (en) Method and device for evaluating target recognition algorithm, storage medium and vehicle
US20210387628A1 (en) Extracting agent intent from log data for running log-based simulations for evaluating autonomous vehicle software
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
CN115179930B (en) Vehicle control method and device, vehicle and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20903508

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20903508

Country of ref document: EP

Kind code of ref document: A1