WO2022000127A1 - Target tracking method and device therefor - Google Patents


Publication number
WO2022000127A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
covariance matrix
matrix
value
target tracking
Application number
PCT/CN2020/098494
Other languages
French (fr)
Chinese (zh)
Inventor
李晓波 (Li Xiaobo)
劳大鹏 (Lao Dapeng)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2020/098494 priority Critical patent/WO2022000127A1/en
Priority to CN202080016826.0A priority patent/CN113498529B/en
Publication of WO2022000127A1 publication Critical patent/WO2022000127A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/66 Tracking systems using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Definitions

  • the present application relates to the field of automatic driving, and in particular, to a target tracking method and device thereof.
  • Advanced driver assistance systems (ADAS) and autonomous driving systems (ADS) can automatically plan a reasonable driving path for the vehicle and control the vehicle's driving state.
  • ADAS and ADS can sense relevant road information on the current road in time and make timely, correct driving operations accordingly, so as to avoid vehicle accidents caused by driver inattention or slow reaction. Precisely because of these advantages, ADAS and ADS have become a research hotspot of current artificial intelligence; as the future development trend of automobiles, they have broad development prospects.
  • ADAS and ADS place higher requirements on the performance of vehicle-mounted radar, such as distance and angular resolution.
  • The improvement in the distance and angular resolution of the vehicle-mounted radar enables the radar system to detect multiple measurement points when imaging a target, forming a high-resolution point cloud; such a radar is referred to here as a point cloud imaging radar.
  • Target tracking is a key part of vehicle-mounted radar for intelligent driving applications.
  • The performance of target tracking depends on the selection of the target motion model and the filtering algorithm, while the performance of track filtering depends mainly on the covariance matrices of the measurement equation and the state equation: the more accurate the covariance matrix estimation, the higher the target tracking accuracy.
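The dependence of track filtering on the two covariance matrices can be sketched with a minimal constant-velocity Kalman filter step. This is a generic textbook illustration, not the patent's filter: all symbols (F, H, Q, R) and the numeric values are assumptions chosen only to show where the state-equation covariance Q and the measurement-equation covariance R enter the filter.

```python
import numpy as np

dt = 0.1  # frame interval in seconds (assumed)
F = np.array([[1, 0, dt, 0],   # state transition for state [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # the radar measures position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # state-equation (process) noise covariance
R = 0.25 * np.eye(2)           # measurement-equation noise covariance

def kalman_step(x, P, z):
    # Predict with the motion model; Q enters the predicted covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: the innovation covariance S depends directly on R, so a more
    # accurate R yields a more accurate gain K and better tracking.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

A mis-specified R therefore directly distorts the gain K, which is why the patent focuses on estimating the covariance accurately.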
  • The present application provides a target tracking method. The method includes: obtaining point cloud data of a target object and performing centroid estimation on the point cloud data; obtaining a target covariance matrix, where the target covariance matrix represents the centroid position error introduced when the centroid estimation is performed; and, based on the target covariance matrix, performing target tracking on the target object to obtain a target tracking result.
  • The centroid estimation result obtained from the above centroid estimation may serve as the state measurement value at the current moment, and the state measurement value and the target covariance matrix may be used as input data during target tracking.
  • the radar system may scan the target object to obtain point cloud data of the target object.
  • The point cloud data is obtained by scanning the target object with the radar system and can be recorded in the form of points, where each point can include coordinates and a motion state. When vehicles or other types of target objects turn or are occluded, the centroid estimated from the point cloud data of the same target object in different frames may experience positional disturbance.
  • Because a conventional covariance matrix only reflects the measurement error related to the point cloud measurement performance of the radar system, the error introduced by estimating the centroid of the point cloud data cannot be accurately quantified, and the accuracy of target tracking is therefore reduced.
  • The target covariance matrix, by contrast, can relatively accurately quantify the error caused by the centroid estimation of the point cloud data; if the target object is tracked based on the target covariance matrix, the accuracy of target tracking can be improved.
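The centroid estimation step above can be sketched as follows. The patent does not fix a particular estimator, so the simple arithmetic mean used here is an assumption; the function name and the sample frame are illustrative only.

```python
import numpy as np

def estimate_centroid(points):
    """Estimate the centroid of one frame of point cloud detections.

    points: sequence of (x, y) coordinates belonging to one target object.
    Returns the arithmetic mean of the points as the state measurement value.
    """
    points = np.asarray(points, dtype=float)
    return points.mean(axis=0)

# One frame of (assumed) detections for a single target
frame = [(10.0, 5.0), (10.4, 5.2), (9.8, 4.9), (10.2, 5.1)]
z_k = estimate_centroid(frame)  # state measurement value at the current moment
```

When the target turns or is partially occluded, the visible subset of points changes between frames, so this centroid jumps even if the target's true center moves smoothly; that jump is the centroid position error the target covariance matrix is meant to capture.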
  • an innovation covariance matrix may be constructed based on the target covariance matrix, and target tracking is performed on the target object based on the innovation covariance matrix to obtain a target tracking result.
  • the obtaining the target covariance matrix includes:
  • state prediction is performed on the target tracking result at the previous moment to obtain the target tracking prediction value at the current moment;
  • the target covariance matrix is obtained based on the target tracking predicted value and the state measurement value.
  • At the beginning of tracking, the covariance matrices used in the filtering are not close to the true value (i.e., the correct value), and target tracking at the current moment cannot yet be performed reliably based on historical target tracking results.
  • In that case, target tracking is performed according to a target covariance matrix set from empirical values, where the target covariance matrix can represent the centroid position error introduced during centroid estimation.
  • the obtaining the target covariance matrix includes:
  • When the number of data associations is less than or equal to a preset value, a preset covariance matrix is used as the target covariance matrix. The number of data associations indicates the number of times that the difference between the centroid measurement value and the state predicted value is less than a threshold; the centroid measurement value is obtained by estimating the centroid of the point cloud data of the target object at the current moment, and the state predicted value is obtained by prediction from the centroid measurement value at the previous moment, which in turn is obtained by estimating the centroid of the point cloud data of the target object at the previous moment.
  • The moment in this embodiment may have a granularity of frames or other time units; for example, the previous moment may be the previous frame, and the current moment may be the current frame.
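The data-association counter and the fallback to a preset covariance matrix described above can be sketched as below. The threshold, the preset value, and the Euclidean-distance gating are all illustrative assumptions; the patent only specifies that the count increases when the measurement-prediction difference is below a threshold.

```python
import numpy as np

THRESHOLD = 1.0    # gating distance for one association (assumed units: metres)
PRESET_VALUE = 3   # at or below this count, fall back to the preset matrix

def update_association_count(count, centroid_meas, state_pred):
    # Count one data association when the centroid measurement lies within
    # the threshold of the state prediction for this frame.
    diff = np.asarray(centroid_meas, dtype=float) - np.asarray(state_pred, dtype=float)
    return count + 1 if np.linalg.norm(diff) < THRESHOLD else count

def choose_covariance(count, preset_R, estimated_R):
    # Early in a track the estimated covariance is unreliable, so use the
    # empirically preset matrix; switch once enough associations accumulate.
    return preset_R if count <= PRESET_VALUE else estimated_R
```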
  • the obtaining the target covariance matrix includes:
  • The target covariance matrix is obtained based on the target tracking predicted value and the state measurement value, where the target tracking predicted value is obtained by performing state prediction on the target tracking result at the previous moment, and the state measurement value is obtained by performing centroid estimation on the point cloud data.
  • The target covariance matrix may be continuously updated as target tracking progresses: each iteration (that is, each computation of the target covariance matrix at the current moment) updates it based on the target tracking predicted value and the state measurement value at the current moment. As the target tracking process iterates, the target covariance matrix gradually approaches or reaches the true value, making the target tracking result more accurate.
  • Obtaining the target covariance matrix based on the target tracking predicted value and the state measurement value includes obtaining the target covariance matrix by the following formula:
  • R_1 = (Z_k - Ẑ_k)(Z_k - Ẑ_k)′;
  • the Z_k is the state measurement value;
  • the Ẑ_k is the target tracking predicted value;
  • the R_1 is the target covariance matrix.
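This computation can be sketched in a few lines. Reading the target covariance matrix R_1 as the outer product of the difference between the state measurement value and the target tracking predicted value is an assumption (the published formula is an image), but it is consistent with the surrounding definitions.

```python
import numpy as np

def target_covariance(z_k, z_pred):
    """Target covariance matrix from one measurement/prediction pair.

    z_k:    state measurement value (centroid estimate) at the current moment
    z_pred: target tracking predicted value at the current moment
    """
    d = np.asarray(z_k, dtype=float) - np.asarray(z_pred, dtype=float)
    return np.outer(d, d)  # R_1 = (z_k - z_pred)(z_k - z_pred)'

R1 = target_covariance([10.1, 5.05], [10.0, 5.0])
# R1 is symmetric and positive semidefinite by construction
```

In a running tracker this quantity would be recomputed each frame, so the covariance estimate follows the actual centroid disturbance rather than a fixed empirical value.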
  • The method further includes: acquiring a measurement equation covariance matrix, where the point cloud data is acquired by a radar system and the measurement equation covariance matrix represents the measurement deviation of the radar system; and synthesizing the measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix. Correspondingly, performing target tracking on the target object based on the target covariance matrix includes performing target tracking on the target object based on the synthesized target covariance matrix.
  • the synthesized target covariance matrix includes information related to the centroid position error introduced during centroid estimation, and the synthesized target covariance matrix relatively accurately quantifies the error caused by the centroid estimation of the point cloud data. Tracking the target object based on the synthesized target covariance matrix can improve the accuracy of target tracking.
  • The measurement equation covariance matrix is a matrix represented in a polar coordinate system, and the target covariance matrix is a matrix represented in a Cartesian coordinate system. Synthesizing the measurement equation covariance matrix and the target covariance matrix then includes: converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system, and performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix.
  • The point cloud data collected by the radar system is usually expressed in the polar coordinate system; therefore, the measurement equation covariance matrix, which is related to the performance of the radar system, is usually a matrix expressed in the polar coordinate system.
  • The target covariance matrix, however, is a matrix represented in the Cartesian coordinate system. In this case, the measurement equation covariance matrix can be converted into a matrix represented in the Cartesian coordinate system, and the converted measurement equation covariance matrix and the target covariance matrix are added to obtain the synthesized target covariance matrix.
  • If the point cloud data collected by the radar system is not expressed in the polar coordinate system, the matrices only need to be converted into the same coordinate system as the measurement equation covariance matrix before addition. For example, if the point cloud data collected by the radar system is expressed in the Cartesian coordinate system and the target covariance matrix is expressed in the polar coordinate system, only the measurement equation covariance matrix needs to be converted into the polar coordinate system.
  • Converting the measurement equation covariance matrix into a matrix represented in a Cartesian coordinate system includes: acquiring the state measurement value at the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data; obtaining a covariance transformation synthesis matrix based on the state measurement value; and, based on the covariance transformation synthesis matrix, transforming the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system.
  • a covariance transformation synthesis matrix may be constructed first, and the covariance transformation synthesis matrix may be used to convert the covariance matrix from a polar coordinate system to an expression in a Cartesian coordinate system; in one implementation, The covariance transformation synthesis matrix is related to the state measurement value at the current moment.
  • Part of the covariance transformation synthesis matrix can be constructed from the state measurement value, and the remaining elements of the covariance transformation synthesis matrix are filled with 0s and 1s.
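One concrete reading of such a covariance transformation synthesis matrix is the Jacobian of the polar-to-Cartesian map x = r·cos(θ), y = r·sin(θ), evaluated at the state measurement (range r_k, azimuth θ_k). This specific form is an assumption; the patent states only that part of the matrix comes from the state measurement value and the rest is 0s and 1s (which, for a larger state vector, would pad this block with an identity).

```python
import numpy as np

def transform_synthesis_matrix(r_k, theta_k):
    """Covariance transformation synthesis matrix A_k for a position measurement.

    Built from the state measurement value (r_k, theta_k); it is the Jacobian
    of the polar-to-Cartesian coordinate map. For larger state vectors the
    remaining elements would be filled with 0s and 1s, as the text describes.
    """
    return np.array([[np.cos(theta_k), -r_k * np.sin(theta_k)],
                     [np.sin(theta_k),  r_k * np.cos(theta_k)]])
```

With this A_k, a polar-coordinate covariance R_m is mapped into Cartesian coordinates as A_k R_m A_k′ (the first-order propagation of uncertainty through the coordinate change).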
  • Performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix includes adding the two matrices by the following formula:
  • R_all = R_0 + A_k R_m A_k′;
  • the R_0 is the target covariance matrix;
  • the A_k R_m A_k′ is the converted measurement equation covariance matrix, where R_m is the measurement equation covariance matrix;
  • the R_k and the θ_k are the state measurement values;
  • the A_k is the covariance transformation synthesis matrix;
  • the A_k′ is the transpose of the covariance transformation synthesis matrix.
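The synthesis formula above can be sketched end to end. The concrete form of A_k (the polar-to-Cartesian Jacobian at the measured range and azimuth) and the numeric variances are assumptions for illustration; the structure R_all = R_0 + A_k R_m A_k′ follows the text.

```python
import numpy as np

def synthesized_covariance(R0, R_m, r_k, theta_k):
    """Synthesize the target covariance matrix.

    R0:  target covariance matrix (Cartesian), from the centroid-error estimate
    R_m: measurement equation covariance matrix (polar: range, azimuth)
    r_k, theta_k: state measurement values used to build A_k
    """
    A_k = np.array([[np.cos(theta_k), -r_k * np.sin(theta_k)],
                    [np.sin(theta_k),  r_k * np.cos(theta_k)]])
    return R0 + A_k @ R_m @ A_k.T  # R_all = R_0 + A_k R_m A_k'

# Assumed example values: range variance in m^2, azimuth variance in rad^2
R_m = np.diag([0.04, 1e-4])
R0 = 0.01 * np.eye(2)      # example target covariance from centroid estimation
R_all = synthesized_covariance(R0, R_m, r_k=20.0, theta_k=np.pi / 6)
```

R_all would then replace the plain measurement covariance R in the filter update, so both the radar's measurement deviation and the centroid-estimation error are reflected in the innovation covariance.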
  • the present application provides a target tracking device, the device comprising:
  • an acquisition module for acquiring point cloud data of the target object, and performing centroid estimation on the point cloud data; acquiring a target covariance matrix; wherein, the target covariance matrix represents a centroid position error introduced when performing the centroid estimation; and,
  • the target tracking module is configured to perform target tracking on the target object based on the target covariance matrix to obtain a target tracking result.
  • the acquisition module is used to acquire the target tracking result of the target object at the previous moment
  • state prediction is performed on the target tracking result at the previous moment to obtain the target tracking prediction value at the current moment;
  • the target covariance matrix is obtained based on the target tracking predicted value and the state measurement value.
  • The acquisition module is configured to use a preset covariance matrix as the target covariance matrix when the number of data associations is less than or equal to a preset value. The number of data associations represents the number of times that the difference between the centroid measurement value and the state predicted value is less than a threshold; the centroid measurement value is obtained by performing centroid estimation on the point cloud data of the target object at the current moment, and the state predicted value is obtained by prediction from the centroid measurement value at the previous moment, which is obtained by performing centroid estimation on the point cloud data of the target object at the previous moment.
  • the obtaining module is configured to obtain the target covariance matrix based on the target tracking predicted value and the state measurement value when the data association times are greater than the preset value;
  • the target tracking prediction value is obtained by performing state prediction on the target tracking result at the previous moment, and the state measurement value is obtained by performing centroid estimation on the point cloud data.
  • the obtaining module is specifically used for:
  • the target covariance matrix is obtained by the following formula:
  • R_1 = (Z_k - Ẑ_k)(Z_k - Ẑ_k)′;
  • the Z_k is the state measurement value;
  • the Ẑ_k is the target tracking predicted value;
  • the R_1 is the target covariance matrix.
  • The acquisition module is also used to acquire a measurement equation covariance matrix, where the measurement equation covariance matrix represents the measurement deviation of the radar system.
  • the device also includes:
  • a matrix synthesis module for synthesizing the measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix
  • the target tracking module is specifically configured to: perform target tracking on the target object based on the synthesized target covariance matrix.
  • the measurement equation covariance matrix is a matrix represented in a polar coordinate system
  • the target covariance matrix is a matrix represented in a Cartesian coordinate system
  • the matrix synthesis module is specifically used for:
  • converting the measurement equation covariance matrix into a matrix represented in a Cartesian coordinate system, and performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix.
  • the matrix synthesis module is specifically used for:
  • a covariance transformation synthesis matrix is obtained; wherein, the covariance transformation synthesis matrix includes a plurality of elements, and some elements in the plurality of elements are generated based on the state measurement value;
  • the measurement equation covariance matrix is transformed into a matrix represented in a Cartesian coordinate system.
  • the matrix synthesis module is specifically used for:
  • the converted measurement equation covariance matrix and the target covariance matrix are matrix added by the following formula:
  • R_all = R_0 + A_k R_m A_k′;
  • the R_0 is the target covariance matrix;
  • the A_k R_m A_k′ is the converted measurement equation covariance matrix, where R_m is the measurement equation covariance matrix;
  • the R_k and the θ_k are the state measurement values;
  • the A_k is the covariance transformation synthesis matrix;
  • the A_k′ is the transpose of the covariance transformation synthesis matrix.
  • The present application provides a target tracking device, comprising a processor and a memory. The memory stores program codes; when the processor calls the program codes in the memory, the target tracking device executes any of the methods of the above-mentioned first aspect.
  • The present application provides an autonomous vehicle, which may include a processor coupled to a memory, the memory storing program instructions; the method of the first aspect is implemented when the program instructions stored in the memory are executed by the processor.
  • the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, causes the computer to execute the method of the first aspect.
  • the present application provides a circuit system including a processing circuit configured to perform the method of any one of the above-mentioned first aspect or various optional implementations of the first aspect.
  • the present application provides a computer program that, when run on a computer, causes the computer to execute the method of the first aspect.
  • the present application provides a chip system
  • The chip system includes a processor for supporting a server or a target tracking device to implement the functions involved in the above aspects, for example, sending or processing the data and/or information involved in the above methods.
  • the chip system further includes a memory for storing necessary program instructions and data of the server or the communication device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • An embodiment of the present application provides a target tracking method.
  • The method includes: acquiring point cloud data of a target object and performing centroid estimation on the point cloud data; acquiring a target covariance matrix; and, based on the target covariance matrix, performing target tracking on the target object to obtain a target tracking result.
  • the centroid estimation of point cloud data of the same target object in different frames may experience positional disturbance.
  • An error covariance matrix that only considers the measurement error related to the point cloud measurement performance of the radar system cannot accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, the accuracy of target tracking is reduced.
  • The target covariance matrix can relatively accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, if the target object is tracked based on the target covariance matrix, the accuracy of target tracking can be improved.
  • FIG. 1 is a functional block diagram of a vehicle 100 according to an embodiment of the present invention.
  • FIG. 2a is a schematic diagram of an application scenario provided by an embodiment of the present invention.
  • FIG. 2b is a schematic diagram of an application scenario provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a computer system provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of point cloud data centroid estimation.
  • FIG. 5 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a target tracking apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a computer program product provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a target tracking apparatus provided by an embodiment of the present application.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be components.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between 2 or more computers.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • A component may communicate through local and/or remote processes, for example, based on a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
  • FIG. 1 is a functional block diagram of a vehicle 100 according to an embodiment of the present invention.
  • the vehicle 100 is configured in a fully or partially autonomous driving mode.
  • The vehicle 100 can control itself while in an autonomous driving mode: it can determine the current state of the vehicle and its surroundings, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle performs the possible behavior, and control the vehicle 100 based on the determined information.
  • the vehicle 100 may be placed to operate without human interaction.
  • Vehicle 100 may include various subsystems, such as travel system 102 , sensor system 104 , control system 106 , one or more peripherals 108 and power supply 110 , computer system 112 , and user interface 116 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components that provide powered motion for the vehicle 100 .
  • propulsion system 102 may include engine 118 , energy source 119 , transmission 120 , and wheels/tires 121 .
  • The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine.
  • Engine 118 converts energy source 119 into mechanical energy.
  • Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy to other systems of the vehicle 100 .
  • Transmission 120 may transmit mechanical power from engine 118 to wheels 121 .
  • Transmission 120 may include a gearbox, a differential, and a driveshaft.
  • transmission 120 may also include other devices, such as clutches.
  • the drive shaft may include one or more axles that may be coupled to one or more wheels 121 .
  • the sensor system 104 may include several sensors that sense information about the environment surrounding the vehicle 100 .
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a GPS system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar system 126, a laser rangefinder 128 and camera 130 .
  • the sensor system 104 may also include sensors of the internal systems of the vehicle 100 being monitored (eg, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). This detection and identification is a critical function for the safe operation of the autonomous vehicle 100 .
  • With the development of advanced driver assistance systems (ADAS) and unmanned driving technologies, higher requirements are placed on the performance of the radar system 126, such as distance and angular resolution.
  • the improvement of the distance and angular resolution of the vehicle-mounted radar system 126 enables the vehicle-mounted radar system 126 to detect multiple measurement points for a target object when imaging the target to form high-resolution point cloud data.
  • The radar system 126 in this application may also be referred to as a point cloud imaging radar.
  • the positioning system 122 may be used to estimate the geographic location of the vehicle 100 .
  • the IMU 124 is used to sense position and orientation changes of the vehicle 100 based on inertial acceleration.
  • IMU 124 may be a combination of an accelerometer and a gyroscope.
  • Radar system 126 may utilize radio signals to sense objects within the surrounding environment of vehicle 100 . In some embodiments, in addition to sensing objects, radar system 126 may be used to sense the speed and/or heading of objects.
  • the laser rangefinder 128 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
  • Camera 130 may be used to capture multiple images of the surrounding environment of vehicle 100 .
  • Camera 130 may be a still camera or a video camera.
  • Control system 106 controls the operation of the vehicle 100 and its components.
  • Control system 106 may include various elements including steering system 132 , throttle 134 , braking unit 136 , sensor fusion algorithms 138 , computer vision system 140 , route control system 142 , and obstacle avoidance system 144 .
  • the steering system 132 is operable to adjust the heading of the vehicle 100 .
  • it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the vehicle 100 .
  • the braking unit 136 is used to control the deceleration of the vehicle 100 .
  • the braking unit 136 may use friction to slow the wheels 121 .
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electrical current.
  • the braking unit 136 may also take other forms to slow the wheels 121 to control the speed of the vehicle 100 .
  • Computer vision system 140 may process and analyze images captured by camera 130 in order to identify objects and/or features in the environment surrounding vehicle 100 .
  • the objects and/or features may include traffic signals, road boundaries and obstacles.
  • Computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and the like.
  • the route control system 142 is used to determine the travel route of the vehicle 100 .
  • the route control system 142 may combine data from the sensors 138 , the GPS 122 , and one or more predetermined maps to determine a driving route for the vehicle 100 .
  • the obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise traverse potential obstacles in the environment of the vehicle 100 .
  • control system 106 may additionally or alternatively include components other than those shown and described. Alternatively, some of the components shown above may be reduced.
  • Peripherals 108 may include a wireless communication system 146 , an onboard computer 148 , a microphone 150 and/or a speaker 152 .
  • peripherals 108 provide a means for a user of vehicle 100 to interact with user interface 116 .
  • the onboard computer 148 may provide information to the user of the vehicle 100 .
  • User interface 116 may also operate on-board computer 148 to receive user input.
  • the onboard computer 148 can be operated via a touch screen.
  • peripheral devices 108 may provide a means for vehicle 100 to communicate with other devices located within the vehicle.
  • microphone 150 may receive audio (eg, voice commands or other audio input) from a user of vehicle 100 .
  • speakers 152 may output audio to a user of vehicle 100 .
  • Wireless communication system 146 may wirelessly communicate with one or more devices, either directly or via a communication network.
  • wireless communication system 146 may use 3G cellular communication, such as code division multiple access (CDMA), EV-DO, global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as LTE; or 5G cellular communication.
  • the wireless communication system 146 may communicate with a wireless local area network (WLAN) using WiFi.
  • the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
  • Other wireless protocols may also be used, such as various vehicle communication systems; for example, wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the vehicle 100 .
  • the power source 110 may be a rechargeable lithium-ion or lead-acid battery.
  • One or more battery packs of such a battery may be configured as a power source to provide power to various components of the vehicle 100 .
  • power source 110 and energy source 119 may be implemented together, such as in some all-electric vehicles.
  • Computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer-readable medium such as data storage device 114 .
  • Computer system 112 may also be multiple computing devices that control individual components or subsystems of vehicle 100 in a distributed fashion.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • although FIG. 1 functionally illustrates the processor, memory, and other elements of the computer 110 in the same block, one of ordinary skill in the art will understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical enclosure.
  • the memory may be a hard drive or other storage medium located within an enclosure other than computer 110 .
  • reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components, may each have their own processors that only perform computations related to component-specific functions.
  • the processor 113 may acquire data collected by the sensor system (eg, point cloud data collected by the radar system 126), motion model parameters, etc., and then perform target tracking based on the point cloud data and the motion model parameters.
  • a processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • data storage 114 may include instructions 115 (eg, program logic) executable by processor 113 to perform various functions of vehicle 100 , including those described above.
  • Data storage 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or issue control commands to one or more of propulsion system 102 , sensor system 104 , control system 106 , and peripherals 108 .
  • the data storage device 114 may store data such as road maps, route information, vehicle location, direction, speed, and other vehicle data, among other information. Such information may be used by the vehicle 100 and the computer system 112 while the vehicle 100 is in autonomous, semi-autonomous, and/or manual modes.
  • a user interface 116 for providing information to or receiving information from a user of the vehicle 100 .
  • the user interface 116 may include one or more input/output devices within the set of peripheral devices 108 , such as a wireless communication system 146 , an onboard computer 148 , a microphone 150 and a speaker 152 .
  • Computer system 112 may control functions of vehicle 100 based on input received from various subsystems (eg, travel system 102 , sensor system 104 , and control system 106 ) and from user interface 116 .
  • computer system 112 may utilize input from control system 106 in order to control steering unit 132 to avoid obstacles detected by sensor system 104 and obstacle avoidance system 144 .
  • computer system 112 is operable to provide control of various aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • data storage device 114 may exist partially or completely separate from vehicle 100 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation on the embodiment of the present invention.
  • a self-driving car traveling on a road can recognize objects within its surroundings to determine adjustments to the current speed.
  • the objects may be other vehicles, traffic control devices, or other types of objects.
  • each identified object may be considered independently, and its respective characteristics, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
  • the autonomous vehicle 100 , or a computing device associated with the autonomous vehicle 100 , may predict the behavior of the identified objects based on the characteristics of the identified objects and the state of the surrounding environment (eg, traffic, rain, ice on the road, etc.).
  • each identified object is dependent on the behavior of the other, so it is also possible to predict the behavior of a single identified object by considering all identified objects together.
  • the vehicle 100 can adjust its speed based on the predicted behavior of the identified object.
  • the self-driving car can determine what stable state the vehicle will need to adjust to (eg, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 in the road being traveled, the curvature of the road, the proximity of static and dynamic objects, and the like.
  • the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the self-driving car (eg, cars in adjacent lanes on the road).
  • the above-mentioned vehicle 100 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.
  • the embodiments of the invention are not particularly limited.
  • the computer system 101 includes a processor 103 coupled to a system bus 105 .
  • the processor 103 may be one or more processors, each of which may include one or more processor cores.
  • a video adapter 107 which can drive a display 109, is coupled to the system bus 105.
  • System bus 105 is coupled to input-output (I/O) bus 113 through bus bridge 111 .
  • I/O interface 115 is coupled to the I/O bus. I/O interface 115 communicates with various I/O devices, such as input device 117 (eg, keyboard, mouse, touch screen, etc.) and media tray 121 (eg, compact disc read-only memory (CD-ROM), multimedia interface, etc.).
  • Transceiver 123 (which can send and/or receive radio communication signals), camera 155 (which can capture still and moving digital video images) and external USB interface 125 .
  • the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination thereof.
  • the processor may be a special purpose device such as an ASIC.
  • the processor 103 may be a neural network processor or a combination of a neural network processor and the above-mentioned conventional processors.
  • computer system 101 may be located remotely from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle.
  • some of the processes herein are performed on a processor disposed within an autonomous vehicle, others are performed by a remote processor, including taking actions required to perform a single maneuver.
  • Network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and the like.
  • the hard disk drive interface is coupled to the system bus 105 .
  • the hard drive interface is connected to the hard drive.
  • System memory 135 is coupled to system bus 105 . Data running in system memory 135 may include operating system 137 and application programs 143 of computer 101 .
  • the operating system includes a Shell 139 and a kernel 141 .
  • Shell 139 is an interface between the user and the operating system's kernel.
  • the shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: waiting for user input, interpreting user input to the operating system, and processing various operating system output.
  • Kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources.
  • the kernel 141 directly interacts with the hardware, and the operating system kernel usually runs processes, provides inter-process communication, provides CPU time slice management, interrupts, memory management, IO management, and the like.
  • Application 143 includes programs that control the autonomous driving of the car, for example, programs that manage the interaction of the autonomous car with obstacles on the road, programs that control the route or speed of the autonomous car, and programs that control the interaction of the autonomous car with other autonomous vehicles on the road.
  • Application 143 also exists on the system of the software deployment server 149.
  • computer system 101 may download application 143 from software deployment server 149 when application 147 needs to be executed.
  • Sensor 153 is associated with computer system 101 .
  • the sensor 153 is used to detect the environment around the computer 101 .
  • the sensor 153 can detect animals, cars, obstacles and pedestrian crossings, etc.
  • the sensor 153 can also detect the environment around the above-mentioned animals, cars, obstacles, and pedestrian crossings, such as the environment around an animal, for example, other animals present around it, weather conditions, ambient light levels, etc.
  • when the computer 101 is located on a self-driving car, the sensor may be a radar system or the like.
  • Computer system 112 may also receive information from or transfer information to other computer systems. Alternatively, sensor data collected from the sensor system 104 of the vehicle 100 may be transferred to another computer for processing of the data. As shown in FIG. 3, data from computer system 112 may be transmitted via a network to cloud-side computer 720 for further processing.
  • Networks and intermediate nodes may include a variety of configurations and protocols, including the Internet, the World Wide Web, an intranet, a virtual private network, a wide area network, a local area network, a private network using one or more companies' proprietary communication protocols, Ethernet, wireless local area networks (wireless fidelity, WiFi), and hypertext transfer protocol (HTTP), and various combinations of the foregoing. Such communication may be accomplished by any device capable of transferring data to and from other computers, such as modems and wireless interfaces.
  • computer 720 may include a server having multiple computers, such as a load balancing server farm, that exchange information with different nodes of the network for the purpose of receiving, processing, and transmitting data from computer system 112.
  • the server may be configured similarly to computer system 110 , with processor 730 , memory 740 , instructions 750 and data 760 .
  • Example Scenario 2 Video Surveillance
  • the monitoring system is the basic equipment in video criminal investigation, security early warning, and traffic command.
  • current target tracking technology relies on a monitoring system responsible for managing multiple cameras in an area.
  • the first step of target tracking is to determine the position of the target, and then effectively dispatch the camera to track the target according to the position of the target.
  • the surveillance management system 200 may include a server 201 and a radar system 202 .
  • the radar system 202 can collect point cloud data within its detection range; the server 201 can be used to manage one or more radar systems in a certain area, receive the point cloud data returned by the radar system 202 , and, after processing and analyzing the point cloud data, calculate the position information of the target and reasonably schedule the radar system 202 to track the target.
  • the monitoring management system 200 may also include a display screen 203 for presenting the target tracking results in real time.
  • the target object may be a person, an animal, or a vehicle
  • the target tracking method provided in this embodiment of the application can be used for tracking targets such as a human body, an animal, or a vehicle.
  • the vehicle in FIG. 1, the processor in the computer system in FIG. 2a, and the server in FIG. 2b can obtain the data collected by the sensor system (such as a radar system) and the instructions related to target tracking in the storage device, and process the data based on the acquired instructions to obtain the target tracking result.
  • the vehicle can perform automatic driving and other related operations based on the target tracking result.
  • target tracking refers to analyzing the point cloud data collected by the radar system in combination with the motion model and the error covariance matrix to obtain the positions and motion states of obstacles around the vehicle.
  • it is first necessary to cluster the point cloud data collected by the radar system (the purpose is to distinguish the point cloud data of each object, to obtain the point cloud data corresponding to each target object) and perform centroid estimation (the purpose is to perform centroid estimation on the point cloud data of each target object, to obtain the centroid measurement value corresponding to each target object); further, the centroid measurement value can be obtained.
  • the centroid measurement value of the target object can be obtained in each of the consecutive frames, and the input of the target tracking is the above-obtained centroid measurement value of each frame.
  • the centroid estimation of point cloud data of the same target object in different frames may experience position disturbance due to problems such as vehicle turning or occlusion.
  • when tracking the target, the error covariance matrix only considers the measurement error related to the radar system's point cloud measurement performance and cannot accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, it will reduce the accuracy of target tracking, and may even lead to filter divergence and track breakage.
  • the signal transmitter of the radar system (the letter "T" in Fig. 4) sends the radar signal to the target object (the vehicle in Fig. 4), and the signal receiver (the letter "R" in Fig. 4) receives the signal reflected by the target object and can process it to obtain point cloud data, wherein when the target object is at vehicle position 1, the corresponding point cloud data is 401, and when the target object drives to vehicle position 2, the corresponding point cloud data is 402.
  • the deviation between the estimated centroid position obtained from the centroid estimation of the point cloud data 401 and the true position of the centroid is small; because the target object turns while driving, the estimated centroid position obtained from the centroid estimation of the point cloud data 402 has a large deviation from the true position of the centroid.
  • an embodiment of the present application provides a target tracking method.
  • FIG. 5 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
  • the target tracking method provided by the embodiment of the present application includes:
  • the radar system may scan the target object to obtain point cloud data of the target object.
  • the point cloud data is obtained by scanning the target object with the radar system, and the point cloud data can be recorded in the form of points, and each point can include coordinates and motion states.
  • the coordinates can represent the relative position of the point relative to the radar system.
  • when the collected point cloud data is expressed in the polar coordinate system, the coordinates included in each point can include the distance between the point and the radar system, and the bearing of the point;
  • the motion state can represent the motion speed of each point.
  • when the collected point cloud data is expressed in the polar coordinate system, the motion state included in each point can include the radial speed of the point relative to the vehicle.
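As a concrete illustration of the polar representation described above, a single radar point carrying range, bearing, and radial speed can be converted to Cartesian coordinates as follows (a minimal sketch; the function name and field layout are illustrative, not taken from the embodiment):

```python
import math

def polar_point_to_cartesian(r, theta, radial_speed):
    """Convert one radar point from polar form (range r in meters,
    bearing theta in radians) to Cartesian x/y; the radial speed
    measured by the radar is carried through unchanged."""
    x = r * math.cos(theta)
    y = r * math.sin(theta)
    return x, y, radial_speed

# A point 10 m away at bearing 0 rad, approaching at 2.5 m/s.
x, y, v = polar_point_to_cartesian(10.0, 0.0, 2.5)
```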
  • centroid estimation may be performed on the point cloud data.
  • the average value of the coordinate positions corresponding to each point included in the point cloud data can be calculated based on the average value calculation method, and the average value can be used as the position of the centroid;
  • similarly, the average value of the motion rates of the points can be calculated, and this average value is taken as the movement rate of the centroid.
  • the embodiment of the present application does not limit the specific operation manner of centroid estimation.
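The mean-value centroid estimation described above can be sketched as follows (illustrative only, since the embodiment does not limit the specific estimation method):

```python
import numpy as np

def estimate_centroid(points):
    """points: (N, 3) array of [x, y, speed] rows belonging to one
    clustered target object. Returns the average position as the
    centroid position and the average speed as the centroid's
    movement rate."""
    pts = np.asarray(points, dtype=float)
    centroid_xy = pts[:, :2].mean(axis=0)  # average of point coordinates
    centroid_speed = pts[:, 2].mean()      # average of point motion rates
    return centroid_xy, centroid_speed

xy, v = estimate_centroid([[0.0, 0.0, 1.0], [2.0, 4.0, 3.0]])
```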
  • the point cloud data of the target object collected by the radar system can be acquired, and the centroid estimation of the point cloud data can be performed to obtain the state measurement value.
  • the point cloud data of the target object can be obtained by a device on the terminal side (for example, the processor of the vehicle), and the centroid of the point cloud data can be estimated;
  • the device sends the point cloud data of the target object to the server on the cloud side, and the server can obtain the point cloud data of the target object and perform centroid estimation on the point cloud data.
  • a target covariance matrix may be obtained; wherein, the target covariance matrix represents a centroid position error introduced during centroid estimation.
  • the result of centroid estimation can be used for track initiation; wherein, track initiation refers to the process of confirming the target track before the system enters stable target tracking. The goal is to start the correct track with the highest possible probability while suppressing false tracks. If the track starts incorrectly, correct tracking of the target and information fusion cannot be achieved; if there are too many false tracks at the beginning, a large computational burden is placed on subsequent comprehensive data processing, affecting its efficiency and correctness.
  • the track initiation algorithm can include, but is not limited to, two categories: one is a sequential processing technology represented by heuristic algorithms and logical rules, and the other is a batch processing technology represented by Hough transform.
  • a track corresponding to a target object can be determined through track initiation, and the track can include the result of centroid estimation for each frame, where the result of centroid estimation can be a centroid measurement value. The centroid measurement value of each frame can then be predicted based on a motion model to obtain the state predicted value of the next frame, and the state predicted value of the next frame can be compared with the corresponding centroid measurement value of the next frame. If the error is within a preset range, the association is considered successful, and the current number of data associations is recorded for the track; the association count starts at 0 and is increased by one each time a data association succeeds, so that the number of data associations N a is recorded for the track.
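The association counting described above can be sketched as follows; the gate threshold and the use of an absolute-difference error measure are assumptions for illustration:

```python
def update_association_count(n_a, predicted_xy, measured_xy, gate=1.0):
    """Increase the association count N_a by one when the state
    predicted value and the centroid measurement value of the next
    frame agree within the preset range (the gate)."""
    error = (abs(predicted_xy[0] - measured_xy[0])
             + abs(predicted_xy[1] - measured_xy[1]))
    return n_a + 1 if error <= gate else n_a

n_a = 0  # the association count starts at 0
n_a = update_association_count(n_a, (1.0, 1.0), (1.1, 0.9))  # succeeds
n_a = update_association_count(n_a, (1.0, 1.0), (5.0, 5.0))  # fails the gate
```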
  • different target covariance matrices can be obtained depending on the current association count N a, as described below:
  • in this embodiment of the application, if the currently recorded association count N a is less than or equal to a preset value N, it is considered that the current target tracking process is still at an initial stage, the covariance matrices used for filtering are not yet close to the true (ie, correct) values, and target tracking at the current moment cannot be performed based on historical target tracking results; that is, if the currently recorded association count N a is less than or equal to the preset value N, target tracking has not converged, and a target covariance matrix set by empirical values can be used for target tracking, wherein the target covariance matrix can represent the centroid position error introduced during centroid estimation.
  • an initial value of a covariance matrix representing the centroid position error introduced during centroid estimation may be determined as the target covariance matrix.
  • the target covariance matrix may be a preset covariance matrix, for example, may be set based on an empirical value.
  • for example, suppose the centroid position error introduced during centroid estimation is estimated to be 5 meters, and the error in the x direction and the error in the y direction in the Cartesian coordinate system are considered uncorrelated (that is, the centroid position error in the x direction will not affect the centroid position error in the y direction, and the centroid position error in the y direction will not affect the centroid position error in the x direction); then the preset target covariance matrix can be set as a diagonal matrix whose off-diagonal elements are zero, where the target covariance matrix is represented in the Cartesian coordinate system.
  • if instead the centroid position error introduced during centroid estimation is estimated to be 5 meters and the error in the x direction and the error in the y direction in the Cartesian coordinate system are considered correlated (that is, the centroid position error in the x direction will affect the centroid position error in the y direction, and the centroid position error in the y direction will affect the centroid position error in the x direction), the off-diagonal elements can be set according to the influence of one direction's error on the other, which is not limited here.
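For the uncorrelated case above, the empirically preset target covariance matrix could be built as a diagonal matrix; treating the 5-meter error as a standard deviation (an assumption here, since the patent does not state the exact preset values) gives:

```python
import numpy as np

def preset_target_covariance(position_error_m=5.0):
    """Preset target covariance matrix in Cartesian coordinates for
    uncorrelated x/y centroid position errors: the diagonal holds the
    error variance, the off-diagonal elements are zero."""
    var = position_error_m ** 2
    return np.diag([var, var])

R0 = preset_target_covariance()
```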
  • the target tracking at the current moment can be performed based on the historical target tracking results.
  • the target covariance matrix can be obtained based on the target tracking predicted value and the state measurement value; specifically, the target tracking result of the target object at the previous moment can be obtained; based on the motion model, state prediction is performed on the target tracking result at the previous moment to obtain the target tracking predicted value at the current moment; the state measurement value at the current moment is obtained, which is obtained by estimating the centroid of the point cloud data; and based on the target tracking predicted value and the state measurement value, the target covariance matrix is obtained.
  • the target covariance matrix may be continuously updated according to the progress of target tracking, and each iteration (that is, each time the target covariance matrix at the current moment is calculated) is based on the target tracking prediction value and The state measurement value at the current moment is updated. With the iteration of the target tracking process, the target covariance matrix will gradually approach or reach the true value, thereby making the target tracking result more accurate.
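The motion-model state prediction step can be sketched with a constant-velocity model (one common choice; the embodiment does not fix a particular motion model):

```python
import numpy as np

def predict_state(state, dt):
    """Propagate a [x, y, vx, vy] state forward by dt seconds under a
    constant-velocity motion model to obtain the target tracking
    predicted value at the current moment."""
    F = np.array([[1.0, 0.0, dt,  0.0],
                  [0.0, 1.0, 0.0, dt ],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ state

pred = predict_state(np.array([0.0, 0.0, 1.0, 2.0]), dt=0.5)
```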
  • the target covariance matrix may be determined according to the error between the target tracking predicted value and the state measurement value at the current moment; the error may be calculated by performing a matrix subtraction operation on the target tracking predicted value and the state measurement value.
  • the target tracking prediction value may include the position prediction value of the centroid
  • the state measurement value may include the position measurement value of the centroid.
  • the position predicted value includes the position predicted value in the x direction and the position predicted value in the y direction, and the position measurement value includes the position measurement value in the x direction and the position measurement value in the y direction. The difference between the position predicted value in the x direction and the position measurement value in the x direction, and the difference between the position predicted value in the y direction and the position measurement value in the y direction, can then be calculated, and each difference (or a preset multiple of the difference) can be used as a diagonal element of the target covariance matrix.
  • the target covariance matrix can be obtained by the following formula based on the target tracking predicted value and the state measurement value:
  • R 1 = diag(λ(x̂ k − x k ), λ(ŷ k − y k ))
  • x̂ k and ŷ k are target tracking predicted values, x k and y k are state measurement values, and λ is the preset multiple
  • R 1 is the target covariance matrix
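The construction described above, with the x and y position differences (optionally scaled by a preset multiple) placed on the diagonal, can be sketched as:

```python
import numpy as np

def target_covariance_from_residual(predicted_xy, measured_xy, scale=1.0):
    """Build the target covariance matrix from the differences between
    the target tracking predicted value and the state measurement
    value; `scale` is the preset multiple (value assumed here)."""
    dx = abs(predicted_xy[0] - measured_xy[0])
    dy = abs(predicted_xy[1] - measured_xy[1])
    return np.diag([scale * dx, scale * dy])

R1 = target_covariance_from_residual((10.0, 4.0), (9.0, 6.0))
```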
  • the centroid estimation of point cloud data of the same target object in different frames may experience positional disturbance.
  • the error covariance matrix only considers the measurement error related to the point cloud measurement performance of the radar system and cannot accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, the accuracy of target tracking will be reduced.
  • the target covariance matrix can relatively accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, if the target object is tracked based on the target covariance matrix, the accuracy of target tracking can be improved.
  • for target tracking at the current moment, the target covariance matrix obtained above can be fused with the measurement equation covariance matrix; that is, the target covariance matrix and the measurement equation covariance matrix are synthesized to obtain the synthesized target covariance matrix, and the synthesized target covariance matrix is used as the input data for target tracking at the current moment.
  • the target tracking is performed on the target object based on the target covariance matrix.
  • the target covariance matrix is used as part of the input data to perform target tracking on the target object.
  • the measurement equation covariance matrix may be obtained, wherein the measurement equation covariance matrix represents the measurement deviation of the radar system; the measurement equation covariance matrix and the target covariance matrix are synthesized , to get the synthesized target covariance matrix.
  • the point cloud data collected by the radar system is usually expressed in the polar coordinate system; therefore, the measurement equation covariance matrix related to the performance of the radar system is usually a matrix represented in the polar coordinate system, while the target covariance matrix is a matrix represented in the Cartesian coordinate system. In this case, the measurement equation covariance matrix can be converted to a matrix represented in the Cartesian coordinate system, and the converted measurement equation covariance matrix and the target covariance matrix are matrix-added to obtain the synthesized target covariance matrix.
  • a covariance transformation synthesis matrix may be constructed first, and the covariance transformation synthesis matrix may be used to convert the covariance matrix from a polar coordinate system to an expression in a Cartesian coordinate system; in one implementation, The covariance transformation synthesis matrix is related to the state measurement value at the current moment.
  • part of the elements of the covariance transformation synthesis matrix can be constructed from the state measurement value at the current moment, and 0 and 1 can be used to fill the other elements of the covariance transformation synthesis matrix; specifically, cos θ k and −R k sin θ k can be used as the elements of the first row of the covariance transformation synthesis matrix, and sin θ k and R k cos θ k as the elements of the second row of the covariance transformation synthesis matrix.
  • the covariance transformation synthesis matrix can be:
  • A k = [cos θ k , −R k sin θ k ; sin θ k , R k cos θ k ]
  • covariance transformation synthesis matrix is only an illustration. In practical applications, the dimension of the covariance transformation synthesis matrix can be flexibly adjusted, which is not limited here.
  • the measurement equation covariance matrix can be transformed into a matrix represented in the Cartesian coordinate system based on the covariance transformation synthesis matrix, and the transformed measurement equation covariance matrix and the target covariance matrix are matrix-added to obtain the synthesized target covariance matrix.
  • the target covariance matrix and the measurement equation covariance matrix can be synthesized by the following formula to obtain the synthesized target covariance matrix:
  • R all = R 0 + A k R m A k ′;
  • matrix addition can be understood as matrices with the same size, and the corresponding elements are added to obtain a new matrix.
  • for example, if matrix A is [a 11 , a 12 ; a 21 , a 22 ] and matrix B is [b 11 , b 12 ; b 21 , b 22 ], then the matrix C obtained after matrix A and matrix B are added is [a 11 +b 11 , a 12 +b 12 ; a 21 +b 21 , a 22 +b 22 ].
  • R all is the synthesized target covariance matrix
  • R 0 is the target covariance matrix
  • A k R m A k ′ is the transformed measurement equation covariance matrix
  • R k and θ k are the state measurement values
  • A k is the covariance transformation synthesis matrix
  • A k ′ is the transpose of the covariance transformation synthesis matrix.
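As an illustrative sketch (not part of the claimed method), the synthesis R all = R 0 + A k R m A k ′ described above can be written in a few lines of numpy; the matrix A k below uses the standard polar-to-Cartesian Jacobian evaluated at the state measurement (R k , θ k ), and all numeric values are assumed:

```python
import numpy as np

def synthesize_covariance(R0, Rm, R_k, theta_k):
    """Convert the polar-coordinate measurement equation covariance Rm into
    Cartesian coordinates via the covariance transformation synthesis matrix
    A_k, then matrix-add it to the target covariance R0."""
    A_k = np.array([
        [np.cos(theta_k), -R_k * np.sin(theta_k)],
        [np.sin(theta_k),  R_k * np.cos(theta_k)],
    ])
    return R0 + A_k @ Rm @ A_k.T    # R_all = R_0 + A_k R_m A_k'

R0 = np.diag([0.25, 0.25])   # target covariance (Cartesian), assumed values
Rm = np.diag([0.04, 1e-4])   # range/azimuth measurement covariance (polar), assumed
R_all = synthesize_covariance(R0, Rm, R_k=50.0, theta_k=np.pi / 6)
```

Because A k R m A k ′ is symmetric positive semi-definite, the synthesized R all never shrinks the uncertainty already encoded in R 0.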
  • the synthesized target covariance matrix includes information related to the centroid position error introduced during centroid estimation, and the synthesized target covariance matrix relatively accurately quantifies the error caused by the centroid estimation of the point cloud data. Therefore, tracking the target object based on the synthesized target covariance matrix can improve the accuracy of target tracking.
  • the target object can be tracked based on the target covariance matrix; in more detail, the target object can be tracked based on the synthesized target covariance matrix. The following describes how to track the target object based on the synthesized target covariance matrix:
  • a state equation covariance matrix P may be obtained, wherein the state equation covariance matrix P may be a matrix composed of the covariances between the states: the diagonal elements are the variances of the states, and the remaining elements are the covariances between the corresponding pairs of states. Therefore, P can be a multidimensional square matrix whose dimension equals the number of states, and P can be a symmetric square matrix.
  • For example, suppose the state X contains two quantities: position p and velocity v. Since the covariance of a variable with itself is its variance, the elements on the diagonal are the variances of p and v, respectively, and the other two elements are the covariance between p and v.
  • Since cov(p, v) = cov(v, p), the state equation covariance matrix is symmetric.
  • the state equation covariance matrix P is an iteratively updated quantity. After each round of prediction and update, P is updated with a new value. Therefore, P can be initialized with a rough estimate, and the accuracy of the initialization does not need to be strict: over several iterations, the estimate will be tightened closer and closer to the true value.
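For the two-state example above (position p and velocity v), a rough initialization of P might look as follows; the variance and covariance values are assumed placeholders:

```python
import numpy as np

# State X = [p, v]; the diagonal of P holds var(p) and var(v),
# and the off-diagonal elements hold cov(p, v) = cov(v, p).
var_p, var_v, cov_pv = 4.0, 1.0, 0.5   # rough initial guesses (assumed)
P = np.array([[var_p, cov_pv],
              [cov_pv, var_v]])        # 2x2 symmetric square matrix
```

The exact numbers matter little: the filter iterations update P toward values consistent with the data.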
  • the state measurement value obtained by performing centroid estimation at the last moment may be obtained;
  • the state measurement value may be X(k/k) = [x k …]
  • the state at the current moment may be predicted by using the motion model and the state measurement value obtained by performing centroid estimation at the previous moment to obtain the state prediction value at the current moment.
  • F(k) represents the motion model
  • V(k) represents the state noise
  • the state prediction value can be calculated by the following formula: X(k+1/k) = F(k)X(k/k);
  • X(k+1/k) represents the state prediction value at the current moment.
  • the motion model is an important part of the design of the tracking filtering algorithm. Selecting a reasonable motion model helps to accurately predict the future state or trajectory of the tracked target, which is an important condition for accurate tracking control.
  • the motion model is mainly represented by a state space model, and the motion model can express the motion law of the target.
  • the embodiment of the present application does not limit the specific motion model type.
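As one concrete example, the CTRV (constant turn rate and velocity) model used in the embodiment of FIG. 6 below can be sketched as follows; the state layout [x, y, v, yaw, yaw_rate] and the fallback threshold are assumptions of this sketch rather than details fixed by the embodiment:

```python
import numpy as np

def ctrv_predict(state, dt):
    """Propagate a CTRV state [x, y, v, yaw, yaw_rate] forward by dt.
    When the turn rate is near zero, fall back to straight-line motion,
    since the turning equations divide by the turn rate."""
    x, y, v, yaw, yaw_rate = state
    if abs(yaw_rate) > 1e-6:
        x += v / yaw_rate * (np.sin(yaw + yaw_rate * dt) - np.sin(yaw))
        y += v / yaw_rate * (-np.cos(yaw + yaw_rate * dt) + np.cos(yaw))
    else:
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
    yaw += yaw_rate * dt
    return np.array([x, y, v, yaw, yaw_rate])

# A vehicle moving straight at 10 m/s advances 1 m in 0.1 s.
pred = ctrv_predict(np.array([0.0, 0.0, 10.0, 0.0, 0.0]), dt=0.1)
```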
  • the prediction of the covariance matrix of the state equation at the current moment may be performed based on the state equation covariance matrix and the motion equation at the previous moment: P(k+1/k) = F(k)P(k/k)F(k)′ + Q(k), where Q(k) is the covariance of the state noise V(k).
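Taken together, the state prediction and the state equation covariance prediction above form the standard Kalman prediction step; a minimal numpy sketch, with the motion model F, the noise covariance Q, and the initial values all assumed for illustration:

```python
import numpy as np

def kf_predict(X, P, F, Q):
    """Kalman prediction step:
    X(k+1/k) = F(k) X(k/k)
    P(k+1/k) = F(k) P(k/k) F(k)' + Q(k)."""
    return F @ X, F @ P @ F.T + Q

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])             # constant-velocity motion model (assumed)
Q = 0.01 * np.eye(2)                   # state-noise covariance (assumed)
X_pred, P_pred = kf_predict(np.array([0.0, 10.0]), np.eye(2), F, Q)
```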
  • the target tracking prediction value at the current moment can be obtained according to the measurement matrix and the target tracking result of the previous moment.
  • H(k+1) represents the measurement matrix
  • W(k+1) represents the measurement error
  • the target tracking prediction value at the current moment can be: Z(k+1/k) = H(k+1)X(k+1/k).
  • the innovation may represent the update information of the measurement relative to the state prediction.
  • the innovation covariance matrix can be calculated by the following formula:
  • S(k+1) = H(k+1)P(k+1/k)H(k+1)′ + R all ;
  • S(k+1) can represent the innovation covariance matrix
  • H(k+1) represents the measurement matrix
  • P(k+1/k) can represent the covariance matrix of the state equation at the current moment
  • R all can represent the synthesized target covariance matrix.
  • an innovation covariance matrix may be generated based on the synthesized target covariance matrix and other covariance matrices, and then the target tracking result at the moment is calculated based on the innovation covariance matrix, where the target tracking result may be the current moment The centroid position of the target object.
  • the target tracking result at the current moment may be calculated based on the state equation covariance matrix, the measurement matrix, the innovation covariance matrix, the innovation value, and the state predicted value at the current moment, wherein the target tracking result It can be the centroid position of the target object at the current moment.
  • target tracking can be performed based on the following formula:
  • K(k+1) = P(k+1/k)H(k+1)′S(k+1)⁻¹; X(k+1/k+1) = X(k+1/k) + K(k+1)v(k+1), where v(k+1) is the innovation value;
  • P(k+1/k) can represent the state equation covariance matrix at the current moment
  • H(k+1) can represent the measurement matrix
  • S(k+1) can represent the innovation covariance matrix
  • X(k+1/k) can represent the state prediction value at the current moment
  • X(k+1/k+1) can represent the target tracking result.
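The update equations above — the innovation covariance S(k+1) built from the synthesized R all, the gain K(k+1), and the corrected state X(k+1/k+1) — can be sketched as follows; the measurement matrix H, R all, and the numeric values are assumptions of this sketch:

```python
import numpy as np

def kf_update(X_pred, P_pred, z, H, R_all):
    """Kalman update step using the synthesized target covariance:
    S = H P H' + R_all,  K = P H' S^-1,
    X(k+1/k+1) = X(k+1/k) + K (z - H X(k+1/k))."""
    S = H @ P_pred @ H.T + R_all
    K = P_pred @ H.T @ np.linalg.inv(S)
    v = z - H @ X_pred                      # innovation value
    X_new = X_pred + K @ v
    P_new = (np.eye(len(X_pred)) - K @ H) @ P_pred
    return X_new, P_new

H = np.array([[1.0, 0.0]])                  # measure position only (assumed)
X_new, P_new = kf_update(np.array([1.0, 10.0]), np.eye(2),
                         z=np.array([1.2]), H=H, R_all=np.array([[0.25]]))
```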
  • the above-mentioned specific implementation of target tracking for the target object based on the target covariance matrix is only an illustration. In practical applications, the target tracking process based on different filtering algorithms may differ, and the corresponding adaptations are not limited in this application.
  • the point cloud data of the target object may be acquired by the terminal side (for example, a vehicle or a monitoring device) and sent to the server, and the server performs the subsequent processing on the point cloud data.
  • alternatively, the terminal side (such as a vehicle or a monitoring device) can acquire the point cloud data of the target object and perform centroid estimation on the point cloud data to obtain the target covariance matrix, and the server can perform target tracking on the target object based on the target covariance matrix to obtain a target tracking result.
  • An embodiment of the present application provides a target tracking method.
  • the method includes: acquiring point cloud data of a target object, and performing centroid estimation on the point cloud data; acquiring a target covariance matrix, where the target covariance matrix represents a centroid position error introduced during the centroid estimation; and performing target tracking on the target object based on the target covariance matrix to obtain a target tracking result.
  • the centroid estimation of point cloud data of the same target object in different frames may experience positional disturbance.
  • the error covariance matrix only considers the measurement error related to the point cloud measurement performance of the radar system, and cannot accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, the accuracy of target tracking will be reduced.
  • the target covariance matrix can relatively accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, if the target object is tracked based on the target covariance matrix, the accuracy of the target tracking can be improved.
  • the CTRV motion model and the CKF filtering algorithm are used to implement target tracking based on point cloud data, wherein the specific implementation process can be shown in Figure 6, and the method includes:
  • the data related to the target tracking needs to be prepared first, wherein the data related to the target tracking may include a measurement equation covariance matrix related to the performance of the radar system and a target covariance matrix that may represent the centroid position error introduced during centroid estimation; reference may be made to the description related to step 502 in the embodiment corresponding to FIG. 5 , which will not be repeated here.
  • the corresponding volume point set is:
  • the point cloud data of the target object collected by the radar system can be acquired, and the centroid estimation of the point cloud data can be performed to obtain the state measurement value.
  • the target tracking result at the previous moment and the CTRV motion model can be obtained, and the track state of the target object is predicted based on the target tracking result at the previous moment and the CTRV motion model.
  • the state prediction may include: generating a plurality of extended volume points according to the initialization parameters in step 601, the CKF volume point set obtained in step 602, and the target tracking result at the previous moment; predicting each of the plurality of extended volume points according to the CTRV model to generate the predicted value of each volume point at the current moment; and predicting the target tracking predicted value and the covariance matrix of the state equation at the current moment according to the predicted value of each volume point at the current moment.
  • step 604 may include steps 6041-6043 described below.
  • multiple extended volume points may be generated according to the initialization parameters in step 601 and the CKF volume point set obtained in step 602. Specifically, multiple extended volume points may be generated based on the following formula:
  • P(k/k) can represent the covariance matrix of the state equation at the previous moment; the lower triangular matrix is obtained by decomposition (e.g., Cholesky decomposition) of P(k/k); ξ i represents the volume point; and X(k/k) can represent the target tracking result at the previous moment.
  • the state prediction value X(k+1/k) at the current moment may be obtained based on the average value of the extended volume points; specifically, the state prediction value X(k+1/k) at the current moment may be calculated based on the following formula:
  • the state equation covariance matrix P(k+1/k) can be predicted based on the predicted values corresponding to the multiple volume points and the state predicted value at the current moment.
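Steps 6041–6043 — generate the extended volume points from the Cholesky factor of P(k/k), propagate each point through the motion model, and average — might be sketched as follows; the linear motion model f and all numeric values are placeholders of this sketch:

```python
import numpy as np

def ckf_predict(X, P, f, Q):
    """CKF prediction: build the 2n extended volume points
    X + sqrt(n) L e_i and X - sqrt(n) L e_i with P = L L' (lower-triangular
    Cholesky decomposition), propagate each through the motion model f, and
    average them to obtain X(k+1/k) and P(k+1/k)."""
    n = len(X)
    L = np.linalg.cholesky(P)
    pts = np.hstack([X[:, None] + np.sqrt(n) * L,
                     X[:, None] - np.sqrt(n) * L])   # one column per volume point
    prop = np.apply_along_axis(f, 0, pts)            # propagate every point
    X_pred = prop.mean(axis=1)
    P_pred = prop @ prop.T / (2 * n) - np.outer(X_pred, X_pred) + Q
    return X_pred, P_pred

f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])    # CV model, dt = 0.1 (assumed)
X_pred, P_pred = ckf_predict(np.array([0.0, 10.0]), np.eye(2), f, 0.01 * np.eye(2))
```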
  • data association can be performed based on the state prediction value at the current moment and the centroid measurement value at the current moment.
  • track initiation can be performed on the result of the centroid estimation, and the track initiation can be used to determine the corresponding target object.
  • the track can include the result of the centroid estimation of each frame, and the result of the centroid estimation can be the centroid measurement value; the centroid measurement value of each frame can then be predicted based on the motion model to obtain the state prediction value of the next frame, and the state prediction value of the next frame can be compared with the corresponding centroid measurement value of the next frame.
  • if the difference is less than the threshold, the association is considered successful, and the number of data associations of the current track is recorded; the initial number of data associations is 0, and each time a data association succeeds, the count is incremented, so as to record the number of data associations N a of the track.
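The association rule above — compare the state prediction of a frame with its centroid measurement, count the association as successful when the difference is below the threshold, and accumulate the count N a — reduces to a few lines; the distance metric (Euclidean) and the gate value are assumptions of this sketch:

```python
import numpy as np

def associated(pred, meas, gate):
    """Return True when the state prediction and the centroid measurement
    differ by less than the gate threshold (Euclidean distance assumed)."""
    return np.linalg.norm(np.asarray(pred) - np.asarray(meas)) < gate

N_a = 0                                    # data-association count starts at 0
preds = [(1.0, 0.0), (2.0, 0.1)]           # per-frame state predictions (assumed)
centroids = [(1.1, 0.0), (5.0, 3.0)]       # per-frame centroid measurements (assumed)
for pred, meas in zip(preds, centroids):
    if associated(pred, meas, gate=0.5):
        N_a += 1                           # each success increments the count
```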
  • the measurement covariance matrix is estimated.
  • the synthesized target covariance matrix may be obtained based on step 502 in the embodiment corresponding to FIG. 5 , wherein the estimation result of the measurement covariance matrix is the target covariance matrix.
  • the state measurement value at the next moment may be predicted based on the state measurement value at the current moment and the volume point.
  • the prediction of the state measurement value at the next moment may include steps 6061 and 6062 .
  • the state prediction value X(k+1/k), the state equation covariance matrix P(k+1/k), and the CKF volume point set ξ can be used to generate multiple extended volume points; specifically, multiple extended volume points can be generated based on the following formula:
  • for each extended volume point, the corresponding measurement prediction value can be generated according to the measurement equation, and the measurement prediction values form the extended matrix Z cuba (k+1/k).
  • the state measurement value, the innovation covariance matrix, and the state measurement covariance matrix at the next moment can be predicted by the following formula:
  • Z cuba (k+1/k) can represent the extended matrix
  • S(k+1) can represent the innovation covariance matrix
  • R all can represent the synthesized target covariance matrix
  • Z(k+1) can represent the state measurement value at the current moment
  • P XZ (k+1) represents the state measurement covariance matrix.
  • the target tracking result at the current moment may be calculated based on the state equation covariance matrix, the measurement matrix, the innovation covariance matrix, the innovation value, and the state predicted value at the current moment, wherein the target tracking result It can be the centroid position of the target object at the current moment.
  • K(k+1) = P XZ (k+1)S(k+1)⁻¹;
  • v(k+1) = Z(k+1) − Z(k+1/k);
  • v(k+1) can represent the innovation
  • the point cloud target tracking based on the CTRV model and the CKF filtering algorithm can be completed, that is, the centroid position X(k+1/k+1) of the target object at the current moment is obtained.
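Steps 606–607 — propagate the volume points through the measurement equation to form Z cuba, build S(k+1) with the synthesized R all, compute the cross covariance P XZ and the gain, then correct the state with the innovation — might look as follows; the measurement function h and all numeric values are assumptions of this sketch:

```python
import numpy as np

def ckf_update(X_pred, P_pred, pts, z, h, R_all):
    """CKF measurement update. Z_cuba stacks h(chi_i) for the 2n volume
    points chi_i; then
    S = (1/2n) Z Z' - z_hat z_hat' + R_all
    P_XZ = (1/2n) X Z' - X_pred z_hat'
    K = P_XZ S^-1,  X(k+1/k+1) = X(k+1/k) + K (z - z_hat)."""
    m = pts.shape[1]
    Z_cuba = np.apply_along_axis(h, 0, pts)
    z_hat = Z_cuba.mean(axis=1)
    S = Z_cuba @ Z_cuba.T / m - np.outer(z_hat, z_hat) + R_all
    P_XZ = pts @ Z_cuba.T / m - np.outer(X_pred, z_hat)
    K = P_XZ @ np.linalg.inv(S)
    X_new = X_pred + K @ (z - z_hat)       # v(k+1) = Z(k+1) - Z(k+1/k)
    P_new = P_pred - K @ S @ K.T
    return X_new, P_new

X_pred, P_pred = np.array([1.0, 10.0]), np.eye(2)
n = len(X_pred)
L = np.linalg.cholesky(P_pred)
pts = np.hstack([X_pred[:, None] + np.sqrt(n) * L,
                 X_pred[:, None] - np.sqrt(n) * L])
h = lambda x: x[:1]                        # measure position only (assumed)
X_new, P_new = ckf_update(X_pred, P_pred, pts, z=np.array([1.2]),
                          h=h, R_all=np.array([[0.25]]))
```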
  • the centroid estimation of the point cloud data of the same target object in different frames may exhibit positional disturbance. Applying the synthesized target covariance matrix in the CTRV model combined with the CKF algorithm for target tracking can effectively improve the target tracking accuracy in scenes where the vehicle goes straight and turns; the CKF algorithm can achieve nonlinear third-order equivalence, and its filtering accuracy is high.
  • the point cloud data of the target object collected by the radar system can be acquired, and the centroid estimation of the point cloud data can be performed to obtain the state measurement value.
  • the data related to the target tracking needs to be prepared first, wherein the data related to the target tracking may include a measurement equation covariance matrix related to the performance of the radar system and a target covariance matrix that may represent the centroid position error introduced during centroid estimation; reference may be made to the description related to step 502 in the embodiment corresponding to FIG. 5 , which will not be repeated here.
  • the state value X 01 (k/k) corresponding to the CV model and the state value X 02 (k/k) corresponding to the CT model can be output according to the current transition probability and the current state value;
  • the track state of the target object can be predicted according to the current states X 01 (k/k) and X 02 (k/k).
  • P 01 (k/k) and P 02 (k/k) can be used in the subsequent state update and covariance update.
  • X 1 (k+1/k) is the prediction result obtained by predicting the track state of the target object based on the CV model.
  • X 2 (k+1/k) is the prediction result obtained by predicting the track state of the target object based on the CT model.
  • the prediction result obtained by predicting the track state of the target object based on the CV model and the prediction result obtained by predicting the track state of the target object based on the CT model can be synthesized based on the following formula: X(k+1/k) = μ 1 (k)X 1 (k+1/k) + μ 2 (k)X 2 (k+1/k);
  • X(k+1/k) is the prediction result after synthesis
  • μ 1 (k) is the weight value corresponding to the prediction result obtained by predicting the track state of the target object based on the CV model
  • μ 2 (k) is the weight value corresponding to the prediction result obtained by predicting the track state of the target object based on the CT model.
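The weighted synthesis of the CV-model and CT-model predictions reduces to a one-line weighted sum; the state vectors and the weights μ 1, μ 2 below are assumed values:

```python
import numpy as np

X1 = np.array([1.0, 10.0, 0.0])    # CV-model prediction X_1(k+1/k) (assumed)
X2 = np.array([1.2, 10.0, 0.1])    # CT-model prediction X_2(k+1/k) (assumed)
mu1, mu2 = 0.7, 0.3                # model weights, mu_1(k) + mu_2(k) = 1
X_fused = mu1 * X1 + mu2 * X2      # X(k+1/k) = mu_1 X_1 + mu_2 X_2
```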
  • step 707 For the specific description of step 707, reference may be made to FIG. 5 and the description related to data association in the corresponding embodiment, and details are not repeated here.
  • step 708 reference may be made to the description related to the synthetic target covariance matrix in step 502 in the foregoing embodiment, and details are not repeated here.
  • Step 709 can refer to the description related to determining the target covariance matrix according to the error between the target tracking predicted value and the state measurement value at the current moment in step 502 in the above embodiment, and details are not repeated here.
  • step 710 reference may be made to the description related to the synthetic target covariance matrix in step 503 in the foregoing embodiment, and details are not repeated here.
  • K 1 (k+1) is the Kalman gain obtained from P 01 (k+1/k), and K 2 (k+1) is the Kalman gain obtained from P 02 (k+1/k)
  • X 1 (k+1/k+1) and X 2 (k+1/k+1) may represent the state update results of the CV model and the CT model, respectively
  • P 1 (k+1/k+1) and P 2 (k+1/k+1) may represent the corresponding covariance update results
  • the fused result X(k+1/k+1) can be obtained by weighting X 1 (k+1/k+1) and X 2 (k+1/k+1).
  • FIG. 8 is a schematic structural diagram of a target tracking apparatus provided by an embodiment of the present application. As shown in FIG. 8, the apparatus 800 includes:
  • an acquisition module 801 configured to acquire point cloud data of a target object, and perform centroid estimation on the point cloud data; acquire a target covariance matrix; wherein, the target covariance matrix represents a centroid position error introduced during centroid estimation; and,
  • the target tracking module 802 is configured to perform target tracking on the target object based on the target covariance matrix to obtain the target tracking result.
  • the target tracking result is a centroid state value
  • the acquiring module is specifically used for:
  • state prediction is performed on the target tracking result at the previous moment to obtain the target tracking prediction value at the current moment;
  • the target covariance matrix is obtained based on the target tracking predicted value and the state measurement value.
  • the acquisition module is configured to use a preset covariance matrix as the target covariance matrix when the number of data associations is less than or equal to a preset value; wherein the number of data associations represents the number of times that the difference between the centroid measurement value and the state predicted value is less than the threshold, the centroid measurement value is obtained by performing centroid estimation on the point cloud data of the target object at the current moment, and the state predicted value is obtained by performing state prediction based on the centroid measurement value of the previous moment.
  • the centroid measurement value of the last moment is obtained by performing centroid estimation on the point cloud data of the target object at the last moment.
  • the obtaining module is configured to obtain the target covariance based on the target tracking predicted value and the state measurement value when the data association times are greater than the preset value Matrix; wherein, the target tracking prediction value is obtained by performing state prediction on the target tracking result at the previous moment.
  • the acquisition module is specifically used for:
  • the target covariance matrix is obtained by the following formula:
  • R k and θ k are state measurement values
  • R 1 is the target covariance matrix
  • the acquisition module is also used to:
  • the device also includes:
  • the matrix synthesis module is used for synthesizing the measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix;
  • the target tracking module is specifically used for: tracking the target object based on the synthesized target covariance matrix.
  • the covariance matrix of the measurement equation is a matrix represented in a polar coordinate system
  • the target covariance matrix is a matrix represented in a Cartesian coordinate system
  • the matrix synthesis module is specifically used for:
  • Matrix addition is performed on the transformed measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix.
  • the matrix synthesis module is specifically used for:
  • the converted measurement equation covariance matrix and the target covariance matrix are matrix added by the following formula:
  • R all = R 0 + A k R m A k ′;
  • R 0 is the target covariance matrix
  • A k R m A k ′ is the transformed measurement equation covariance matrix
  • R k and θ k are the state measurement values
  • A k is the covariance transformation synthesis matrix
  • A k ′ is the transpose of the covariance transformation synthesis matrix.
  • An example computer program product 900 is provided using a signal bearing medium 901 .
  • Signal bearing medium 901 may include one or more program instructions 902 that, when executed by one or more processors, may provide the functions, or portions thereof, described above with respect to FIGS. 5-8 .
  • the program instructions 902 in FIG. 9 describe example instructions.
  • the signal bearing medium 901 may include a computer readable medium 903 such as, but not limited to, a hard drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random access memory (RAM), and so on.
  • the signal bearing medium 901 may include a computer recordable medium 904 such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, and the like.
  • signal bearing medium 901 may include communication medium 905, such as, but not limited to, digital and/or analog communication media (eg, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).
  • the signal bearing medium 901 may be conveyed by a wireless form of communication medium 905 (eg, a wireless communication medium conforming to the IEEE 802.11 standard or other transmission protocol).
  • the one or more program instructions 902 may be, for example, computer-executable instructions or logic-implemented instructions.
  • a computing device may be configured to provide various operations in response to the program instructions 902 conveyed to the computing device through one or more of the computer readable medium 903 , the computer recordable medium 904 , and/or the communication medium 905 .
  • FIG. 10 is a schematic structural diagram of a target tracking device provided by an embodiment of the present disclosure, which is used for executing the target tracking method provided in the above-mentioned embodiment.
  • the device 1000 includes:
  • the transceiver 110 , the memory 120 including one or more storage media, the input unit 130 , the display unit 140 , the sensor 150 , the audio circuit 160 , the processor 170 including one or more processing cores, and the like.
  • the structure of the device 1000 shown in FIG. 10 does not constitute a limitation on the device 1000 ; the device may include more or fewer components than those shown, or combine some components, or arrange the components differently. Specifically:
  • the transceiver 110 can be used to receive and transmit signals in the process of transceiving information. Through the transceiver 110, the device 1000 can communicate with other devices located in the vehicle, such as various sensors.
  • the communication method includes but is not limited to a Bluetooth wireless communication method, a wireless fidelity (Wireless Fidelity, WiFi) wireless communication method, and the like.
  • the memory 120 may be used to store software programs and modules, and the processor 170 executes various functional applications and data processing by executing at least one instruction, at least one section of program, code set or instruction set stored in the memory 120 .
  • the memory 120 mainly includes a stored program area and a stored data area, wherein the stored program area can store an operating system, at least one instruction, at least one piece of program, a code set or an instruction set, etc.; the stored data area can store data created according to the use of the device (such as audio data), etc.
  • the input unit 130 may be used to receive input numerical or character information, and generate signal input related to user settings and function control.
  • the input unit 130 may include a touch-sensitive surface as well as other input devices.
  • a touch-sensitive surface, also known as a touch screen or touchpad, collects the user's touch operations on or near it and drives the corresponding connection device according to a preset program.
  • the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends them to the processor 170 .
  • touch-sensitive surfaces can be implemented using resistive, capacitive, infrared, and surface acoustic wave types.
  • the input unit 130 may also include other input devices.
  • other input devices may include, but are not limited to, one or more of physical keyboards, function keys (such as volume control keys, switch keys, etc.).
  • the display unit 140 may be used to display information input by or provided to the user and various graphical user interfaces of the device 1000, which may be composed of graphics, text, icons, videos, and any combination thereof.
  • the display unit 140 may include a display panel, and optionally, the display panel may be configured in the form of an LCD (liquid crystal display), an OLED (organic light-emitting diode), and the like.
  • the touch-sensitive surface may cover the display panel, and when the touch-sensitive surface detects a touch operation on or near it, it transmits the operation to the processor 170 to determine the type of the touch event; the processor 170 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • Although in FIG. 10 the touch-sensitive surface and the display panel are implemented as two separate components to implement the input and output functions, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
  • Device 1000 may also include at least one sensor 150, such as a light sensor.
  • the light sensor may include an ambient light sensor, wherein the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light.
  • Audio circuitry 160 , speaker 161 , and microphone 162 may provide an audio interface between the user and device 1000 .
  • the audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161 , and the speaker 161 converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; after being processed by the processor 170 , the audio data is sent via the transceiver 110 to other devices such as those in the vehicle, or output to the memory 120 for further processing.
  • the processor 170 is the control center of the device 1000 , and uses various interfaces and lines to connect the various parts of the entire device 1000 ; by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120 , the processor performs various functions of the device 1000 and processes data, so as to monitor the device 1000 as a whole.
  • the processor 170 may include one or more processing cores; preferably, the processor 170 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above-mentioned modem processor may not be integrated into the processor 170 .
  • the processor 170 may invoke the code stored in the memory 120 to implement the target tracking method described in FIG. 5 to FIG. 7 in the above embodiment.
  • the display unit of the device 1000 may be a touch screen display, and the processor 170 of the device 1000 executes at least one instruction, at least one piece of program, a code set or an instruction set stored in the memory 120 to implement the target tracking method of the above embodiments.
  • an embodiment of the present disclosure further provides a storage medium, where the storage medium stores at least one instruction, at least one piece of program, a code set or an instruction set, which is loaded and executed by the processor of the device to implement the target tracking method in the above embodiment.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of units is only a logical function division; in actual implementation, there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or another network device, etc.) to execute all or part of the steps of the method described in the embodiment of FIG. 2a of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program codes.


Abstract

The embodiments of the present application provide a target tracking method applied to the field of automatic driving. The method comprises: acquiring point cloud data of a target object, and performing centroid estimation on the point cloud data; acquiring a target covariance matrix, wherein the target covariance matrix represents the centroid position error introduced during the centroid estimation; and performing target tracking on the target object on the basis of the target covariance matrix to obtain a target tracking result. In the present application, the target covariance matrix can relatively accurately quantify the error caused by the centroid estimation of the point cloud data, so performing target tracking on the target object on the basis of the target covariance matrix can improve the accuracy of target tracking.

Description

A target tracking method and device thereof

Technical field

The present application relates to the field of automatic driving, and in particular, to a target tracking method and a device thereof.

Background
Advanced driver assistance systems (advanced driver assistant system, ADAS) and autonomous driving systems (autonomous driving system, ADS) can automatically plan a reasonable driving path for a vehicle and control the driving state of the vehicle. For example, ADAS and ADS can sense in time the relevant road information of the road on which the ego vehicle is currently traveling, and accordingly perform timely and correct driving operations, so as to avoid vehicle accidents caused by driver inattention or slow reaction. It is precisely because of these advantages that ADAS and ADS have become a research hotspot in current artificial intelligence. As the future development trend of automobiles, ADAS and ADS have broad development prospects.

With the development of ADAS and ADS, higher requirements are placed on the performance of vehicle-mounted radar, such as its range and angular resolution. With improved range and angular resolution, the vehicle-mounted radar system detects multiple measurement points for a single target when imaging it, forming a high-resolution point cloud; such a radar is called a point cloud imaging radar.
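As a concrete illustration of how a centroid (state measurement value) can be obtained from one frame of point cloud data, here is a minimal sketch that assumes a simple per-point mean; the application does not fix a particular centroid estimator, and the column layout [x, y, radial velocity] is only an illustrative assumption:

```python
import numpy as np

def estimate_centroid(points):
    """Estimate the centroid of one target's radar point cloud.

    points: (N, 3) array of per-detection measurements, here assumed
    to be columns [x, y, radial_velocity] in the radar frame.
    Returns the per-column mean as the centroid state measurement.
    """
    points = np.asarray(points, dtype=float)
    return points.mean(axis=0)

# A single vehicle detected as four measurement points in one frame:
cloud = [[10.0, 2.0, 5.0],
         [10.4, 2.2, 5.1],
         [10.2, 1.8, 4.9],
         [10.6, 2.0, 5.0]]
centroid = estimate_centroid(cloud)  # -> [10.3, 2.0, 5.0]
```

The unweighted mean is the simplest estimator consistent with the description; in practice, detections could also be weighted, e.g., by signal strength.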
Target tracking is a key link in applying vehicle-mounted radar to intelligent driving. The performance of target tracking depends on the choice of the target motion model and the filtering algorithm, while the performance of track filtering mainly depends on the covariance matrices of the measurement equation and the state equation: the more accurately the covariance matrices are estimated, the higher the target tracking accuracy.

In the prior art, when performing target tracking, only the measurement covariance matrix related to the performance of the vehicle-mounted radar is considered, so the accuracy of the target tracking result is poor.
Summary of the invention

In a first aspect, the present application provides a target tracking method. The method includes: acquiring point cloud data of a target object, and performing centroid estimation on the point cloud data; acquiring a target covariance matrix, wherein the target covariance matrix represents the centroid position error introduced when performing the centroid estimation; and, based on the target covariance matrix, performing target tracking on the target object to obtain a target tracking result.

It should be understood that the centroid estimation result obtained by the above centroid estimation may be the state measurement value at the current moment, and the state measurement value and the target covariance matrix may serve as input data for target tracking.

It should be understood that, in the embodiments of the present application, a radar system may scan the target object to obtain the point cloud data of the target object. The point cloud data is obtained by scanning the target object with the radar system and may be recorded in the form of points, where each point may include coordinates and a motion state. Because a vehicle or another type of target object may turn or be occluded, the centroid estimates of the point cloud data of the same target object in different frames may exhibit positional disturbance. In the prior art, when performing target tracking, the error covariance matrix only considers the measurement error related to the point cloud measurement performance of the radar system and cannot accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, the accuracy of target tracking is reduced. In the embodiments of the present application, the target covariance matrix can relatively accurately quantify the error caused by the centroid estimation of the point cloud data, so if target tracking is performed on the target object based on the target covariance matrix, the accuracy of target tracking can be improved.
In an optional implementation, an innovation covariance matrix may be constructed based on the target covariance matrix, and target tracking may be performed on the target object based on the innovation covariance matrix to obtain a target tracking result.
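The innovation-covariance construction can be illustrated with a standard Kalman measurement update, in which the target covariance matrix enters as the measurement-noise term R of the innovation covariance S; this is a generic textbook update, not the publication's exact filter, and all numeric values are illustrative:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One Kalman measurement update where the measurement-noise
    term R plays the role of the (synthesized) target covariance
    matrix. R enters the innovation covariance S, so a more
    accurate R directly shapes the tracking gain."""
    y = z - H @ x_pred                    # innovation (residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# 4-D state [x, y, vx, vy], 2-D position measurement (illustrative):
x_pred = np.array([10.0, 2.0, 5.0, 0.0])
P_pred = np.eye(4)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
R = 0.25 * np.eye(2)                      # stand-in target covariance
z = np.array([10.4, 1.9])
x_new, P_new = kalman_update(x_pred, P_pred, z, H, R)
```

With these numbers the gain is 0.8 on each position axis, so the updated position moves 80% of the way from the prediction toward the measurement.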
In an optional implementation, acquiring the target covariance matrix includes:

acquiring the target tracking result of the target object at the previous moment;

performing state prediction on the target tracking result of the previous moment based on a motion model, to obtain a target tracking prediction value at the current moment;

acquiring a state measurement value at the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data; and

acquiring the target covariance matrix based on the target tracking prediction value and the state measurement value.
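The four steps above can be sketched as follows; the exact formula for the target covariance matrix appears only as an image in the publication, so the residual outer product used here is an assumption that is consistent with the surrounding description rather than the publication's formula:

```python
import numpy as np

def target_covariance_from_residual(z_meas, z_pred):
    """Sketch of estimating the target covariance matrix R1 from the
    current state measurement (centroid estimate) and the target
    tracking prediction value. ASSUMPTION: the outer product of the
    position residual; the published formula is an image and may
    differ (e.g., by averaging over several frames)."""
    d = np.asarray(z_meas, dtype=float) - np.asarray(z_pred, dtype=float)
    return np.outer(d, d)

# Measured centroid vs. motion-model prediction for one frame:
R1 = target_covariance_from_residual([10.3, 2.0], [10.0, 2.2])
# residual d = [0.3, -0.2], so R1 = [[0.09, -0.06], [-0.06, 0.04]]
```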
If the current target tracking process is still in its initial stage, the covariance matrices used in filtering are not yet close to the true (i.e., correct) values, and target tracking at the current moment cannot be performed based on historical target tracking results. In this case, a target covariance matrix set according to empirical values may be used for target tracking, where the target covariance matrix may represent the centroid position error introduced during centroid estimation.
In an optional implementation, acquiring the target covariance matrix includes:

using a preset covariance matrix as the target covariance matrix when the number of data associations is less than or equal to a preset value, where the number of data associations represents the number of times the difference between a centroid measurement value and a state prediction value is less than a threshold, the centroid measurement value is obtained by performing centroid estimation on the point cloud data of the target object at the current moment, the state prediction value is obtained by prediction from the centroid measurement value at the previous moment, and the centroid measurement value at the previous moment is obtained by performing centroid estimation on the point cloud data of the target object at the previous moment.

Optionally, the moments in this embodiment may be at the granularity of frames or other time units; for example, the previous moment may be the previous frame, and the current moment may be the current frame.

In an optional implementation, acquiring the target covariance matrix includes:

acquiring the target covariance matrix based on the target tracking prediction value and the state measurement value when the number of data associations is greater than the preset value, where the target tracking prediction value is obtained by performing state prediction on the target tracking result at the previous moment, and the state measurement value is obtained by performing centroid estimation on the point cloud data.

In the embodiments of the present application, the target covariance matrix may be continuously updated as target tracking proceeds. Each iteration (i.e., each computation of the target covariance matrix at the current moment) updates it based on the target tracking prediction value of the previous moment and the state measurement value at the current moment. As the iterations of the target tracking process proceed, the target covariance matrix gradually approaches or reaches the true value, which makes the target tracking result more accurate.
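Combining the two branches (preset matrix while the track is young, prediction/measurement-based estimate once enough associations have accumulated), a hedged sketch of the selection logic; the preset matrix, the association-count threshold, and the residual-based estimate are all illustrative assumptions rather than values fixed by the publication:

```python
import numpy as np

PRESET_R = 0.5 * np.eye(2)   # empirically preset covariance (assumption)
ASSOC_THRESHOLD = 5          # preset value for the association count

def select_target_covariance(assoc_count, z_meas=None, z_pred=None):
    """Choose the target covariance matrix as the embodiment
    describes: fall back on an empirical preset while the track has
    few successful data associations, otherwise estimate it from the
    prediction/measurement residual (assumed outer-product form)."""
    if assoc_count <= ASSOC_THRESHOLD:
        return PRESET_R
    d = np.asarray(z_meas, dtype=float) - np.asarray(z_pred, dtype=float)
    return np.outer(d, d)

R_young = select_target_covariance(3)                          # preset
R_mature = select_target_covariance(8, [10.3, 2.0], [10.1, 2.0])
```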
In an optional implementation, acquiring the target covariance matrix based on the target tracking prediction value and the state measurement value includes: obtaining the target covariance matrix from the target tracking prediction value and the state measurement value through a formula that, in the original publication, is rendered as images (PCTCN2020098494-appb-000001 and appb-000002). In that formula, the quantity shown in image appb-000003 is the state measurement value, the quantity shown in image appb-000004 is the target tracking prediction value, and R_1 is the target covariance matrix.
In an optional implementation, the method further includes: acquiring a measurement equation covariance matrix, where the point cloud data is collected based on a radar system and the measurement equation covariance matrix represents the measurement deviation of the radar system; and synthesizing the measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix. Correspondingly, performing target tracking on the target object based on the target covariance matrix includes: performing target tracking on the target object based on the synthesized target covariance matrix.

In the embodiments of the present application, the synthesized target covariance matrix includes information related to the centroid position error introduced during centroid estimation, and it relatively accurately quantifies the error caused by the centroid estimation of the point cloud data; therefore, performing target tracking on the target object based on the synthesized target covariance matrix can improve the accuracy of target tracking.
In an optional implementation, the measurement equation covariance matrix is a matrix represented in a polar coordinate system, and the target covariance matrix is a matrix represented in a Cartesian coordinate system. Synthesizing the measurement equation covariance matrix and the target covariance matrix includes: converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system; and performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix.

Owing to the characteristics of a radar system, the point cloud data collected by the radar system is usually expressed in a polar coordinate system; therefore, the measurement equation covariance matrix related to the performance of the radar system is usually a matrix represented in the polar coordinate system, while the target covariance matrix is a matrix represented in the Cartesian coordinate system. In this case, the measurement equation covariance matrix may be converted into a matrix represented in the Cartesian coordinate system, and matrix addition may be performed on the converted measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix.

It should be noted that, in some scenarios, the point cloud data collected by the radar system is not expressed in a polar coordinate system; it is then only necessary to convert the matrices so that they are expressed in the same coordinate system. For example, if the point cloud data collected by the radar system is expressed in the Cartesian coordinate system and the target covariance matrix is a matrix represented in the polar coordinate system, it is only necessary to convert the measurement equation covariance matrix into the polar coordinate system.

In an optional implementation, converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system includes: acquiring a state measurement value at the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data; obtaining a covariance conversion synthesis matrix based on the state measurement value; and converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system based on the covariance conversion synthesis matrix.

In the embodiments of the present application, a covariance conversion synthesis matrix may first be constructed, which may be used to convert a covariance matrix from the polar coordinate system into an expression in the Cartesian coordinate system. In one implementation, the covariance conversion synthesis matrix is related to the state measurement value at the current moment. Specifically, the state measurement value at the current moment may be Z_k = [R_k, θ_k, v_rk], where R_k may represent the distance between the radar system and the centroid, θ_k may represent the azimuth of the centroid, and v_rk may represent the radial velocity of the centroid. Part of the elements of the covariance conversion synthesis matrix may be constructed based on the distance between the radar system and the centroid and the azimuth of the centroid in the state measurement value, and the other elements may be filled with 0 and 1.
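The construction described above can be sketched as follows. Because the published A_k appears only as an image, the matrix below uses the standard polar-to-Cartesian Jacobian for a measurement [R, θ, v_r], which matches the description that some entries are built from R_k and θ_k while the rest are filled with 0 and 1; it is an assumption, not the publication's exact matrix:

```python
import numpy as np

def conversion_matrix(R_k, theta_k):
    """Covariance conversion synthesis matrix A_k for a polar
    measurement [R, theta, v_r]. ASSUMPTION: the standard
    polar-to-Cartesian Jacobian; entries built from R_k and theta_k,
    the remainder filled with 0 and 1, as the text describes."""
    c, s = np.cos(theta_k), np.sin(theta_k)
    return np.array([[c, -R_k * s, 0.0],
                     [s,  R_k * c, 0.0],
                     [0.0, 0.0,    1.0]])

def synthesize_covariance(R0, Rm, R_k, theta_k):
    """R_all = R0 + A_k Rm A_k': the centroid-estimation covariance
    (already Cartesian) plus the radar measurement covariance
    converted from polar to Cartesian coordinates."""
    A = conversion_matrix(R_k, theta_k)
    return R0 + A @ Rm @ A.T

R0 = 0.1 * np.eye(3)              # centroid-estimation covariance
Rm = np.diag([0.04, 1e-4, 0.01])  # radar noise in (R, theta, v_r)
R_all = synthesize_covariance(R0, Rm, R_k=10.0, theta_k=0.0)
```

Note how the small angular variance (1e-4 rad²) becomes a 0.01 m² cross-range variance at 10 m range, which is exactly why the conversion depends on the measured R_k.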
In an optional implementation, performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix includes performing the matrix addition through the following formula:

R_all = R_0 + A_k R_m A_k′;

where R_0 is the target covariance matrix, A_k R_m A_k′ is the converted measurement equation covariance matrix, R_k and θ_k are the state measurement values, A_k is the covariance conversion synthesis matrix (rendered as image PCTCN2020098494-appb-000005 in the original publication), and A_k′ is the transpose of the covariance conversion synthesis matrix.
In a second aspect, the present application provides a target tracking device. The device includes:

an acquisition module, configured to acquire point cloud data of a target object and perform centroid estimation on the point cloud data, and to acquire a target covariance matrix, where the target covariance matrix represents the centroid position error introduced when performing the centroid estimation; and

a target tracking module, configured to perform target tracking on the target object based on the target covariance matrix to obtain a target tracking result.

In an optional implementation, the acquisition module is configured to: acquire the target tracking result of the target object at the previous moment;

perform state prediction on the target tracking result of the previous moment based on a motion model, to obtain a target tracking prediction value at the current moment;

acquire a state measurement value at the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data; and

acquire the target covariance matrix based on the target tracking prediction value and the state measurement value.

In an optional implementation, the acquisition module is configured to use a preset covariance matrix as the target covariance matrix when the number of data associations is less than or equal to a preset value, where the number of data associations represents the number of times the difference between a centroid measurement value and a state prediction value is less than a threshold, the centroid measurement value is obtained by performing centroid estimation on the point cloud data of the target object at the current moment, the state prediction value is obtained by prediction from the centroid measurement value at the previous moment, and the centroid measurement value at the previous moment is obtained by performing centroid estimation on the point cloud data of the target object at the previous moment.

In an optional implementation, the acquisition module is configured to acquire the target covariance matrix based on the target tracking prediction value and the state measurement value when the number of data associations is greater than the preset value, where the target tracking prediction value is obtained by performing state prediction on the target tracking result at the previous moment, and the state measurement value is obtained by performing centroid estimation on the point cloud data.
In an optional implementation, the acquisition module is specifically configured to: obtain the target covariance matrix from the target tracking prediction value and the state measurement value through a formula that, in the original publication, is rendered as images (PCTCN2020098494-appb-000006 and appb-000007). In that formula, x_{k+1|k} and y_{k-1|k} are the state measurement values, the quantity shown in image appb-000008 is the target tracking prediction value, and R_1 is the target covariance matrix.
In an optional implementation, the acquisition module is further configured to:

acquire a measurement equation covariance matrix, where the point cloud data is collected based on a radar system, and the measurement equation covariance matrix represents the measurement deviation of the radar system.

The device further includes:

a matrix synthesis module, configured to synthesize the measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix.

Correspondingly, the target tracking module is specifically configured to perform target tracking on the target object based on the synthesized target covariance matrix.

In an optional implementation, the measurement equation covariance matrix is a matrix represented in a polar coordinate system, and the target covariance matrix is a matrix represented in a Cartesian coordinate system; the matrix synthesis module is specifically configured to:

convert the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system; and

perform matrix addition on the converted measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix.

In an optional implementation, the matrix synthesis module is specifically configured to:

acquire a state measurement value at the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data;

obtain a covariance conversion synthesis matrix based on the state measurement value, where the covariance conversion synthesis matrix includes multiple elements, and some of the multiple elements are generated based on the state measurement value; and

convert the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system based on the covariance conversion synthesis matrix.
In an optional implementation, the matrix synthesis module is specifically configured to perform matrix addition on the converted measurement equation covariance matrix and the target covariance matrix through the following formula:

R_all = R_0 + A_k R_m A_k′;

where R_0 is the target covariance matrix, A_k R_m A_k′ is the converted measurement equation covariance matrix, R_k and θ_k are the state measurement values, A_k is the covariance conversion synthesis matrix (rendered as image PCTCN2020098494-appb-000009 in the original publication), and A_k′ is the transpose of the covariance conversion synthesis matrix.
In a third aspect, the present application provides a target tracking device, including a processor and a memory. The memory stores program code; when the processor invokes the program code in the memory, the target tracking device executes the method of any one of the implementations of the first aspect.

For the steps performed by the target tracking device in the possible implementations of the first aspect executed by the processor, reference may be made to the first aspect for details, which are not repeated here.

In a fourth aspect, the present application provides an autonomous vehicle, which may include a processor coupled to a memory. The memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the method of the first aspect is implemented.

For the steps performed by the autonomous vehicle in the possible implementations of the first aspect executed by the processor, reference may be made to the first aspect for details, which are not repeated here.

In a fifth aspect, the present application provides a computer-readable storage medium storing a computer program that, when run on a computer, causes the computer to execute the method of the first aspect.

In a sixth aspect, the present application provides a circuit system including a processing circuit configured to execute the method of the first aspect or any one of the optional implementations of the first aspect.

In a seventh aspect, the present application provides a computer program that, when run on a computer, causes the computer to execute the method of the first aspect.

In an eighth aspect, the present application provides a chip system. The chip system includes a processor configured to support a server or a threshold acquisition device in implementing the functions involved in the above aspects, for example, sending or processing the data and/or information involved in the above methods. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the server or the communication device. The chip system may consist of a chip, or may include a chip and other discrete devices.

The embodiments of the present application provide a target tracking method. The method includes: acquiring point cloud data of a target object, and performing centroid estimation on the point cloud data; acquiring a target covariance matrix, where the target covariance matrix represents the centroid position error introduced when performing the centroid estimation; and, based on the target covariance matrix, performing target tracking on the target object to obtain a target tracking result. Because a vehicle or another type of target object may turn or be occluded, the centroid estimates of the point cloud data of the same target object in different frames may exhibit positional disturbance. In the prior art, when performing target tracking, the error covariance matrix only considers the measurement error related to the point cloud measurement performance of the radar system and cannot accurately quantify the error caused by the centroid estimation of the point cloud data; therefore, the accuracy of target tracking is reduced. In the embodiments of the present application, the target covariance matrix can relatively accurately quantify the error caused by the centroid estimation of the point cloud data, so if target tracking is performed on the target object based on the target covariance matrix, the accuracy of target tracking can be improved.
Description of drawings

FIG. 1 is a functional block diagram of a vehicle 100 according to an embodiment of the present invention;

FIG. 2a is a schematic diagram of an application scenario according to an embodiment of the present invention;

FIG. 2b is a schematic diagram of an application scenario according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of a computer system according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of centroid estimation of point cloud data;

FIG. 5 is a schematic flowchart of a target tracking method according to an embodiment of the present application;

FIG. 6 is a schematic flowchart of a target tracking method according to an embodiment of the present application;

FIG. 7 is a schematic flowchart of a target tracking method according to an embodiment of the present application;

FIG. 8 is a schematic diagram of a target tracking device according to an embodiment of the present application;

FIG. 9 is a schematic diagram of a computer program product according to an embodiment of the present application;

FIG. 10 is a schematic diagram of a target tracking device according to an embodiment of the present application.
具体实施方式detailed description
The embodiments of the present invention are described below with reference to the accompanying drawings.
The terms "first", "second", "third", "fourth", and the like in the description, the claims, and the drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to a separate or alternative embodiment that is mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The terms "component", "module", "system", and the like are used in this specification to refer to a computer-related entity: hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself may be components. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer-readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes, for example according to a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
FIG. 1 is a functional block diagram of a vehicle 100 according to an embodiment of the present invention. In one embodiment, the vehicle 100 is configured in a fully or partially autonomous driving mode. For example, the vehicle 100 can control itself while in the autonomous driving mode, and can determine, through human operation, the current state of the vehicle and its surroundings, determine a possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior, and control the vehicle 100 based on the determined information. When the vehicle 100 is in the autonomous driving mode, the vehicle 100 may be set to operate without human interaction.
The vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, as well as a power supply 110, a computer system 112, and a user interface 116. Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, the subsystems and elements of the vehicle 100 may be interconnected by wire or wirelessly.
The travel system 102 may include components that provide powered motion for the vehicle 100. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121. The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy.
Examples of the energy source 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 119 may also provide energy to other systems of the vehicle 100.
The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other devices, such as a clutch. The drive shaft may include one or more axles that may be coupled to one or more of the wheels 121.
The sensor system 104 may include several sensors that sense information about the environment surrounding the vehicle 100. For example, the sensor system 104 may include a positioning system 122 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar system 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor internal systems of the vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, speed, etc.). Such detection and identification is a critical function for the safe operation of the autonomous vehicle 100.
With the development of advanced driver assistance systems (ADAS) and autonomous driving technologies, higher requirements are placed on the performance of the radar system 126, such as its range and angular resolution. As the range and angular resolution of the vehicle-mounted radar system 126 improve, the radar system 126 detects multiple measurement points for a single target object when imaging the target, forming high-resolution point cloud data. The radar system 126 in this application may therefore also be referred to as a point cloud imaging radar.
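Because a high-resolution radar returns many measurement points per target, detections are typically grouped by target before tracking. As a minimal, hedged sketch (the function name, the greedy single-link rule, and the `eps` threshold are illustrative assumptions, not the method claimed in this application), nearby detections can be clustered like this:

```python
def cluster_detections(points, eps=1.0):
    """Greedily group 2-D detections: a point joins an existing cluster if it
    lies within eps metres of any member; otherwise it starts a new cluster.
    Single-pass and order-dependent -- a sketch, not a production clusterer."""
    clusters = []
    for p in points:
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2 for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Two targets: three detections near (10, 2) and two near (30, -1).
detections = [(9.8, 2.1), (10.2, 1.9), (10.0, 2.0), (30.1, -1.0), (29.9, -0.8)]
print([len(c) for c in cluster_detections(detections)])  # → [3, 2]
```

A density-based method such as DBSCAN would handle chained clusters more robustly; the greedy loop above only illustrates the idea of turning raw point clouds into per-target point sets.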
The positioning system 122 may be used to estimate the geographic location of the vehicle 100. The IMU 124 is used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope.
The radar system 126 may use radio signals to sense objects within the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing objects, the radar system 126 may be used to sense the speed and/or heading of the objects.
The laser rangefinder 128 may use laser light to sense objects in the environment in which the vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
The camera 130 may be used to capture multiple images of the surrounding environment of the vehicle 100. The camera 130 may be a still camera or a video camera.
The control system 106 controls the operation of the vehicle 100 and its components. The control system 106 may include various elements, including a steering system 132, a throttle 134, a braking unit 136, a sensor fusion algorithm 138, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the vehicle 100. For example, in one embodiment it may be a steering wheel system.
The throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the vehicle 100.
The braking unit 136 is used to control the vehicle 100 to decelerate. The braking unit 136 may use friction to slow the wheels 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current. The braking unit 136 may also take other forms to slow the rotation of the wheels 121 and thereby control the speed of the vehicle 100.
The computer vision system 140 may process and analyze images captured by the camera 130 in order to identify objects and/or features in the environment surrounding the vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
The route control system 142 is used to determine a travel route for the vehicle 100. In some embodiments, the route control system 142 may combine data from the sensor 138, the GPS 122, and one or more predetermined maps to determine the travel route for the vehicle 100.
The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 100.
Of course, in one example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripheral devices 108. The peripheral devices 108 may include a wireless communication system 146, an on-board computer 148, a microphone 150, and/or a speaker 152.
In some embodiments, the peripheral devices 108 provide a means for a user of the vehicle 100 to interact with the user interface 116. For example, the on-board computer 148 may provide information to the user of the vehicle 100. The user interface 116 may also operate the on-board computer 148 to receive user input. The on-board computer 148 can be operated through a touch screen. In other cases, the peripheral devices 108 may provide a means for the vehicle 100 to communicate with other devices located within the vehicle. For example, the microphone 150 may receive audio (e.g., a voice command or other audio input) from the user of the vehicle 100. Similarly, the speaker 152 may output audio to the user of the vehicle 100.
The wireless communication system 146 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 146 may use 3G cellular communication, such as code division multiple access (CDMA), enhanced versatile disk (EVD), or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as LTE; or 5G cellular communication. The wireless communication system 146 may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, may also be used; for example, the wireless communication system 146 may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the vehicle 100. In one embodiment, the power supply 110 may be a rechargeable lithium-ion or lead-acid battery. One or more battery packs of such batteries may be configured as a power supply to provide power to the various components of the vehicle 100. In some embodiments, the power supply 110 and the energy source 119 may be implemented together, as in some all-electric vehicles.
Some or all of the functions of the vehicle 100 are controlled by the computer system 112. The computer system 112 may include at least one processor 113, and the processor 113 executes instructions 115 stored in a non-transitory computer-readable medium such as a data storage device 114. The computer system 112 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer 110 in the same block, those of ordinary skill in the art will understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer 110. Thus, a reference to the processor or the computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the computations related to the component-specific function.
In this embodiment of the present application, the processor 113 may acquire data from the sensor system (for example, point cloud data collected by the radar system 126) as well as motion model parameters and the like, and then perform target tracking based on the point cloud data, the motion model parameters, and so on.
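To make the role of the motion model parameters concrete, a widely used tracking scheme (shown here as a hedged illustration: the constant-velocity model, the matrix values, and the function name are our own assumptions, not the specific method claimed in this application) is the Kalman predict/update cycle over a target state and its covariance matrix:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle: state x, state covariance P, measurement z."""
    # Predict with the motion model F (process noise Q).
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement z (measurement noise R).
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model in 1-D: state = [position, velocity], dt = 0.1 s.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                   # only position is measured
Q = 0.01 * np.eye(2)                         # illustrative process noise
R = np.array([[0.25]])                       # illustrative measurement noise
x = np.array([0.0, 1.0])
P = np.eye(2)
x, P = kalman_step(x, P, np.array([0.12]), F, H, Q, R)
```

After one step the position estimate lies between the prediction (0.1) and the measurement (0.12), weighted by the covariances; this is the generic filtering backbone onto which a point-cloud-derived measurement (e.g., a cluster centroid) would be fed.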
In various aspects described herein, the processor may be located remotely from the vehicle and communicate with the vehicle wirelessly. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the data storage device 114 may contain instructions 115 (e.g., program logic) that can be executed by the processor 113 to perform various functions of the vehicle 100, including those described above. The data storage device 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108.
In addition to the instructions 115, the data storage device 114 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, other vehicle data, and other information. Such information may be used by the vehicle 100 and the computer system 112 while the vehicle 100 is operating in an autonomous, semi-autonomous, and/or manual mode.
The user interface 116 is used to provide information to or receive information from a user of the vehicle 100. Optionally, the user interface 116 may include one or more input/output devices within the set of peripheral devices 108, such as the wireless communication system 146, the on-board computer 148, the microphone 150, and the speaker 152.
The computer system 112 may control the functions of the vehicle 100 based on input received from the various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Optionally, one or more of these components may be installed separately from, or associated with, the vehicle 100. For example, the data storage device 114 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are merely an example. In practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 1 should not be construed as limiting the embodiments of the present invention.
An autonomous vehicle traveling on a road, such as the vehicle 100 above, can identify objects within its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each identified object may be considered independently, and the object's respective characteristics, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
Optionally, the autonomous vehicle 100, or a computing device associated with the autonomous vehicle 100 (such as the computer system 112, the computer vision system 140, or the data storage device 114 of FIG. 1), may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, the identified objects depend on each other's behavior, so all of the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 100 can adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle can determine, based on the predicted behavior of the object, what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 in the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
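As one hedged, highly simplified illustration of such a speed decision (the function, the 3-second threshold, and the matching rule are our own assumptions and not part of this application), a time-to-collision check against the gap to an identified object ahead might look like:

```python
def adjust_speed(own_speed, gap_m, closing_speed, ttc_threshold_s=3.0):
    """Return a new target speed in m/s. If the time-to-collision with the
    object ahead drops below the threshold, slow down to match the object's
    speed; otherwise keep the current speed. Illustrative only."""
    if closing_speed <= 0.0:            # the object is not getting closer
        return own_speed
    ttc = gap_m / closing_speed         # seconds until contact at current rates
    if ttc < ttc_threshold_s:
        return max(own_speed - closing_speed, 0.0)
    return own_speed

print(adjust_speed(20.0, 10.0, 5.0))    # → 15.0 (TTC = 2 s, so brake)
print(adjust_speed(20.0, 100.0, 5.0))   # → 20.0 (TTC = 20 s, keep speed)
```

A real planner would also weigh the lateral position, road curvature, and nearby static and dynamic objects mentioned above; this sketch isolates only the gap/closing-speed trade-off.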
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 100, so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the autonomous vehicle (e.g., cars in adjacent lanes on the road).
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a cart, or the like, which is not particularly limited in the embodiments of the present invention.
Scenario example 1: autonomous driving system
As shown in FIG. 2a, the computer system 101 includes a processor 103, and the processor 103 is coupled to a system bus 105. The processor 103 may be one or more processors, each of which may include one or more processor cores. A video adapter 107 can drive a display 109, and the display 109 is coupled to the system bus 105. The system bus 105 is coupled to an input/output (I/O) bus 113 through a bus bridge 111. An I/O interface 115 is coupled to the I/O bus. The I/O interface 115 communicates with various I/O devices, such as an input device 117 (e.g., a keyboard, a mouse, a touch screen, etc.), a media tray 121 (e.g., a compact disc read-only memory (CD-ROM), a multimedia interface, etc.), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture still and moving digital video images), and an external USB interface 125. Optionally, the interface connected to the I/O interface 115 may be a USB interface.
The processor 103 may be any conventional processor, including a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, or a combination of the above. Alternatively, the processor may be a dedicated device such as an ASIC. Optionally, the processor 103 may be a neural network processor or a combination of a neural network processor and the above conventional processors.
Optionally, in the various embodiments herein, the computer system 101 may be located remotely from the autonomous vehicle and may communicate wirelessly with the autonomous vehicle. In other aspects, some of the processes herein are executed on a processor disposed within the autonomous vehicle, and others are executed by a remote processor, including taking the actions required to perform a single maneuver.
The computer 101 may communicate with a software deployment server 149 through a network interface 129. Illustratively, the network interface 129 is a hardware network interface, such as a network card. A network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet network or a virtual private network (VPN). Optionally, the network 127 may also be a wireless network, such as a WiFi network, a cellular network, or the like.
A hard disk drive interface is coupled to the system bus 105, and the hard disk drive interface is connected to a hard disk drive. A system memory 135 is coupled to the system bus 105. Data running in the system memory 135 may include an operating system 137 and application programs 143 of the computer 101.
The operating system includes a shell 139 and a kernel 141. The shell 139 is an interface between the user and the kernel of the operating system. The shell is the outermost layer of the operating system. The shell manages the interaction between the user and the operating system: it waits for the user's input, interprets the user's input to the operating system, and handles the output of the various operating system operations.
The kernel 141 consists of those parts of the operating system that manage memory, files, peripherals, and system resources. The kernel 141 interacts directly with the hardware; the operating system kernel usually runs processes, provides inter-process communication, and provides CPU time slice management, interrupt handling, memory management, I/O management, and so on.
The application programs 143 include programs related to controlling the autonomous driving of a car, for example, a program that manages the interaction between the autonomous car and obstacles on the road, a program that controls the route or speed of the autonomous car, and a program that controls the interaction between the autonomous car and other autonomous cars on the road. The application programs 143 also exist on the system of the software deployment server 149. In one embodiment, when the application program 143 needs to be executed, the computer system 101 may download the application program 143 from the software deployment server 149.
A sensor 153 is associated with the computer system 101. The sensor 153 is used to detect the environment around the computer 101. For example, the sensor 153 can detect animals, cars, obstacles, pedestrian crossings, and the like, and can further detect the environment around such objects, for example: the environment around an animal, such as other animals appearing around it, the weather conditions, the brightness of the surrounding environment, and so on. Optionally, if the computer 101 is located on an autonomous vehicle, the sensor may be a radar system or the like.
The computer system 112 may also receive information from, or transfer information to, other computer systems. Alternatively, the sensor data collected by the sensor system 104 of the vehicle 100 may be transferred to another computer for processing. As shown in FIG. 3, data from the computer system 112 may be transmitted via a network to a computer 720 on the cloud side for further processing. The network and intermediate nodes may include various configurations and protocols, including the Internet, the World Wide Web, an intranet, a virtual private network, a wide area network, a local area network, a private network using proprietary communication protocols of one or more companies, Ethernet, wireless fidelity (WiFi), the hypertext transfer protocol (HTTP), and various combinations of the foregoing. Such communication may be implemented by any device capable of transferring data to and from other computers, such as a modem or a wireless interface.
In one example, the computer 720 may include a server having multiple computers, such as a load-balancing server farm, that exchanges information with different nodes of the network for the purpose of receiving, processing, and transmitting data from the computer system 112. The server may be configured similarly to the computer system 110, with a processor 730, a memory 740, instructions 750, and data 760.
Scenario example 2: video surveillance
随着智慧城市、平安城市业务的发展,监控技术成为不可或缺的产品和技术。平安城市解决方案中,监控系统是视频刑侦、安防预警、交通指挥中的基础设备。目前的目标跟踪技术是基于监控系统负责管理区域中的多个摄像头实现。目标跟踪的首要步骤是确定目标的位置,再根据目标的位置有效的调度摄像头进行目标跟踪。With the development of smart city and safe city business, monitoring technology has become an indispensable product and technology. In the Safe City solution, the monitoring system is the basic equipment in video criminal investigation, security early warning, and traffic command. The current object tracking technology is based on the monitoring system responsible for managing multiple cameras in the area. The first step of target tracking is to determine the position of the target, and then effectively dispatch the camera to track the target according to the position of the target.
Taking the monitoring and management system 200 as an example, the system architecture to which the target tracking method provided by this application is applicable is shown in FIG. 2b. The monitoring and management system 200 may include a server 201 and a radar system 202. The radar system 202 may collect point cloud data within its detection range. The server 201 may be used to manage one or more radar systems in a certain area, may receive the point cloud data returned by the radar system 202, and, after processing and analyzing the point cloud data, computes the position information of the target and appropriately schedules the radar system 202 to track the target. The monitoring and management system 200 may further include a display screen 203 for presenting the target tracking results in real time.
In this application, the target object may be a person, an animal, or a vehicle, and the target tracking method provided in the embodiments of this application can be used for tracking targets such as human bodies, animals, or vehicles.
Specifically, with reference to the descriptions of FIG. 1, FIG. 2a, and FIG. 2b, in the embodiments of this application, the vehicle in FIG. 1, the processor in the computer system in FIG. 2a, and the server in FIG. 2b can obtain the data collected by a sensor system (for example, a radar system), as well as the instructions related to target tracking stored in a storage device, and process the data based on the obtained instructions to obtain a target tracking result. Further, the vehicle can perform related operations, such as automated driving, based on the target tracking result.
In the embodiments of this application, target tracking means analyzing the point cloud data collected by the radar system, in combination with a motion model and error covariance matrices, to obtain the positions and motion states of obstacles around the vehicle. Specifically, when performing target tracking, the point cloud data collected by the radar system must first be clustered (to distinguish the point cloud data of the individual objects, so as to obtain the point cloud data corresponding to each target object) and subjected to centroid estimation (to estimate the centroid of each target object's point cloud data and obtain the centroid measurement value corresponding to each target object); the centroid measurement value can thereby be obtained. A centroid measurement value of the target object can be obtained in each of a series of consecutive frames, and the input to target tracking is the centroid measurement value obtained for each frame. In some scenarios, because of problems such as vehicle turning or occlusion, the centroid estimates of the point cloud data of the same target object in different frames may exhibit position disturbances. In the prior art, when performing target tracking, the error covariance matrix considers only the measurement error related to the point cloud measurement performance of the radar system and cannot accurately quantify the error caused by the centroid estimation of the point cloud data; as a result, the accuracy of target tracking is reduced, the filter may even diverge, and the track may break off. Specifically, as shown in FIG. 4, the signal transmitter of the radar system (the letter "T" in FIG. 4) sends a radar signal to the target object (a vehicle in FIG. 4), and the signal receiver (the letter "R" in FIG. 4) receives the signal reflected by the target object; the reflected signal can be processed to obtain point cloud data. When the target object is at vehicle position 1, the corresponding point cloud data is 401; when the target object has driven to vehicle position 2, the corresponding point cloud data is 402. As shown in FIG. 4, the deviation between the centroid position estimated from point cloud data 401 and the true centroid position is small, whereas, because the target object turned while driving, the deviation between the centroid position estimated from point cloud data 402 and the true centroid position is large.
To solve the above problem, an embodiment of this application provides a target tracking method.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of a target tracking method provided by an embodiment of this application. As shown in FIG. 5, the target tracking method provided by the embodiment of this application includes the following steps.
501. Acquire point cloud data of the target object, and perform centroid estimation on the point cloud data.
In this embodiment of this application, the radar system may scan the target object to obtain point cloud data of the target object. The point cloud data is obtained by scanning the target object with the radar system; it can be recorded in the form of points, and each point can include coordinates and a motion state.
The coordinates can represent the position of the point relative to the radar system. For some radar systems, the collected point cloud data is expressed in a polar coordinate system, in which case the coordinates of each point can include the distance between the point and the radar system, as well as the bearing of the point.
The motion state can represent the velocity of each point. For some radar systems, the collected point cloud data is expressed in a polar coordinate system, in which case the motion state of each point can include the radial velocity of the point relative to the vehicle.
This embodiment of this application does not limit the specific way in which the point cloud data is expressed.
In this embodiment of this application, after the point cloud data of the target object is acquired, centroid estimation may be performed on the point cloud data. Specifically, the average of the coordinate positions of the points included in the point cloud data can be computed and used as the position of the centroid, and the average of the velocities of the points can be computed and used as the velocity of the centroid. This embodiment of this application does not limit the specific computation used for centroid estimation.
In this embodiment of this application, the point cloud data of the target object collected by the radar system can be acquired, and centroid estimation can be performed on the point cloud data to obtain a state measurement value. For example, the state measurement value may be Z_k = [R_k, θ_k, v_rk], where R_k represents the distance between the radar system and the centroid, θ_k represents the bearing of the centroid, and v_rk represents the radial velocity of the centroid.
It should be noted that, in an optional implementation, a device on the terminal side (for example, the processor of the vehicle) may acquire the point cloud data of the target object and perform centroid estimation on it; alternatively, the device on the terminal side may send the point cloud data of the target object to a server on the cloud side, and the server then acquires the point cloud data and performs centroid estimation on it.
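As a sketch of step 501, the averaging-based centroid estimation described above might look like the following. This is a minimal illustration only: the assumption that points arrive in polar form (R, θ, v_r) and are averaged in Cartesian coordinates before converting back is made for the example; the embodiment does not prescribe a specific averaging scheme.

```python
import math

def estimate_centroid(points):
    """Estimate the centroid measurement Z_k = [R_k, theta_k, v_rk] of one
    clustered target from its radar points, each given as (R, theta, v_r).
    Positions are averaged in Cartesian coordinates and converted back to
    polar form; the radial velocities are simply averaged."""
    xs = [r * math.cos(t) for r, t, _ in points]
    ys = [r * math.sin(t) for r, t, _ in points]
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    r_k = math.hypot(cx, cy)        # distance between radar and centroid
    theta_k = math.atan2(cy, cx)    # bearing of the centroid
    v_rk = sum(v for _, _, v in points) / len(points)  # mean radial velocity
    return [r_k, theta_k, v_rk]
```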
502. Acquire a target covariance matrix, where the target covariance matrix represents the centroid position error introduced during centroid estimation.
In this embodiment of this application, a target covariance matrix may be acquired, where the target covariance matrix represents the centroid position error introduced during centroid estimation.
In this embodiment of this application, track initiation may be performed on the result of centroid estimation. Track initiation refers to the process by which the system confirms a target track before entering stable target tracking; its goal is to initiate the correct track with the highest possible probability while suppressing false tracks. If track initiation is incorrect, correct tracking of the target and information fusion cannot be achieved; if too many false tracks are initiated, a large computational burden is imposed on subsequent comprehensive data processing, affecting its efficiency and correctness. Track initiation algorithms can include, but are not limited to, two categories: sequential processing techniques represented by heuristic algorithms and logical rules, and batch processing techniques represented by the Hough transform.
In this embodiment of this application, a track corresponding to a target object can be determined through track initiation. The track can include the centroid estimation result of each frame, and the centroid estimation result can be a centroid measurement value. Then, based on a motion model, a prediction can be made from the centroid measurement value of each frame to obtain the state prediction value of the next frame; the state prediction value of the next frame is then compared with the centroid measurement value of that next frame. If the error is within a preset range, the association is considered successful, and the current number of track data associations is recorded: starting from an association count of 0, the count is incremented by 1 each time a data association succeeds, thereby recording the track data association count N_a.
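The association-count bookkeeping described above can be sketched as follows. The Euclidean distance gate and its threshold are illustrative assumptions; the embodiment only requires that the prediction/measurement error fall within a preset range.

```python
def update_association_count(pred_xy, meas_xy, n_a, gate=2.0):
    """Compare the state prediction for the next frame with that frame's
    centroid measurement; if the error is within the preset range (here, a
    Euclidean gate), the association succeeds and the track's association
    count N_a is incremented by 1; otherwise it is left unchanged."""
    dx = pred_xy[0] - meas_xy[0]
    dy = pred_xy[1] - meas_xy[1]
    err = (dx * dx + dy * dy) ** 0.5
    return n_a + 1 if err <= gate else n_a
```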
In some embodiments, different target covariance matrices can be acquired depending on the current association count N_a, as described separately below.
In this embodiment of this application, if the currently recorded association count N_a is less than or equal to a preset value N, it can be considered that the current target tracking process is still in its initial stage, and the covariance matrices used in filtering are not yet close to their true (i.e., correct) values; target tracking at the current moment cannot be based on historical target tracking results. That is, if the currently recorded association count N_a is less than or equal to the preset value N, the target tracking has not yet converged, and in this case a target covariance matrix set according to empirical values can be used for target tracking, where the target covariance matrix can represent the centroid position error introduced during centroid estimation.
In one embodiment, if the currently recorded association count N_a ≤ N, where N is a preset value, an initial covariance matrix representing the centroid position error introduced during centroid estimation can be determined as the target covariance matrix. The target covariance matrix may be a preset covariance matrix, for example one set based on empirical values.
As an example, if the centroid position error introduced by centroid estimation is estimated to be 5 meters, and the error in the x direction and the error in the y direction of the Cartesian coordinate system are considered uncorrelated (that is, the x-direction centroid position error does not affect the y-direction centroid position error, and the y-direction error does not affect the x-direction error), the preset target covariance matrix can be set, for example, with the squared error 5² = 25 on the diagonal:

R_0 = [ 25  0
         0  25 ]

where the target covariance matrix is expressed in the Cartesian coordinate system. If the centroid position error introduced by centroid estimation is estimated to be 5 meters, and the error in the x direction and the error in the y direction of the Cartesian coordinate system are considered correlated (that is, the x-direction centroid position error affects the y-direction centroid position error, and the y-direction error affects the x-direction error), the preset target covariance matrix can be set, for example, to:

R_0 = [ 25  x
         y  25 ]

where the target covariance matrix is expressed in the Cartesian coordinate system, the value of x can be set based on the influence of the x-direction centroid position error on the y-direction centroid position error, and the value of y can be set based on the influence of the y-direction centroid position error on the x-direction centroid position error; this is not limited here.
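A minimal sketch of building the preset, empirically set target covariance matrix for the case N_a ≤ N. Expressing the 5 m error as the variance sigma² on the diagonal, and exposing the cross terms as parameters, are choices made for this illustration:

```python
def preset_target_covariance(sigma=5.0, xy=0.0, yx=0.0):
    """Preset target covariance matrix R_0 in Cartesian coordinates.
    sigma is the estimated centroid position error (meters); xy and yx
    model the influence of the x-direction error on the y-direction error
    and vice versa (both zero when the errors are uncorrelated)."""
    return [[sigma ** 2, xy],
            [yx, sigma ** 2]]
```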
In some implementations, if the currently recorded association count N_a is greater than the preset value N, it can be considered that the current target tracking process is in a stable stage, and the covariance matrices used in filtering are close to or at their true (i.e., correct) values. In this case, target tracking at the current moment can be performed based on historical target tracking results, and the target covariance matrix can be acquired based on the target tracking prediction value and the state measurement value. Specifically, the target tracking result of the target object at the previous moment can be acquired; based on a motion model, state prediction is performed on the target tracking result of the previous moment to obtain the target tracking prediction value at the current moment; the state measurement value at the current moment, obtained by performing centroid estimation on the point cloud data, is acquired; and the target covariance matrix is acquired based on the target tracking prediction value and the state measurement value.
In this embodiment of this application, the target covariance matrix can be continuously updated as target tracking proceeds. In each iteration (that is, each time the target covariance matrix at the current moment is computed), it is updated based on the target tracking prediction value from the previous moment and the state measurement value at the current moment. As the iterations of the target tracking process proceed, the target covariance matrix gradually approaches or reaches the true value, making the target tracking result more accurate.
Next, how to acquire the target covariance matrix based on the target tracking prediction value and the state measurement value is described.
In this embodiment of this application, if the association count N_a is greater than the preset value N, the target covariance matrix can be determined according to the error between the target tracking prediction value and the state measurement value at the current moment. The error can be computed by matrix subtraction of the target tracking prediction value and the state measurement value. Specifically, the target tracking prediction value can include a predicted centroid position, and the state measurement value can include a measured centroid position; in the Cartesian coordinate system, the predicted position includes a predicted position in the x direction and a predicted position in the y direction, and the measured position includes a measured position in the x direction and a measured position in the y direction. The difference between the predicted and measured positions in the x direction, and the difference between the predicted and measured positions in the y direction, can then be computed; the x-direction difference, or a preset multiple of it, can be used as one diagonal element of the target covariance matrix, and the y-direction difference, or a preset multiple of it, as the other diagonal element.
In more detail, in one embodiment, the target covariance matrix can be acquired from the target tracking prediction value and the state measurement value by the following formulas:

Δx = x_{k+1} − x̂_{k+1|k};  Δy = y_{k+1} − ŷ_{k+1|k};

R_1 = [ Δx   0
         0  Δy ]

where x_{k+1} and y_{k+1} are the state measurement values, x̂_{k+1|k} and ŷ_{k+1|k} are the target tracking prediction values, and R_1 is the target covariance matrix; as described above, a preset multiple of each difference may also be used on the diagonal.
It should be noted that the above computation of the target covariance matrix is only an illustration; as long as the centroid position error introduced during centroid estimation can be represented, this application does not limit the way in which the target covariance matrix is computed.
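For the case N_a > N, the computation of R_1 from the prediction/measurement differences described above might be sketched as follows. Taking the absolute value of each difference, and the optional preset multiple `scale`, are assumptions made for this illustration:

```python
def target_covariance_from_prediction(meas_xy, pred_xy, scale=1.0):
    """Target covariance matrix R_1 whose diagonal elements are the x- and
    y-direction differences between the position measurement and the
    target-tracking prediction (or a preset multiple of those differences);
    off-diagonal elements are zero."""
    dx = abs(meas_xy[0] - pred_xy[0])
    dy = abs(meas_xy[1] - pred_xy[1])
    return [[scale * dx, 0.0],
            [0.0, scale * dy]]
```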
503. Perform target tracking on the target object based on the target covariance matrix, to obtain a target tracking result.
Because vehicles or other types of target objects may turn or be occluded, the centroid estimates of the point cloud data of the same target object in different frames may exhibit position disturbances. In the prior art, when performing target tracking, the error covariance matrix considers only the measurement error related to the point cloud measurement performance of the radar system and cannot accurately quantify the error caused by the centroid estimation of the point cloud data, which reduces the accuracy of target tracking. In this embodiment of this application, the target covariance matrix can quantify the error caused by the centroid estimation of the point cloud data relatively accurately; therefore, if target tracking is performed on the target object based on the target covariance matrix, the accuracy of target tracking can be improved.
Next, how to perform target tracking on the target object based on the target covariance matrix is described in detail.
When performing target tracking, data such as the target tracking result of the previous moment, the centroid estimation result (state measurement value) at the current moment, and the covariance matrices (which can include the measurement equation covariance matrix and the state equation covariance matrix) are needed for target tracking at the current moment. The target covariance matrix obtained above can be fused into the measurement equation covariance matrix; that is, the target covariance matrix and the measurement equation covariance matrix are synthesized to obtain a synthesized target covariance matrix, and the synthesized target covariance matrix is used as input data for target tracking at the current moment.
In this embodiment of this application, performing target tracking on the target object based on the target covariance matrix can be understood as performing target tracking on the target object with the target covariance matrix as part of the input data; or, equivalently, using the target covariance matrix as part of the input data when performing target tracking on the target object.
Next, how to synthesize the target covariance matrix and the measurement equation covariance matrix to obtain the synthesized target covariance matrix is described.
In this embodiment of this application, after the target covariance matrix is acquired, the measurement equation covariance matrix can be acquired, where the measurement equation covariance matrix represents the measurement deviation of the radar system; the measurement equation covariance matrix and the target covariance matrix are synthesized to obtain the synthesized target covariance matrix.
In one embodiment, because of limitations of the radar system, the point cloud data collected by the radar system is usually expressed in a polar coordinate system; therefore, the measurement equation covariance matrix related to the performance of the radar system is usually a matrix expressed in the polar coordinate system, whereas the target covariance matrix is a matrix expressed in the Cartesian coordinate system. In this case, the measurement equation covariance matrix can be converted into a matrix expressed in the Cartesian coordinate system, and matrix addition can be performed on the converted measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix.
Next, how to convert the measurement equation covariance matrix into a matrix expressed in the Cartesian coordinate system is described.
In this embodiment of this application, a covariance transformation matrix can first be constructed; this covariance transformation matrix can be used to convert a covariance matrix from the polar coordinate system into an expression in the Cartesian coordinate system. In one implementation, the covariance transformation matrix is related to the state measurement value at the current moment. Specifically, the state measurement value at the current moment can be Z_k = [R_k, θ_k, v_rk], where R_k represents the distance between the radar system and the centroid, θ_k represents the bearing of the centroid, and v_rk represents the radial velocity of the centroid. Some elements of the covariance transformation matrix can be constructed from the distance between the radar system and the centroid and from the bearing of the centroid in the state measurement value, and the remaining elements can be filled with 0 and 1. Specifically, cos θ_k and −R_k·sin θ_k can be used as the elements of the first row of the covariance transformation matrix, and sin θ_k and R_k·cos θ_k as the elements of the second row.

In more detail, the covariance transformation matrix can be:

A_k = [ cos θ_k  −R_k·sin θ_k
        sin θ_k   R_k·cos θ_k ]
It should be noted that the above covariance transformation matrix is only an illustration; in practical applications, the dimensions of the covariance transformation matrix can be adjusted flexibly, which is not limited here.
After the covariance transformation matrix is obtained, the measurement equation covariance matrix can be converted into a matrix expressed in the Cartesian coordinate system based on the covariance transformation matrix, and matrix addition can be performed on the converted measurement equation covariance matrix and the target covariance matrix to obtain the synthesized target covariance matrix. For example, the target covariance matrix and the measurement equation covariance matrix can be synthesized by the following formula:

R_all = R_0 + A_k·R_m·A_k′;

Matrix addition can be understood as element-wise addition of two matrices of the same size to obtain a new matrix. For example, if matrix A is

A = [ a11  a12
      a21  a22 ]

and matrix B is

B = [ b11  b12
      b21  b22 ]

then the matrix C obtained by matrix addition of A and B is

C = [ a11+b11  a12+b12
      a21+b21  a22+b22 ]

where R_all is the synthesized target covariance matrix, R_0 is the target covariance matrix, A_k·R_m·A_k′ is the converted measurement equation covariance matrix, R_k and θ_k are the state measurement values, A_k is the covariance transformation matrix, and A_k′ is the transpose of the covariance transformation matrix.
It should be noted that the above method of synthesizing the target covariance matrix and the measurement equation covariance matrix is only an illustration; in practical applications, it is only necessary to ensure that the synthesized target covariance matrix includes information related to the centroid position error introduced during centroid estimation, and this application does not limit the specific matrix synthesis method.
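The synthesis R_all = R_0 + A_k·R_m·A_k′ can be sketched for the 2×2 position case as follows. Restricting R_m to its range/bearing block, so that all matrices are 2×2, is an assumption made to keep the example small:

```python
import math

def synthesize_covariance(r0, rm, r_k, theta_k):
    """Convert the polar-coordinate measurement covariance R_m into the
    Cartesian coordinate system with the transformation matrix A_k, then
    matrix-add it to the target covariance R_0: R_all = R_0 + A_k R_m A_k'."""
    a = [[math.cos(theta_k), -r_k * math.sin(theta_k)],
         [math.sin(theta_k),  r_k * math.cos(theta_k)]]

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    at = [[a[j][i] for j in range(2)] for i in range(2)]  # transpose A_k'
    conv = matmul(matmul(a, rm), at)  # converted measurement covariance
    return [[r0[i][j] + conv[i][j] for j in range(2)] for i in range(2)]
```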
In this embodiment of this application, the synthesized target covariance matrix includes information related to the centroid position error introduced during centroid estimation, and it quantifies the error caused by the centroid estimation of the point cloud data relatively accurately; therefore, if target tracking is performed on the target object based on the synthesized target covariance matrix, the accuracy of target tracking can be improved.
In this embodiment of this application, target tracking can be performed on the target object based on the target covariance matrix; in more detail, based on the synthesized target covariance matrix. Next, how to perform target tracking on the target object based on the synthesized target covariance matrix is described. Specifically, in this embodiment of this application, a state equation covariance matrix P can be acquired, where P can be a matrix composed of the covariances between states: the diagonal elements are the variances of the individual states, and the remaining elements are the covariances of the corresponding pairs of states. P can thus be a multi-dimensional square matrix whose dimension equals the number of states, and P can be a symmetric square matrix. For example, if the state X contains two quantities, position p and velocity v, then, since the covariance of a variable with itself is its variance, the diagonal elements are the variances of p and v respectively, and the other two elements are the covariances between the two; because covariance is symmetric in the order of its arguments, the state equation covariance matrix is symmetric. In use, the state equation covariance matrix P is an iteratively updated quantity: after each round of prediction and update, P takes a new value. Therefore, it can be initialized from a rough estimate without demanding great precision in the initialization, and over several rounds of iteration it will converge more and more closely to the true value.
In this embodiment of the present application, the state measurement value obtained by performing centroid estimation at the previous moment may be obtained; the state measurement value may be X(k|k)=[x_{k|k}, y_{k|k}, v_{x,k|k}, v_{y,k|k}], where x_{k|k} and y_{k|k} represent the position measurements of the centroid at the current moment: x_{k|k} is the position measurement of the centroid in the x direction of the Cartesian coordinate system, and y_{k|k} is the position measurement in the y direction. v_{x,k|k} and v_{y,k|k} represent the velocity measurements of the centroid at the current moment: v_{x,k|k} is the velocity measurement in the x direction, and v_{y,k|k} is the velocity measurement in the y direction.
In this embodiment of the present application, the state at the current moment may be predicted using the motion model and the state measurement value obtained by centroid estimation at the previous moment, yielding the state prediction value at the current moment. Exemplarily, with F(k) representing the motion model and V(k) representing the state noise, the state prediction value can be calculated by the following formula:

X(k+1|k)=F(k)X(k|k)+V(k);

where X(k+1|k) represents the state prediction value at the current moment.
The motion model is an important part of the design of the tracking filter algorithm: selecting a reasonable motion model helps accurately predict the future state or trajectory of the tracked target, and is an important condition for precise tracking control. The motion model is mainly represented by a state-space model and expresses the motion law of the target; the embodiments of the present application do not limit the specific type of motion model.
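As an illustrative sketch outside the text of the application, the prediction formula above can be exercised numerically with a constant-velocity motion model; the sampling interval T and all numeric values below are assumptions chosen for the example, not values from the application.

```python
import numpy as np

# State X = [x, y, vx, vy]; constant-velocity transition matrix F(k)
# with an assumed sampling interval T (hypothetical value).
T = 0.1
F = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

X_prev = np.array([1.0, 2.0, 3.0, 4.0])  # X(k|k) from centroid estimation (assumed)
X_pred = F @ X_prev                      # X(k+1|k) = F(k) X(k|k), noise-free sketch
```

Each position component advances by T times the corresponding velocity, so here X_pred is [1.3, 2.4, 3.0, 4.0].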
In this embodiment of the present application, the state equation covariance matrix at the current moment may be predicted based on the state equation covariance matrix at the previous moment and the motion equation: P(k+1|k)=F(k)P(k|k)F'(k)+Q(k); where P(k+1|k) represents the state equation covariance matrix at the current moment, F(k) represents the motion model, F'(k) represents the transpose of F(k), and Q(k) represents the process noise covariance matrix.
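The covariance prediction can be sketched in the same illustrative style (a one-dimensional position/velocity model with assumed numbers, not from the application):

```python
import numpy as np

T = 0.1
F = np.array([[1, T],
              [0, 1]], dtype=float)    # F(k) for a 1-D position/velocity state
P_prev = np.diag([1.0, 1.0])           # P(k|k), assumed
Q = 0.01 * np.eye(2)                   # process-noise covariance Q(k), assumed

P_pred = F @ P_prev @ F.T + Q          # P(k+1|k) = F P F' + Q
```

Note that the result stays symmetric, as required of a covariance matrix.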
Afterwards, the target tracking prediction value at the current moment can be obtained according to the measurement matrix and the prediction made from the target tracking result at the previous moment. Exemplarily, with H(k+1) representing the measurement matrix and W(k+1) representing the measurement error, the target tracking prediction value at the current moment can be:

Z(k+1|k)=H(k+1)X(k+1|k)+W(k+1), where Z(k+1|k) represents the target tracking prediction value at the current moment.
Afterwards, the innovation can be calculated based on the target tracking prediction value and the state measurement value at the current moment. Exemplarily, the innovation can be v(k+1)=Z(k+1)−Z(k+1|k), where v(k+1) represents the innovation, Z(k+1|k) represents the target tracking prediction value at the current moment, and Z(k+1) represents the state measurement value at the current moment. It should be noted that the innovation represents the update to the matrices related to the state information.
The innovation covariance matrix is then calculated based on the measurement matrix, the state equation covariance matrix at the current moment, and the synthesized target covariance matrix obtained above. Exemplarily, the innovation covariance matrix can be calculated by the following formula:

S(k+1)=H(k+1)P(k+1|k)H'(k+1)+R_all; where S(k+1) represents the innovation covariance matrix, H(k+1) represents the measurement matrix, P(k+1|k) represents the state equation covariance matrix at the current moment, and R_all represents the synthesized target covariance matrix.
In this embodiment of the present application, an innovation covariance matrix may be generated based on the synthesized target covariance matrix and other covariance matrices, and then the target tracking result at the current moment is calculated based on the innovation covariance matrix, where the target tracking result may be the centroid position of the target object at the current moment.

In this embodiment of the present application, the target tracking result at the current moment may be calculated based on the state equation covariance matrix at the current moment, the measurement matrix, the innovation covariance matrix, the innovation, and the state prediction value at the current moment, where the target tracking result may be the centroid position of the target object at the current moment.

Specifically, target tracking can be performed based on the following formulas:
K(k+1)=P(k+1|k)H'(k+1)S^{−1}(k+1)

X(k+1|k+1)=X(k+1|k)+K(k+1)·v(k+1);

where P(k+1|k) represents the state equation covariance matrix at the current moment, H(k+1) represents the measurement matrix, S(k+1) represents the innovation covariance matrix, v(k+1) represents the innovation, X(k+1|k) represents the state prediction value at the current moment, and X(k+1|k+1) represents the target tracking result.
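The full update step (innovation, innovation covariance, gain, and state update) can be sketched numerically as follows; the state layout, the measurement matrix H, and all numbers are assumptions for illustration only.

```python
import numpy as np

# Hypothetical 1-D tracking example: state [x, vx], position-only measurement.
P_pred = np.array([[1.02, 0.1],
                   [0.1, 1.01]])          # P(k+1|k), assumed
H = np.array([[1.0, 0.0]])                # measurement matrix H(k+1), assumed
R_all = np.array([[0.5]])                 # synthesized target covariance matrix, assumed
X_pred = np.array([1.3, 3.0])             # X(k+1|k), assumed
z = np.array([1.5])                       # Z(k+1), current measurement, assumed

S = H @ P_pred @ H.T + R_all              # S(k+1) = H P H' + R_all
v = z - H @ X_pred                        # innovation v(k+1)
K = P_pred @ H.T @ np.linalg.inv(S)       # K(k+1) = P H' S^-1
X_upd = X_pred + K @ v                    # X(k+1|k+1) = X(k+1|k) + K v
```

A larger R_all shrinks the gain K, so a less trusted centroid measurement moves the updated state less: this is exactly how the synthesized covariance matrix feeds into the tracking accuracy.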
It should be noted that the above specific implementation of target tracking based on the target covariance matrix is only an illustration. In practical applications, the target tracking process may differ depending on the filtering algorithm used, and adaptive adjustments may be made; this application does not limit it.
It should be noted that, in some implementations, the point cloud data of the target object may be acquired by the terminal side (for example, a vehicle or a monitoring device) and sent to a server; the server performs centroid estimation on the point cloud data, obtains the target covariance matrix, performs target tracking on the target object based on the target covariance matrix to obtain a target tracking result, and sends the target tracking result back to the terminal side.

It should be noted that, in some other implementations, the terminal side (for example, a vehicle or a monitoring device) may acquire the point cloud data of the target object and perform centroid estimation on it; the server may then obtain the target covariance matrix and perform target tracking on the target object based on the target covariance matrix to obtain a target tracking result.
An embodiment of the present application provides a target tracking method. The method includes: acquiring point cloud data of a target object and performing centroid estimation on the point cloud data; acquiring a target covariance matrix, where the target covariance matrix represents the centroid position error introduced during centroid estimation; and performing target tracking on the target object based on the target covariance matrix to obtain a target tracking result. Because vehicles or other types of target objects may turn or be occluded, the centroid estimates of the point cloud data of the same target object in different frames may exhibit positional disturbance. In the prior art, during target tracking, the error covariance matrix considers only the measurement error related to the point cloud measurement performance of the radar system and cannot accurately quantify the error caused by the centroid estimation of the point cloud data, which reduces the accuracy of target tracking. In the embodiments of the present application, the target covariance matrix can quantify relatively accurately the error caused by the centroid estimation of the point cloud data; therefore, tracking the target object based on the target covariance matrix can improve the accuracy of target tracking.
Next, an application example of the target tracking method based on the constant turn rate and velocity (CTRV) model and the cubature Kalman filter (CKF) algorithm is described.

In this embodiment of the present application, the CTRV motion model and the CKF filtering algorithm are used to implement target tracking based on point cloud data. The specific implementation process can be shown in Figure 6, and the method includes:
601. Parameter initialization.

In this embodiment of the present application, when performing target tracking, data related to target tracking first needs to be prepared. The data related to target tracking may include a measurement equation covariance matrix related to the performance of the radar system and a target covariance matrix representing the centroid position error introduced during centroid estimation; for details, refer to the description of step 502 in the embodiment corresponding to Figure 5, which will not be repeated here.
602. Set the CKF cubature point set ξ according to the dimension of the state equation.

In this embodiment of the present application, the CKF cubature point set is related to the number of state dimensions n: it consists of the 2n points ξ_i = √n·e_i and ξ_{n+i} = −√n·e_i (i = 1, …, n), where e_i is the i-th unit vector. For example, when n = 3, the corresponding cubature point set is:

ξ = √3·{[1,0,0]', [0,1,0]', [0,0,1]', [−1,0,0]', [0,−1,0]', [0,0,−1]'};

For example, when n = 2, the corresponding cubature point set is:

ξ = √2·{[1,0]', [0,1]', [−1,0]', [0,−1]'}.
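The point sets above can be generated programmatically; this helper is an illustrative sketch of the standard third-degree cubature rule, not code from the application.

```python
import numpy as np

def cubature_points(n):
    """Third-degree spherical-radial cubature point set: 2n points sqrt(n) * (+/- e_i)."""
    I = np.eye(n)
    return np.sqrt(n) * np.hstack([I, -I])   # columns are the points, shape (n, 2n)

xi3 = cubature_points(3)   # 6 points for n = 3
xi2 = cubature_points(2)   # 4 points for n = 2
```

The number of points grows only linearly with the state dimension (2n), one reason CKF scales well compared with dense sigma-point grids.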
603. Perform centroid estimation.

In this embodiment of the present application, the point cloud data of the target object collected by the radar system can be acquired, and centroid estimation is performed on the point cloud data to obtain the state measurement value. For example, the state measurement value can be Z_k=[R_k, θ_k, v_{r,k}], where R_k represents the distance between the radar system and the centroid, θ_k represents the azimuth of the centroid, and v_{r,k} represents the radial velocity of the centroid.
604. State prediction.

In this embodiment of the present application, the target tracking result at the previous moment and the CTRV motion model can be obtained, and the track state of the target object is predicted based on them.

The state prediction may include: generating a plurality of extended cubature points according to the initialization parameters of step 601, the CKF cubature point set obtained in step 602, and the target tracking result at the previous moment; propagating each of the extended cubature points through the CTRV model to generate the predicted value of each cubature point at the current moment; and predicting the target tracking prediction value and the state equation covariance matrix at the current moment according to the predicted values of the cubature points. Specifically, step 604 may include steps 6041-6043 described below.
6041. Generate extended cubature points.

In this embodiment of the present application, a plurality of extended cubature points may be generated according to the initialization parameters of step 601 and the CKF cubature point set obtained in step 602. Specifically, the extended cubature points may be generated based on the following formula:

X_i(k|k) = chol(P(k|k))·ξ_i + X(k|k);

where P(k|k) represents the state equation covariance matrix at the previous moment, chol(P(k|k)) represents its lower-triangular (Cholesky) factor, ξ_i represents a cubature point, and X(k|k) represents the target tracking result at the previous moment.
6042. Propagate each extended cubature point through the CTRV model to obtain the predicted value corresponding to each cubature point.

6043. Predict the state value at the current moment and the state equation covariance matrix.
In this embodiment of the present application, the state prediction value X(k+1|k) at the current moment may be obtained by averaging the propagated extended cubature points; specifically, X(k+1|k) may be calculated based on the following formula:

X(k+1|k) = (1/2n)·Σ_{i=1}^{2n} X_i*(k+1|k);

where X_i*(k+1|k) represents the extended cubature points propagated through the CTRV model.
Afterwards, the state equation covariance matrix P(k+1|k) can be predicted based on the predicted values corresponding to the cubature points and the state prediction value at the current moment.
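Steps 6041-6043 can be sketched end to end as follows. For illustration only, the identity map stands in for the CTRV propagation, and all numbers are assumed; with a real CTRV model, each column of the point matrix would be pushed through the nonlinear state transition instead.

```python
import numpy as np

# Hypothetical 2-D example (n = 2); all numeric values are assumed.
n = 2
xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # cubature point set

X = np.array([1.0, 2.0])          # X(k|k), previous target tracking result (assumed)
P = np.diag([0.25, 0.09])         # P(k|k), previous state equation covariance (assumed)

L = np.linalg.cholesky(P)         # lower-triangular factor: P = L @ L.T
pts = L @ xi + X[:, None]         # extended cubature points, shape (n, 2n)

# Identity propagation stands in for the CTRV model here, so the
# equal-weight mean recovers the state and the spread recovers P:
X_pred = pts.mean(axis=1)
P_rec = (pts - X_pred[:, None]) @ (pts - X_pred[:, None]).T / pts.shape[1]
```

Recovering X and P exactly under identity propagation is a useful sanity check that the point generation and averaging are implemented consistently.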
Data association can then be performed based on the state prediction value at the current moment and the centroid measurement value at the current moment. In this embodiment of the present application, track initiation can be performed on the centroid estimation results: through track initiation, a track corresponding to the target object can be determined, and the track may include the centroid estimation result of each frame, where the centroid estimation result may be a centroid measurement value. The centroid measurement value of each frame can then be predicted based on the motion model to obtain the state prediction value of the next frame, and the state prediction value of the next frame is compared with the centroid measurement value of the next frame. If the error is within a preset range, the association is considered successful, and the number of data associations of the current track is recorded: starting from a data association count of 0, each successful association increments the count by 1, thereby recording the track data association count N_a.
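The association counter N_a described above can be sketched minimally as follows; the gating threshold and the scalar error measure are assumptions chosen for illustration.

```python
# Hedged sketch of the track association counter N_a; the threshold is assumed.
def update_association_count(n_a, predicted, measured, threshold=1.0):
    """Increment the count when the prediction/measurement error is within the gate."""
    error = abs(predicted - measured)
    return n_a + 1 if error < threshold else n_a

n_a = 0
n_a = update_association_count(n_a, 10.2, 10.5)   # within threshold -> associated
n_a = update_association_count(n_a, 11.0, 15.0)   # outside threshold -> not associated
```

In practice the error would be a multivariate gating test (e.g. against the innovation covariance), but the counting logic is the same.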
605. Measurement covariance matrix estimation.

In this embodiment of the present application, the measurement covariance matrix needs to be obtained. Specifically, the synthesized target covariance matrix can be obtained based on step 502 in the embodiment corresponding to Figure 5, where the result of the measurement covariance matrix estimation is the target covariance matrix; for details, refer to the description in the above embodiment, which will not be repeated here.
606. Prediction of the state measurement value at the next moment.

In this embodiment of the present application, the state measurement value at the next moment may be predicted based on the state measurement value at the current moment and the cubature points. Specifically, the prediction of the state measurement value at the next moment may include steps 6061 and 6062.
6061. Generate extended cubature points.

In this embodiment of the present application, the state measurement value X(k+1|k) at the current moment, the state equation covariance matrix P(k+1|k), and the CKF cubature point set ξ can be used to generate a plurality of extended cubature points. Specifically, the extended cubature points may be generated based on the following formula:

X_i(k+1|k) = chol(P(k+1|k))·ξ_i + X(k+1|k);

Afterwards, for each extended cubature point, the corresponding measurement prediction value Z_i(k+1|k) can be generated according to the measurement equation, and the extended matrix Z_cuba(k+1|k) is formed.
6062. Predict the state measurement value, the innovation covariance matrix, and the state measurement covariance matrix at the next moment.

In this embodiment of the present application, the state measurement value, the innovation covariance matrix, and the state measurement covariance matrix at the next moment can be predicted by the following formulas:

Z(k+1|k) = (1/2n)·Σ_{i=1}^{2n} Z_i(k+1|k);

S(k+1) = (1/2n)·Z_cuba(k+1|k)·Z_cuba'(k+1|k) − Z(k+1|k)·Z'(k+1|k) + R_all;

P_XZ(k+1) = (1/2n)·X_cuba(k+1|k)·Z_cuba'(k+1|k) − X(k+1|k)·Z'(k+1|k);

where Z_i(k+1|k) represents the measurement prediction value corresponding to the measurement equation, Z_cuba(k+1|k) represents the extended matrix, S(k+1) represents the innovation covariance matrix, R_all represents the synthesized target covariance matrix, Z(k+1|k) represents the predicted state measurement value, and P_XZ(k+1) represents the state measurement covariance matrix.
607. State update.

In this embodiment of the present application, the target tracking result at the current moment may be calculated based on the state equation covariance matrix at the current moment, the measurement matrix, the innovation covariance matrix, the innovation, and the state prediction value at the current moment, where the target tracking result may be the centroid position of the target object at the current moment.
K(k+1)=P(k+1|k)H'(k+1)S^{−1}(k+1)

X(k+1|k+1)=X(k+1|k)+K(k+1)·v(k+1);

where v(k+1)=Z(k+1)−Z(k+1|k), and v(k+1) represents the innovation.
At this point, point cloud target tracking based on the CTRV model and the CKF filtering algorithm is complete, that is, the centroid position X(k+1|k+1) of the target object at the current moment is obtained.

In some scenarios, because vehicles or other types of target objects may turn or be occluded (where the occlusion may be caused by other obstacles while traveling straight), the centroid estimates of the point cloud data of the same target object in different frames may exhibit positional disturbance. Applying the synthesized target covariance matrix in the CTRV-plus-CKF algorithm for target tracking can effectively improve the target tracking accuracy in scenarios where the vehicle travels straight or turns; moreover, the CKF algorithm achieves third-order accuracy for nonlinear systems, giving higher filtering precision.
Next, an application example of the target tracking method based on the interacting multiple model-constant velocity-coordinated turn (IMM-CV-CT) model and the extended Kalman filter (EKF) algorithm is described.

The specific implementation process can be shown in Figure 7, and the method includes:
701. Centroid estimation.

In this embodiment of the present application, the point cloud data of the target object collected by the radar system can be acquired, and centroid estimation is performed on the point cloud data to obtain the state measurement value. For example, the state measurement value can be Z_k=[R_k, θ_k, v_{r,k}], where R_k represents the distance between the radar system and the centroid, θ_k represents the azimuth of the centroid, and v_{r,k} represents the radial velocity of the centroid.
702. Input the centroid estimation result and initialization parameters.

In this embodiment of the present application, when performing target tracking, data related to target tracking first needs to be prepared. The data related to target tracking may include a measurement equation covariance matrix related to the performance of the radar system and a target covariance matrix representing the centroid position error introduced during centroid estimation; for details, refer to the description of step 502 in the embodiment corresponding to Figure 5, which will not be repeated here.
703. Input interaction.

In this embodiment of the present application, according to the current transition probabilities and the current state values, the state value X_01(k|k) and state covariance matrix P_01(k|k) corresponding to the CV model, and the state value X_02(k|k) and state covariance matrix P_02(k|k) corresponding to the CT model, can be output.
X_01(k|k)=X_1(k|k)μ_{1|1}(k)+X_2(k|k)μ_{2|1}(k)

X_02(k|k)=X_1(k|k)μ_{1|2}(k)+X_2(k|k)μ_{2|2}(k)

P_01(k|k)=μ_{1|1}(k)(P_1(k|k)+(X_1(k|k)−X_01(k|k))·(X_1(k|k)−X_01(k|k))')+μ_{2|1}(k)(P_2(k|k)+(X_2(k|k)−X_01(k|k))·(X_2(k|k)−X_01(k|k))')

P_02(k|k)=μ_{1|2}(k)(P_1(k|k)+(X_1(k|k)−X_02(k|k))·(X_1(k|k)−X_02(k|k))')+μ_{2|2}(k)(P_2(k|k)+(X_2(k|k)−X_02(k|k))·(X_2(k|k)−X_02(k|k))')
Afterwards, the track state of the target object can be predicted according to the current states X_01(k|k) and X_02(k|k) and the CV and CT motion models, where P_01(k|k) and P_02(k|k) are used in the subsequent state update and covariance update.
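The input-interaction (mixing) step above can be sketched numerically as follows; the per-model states, covariances, and mixing probabilities are all assumed values for illustration.

```python
import numpy as np

# Hedged sketch of IMM input interaction (step 703); numbers are assumptions.
X1 = np.array([1.0, 0.5]); X2 = np.array([1.2, 0.4])   # X_j(k|k), per-model states
P1 = 0.1 * np.eye(2);      P2 = 0.2 * np.eye(2)        # P_j(k|k), per-model covariances
mu = np.array([[0.7, 0.4],                             # mixing probabilities mu_{i|j}(k);
               [0.3, 0.6]])                            # each column sums to 1

X01 = mu[0, 0] * X1 + mu[1, 0] * X2                    # mixed state for the CV model
X02 = mu[0, 1] * X1 + mu[1, 1] * X2                    # mixed state for the CT model

def mixed_cov(Xs, Ps, X0, w):
    """Mixed covariance: weighted sum of P_j plus spread-of-means terms."""
    return sum(w[i] * (Ps[i] + np.outer(Xs[i] - X0, Xs[i] - X0)) for i in range(2))

P01 = mixed_cov([X1, X2], [P1, P2], X01, mu[:, 0])
P02 = mixed_cov([X1, X2], [P1, P2], X02, mu[:, 1])
```

The spread-of-means terms make the mixed covariance account for disagreement between the two models, not just their individual uncertainties.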
704. Predict the track state of the target object based on the CV model:

X_1(k+1|k)=F_CV·X_01(k|k);

where F_CV denotes the state transition matrix of the CV (constant velocity) model, and X_1(k+1|k) is the prediction result obtained by predicting the track state of the target object based on the CV model.
705. Predict the track state of the target object based on the CT model:

X_2(k+1|k)=F_CT·X_02(k|k);

where F_CT denotes the state transition matrix of the CT (coordinated turn) model, and X_2(k+1|k) is the prediction result obtained by predicting the track state of the target object based on the CT model.
706. Predicted state synthesis.

In this embodiment of the present application, because the prediction result obtained based on the CV model and the prediction result obtained based on the CT model characterize different aspects of the track state, the prediction results need to be synthesized, and subsequent data association is performed based on the synthesized prediction result. Exemplarily, the two prediction results can be synthesized based on the following formula:

X(k+1|k)=X_1(k+1|k)μ_1(k)+X_2(k+1|k)μ_2(k);

where X(k+1|k) is the synthesized prediction result, μ_1(k) is the weight corresponding to the prediction result obtained based on the CV model, and μ_2(k) is the weight corresponding to the prediction result obtained based on the CT model.
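The weighted combination can be sketched in a few lines; the predictions and weights below are assumed values for illustration.

```python
import numpy as np

# Hedged numeric sketch of the prediction synthesis (step 706); values assumed.
X1p = np.array([1.1, 0.5])      # X_1(k+1|k), CV-model prediction
X2p = np.array([1.3, 0.4])      # X_2(k+1|k), CT-model prediction
mu1, mu2 = 0.6, 0.4             # model weights mu_1(k), mu_2(k); sum to 1

X_pred = mu1 * X1p + mu2 * X2p  # synthesized prediction X(k+1|k)
```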
707. Data association.

For the specific description of step 707, refer to Figure 5 and the description related to data association in the corresponding embodiment, which will not be repeated here.
708. Determine whether the convergence condition is satisfied; if not, use the preset initial covariance matrices R_m and R_0 to synthesize the measurement equation covariance matrix R_all used for filtering.

For step 708, refer to the description related to synthesizing the target covariance matrix in step 502 of the above embodiment, which will not be repeated here.
709. Determine whether the convergence condition is satisfied; if so, determine the target covariance matrix according to the error between the target tracking prediction value and the state measurement value at the current moment.

For step 709, refer to the description in step 502 of the above embodiment related to determining the target covariance matrix according to the error between the target tracking prediction value and the state measurement value at the current moment, which will not be repeated here.
710. Synthesize the covariance matrix.

For step 710, refer to the description related to synthesizing the target covariance matrix in step 503 of the above embodiment, which will not be repeated here.
711. Perform measurement prediction and filtering based on the CV model, and output the filtered state X_1(k|k).

712. Perform measurement prediction and filtering based on the CT model, and output the filtered state X_2(k|k).

713. Obtain the state update and covariance update of the two models respectively according to the EKF filtering algorithm.
P_01(k+1|k)=F_CV·P_01(k|k)·F'_CV+Q_noise

S_1(k+1|k)=H·P_01(k+1|k)·H'+R_all

K_1(k+1)=P_01(k+1|k)·H'·S_1^{−1}(k+1|k)

v_1(k+1)=Z(k+1)−Z_1(k+1|k)

X_1(k+1|k+1)=X_1(k+1|k)+K_1(k+1)·v_1(k+1)

P_1(k+1|k+1)=P_01(k+1|k)−K_1(k+1)·S_1(k+1|k)·K_1'(k+1);

P_02(k+1|k)=F_CT·P_02(k|k)·F'_CT+Q_noise

S_2(k+1|k)=H·P_02(k+1|k)·H'+R_all

K_2(k+1)=P_02(k+1|k)·H'·S_2^{−1}(k+1|k)

v_2(k+1)=Z(k+1)−Z_2(k+1|k)

X_2(k+1|k+1)=X_2(k+1|k)+K_2(k+1)·v_2(k+1)

P_2(k+1|k+1)=P_02(k+1|k)−K_2(k+1)·S_2(k+1|k)·K_2'(k+1)

where X_j(k+1|k+1) represents the state update result of model j, P_j(k+1|k+1) represents the covariance update result of model j, and Q_noise represents the process noise covariance.
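One model's covariance path through step 713 can be sketched numerically; H, R_all, and the predicted covariance below are assumptions chosen for illustration.

```python
import numpy as np

# Hedged sketch of one model's EKF update (step 713); values are assumptions.
P0 = np.array([[1.0, 0.2],            # P_0j(k+1|k), predicted covariance
               [0.2, 0.5]])
H = np.array([[1.0, 0.0]])            # measurement matrix, assumed
R_all = np.array([[0.3]])             # synthesized measurement covariance, assumed

S = H @ P0 @ H.T + R_all              # S_j(k+1|k)
K = P0 @ H.T @ np.linalg.inv(S)       # K_j(k+1)
P_upd = P0 - K @ S @ K.T              # P_j(k+1|k+1)
```

The updated covariance stays symmetric, and the measured component's variance shrinks after the update, reflecting the information gained from the measurement.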
714. Interactive data output.

The model probabilities μ_1(k+1) and μ_2(k+1) are calculated from the innovations v_1(k+1) and v_2(k+1) of the corresponding models and the innovation covariance matrices S_1(k+1|k) and S_2(k+1|k), and the final state synthesis is performed to obtain the state update X(k+1|k+1), where this final state is the target tracking result obtained by target tracking:

X(k+1|k+1)=X_1(k+1|k+1)·μ_1(k+1)+X_2(k+1|k+1)·μ_2(k+1);
至此完成基于IMM-CV-CT运动模型和EKF滤波算法的点云目标跟踪。So far, point cloud target tracking based on IMM-CV-CT motion model and EKF filtering algorithm is completed.
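The interaction step can be sketched as follows. The embodiment does not spell out how μ_1 and μ_2 are derived from the innovations, so the Gaussian-likelihood form and the mixing with the prior model probabilities below follow standard IMM practice and are assumptions of this sketch:

```python
import numpy as np

def imm_fuse(x1, x2, v1, S1, v2, S2, mu_prev):
    """Model-probability update and final state synthesis (two-model IMM).

    x1, x2: per-model updated states X_i(k+1|k+1);
    v1, v2 / S1, S2: per-model innovations and innovation covariances;
    mu_prev: prior model probabilities. Gaussian likelihood is an assumption.
    """
    def likelihood(v, S):
        d = v.shape[0]
        e = float(v @ np.linalg.solve(S, v))  # Mahalanobis distance of innovation
        return np.exp(-0.5 * e) / np.sqrt((2 * np.pi) ** d * np.linalg.det(S))

    L = np.array([likelihood(v1, S1), likelihood(v2, S2)])
    mu = L * mu_prev
    mu /= mu.sum()                            # mu_1(k+1), mu_2(k+1)
    x = mu[0] * x1 + mu[1] * x2               # X(k+1|k+1), the tracking result
    return x, mu
```

When one model's innovation is much smaller relative to its innovation covariance, its probability grows and the fused state leans toward that model's estimate.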
Referring to FIG. 8, which is a schematic structural diagram of a target tracking apparatus provided by an embodiment of this application, the apparatus 800 includes:
an acquisition module 801, configured to acquire point cloud data of a target object and perform centroid estimation on the point cloud data, and to acquire a target covariance matrix, where the target covariance matrix represents the centroid position error introduced during the centroid estimation; and
a target tracking module 802, configured to perform target tracking on the target object based on the target covariance matrix, to obtain a target tracking result.
In an optional design, the target tracking result is a centroid state value, and the acquisition module is specifically configured to:
obtain the target tracking result of the target object at the previous moment;
perform state prediction on the target tracking result of the previous moment based on a motion model, to obtain a target tracking predicted value for the current moment;
obtain a state measurement value for the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data; and
obtain the target covariance matrix based on the target tracking predicted value and the state measurement value.
In an optional design, the acquisition module is configured to use a preset covariance matrix as the target covariance matrix when the data association count is less than or equal to a preset value, where the data association count represents the number of times the difference between a centroid measurement value and a state predicted value is less than a threshold, the centroid measurement value is obtained by performing centroid estimation on the point cloud data of the target object at the current moment, the state predicted value is obtained by prediction from the centroid measurement value of the previous moment, and the centroid measurement value of the previous moment is obtained by performing centroid estimation on the point cloud data of the target object at the previous moment.
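The switch between the preset covariance matrix and the computed one, keyed on the data association count, can be sketched as follows. The preset matrix, the threshold value, and the diagonal squared-difference form of the computed matrix are all illustrative assumptions; the embodiment gives its actual formula only as a figure:

```python
import numpy as np

def select_target_covariance(assoc_count, preset_R, z_meas, z_pred, preset_limit=3):
    """Pick the target covariance matrix for a track.

    While the track is young (assoc_count <= preset_limit) the preset matrix
    is used; afterwards the matrix is derived from the difference between the
    state measurement value and the target tracking predicted value.
    preset_R, preset_limit and the diagonal form below are assumptions.
    """
    if assoc_count <= preset_limit:
        return preset_R
    dx = z_meas[0] - z_pred[0]
    dy = z_meas[1] - z_pred[1]
    return np.diag([dx * dx, dy * dy])  # illustrative centroid-error covariance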
In an optional design, the acquisition module is configured to obtain the target covariance matrix based on the target tracking predicted value and the state measurement value when the data association count is greater than the preset value, where the target tracking predicted value is obtained by performing state prediction on the target tracking result of the previous moment.
In an optional design, the acquisition module is specifically configured to:
obtain the target covariance matrix based on the target tracking predicted value and the state measurement value through the following formulas:
[Formula PCTCN2020098494-appb-000033]
[Formula PCTCN2020098494-appb-000034]
where x_{k+1|k} and y_{k+1|k} are the state measurement values, [PCTCN2020098494-appb-000035] is the target tracking predicted value, and R_1 is the target covariance matrix.
In an optional design, the acquisition module is further configured to:
obtain a measurement equation covariance matrix, where the point cloud data is collected by a radar system and the measurement equation covariance matrix represents the measurement deviation of the radar system.
The apparatus further includes:
a matrix synthesis module, configured to synthesize the measurement equation covariance matrix and the target covariance matrix, to obtain a synthesized target covariance matrix.
Correspondingly, the target tracking module is specifically configured to perform target tracking on the target object based on the synthesized target covariance matrix.
In an optional design, the measurement equation covariance matrix is a matrix represented in a polar coordinate system and the target covariance matrix is a matrix represented in a Cartesian coordinate system, and the matrix synthesis module is specifically configured to:
convert the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system; and
perform matrix addition on the converted measurement equation covariance matrix and the target covariance matrix, to obtain the synthesized target covariance matrix.
In an optional design, the matrix synthesis module is specifically configured to:
obtain a state measurement value for the current moment, where the state measurement value is obtained by performing centroid estimation on the point cloud data;
obtain a covariance conversion synthesis matrix based on the state measurement value; and
convert the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system based on the covariance conversion synthesis matrix.
In an optional design, the matrix synthesis module is specifically configured to perform matrix addition on the converted measurement equation covariance matrix and the target covariance matrix through the following formula:
R_all = R_0 + A_k · R_m · A_k′;
[Formula PCTCN2020098494-appb-000036]
where R_0 is the target covariance matrix, A_k · R_m · A_k′ is the converted measurement equation covariance matrix, R_k and θ_k are the state measurement values, A_k is the covariance conversion synthesis matrix, and A_k′ is the transpose of the covariance conversion synthesis matrix.
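The synthesis R_all = R_0 + A_k · R_m · A_k′ can be sketched as below. The patent gives A_k only as a figure; the Jacobian built from range R_k and azimuth θ_k used here is the standard polar-to-Cartesian covariance conversion and is an assumption of this sketch:

```python
import numpy as np

def synthesize_covariance(R0, Rm, r_k, theta_k):
    """Return R_all = R_0 + A_k · R_m · A_k'.

    R0: target covariance matrix (Cartesian, from centroid estimation);
    Rm: radar measurement covariance diag(var_range, var_azimuth) in polar form;
    r_k, theta_k: state measurement values (range, azimuth).
    A_k below is the assumed polar-to-Cartesian conversion Jacobian.
    """
    A = np.array([
        [np.cos(theta_k), -r_k * np.sin(theta_k)],
        [np.sin(theta_k),  r_k * np.cos(theta_k)],
    ])
    return R0 + A @ Rm @ A.T
```

Note that the converted term A_k · R_m · A_k′ scales the azimuth variance by the range, so the same angular error contributes a larger Cartesian position error for distant targets.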
In some embodiments, the disclosed methods may be implemented as computer program instructions encoded in a machine-readable format on a computer-readable storage medium, or encoded on another non-transitory medium or article of manufacture. FIG. 9 schematically illustrates a conceptual partial view of an example computer program product, arranged in accordance with at least some embodiments presented herein, that includes a computer program for executing a computer process on a computing device. In one embodiment, the example computer program product 900 is provided using a signal bearing medium 901. The signal bearing medium 901 may include one or more program instructions 902 that, when executed by one or more processors, may provide the functions, or portions thereof, described above with respect to FIG. 5 to FIG. 8. Thus, for example, with reference to the embodiment shown in FIG. 3, one or more features of blocks 302 to 309 may be undertaken by one or more instructions associated with the signal bearing medium 901. In addition, the program instructions 902 in FIG. 9 also describe example instructions.
In some examples, the signal bearing medium 901 may include a computer-readable medium 903, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random access memory (RAM). In some implementations, the signal bearing medium 901 may include a computer-recordable medium 904, such as, but not limited to, a memory, a read/write (R/W) CD, or an R/W DVD. In some implementations, the signal bearing medium 901 may include a communication medium 905, such as, but not limited to, a digital and/or analog communication medium (for example, a fiber optic cable, a waveguide, a wired communication link, or a wireless communication link). Thus, for example, the signal bearing medium 901 may be conveyed by a communication medium 905 in a wireless form (for example, a wireless communication medium conforming to the IEEE 802.11 standard or another transmission protocol). The one or more program instructions 902 may be, for example, computer-executable instructions or logic-implementing instructions.
In some examples, a computing device may be configured to provide various operations, functions, or actions in response to the program instructions 902 conveyed to the computing device through one or more of the computer-readable medium 903, the computer-recordable medium 904, and/or the communication medium 905. It should be understood that the arrangements described here are for illustrative purposes only. Thus, those skilled in the art will understand that other arrangements and other elements (for example, machines, interfaces, functions, orders, and groups of functions) can be used instead, and that some elements may be omitted altogether depending on the desired results. In addition, many of the described elements are functional entities that may be implemented as discrete or distributed components, or in conjunction with other components, in any suitable combination and position.
FIG. 10 is a schematic structural diagram of a target tracking device provided by an embodiment of the present disclosure, which is used for executing the target tracking method provided in the above embodiments. Referring to FIG. 10, the device 1000 includes:
a transceiver 110, a memory 120 including one or more storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a processor 170 including one or more processing cores, and other components. Those skilled in the art can understand that the structure of the device 1000 shown in FIG. 10 does not constitute a limitation on the device 1000, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Specifically:
The transceiver 110 can be used to receive and send signals during information transmission and reception. Through the transceiver 110, the device 1000 can communicate with other devices located in a vehicle, such as various sensors. The communication methods include, but are not limited to, Bluetooth wireless communication and wireless fidelity (WiFi) wireless communication.
The memory 120 may be used to store software programs and modules, and the processor 170 executes various functional applications and performs data processing by running at least one instruction, at least one program, a code set, or an instruction set stored in the memory 120. The memory 120 mainly includes a program storage area and a data storage area, where the program storage area can store an operating system, at least one instruction, at least one program, a code set, an instruction set, and the like, and the data storage area can store data created through the use of the device 1000 (such as audio data).
The input unit 130 may be used to receive input numeric or character information and to generate signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also known as a touch display screen or touchpad, can collect the user's touch operations on or near it and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 170, and can receive and execute commands sent by the processor 170. In addition, the touch-sensitive surface can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 130 may also include other input devices. Specifically, the other input devices may include, but are not limited to, one or more of a physical keyboard and function keys (such as volume control keys and switch keys).
The display unit 140 may be used to display information input by the user or provided to the user, as well as various graphical user interfaces of the device 1000, which may be composed of graphics, text, icons, videos, and any combination thereof. The display unit 140 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch-sensitive surface may cover the display panel; when the touch-sensitive surface detects a touch operation on or near it, the operation is transmitted to the processor 170 to determine the type of the touch event, and the processor 170 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 10 the touch-sensitive surface and the display panel are implemented as two separate components to realize the input and output functions, in some embodiments the touch-sensitive surface and the display panel may be integrated to realize the input and output functions.
The device 1000 may also include at least one sensor 150, such as a light sensor. Specifically, the light sensor may include an ambient light sensor, which can adjust the brightness of the display panel according to the brightness of the ambient light.
The audio circuit 160, a speaker 161, and a microphone 162 may provide an audio interface between the user and the device 1000. The audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output. Conversely, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; after the audio data is processed by the processor 170, it is sent via the transceiver 110 to other devices, such as those in a vehicle, or output to the memory 120 for further processing.
The processor 170 is the control center of the device 1000. It connects the various parts of the entire device 1000 through various interfaces and lines, and performs the various functions of the device 1000 and processes data by running or executing the software programs and/or modules stored in the memory 120 and invoking the data stored in the memory 120, thereby monitoring the device 1000 as a whole. Optionally, the processor 170 may include one or more processing cores. Preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 170.
The processor 170 may invoke the code stored in the memory 120 to implement the target tracking method described in FIG. 5 to FIG. 7 in the above embodiments.
Specifically, in this embodiment, the display unit of the device 1000 may be a touch screen display, and the processor 170 of the device 1000 runs at least one instruction, at least one program, a code set, or an instruction set stored in the memory 120, so as to implement the target tracking method of the above embodiments.
In another exemplary embodiment, an embodiment of the present disclosure further provides a storage medium, where the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor of a device to implement the target tracking method of the above embodiments.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, another network device, or the like) to perform all or some of the steps of the method described in the embodiment of FIG. 2a of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements to some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (21)

  1. A target tracking method, characterized in that the method comprises:
    acquiring point cloud data of a target object, and performing centroid estimation on the point cloud data;
    acquiring a target covariance matrix, wherein the target covariance matrix represents a centroid position error introduced when performing the centroid estimation; and
    performing target tracking on the target object based on the target covariance matrix, to obtain a target tracking result.
  2. The method according to claim 1, characterized in that the acquiring a target covariance matrix comprises:
    acquiring a target tracking result of the target object at a previous moment;
    performing state prediction on the target tracking result of the previous moment based on a motion model, to obtain a target tracking predicted value for a current moment;
    acquiring a state measurement value for the current moment, wherein the state measurement value is obtained by performing centroid estimation on the point cloud data; and
    acquiring the target covariance matrix based on the target tracking predicted value and the state measurement value.
  3. The method according to claim 1, characterized in that the acquiring a target covariance matrix comprises:
    using a preset covariance matrix as the target covariance matrix when a data association count is less than or equal to a preset value, wherein the data association count represents the number of times a difference between a centroid measurement value and a state predicted value is less than a threshold, the centroid measurement value is obtained by performing centroid estimation on point cloud data of the target object at a current moment, the state predicted value is obtained by prediction from a centroid measurement value of a previous moment, and the centroid measurement value of the previous moment is obtained by performing centroid estimation on point cloud data of the target object at the previous moment.
  4. The method according to claim 3, characterized in that the acquiring a target covariance matrix comprises:
    acquiring the target covariance matrix based on a target tracking predicted value and a state measurement value when the data association count is greater than the preset value, wherein the target tracking predicted value is obtained by performing state prediction on the target tracking result of the previous moment, and the state measurement value is obtained by performing centroid estimation on the point cloud data.
  5. The method according to claim 2, characterized in that the acquiring the target covariance matrix based on the target tracking predicted value and the state measurement value comprises:
    acquiring the target covariance matrix based on the target tracking predicted value and the state measurement value through the following formulas:
    [Formula PCTCN2020098494-appb-100001]
    [Formula PCTCN2020098494-appb-100002]
    wherein x_{k+1|k} and y_{k+1|k} are the state measurement values, [PCTCN2020098494-appb-100003] is the target tracking predicted value, and R_1 is the target covariance matrix.
  6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
    acquiring a measurement equation covariance matrix, wherein the point cloud data is collected by a radar system and the measurement equation covariance matrix represents a measurement deviation of the radar system; and
    synthesizing the measurement equation covariance matrix and the target covariance matrix, to obtain a synthesized target covariance matrix;
    correspondingly, the performing target tracking on the target object based on the target covariance matrix comprises:
    performing target tracking on the target object based on the synthesized target covariance matrix.
  7. The method according to claim 6, characterized in that the measurement equation covariance matrix is a matrix represented in a polar coordinate system, the target covariance matrix is a matrix represented in a Cartesian coordinate system, and the synthesizing the measurement equation covariance matrix and the target covariance matrix comprises:
    converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system; and
    performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix, to obtain the synthesized target covariance matrix.
  8. The method according to claim 7, characterized in that the converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system comprises:
    acquiring a state measurement value for a current moment, wherein the state measurement value is obtained by performing centroid estimation on the point cloud data;
    acquiring a covariance conversion synthesis matrix based on the state measurement value, wherein the covariance conversion synthesis matrix comprises a plurality of elements, and some of the plurality of elements are generated based on the state measurement value; and
    converting the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system based on the covariance conversion synthesis matrix.
  9. The method according to claim 8, wherein performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix comprises:
    performing matrix addition on the converted measurement equation covariance matrix and the target covariance matrix using the following formula:
    R_all = R_0 + A_k R_m A_k′;
    Figure PCTCN2020098494-appb-100004 (formula image, not reproduced here)
    where R_0 is the target covariance matrix, A_k R_m A_k′ is the converted measurement equation covariance matrix, R_k and θ_k are the state measurement values, A_k is the covariance transformation synthesis matrix, and A_k′ is the transpose of the covariance transformation synthesis matrix.
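The synthesis step above can be sketched in a few lines. Since the figure defining A_k is published only as an image and is not reproduced in the text, this sketch assumes A_k is the Jacobian of the polar-to-Cartesian mapping (x, y) = (R cos θ, R sin θ) evaluated at the state measurement (R_k, θ_k) — a standard choice for this conversion, stated here as an assumption rather than the claimed form:

```python
import numpy as np

def synthesize_covariance(R0, Rm, Rk, theta_k):
    """Combine the Cartesian target covariance R0 with the polar
    measurement-equation covariance Rm via R_all = R0 + A_k Rm A_k'.

    A_k is ASSUMED to be the polar-to-Cartesian Jacobian evaluated at
    the state measurement (Rk, theta_k); the published figure defining
    A_k is not reproduced in the claim text.
    """
    A_k = np.array([
        [np.cos(theta_k), -Rk * np.sin(theta_k)],
        [np.sin(theta_k),  Rk * np.cos(theta_k)],
    ])
    # Matrix addition of the converted measurement covariance and R0.
    return R0 + A_k @ Rm @ A_k.T
```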
  10. A target tracking apparatus, wherein the apparatus comprises:
    an acquisition module, configured to acquire point cloud data of a target object, perform centroid estimation on the point cloud data, and acquire a target covariance matrix, wherein the target covariance matrix represents a centroid position error introduced when the centroid estimation is performed; and
    a target tracking module, configured to perform target tracking on the target object based on the target covariance matrix to obtain a target tracking result.
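The acquisition module's centroid-estimation step can be sketched as follows. The claim does not specify the estimator, so the unweighted mean of the point cloud is assumed here purely for illustration:

```python
import numpy as np

def estimate_centroid(points):
    """Centroid estimation over a target's point cloud.

    The claim leaves the estimator open; the unweighted mean used here
    is the simplest illustrative choice, not the claimed method.
    """
    pts = np.asarray(points, dtype=float)  # shape (N, 2) for x/y points
    return pts.mean(axis=0)
```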
  11. The apparatus according to claim 10, wherein the acquisition module is specifically configured to:
    obtain a target tracking result of the target object at a previous moment;
    perform, based on a motion model, state prediction on the target tracking result at the previous moment to obtain a target tracking predicted value at a current moment;
    obtain a state measurement value at the current moment, the state measurement value being obtained by performing centroid estimation on the point cloud data; and
    obtain the target covariance matrix based on the target tracking predicted value and the state measurement value.
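The state-prediction step depends on a motion model that the claim leaves open. The sketch below assumes a constant-velocity model with state [x, y, vx, vy] — an illustrative choice only, not the claimed model:

```python
import numpy as np

def predict_state(prev_state, dt):
    """One-step state prediction under an ASSUMED constant-velocity
    motion model (the claim does not fix the model).

    State layout: [x, y, vx, vy].
    """
    F = np.array([
        [1.0, 0.0, dt,  0.0],
        [0.0, 1.0, 0.0, dt ],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    # Propagate the previous tracking result to the current moment.
    return F @ np.asarray(prev_state, dtype=float)
```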
  12. The apparatus according to claim 10, wherein the acquisition module is configured to use a preset covariance matrix as the target covariance matrix when a data association count is less than or equal to a preset value, wherein the data association count represents the number of times the difference between a centroid measurement value and a state predicted value is less than a threshold, the centroid measurement value is obtained by performing centroid estimation on the point cloud data of the target object at the current moment, the state predicted value is obtained by prediction from the centroid measurement value at the previous moment, and the centroid measurement value at the previous moment is obtained by performing centroid estimation on the point cloud data of the target object at the previous moment.
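The selection logic of claims 12 and 13 — use a preset covariance while the data association count is at or below a preset value, otherwise compute one from the prediction/measurement pair — can be sketched as follows. The default threshold of 3 and the callback interface are illustrative assumptions; the claims only speak of a preset value:

```python
def select_covariance(assoc_count, preset_R, compute_R, preset_value=3):
    """Pick the target covariance matrix per claims 12/13.

    While the track is young (few successful data associations), fall
    back on a preset covariance; once associations exceed the preset
    value, compute the covariance from the current prediction and
    measurement. `preset_value=3` is an arbitrary illustration.
    """
    if assoc_count <= preset_value:
        return preset_R          # claim 12: preset covariance matrix
    return compute_R()           # claim 13: derived from prediction/measurement
```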
  13. The apparatus according to claim 12, wherein the acquisition module is configured to obtain the target covariance matrix based on a target tracking predicted value and a state measurement value when the data association count is greater than the preset value, wherein the target tracking predicted value is obtained by performing state prediction on the target tracking result at the previous moment, and the state measurement value is obtained by performing centroid estimation on the point cloud data.
  14. The apparatus according to claim 11, wherein the acquisition module is specifically configured to:
    obtain the target covariance matrix, based on the target tracking predicted value and the state measurement value, using the following formulas:
    Figure PCTCN2020098494-appb-100005 (formula image, not reproduced here)
    Figure PCTCN2020098494-appb-100006 (formula image, not reproduced here)
    where x_{k+1|k} and y_{k+1|k} are the state measurement values, the quantity shown in Figure PCTCN2020098494-appb-100007 is the target tracking predicted value, and R_1 is the target covariance matrix.
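The formulas of claim 14 are published only as figure images. One plausible reading, used here purely for illustration and not taken from the published figures, builds R_1 as a diagonal matrix of the squared residuals between the state measurement and the target tracking prediction:

```python
import numpy as np

def target_covariance(x_meas, y_meas, x_pred, y_pred):
    """Sketch of claim 14: derive the target covariance R_1 from the
    gap between the state measurement (x_{k+1|k}, y_{k+1|k}) and the
    target tracking prediction.

    The exact formulas live in figure images not reproduced in the
    text; a diagonal matrix of squared residuals is an ASSUMED reading
    used only for illustration.
    """
    dx = x_meas - x_pred
    dy = y_meas - y_pred
    return np.diag([dx * dx, dy * dy])
```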
  15. The apparatus according to any one of claims 10 to 14, wherein the acquisition module is further configured to:
    obtain a measurement equation covariance matrix, wherein the point cloud data is collected by a radar system and the measurement equation covariance matrix represents a measurement deviation of the radar system;
    the apparatus further comprises:
    a matrix synthesis module, configured to synthesize the measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix; and
    correspondingly, the target tracking module is specifically configured to perform target tracking on the target object based on the synthesized target covariance matrix.
  16. The apparatus according to claim 15, wherein the measurement equation covariance matrix is a matrix represented in a polar coordinate system and the target covariance matrix is a matrix represented in a Cartesian coordinate system, and the matrix synthesis module is specifically configured to:
    convert the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system; and
    perform matrix addition on the converted measurement equation covariance matrix and the target covariance matrix to obtain a synthesized target covariance matrix.
  17. The apparatus according to claim 16, wherein the matrix synthesis module is specifically configured to:
    obtain a state measurement value at the current moment, the state measurement value being obtained by performing centroid estimation on the point cloud data;
    obtain, based on the state measurement value, a covariance transformation synthesis matrix, wherein the covariance transformation synthesis matrix comprises a plurality of elements, some of which are generated based on the state measurement value; and
    convert, based on the covariance transformation synthesis matrix, the measurement equation covariance matrix into a matrix represented in the Cartesian coordinate system.
  18. The apparatus according to claim 17, wherein the matrix synthesis module is specifically configured to:
    perform matrix addition on the converted measurement equation covariance matrix and the target covariance matrix using the following formula:
    R_all = R_0 + A_k R_m A_k′;
    Figure PCTCN2020098494-appb-100008 (formula image, not reproduced here)
    where R_0 is the target covariance matrix, A_k R_m A_k′ is the converted measurement equation covariance matrix, R_k and θ_k are the state measurement values, A_k is the covariance transformation synthesis matrix, and A_k′ is the transpose of the covariance transformation synthesis matrix.
  19. A target tracking apparatus, comprising a processor and a transmission interface, wherein, when the processor invokes program code stored in a memory, the target tracking apparatus performs the method according to any one of claims 1 to 9.
  20. A computer program product comprising instructions which, when run on a computer or a processor, cause the computer or the processor to perform the method according to any one of claims 1 to 9.
  21. A computer-readable storage medium comprising instructions which, when run on a computer or a processor, cause the computer or the processor to perform the method according to any one of claims 1 to 9.
PCT/CN2020/098494 2020-06-28 2020-06-28 Target tracking method and device therefor WO2022000127A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/098494 WO2022000127A1 (en) 2020-06-28 2020-06-28 Target tracking method and device therefor
CN202080016826.0A CN113498529B (en) 2020-06-28 2020-06-28 Target tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/098494 WO2022000127A1 (en) 2020-06-28 2020-06-28 Target tracking method and device therefor

Publications (1)

Publication Number Publication Date
WO2022000127A1 2022-01-06

Family

ID=77994616

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098494 WO2022000127A1 (en) 2020-06-28 2020-06-28 Target tracking method and device therefor

Country Status (2)

Country Link
CN (1) CN113498529B (en)
WO (1) WO2022000127A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415157A (en) * 2021-12-30 2022-04-29 西北工业大学 Underwater target multi-model tracking method based on underwater acoustic sensor network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894378A (en) * 2010-06-13 2010-11-24 南京航空航天大学 Moving target visual tracking method and system based on double ROI (Region of Interest)
CN107600073A (en) * 2017-08-10 2018-01-19 同济大学 A kind of vehicle centroid side drift angle estimating system and method based on Multi-source Information Fusion
CN109493372A (en) * 2018-10-24 2019-03-19 华侨大学 The product point cloud data Fast global optimization method for registering of big data quantity, few feature
US20190096066A1 (en) * 2017-09-28 2019-03-28 4Sense, Inc. System and Method for Segmenting Out Multiple Body Parts
CN110018489A (en) * 2019-04-25 2019-07-16 上海蔚来汽车有限公司 Target tracking method, device and controller and storage medium based on laser radar

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009085233A2 (en) * 2007-12-21 2009-07-09 21Ct, Inc. System and method for visually tracking with occlusions
CN101739687A (en) * 2009-11-23 2010-06-16 燕山大学 Covariance matrix-based fast maneuvering target tracking method
CN108734132A (en) * 2015-06-11 2018-11-02 大连理工大学 The method for establishing pedestrian's recognition classifier of vehicle-mounted pedestrian detection
CN106296726A (en) * 2016-07-22 2017-01-04 中国人民解放军空军预警学院 A kind of extraterrestrial target detecting and tracking method in space-based optical series image

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171378A (en) * 2022-06-28 2022-10-11 武汉理工大学 Long-distance multi-vehicle high-precision detection tracking method based on roadside radar
CN115171378B (en) * 2022-06-28 2023-10-27 武汉理工大学 High-precision detection tracking method for long-distance multiple vehicles based on road side radar
CN115311761A (en) * 2022-07-15 2022-11-08 襄阳达安汽车检测中心有限公司 Non-real-time vehicle-mounted sensing system evaluation method and related equipment
CN115311761B (en) * 2022-07-15 2023-11-03 襄阳达安汽车检测中心有限公司 Non-real-time vehicle-mounted perception system evaluation method and related equipment
CN115511955A (en) * 2022-11-18 2022-12-23 西湖大学 Distributed ground-to-air cooperative unknown unmanned aerial vehicle state estimation method and application thereof
CN117557977A (en) * 2023-12-28 2024-02-13 安徽蔚来智驾科技有限公司 Environment perception information acquisition method, readable storage medium and intelligent device
CN117557977B (en) * 2023-12-28 2024-04-30 安徽蔚来智驾科技有限公司 Environment perception information acquisition method, readable storage medium and intelligent device

Also Published As

Publication number Publication date
CN113498529A (en) 2021-10-12
CN113498529B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
US20210262808A1 (en) Obstacle avoidance method and apparatus
WO2022000127A1 (en) Target tracking method and device therefor
CN109901574B (en) Automatic driving method and device
EP3835908B1 (en) Automatic driving method, training method and related apparatuses
WO2021000800A1 (en) Reasoning method for road drivable region and device
CN110371132B (en) Driver takeover evaluation method and device
WO2021102955A1 (en) Path planning method for vehicle and path planning apparatus for vehicle
WO2021026705A1 (en) Matching relationship determination method, re-projection error calculation method and related apparatus
CN110471411A (en) Automatic Pilot method and servomechanism
WO2021212379A1 (en) Lane line detection method and apparatus
US20220080972A1 (en) Autonomous lane change method and apparatus, and storage medium
CN112543877B (en) Positioning method and positioning device
WO2021244207A1 (en) Method and apparatus for training driving behavior decision-making model
WO2022142839A1 (en) Image processing method and apparatus, and intelligent vehicle
WO2022156309A1 (en) Trajectory prediction method and apparatus, and map
US20230048680A1 (en) Method and apparatus for passing through barrier gate crossbar by vehicle
US20230399023A1 (en) Vehicle Driving Intention Prediction Method, Apparatus, and Terminal, and Storage Medium
WO2021163846A1 (en) Target tracking method and target tracking apparatus
CN113954858A (en) Method for planning vehicle driving route and intelligent automobile
CN112810603B (en) Positioning method and related product
WO2022089577A1 (en) Pose determination method and related device thereof
WO2022017307A1 (en) Autonomous driving scenario generation method, apparatus and system
US20220309806A1 (en) Road structure detection method and apparatus
CN113741384B (en) Method and device for detecting automatic driving system
WO2021159397A1 (en) Vehicle travelable region detection method and detection device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20943507; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 20943507; Country of ref document: EP; Kind code of ref document: A1)