WO2018068446A1 - Tracking method, tracking device, and computer storage medium - Google Patents

Tracking method, tracking device, and computer storage medium Download PDF

Info

Publication number
WO2018068446A1
WO2018068446A1 · PCT/CN2017/072999
Authority
WO
WIPO (PCT)
Prior art keywords
tracking
target area
location information
parameter set
speed
Prior art date
Application number
PCT/CN2017/072999
Other languages
French (fr)
Chinese (zh)
Inventor
陈子冲
廖方波
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2018068446A1 publication Critical patent/WO2018068446A1/en

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/50Systems of measurement based on relative movement of target
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/06Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Definitions

  • The invention relates to intelligent tracking technology, and in particular to a tracking method, a tracking device, and a computer storage medium.
  • UWB (Ultra Wideband) technology can be used to achieve location tracking: a tracking robot equipped with a UWB anchor node (UWB anchor) can track, in real time, a target object carrying a UWB beacon (UWB tag).
  • During tracking, because there is a certain distance between the target object and the tracking robot, obstacles may appear between the two while the target object moves, causing the robot to collide during tracking. This can result in tracking failure or even damage to the tracking robot.
  • To address this, embodiments of the present invention provide a tracking method, a tracking device, and a computer storage medium.
  • Monitoring the second location information of the second object in the target area includes:
  • Monitoring the second location information of the second object in the target area further includes:
  • Monitoring the second location information of the second object in the target area includes:
  • Combining the first location information and the second location information to determine motion data for tracking the first object includes:
  • the method further includes:
  • the motion data is adjusted to be less than or equal to a preset value.
  • the first position information is represented by a direction angle and a distance, and characterizes the position of the first object;
  • the first group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed at which the first object is tracked when no second object is in the target area;
  • the second position information is represented by a direction angle and a distance, and characterizes the position distribution of the second object within the target area;
  • the second group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed at which the first object is tracked when a second object is in the target area;
  • the determining the motion data of the first object by combining the first group of speed data, the second group of speed data, and the second position information of the second object includes:
  • the first position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position of the first object;
  • the first group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed at which the first object is tracked when no second object is in the target area;
  • the second position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position distribution of the second object in the target area;
  • the second group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed at which the first object is tracked when a second object is in the target area;
  • the determining the motion data of the first object by combining the first group of speed data, the second group of speed data, and the second position information of the second object includes:
  • a first monitoring unit configured to monitor first location information of the first object
  • a second monitoring unit configured to monitor second location information of the second object in the target area
  • a processing unit configured to combine the first location information and the second location information to determine to track motion data of the first object
  • a driving unit configured to track the first object according to the motion data.
  • The second monitoring unit is further configured to: monitor the target area and obtain a first coordinate parameter set representing the position distribution of each object in the target area; acquire a pose parameter of the monitoring device and determine, according to the pose parameter, a second coordinate parameter set representing the position distribution of a third object in the target area; acquire the first position information and boundary information of the first object and determine a third coordinate parameter set centered on the first position information and bounded by the boundary information; and remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object within the target area.
  • The second monitoring unit is further configured to: project the fourth coordinate parameter set, which represents the distribution of the second object in the target area, into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in that coordinate system, where the fifth coordinate parameter set is used to represent the second position information of the second object.
  • The second monitoring unit is further configured to: monitor the target area and obtain a first coordinate parameter set representing the position distribution of each object in the target area; acquire the first position information and boundary information of the first object and determine a third coordinate parameter set centered on the first position information and bounded by the boundary information; and remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, where the fourth coordinate parameter set is used to represent the second position information of the second object.
  • The processing unit is further configured to: determine, according to the first position information of the first object, a first group of speed data related to tracking the first object; determine, according to the first position information of the first object and the second position information of the second object, a second group of speed data related to tracking the first object; and determine the motion data of the first object by combining the first group of speed data, the second group of speed data, and the second position information of the second object.
  • the device further includes:
  • an abnormality detecting unit configured to detect, when the first object is tracked according to the motion data, whether an abnormal event has occurred;
  • the processing unit is further configured to adjust the motion data to be less than or equal to a preset value when an abnormal event occurs.
  • the first position information is represented by a direction angle and a distance, and characterizes the position of the first object;
  • the first group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed at which the first object is tracked when no second object is in the target area;
  • the second position information is represented by a direction angle and a distance, and characterizes the position distribution of the second object within the target area;
  • the second group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed at which the first object is tracked when a second object is in the target area;
  • The processing unit is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second position information of the second object; determine, based on the distance, the weights corresponding to the first group of speed data and the second group of speed data respectively; and perform weighting processing on the first group of speed data and the second group of speed data based on the determined weights, to obtain the motion data with which the tracking device tracks the first object.
  • the first position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position of the first object;
  • the first group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed at which the first object is tracked when no second object is in the target area;
  • the second position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position distribution of the second object in the target area;
  • the second group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed at which the first object is tracked when a second object is in the target area;
  • The processing unit is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second position information of the second object; determine, based on the distance, the weights corresponding to the first group of speed data and the second group of speed data respectively; and perform weighting processing on the first group of speed data and the second group of speed data based on the determined weights, to obtain the motion data with which the tracking device tracks the first object.
  • the computer storage medium provided by the embodiment of the present invention stores a computer program configured to execute the above tracking method.
  • In the embodiments of the present invention, the first location information of the first object is monitored; the second location information of the second object in the target area is monitored; the first location information and the second location information are combined to determine motion data for tracking the first object; and the first object is tracked based on the motion data.
  • In this way, the tracking device detects the second object (also referred to as an obstacle) in the target area while tracking the first object, simultaneously realizing target tracking and obstacle avoidance. This greatly reduces the possibility of colliding with obstacles during tracking and protects the tracking device.
  • FIG. 1 is a schematic flowchart 1 of a tracking method according to an embodiment of the present invention.
  • FIG. 2 is a second schematic flowchart of a tracking method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram 1 of a scenario according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram 1 of information fusion according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart 3 of a tracking method according to an embodiment of the present invention.
  • FIG. 6 is a second schematic diagram of a scenario according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram 2 of information fusion according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a tracking device according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart 1 of a tracking method according to an embodiment of the present invention.
  • the tracking method in this example is applied to a tracking device. As shown in FIG. 1 , the tracking method includes the following steps:
  • Step 101 Monitor first location information of the first object.
  • In this example, the tracking device includes two types of sensors: the first type of sensor is used to monitor the first position information of the first object, and the second type of sensor is used to monitor the second position information of the second object in the target area.
  • the first type of sensor may be a UWB anchor.
  • Correspondingly, the first object needs to carry a UWB tag; the tracking device locates the UWB tag carried by the first object by using the UWB anchor to obtain the first position information of the first object.
  • The UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is another UWB communication node. Using the time-of-flight (TOF) principle and triangulation, the position information of the UWB tag relative to the UWB anchor, that is, the first position information of the first object, is determined.
  • the first object refers to an object to be tracked.
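To make the TOF-plus-triangulation step concrete, here is a minimal sketch (not the patent's implementation; the two-anchor geometry, forward-facing assumption, and function names are all illustrative) that converts a round-trip time to a range and intersects two range circles to recover (d, θ):

```python
import math

# Speed of light in m/s, used to convert UWB time of flight to distance.
C = 299_792_458.0

def tof_distance(round_trip_time_s: float) -> float:
    """Two-way TOF: the signal travels to the tag and back."""
    return C * round_trip_time_s / 2.0

def locate_tag(r1: float, r2: float, baseline: float):
    """Triangulate a UWB tag from ranges r1, r2 measured by two anchors
    placed at (-baseline/2, 0) and (+baseline/2, 0) on the robot.
    Returns (d, theta): distance and direction angle of the tag."""
    half = baseline / 2.0
    # Intersect the two range circles; x is the offset along the baseline.
    x = (r1 ** 2 - r2 ** 2) / (2.0 * baseline)
    y_sq = r1 ** 2 - (x + half) ** 2
    y = math.sqrt(max(y_sq, 0.0))  # tag assumed in front of the robot
    d = math.hypot(x, y)
    theta = math.atan2(x, y)  # angle off the forward (y) axis
    return d, theta
```

With equal ranges the tag lies straight ahead, so θ comes out zero; real systems also need clock-offset compensation, which is omitted here.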
  • Step 102 Monitor second location information of the second object in the target area.
  • the second location information of the second object in the target area is monitored by the second type of sensor.
  • the second type of sensor may be a 3D camera
  • The second position information of the second object in the target area may be obtained by performing three-dimensional image acquisition on the target area with the 3D camera.
  • The 3D camera obtains the position information of each object in the camera's field of view (corresponding to the target area) relative to the 3D camera by means of structured light, TOF, or binocular vision technology.
  • TOF technology is a two-way ranging technology that measures the distance between nodes by using the round-trip time of signals between two asynchronous transceivers.
  • Alternatively, the second type of sensor may be a LiDAR sensor, which obtains the distance information of surrounding objects relative to the sensor by laser scanning.
  • the second object refers to an obstacle with respect to the first object.
  • In one case, the tracking device may be a ground robot. Since a ground robot can only move on the two-dimensional ground, the first position information of the first object and the second position information of the second object are represented in two-dimensional space.
  • The first position information of the first object is represented by a direction angle θ and a distance d; the position of the first object in two-dimensional space is denoted (d, θ).
  • The second position information of the second object is represented by a direction angle θ' and a distance d', and the position of the second object in two-dimensional space is denoted (d', θ'). The second position information of all the second objects in the target area is gathered together to form a two-dimensional obstacle-avoidance map M.
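A minimal sketch of how the (d', θ') readings might be gathered into such a map M (the list-of-points representation and function names are assumptions for illustration, not the patent's data structures):

```python
import math

def build_obstacle_map(polar_readings):
    """Gather the (d', theta') second-position readings of all second
    objects in the target area into a 2D obstacle-avoidance map M,
    stored here as Cartesian points in the robot's frame."""
    return [(d * math.cos(theta), d * math.sin(theta))
            for d, theta in polar_readings]

def min_obstacle_distance(x, y, obstacle_map):
    """Distance from a point (x, y) to the nearest obstacle in M."""
    return min(math.hypot(ox - x, oy - y) for ox, oy in obstacle_map)
```

A query helper like `min_obstacle_distance` is the kind of primitive the later fusion step would use when predicting clearance to the nearest obstacle.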
  • In another case, the tracking device may be a drone; because a drone can move in three-dimensional space, the first position information of the first object and the second position information of the second object are represented in three-dimensional space.
  • The first position information of the first object is represented by a direction angle θ, an elevation angle φ, and a distance d; (d, θ, φ) characterizes the position of the first object in three-dimensional space. The second position information of the second object is represented by a direction angle θ', an elevation angle φ', and a distance d'; (d', θ', φ') characterizes the position of the second object in three-dimensional space, and the second position information of all the second objects in the target area is combined to form a three-dimensional obstacle-avoidance map M.
  • Step 103 Combine the first location information with the second location information, determine motion data for tracking the first object, and track the first object according to the motion data.
  • The tracking device has a proportional-integral-derivative (PID) module. The input of the PID module is the first position information of the first object, and the output is the first group of speed data, with which the tracking device tracks the first object when no obstacle is present.
  • The tracking device further has an obstacle-avoidance module. The input of the obstacle-avoidance module is the obstacle-avoidance map M formed from the second position information of the second object, together with the first position information of the first object; the output is the second group of speed data.
  • The second group of speed data is selected, according to the motion model of the tracking device, from all possible motion trajectories as the speed data that avoids the second object while approaching the first object as closely as possible.
  • The tracking device further has an information fusion module. The input of the information fusion module is the first group of speed data, the second group of speed data, and the obstacle-avoidance map M formed from the second position information of the second object; the output of the information fusion module is the final motion data of the tracking device.
  • The first group of speed data and the second group of speed data are fused based on the obstacle-avoidance map M. The fusion works as follows: the distance between the tracking device and the second object in the obstacle-avoidance map M is predicted according to the current motion data of the tracking device. The greater this distance, the greater the weight of the first group of speed data; conversely, the smaller the distance, the greater the weight of the second group of speed data.
  • The first group of speed data and the second group of speed data are weighted based on the respective weights to obtain the motion data for tracking the first object.
  • When the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In an embodiment, the preset value is zero: if the tracking device is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the tracking device.
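A sketch of that brake logic, assuming the motion data is a (v, ω) pair and that "adjust to be less than or equal to a preset value" means clamping each component's magnitude (the function name and signature are illustrative):

```python
def apply_safety_brake(v, omega, abnormal, preset=0.0):
    """When an abnormal event is detected, adjust the motion data so
    its magnitude is at most `preset`; with the preset at zero this
    amounts to forcibly activating the brake logic (a full stop)."""
    if not abnormal:
        return v, omega
    clamp = lambda x: max(-preset, min(preset, x))
    return clamp(v), clamp(omega)
```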
  • FIG. 2 is a schematic flowchart 2 of a tracking method according to an embodiment of the present invention.
  • the tracking method in this example is applied to a ground robot. As shown in FIG. 2, the tracking method includes the following steps:
  • Step 201 Monitor first location information of the first object.
  • In this example, the ground robot includes two types of sensors: the first type of sensor is used to monitor the first position information of the first object, and the second type of sensor is used to monitor the second position information of the second object in the target area.
  • the first type of sensor may be a UWB anchor.
  • Correspondingly, the first object needs to carry a UWB tag, and the ground robot locates the UWB tag carried by the first object by using the UWB anchor to obtain the first position information of the first object.
  • The UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is another UWB communication node. Using the time-of-flight (TOF) principle and triangulation, the position information of the UWB tag relative to the UWB anchor, that is, the first position information of the first object, is determined.
  • the first object refers to an object to be tracked.
  • The first position information is represented by a direction angle θ and a distance d; the position of the first object is denoted (d, θ).
  • Step 202 Monitor the target area to obtain a first coordinate parameter set in the target area that represents the location distribution of each object.
  • the second location information of the second object in the target area is monitored by the second type of sensor.
  • When the second type of sensor is a 3D camera, the second position information of the second object in the target area is obtained by performing three-dimensional image acquisition on the target area with the 3D camera.
  • the second type of sensor is a LiDAR sensor, and the distance information of the surrounding object relative to the sensor is obtained by laser scanning.
  • the second object refers to an obstacle with respect to the first object.
  • Specifically, the target area is first monitored to obtain a first coordinate parameter set representing the position distribution of each object in the target area (the obstacle distribution O_A).
  • Step 203 Acquire a pose parameter of the monitoring device, and determine, according to the pose parameter, a second coordinate parameter set that represents a third object position distribution in the target area.
  • Since the ground robot moves on the ground, the three-dimensional position of the ground (that is, the second coordinate parameter set representing the position of the third object) must be calculated from the pose of the ground robot and the mounting height of the second type of sensor. The ground position is then removed from the obstacle distribution O_A to obtain an obstacle distribution O_B without the ground.
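A simplified sketch of that ground-removal step (pitch-only pose and an assumed z tolerance; a real implementation would use the full pose, and all names and parameters here are illustrative):

```python
import math

def remove_ground(points_sensor, pitch_rad, sensor_height, z_tol=0.05):
    """Given 3D points in the sensor frame, the robot's pitch, and the
    mounting height of the sensor, keep only points that are not part
    of the ground plane (world z above z_tol).  Roll is ignored in
    this simplified sketch."""
    kept = []
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    for x, y, z in points_sensor:
        # Rotate about the y axis by the pitch, then add the sensor height
        # to express the point's height in the world frame.
        zw = -s * x + c * z + sensor_height
        if zw > z_tol:
            kept.append((x, y, z))
    return kept
```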
  • Step 204 Acquire first location information of the first object and boundary information, and determine a third coordinate parameter set centered on the first location information and bound by the boundary information.
  • In this way, the third coordinate parameter set characterizing the spatial distribution of the first object can be determined: all obstacles in the 3D bounding box centered on the first object's position are removed from the obstacle distribution O_B to obtain the final obstacle distribution O_c.
  • Step 205 Remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the second object distribution in the target area.
  • That is, the ground position is first removed from the obstacle distribution O_A to obtain an obstacle distribution O_B without the ground; then all obstacles in the 3D bounding box centered on the first object are removed from O_B to obtain the final obstacle distribution O_c.
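The bounding-box removal (O_B → O_c) can be sketched as follows, assuming an axis-aligned cube of half-extent `half_extent` around the target (the box shape and size are illustrative; the patent does not specify them):

```python
def remove_target_box(points, target_xyz, half_extent):
    """Remove every obstacle point inside the 3D bounding box centered
    on the first object's position, so the tracked target itself is
    not treated as an obstacle (O_B -> O_c)."""
    tx, ty, tz = target_xyz
    return [(x, y, z) for x, y, z in points
            if not (abs(x - tx) <= half_extent and
                    abs(y - ty) <= half_extent and
                    abs(z - tz) <= half_extent)]
```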
  • Step 206 Project the fourth coordinate parameter set representing the second object distribution in the target area into a coordinate system of a preset dimension, and obtain a fifth coordinate parameter set in that coordinate system.
  • the fifth coordinate parameter set is used to represent second location information of the second object.
  • Because the ground robot moves in two-dimensional space, it is necessary to project the fourth coordinate parameter set into the two-dimensional coordinate system, so that the resulting second position information can be represented by a direction angle and a distance in two-dimensional polar coordinates, characterizing the position distribution of the second object in the target area.
  • the obstacle distribution O c is projected onto a horizontal plane (ie, the ground) to obtain a two-dimensional partial obstacle avoidance map M, and the obstacle avoidance map M includes second position information of each second object.
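The projection of O_c onto the ground plane can be sketched as follows (coordinate conventions and function names assumed for illustration):

```python
import math

def project_to_2d_map(points_3d):
    """Project the final obstacle distribution O_c onto the ground
    plane and express each point as (d', theta') in 2D polar
    coordinates, forming the local obstacle-avoidance map M."""
    m = []
    for x, y, _z in points_3d:
        d = math.hypot(x, y)       # ground-plane distance
        theta = math.atan2(y, x)   # direction angle
        m.append((d, theta))
    return m
```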
  • Step 207 Determine, according to the first location information of the first object, a first set of speed data related to tracking the first object.
  • the first set of speed data is represented by an angular velocity and a linear velocity for characterizing the speed of tracking the first object in a state where the second object is not in the target region.
  • the ground robot has a local motion controller, and the local motion controller includes: a PID module, an obstacle avoidance module, and an information fusion module.
  • The input of the PID module is the first position information (d, θ) of the first object, and the output is the first group of speed data (v_1, ω_1), with which the ground robot tracks the first object when no obstacle is present.
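As an illustration only (the patent names a PID module but gives no gains or structure; the PD-style controller, gains, and follow distance below are all assumptions), a mapping from (d, θ) to (v_1, ω_1) might look like:

```python
class TrackingPID:
    """PD-style sketch of the PID module: drives the distance d toward
    a desired follow distance and the direction angle theta toward
    zero, emitting the first group of speed data (v1, w1)."""

    def __init__(self, follow_dist=1.0, kp_v=0.8, kd_v=0.1,
                 kp_w=1.5, kd_w=0.2):
        self.follow_dist = follow_dist
        self.kp_v, self.kd_v = kp_v, kd_v
        self.kp_w, self.kd_w = kp_w, kd_w
        self.prev_ed = 0.0
        self.prev_eth = 0.0

    def step(self, d, theta, dt=0.05):
        ed = d - self.follow_dist   # distance error
        eth = theta                 # heading error (target straight ahead)
        v1 = self.kp_v * ed + self.kd_v * (ed - self.prev_ed) / dt
        w1 = self.kp_w * eth + self.kd_w * (eth - self.prev_eth) / dt
        self.prev_ed, self.prev_eth = ed, eth
        return v1, w1
```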
  • Step 208 Determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object.
  • the second set of speed data is represented by an angular velocity and a linear velocity for characterizing the speed of tracking the first object in a state where the second object is in the target region.
  • The input of the obstacle-avoidance module is the obstacle-avoidance map M formed from the second position information of the second object, together with the first position information (d, θ) of the first object; the output is the second group of speed data (v_2, ω_2).
  • The second group of speed data is selected, according to the motion model of the ground robot, from all possible motion trajectories as the speed data that avoids the second object while approaching the first object as closely as possible.
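This selection step resembles sampling-based local planning. The sketch below (candidate sets, horizon, and safety margin are all assumptions, not the patent's values) rolls candidate (v, ω) pairs through a unicycle motion model, rejects those that come too close to an obstacle in M (given as Cartesian points), and keeps the one ending closest to the target:

```python
import math

def plan_avoiding_velocity(target_dtheta, obstacle_map, v_max=1.0,
                           w_max=1.5, horizon=1.0, safety=0.3):
    """Pick (v2, w2): the candidate velocity whose short rollout avoids
    every obstacle and ends closest to the first object."""
    d_t, th_t = target_dtheta
    tx, ty = d_t * math.cos(th_t), d_t * math.sin(th_t)
    best, best_cost = (0.0, 0.0), float("inf")
    for v in (0.2, 0.5, v_max):
        for w in (-w_max, -0.5, 0.0, 0.5, w_max):
            x = y = th = 0.0
            dt, ok = 0.1, True
            for _ in range(int(round(horizon / dt))):  # unicycle rollout
                th += w * dt
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                if any(math.hypot(x - ox, y - oy) < safety
                       for ox, oy in obstacle_map):
                    ok = False
                    break
            if ok:
                cost = math.hypot(x - tx, y - ty)
                if cost < best_cost:
                    best, best_cost = (v, w), cost
    return best
```

With no obstacles the fastest straight candidate wins; an obstacle directly ahead forces a slower or curved choice.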
  • Step 209 Determine, according to the first group of speed data, the second group of speed data, and the second position information of the second object, motion data for tracking the first object, and track the first object according to the motion data.
  • Specifically, the distance between the ground robot and the second object is calculated according to the current speed of the ground robot and the second position information of the second object; the weights corresponding to the first group of speed data and the second group of speed data are determined based on that distance; and the first group of speed data and the second group of speed data are weighted based on the determined weights to obtain the motion data with which the ground robot tracks the first object.
  • the input of the information fusion module is the first set of velocity data (v1, ω1), the second set of velocity data (v2, ω2), and the obstacle avoidance map M formed based on the second location information of the second object; the output of the information fusion module is the final motion data (v3, ω3) of the ground robot.
  • the first set of velocity data and the second set of velocity data are merged based on the obstacle avoidance map M. The fusion works as follows: the distance d c between the ground robot and the second object is predicted in the obstacle avoidance map M according to the current motion data (v0, ω0) of the ground robot; the greater the distance d c between the ground robot and the second object, the greater the weight of the first set of velocity data (v1, ω1); conversely, the smaller the distance d c, the greater the weight of the second set of velocity data (v2, ω2).
  • the first set of velocity data (v1, ω1) and the second set of velocity data (v2, ω2) are weighted based on their respective weights, yielding the motion data for tracking the first object.
  • when the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In an embodiment the preset value is zero: once the ground robot is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the ground robot.
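A minimal Python sketch of the fusion and brake logic described above. The linear interpolation between two distance thresholds is an illustrative choice; the patent only fixes the monotonic relationship between the predicted distance d c and the weights.

```python
def fuse_velocities(track_cmd, avoid_cmd, d_c, d_min=0.5, d_max=3.0):
    """Weight the pure-tracking command (v1, w1) against the
    obstacle-avoiding command (v2, w2) by the predicted distance d_c to
    the second object: the larger d_c, the larger the weight of the
    tracking command; the smaller d_c, the larger the weight of the
    avoidance command. d_min/d_max define an illustrative linear ramp."""
    alpha = max(0.0, min(1.0, (d_c - d_min) / (d_max - d_min)))
    v3 = alpha * track_cmd[0] + (1 - alpha) * avoid_cmd[0]
    w3 = alpha * track_cmd[1] + (1 - alpha) * avoid_cmd[1]
    return v3, w3

def apply_brake_if_abnormal(motion, abnormal, preset=0.0):
    """Brake logic: when an abnormal event (risk of falling or colliding)
    is detected, force the motion data down to the preset value, which is
    zero in the embodiment."""
    return (preset, preset) if abnormal else motion
```

Far from obstacles the output equals the tracking command; very close, it equals the avoidance command; in between, it blends the two.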
  • FIG. 5 is a schematic flowchart of a tracking method according to an embodiment of the present invention.
  • the tracking method in this example is applied to a drone. As shown in FIG. 5, the tracking method includes the following steps:
  • Step 501 Monitor first location information of the first object.
  • the drone includes two types of sensors: the first type of sensor is used to monitor the first location information of the first object, and the second type of sensor is used to monitor the second location information of the second object in the target area.
  • the first type of sensor is a UWB anchor.
  • the first object needs to carry a UWB tag; the drone locates the UWB tag carried by the first object by using the UWB anchor, thereby obtaining the first position information of the first object.
  • the UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is composed of another UWB communication node; the position of the UWB tag relative to the UWB anchor, that is, the first position information of the first object, is determined by using the time-of-flight (TOF) principle and triangulation.
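The two-node TOF/triangulation scheme described above can be sketched as follows, assuming two anchors a known baseline apart and ranges already converted from time of flight (r = c · tof). The front/back ambiguity resolution and the angle convention are illustrative assumptions.

```python
import math

def locate_tag(r1, r2, baseline):
    """Given TOF ranges r1, r2 from two anchor nodes placed at
    (-baseline/2, 0) and (+baseline/2, 0), intersect the two range
    circles to recover the tag position, then return it as the first
    position information (d, theta). The tag is assumed to be in the
    forward (y > 0) half-plane."""
    x = (r1**2 - r2**2) / (2 * baseline)
    y = math.sqrt(max(0.0, r1**2 - (x + baseline / 2) ** 2))
    d = math.hypot(x, y)        # distance to the first object
    theta = math.atan2(x, y)    # direction angle off the forward axis
    return d, theta
```

Equal ranges put the tag straight ahead; unequal ranges shift it toward the nearer anchor.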
  • the first object refers to an object to be tracked.
  • the first position information is represented by the direction angle θ, the elevation angle, and the distance d, and characterizes the location of the first object.
  • Step 502 Monitor the target area to obtain a first coordinate parameter set in the target area that represents the location distribution of each object.
  • the second location information of the second object in the target area is monitored by the second type of sensor.
  • the second type of sensor is a 3D camera; three-dimensional position information of each object in the target area is obtained by using the 3D camera to perform image acquisition on the target area.
  • the second type of sensor is a LiDAR sensor, and the distance information of the surrounding object relative to the sensor is obtained by laser scanning.
  • the second object refers to an obstacle with respect to the first object.
  • the target area needs to be monitored first, and a first coordinate parameter set representing the position distribution of each object in the target area is obtained.
  • Step 503 Acquire the first location information of the first object and the boundary information, and determine the third coordinate parameter set centered on the first location information and bounded by the boundary information.
  • based on the first position information of the first object relative to the robot and the boundary information of the first object known in advance (that is, the size of its 3D bounding box), the third coordinate parameter set representing the spatial extent of the first object can be determined; all points inside the 3D bounding box centered at the first object's location are removed from the obstacle distribution O A, resulting in the final obstacle distribution O B.
  • Step 504 Removing the third coordinate parameter set from the first coordinate parameter set, and obtaining a fourth coordinate parameter set in the target area that represents the second object distribution, where the fourth coordinate parameter set is used Representing second location information of the second object.
  • all obstacles inside the 3D bounding box centered on the first object are removed from the obstacle distribution O A to obtain the final obstacle distribution O B;
  • O B is the three-dimensional obstacle avoidance map, and the obstacle avoidance map O B includes the second location information of each second object.
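The construction of O B from the full monitored point set can be sketched directly. The names and the list-of-tuples representation are illustrative; in practice O A would be a point cloud from the 3D camera or LiDAR.

```python
def build_obstacle_map(o_a, target_pos, bbox_size):
    """Remove from the full coordinate set O_A every point that falls
    inside the 3D bounding box (the known boundary information) centered
    at the first object's position; the remainder is the obstacle
    avoidance map O_B carrying the second location information."""
    hx, hy, hz = (s / 2.0 for s in bbox_size)
    tx, ty, tz = target_pos
    return [p for p in o_a
            if not (abs(p[0] - tx) <= hx and
                    abs(p[1] - ty) <= hy and
                    abs(p[2] - tz) <= hz)]

# Points on the tracked object itself are dropped; the genuine obstacle remains.
o_b = build_obstacle_map([(1.0, 1.0, 1.0), (1.1, 0.9, 1.2), (3.0, 0.0, 1.0)],
                         target_pos=(1.0, 1.0, 1.0), bbox_size=(0.6, 0.6, 0.8))
```

This prevents the tracked object from being treated as an obstacle to avoid.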
  • Step 505 Determine, according to the first location information of the first object, a first set of speed data related to tracking the first object.
  • the first set of velocity data is represented by a first dimension velocity component, a second dimension velocity component, and a third dimension velocity component, and is used to represent tracking in a state where the second object is not in the target region. The speed of the first object.
  • the drone has a local motion controller, and the local motion controller includes: a PID module, an obstacle avoidance module, and an information fusion module.
  • the input of the PID module is the first location information of the first object.
  • the output is the first set of velocity data with which the drone tracks the first object when there is no obstacle.
  • the velocity data is velocity data in three-dimensional space: the first dimension velocity component is the velocity component of the drone rotating around the x-axis (that is, the roll axis), the second dimension velocity component is the velocity component of the drone rotating around the y-axis (that is, the pitch axis), and the third dimension velocity component is the velocity component of the drone rotating around the z-axis (that is, the yaw axis).
  • Step 506 Determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object.
  • the second set of velocity data is represented by a first dimension velocity component, a second dimension velocity component, and a third dimension velocity component, and is used to represent the speed of tracking the first object in a state where the second object is in the target region.
  • the input of the obstacle avoidance module is an obstacle avoidance map O B formed based on the second position information of the second object and the first position information of the first object
  • the output is the second set of velocity data, which is selected, according to the motion model of the drone, from all feasible motion trajectories as the speed that avoids the second object while staying as close as possible to the first object.
  • Step 507 Combine the first group of speed data, the second group of speed data, and the second position information of the second object to determine the motion data for tracking the first object; track the first object according to the motion data.
  • the distance between the drone and the second object is calculated according to the current speed of the drone and the second position information of the second object; the weights respectively corresponding to the first group of speed data and the second group of speed data are determined according to the distance; the two groups of speed data are weighted based on the determined weights to obtain the motion data with which the drone tracks the first object.
  • the input of the information fusion module is the first set of velocity data, the second set of velocity data, and the obstacle avoidance map O B formed based on the second location information of the second object; the output of the information fusion module is the final motion data of the drone.
  • the first set of velocity data and the second set of velocity data are merged based on the obstacle avoidance map O B. The fusion works as follows: the distance d c between the drone and the second object is predicted in the obstacle avoidance map O B according to the current motion data of the drone; the greater the distance d c between the drone and the second object, the greater the weight of the first set of velocity data; conversely, the smaller the distance d c, the greater the weight of the second set of velocity data.
  • when the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In an embodiment the preset value is zero: once the drone is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the drone.
  • FIG. 8 is a schematic structural diagram of a tracking device according to an embodiment of the present invention. As shown in FIG. 8, the tracking device includes:
  • the first monitoring unit 81 is configured to monitor first location information of the first object
  • a second monitoring unit 82 configured to monitor second location information of the second object in the target area
  • the processing unit 83 is configured to determine, according to the first location information and the second location information, tracking motion data of the first object;
  • the driving unit 84 is configured to track the first object according to the motion data.
  • the second monitoring unit 82 is further configured to: monitor the target area to obtain a first coordinate parameter set representing the position distribution of each object in the target area; acquire a pose parameter of the monitoring device and determine, according to the pose parameter, a second coordinate parameter set representing the position distribution of a third object in the target area; acquire the first position information of the first object and the boundary information, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area.
  • the second monitoring unit 82 is further configured to: project the fourth coordinate parameter set representing the distribution of the second object in the target area into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in the coordinate system of the preset dimension; the fifth coordinate parameter set is used to represent the second location information of the second object.
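The projection step the unit performs can be sketched as follows: for a ground robot the preset dimension is typically the 2D ground plane, so each 3D obstacle point keeps only its planar coordinates and duplicates collapse. The function name and set-based deduplication are illustrative choices.

```python
def project_to_plane(fourth_set, dims=(0, 1)):
    """Project the fourth coordinate parameter set (3D obstacle points)
    into a coordinate system of a preset dimension, yielding the fifth
    coordinate parameter set that represents the second location
    information. dims selects which axes survive the projection."""
    projected = {tuple(p[i] for i in dims) for p in fourth_set}
    return sorted(projected)

# Two points of the same obstacle at different heights collapse to one entry.
fifth_set = project_to_plane([(1.0, 2.0, 0.5), (1.0, 2.0, 1.5), (3.0, 0.0, 0.2)])
```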
  • the second monitoring unit 82 is further configured to: monitor the target area to obtain a first coordinate parameter set representing the position distribution of each object in the target area; acquire the first position information of the first object and the boundary information, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, the fourth coordinate parameter set being used to represent the second location information of the second object.
  • the processing unit 83 is further configured to: determine, according to the first location information of the first object, a first group of speed data related to tracking the first object; determine, according to the first location information of the first object and the second location information of the second object, a second group of speed data related to tracking the first object; and combine the first group of speed data, the second group of speed data, and the second location information of the second object to determine the motion data for tracking the first object.
  • the device further includes:
  • the abnormality detecting unit 85 is configured to detect whether an abnormal event occurs when the first object is tracked according to the motion data
  • the processing unit 83 is further configured to adjust the motion data to be less than or equal to a preset value when an abnormal event occurs.
  • the first position information is represented by a direction angle and a distance, and characterizes the position of the first object;
  • the first group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed of tracking the first object in a state where the second object is not in the target area;
  • the second position information is represented by a direction angle and a distance, and characterizes the position distribution of the second object within the target area;
  • the second group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed of tracking the first object in a state where the second object is in the target area;
  • the processing unit 83 is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second position information of the second object; determine, according to the distance, the weights respectively corresponding to the first group of speed data and the second group of speed data; and weight the two groups of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
  • the first position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position of the first object;
  • the first group of speed data is represented by a first dimension velocity component, a second dimension velocity component, and a third dimension velocity component, and characterizes the speed of tracking the first object in a state where the second object is not in the target region;
  • the second position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position distribution of the second object in the target area;
  • the second group of speed data is represented by a first dimension velocity component, a second dimension velocity component, and a third dimension velocity component, and characterizes the speed of tracking the first object in a state where the second object is in the target region;
  • the processing unit 83 is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second position information of the second object; determine, according to the distance, the weights respectively corresponding to the first group of speed data and the second group of speed data; and weight the two groups of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
  • the first monitoring unit can be implemented by a UWB anchor; a UWB anchor usually consists of two or more UWB communication nodes.
  • the second monitoring unit can be implemented by a 3D camera.
  • the drive unit can be realized by a motor.
  • the anomaly detection unit can be implemented by a pose sensor.
  • the processing unit can be implemented by a processor.
  • If implemented in the form of a software function module and sold or used as a standalone product, the embodiments of the invention can also be stored in a computer readable storage medium. Based on such understanding, the technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a plurality of instructions by which a computer device (which may be a personal computer, a server, or a network device, etc.) is caused to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read only memory (ROM), a magnetic disk, or an optical disk.
  • embodiments of the invention are not limited to any specific combination of hardware and software.
  • an embodiment of the present invention further provides a computer storage medium, wherein a computer program is stored, and the computer program is used to execute the tracking method of the embodiment of the present invention.
  • the technical solution of the embodiments of the present invention monitors the first location information of the first object, monitors the second location information of the second object in the target area, determines the motion data for tracking the first object by combining the first location information and the second location information, and tracks the first object based on the motion data.
  • the tracking device detects the second object (also called an obstacle) in the target area while tracking the first object, simultaneously realizing target tracking and obstacle avoidance, thereby greatly reducing the possibility of colliding with obstacles during tracking and protecting the tracking device.


Abstract

Disclosed are a tracking method, a tracking device, and a computer storage medium, comprising: monitoring first location information of a first object; monitoring second location information of a second object in a target area; combining the first location information with the second location information, determining motion data for tracking the first object; and tracking the first object on the basis of the motion data.

Description

Tracking method, tracking device, and computer storage medium

Technical Field

The present invention relates to intelligent tracking technology, and in particular to a tracking method, a tracking device, and a computer storage medium.

Background Art

Ultra Wideband (UWB) is a wireless carrier communication technology that can be used for location tracking. Specifically, a UWB anchor node (UWB anchor) is set on a tracking robot and a UWB beacon (UWB tag) is set on a target object; the tracking robot can then use the UWB anchor to track the target object carrying the UWB tag in real time. During tracking, because there is a certain distance between the target object and the tracking robot, obstacles may appear between the two while the target object moves, causing the robot to collide during tracking, which leads to tracking failure or even damage to the tracking robot.
Summary of the Invention

To solve the above technical problem, embodiments of the present invention provide a tracking method, a tracking device, and a computer storage medium.

The tracking method provided by an embodiment of the present invention includes:

monitoring first location information of a first object;

monitoring second location information of a second object in a target area;

combining the first location information with the second location information to determine motion data for tracking the first object;

tracking the first object according to the motion data.
In an embodiment of the present invention, monitoring the second location information of the second object in the target area includes:

monitoring the target area to obtain a first coordinate parameter set representing the position distribution of each object in the target area;

acquiring a pose parameter of the monitoring device, and determining, according to the pose parameter, a second coordinate parameter set representing the position distribution of a third object in the target area;

acquiring the first location information of the first object and boundary information, and determining a third coordinate parameter set centered on the first location information and bounded by the boundary information;

removing the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area.
In an embodiment of the present invention, monitoring the second location information of the second object in the target area further includes:

projecting the fourth coordinate parameter set representing the distribution of the second object in the target area into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in that coordinate system, the fifth coordinate parameter set being used to represent the second location information of the second object.
In an embodiment of the present invention, monitoring the second location information of the second object in the target area includes:

monitoring the target area to obtain a first coordinate parameter set representing the position distribution of each object in the target area;

acquiring the first location information of the first object and boundary information, and determining a third coordinate parameter set centered on the first location information and bounded by the boundary information;

removing the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, the fourth coordinate parameter set being used to represent the second location information of the second object.
In an embodiment of the present invention, combining the first location information with the second location information to determine the motion data for tracking the first object includes:

determining, according to the first location information of the first object, a first group of speed data related to tracking the first object;

determining, according to the first location information of the first object and the second location information of the second object, a second group of speed data related to tracking the first object;

combining the first group of speed data, the second group of speed data, and the second location information of the second object to determine the motion data for tracking the first object.
In an embodiment of the present invention, the method further includes:

detecting whether an abnormal event occurs when the first object is tracked according to the motion data;

adjusting the motion data to be less than or equal to a preset value when an abnormal event occurs.
In an embodiment of the present invention, the first position information is represented by a direction angle and a distance, and characterizes the position of the first object; the first group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed of tracking the first object in a state where the second object is not in the target area;

the second position information is represented by a direction angle and a distance, and characterizes the position distribution of the second object within the target area; the second group of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed of tracking the first object in a state where the second object is in the target area;

correspondingly, combining the first group of speed data, the second group of speed data, and the second position information of the second object to determine the motion data for tracking the first object includes:

when tracking the first object, calculating the distance between the tracking device and the second object according to the current speed of the tracking device and the second position information of the second object;

determining, according to the distance, the weights respectively corresponding to the first group of speed data and the second group of speed data;

weighting the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
In an embodiment of the present invention, the first position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position of the first object; the first group of speed data is represented by a first dimension velocity component, a second dimension velocity component, and a third dimension velocity component, and characterizes the speed of tracking the first object in a state where the second object is not in the target area;

the second position information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position distribution of the second object in the target area; the second group of speed data is represented by a first dimension velocity component, a second dimension velocity component, and a third dimension velocity component, and characterizes the speed of tracking the first object in a state where the second object is in the target area;

correspondingly, combining the first group of speed data, the second group of speed data, and the second position information of the second object to determine the motion data for tracking the first object includes:

when tracking the first object, calculating the distance between the tracking device and the second object according to the current speed of the tracking device and the second position information of the second object;

determining, according to the distance, the weights respectively corresponding to the first group of speed data and the second group of speed data;

weighting the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
The tracking device provided by an embodiment of the present invention includes:

a first monitoring unit, configured to monitor first location information of a first object;

a second monitoring unit, configured to monitor second location information of a second object in a target area;

a processing unit, configured to combine the first location information with the second location information to determine motion data for tracking the first object;

a driving unit, configured to track the first object according to the motion data.
本发明实施例中,所述第二监测单元,还配置为:对目标区域进行监测,得到目标区域内表征各个对象位置分布的第一坐标参数集合;获取监 测装置的位姿参数,根据所述位姿参数确定所述目标区域内表征第三对象位置分布的第二坐标参数集合;获取所述第一对象的第一位置信息以及边界信息,确定出以所述第一位置信息为中心且以所述边界信息为边界约束的第三坐标参数集合;从所述第一坐标参数集合中去除所述第二坐标参数集合以及所述第三坐标参数集合,得到目标区域内的表征所述第二对象分布的第四坐标参数集合。In the embodiment of the present invention, the second monitoring unit is further configured to: monitor the target area, and obtain a first coordinate parameter set that represents a position distribution of each object in the target area; Determining, according to the pose parameter, a second coordinate parameter set that represents a third object position distribution in the target area; acquiring first position information and boundary information of the first object, and determining Determining, by the first location information, a third coordinate parameter set bounded by the boundary information; removing the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set, A fourth set of coordinate parameters characterizing the second object distribution within the target region is obtained.
本发明实施例中,所述第二监测单元,还配置为:将所述目标区域内的表征所述第二对象分布的第四坐标参数集合投影至预设维度的坐标系中,得到所述预设维度的坐标系内的第五坐标参数集合,所述第五坐标参数集合即用于表示所述第二对象的第二位置信息。In the embodiment of the present invention, the second monitoring unit is further configured to: project a fourth coordinate parameter set in the target area that represents the distribution of the second object into a coordinate system of a preset dimension, to obtain the a fifth coordinate parameter set in a coordinate system of the preset dimension, wherein the fifth coordinate parameter set is used to represent second position information of the second object.
In an embodiment of the present invention, the second monitoring unit is further configured to: monitor the target area to obtain a first coordinate parameter set representing the position distribution of each object in the target area; acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, the fourth coordinate parameter set representing the second location information of the second object.
In an embodiment of the present invention, the processing unit is further configured to: determine, according to the first location information of the first object, a first set of speed data for tracking the first object; determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data for tracking the first object; and combine the first set of speed data, the second set of speed data, and the second location information of the second object to determine the motion data for tracking the first object.
In an embodiment of the present invention, the device further includes:
an anomaly detection unit configured to detect whether an abnormal event occurs while the first object is being tracked according to the motion data;
the processing unit being further configured to, when an abnormal event occurs, adjust the motion data to be less than or equal to a preset value.
In an embodiment of the present invention, the first location information is represented by a direction angle and a distance, characterizing the position of the first object; the first set of speed data is represented by an angular velocity and a linear velocity, characterizing the speed at which the first object is tracked when no second object is present in the target area;
the second location information is represented by a direction angle and a distance, characterizing the position distribution of the second object within the target area; the second set of speed data is represented by an angular velocity and a linear velocity, characterizing the speed at which the first object is tracked when a second object is present in the target area;
correspondingly, the processing unit is further configured to: while tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object; determine, according to that distance, the weights corresponding to the first set of speed data and the second set of speed data; and weight the first set of speed data and the second set of speed data by the determined weights to obtain the motion data with which the tracking device tracks the first object.
In an embodiment of the present invention, the first location information is represented by a direction angle, an elevation angle, and a distance, characterizing the position of the first object; the first set of speed data is represented by a first-dimension speed component, a second-dimension speed component, and a third-dimension speed component, characterizing the speed at which the first object is tracked when no second object is present in the target area;
the second location information is represented by a direction angle, an elevation angle, and a distance, characterizing the position distribution of the second object within the target area; the second set of speed data is represented by a first-dimension speed component, a second-dimension speed component, and a third-dimension speed component, characterizing the speed at which the first object is tracked when a second object is present in the target area;
correspondingly, the processing unit is further configured to: while tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object; determine, according to that distance, the weights corresponding to the first set of speed data and the second set of speed data; and weight the first set of speed data and the second set of speed data by the determined weights to obtain the motion data with which the tracking device tracks the first object.
The computer storage medium provided by an embodiment of the present invention stores a computer program configured to execute the tracking method described above.
In the technical solutions of the embodiments of the present invention, first location information of a first object is monitored; second location information of second objects in a target area is monitored; the first location information is combined with the second location information to determine motion data for tracking the first object; and the first object is tracked according to the motion data. With the embodiments of the present invention, the tracking device detects second objects (also called obstacles) in the target area while tracking the first object, achieving target tracking and obstacle avoidance at the same time, which greatly reduces the possibility of colliding with obstacles during tracking and protects the tracking device.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a first schematic flowchart of a tracking method according to an embodiment of the present invention;
FIG. 2 is a second schematic flowchart of a tracking method according to an embodiment of the present invention;
FIG. 3 is a first schematic diagram of a scenario according to an embodiment of the present invention;
FIG. 4 is a first schematic diagram of information fusion according to an embodiment of the present invention;
FIG. 5 is a third schematic flowchart of a tracking method according to an embodiment of the present invention;
FIG. 6 is a second schematic diagram of a scenario according to an embodiment of the present invention;
FIG. 7 is a second schematic diagram of information fusion according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the structure of a tracking device according to an embodiment of the present invention.
DETAILED DESCRIPTION
To provide a more detailed understanding of the features and technical content of the embodiments of the present invention, implementations of the embodiments are described in detail below with reference to the accompanying drawings. The drawings are for reference and illustration only and are not intended to limit the embodiments of the present invention.
FIG. 1 is a first schematic flowchart of a tracking method according to an embodiment of the present invention. The tracking method in this example is applied to a tracking device. As shown in FIG. 1, the tracking method includes the following steps:
Step 101: Monitor first location information of a first object.
In an embodiment of the present invention, the tracking device includes two types of sensors: the first type monitors the first location information of the first object, and the second type monitors the second location information of second objects in the target area.
In one implementation, the first type of sensor may be a UWB anchor; correspondingly, the first object carries a UWB tag. The tracking device locates the UWB tag carried by the first object through the UWB anchor to obtain the first location information of the first object.
In the above scheme, the UWB anchor typically consists of two or more UWB communication nodes, and the UWB tag consists of another UWB communication node. The position of the UWB tag relative to the UWB anchor, i.e., the first location information of the first object, is determined by time-of-flight (TOF) ranging and triangulation.
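The TOF ranging and triangulation principle described above can be illustrated with a minimal sketch. The function names, the single-sided two-way ranging model, and the simplified two-anchor geometry (both anchors on the x-axis, tag assumed on the positive-y side) are assumptions for illustration, not the implementation in this disclosure:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_s, reply_delay_s):
    # Two-way ranging: the one-way flight time is half the round-trip
    # time minus the tag's known processing delay.
    return SPEED_OF_LIGHT * (round_trip_s - reply_delay_s) / 2.0

def trilaterate_2d(x0, x1, d0, d1):
    # Intersect two range circles centered on anchors at (x0, 0) and
    # (x1, 0); return the tag position on the positive-y side.
    base = x1 - x0
    x = x0 + (d0 ** 2 - d1 ** 2 + base ** 2) / (2.0 * base)
    y = math.sqrt(max(d0 ** 2 - (x - x0) ** 2, 0.0))
    return x, y
```

With the two measured ranges, the tag's (x, y) relative to the anchor pair corresponds to the first location information of the first object.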
In the embodiments of the present invention, the first object is the object to be tracked.
Step 102: Monitor second location information of second objects in the target area.
In an embodiment of the present invention, the second location information of the second objects in the target area is monitored by the second type of sensor.
In one implementation, the second type of sensor may be a 3D camera; by capturing three-dimensional images of the target area with the 3D camera, the second location information of the second objects in the target area is obtained. Here, the 3D camera obtains the position, relative to the camera, of each object in its field of view (corresponding to the target area) through structured light, TOF, binocular vision, or similar techniques. Taking TOF as an example, TOF is a two-way ranging technique that measures the distance between nodes mainly from the round-trip flight time of a signal between two asynchronous transceivers.
In another implementation, the second type of sensor may be a LiDAR sensor, which obtains the distance of surrounding objects relative to the sensor by laser scanning.
In the embodiments of the present invention, the second objects are obstacles relative to the first object. While tracking the first object, the second objects must be avoided to prevent collisions.
In one implementation, the tracking device may be a ground robot. Since a ground robot can only move on the two-dimensional ground, the first location information of the first object and the second location information of the second objects are represented in two-dimensional space. For example, in a polar coordinate system, the first location information of the first object is represented by a direction angle θ and a distance d, with (d, θ) characterizing the position of the first object in two-dimensional space. The second location information of a second object is represented by a direction angle θ′ and a distance d′, with (d′, θ′) characterizing its position in two-dimensional space; the second location information of all second objects in the target area is collected together to form a two-dimensional obstacle avoidance map M.
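As a brief illustration of this polar representation, the following sketch converts (d, θ) readings to Cartesian coordinates and collects the second objects' (d′, θ′) readings into a map M; the function names and list-based map are assumptions for illustration only:

```python
import math

def polar_to_cartesian(d, theta):
    # (d, θ) with θ in radians, measured from the robot's forward axis.
    return d * math.cos(theta), d * math.sin(theta)

def build_map(second_object_readings):
    # Collect every second object's (d', θ') into a 2-D avoidance map M,
    # here simply a list of Cartesian obstacle positions.
    return [polar_to_cartesian(d, th) for d, th in second_object_readings]
```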
In another implementation, the tracking device may be a drone. Since a drone can move in three-dimensional space, the first location information of the first object and the second location information of the second objects are represented in three-dimensional space. For example, in a polar (spherical) coordinate system, the first location information of the first object is represented by a direction angle θ, an elevation angle φ, and a distance d, with (d, θ, φ) characterizing the position of the first object in three-dimensional space. The second location information of a second object is represented by a direction angle θ′, an elevation angle φ′, and a distance d′, with (d′, θ′, φ′) characterizing its position in three-dimensional space; the second location information of all second objects in the target area is collected together to form a three-dimensional obstacle avoidance map M.
Step 103: Combine the first location information with the second location information to determine motion data for tracking the first object, and track the first object according to the motion data.
In an embodiment of the present invention, a first set of speed data for tracking the first object is determined according to the first location information of the first object; a second set of speed data for tracking the first object is determined according to the first location information of the first object and the second location information of the second objects; and the first set of speed data, the second set of speed data, and the second location information of the second objects are combined to determine the motion data for tracking the first object.
Specifically: 1) The tracking device has a proportional-integral-derivative (PID) module. Its input is the first location information of the first object, and its output is the first set of speed data with which the tracking device would track the first object if there were no obstacles. 2) The tracking device also has an obstacle avoidance module. Its inputs are the obstacle avoidance map M formed from the second location information of the second objects and the first location information of the first object; its output is the second set of speed data, i.e., speed data selected from all feasible trajectories of the tracking device's motion model that avoids the second objects while staying as close as possible to the first object. 3) The tracking device further has an information fusion module. Its inputs are the first set of speed data, the second set of speed data, and the obstacle avoidance map M; its output is the tracking device's final motion data. Here, the two sets of speed data are fused on the basis of the map M: the distance between the tracking device and the second objects is predicted in M from the tracking device's current motion data; the larger that distance, the greater the weight of the first set of speed data, and conversely, the smaller that distance, the greater the weight of the second set of speed data. Finally, the two sets of speed data are weighted by their respective weights to obtain the motion data for tracking the first object.
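A minimal sketch of the PID module follows. It reduces the controller to its proportional terms, with illustrative gains and a hypothetical desired follow distance; the actual gains and the integral/derivative terms are not specified here:

```python
def pid_module(d, theta, kp_v=0.8, kp_w=1.5, follow_distance=1.0):
    # Proportional-only sketch of the PID module: linear speed closes
    # the range error to the desired follow distance; angular speed
    # turns the device toward the target bearing.
    v1 = kp_v * (d - follow_distance)
    w1 = kp_w * theta
    return v1, w1
```

At the desired follow distance with the target dead ahead, the command is (0, 0); as the target pulls away or moves off-axis, the linear and angular speeds grow proportionally.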
In an embodiment of the present invention, while the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In one implementation, the preset value is zero: as soon as the tracking device is at risk of falling or colliding, the braking logic is forcibly triggered to keep the tracking device safe.
FIG. 2 is a second schematic flowchart of a tracking method according to an embodiment of the present invention. The tracking method in this example is applied to a ground robot. As shown in FIG. 2, the tracking method includes the following steps:
Step 201: Monitor first location information of a first object.
In an embodiment of the present invention, the ground robot includes two types of sensors: the first type monitors the first location information of the first object, and the second type monitors the second location information of second objects in the target area.
In one implementation, the first type of sensor may be a UWB anchor; correspondingly, the first object carries a UWB tag. The ground robot locates the UWB tag carried by the first object through the UWB anchor to obtain the first location information of the first object.
In the above scheme, the UWB anchor typically consists of two or more UWB communication nodes, and the UWB tag consists of another UWB communication node. The position of the UWB tag relative to the UWB anchor, i.e., the first location information of the first object, is determined by time-of-flight (TOF) ranging and triangulation.
In the embodiments of the present invention, the first object is the object to be tracked.
In an embodiment of the present invention, the first location information is represented by a direction angle θ and a distance d, with (d, θ) characterizing the position of the first object.
Step 202: Monitor the target area to obtain a first coordinate parameter set representing the position distribution of each object in the target area.
In an embodiment of the present invention, the second location information of the second objects in the target area is monitored by the second type of sensor. In one implementation, the second type of sensor is a 3D camera; by capturing three-dimensional images of the target area with the 3D camera, the second location information of the second objects in the target area is obtained. In another implementation, the second type of sensor is a LiDAR sensor, which obtains the distance of surrounding objects relative to the sensor by laser scanning.
In the embodiments of the present invention, the second objects are obstacles relative to the first object. While tracking the first object, the second objects must be avoided to prevent collisions.
In a specific implementation, the target area is first monitored to obtain the first coordinate parameter set representing the position distribution of each object in the target area. Specifically, the ground robot obtains from the second type of sensor the three-dimensional spatial distribution of all visible obstacles in the target area, O_A = {o_i: (x_i, y_i, z_i)}.
Step 203: Acquire pose parameters of the monitoring device, and determine, according to the pose parameters, a second coordinate parameter set representing the position distribution of a third object in the target area.
In an embodiment of the present invention, because the ground robot moves on the ground, the three-dimensional position of the ground (i.e., the second coordinate parameter set of the third object's position) is derived from the ground robot's attitude and the mounting height of the second type of sensor, and the ground positions are removed from the obstacle distribution O_A to obtain the ground-free obstacle distribution O_B.
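Assuming a level robot for simplicity (a real implementation would first rotate the point cloud by the pose parameters), the ground removal O_A → O_B can be sketched as a height filter; the point format and tolerance are illustrative assumptions:

```python
def remove_ground(points, sensor_height, tolerance=0.05):
    # In the sensor frame of a level robot the ground plane sits at
    # z = -sensor_height; drop every point within `tolerance` of it
    # or below, keeping only above-ground obstacle points.
    return [(x, y, z) for (x, y, z) in points
            if z > -sensor_height + tolerance]
```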
Step 204: Acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information.
Specifically, referring to FIG. 3, from the first location information (d, θ) of the first object relative to the robot and the previously known boundary information of the first object, i.e., the size of its 3D bounding box, the third coordinate parameter set characterizing the spatial distribution of the first object can be determined; all obstacle points inside the 3D bounding box centered on the first location are removed from the obstacle distribution O_B to obtain the final obstacle distribution O_c.
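The removal of the tracked target's own points from O_B can be sketched as an axis-aligned box test; the box size, point format, and function name are illustrative assumptions:

```python
def remove_target_box(points, center, box_size):
    # Drop every point inside the 3D bounding box centred on the
    # first object's position; what remains is the obstacle set O_c.
    cx, cy, cz = center
    sx, sy, sz = box_size
    def inside(x, y, z):
        return (abs(x - cx) <= sx / 2
                and abs(y - cy) <= sy / 2
                and abs(z - cz) <= sz / 2)
    return [p for p in points if not inside(*p)]
```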
Step 205: Remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second objects in the target area.
Specifically, the ground positions are first removed from the obstacle distribution O_A to obtain the ground-free obstacle distribution O_B; then all obstacle points inside the 3D bounding box centered on the first object are removed from O_B to obtain the final obstacle distribution O_c.
Step 206: Project the fourth coordinate parameter set representing the distribution of the second objects in the target area into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in that coordinate system, the fifth coordinate parameter set representing the second location information of the second objects.
In an embodiment of the present invention, since the ground robot moves in two-dimensional space, the fourth coordinate parameter set is projected into a two-dimensional coordinate system; the resulting second location information can then be represented by a direction angle and a distance in two-dimensional polar coordinates, characterizing the position distribution of the second objects within the target area.
Specifically, the obstacle distribution O_c is projected onto the horizontal plane (i.e., the ground) to obtain the two-dimensional local obstacle avoidance map M, which contains the second location information of each second object.
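The projection of O_c onto the ground plane, with each remaining point expressed in the polar form (d′, θ′), might look like the following sketch (the function name is an assumption):

```python
import math

def project_to_map(obstacle_points_3d):
    # Drop the height coordinate and express each obstacle as (d', θ')
    # relative to the robot, forming the 2-D local avoidance map M.
    return [(math.hypot(x, y), math.atan2(y, x))
            for x, y, _z in obstacle_points_3d]
```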
Step 207: Determine, according to the first location information of the first object, a first set of speed data for tracking the first object.
In an embodiment of the present invention, the first set of speed data is represented by an angular velocity and a linear velocity, characterizing the speed at which the first object is tracked when no second object is present in the target area.
In an embodiment of the present invention, the ground robot has a local motion controller comprising a PID module, an obstacle avoidance module, and an information fusion module.
Specifically, the input of the PID module is the first location information (d, θ) of the first object, and its output is the first set of speed data (v1, ω1) with which the ground robot would track the first object if there were no obstacles.
Step 208: Determine, according to the first location information of the first object and the second location information of the second objects, a second set of speed data for tracking the first object.
In an embodiment of the present invention, the second set of speed data is represented by an angular velocity and a linear velocity, characterizing the speed at which the first object is tracked when a second object is present in the target area.
Specifically, the inputs of the obstacle avoidance module are the obstacle avoidance map M formed from the second location information of the second objects and the first location information (d, θ) of the first object; its output is the second set of speed data (v2, ω2), i.e., speed data selected from all feasible trajectories of the ground robot's motion model that avoids the second objects while staying as close as possible to the first object.
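One way to realize "select, from all feasible trajectories, the speed that avoids the second objects while approaching the first object" is a dynamic-window-style search; the candidate set, clearance radius, rollout horizon, and unicycle motion model below are assumptions for illustration, not the disclosed implementation:

```python
import math

def rollout(v, w, dt=0.2, steps=10):
    # Forward-integrate a unicycle motion model from the robot's
    # current pose, taken as the origin (x, y, heading) = (0, 0, 0).
    x = y = th = 0.0
    traj = []
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return traj

def avoidance_module(candidates, obstacles, target, clearance=0.3):
    # Return the candidate (v2, ω2) whose rollout keeps clear of every
    # obstacle in M and ends nearest the first object's position.
    best, best_cost = None, float("inf")
    for v, w in candidates:
        traj = rollout(v, w)
        if any(math.dist(p, o) < clearance for p in traj for o in obstacles):
            continue  # this trajectory would hit a second object
        cost = math.dist(traj[-1], target)
        if cost < best_cost:
            best, best_cost = (v, w), cost
    return best
```

Driving straight at an obstacle is rejected, while a curving candidate that skirts it is accepted.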
Step 209: Combine the first set of speed data, the second set of speed data, and the second location information of the second objects to determine motion data for tracking the first object, and track the first object according to the motion data.
In an embodiment of the present invention, while tracking the first object, the distance between the ground robot and the second objects is calculated according to the ground robot's current speed and the second location information of the second objects; the weights corresponding to the first set of speed data and the second set of speed data are determined according to that distance; and the two sets of speed data are weighted by the determined weights to obtain the motion data with which the ground robot tracks the first object.
Referring to FIG. 4, the inputs of the information fusion module are the first set of speed data (v1, ω1), the second set of speed data (v2, ω2), and the obstacle avoidance map M formed from the second location information of the second objects; its output is the ground robot's final motion data (v3, ω3). Here, the two sets of speed data are fused on the basis of the map M: the distance d_c between the ground robot and the second objects is predicted in M from the ground robot's current motion data (v0, ω0); the larger d_c, the greater the weight of the first set of speed data (v1, ω1), and conversely, the smaller d_c, the greater the weight of the second set of speed data (v2, ω2). Finally, the two sets of speed data are weighted by their respective weights to obtain the motion data for tracking the first object.
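A brief sketch of the fusion rule follows. The linear weight schedule and the safety distance d_safe are illustrative assumptions, since the text fixes only the monotonic relationship between d_c and the weights:

```python
def fuse_commands(track_cmd, avoid_cmd, d_c, d_safe=1.0):
    # Weight of the tracking command (v1, ω1) grows with the predicted
    # clearance d_c; the avoidance command (v2, ω2) takes over as d_c → 0.
    alpha = min(max(d_c / d_safe, 0.0), 1.0)
    v3 = alpha * track_cmd[0] + (1.0 - alpha) * avoid_cmd[0]
    w3 = alpha * track_cmd[1] + (1.0 - alpha) * avoid_cmd[1]
    return v3, w3
```

Far from obstacles the output equals the tracking command; at zero clearance it equals the avoidance command; in between it blends the two.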
In an embodiment of the present invention, while the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In one implementation, the preset value is zero: as soon as the ground robot is at risk of falling or colliding, the braking logic is forcibly triggered to keep the ground robot safe.
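The braking behaviour, clamping the command magnitudes to the preset value when an abnormal event is detected, can be sketched as follows (the function name and signature are assumptions):

```python
def apply_safety(v, w, abnormal_event, preset=0.0):
    # On a detected fall or collision risk, force both speed components
    # to magnitudes no greater than the preset value (zero = full stop).
    if not abnormal_event:
        return v, w
    def clamp(x):
        return max(-preset, min(preset, x))
    return clamp(v), clamp(w)
```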
图5为本发明实施例的追踪方法的流程示意图三,本示例中的追踪方法应用于无人机,如图5所示,所述追踪方法包括以下步骤:FIG. 5 is a schematic flowchart of a tracking method according to an embodiment of the present invention. The tracking method in this example is applied to a drone. As shown in FIG. 5, the tracking method includes the following steps:
步骤501:监测第一对象的第一位置信息。Step 501: Monitor first location information of the first object.
本发明实施例中,无人机包括两类传感器,其中,第一类传感器用于 监测第一对象的第一位置信息,第二类传感器用于监测目标区域内的第二对象的第二位置信息。In the embodiment of the present invention, the drone includes two types of sensors, wherein the first type of sensor is used for The first location information of the first object is monitored, and the second type of sensor is used to monitor the second location information of the second object in the target area.
In one embodiment, the first type of sensor is a UWB anchor; correspondingly, the first object needs to carry a UWB tag. The drone locates the UWB tag carried by the first object through the UWB anchor to obtain the first location information of the first object.
In the above solution, the UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is composed of another UWB communication node. The position of the UWB tag relative to the UWB anchor, i.e., the first location information of the first object, is determined using time-of-flight (TOF) ranging and the principle of triangulation.
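The TOF-plus-triangulation scheme can be sketched as follows for two anchors in a plane. The anchor layout and the choice of the non-negative mirror solution are illustrative assumptions; the embodiment does not specify node geometry.

```python
import math

def tof_distance(t_flight_ns):
    """Range from a one-way time of flight, using the speed of light."""
    C = 0.299792458  # metres per nanosecond
    return t_flight_ns * C

def triangulate_2d(p1, p2, d1, d2):
    """Position of the tag from ranges d1, d2 to anchors p1, p2.

    Standard two-circle intersection; returns the solution on the
    positive side of the baseline (the tie-break is an assumption).
    """
    (x1, y1), (x2, y2) = p1, p2
    ex = (x2 - x1, y2 - y1)
    baseline = math.hypot(*ex)
    ex = (ex[0] / baseline, ex[1] / baseline)  # unit vector along baseline
    ey = (-ex[1], ex[0])                       # unit vector across baseline
    a = (d1**2 - d2**2 + baseline**2) / (2 * baseline)  # along-baseline offset
    h = math.sqrt(max(0.0, d1**2 - a**2))               # across-baseline offset
    return (x1 + a * ex[0] + h * ey[0],
            x1 * 0 + y1 + a * ex[1] + h * ey[1])
```

A tag equidistant from anchors at (0, 0) and (2, 0) with range √2 resolves to (1, 1).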
In the embodiment of the present invention, the first object refers to the object to be tracked.
In the embodiment of the present invention, the first location information is represented by a direction angle θ, an elevation angle φ, and a distance d; that is, (θ, φ, d) characterizes the location of the first object.
Step 502: Monitor the target area to obtain a first coordinate parameter set characterizing the location distribution of each object in the target area.
In the embodiment of the present invention, the second location information of the second object in the target area is monitored by the second type of sensor. In one embodiment, the second type of sensor is a 3D camera, and the second location information of the second object in the target area is obtained by acquiring a three-dimensional image of the target area with the 3D camera. In another embodiment, the second type of sensor is a LiDAR sensor, which obtains the distance of surrounding objects relative to the sensor by laser scanning.
In the embodiment of the present invention, the second object refers to an obstacle relative to the first object. When tracking the first object, the second object needs to be avoided so as to prevent a collision with it.
In a specific implementation, the target area is first monitored to obtain a first coordinate parameter set characterizing the location distribution of each object in the target area. Specifically, the drone obtains from the second type of sensor the three-dimensional spatial distribution of all visible obstacles in the target area, O_A = {o_i : (x_i, y_i, z_i)}.
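For the LiDAR embodiment, building the point set O_A from a range scan can be sketched as follows. The single-ring planar scan layout and the (ranges, angle_min, angle_step) input format are assumptions; the embodiment does not specify the sensor interface.

```python
import math

def scan_to_points(ranges, angle_min, angle_step, z=0.0):
    """Convert a planar range scan into the 3-D point set O_A.

    ranges[i] is the measured distance at bearing
    angle_min + i * angle_step (radians); non-finite readings
    (no return) are dropped. A fixed z is assumed for a
    single-ring LiDAR.
    """
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue
        theta = angle_min + i * angle_step
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points
```

A 3D camera would populate the same set directly from its depth image instead.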
Step 503: Acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information.
Specifically, referring to FIG. 6, based on the first location information (θ, φ, d) of the first object relative to the robot and the boundary information of the first object known in advance, i.e., the size of its 3D bounding box, a third coordinate parameter set characterizing the spatial distribution of the first object can be determined. All obstacles within the 3D bounding box centered at the first location are removed from the obstacle distribution O_A to obtain the final obstacle distribution O_B.
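The bounding-box removal described in this step can be sketched as follows. Representing the box size as full edge lengths (lx, ly, lz) is an assumed format; the embodiment only says the box is centered on the first object's location.

```python
def remove_target_box(obstacles, center, box_size):
    """Remove from O_A every point inside the 3D bounding box
    centered on the first object's location, yielding O_B.

    obstacles: iterable of (x, y, z) points;
    center:    (cx, cy, cz), the first object's location;
    box_size:  (lx, ly, lz) full edge lengths (assumed format).
    """
    cx, cy, cz = center
    hx, hy, hz = (s / 2.0 for s in box_size)  # half extents
    return [
        (x, y, z) for (x, y, z) in obstacles
        if abs(x - cx) > hx or abs(y - cy) > hy or abs(z - cz) > hz
    ]
```

This keeps the tracked object itself from being treated as an obstacle while leaving all genuine obstacles in O_B.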
Step 504: Remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area; the fourth coordinate parameter set is used to represent the second location information of the second object.
Specifically, all obstacles within the 3D bounding box centered on the first object are removed from the obstacle distribution O_A to obtain the final obstacle distribution O_B. O_B is the three-dimensional obstacle avoidance map and includes the second location information of each second object.
Step 505: Determine, according to the first location information of the first object, a first set of speed data related to tracking the first object.
In the embodiment of the present invention, the first set of speed data is represented by a first-dimension speed component, a second-dimension speed component, and a third-dimension speed component, and characterizes the speed of tracking the first object when no second object is present in the target area.
In the embodiment of the present invention, the drone has a local motion controller, which includes a PID module, an obstacle avoidance module, and an information fusion module.
Specifically, the input of the PID module is the first location information (θ, φ, d) of the first object, and its output is the first set of speed data (α1, β1, γ1) with which the drone tracks the first object when no obstacle is present.
In the embodiment of the present invention, the speed data are all speed data in three-dimensional space, where the first-dimension speed component is the speed component of the drone's rotation about the x axis (i.e., the roll axis), the second-dimension speed component is the speed component of the drone's rotation about the y axis (i.e., the pitch axis), and the third-dimension speed component is the speed component of the drone's rotation about the z axis (i.e., the yaw axis).
Step 506: Determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object.
In the embodiment of the present invention, the second set of speed data is represented by a first-dimension speed component, a second-dimension speed component, and a third-dimension speed component, and characterizes the speed of tracking the first object when the second object is present in the target area.
Specifically, the inputs of the obstacle avoidance module are the obstacle avoidance map O_B formed based on the second location information of the second object and the first location information (θ, φ, d) of the first object, and its output is the second set of speed data (α2, β2, γ2). Here, the second set of speed data is the speed data selected, according to the motion model of the drone, from all possible motion trajectories so as to avoid the second object while staying as close as possible to the first object.
Step 507: Determine the motion data for tracking the first object by combining the first set of speed data, the second set of speed data, and the second location information of the second object; track the first object according to the motion data.
In the embodiment of the present invention, when tracking the first object, the distance between the drone and the second object is calculated according to the current speed of the drone and the second location information of the second object; the weights respectively corresponding to the first set of speed data and the second set of speed data are determined according to the distance; and, based on the determined weights, the first set of speed data and the second set of speed data are weighted to obtain the motion data with which the drone tracks the first object.
Referring to FIG. 7, the inputs of the information fusion module are the first set of speed data (α1, β1, γ1), the second set of speed data (α2, β2, γ2), and the obstacle avoidance map O_B formed based on the second location information of the second object; the output of the information fusion module is the final motion data (α3, β3, γ3) of the drone. Here, the first set of speed data and the second set of speed data are fused based on the obstacle avoidance map O_B according to the following principle: the distance d_c between the drone and the second object is predicted in the obstacle avoidance map O_B according to the current motion data (α0, β0, γ0) of the drone; the greater the distance d_c between the drone and the second object, the greater the weight of the first set of speed data (α1, β1, γ1); conversely, the smaller the distance d_c between the drone and the second object, the greater the weight of the second set of speed data (α2, β2, γ2). Finally, the first set of speed data (α1, β1, γ1) and the second set of speed data (α2, β2, γ2) are weighted according to their respective weights to obtain the motion data for tracking the first object.
In the embodiment of the present invention, while the first object is being tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In one embodiment, the preset value is zero; in this case, once the drone is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the drone.
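The forced-brake behaviour above can be sketched as a per-component clamp. Treating "adjust to be less than or equal to a preset value" as a symmetric magnitude clamp is an assumption; with a preset of zero it reduces to a full stop.

```python
def apply_safety_limit(motion, abnormal, preset=0.0):
    """Clamp each motion component to magnitude <= preset when an
    abnormal event (fall or collision risk) is detected.

    With preset = 0 this is the forced-brake logic: all motion
    components are driven to zero. The symmetric clamp is an
    assumed interpretation of the embodiment.
    """
    if not abnormal:
        return motion
    return tuple(max(-preset, min(preset, c)) for c in motion)
```

In normal operation the motion data passes through unchanged; the clamp engages only on an abnormal event.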
FIG. 8 is a schematic structural diagram of a tracking device according to an embodiment of the present invention. As shown in FIG. 8, the tracking device includes:
a first monitoring unit 81, configured to monitor first location information of a first object;

a second monitoring unit 82, configured to monitor second location information of a second object in a target area;

a processing unit 83, configured to determine, by combining the first location information and the second location information, motion data for tracking the first object; and

a driving unit 84, configured to track the first object according to the motion data.
In the embodiment of the present invention, the second monitoring unit 82 is further configured to: monitor the target area to obtain a first coordinate parameter set characterizing the location distribution of each object in the target area; acquire pose parameters of the monitoring device, and determine, according to the pose parameters, a second coordinate parameter set characterizing the location distribution of a third object in the target area; acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area.
In the embodiment of the present invention, the second monitoring unit 82 is further configured to: project the fourth coordinate parameter set characterizing the distribution of the second object in the target area into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in the coordinate system of the preset dimension; the fifth coordinate parameter set is used to represent the second location information of the second object.
In the embodiment of the present invention, the second monitoring unit 82 is further configured to: monitor the target area to obtain a first coordinate parameter set characterizing the location distribution of each object in the target area; acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area; the fourth coordinate parameter set is used to represent the second location information of the second object.
In the embodiment of the present invention, the processing unit 83 is further configured to: determine, according to the first location information of the first object, a first set of speed data related to tracking the first object; determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object; and determine the motion data for tracking the first object by combining the first set of speed data, the second set of speed data, and the second location information of the second object.
In the embodiment of the present invention, the device further includes:

an abnormality detection unit 85, configured to detect, while the first object is being tracked according to the motion data, whether an abnormal event occurs;

the processing unit 83 is further configured to adjust the motion data to be less than or equal to a preset value when an abnormal event occurs.
In the embodiment of the present invention, the first location information is represented by a direction angle and a distance and characterizes the location of the first object; the first set of speed data is represented by an angular velocity and a linear velocity and characterizes the speed of tracking the first object when no second object is present in the target area;

the second location information is represented by a direction angle and a distance and characterizes the location distribution of the second object in the target area; the second set of speed data is represented by an angular velocity and a linear velocity and characterizes the speed of tracking the first object when the second object is present in the target area;

correspondingly, the processing unit 83 is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object; determine, according to the distance, the weights respectively corresponding to the first set of speed data and the second set of speed data; and, based on the determined weights, weight the first set of speed data and the second set of speed data to obtain the motion data with which the tracking device tracks the first object.
In the embodiment of the present invention, the first location information is represented by a direction angle, an elevation angle, and a distance and characterizes the location of the first object; the first set of speed data is represented by a first-dimension speed component, a second-dimension speed component, and a third-dimension speed component and characterizes the speed of tracking the first object when no second object is present in the target area;

the second location information is represented by a direction angle, an elevation angle, and a distance and characterizes the location distribution of the second object in the target area; the second set of speed data is represented by a first-dimension speed component, a second-dimension speed component, and a third-dimension speed component and characterizes the speed of tracking the first object when the second object is present in the target area;

correspondingly, the processing unit 83 is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object; determine, according to the distance, the weights respectively corresponding to the first set of speed data and the second set of speed data; and, based on the determined weights, weight the first set of speed data and the second set of speed data to obtain the motion data with which the tracking device tracks the first object.
Those skilled in the art should understand that the functions implemented by the units of the tracking device shown in FIG. 8 can be understood with reference to the foregoing description of the tracking method.
In practical applications, the first monitoring unit may be implemented by a UWB anchor, which is usually composed of two or more UWB communication nodes. The second monitoring unit may be implemented by a 3D camera. The driving unit may be implemented by a motor. The abnormality detection unit may be implemented by a pose sensor. The processing unit may be implemented by a processor.
In the embodiment of the present invention, if the above tracking device is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present invention further provides a computer storage medium storing a computer program for executing the tracking method of the embodiments of the present invention.
Although preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various improvements, additions, and substitutions are possible; therefore, the scope of the present invention should not be limited to the above embodiments.
Industrial applicability
According to the technical solutions of the embodiments of the present invention, first location information of a first object is monitored; second location information of a second object in a target area is monitored; motion data for tracking the first object is determined by combining the first location information and the second location information; and the first object is tracked according to the motion data. The tracking device detects the second object (also called an obstacle) in the target area while tracking the first object, thereby tracking the target and avoiding obstacles at the same time, which greatly reduces the possibility of colliding with obstacles during tracking and protects the tracking device.

Claims (17)

  1. A tracking method, the method comprising:
    monitoring first location information of a first object;
    monitoring second location information of a second object in a target area;
    determining, by combining the first location information and the second location information, motion data for tracking the first object; and
    tracking the first object according to the motion data.
  2. The tracking method according to claim 1, wherein the monitoring of the second location information of the second object in the target area comprises:
    monitoring the target area to obtain a first coordinate parameter set characterizing the location distribution of each object in the target area;
    acquiring pose parameters of a monitoring device, and determining, according to the pose parameters, a second coordinate parameter set characterizing the location distribution of a third object in the target area;
    acquiring the first location information and boundary information of the first object, and determining a third coordinate parameter set centered on the first location information and bounded by the boundary information; and
    removing the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area.
  3. The tracking method according to claim 2, wherein the monitoring of the second location information of the second object in the target area further comprises:
    projecting the fourth coordinate parameter set characterizing the distribution of the second object in the target area into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in the coordinate system of the preset dimension, the fifth coordinate parameter set being used to represent the second location information of the second object.
  4. The tracking method according to claim 1, wherein the monitoring of the second location information of the second object in the target area comprises:
    monitoring the target area to obtain a first coordinate parameter set characterizing the location distribution of each object in the target area;
    acquiring the first location information and boundary information of the first object, and determining a third coordinate parameter set centered on the first location information and bounded by the boundary information; and
    removing the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area, the fourth coordinate parameter set being used to represent the second location information of the second object.
  5. The tracking method according to claim 1, wherein the determining, by combining the first location information and the second location information, of the motion data for tracking the first object comprises:
    determining, according to the first location information of the first object, a first set of speed data related to tracking the first object;
    determining, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object; and
    determining the motion data for tracking the first object by combining the first set of speed data, the second set of speed data, and the second location information of the second object.
  6. The tracking method according to claim 5, wherein the method further comprises:
    detecting, while the first object is being tracked according to the motion data, whether an abnormal event occurs; and
    adjusting the motion data to be less than or equal to a preset value when an abnormal event occurs.
  7. The tracking method according to claim 5, wherein:
    the first location information is represented by a direction angle and a distance and characterizes the location of the first object; the first set of speed data is represented by an angular velocity and a linear velocity and characterizes the speed of tracking the first object when no second object is present in the target area;
    the second location information is represented by a direction angle and a distance and characterizes the location distribution of the second object in the target area; the second set of speed data is represented by an angular velocity and a linear velocity and characterizes the speed of tracking the first object when the second object is present in the target area; and
    correspondingly, the determining of the motion data for tracking the first object by combining the first set of speed data, the second set of speed data, and the second location information of the second object comprises:
    when tracking the first object, calculating the distance between a tracking device and the second object according to the current speed of the tracking device and the second location information of the second object;
    determining, according to the distance, the weights respectively corresponding to the first set of speed data and the second set of speed data; and
    weighting, based on the determined weights, the first set of speed data and the second set of speed data to obtain the motion data with which the tracking device tracks the first object.
  8. The tracking method according to claim 5, wherein
    the first location information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position of the first object; the first set of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed of tracking the first object when the second object is not present in the target area;
    the second location information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position distribution of the second object within the target area; the second set of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed of tracking the first object when the second object is present in the target area;
    correspondingly, determining the motion data for tracking the first object by combining the first set of speed data, the second set of speed data, and the second location information of the second object comprises:
    when tracking the first object, calculating the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object;
    determining, according to the distance, the weights respectively corresponding to the first set of speed data and the second set of speed data; and
    based on the determined weights, performing weighting processing on the first set of speed data and the second set of speed data to obtain the motion data with which the tracking device tracks the first object.
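The distance-weighted blend recited in claim 8 can be read as interpolating between an obstacle-free tracking command and an avoidance command, with the avoidance weight growing as the second object gets closer. The sketch below is one illustrative reading, not the patented implementation: the exponential weighting rule, the look-ahead term, and every name and constant are assumptions.

```python
import math

def blend_velocities(v_free, v_avoid, device_speed, obstacle_pos, safe_dist=1.5):
    """Blend the obstacle-free command (first set of speed data) with the
    avoidance command (second set) according to the distance between the
    tracking device and the second object, as in claim 8.

    v_free, v_avoid: (vx, vy, vz) tuples.
    obstacle_pos: (azimuth_rad, elevation_rad, range_m) of the second object.
    """
    az, el, rng = obstacle_pos
    # The claim folds the device's current speed into the distance
    # computation; here that is modeled as a short look-ahead horizon.
    lookahead = 0.5  # seconds, assumed
    dist = max(rng - device_speed * lookahead, 0.0)

    # Avoidance weight saturates to 1 inside the (assumed) safe distance
    # and decays exponentially beyond it.
    w_avoid = min(1.0, math.exp(-(dist - safe_dist))) if dist > 0 else 1.0
    w_free = 1.0 - w_avoid

    return tuple(w_free * f + w_avoid * a for f, a in zip(v_free, v_avoid))
```

With the obstacle 10 m away the avoidance weight is negligible and the free-tracking command passes through almost unchanged; at 0.5 m the weight saturates and the avoidance command dominates.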
  9. A tracking device, the device comprising:
    a first monitoring unit configured to monitor first location information of a first object;
    a second monitoring unit configured to monitor second location information of a second object within a target area;
    a processing unit configured to determine motion data for tracking the first object by combining the first location information and the second location information; and
    a driving unit configured to track the first object according to the motion data.
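Read as software, the four units of claim 9 form a simple sense-plan-act loop: two monitors feed a processor whose motion data drives the chassis. A hypothetical skeleton follows; all method and attribute names are mine, not the patent's.

```python
class TrackingDevice:
    """Claim 9, sketched as a control loop. The four callables stand in
    for the first monitoring unit, second monitoring unit, processing
    unit, and driving unit; their names are assumed."""

    def __init__(self, first_monitor, second_monitor, processor, driver):
        self.first_monitor = first_monitor    # first location info (target)
        self.second_monitor = second_monitor  # second location info (obstacles)
        self.processor = processor            # combines both into motion data
        self.driver = driver                  # actuates the motion data

    def step(self):
        p1 = self.first_monitor()         # monitor the first object
        p2 = self.second_monitor()        # monitor the target area
        motion = self.processor(p1, p2)   # determine motion data
        self.driver(motion)               # track according to motion data
        return motion
```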
  10. The tracking device according to claim 9, wherein the second monitoring unit is further configured to: monitor the target area to obtain a first coordinate parameter set characterizing the position distribution of every object in the target area; acquire pose parameters of the monitoring apparatus and determine, according to the pose parameters, a second coordinate parameter set characterizing the position distribution of a third object in the target area; acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area.
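The set operations in claim 10 amount to filtering a point cloud: start from all detected points, drop those attributable to the third object (e.g. the ground, identified via the sensor pose) and those inside a bounding region around the tracked first object; what remains is attributed to the second object(s). A minimal sketch under assumed data shapes — the tuple representation, spherical boundary, and all names are illustrative, not the patent's:

```python
def isolate_second_object(all_points, third_object_points, target_center, boundary_radius):
    """Claim 10, sketched: remove from the first coordinate set the points
    explained by the third object and the points within a boundary around
    the first object; the remainder characterizes the second object(s).

    all_points, third_object_points: iterables of (x, y, z) tuples.
    target_center: (x, y, z) of the first object (its first location info).
    boundary_radius: scalar bound derived from the boundary information.
    """
    third = set(third_object_points)  # second coordinate parameter set
    cx, cy, cz = target_center
    fourth = []
    for p in all_points:
        if p in third:
            continue  # explained by the third object (e.g. ground plane)
        dx, dy, dz = p[0] - cx, p[1] - cy, p[2] - cz
        if dx * dx + dy * dy + dz * dz <= boundary_radius ** 2:
            continue  # inside the third coordinate set (the tracked target)
        fourth.append(p)  # fourth coordinate parameter set: obstacles
    return fourth
```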
  11. The tracking device according to claim 10, wherein the second monitoring unit is further configured to: project the fourth coordinate parameter set characterizing the distribution of the second object in the target area into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in the coordinate system of the preset dimension, the fifth coordinate parameter set being used to represent the second location information of the second object.
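Claim 11's projection step can be illustrated by flattening the 3-D fourth set onto the ground plane — a natural choice of "preset dimension" for a ground robot. The grid snapping and cell size below are my illustrative assumptions; the claim itself only requires a projection.

```python
def project_to_plane(fourth_set, cell=0.1):
    """Claim 11, sketched: project the 3-D fourth coordinate set onto the
    horizontal plane (a preset 2-D coordinate system), snapping to a grid
    so that duplicate hits collapse. The result plays the role of the
    fifth coordinate parameter set, i.e. the second object's second
    location information. The resolution `cell` is an assumption."""
    fifth = {(round(x / cell) * cell, round(y / cell) * cell)
             for x, y, _z in fourth_set}
    return sorted(fifth)
```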
  12. The tracking device according to claim 9, wherein the second monitoring unit is further configured to: monitor the target area to obtain a first coordinate parameter set characterizing the position distribution of every object in the target area; acquire the first location information and boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object in the target area, the fourth coordinate parameter set being used to represent the second location information of the second object.
  13. The tracking device according to claim 9, wherein the processing unit is further configured to: determine, according to the first location information of the first object, a first set of speed data related to tracking the first object; determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object; and determine the motion data for tracking the first object by combining the first set of speed data, the second set of speed data, and the second location information of the second object.
  14. The tracking device according to claim 13, wherein the device further comprises:
    an abnormality detecting unit configured to detect whether an abnormal event occurs while the first object is tracked according to the motion data;
    wherein the processing unit is further configured to adjust the motion data to be less than or equal to a preset value when an abnormal event occurs.
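Claim 14's abnormal-event handling amounts to capping the motion data at a preset value once an anomaly is detected (e.g. tracking loss or an imminent collision). A sketch under assumed names; the component-wise clamp and the 0.3 m/s limit are my choices, not the patent's:

```python
PRESET_LIMIT = 0.3  # m/s, assumed safety cap


def apply_abnormal_event_policy(motion_cmd, abnormal):
    """Claim 14, sketched: if an abnormal event was detected while
    tracking, clamp every component of the motion data so that its
    magnitude is less than or equal to the preset value; otherwise pass
    the motion data through unchanged."""
    if not abnormal:
        return motion_cmd
    return tuple(max(-PRESET_LIMIT, min(PRESET_LIMIT, v)) for v in motion_cmd)
```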
  15. The tracking device according to claim 13, wherein
    the first location information is represented by a direction angle and a distance, and characterizes the position of the first object; the first set of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed of tracking the first object when the second object is not present in the target area;
    the second location information is represented by a direction angle and a distance, and characterizes the position distribution of the second object within the target area; the second set of speed data is represented by an angular velocity and a linear velocity, and characterizes the speed of tracking the first object when the second object is present in the target area;
    correspondingly, the processing unit is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object; determine, according to the distance, the weights respectively corresponding to the first set of speed data and the second set of speed data; and, based on the determined weights, perform weighting processing on the first set of speed data and the second set of speed data to obtain the motion data with which the tracking device tracks the first object.
  16. The tracking device according to claim 13, wherein
    the first location information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position of the first object; the first set of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed of tracking the first object when the second object is not present in the target area;
    the second location information is represented by a direction angle, an elevation angle, and a distance, and characterizes the position distribution of the second object within the target area; the second set of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and characterizes the speed of tracking the first object when the second object is present in the target area;
    correspondingly, the processing unit is further configured to: when tracking the first object, calculate the distance between the tracking device and the second object according to the current speed of the tracking device and the second location information of the second object; determine, according to the distance, the weights respectively corresponding to the first set of speed data and the second set of speed data; and, based on the determined weights, perform weighting processing on the first set of speed data and the second set of speed data to obtain the motion data with which the tracking device tracks the first object.
  17. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the tracking method according to any one of claims 1 to 8.
PCT/CN2017/072999 2016-10-12 2017-02-06 Tracking method, tracking device, and computer storage medium WO2018068446A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610891570 2016-10-12
CN201610891570.3 2016-10-12

Publications (1)

Publication Number Publication Date
WO2018068446A1 true WO2018068446A1 (en) 2018-04-19

Family

ID=58973757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/072999 WO2018068446A1 (en) 2016-10-12 2017-02-06 Tracking method, tracking device, and computer storage medium

Country Status (2)

Country Link
CN (1) CN106774303B (en)
WO (1) WO2018068446A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161319A (en) * 2019-12-30 2020-05-15 秒针信息技术有限公司 Work supervision method and device and storage medium

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN108255173A (en) * 2017-12-20 2018-07-06 北京理工大学 Robot follows barrier-avoiding method and device
CN110191414A (en) * 2019-05-27 2019-08-30 段德山 Method for tracing and system based on terminal
US11367211B2 (en) * 2019-07-29 2022-06-21 Raytheon Company Inertially-assisted target detection
CN112595338B (en) * 2020-12-24 2023-04-07 中国联合网络通信集团有限公司 Navigation method and navigation system

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101667037A (en) * 2008-09-03 2010-03-10 中国科学院自动化研究所 Feasible channel-based robot target tracking method
CN102411368A (en) * 2011-07-22 2012-04-11 北京大学 Active vision human face tracking method and tracking system of robot
CN103454919A (en) * 2013-08-19 2013-12-18 江苏科技大学 Motion control system and method of mobile robot in intelligent space
CN103473542A (en) * 2013-09-16 2013-12-25 清华大学 Multi-clue fused target tracking method
WO2016026039A1 (en) * 2014-08-18 2016-02-25 Verity Studios Ag Invisible track for an interactive mobile robot system

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US7710322B1 (en) * 2005-05-10 2010-05-04 Multispectral Solutions, Inc. Extensible object location system and method using multiple references
CN105652895A (en) * 2014-11-12 2016-06-08 沈阳新松机器人自动化股份有限公司 Mobile robot human body tracking system and tracking method based on laser sensor
CN105527975A (en) * 2015-12-09 2016-04-27 周润华 Target tracking system based on UAV
CN105955268B (en) * 2016-05-12 2018-10-26 哈尔滨工程大学 A kind of UUV moving-target sliding mode tracking control methods considering Local obstacle avoidance

Also Published As

Publication number Publication date
CN106774303B (en) 2019-04-02
CN106774303A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2018068446A1 (en) Tracking method, tracking device, and computer storage medium
JP7345504B2 (en) Association of LIDAR data and image data
US10591292B2 (en) Method and device for movable object distance detection, and aerial vehicle
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
JP5944781B2 (en) Mobile object recognition system, mobile object recognition program, and mobile object recognition method
CN113168184A (en) Terrain-aware step planning system
KR20150144729A (en) Apparatus for recognizing location mobile robot using key point based on gradient and method thereof
TW201728876A (en) Autonomous visual navigation
CN106569225B (en) Unmanned vehicle real-time obstacle avoidance method based on ranging sensor
KR20150144728A (en) Apparatus for recognizing location mobile robot using search based correlative matching and method thereof
KR20150144730A (en) APPARATUS FOR RECOGNIZING LOCATION MOBILE ROBOT USING KEY POINT BASED ON ADoG AND METHOD THEREOF
JP6140458B2 (en) Autonomous mobile robot
TW201734687A (en) Method and apparatus for controlling aircraft
JP2016009487A (en) Sensor system for determining distance information on the basis of stereoscopic image
JP2013200604A (en) Mobile robot
JP6014484B2 (en) Autonomous mobile robot
JP2020126612A (en) Method and apparatus for providing advanced pedestrian assistance system for protecting pedestrian using smartphone
Liau et al. Non-metric navigation for mobile robot using optical flow
CN113610910B (en) Obstacle avoidance method for mobile robot
CN115494856A (en) Obstacle avoidance method and device, unmanned aerial vehicle and electronic equipment
JP2020064029A (en) Mobile body controller
Dinaux et al. FAITH: Fast iterative half-plane focus of expansion estimation using optic flow
JP7179687B2 (en) Obstacle detector
Marlow et al. Dynamically sized occupancy grids for obstacle avoidance
Bingo et al. Two-dimensional obstacle avoidance behavior based on three-dimensional environment map for a ground vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17859420

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17859420

Country of ref document: EP

Kind code of ref document: A1