WO2018068446A1 - Tracking method and device, and computer storage medium - Google Patents

Tracking method and device, and computer storage medium

Info

Publication number
WO2018068446A1
Authority
WO
WIPO (PCT)
Prior art keywords
tracking
target area
location information
parameter set
speed
Prior art date
Application number
PCT/CN2017/072999
Other languages
English (en)
Chinese (zh)
Inventor
陈子冲
廖方波
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2018068446A1

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/50Systems of measurement based on relative movement of target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S5/06Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Definitions

  • The invention relates to intelligent tracking technology, and in particular to a tracking method, a tracking device, and a computer storage medium.
  • Ultra Wideband (UWB) technology can be used to achieve location tracking. By mounting a UWB anchor node (UWB anchor) on a tracking robot and having the target carry a UWB beacon (UWB tag), the tracking robot can use the UWB anchor to track the target object carrying the UWB tag in real time.
  • During the tracking process, because there is a certain distance between the target object and the tracking robot, obstacles may appear between the two while the target object moves, causing the robot to collide during tracking. This can result in tracking failure and may even damage the tracking robot.
  • an embodiment of the present invention provides a tracking method, a tracking device, and a computer storage medium.
  • Monitoring the second location information of the second object in the target area includes:
  • Monitoring the second location information of the second object in the target area further includes:
  • Monitoring the second location information of the second object in the target area includes:
  • Combining the first location information and the second location information to determine motion data for tracking the first object includes:
  • the method further includes:
  • the motion data is adjusted to be less than or equal to a preset value.
  • The first position information is represented by a direction angle and a distance, and is used to characterize the position of the first object;
  • the first group of speed data is represented by an angular velocity and a linear velocity, and is used to characterize the speed of tracking the first object in a state where the second object is not in the target area;
  • the second position information is represented by a direction angle and a distance, and is used to characterize the position distribution of the second object within the target area;
  • the second group of speed data is represented by an angular velocity and a linear velocity, and is used to characterize the speed of tracking the first object in a state where the second object is in the target area;
  • Determining the motion data of the first object by combining the first group of speed data, the second group of speed data, and the second position information of the second object includes:
  • The first position information is represented by a direction angle, an elevation angle, and a distance, and is used to characterize the position of the first object;
  • the first group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to characterize the speed of tracking the first object in a state where the second object is not in the target region;
  • the second position information is represented by a direction angle, an elevation angle, and a distance, and is used to characterize the position distribution of the second object in the target area;
  • the second group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to characterize the speed of tracking the first object in a state where the second object is in the target region;
  • Determining the motion data of the first object by combining the first group of speed data, the second group of speed data, and the second position information of the second object includes:
  • a first monitoring unit configured to monitor first location information of the first object
  • a second monitoring unit configured to monitor second location information of the second object in the target area
  • a processing unit configured to combine the first location information and the second location information to determine motion data for tracking the first object;
  • a driving unit configured to track the first object according to the motion data.
  • The second monitoring unit is further configured to: monitor the target area and obtain a first coordinate parameter set that represents the position distribution of each object in the target area; acquire a pose parameter of the monitoring device and determine, according to the pose parameter, a second coordinate parameter set that represents the position distribution of a third object in the target area; acquire the first position information and the boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set characterizing the distribution of the second object within the target region.
  • The second monitoring unit is further configured to: project the fourth coordinate parameter set, which represents the distribution of the second object in the target area, into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in that coordinate system, the fifth coordinate parameter set being used to represent the second position information of the second object.
  • The second monitoring unit is further configured to: monitor the target area and obtain a first coordinate parameter set that represents the position distribution of each object in the target area; acquire the first position information and the boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, the fourth coordinate parameter set being used to represent the second location information of the second object.
  • The processing unit is further configured to: determine, according to the first location information of the first object, a first group of speed data related to tracking the first object; determine, according to the first location information of the first object and the second location information of the second object, a second group of speed data related to tracking the first object; and determine the motion data for tracking the first object by combining the first group of speed data, the second group of speed data, and the second location information of the second object.
  • the device further includes:
  • an abnormality detecting unit configured to detect, when the first object is tracked according to the motion data, whether an abnormal event has occurred;
  • the processing unit is further configured to adjust the motion data to be less than or equal to a preset value when an abnormal event occurs.
  • The first position information is represented by a direction angle and a distance, and is used to characterize the position of the first object;
  • the first group of speed data is represented by an angular velocity and a linear velocity, and is used to characterize the speed of tracking the first object in a state where the second object is not in the target area;
  • the second position information is represented by a direction angle and a distance, and is used to characterize the position distribution of the second object within the target area;
  • the second group of speed data is represented by an angular velocity and a linear velocity, and is used to characterize the speed of tracking the first object in a state where the second object is in the target area;
  • the processing unit is further configured to: when tracking the first object, calculate a distance between the tracking device and the second object according to a current speed of the tracking device and the second position information of the second object; determine, based on the distance, weights respectively corresponding to the first group of speed data and the second group of speed data; and perform weighting processing on the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
  • The first position information is represented by a direction angle, an elevation angle, and a distance, and is used to characterize the position of the first object;
  • the first group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to characterize the speed of tracking the first object in a state where the second object is not in the target region;
  • the second position information is represented by a direction angle, an elevation angle, and a distance, and is used to characterize the position distribution of the second object in the target area;
  • the second group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to characterize the speed of tracking the first object in a state where the second object is in the target region;
  • the processing unit is further configured to: when tracking the first object, calculate a distance between the tracking device and the second object according to a current speed of the tracking device and the second position information of the second object; determine, based on the distance, weights respectively corresponding to the first group of speed data and the second group of speed data; and perform weighting processing on the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
  • the computer storage medium provided by the embodiment of the present invention stores a computer program configured to execute the above tracking method.
  • The first location information of the first object is monitored; the second location information of the second object in the target area is monitored; the first location information and the second location information are combined to determine the motion data for tracking the first object; and the first object is tracked based on the motion data.
  • In this way, the tracking device detects the second object (also referred to as an obstacle) in the target area while tracking the first object, realizing target tracking and obstacle avoidance at the same time, which greatly reduces the possibility of colliding with obstacles during the tracking process and protects the tracking device.
  • FIG. 1 is a schematic flowchart 1 of a tracking method according to an embodiment of the present invention.
  • FIG. 2 is a second schematic flowchart of a tracking method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram 1 of a scenario according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram 1 of information fusion according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart 3 of a tracking method according to an embodiment of the present invention.
  • FIG. 6 is a second schematic diagram of a scenario according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram 2 of information fusion according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a tracking device according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart 1 of a tracking method according to an embodiment of the present invention.
  • the tracking method in this example is applied to a tracking device. As shown in FIG. 1 , the tracking method includes the following steps:
  • Step 101 Monitor first location information of the first object.
  • The tracking device includes two types of sensors, wherein the first type of sensor is used to monitor the first position information of the first object, and the second type of sensor is used to monitor the second position information of the second object in the target area.
  • the first type of sensor may be a UWB anchor.
  • The first object needs to carry a UWB tag, and the tracking device locates the UWB tag carried by the first object by using the UWB anchor to obtain the first position information of the first object.
  • The UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is composed of another UWB communication node. Using the time-of-flight (TOF) principle and triangulation, the position information of the UWB tag relative to the UWB anchor, that is, the first position information of the first object, is determined.
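  • The following is a minimal, illustrative sketch of how such anchor-based positioning can be computed. It assumes exactly two anchor nodes mounted symmetrically on the tracking device at (0, +a) and (0, -a) in the robot frame, with TOF-derived ranges r1 and r2 to the tag; the function name, frame convention, and geometry are our own assumptions, not details given in this application.

```python
import math

def tag_polar_position(r1: float, r2: float, half_baseline: float):
    """Locate the UWB tag relative to the robot from two anchor ranges.

    Anchors are assumed at (0, +a) and (0, -a) in the robot frame, with the
    robot's forward direction along +x. Returns (d, theta): the distance and
    direction angle of the tag (the first position information).
    """
    a = half_baseline
    # Subtracting the two range equations eliminates x and yields the lateral offset.
    y = (r2 ** 2 - r1 ** 2) / (4.0 * a)
    # Remaining squared distance along the forward axis; clamp against measurement noise.
    x = math.sqrt(max(r1 ** 2 - (y - a) ** 2, 0.0))  # assume the tag is in front of the robot
    return math.hypot(x, y), math.atan2(y, x)
```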
  • the first object refers to an object to be tracked.
  • Step 102 Monitor second location information of the second object in the target area.
  • the second location information of the second object in the target area is monitored by the second type of sensor.
  • the second type of sensor may be a 3D camera
  • The position information of the objects in the target area may be obtained by performing three-dimensional image acquisition on the target area with the 3D camera.
  • the 3D camera obtains positional information of each object in the camera field of view (corresponding to the target area) with respect to the 3D camera by means of structured light technology, or TOF technology, or binocular vision.
  • TOF technology is a two-way ranging technique that measures the distance between nodes from the signal flight time between two asynchronous transceivers.
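  • As a point of reference, the usual two-way-ranging relation can be written as below; the symbols are ours and only illustrate the principle, they are not notation from this application.

```latex
d = \frac{c\,\bigl(t_{\mathrm{round}} - t_{\mathrm{reply}}\bigr)}{2}
```

Here t_round is the time measured by the initiating node between sending its request and receiving the response, t_reply is the responder's known turnaround delay, and c is the propagation speed of the signal.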
  • the second type of sensor may be a Lidar (LiDAR) sensor, and the distance information of the surrounding object relative to the sensor is obtained by laser scanning.
  • the second object refers to an obstacle with respect to the first object.
  • The tracking device may be a ground robot. Since the ground robot can only move on the two-dimensional ground, the first position information of the first object and the second position information of the second object are represented in two-dimensional space.
  • The first position information of the first object is represented by the direction angle θ and the distance d, and the position of the first object in the two-dimensional space is represented by (d, θ).
  • The second position information of the second object is represented by a direction angle θ' and a distance d', and the position of the second object in the two-dimensional space is represented by (d', θ'); the second position information of all the second objects in the target area is gathered together to form a two-dimensional obstacle avoidance map M.
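  • Purely as an illustration of this representation, the map M can be thought of as the set of all (d', θ') readings; the sketch below stores them as Cartesian points in the robot frame so that distances to obstacles can be queried later. The data layout and helper names are our own assumptions.

```python
import math
from typing import List, Tuple

Polar = Tuple[float, float]   # (d', theta'): second position information of one second object
Point = Tuple[float, float]   # (x, y) in the robot frame

def build_obstacle_map(detections: List[Polar]) -> List[Point]:
    """Gather the second position information of every second object into a
    single two-dimensional obstacle avoidance map M (here: Cartesian points)."""
    return [(d * math.cos(th), d * math.sin(th)) for d, th in detections]

def nearest_obstacle_distance(xy: Point, obstacle_map: List[Point]) -> float:
    """Distance from a query point (e.g. the robot's predicted position) to the
    closest obstacle point in M; useful when fusing the two sets of speed data."""
    return min(math.hypot(x - xy[0], y - xy[1]) for x, y in obstacle_map)
```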
  • the tracking device may be a drone, and the first position information of the first object and the second position information of the second object are represented in the three-dimensional space because the drone can move in the three-dimensional space.
  • The first position information of the first object is represented by the direction angle θ, the elevation angle φ, and the distance d, i.e., by (d, θ, φ), characterizing the position of the first object in three-dimensional space.
  • The second position information of the second object is represented by the direction angle θ', the elevation angle φ', and the distance d', i.e., by (d', θ', φ'), characterizing the position of the second object in three-dimensional space; the second position information of all the second objects in the target area is combined to form a three-dimensional obstacle avoidance map M.
  • Step 103: Combine the first location information with the second location information to determine the motion data for tracking the first object, and track the first object according to the motion data.
  • The tracking device has a proportional-integral-derivative (PID) module; the input of the PID module is the first position information of the first object, and the output is the first group of speed data with which the tracking device tracks the first object when there is no obstacle.
  • The tracking device further has an obstacle avoidance module; the input of the obstacle avoidance module is the obstacle avoidance map M formed based on the second position information of the second object, together with the first position information of the first object, and the output is the second group of speed data. The second group of speed data is selected, according to the motion model of the tracking device, from all feasible motion trajectories as the velocity that avoids the second object while approaching the first object as closely as possible.
  • The tracking device further has an information fusion module; the input of the information fusion module is the first group of speed data, the second group of speed data, and the obstacle avoidance map M formed based on the second position information of the second object, and the output of the information fusion module is the final motion data of the tracking device.
  • The first group of speed data and the second group of speed data are fused based on the obstacle avoidance map M. The fusion works as follows: the distance between the tracking device and the second object is predicted in the obstacle avoidance map M according to the current motion data of the tracking device; the greater the distance between the tracking device and the second object, the greater the weight of the first group of speed data; conversely, the smaller the distance between the tracking device and the second object, the greater the weight of the second group of speed data.
  • The first group of speed data and the second group of speed data are then weighted based on their respective weights to obtain the motion data for tracking the first object.
  • When the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In an embodiment, the preset value is zero: if the tracking device is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the tracking device.
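  • One possible realization of this safety behavior is sketched below; the function name and the magnitude-clamping interpretation of "less than or equal to a preset value" are our own assumptions.

```python
def apply_safety_limit(v: float, w: float, abnormal: bool, preset: float = 0.0):
    """When an abnormal event (e.g. a fall or collision risk) is detected, clamp
    the magnitude of the motion data to the preset value; with preset = 0 this
    amounts to a full brake."""
    if not abnormal:
        return v, w

    def clamp(x: float) -> float:
        return max(-preset, min(preset, x))

    return clamp(v), clamp(w)
```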
  • FIG. 2 is a schematic flowchart 2 of a tracking method according to an embodiment of the present invention.
  • the tracking method in this example is applied to a ground robot. As shown in FIG. 2, the tracking method includes the following steps:
  • Step 201 Monitor first location information of the first object.
  • The ground robot includes two types of sensors, wherein the first type of sensor is used to monitor the first position information of the first object, and the second type of sensor is used to monitor the second position information of the second object in the target area.
  • the first type of sensor may be a UWB anchor.
  • The first object needs to carry the UWB tag, and the ground robot locates the UWB tag carried by the first object by using the UWB anchor to obtain the first position information of the first object.
  • The UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is composed of another UWB communication node. Using the time-of-flight (TOF) principle and triangulation, the position information of the UWB tag relative to the UWB anchor, that is, the first position information of the first object, is determined.
  • the first object refers to an object to be tracked.
  • the first position information is represented by a direction angle ⁇ and a distance d
  • the position of the first object is represented by (d, ⁇ ).
  • Step 202 Monitor the target area to obtain a first coordinate parameter set in the target area that represents the location distribution of each object.
  • the second location information of the second object in the target area is monitored by the second type of sensor.
  • The second type of sensor is a 3D camera, and the position information of the objects in the target area is obtained by performing three-dimensional image acquisition on the target area with the 3D camera.
  • the second type of sensor is a LiDAR sensor, and the distance information of the surrounding object relative to the sensor is obtained by laser scanning.
  • the second object refers to an obstacle with respect to the first object.
  • the target area needs to be monitored first, and a first coordinate parameter set representing the position distribution of each object in the target area is obtained.
  • Step 203 Acquire a pose parameter of the monitoring device, and determine, according to the pose parameter, a second coordinate parameter set that represents a third object position distribution in the target area.
  • Since the ground robot moves on the ground, it is necessary to calculate the three-dimensional position of the ground (that is, the second coordinate parameter set representing the position of the third object) according to the pose of the ground robot and the installation height of the second type of sensor, and to remove the ground positions from the obstacle distribution O_A to obtain an obstacle distribution O_B without the ground.
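  • A minimal sketch of this ground-removal step is given below. It assumes the sensor returns an (N, 3) point cloud in its own frame, that the robot's pose parameters provide a sensor-to-world rotation, and that the sensor sits at a known mounting height above the ground; the matrix names, tolerance, and frame conventions are illustrative assumptions rather than details from this application.

```python
import numpy as np

def remove_ground(points_sensor: np.ndarray,
                  R_world_from_sensor: np.ndarray,
                  sensor_height: float,
                  tol: float = 0.05) -> np.ndarray:
    """Turn the obstacle distribution O_A into O_B by dropping ground points.

    points_sensor: (N, 3) points from the 3D camera / LiDAR in the sensor frame.
    R_world_from_sensor: rotation from the sensor frame to a gravity-aligned
    frame, derived from the robot's pose parameters.
    Points whose height above the ground is within `tol` are treated as the
    third object (the ground) and removed.
    """
    pts = points_sensor @ R_world_from_sensor.T   # rotate into the gravity-aligned frame
    heights = pts[:, 2] + sensor_height           # sensor origin sits sensor_height above the ground
    return pts[heights > tol]
```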
  • Step 204: Acquire the first location information and the boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information.
  • Based on the first position information of the first object relative to the robot and the boundary information of the first object known in advance (that is, the size of its 3D bounding box), the third coordinate parameter set characterizing the spatial distribution of the first object can be determined; all obstacles inside the 3D bounding box centered on the first position are removed from the obstacle distribution O_B to obtain the final obstacle distribution O_C.
  • Step 205: Remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area.
  • That is, the ground positions are first removed from the obstacle distribution O_A to obtain an obstacle distribution O_B without the ground; then, all obstacles inside the 3D bounding box centered on the first object are removed from the obstacle distribution O_B to obtain the final obstacle distribution O_C.
  • Step 206: Project the fourth coordinate parameter set, which represents the distribution of the second object in the target area, into a coordinate system of a preset dimension, and obtain a fifth coordinate parameter set in the coordinate system of the preset dimension.
  • Here, the fifth coordinate parameter set is used to represent the second location information of the second object.
  • Since the ground robot moves in a two-dimensional space, it is necessary to project the fourth coordinate parameter set into the two-dimensional coordinate system, so that the obtained second position information can be represented by a direction angle and a distance in two-dimensional polar coordinates, characterizing the position distribution of the second object in the target area.
  • In other words, the obstacle distribution O_C is projected onto the horizontal plane (i.e., the ground) to obtain a two-dimensional local obstacle avoidance map M, and the obstacle avoidance map M includes the second position information of each second object.
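  • The two filtering-and-projection steps above can be sketched as follows; the point-cloud layout, bounding-box convention, and function names are our own illustrative assumptions.

```python
import numpy as np

def remove_target_box(points: np.ndarray,
                      target_xyz: np.ndarray,
                      box_half_extents: np.ndarray) -> np.ndarray:
    """Remove every point inside the 3D bounding box centered on the first
    object (O_B -> O_C), so the tracked target itself is not treated as an
    obstacle."""
    inside = np.all(np.abs(points - target_xyz) <= box_half_extents, axis=1)
    return points[~inside]

def project_to_polar_map(points: np.ndarray) -> np.ndarray:
    """Project O_C onto the horizontal plane and express every remaining point
    as (d', theta'), giving the two-dimensional obstacle avoidance map M."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([np.hypot(x, y), np.arctan2(y, x)], axis=1)
```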
  • Step 207 Determine, according to the first location information of the first object, a first set of speed data related to tracking the first object.
  • the first set of speed data is represented by an angular velocity and a linear velocity for characterizing the speed of tracking the first object in a state where the second object is not in the target region.
  • the ground robot has a local motion controller, and the local motion controller includes: a PID module, an obstacle avoidance module, and an information fusion module.
  • The input of the PID module is the first position information (d, θ) of the first object, and the output is the first set of velocity data (v1, ω1) with which the ground robot tracks the first object when there is no obstacle.
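  • The description only states that a PID module maps (d, θ) to (v1, ω1); the sketch below is one minimal PD-style realization of that idea, in which the gains, the desired following distance, and the class name are purely illustrative assumptions.

```python
class TrackingPID:
    """Drive the measured distance d toward a desired following distance and the
    bearing theta toward zero, producing the first set of velocity data (v1, w1)."""

    def __init__(self, follow_distance=1.5, kp_v=0.8, kd_v=0.1, kp_w=1.5, kd_w=0.2):
        self.follow_distance = follow_distance
        self.kp_v, self.kd_v = kp_v, kd_v
        self.kp_w, self.kd_w = kp_w, kd_w
        self._prev_e_d = 0.0
        self._prev_e_th = 0.0

    def step(self, d: float, theta: float, dt: float):
        e_d, e_th = d - self.follow_distance, theta
        v1 = self.kp_v * e_d + self.kd_v * (e_d - self._prev_e_d) / dt      # linear velocity
        w1 = self.kp_w * e_th + self.kd_w * (e_th - self._prev_e_th) / dt   # angular velocity
        self._prev_e_d, self._prev_e_th = e_d, e_th
        return v1, w1
```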
  • Step 208 Determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object.
  • the second set of speed data is represented by an angular velocity and a linear velocity for characterizing the speed of tracking the first object in a state where the second object is in the target region.
  • The input of the obstacle avoidance module is the obstacle avoidance map M formed based on the second position information of the second object, together with the first position information (d, θ) of the first object, and the output is the second set of velocity data (v2, ω2).
  • The second set of velocity data is selected, according to the motion model of the ground robot, from all feasible motion trajectories as the velocity that avoids the second object while approaching the first object as closely as possible.
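  • Selecting, under the robot's motion model, a velocity whose trajectory avoids the second object while approaching the first object resembles a dynamic-window / trajectory-sampling scheme; the sketch below is written under that assumption, with the unicycle model, sampling grids, horizon, and clearance all being illustrative choices rather than details from this application.

```python
import math
from typing import Iterable, List, Tuple

Point = Tuple[float, float]

def rollout(v: float, w: float, horizon: float = 1.0, dt: float = 0.1) -> List[Point]:
    """Forward-simulate a unicycle motion model from the robot origin for one candidate (v, w)."""
    x = y = th = 0.0
    traj = []
    for _ in range(int(horizon / dt)):
        th += w * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        traj.append((x, y))
    return traj

def pick_avoidance_velocity(target_xy: Point,
                            obstacle_map: List[Point],
                            v_candidates: Iterable[float],
                            w_candidates: Iterable[float],
                            clearance: float = 0.3) -> Tuple[float, float]:
    """Among collision-free candidate velocities, return the (v2, w2) whose
    predicted end point is closest to the first object."""
    best, best_cost = (0.0, 0.0), float("inf")
    for v in v_candidates:
        for w in w_candidates:
            traj = rollout(v, w)
            if any(math.hypot(px - ox, py - oy) < clearance
                   for px, py in traj for ox, oy in obstacle_map):
                continue  # this trajectory would pass too close to a second object
            end_x, end_y = traj[-1]
            cost = math.hypot(end_x - target_xy[0], end_y - target_xy[1])
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```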
  • Step 209: Determine, according to the first group of speed data, the second group of speed data, and the second position information of the second object, the motion data for tracking the first object, and track the first object according to the motion data.
  • Specifically, the distance between the ground robot and the second object is calculated according to the current speed of the ground robot and the second position information of the second object; weights respectively corresponding to the first group of speed data and the second group of speed data are determined based on the distance; and weighting processing is performed on the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the ground robot tracks the first object.
  • The input of the information fusion module is the first set of velocity data (v1, ω1), the second set of velocity data (v2, ω2), and the obstacle avoidance map M formed based on the second location information of the second object; the output of the information fusion module is the final motion data (v3, ω3) of the ground robot.
  • The first set of velocity data and the second set of velocity data are fused based on the obstacle avoidance map M. The fusion works as follows: the distance d_c between the ground robot and the second object is predicted in the obstacle avoidance map M according to the current motion data (v0, ω0) of the ground robot; the greater the distance d_c between the ground robot and the second object, the greater the weight of the first set of velocity data (v1, ω1); conversely, the smaller the distance d_c between the ground robot and the second object, the greater the weight of the second set of velocity data (v2, ω2).
  • The first set of velocity data (v1, ω1) and the second set of velocity data (v2, ω2) are weighted based on their respective weights to obtain the motion data for tracking the first object.
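  • A minimal sketch of this weighting is shown below. The description only fixes the monotonic relationship between the predicted distance d_c and the two weights; the linear ramp between the two distance thresholds, the threshold values, and the function name are our own assumptions.

```python
def fuse_speed_data(v1, w1, v2, w2, d_c, d_safe=0.5, d_free=2.0):
    """Blend the PID output (v1, w1) and the obstacle-avoidance output (v2, w2)
    into the final motion data (v3, w3), based on the distance d_c to the
    nearest second object predicted in the obstacle avoidance map M."""
    # Weight of the tracking command grows with clearance; clamp to [0, 1].
    alpha = min(max((d_c - d_safe) / (d_free - d_safe), 0.0), 1.0)
    v3 = alpha * v1 + (1.0 - alpha) * v2
    w3 = alpha * w1 + (1.0 - alpha) * w2
    return v3, w3
```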
  • When the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In an embodiment, the preset value is zero: once the ground robot is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the ground robot.
  • FIG. 5 is a schematic flowchart of a tracking method according to an embodiment of the present invention.
  • the tracking method in this example is applied to a drone. As shown in FIG. 5, the tracking method includes the following steps:
  • Step 501 Monitor first location information of the first object.
  • The drone includes two types of sensors, wherein the first type of sensor is used to monitor the first location information of the first object, and the second type of sensor is used to monitor the second location information of the second object in the target area.
  • the first type of sensor is a UWB anchor.
  • The first object needs to carry a UWB tag, and the drone locates the UWB tag carried by the first object by using the UWB anchor to obtain the first position information of the first object.
  • The UWB anchor is usually composed of two or more UWB communication nodes, and the UWB tag is composed of another UWB communication node. Using the time-of-flight (TOF) principle and triangulation, the position information of the UWB tag relative to the UWB anchor, that is, the first position information of the first object, is determined.
  • the first object refers to an object to be tracked.
  • The first position information is represented by the direction angle θ, the elevation angle φ, and the distance d, i.e., by (d, θ, φ), characterizing the location of the first object.
  • Step 502 Monitor the target area to obtain a first coordinate parameter set in the target area that represents the location distribution of each object.
  • the second location information of the second object in the target area is monitored by the second type of sensor.
  • The second type of sensor is a 3D camera, and the position information of the objects in the target area is obtained by performing three-dimensional image acquisition on the target area with the 3D camera.
  • the second type of sensor is a LiDAR sensor, and the distance information of the surrounding object relative to the sensor is obtained by laser scanning.
  • the second object refers to an obstacle with respect to the first object.
  • the target area needs to be monitored first, and a first coordinate parameter set representing the position distribution of each object in the target area is obtained.
  • Step 503: Acquire the first location information and the boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information.
  • Based on the first position information of the first object relative to the robot and the boundary information of the first object known in advance (that is, the size of its 3D bounding box), the third coordinate parameter set representing the spatial distribution of the first object can be determined; all obstacles inside the 3D bounding box centered on the first position are removed from the obstacle distribution O_A, resulting in the final obstacle distribution O_B.
  • Step 504: Remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, where the fourth coordinate parameter set is used to represent the second location information of the second object.
  • That is, all obstacles inside the 3D bounding box centered on the first object are removed from the obstacle distribution O_A to obtain the final obstacle distribution O_B; O_B is the three-dimensional obstacle avoidance map, and the obstacle avoidance map O_B includes the second location information of each second object.
  • Step 505 Determine, according to the first location information of the first object, a first set of speed data related to tracking the first object.
  • The first set of velocity data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to represent the speed of tracking the first object in a state where the second object is not in the target region.
  • the drone has a local motion controller, and the local motion controller includes: a PID module, an obstacle avoidance module, and an information fusion module.
  • the input of the PID module is the first location information of the first object.
  • The output is the first set of velocity data (ωx1, ωy1, ωz1) with which the drone tracks the first object when there is no obstacle.
  • The velocity data is velocity data in three-dimensional space: the first-dimension velocity component is the velocity component of the drone rotating around the x-axis (i.e., the roll axis), the second-dimension velocity component is the velocity component of the drone rotating around the y-axis (i.e., the pitch axis), and the third-dimension velocity component is the velocity component of the drone rotating around the z-axis (i.e., the yaw axis).
  • Step 506 Determine, according to the first location information of the first object and the second location information of the second object, a second set of speed data related to tracking the first object.
  • The second set of velocity data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to represent the speed of tracking the first object in a state where the second object is in the target region.
  • The input of the obstacle avoidance module is the obstacle avoidance map O_B formed based on the second position information of the second object, together with the first position information of the first object, and the output is the second set of velocity data (ωx2, ωy2, ωz2), where the second set of velocity data is selected, according to the motion model of the drone, from all feasible motion trajectories as the velocity that avoids the second object while approaching the first object as closely as possible.
  • Step 507: Combine the first group of speed data, the second group of speed data, and the second position information of the second object to determine the motion data for tracking the first object, and track the first object according to the motion data.
  • Specifically, the distance between the drone and the second object is calculated according to the current speed of the drone and the second position information of the second object; weights respectively corresponding to the first group of speed data and the second group of speed data are determined based on the distance; and weighting processing is performed on the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the drone tracks the first object.
  • The input of the information fusion module is the first set of velocity data (ωx1, ωy1, ωz1), the second set of velocity data (ωx2, ωy2, ωz2), and the obstacle avoidance map O_B formed based on the second location information of the second object; the output of the information fusion module is the final motion data (ωx3, ωy3, ωz3) of the drone.
  • The first set of velocity data and the second set of velocity data are fused based on the obstacle avoidance map O_B. The fusion works as follows: the distance d_c between the drone and the second object is predicted in the obstacle avoidance map O_B according to the current motion data (ωx0, ωy0, ωz0) of the drone; the greater the distance d_c between the drone and the second object, the greater the weight of the first set of velocity data (ωx1, ωy1, ωz1); conversely, the smaller the distance d_c between the drone and the second object, the greater the weight of the second set of velocity data (ωx2, ωy2, ωz2).
  • When the first object is tracked according to the motion data, it is detected whether an abnormal event occurs; when an abnormal event occurs, the motion data is adjusted to be less than or equal to a preset value. In an embodiment, the preset value is zero: once the drone is at risk of falling or colliding, the brake logic is forcibly activated to ensure the safety of the drone.
  • FIG. 8 is a schematic structural diagram of a tracking device according to an embodiment of the present invention. As shown in FIG. 8, the tracking device includes:
  • the first monitoring unit 81 is configured to monitor first location information of the first object
  • a second monitoring unit 82 configured to monitor second location information of the second object in the target area
  • the processing unit 83 is configured to combine the first location information and the second location information to determine motion data for tracking the first object;
  • the driving unit 84 is configured to track the first object according to the motion data.
  • The second monitoring unit 82 is further configured to: monitor the target area and obtain a first coordinate parameter set that represents the position distribution of each object in the target area; acquire a pose parameter of the monitoring device and determine, according to the pose parameter, a second coordinate parameter set that represents the position distribution of a third object in the target area; acquire the first position information and the boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the second coordinate parameter set and the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area.
  • The second monitoring unit 82 is further configured to: project the fourth coordinate parameter set, which represents the distribution of the second object in the target area, into a coordinate system of a preset dimension to obtain a fifth coordinate parameter set in that coordinate system, the fifth coordinate parameter set being used to represent the second location information of the second object.
  • The second monitoring unit 82 is further configured to: monitor the target area and obtain a first coordinate parameter set that represents the position distribution of each object in the target area; acquire the first position information and the boundary information of the first object, and determine a third coordinate parameter set centered on the first location information and bounded by the boundary information; and remove the third coordinate parameter set from the first coordinate parameter set to obtain a fourth coordinate parameter set representing the distribution of the second object in the target area, the fourth coordinate parameter set being used to represent the second location information of the second object.
  • The processing unit 83 is further configured to: determine, according to the first location information of the first object, a first group of speed data related to tracking the first object; determine, according to the first location information of the first object and the second location information of the second object, a second group of speed data associated with tracking the first object; and determine the motion data for tracking the first object by combining the first group of speed data, the second group of speed data, and the second location information of the second object.
  • the device further includes:
  • the abnormality detecting unit 85 is configured to detect whether an abnormal event occurs when the first object is tracked according to the motion data
  • the processing unit 83 is further configured to adjust the motion data to be less than or equal to a preset value when an abnormal event occurs.
  • The first position information is represented by a direction angle and a distance, and is used to characterize the position of the first object;
  • the first group of speed data is represented by an angular velocity and a linear velocity, and is used to characterize the speed of tracking the first object in a state where the second object is not in the target area;
  • the second position information is represented by a direction angle and a distance, and is used to characterize the position distribution of the second object within the target area;
  • the second group of speed data is represented by an angular velocity and a linear velocity, and is used to characterize the speed of tracking the first object in a state where the second object is in the target area;
  • the processing unit 83 is further configured to: when tracking the first object, calculate a distance between the tracking device and the second object according to a current speed of the tracking device and the second position information of the second object; determine, based on the distance, weights respectively corresponding to the first group of speed data and the second group of speed data; and perform weighting processing on the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
  • The first position information is represented by a direction angle, an elevation angle, and a distance, and is used to characterize the position of the first object;
  • the first group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to characterize the speed of tracking the first object in a state where the second object is not in the target region;
  • the second position information is represented by a direction angle, an elevation angle, and a distance, and is used to characterize the position distribution of the second object in the target area;
  • the second group of speed data is represented by a first-dimension velocity component, a second-dimension velocity component, and a third-dimension velocity component, and is used to characterize the speed of tracking the first object in a state where the second object is in the target region;
  • the processing unit 83 is further configured to: when tracking the first object, calculate a distance between the tracking device and the second object according to a current speed of the tracking device and the second position information of the second object; determine, based on the distance, weights respectively corresponding to the first group of speed data and the second group of speed data; and perform weighting processing on the first group of speed data and the second group of speed data based on the determined weights to obtain the motion data with which the tracking device tracks the first object.
  • The first monitoring unit can be implemented by a UWB anchor; a UWB anchor usually consists of two or more UWB communication nodes.
  • the second monitoring unit can be implemented by a 3D camera.
  • the drive unit can be realized by a motor.
  • the anomaly detection unit can be implemented by a pose sensor.
  • the processing unit can be implemented by a processor.
  • The functionality of the embodiments of the invention can also be stored in a computer readable storage medium if it is implemented in the form of a software function module and sold or used as a standalone product. Based on this understanding, the technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read only memory (ROM), a magnetic disk, or an optical disk.
  • embodiments of the invention are not limited to any specific combination of hardware and software.
  • an embodiment of the present invention further provides a computer storage medium, wherein a computer program is stored, and the computer program is used to execute the tracking method of the embodiment of the present invention.
  • The technical solution of the embodiments of the present invention monitors the first location information of the first object, monitors the second location information of the second object in the target area, combines the first location information and the second location information to determine the motion data for tracking the first object, and tracks the first object based on the motion data.
  • In this way, the tracking device detects the second object (also called an obstacle) in the target area while tracking the first object, realizing target tracking and obstacle avoidance at the same time, which greatly reduces the possibility of colliding with obstacles during the tracking process and protects the tracking device.

Abstract

The present invention relates to a tracking method and device, and to a computer storage medium, the method comprising: monitoring first location information of a first object; monitoring second location information of a second object in a target area; combining the first and second location information to determine motion data for tracking the first object; and tracking the first object on the basis of the motion data.
PCT/CN2017/072999 2016-10-12 2017-02-06 Tracking method and device, and computer storage medium WO2018068446A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610891570 2016-10-12
CN201610891570.3 2016-10-12

Publications (1)

Publication Number Publication Date
WO2018068446A1 true WO2018068446A1 (fr) 2018-04-19

Family

ID=58973757

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/072999 WO2018068446A1 (fr) 2016-10-12 2017-02-06 Tracking method and device, and computer storage medium

Country Status (2)

Country Link
CN (1) CN106774303B (fr)
WO (1) WO2018068446A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161319A (zh) * 2019-12-30 2020-05-15 秒针信息技术有限公司 工作监督方法和装置、存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255173A (zh) * 2017-12-20 2018-07-06 北京理工大学 机器人跟随避障方法及装置
CN110191414A (zh) * 2019-05-27 2019-08-30 段德山 基于终端的追踪方法及系统
US11367211B2 (en) * 2019-07-29 2022-06-21 Raytheon Company Inertially-assisted target detection
CN112595338B (zh) * 2020-12-24 2023-04-07 中国联合网络通信集团有限公司 导航方法以及导航系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667037A (zh) * 2008-09-03 2010-03-10 中国科学院自动化研究所 一种基于可行通道的机器人目标追踪方法
CN102411368A (zh) * 2011-07-22 2012-04-11 北京大学 机器人的主动视觉人脸跟踪方法和跟踪系统
CN103454919A (zh) * 2013-08-19 2013-12-18 江苏科技大学 智能空间中移动机器人的运动控制系统及方法
CN103473542A (zh) * 2013-09-16 2013-12-25 清华大学 多线索融合的目标跟踪方法
WO2016026039A1 (fr) * 2014-08-18 2016-02-25 Verity Studios Ag Piste invisible pour un système de robot mobile interactif

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7710322B1 (en) * 2005-05-10 2010-05-04 Multispectral Solutions, Inc. Extensible object location system and method using multiple references
CN105652895A (zh) * 2014-11-12 2016-06-08 沈阳新松机器人自动化股份有限公司 基于激光传感器的移动机器人人体跟踪系统及跟踪方法
CN105527975A (zh) * 2015-12-09 2016-04-27 周润华 一种基于无人机的目标跟踪系统
CN105955268B (zh) * 2016-05-12 2018-10-26 哈尔滨工程大学 一种考虑局部避碰的uuv动目标滑模跟踪控制方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667037A (zh) * 2008-09-03 2010-03-10 中国科学院自动化研究所 一种基于可行通道的机器人目标追踪方法
CN102411368A (zh) * 2011-07-22 2012-04-11 北京大学 机器人的主动视觉人脸跟踪方法和跟踪系统
CN103454919A (zh) * 2013-08-19 2013-12-18 江苏科技大学 智能空间中移动机器人的运动控制系统及方法
CN103473542A (zh) * 2013-09-16 2013-12-25 清华大学 多线索融合的目标跟踪方法
WO2016026039A1 (fr) * 2014-08-18 2016-02-25 Verity Studios Ag Piste invisible pour un système de robot mobile interactif

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161319A (zh) * 2019-12-30 2020-05-15 秒针信息技术有限公司 工作监督方法和装置、存储介质

Also Published As

Publication number Publication date
CN106774303A (zh) 2017-05-31
CN106774303B (zh) 2019-04-02

Similar Documents

Publication Publication Date Title
WO2018068446A1 (fr) Tracking method and device, and computer storage medium
JP7345504B2 (ja) Lidarデータと画像データの関連付け
US10591292B2 (en) Method and device for movable object distance detection, and aerial vehicle
KR101725060B1 (ko) 그래디언트 기반 특징점을 이용한 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
KR101776622B1 (ko) 다이렉트 트래킹을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
Larson et al. Lidar based off-road negative obstacle detection and analysis
JP5944781B2 (ja) 移動体認識システム、移動体認識プログラム、及び移動体認識方法
CN113168184A (zh) 地形感知步伐计划系统
TW201728876A (zh) 自主視覺導航
CN106569225B (zh) 一种基于测距传感器的无人车实时避障方法
KR20150144728A (ko) 이동 로봇의 맵을 업데이트하기 위한 장치 및 그 방법
JP6140458B2 (ja) 自律移動ロボット
TW201734687A (zh) 飛行器的控制方法和裝置
JP2016009487A (ja) 立体画像に基づいて距離情報を求めるためのセンサシステム
JP2013200604A (ja) 移動ロボット
JP6014484B2 (ja) 自律移動ロボット
JP2020126612A (ja) スマートフォンを利用する歩行者を保護するための先端歩行者補助システムを提供する方法及び装置
Liau et al. Non-metric navigation for mobile robot using optical flow
CN113610910B (zh) 一种移动机器人避障方法
CN115494856A (zh) 避障方法、装置、无人机及电子设备
JP2020064029A (ja) 移動体制御装置
Dinaux et al. FAITH: Fast iterative half-plane focus of expansion estimation using optic flow
JP7179687B2 (ja) 障害物検知装置
Marlow et al. Dynamically sized occupancy grids for obstacle avoidance
Bingo et al. Two-dimensional obstacle avoidance behavior based on three-dimensional environment map for a ground vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17859420

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17859420

Country of ref document: EP

Kind code of ref document: A1