WO2022001748A1 - Target tracking method and apparatus, electronic device, and mobile vehicle - Google Patents

Target tracking method and apparatus, electronic device, and mobile vehicle

Info

Publication number
WO2022001748A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
tracking
three-dimensional map
tracking target
Application number
PCT/CN2021/101518
Other languages
English (en)
French (fr)
Inventor
郑欣
黄金鑫
Original Assignee
深圳市道通智能航空技术股份有限公司
Application filed by 深圳市道通智能航空技术股份有限公司
Publication of WO2022001748A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Definitions

  • the present invention relates to the technical field of machine vision, and in particular, to a target tracking method, device, electronic device and mobile vehicle.
  • the three-dimensional position of the tracking target obtained by accurate measurement can realize trajectory planning or path planning with excellent performance, and realize the automatic tracking of the tracking target.
  • in practical applications, a three-dimensional map is usually constructed from depth sensors mounted on the mobile vehicle, such as binocular cameras or lidar; the tracking target is then located in the three-dimensional map, which serves as the basis for path planning.
  • however, existing lidar and other visual sensors have a limited measurement range, so they cannot locate and track targets at long distance, and tracking easily fails when the tracked target suddenly accelerates and briefly leaves the measurement range, which harms robustness.
  • the embodiments of the present invention aim to provide a target tracking method, device, electronic device and mobile vehicle, which can solve the defects existing in the existing target tracking methods.
  • a target tracking method includes:
  • Collect the position information of the tracking target; determine whether the tracking target exceeds the known 3D map; if not, plan a first tracking path according to the 3D position of the tracking target in the 3D map and move along the first tracking path to track the tracking target; if so, generate a virtual target point in the same direction as the tracking target on the boundary of the three-dimensional map, plan a second tracking path according to the three-dimensional position of the virtual target point, and move along the second tracking path to track the virtual target point.
  • the judging whether the tracking target exceeds a known three-dimensional map specifically includes:
  • when the number of the target pixels within the target frame is greater than a preset threshold, it is determined that the tracking target does not exceed the known three-dimensional map;
  • when the number of the target pixels within the target frame is less than or equal to the preset threshold, it is determined that the tracking target exceeds the known three-dimensional map.
  • the determining the target pixel corresponding to each of the point cloud information on the two-dimensional image specifically includes:
  • the three-dimensional position information is projected into the two-dimensional image to obtain corresponding target pixels.
  • generating a virtual target point in the same direction as the tracking target on the boundary of the three-dimensional map specifically includes:
  • the intersection of the ray in the direction of the tracking target with the boundary of the three-dimensional map is calculated as the virtual target point.
  • determining the direction in which the tracking target is located by using the reference point and the center of the three-dimensional map specifically includes:
  • the three-dimensional position information of the center of the three-dimensional map is subtracted from that of the reference point to obtain a unit vector of the direction in which the tracking target is located.
  • the calculation of the intersection of the ray in the direction where the tracking target is located and the boundary of the three-dimensional map, as the virtual target point specifically includes:
  • the three-dimensional position information of the virtual target point in the world coordinate system is determined.
  • the solution of the target distance between the center of the three-dimensional map in the direction of the unit vector and the boundary of the three-dimensional map specifically includes:
  • the first, second, and third projection lengths, on the X, Y, and Z axes of the world coordinate system respectively, of the distance between the center of the three-dimensional map and its boundary along the direction of the unit vector are calculated;
  • the shortest length among the first projected length, the second projected length and the third projected length is selected as the target distance.
  • the moving along the second tracking path to track the virtual target point specifically includes:
  • the movement speed is increased until a maximum movement speed is reached or it is determined that the tracking target does not exceed the three-dimensional map.
  • a target tracking device includes:
  • a collection device is used to collect the position information of the tracking target; a detection device is used to judge whether the tracking target exceeds a known three-dimensional map; a virtual target point generation device is used to generate, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target; a path planning device is used to plan a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map when the tracking target does not exceed the known three-dimensional map, and to plan a second tracking path according to the three-dimensional position of the virtual target point when the tracking target exceeds the known three-dimensional map; a tracking device is configured to move along the first tracking path or the second tracking path to track the tracking target.
  • an electronic device, comprising a processor and a memory communicatively connected to the processor; the memory stores computer program instructions which, when invoked by the processor, cause the processor to perform the target tracking method described above.
  • a mobile vehicle comprising:
  • a carrier body which is provided with a depth sensor for collecting point cloud information and an image acquisition device for collecting two-dimensional image information;
  • a driving mechanism which is used for outputting power to drive the carrier body to move
  • a controller configured to receive the point cloud information collected by the depth sensor and the two-dimensional image information collected by the image acquisition device, execute the target tracking method described above, and control the mobile vehicle, through the driving mechanism, to keep tracking the tracking target.
  • the target tracking method of the embodiment of the present invention can still perform path planning when the tracking target exceeds the range of the sensor or the range of the known three-dimensional map, keeps pursuing the tracking target, and brings the tracking target back within the sensor's range as far as possible. It effectively overcomes the limitation that the depth sensor imposes on automatic tracking algorithms, and has good application prospects.
  • FIG. 1 is a schematic diagram of an application scenario of a target tracking method according to an embodiment of the present invention
  • FIG. 2 is a structural block diagram of a mobile vehicle provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a target tracking device provided by an embodiment of the present invention.
  • FIG. 4 is a method flowchart of a target tracking method provided by an embodiment of the present invention.
  • 5a is a flowchart of a method for judging whether a tracking target exceeds a known three-dimensional map provided by an embodiment of the present invention
  • 5b is a schematic diagram of a two-dimensional image including a tracking target provided by an embodiment of the present invention.
  • 5c is a flowchart of a method for calculating a virtual target point position provided by an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a relative positional relationship between a moving vehicle and a tracking target in a world coordinate system according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of a result of an electronic device provided by an embodiment of the present invention.
  • Target tracking is the process by which an unmanned mobile vehicle, relying on the measurement data collected by the various sensor devices it carries, perceives the external environment and, given a specified tracking target, follows that target's movement to complete various tasks.
  • FIG. 1 is an application scenario of a target tracking method provided by an embodiment of the present invention. As shown in FIG. 1 , in this application scenario, a mobile vehicle 10 , a tracking target 20 , an intelligent terminal 30 and a wireless network 40 are included.
  • the mobile vehicle 10 may be any type of power-driven unmanned mobile vehicle (a UAV is taken as an example in FIG. 1), including but not limited to drones, AGVs, and other types of robots. It can be given the appropriate size and power for the actual situation, so as to provide the required load capacity, moving speed, and cruising range.
  • the mobile vehicle 10 is equipped with at least a depth sensor for detecting depth information and an image acquisition device for acquiring two-dimensional image information, so as to ensure the ability to perceive the external environment.
  • the tracking target 20 is a target object for guiding the movement of the mobile vehicle 10 .
  • it can be any type of object or device, which only needs to be able to be detected by the depth sensor and the image acquisition device of the mobile vehicle 10 .
  • the intelligent terminal 30 is an intelligent device on the user side used for interaction with the user 50, including but not limited to a computer, a tablet computer, and a smart remote controller. It can establish a communication connection with the mobile vehicle 10 and/or the tracking target 20 through the wireless network 40 to realize data transmission with them (such as issuing control commands or receiving uploaded data information).
  • the smart terminal 30 is equipped with one or more different user interaction devices for collecting user instructions or displaying and feeding back information to the user. These interactive devices include, but are not limited to: buttons, display screens, touch screens, and speakers.
  • the smart terminal 30 may be equipped with a touch display screen, through which the tracking target of the mobile vehicle 10 is selected or the tracking target 20 is operated to move so as to guide the mobile vehicle 10; the data information generated by the mobile vehicle 10 during target tracking can also be received.
  • the wireless network 40 may be a wireless communication network based on any type of data transmission principle for establishing a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network or a combination thereof located in a specific signal frequency band.
  • FIG. 1 is only used for exemplary illustration. Those skilled in the art can add or omit one or more of the devices according to actual needs, which are not limited to those shown in FIG. 1 .
  • FIG. 2 is a structural block diagram of a mobile vehicle 10 according to an embodiment of the present invention.
  • the mobile carrier 10 may include: a carrier body 110 , a driving mechanism 120 and a controller 130 .
  • the carrier body 110 is the main part of the mobile carrier. Its specific structure setting depends on the actual mobile vehicle used, and it can be the main structure of any material or shape. To ensure the ability to perceive the outside world, the vehicle body 110 is provided with a depth sensor 140 for acquiring point cloud information and an image acquisition device 150 for acquiring two-dimensional image information.
  • the "point cloud information" is three-dimensional position information, including depth, acquired by the depth sensor.
  • the two-dimensional image, in contrast to the point cloud information, is a color image or other suitable type of image that provides only two-dimensional position information.
  • the specific depth sensor used can be chosen according to the actual situation, for example a binocular camera, a structured-light camera, a ToF camera, or lidar.
  • the driving mechanism 120 is a power system for outputting power to drive the carrier body 110 to move, such as an electric driving mechanism 120 composed of a motor, a battery and a transmission mechanism.
  • the specific driving mechanism 120 used can be selected according to actual needs, which is not limited herein.
  • the driving mechanism 120 determines the mobility index such as the acceleration and the maximum speed that the mobile vehicle 10 can have, and a driving mechanism that meets the usage requirements can be selected accordingly.
  • the controller 130 is an electronic computing platform with logic operation capability, capable of executing a series of operation steps based on an internally stored computer program and outputting corresponding data information.
  • as the control core of the entire mobile vehicle, it can, upon receiving the point cloud information collected by the depth sensor and the two-dimensional image information collected by the image acquisition device, apply the preset target tracking method and send corresponding control commands to the driving mechanism 120 so that the moving vehicle keeps tracking the tracking target.
  • the mobile vehicle 10's ability to perceive the outside world comes from the depth sensor 140 and the image acquisition device 150 mounted on it. Since the measurement range of the depth sensor 140 is limited, in the practical application shown in FIG. 1, if the tracking target moves beyond the measurement range of the depth sensor 140, the mobile vehicle 10 will be unable to perceive the tracking target, resulting in tracking failure.
  • when executing the target tracking method, the controller 130 can use existing knowledge and the available detection results to construct suitable virtual target points that keep tracking going, so that the tracking target can return to the detection range of the depth sensor 140 as soon as possible.
  • the functional modules of the mobile vehicle 10 are only exemplarily described in FIG. 2 , and are not used to limit the functional modules of the mobile vehicle 10 . Changes, replacements or integrations of the mobile vehicle 10 shown in FIG. 2 can be easily conceived by technicians based on the inventive idea provided by the present invention according to the needs of the actual situation, and fall within the protection scope of the present invention.
  • FIG. 3 is a functional block diagram of a target tracking apparatus provided by an embodiment of the present invention.
  • the target tracking device may be implemented by the controller of the mobile vehicle described above.
  • the modules shown in FIG. 3 can be selectively implemented by software, hardware, or a combination of software and hardware according to actual needs.
  • the target tracking device 300 includes: a collection module 310, a detection module 320, a virtual target point generation module 330, a path planning module 340, and a tracking module 350.
  • the collection module 310 is used to collect the position information of the tracking target.
  • the location information of the acquisition module 310 may come from a depth sensor and an image acquisition device.
  • the position information of the tracking target can be obtained by filtering the raw data provided by the depth sensor and the image acquisition device in any suitable manner.
  • the position information that the depth sensor can provide is three-dimensional and can contain depth information, but the detection distance is limited.
  • the image acquisition device can only provide relative position information on a two-dimensional plane, and cannot determine the depth of the tracking target.
  • the detection module 320 is used for judging whether the tracking target exceeds the known three-dimensional map.
  • the detection module 320 is a module running in real time, and continuously detects the tracking target during the tracking process. Of course, the detection module 320 can also select an appropriate detection period to reduce the occupied resources.
  • the "known three-dimensional map" is constructed by the controller 130 from the point cloud information collected by the depth sensor and describes the surroundings of the moving vehicle, for example whether obstacles are present. That is, the known three-dimensional map represents the detection range of the depth sensor.
  • any form of 3D map representation may be used, such as an octree, voxels, a grid map, or a direct point-cloud map; it only needs to represent the 3D spatial relationships of the objects around the moving vehicle.
  • the virtual target point generating module 330 is configured to generate a virtual target point in the same direction as the tracking target on the boundary of the three-dimensional map.
  • the virtual target point generation module 330 is activated after the detection module 320 detects that the tracking target exceeds the three-dimensional map (ie, leaves the detection range of the depth sensor), thereby temporarily providing a new tracking target for the tracking algorithm.
  • the path planning module 340 is configured to plan a first tracking path according to the 3D position of the tracking target in the 3D map when the tracking target does not exceed the known 3D map, and to plan a second tracking path according to the 3D position of the virtual target point when the tracking target exceeds the known 3D map.
  • when the tracking target has not left the known three-dimensional map, the path planning module 340 can perform path planning directly based on the tracking target.
  • when the tracking target has left the known three-dimensional map, the path planning module 340 instead uses the virtual target point as the basis for path planning.
  • "first tracking path" and "second tracking path" are used in this embodiment to denote the paths obtained based on the tracking target and on the virtual target point, respectively.
  • any suitable type of path planning algorithm may be selected according to the needs of the actual situation.
  • the tracking module 350 is configured to move along the first tracking path or the second tracking path to track the tracking target. Based on the tracking path provided by the path planning module 340, the tracking module 350 can output corresponding control commands to control the driving mechanism so that the mobile vehicle moves according to the tracking path, so as to track the tracking target.
  • the tracking module 350 may use a "pursuit" mode to track a tracking target that has left the known three-dimensional map. Specifically, the tracking module 350 is configured to control the mobile vehicle 10, when moving along the second tracking path, to increase its moving speed until the maximum moving speed is reached or the detection module 320 determines that the tracking target no longer exceeds the three-dimensional map.
  • the distance between the mobile vehicle 10 and the tracking target 20 can be shortened as soon as possible, so that it can re-enter the detection range of the depth sensor, which effectively improves the robustness and avoids the occurrence of tracking failure.
  • through the virtual target point provided by the virtual target point generation module, the target tracking device provided by the embodiment of the present invention can keep the tracking algorithm running when the tracking target temporarily leaves the detection range and tries to bring the tracking target back within the sensor's range, thereby effectively overcoming the limitation that the depth sensor's range imposes on automatic tracking algorithms.
  • the application scenario of FIG. 1 takes a drone as an example.
  • the target tracking apparatus can also be used in other types of scenarios and devices to improve the robustness of the target tracking algorithm, and is not limited to the scenario shown in FIG. 1 .
  • FIG. 4 is a method flowchart of a target tracking method provided by an embodiment of the present invention. As shown in Figure 4, the target tracking method includes the following steps:
  • the "tracking target” may be any kind of target object set or pre-specified by the user, including but not limited to a specific person, animal, vehicle, boat, or aircraft.
  • Position information refers to the relative positional relationship between the tracking target 20 and the mobile vehicle 10 .
  • the mobile vehicle 10 can obtain three-dimensional position information through a depth sensor, and obtain two-dimensional position information through image information of an image acquisition device.
  • step 420 Determine whether the tracking target exceeds a known three-dimensional map. If not, go to step 430, if yes, go to step 440.
  • the "three-dimensional map” is the spatial positional relationship between the mobile vehicle 10 and surrounding objects detected by the depth sensor.
  • the size of the known three-dimensional map range depends on the detection range of the depth sensor, and can also be considered as the detection range of the depth sensor.
  • the judgment method may include the following steps:
  • the area occupied by the tracking target on the two-dimensional image is used as the target frame A.
  • the target frame can be generated by screening from a two-dimensional image by any type of computer vision algorithm.
  • point cloud information can be mapped to a two-dimensional image through a series of operation steps such as coordinate system transformation and projection transformation.
  • the conversion process of point cloud information to target pixels can be completed through the following steps:
  • the point cloud information is converted into three-dimensional position information in the coordinate system of the image acquisition device through a first transformation matrix.
  • the first transformation matrix is determined by the relative position between the depth sensor and the image capture device.
  • the three-dimensional position information is projected into the two-dimensional image through the internal parameter matrix of the image acquisition device to obtain corresponding target pixels.
  • step 423 Determine whether the number of the target pixels in the target frame is greater than a preset threshold. If yes, go to step 424, if not, go to step 425.
  • the preset threshold is an empirical value, which can be verified and determined by a technician through experiments and other means according to the actual situation.
  • a small number of target pixels within the target frame, i.e. a small degree of overlap between the two, indicates that the point cloud information collected by the depth sensor contains essentially no data of the tracking target. Therefore, it can be considered that the tracking target has moved beyond the detection range, that is, beyond the known three-dimensional map.
  • any existing suitable path planning algorithm can be used to plan and obtain the desired tracking path.
  • the "virtual target point" is a point on the boundary of the 3D map in the same direction as the real tracking target, which ensures that the moving direction of the mobile vehicle during tracking is correct.
  • the virtual target point is the intersection of the ray from the moving vehicle toward the tracking target with the boundary of the detection range. In some embodiments, it can be calculated by the following steps:
  • the tracking target usually occupies a certain area (i.e. the target frame) in the 2D image. Therefore, the two-dimensional position information may be any suitable, representative pixel position within the tracking target, for example the pixel at the tracking target's center of gravity or center.
  • a reference point in the same direction as the tracking target is generated.
  • the "reference point” is the point at which the two-dimensional position information of the tracked target is remapped to three-dimensional space.
  • the direction in which the tracking target is located is determined through the reference point and the center of the three-dimensional map.
  • the center of the three-dimensional map is the origin position where the mobile vehicle 10 is located. Similar to the tracking target, the center of gravity of the mobile vehicle 10, the installation position of the depth sensor, etc., can also be used to represent the mobile vehicle 10 as the center of the three-dimensional map.
  • the direction of the tracking target can be represented by a unit vector.
  • the reference point can be converted into three-dimensional position information in the world coordinate system through the second transformation matrix, so that it is expressed consistently with the center. Then, the three-dimensional position information of the center of the three-dimensional map is subtracted from that of the reference point to obtain a unit vector of the direction in which the tracking target is located.
  • the intersection of the ray in the direction of the tracking target and the boundary of the three-dimensional map is calculated as the virtual target point.
  • the ray refers to the straight line starting at the moving vehicle 10 and extending outward through the tracking target 20.
  • the virtual target point can be obtained by calculating and determining the intersection of the ray and the boundary of the three-dimensional map in the three-dimensional space.
  • the virtual target point can be obtained by calculating the following steps:
  • the target distance can be obtained as follows:
  • the shortest length among the first projected length, the second projected length and the third projected length is selected as the target distance.
  • the three-dimensional position information of the virtual target point (that is, the three-axis coordinates in the world coordinate system) can be obtained by calculation.
  • the virtual target point calculated by the above method has the characteristics of strong stability, can provide a better guidance effect, and support virtual target point tracking for a long time. Therefore, it can meet the needs of use when the target is not within the detection range for a long time, and can also be used as an alternative solution for the depth sensor when the long-distance detection accuracy is not high, avoiding the interference of the false detection results of the depth sensor.
  • the same or different path planning algorithm as in step 430 may be used to plan and obtain the corresponding tracking path.
  • first tracking path and second tracking path are only used to distinguish and illustrate step 430 and step 450, but are not used to define a specific path.
  • the generated tracking paths are not distinguished, and it is only necessary to control the driving mechanism to move the mobile vehicle along the tracking path.
  • the target tracking method provided by the embodiment of the present invention can still perform path planning when the tracking target exceeds the range of the sensor or the range of the known three-dimensional map, keep the tracking of the target, and make the tracking target return to the depth sensor as much as possible. within the range. It effectively overcomes the limitation of depth sensor for automatic tracking algorithm, and has a good application prospect.
  • the original point cloud information (x, y, z) obtained by the depth sensor is converted into the point cloud information (x′, y′, z′) in the coordinate system of the image acquisition device by formula (1).
  • T is the conversion matrix from the depth sensor to the image acquisition device (such as a color camera).
  • the transformation matrix T can be obtained by calculating the installation positions of various sensors (accelerometers, inertial measurement elements, GPS modules, etc.) and depth sensors on the mobile vehicle.
  • the three-dimensional point cloud information (x′, y′, z′) in the coordinate system of the image acquisition device is projected onto the two-dimensional picture by formula (2), and the pixel (u, v) corresponding to each piece of point cloud information is determined.
  • the k matrix is the internal parameter matrix of the color camera, which can be determined by the monocular camera calibration technology.
  • the number of pixels in the target frame can be calculated.
  • the number of pixel points exceeds the preset threshold and the corresponding three-dimensional point is within the range of the three-dimensional map, it can be determined that the tracking target is within the three-dimensional map. On the contrary, it is determined that the tracking target exceeds the range of the three-dimensional map.
  • the point at depth 1 on the ray toward the target can be taken as the reference point, expressed as Pc′.
  • k is the internal parameter matrix of the image acquisition device.
  • T is the transformation matrix from the coordinate system of the image acquisition device to the world coordinate system.
  • the world coordinate system is the coordinate system with the moving vehicle as the origin.
  • the ray can be obtained by subtracting the center O of the moving vehicle from the known reference point Pc″, as shown in formula (7) below.
  • K is the length of the line segment |OP|, which can be computed from the perpendicular distance between the center O of the moving vehicle and the boundary of the three-dimensional map.
  • K 1 , K 2 , and K 3 respectively denote the boundary-distance solutions obtained on the X, Y, and Z axes of the world coordinate system for the direction of the unit vector.
  • FIG. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device may include: a processor (processor) 702 , a communication interface (Communications Interface) 704 , a memory (memory) 706 , and a communication bus 708 .
  • the processor 702 , the communication interface 704 , and the memory 706 communicate with each other through the communication bus 708 .
  • the communication interface 704 is used to communicate with network elements of other devices such as clients or other servers.
  • the processor 702 is configured to execute the program 710, and specifically may execute the relevant steps in the above-mentioned embodiments of the target tracking method.
  • the program 710 may include program code including computer operation instructions.
  • the processor 702 may be a central processing unit CPU, or an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
  • the one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
  • the memory 706 is used to store the program 710 .
  • Memory 706 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the program 710 can specifically be used to cause the processor 702 to execute the target tracking method in any of the above method embodiments.
  • each step of the exemplary target tracking method described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the above description has generally described the components and steps of each example in terms of functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution.
  • the computer software can be stored in a computer-readable storage medium, and when the program is executed, it can include the processes of the above-mentioned method embodiments.
  • the storage medium can be a magnetic disk, an optical disk, a read-only storage memory, or a random storage memory, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A target tracking method and apparatus, an electronic device, and a mobile vehicle. The target tracking method includes: collecting position information of a tracking target (410); determining whether the tracking target exceeds a known three-dimensional map (420); if not, planning a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map (430); if so, generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target (440), and planning a second tracking path according to the three-dimensional position of the virtual target point (450); and moving along the second or the first tracking path (460). The method overcomes the limitation that depth sensors impose on automatic tracking algorithms: when the tracking target moves beyond the sensor's range, the vehicle keeps pursuing it and brings it back within the sensor's range as far as possible.

Description

Target tracking method and apparatus, electronic device, and mobile vehicle
This application claims priority to Chinese patent application No. 2020106211834, filed with the China National Intellectual Property Administration on June 30, 2020 and entitled "Target tracking method and apparatus, electronic device, and mobile vehicle", the entire contents of which are incorporated herein by reference.
[Technical Field]
The present invention relates to the technical field of machine vision, and in particular to a target tracking method and apparatus, an electronic device, and a mobile vehicle.
[Background]
With the continuous development of computer intelligence, automatic tracking algorithms based on machine vision are widely used in all kinds of mobile vehicles (such as industrial robots, self-driving cars, and drones), effectively raising the degree of intelligence of these unmanned vehicles.
In existing automatic tracking algorithms, the accurately measured three-dimensional position of the tracking target enables high-performance trajectory or path planning and thus automatic tracking of the target.
In practical applications, a three-dimensional map is usually constructed from depth sensors mounted on the mobile vehicle, such as binocular cameras or lidar; the tracking target is then located within that map, which serves as the basis for path planning.
However, existing lidar and other visual sensors have a limited measurement range. They cannot locate and track a target at long distance, and tracking easily fails when the target suddenly accelerates and briefly leaves the measurement range, which harms robustness.
[Summary of the Invention]
The embodiments of the present invention aim to provide a target tracking method and apparatus, an electronic device, and a mobile vehicle that can remedy the defects of existing target tracking approaches.
To solve the above technical problem, an embodiment of the present invention provides the following technical solution: a target tracking method. The method includes:
collecting position information of a tracking target; determining whether the tracking target exceeds a known three-dimensional map; if not, planning a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map, and moving along the first tracking path to track the tracking target; if so, generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target, planning a second tracking path according to the three-dimensional position of the virtual target point, and moving along the second tracking path to track the virtual target point.
Optionally, the determining whether the tracking target exceeds a known three-dimensional map specifically includes:
collecting a two-dimensional image containing the tracking target with an image acquisition device, and acquiring the corresponding point cloud information with a depth sensor;
taking the area occupied by the tracking target on the two-dimensional image as a target frame;
determining the target pixel corresponding to each piece of the point cloud information on the two-dimensional image;
when the number of the target pixels within the target frame is greater than a preset threshold, determining that the tracking target does not exceed the known three-dimensional map;
when the number of the target pixels within the target frame is less than or equal to the preset threshold, determining that the tracking target exceeds the known three-dimensional map.
Optionally, the determining the target pixel corresponding to each piece of the point cloud information on the two-dimensional image specifically includes:
converting the point cloud information, through a first transformation matrix, into three-dimensional position information in the coordinate system of the image acquisition device;
projecting the three-dimensional position information into the two-dimensional image through the intrinsic parameter matrix of the image acquisition device to obtain the corresponding target pixels.
Optionally, the generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target specifically includes:
acquiring two-dimensional position information of the tracking target in the two-dimensional image;
generating, according to the two-dimensional position information and the intrinsic parameter matrix of the image acquisition device, a reference point in the same direction as the tracking target;
determining, from the reference point and the center of the three-dimensional map, the direction in which the tracking target lies;
calculating the intersection of the ray in the direction of the tracking target with the boundary of the three-dimensional map as the virtual target point.
Optionally, the determining, from the reference point and the center of the three-dimensional map, the direction in which the tracking target lies specifically includes:
converting the reference point, through a second transformation matrix, into three-dimensional position information in the world coordinate system;
subtracting the three-dimensional position information of the center of the three-dimensional map from that of the reference point to obtain a unit vector of the direction in which the tracking target lies.
Optionally, the calculating the intersection of the ray in the direction of the tracking target with the boundary of the three-dimensional map as the virtual target point specifically includes:
solving for the target distance between the center of the three-dimensional map and the boundary of the three-dimensional map along the direction of the unit vector;
determining, from the target distance and the unit vector, the three-dimensional position information of the virtual target point in the world coordinate system.
Optionally, the solving for the target distance between the center of the three-dimensional map and the boundary of the three-dimensional map along the direction of the unit vector specifically includes:
calculating the first projection length, on the X axis of the world coordinate system, of the distance between the center of the three-dimensional map and its boundary along the direction of the unit vector;
calculating the second projection length, on the Y axis of the world coordinate system, of that distance;
calculating the third projection length, on the Z axis of the world coordinate system, of that distance;
selecting the shortest of the first, second, and third projection lengths as the target distance.
Optionally, the moving along the second tracking path to track the virtual target point specifically includes:
when moving along the second tracking path, increasing the moving speed until the maximum moving speed is reached or it is determined that the tracking target no longer exceeds the three-dimensional map.
To solve the above technical problem, an embodiment of the present invention further provides the following technical solution: a target tracking apparatus. The apparatus includes:
a collection device for collecting position information of a tracking target; a detection device for determining whether the tracking target exceeds a known three-dimensional map; a virtual target point generation device for generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target; a path planning device for planning a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map when the tracking target does not exceed the known three-dimensional map, and for planning a second tracking path according to the three-dimensional position of the virtual target point when the tracking target exceeds the known three-dimensional map; and a tracking device for moving along the first tracking path or the second tracking path to track the tracking target.
To solve the above technical problem, an embodiment of the present invention further provides the following technical solution: an electronic device, including a processor and a memory communicatively connected to the processor; the memory stores computer program instructions which, when invoked by the processor, cause the processor to perform the target tracking method described above.
To solve the above technical problem, an embodiment of the present invention further provides the following technical solution: a mobile vehicle, including:
a vehicle body provided with a depth sensor for collecting point cloud information and an image acquisition device for collecting two-dimensional image information;
a driving mechanism for outputting power to drive the vehicle body to move; and
a controller configured to receive the point cloud information collected by the depth sensor and the two-dimensional image information collected by the image acquisition device, execute the target tracking method described above, and control the mobile vehicle, through the driving mechanism, to keep tracking the tracking target.
Compared with the prior art, the target tracking method of the embodiments of the present invention can still perform path planning when the tracking target exceeds the sensor's range or the range of the known three-dimensional map, keeps pursuing the tracking target, and brings it back within the sensor's range as far as possible. It effectively overcomes the limitation that depth sensors impose on automatic tracking algorithms and has good application prospects.
[Brief Description of the Drawings]
One or more embodiments are illustrated by the figures in the corresponding drawings. These illustrations do not limit the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of a target tracking method according to an embodiment of the present invention;
FIG. 2 is a structural block diagram of a mobile vehicle provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a target tracking apparatus provided by an embodiment of the present invention;
FIG. 4 is a flowchart of a target tracking method provided by an embodiment of the present invention;
FIG. 5a is a flowchart of a method for determining whether a tracking target exceeds a known three-dimensional map, provided by an embodiment of the present invention;
FIG. 5b is a schematic diagram of a two-dimensional image containing a tracking target, provided by an embodiment of the present invention;
FIG. 5c is a flowchart of a method for calculating the position of a virtual target point, provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of the relative positions of a mobile vehicle and a tracking target in the world coordinate system, provided by an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
[Detailed Description]
To facilitate understanding of the present invention, it is described in more detail below with reference to the drawings and specific embodiments. It should be noted that when an element is described as being "fixed to" another element, it can be directly on the other element, or one or more intervening elements may be present. When an element is described as being "connected to" another element, it can be directly connected to the other element, or one or more intervening elements may be present. Orientation or position terms used in this specification, such as "upper", "lower", "inner", "outer", and "bottom", are based on the orientations or positions shown in the drawings; they are only for convenience of description and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be understood as limiting the present invention. In addition, the terms "first", "second", and "third" are used for description only and cannot be understood as indicating or implying relative importance.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments, not for limiting the present invention. The term "and/or" used in this specification includes any and all combinations of one or more of the associated listed items.
Furthermore, the technical features involved in the different embodiments of the present invention described below can be combined with each other as long as they do not conflict.
"Target tracking" is the process by which an unmanned mobile vehicle, relying on the measurement data collected by the various sensor devices it carries, perceives the external environment and, given a specified tracking target, follows that target's movement to complete various tasks.
FIG. 1 shows an application scenario of a target tracking method provided by an embodiment of the present invention. As shown in FIG. 1, the scenario includes a mobile vehicle 10, a tracking target 20, an intelligent terminal 30, and a wireless network 40.
The mobile vehicle 10 may be any type of power-driven unmanned mobile vehicle (a drone is taken as an example in FIG. 1), including but not limited to drones, AGVs, and other types of robots. It can be given the appropriate size and power for the actual situation, so as to provide the required load capacity, moving speed, and cruising range.
The mobile vehicle 10 carries various types of equipment or functional modules to realize the corresponding functions and meet the needs of different scenarios, including but not limited to sensor devices, grasping mechanisms, or sweeping mechanisms. In this embodiment, the mobile vehicle 10 carries at least a depth sensor for detecting depth information and an image acquisition device for collecting two-dimensional image information, to ensure the ability to perceive the external environment.
The tracking target 20 is the target object that guides the movement of the mobile vehicle 10. It can be any type of object or device; it only needs to be detectable by the depth sensor and the image acquisition device of the mobile vehicle 10.
The intelligent terminal 30 is an intelligent device on the user side used for interaction with the user 50, including but not limited to a computer, a tablet computer, and a smart remote controller. It can establish a communication connection with the mobile vehicle 10 and/or the tracking target 20 through the wireless network 40 to realize data transmission with them (such as issuing control commands or receiving uploaded data information).
The intelligent terminal 30 is equipped with one or more different user interaction devices for collecting user instructions or displaying and feeding back information to the user. These interaction devices include but are not limited to buttons, display screens, touch screens, and speakers. For example, the intelligent terminal 30 may be equipped with a touch display screen through which the tracking target of the mobile vehicle 10 is selected, or the tracking target 20 is operated to move so as to guide the mobile vehicle 10; the data information generated by the mobile vehicle 10 during target tracking can also be received.
The wireless network 40 may be a wireless communication network based on any type of data transmission principle for establishing a data transmission channel between two nodes, such as a Bluetooth network, a WiFi network, a wireless cellular network located in a specific signal frequency band, or a combination thereof.
It should be noted that the application scenario shown in FIG. 1 is only illustrative. Those skilled in the art can add or omit one or more of the devices according to actual needs, without being limited to what is shown in FIG. 1.
FIG. 2 is a structural block diagram of the mobile vehicle 10 provided by an embodiment of the present invention. As shown in FIG. 2, the mobile vehicle 10 may include a vehicle body 110, a driving mechanism 120, and a controller 130.
The vehicle body 110 is the main part of the mobile vehicle. Its specific structure depends on the mobile vehicle actually used and may be a main structure of any material or shape. To ensure the ability to perceive the outside world, the vehicle body 110 is provided with a depth sensor 140 for collecting point cloud information and an image acquisition device 150 for collecting two-dimensional image information.
The "point cloud information" is three-dimensional position information, including depth, acquired by the depth sensor. The two-dimensional image, by contrast, is a color image or other suitable type of image that provides only two-dimensional position information. The specific depth sensor used can be chosen according to the actual situation, for example a binocular camera, a structured-light camera, a ToF camera, or lidar.
The driving mechanism 120 is a power system for outputting power to drive the vehicle body 110 to move, for example an electric driving mechanism 120 composed of a motor, a battery, and a transmission mechanism. The specific driving mechanism 120 can be selected according to actual needs and is not limited here. For example, the driving mechanism 120 determines mobility indices such as the acceleration and maximum speed of the mobile vehicle 10, so a driving mechanism that meets the usage requirements can be selected accordingly.
The controller 130 is an electronic computing platform with logical operation capability that can execute a series of operation steps based on an internally stored computer program and output corresponding data information. As the control core of the entire mobile vehicle, it can, upon receiving the point cloud information collected by the depth sensor and the two-dimensional image information collected by the image acquisition device, apply a preset target tracking method and send corresponding control commands to the driving mechanism 120 so that the mobile vehicle keeps tracking the tracking target.
As shown in FIG. 2, the mobile vehicle 10's ability to perceive the outside world comes from the depth sensor 140 and the image acquisition device 150 it carries. Since the measurement range of the depth sensor 140 is limited, in the practical application shown in FIG. 1, if the tracking target moves beyond the measurement range of the depth sensor 140, the mobile vehicle 10 will be unable to perceive it, resulting in tracking failure.
To improve the robustness of the tracking algorithm and avoid tracking failure, when executing the target tracking method, the controller 130 can use existing knowledge and the available detection results to construct suitable virtual target points that keep tracking going, so that the tracking target can return to the detection range of the depth sensor 140 as soon as possible.
It should be noted that FIG. 2 only describes the functional modules of the mobile vehicle 10 by way of example and is not intended to limit them. Based on the inventive idea provided by the present invention, changes, replacements, or integrations of the mobile vehicle 10 shown in FIG. 2 are readily conceivable to technicians according to actual needs and fall within the protection scope of the present invention.
FIG. 3 is a functional block diagram of a target tracking apparatus provided by an embodiment of the present invention. The target tracking apparatus may be executed by the controller of the mobile vehicle described above. Those skilled in the art will understand that the modules shown in FIG. 3 can be implemented selectively in software, hardware, or a combination of the two according to actual needs.
As shown in FIG. 3, the target tracking apparatus 300 includes: a collection module 310, a detection module 320, a virtual target point generation module 330, a path planning module 340, and a tracking module 350.
The collection module 310 is used to collect position information of the tracking target. As shown in FIG. 2, the position information used by the collection module 310 may come from the depth sensor and the image acquisition device. Given a tracking target, its position information can be extracted in any suitable manner from the raw data provided by the depth sensor and the image acquisition device.
The position information the depth sensor can provide is three-dimensional and can include depth information, but its detection distance is limited. The image acquisition device can only provide relative position information on a two-dimensional plane and cannot determine the depth of the tracking target.
The detection module 320 is used to determine whether the tracking target exceeds the known three-dimensional map. The detection module 320 runs in real time and continuously checks the tracking target during the tracking process. Of course, the detection module 320 can also adopt a detection period of suitable length to reduce resource usage.
The "known three-dimensional map" is constructed by the controller 130 from the point cloud information collected by the depth sensor and describes the surroundings of the mobile vehicle, for example whether obstacles are present. That is, the known three-dimensional map represents the detection range of the depth sensor.
Any form of three-dimensional map representation may be used, such as an octree, voxels, a grid map, or a direct point-cloud map; it only needs to represent the three-dimensional spatial relationships of the objects around the mobile vehicle. A minimal sketch of testing whether a point lies inside such a map follows.
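For illustration only, the following Python sketch models the known three-dimensional map as an axis-aligned cube centered on the vehicle, consistent with the boundary-distance computation later in this description; the function name, the cube model, and the parameters are assumptions rather than part of the original disclosure.

```python
import numpy as np

def in_known_map(point_w, center_o, map_side):
    """Test whether a world-frame point lies inside the known 3D map.

    Simplifying assumption: the map is an axis-aligned cube of side
    map_side centered on the vehicle position center_o, so the boundary
    lies 0.5 * map_side away from the center along each axis.
    """
    offset = np.abs(np.asarray(point_w, dtype=float) - np.asarray(center_o, dtype=float))
    return bool(np.all(offset <= 0.5 * map_side))
```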
The virtual target point generation module 330 is used to generate, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target. The virtual target point generation module 330 is activated after the detection module 320 detects that the tracking target has exceeded the three-dimensional map (i.e., has left the detection range of the depth sensor), thereby temporarily providing a new tracking target for the tracking algorithm.
The path planning module 340 is used to plan a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map when the tracking target does not exceed the known three-dimensional map, and to plan a second tracking path according to the three-dimensional position of the virtual target point when the tracking target exceeds the known three-dimensional map.
In this embodiment, two different cases may arise. In the first, the tracking target has not left the known three-dimensional map, and the path planning module 340 can plan the path directly based on the tracking target. In the second, the tracking target has left the known three-dimensional map, and the path planning module 340 instead uses the virtual target point as the basis for path planning.
To distinguish the two cases, this embodiment uses "first tracking path" and "second tracking path" to denote the paths obtained based on the tracking target and on the virtual target point, respectively. Those skilled in the art can select any suitable type of path planning algorithm according to actual needs.
The tracking module 350 is used to move along the first tracking path or the second tracking path to track the tracking target. Based on the tracking path provided by the path planning module 340, the tracking module 350 can output corresponding control commands to the driving mechanism so that the mobile vehicle moves along that path and thereby tracks the tracking target.
In some embodiments, the tracking module 350 may use a "pursuit" mode to track a tracking target that has left the known three-dimensional map. Specifically, the tracking module 350 is configured to control the mobile vehicle 10, when moving along the second tracking path, to increase its moving speed until the maximum moving speed is reached or the detection module 320 determines that the tracking target no longer exceeds the three-dimensional map.
In this way, the distance between the mobile vehicle 10 and the tracking target 20 can be shortened as quickly as possible so that the target re-enters the detection range of the depth sensor, which effectively improves robustness and avoids tracking failure. A sketch of this speed ramp follows.
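As a sketch of the "pursuit" mode described above (the function name, the per-control-cycle framing, and the accel_step parameter are illustrative assumptions, not part of the original disclosure):

```python
def pursuit_speed(current_speed, max_speed, accel_step, target_in_map):
    """One control-cycle speed update for pursuit mode: while following the
    second tracking path (toward the virtual target point), raise the speed
    each cycle until the maximum is reached or the target re-enters the map."""
    if target_in_map:
        return current_speed  # target back in the known map; normal tracking resumes
    return min(current_speed + accel_step, max_speed)
```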
Through the virtual target point provided by the virtual target point generation module, the target tracking apparatus of the embodiment of the present invention can keep the tracking algorithm running when the tracking target temporarily leaves the detection range and tries to bring the tracking target back within the sensor's range, thereby effectively overcoming the limitation that the depth sensor's range imposes on automatic tracking algorithms.
Although the application scenario shown in FIG. 1 takes a drone as an example, those skilled in the art will understand that the target tracking apparatus can also be used in other types of scenarios and devices to improve the robustness of target tracking algorithms, and is not limited to the scenario shown in FIG. 1.
Based on the target tracking apparatus shown in FIG. 3, an embodiment of the present invention further provides a target tracking method. FIG. 4 is a flowchart of the target tracking method. As shown in FIG. 4, the method includes the following steps:
410. Collect position information of the tracking target.
The "tracking target" can be any kind of target object set or pre-specified by the user, including but not limited to a specific person, animal, vehicle, boat, or aircraft. The "position information" refers to the relative positional relationship between the tracking target 20 and the mobile vehicle 10. Typically, the mobile vehicle 10 obtains three-dimensional position information through the depth sensor and two-dimensional position information through the image information of the image acquisition device.
420. Determine whether the tracking target exceeds the known three-dimensional map. If not, go to step 430; if so, go to step 440.
The "three-dimensional map" is the spatial positional relationship between the mobile vehicle 10 and the surrounding objects, as detected by the depth sensor. Naturally, the extent of the known three-dimensional map depends on the detection range of the depth sensor and can also be regarded as that detection range.
Whether the tracking target has exceeded or left the known three-dimensional map can be determined in any suitable way. In some embodiments, as shown in FIG. 5a, the determination may include the following steps:
421. Collect a two-dimensional image containing the tracking target with the image acquisition device, and acquire the corresponding point cloud information with the depth sensor.
As shown in FIG. 5b, the area occupied by the tracking target on the two-dimensional image is used as the target frame A. The target frame can be generated from the two-dimensional image by any type of computer vision algorithm.
422. Determine the target pixel corresponding to each piece of point cloud information on the two-dimensional image. As data containing three-dimensional information, the point cloud can be mapped onto the two-dimensional image through a series of operations such as coordinate-system transformation and projection.
Specifically, the conversion from point cloud information to target pixels can be completed by the following steps:
First, convert the point cloud information, through a first transformation matrix, into three-dimensional position information in the coordinate system of the image acquisition device. The first transformation matrix is determined by the relative position of the depth sensor and the image acquisition device.
Then, project the three-dimensional position information into the two-dimensional image through the intrinsic parameter matrix of the image acquisition device to obtain the corresponding target pixels.
423. Determine whether the number of target pixels within the target frame is greater than a preset threshold. If so, go to step 424; if not, go to step 425.
The preset threshold is an empirical value that technicians can verify and determine through experiments according to the actual situation.
424. Determine that the tracking target does not exceed the known three-dimensional map.
425. Determine that the tracking target exceeds the known three-dimensional map.
In this embodiment, a small number of target pixels within the target frame, i.e. a small degree of overlap between the two, indicates that the point cloud collected by the depth sensor contains essentially no data of the tracking target. It can therefore be considered that the tracking target has moved beyond the detection range, that is, beyond the known three-dimensional map.
430. Plan a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map.
When the tracking target has not exceeded the detection range, the tracking algorithm behaves normally and no special handling is needed. The desired tracking path can be planned based on the tracking target with any suitable existing path planning algorithm.
440. Generate, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target.
When the tracking target has exceeded the detection range, a temporary virtual target must be provided in place of the real tracking target. The "virtual target point" is a point on the boundary of the three-dimensional map in the same direction as the real tracking target, which ensures that the mobile vehicle moves in the correct direction while tracking.
Those skilled in the art will understand that the virtual target point is the intersection of the ray from the mobile vehicle toward the tracking target with the boundary of the detection range. In some embodiments, it can be calculated by the following steps:
First, acquire the two-dimensional position information of the tracking target in the two-dimensional image. As shown in FIG. 5b, the tracking target usually occupies a certain area in the two-dimensional image (i.e., the target frame). The two-dimensional position information can therefore be any suitable, representative pixel position within the tracking target, for example the pixel at the target's center of gravity or center.
Second, generate, from the two-dimensional position information and the intrinsic parameter matrix of the image acquisition device, a reference point in the same direction as the tracking target. The "reference point" is the point obtained by remapping the two-dimensional position information of the tracking target back into three-dimensional space.
Because depth information is missing, there are in fact infinitely many candidate reference points corresponding to the two-dimensional position, one for each choice of depth. In practice, for computational simplicity, the reference point at depth 1 can be chosen.
Third, determine the direction of the tracking target from the reference point and the center of the three-dimensional map. As shown in FIG. 1, the center of the three-dimensional map is the origin where the mobile vehicle 10 is located. As with the tracking target, the center of gravity of the mobile vehicle 10, the installation position of the depth sensor, and so on can also be used to represent the mobile vehicle 10 as the center of the three-dimensional map.
Specifically, the direction of the tracking target can be represented by a unit vector. To compute it, the reference point can first be converted, through a second transformation matrix, into three-dimensional position information in the world coordinate system, so that it is expressed consistently with the center. Then the three-dimensional position of the center of the three-dimensional map is subtracted from that of the reference point to obtain a unit vector of the direction in which the tracking target lies.
Finally, calculate the intersection of the ray in the direction of the tracking target with the boundary of the three-dimensional map as the virtual target point. The ray is the straight line starting at the mobile vehicle 10 and extending outward through the tracking target 20. The virtual target point is obtained by computing the intersection of this ray with the boundary of the three-dimensional map in three-dimensional space.
Specifically, as shown in FIG. 5c, the virtual target point can be computed by the following steps:
510. Solve for the target distance between the center of the three-dimensional map and the boundary of the three-dimensional map along the direction of the unit vector.
In some embodiments, the target distance can be obtained as follows:
Compute the first, second, and third projection lengths, on the X, Y, and Z axes of the world coordinate system respectively, of the distance between the center of the three-dimensional map and its boundary along the direction of the unit vector.
Then select the shortest of the first, second, and third projection lengths as the target distance.
520. Determine, from the target distance and the unit vector, the three-dimensional position information of the virtual target point in the world coordinate system.
With the direction and distance known, the three-dimensional position of the virtual target point (i.e., its three-axis coordinates in the world coordinate system) can be computed.
The virtual target point obtained in this way is highly stable, provides good guidance, and supports tracking the virtual target point over long periods. It therefore meets the needs of situations where the target stays out of detection range for a long time, and can also serve as a fallback when the depth sensor's long-range detection accuracy is poor, avoiding interference from erroneous depth-sensor detections.
450. Plan a second tracking path according to the three-dimensional position of the virtual target point.
After the virtual target point is determined, the corresponding tracking path can be planned with the same path planning algorithm as in step 430 or a different one.
460. Move along the first tracking path or the second tracking path.
The terms "first tracking path" and "second tracking path" are only used to distinguish step 430 from step 450 and do not define specific paths. In actual operation, no distinction is made between the generated tracking paths; it suffices to control the driving mechanism so that the mobile vehicle moves along the tracking path.
The target tracking method provided by the embodiment of the present invention can still perform path planning when the tracking target exceeds the sensor's range or the range of the known three-dimensional map, keeps pursuing the tracking target, and brings it back within the depth sensor's range as far as possible. It effectively overcomes the limitation that depth sensors impose on automatic tracking algorithms and has good application prospects.
To fully illustrate the target tracking method provided by the embodiment of the present invention, the path planning process based on the virtual target point is described in detail below with reference to the application scenario shown in FIG. 1.
1. Determining whether the tracking target exceeds the known three-dimensional map:
First, the raw point cloud information (x, y, z) obtained by the depth sensor is converted by formula (1) into the point cloud information (x′, y′, z′) in the coordinate system of the image acquisition device:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = T \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \tag{1}$$

where T is the transformation matrix from the depth sensor to the image acquisition device (such as a color camera). The transformation matrix T can be computed from the installation positions of the various sensors on the mobile vehicle (accelerometers, inertial measurement units, GPS modules, etc.) and of the depth sensor.
Then, the three-dimensional point cloud information (x′, y′, z′) in the coordinate system of the image acquisition device is projected onto the two-dimensional picture by formula (2), determining the pixel (u, v) corresponding to each piece of point cloud information:

$$z' \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = k \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} \tag{2}$$

where the matrix k is the intrinsic parameter matrix of the color camera, which can be determined by monocular camera calibration.
Finally, after the corresponding pixels (u, v) are determined, the number of pixels falling inside the target frame can be counted. When the number of such pixels exceeds the preset threshold and the corresponding three-dimensional points are within the range of the three-dimensional map, the tracking target can be judged to be inside the three-dimensional map; otherwise, the tracking target is judged to have exceeded the range of the three-dimensional map. A code sketch of this projection-and-count test follows.
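A minimal Python sketch of the projection-and-count test of formulas (1) and (2), for illustration only: the function name and box representation are assumptions, T is assumed to be a 4x4 homogeneous transform from the depth sensor to the camera and k a 3x3 intrinsic matrix, and the additional check that the 3D points lie within the map is omitted for brevity.

```python
import numpy as np

def target_exceeds_map(points_xyz, T_depth_to_cam, k, box_uv, thresh):
    """Project depth-sensor points into the image and count how many fall
    inside the target frame; too few hits means the tracking target has
    left the known 3D map. box_uv = (u_min, v_min, u_max, v_max)."""
    points_xyz = np.asarray(points_xyz, dtype=float)
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous coords
    cam = (T_depth_to_cam @ pts_h.T).T[:, :3]        # formula (1)
    cam = cam[cam[:, 2] > 0]                         # keep points in front of the camera
    uv = (k @ cam.T).T                               # formula (2), before dehomogenization
    uv = uv[:, :2] / uv[:, 2:3]
    u_min, v_min, u_max, v_max = box_uv
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return int(inside.sum()) <= thresh               # few overlaps: target has left the map
```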
2. Generating the virtual target point:
First, in this embodiment, the center point of the target frame can be taken as the two-dimensional position of the tracking target on the two-dimensional image, written $P_c = (u, v)$.
Since the depth sensor can no longer provide depth information at this point, the point at unit depth on the ray toward the target can be taken as the reference point, written $P_c'$. The relationship between the two-dimensional position and the reference point is given by formula (3):

$$P_c = k \times P_c' \tag{3}$$

where k is the intrinsic parameter matrix of the image acquisition device, and $P_c$ is written in homogeneous form as $(u, v, 1)^T$.
With k and the two-dimensional position information known, the reference point can be computed as shown in formula (4):

$$P_c' = k^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \tag{4}$$

The reference point $P_c'$ is then converted from the coordinate system of the image acquisition device to the world coordinate system by formula (5):

$$P_c'' = T \times P_c' \tag{5}$$

where T is the transformation matrix from the coordinate system of the image acquisition device to the world coordinate system. The world coordinate system is the coordinate system with the mobile vehicle as its origin.
As shown in FIG. 6, let $O(X_o, Y_o, Z_o)$ be the coordinates of the center of the mobile vehicle in the world coordinate system. The virtual target point P is the intersection of the ray $\vec{e}$ with the boundary of the known three-dimensional map, and can be expressed by formula (6):

$$P = O + K\,\vec{e} \tag{6}$$

where the ray $\vec{e}$ can be obtained by subtracting the center O of the mobile vehicle from the known reference point $P_c''$, as shown in formula (7):

$$\vec{e} = P_c'' - O \tag{7}$$

K is the length of the line segment |OP|, which can be computed from the perpendicular distance between the center O of the mobile vehicle and the boundary of the three-dimensional map (determined by the detection distance of the depth sensor).
Writing $0.5\,d_m$ for the perpendicular distance between the center O and the boundary, K has the following three possible solutions:

$$K_1 = \frac{0.5\,d_m}{|e_x|},\qquad K_2 = \frac{0.5\,d_m}{|e_y|},\qquad K_3 = \frac{0.5\,d_m}{|e_z|}$$

where $K_1$, $K_2$, and $K_3$ are the solutions obtained from the projections of the boundary distance on the X, Y, and Z axes of the world coordinate system, respectively.
The projections on the X, Y, and Z axes are solved separately, and the smallest positive solution is chosen as the length of the segment |OP|, as in formula (8):

$$K = \min\{K_1, K_2, K_3\},\quad (K_n > 0) \tag{8}$$

In summary, substituting formulas (4), (5), (7), and (8) into formula (6) gives the coordinates of the virtual target point P, as in formula (9):

$$P = (1 - K)\,O + K\,T P_c' \tag{9}$$
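Putting formulas (4) to (9) together, the following Python sketch computes the virtual target point, for illustration only: the function name and parameters are assumptions, T_cam_to_world is assumed to be a 4x4 homogeneous transform, and the map is again modeled as a cube of side d_m centered on the vehicle.

```python
import numpy as np

def virtual_target_point(pc_uv, k, T_cam_to_world, center_o, map_side):
    """Back-project the target-frame center to a depth-1 reference point,
    transform it to world coordinates, and intersect the ray from the
    vehicle center O with the map boundary, per formulas (4)-(9)."""
    o = np.asarray(center_o, dtype=float)
    pc = np.array([pc_uv[0], pc_uv[1], 1.0])
    pc_cam = np.linalg.inv(k) @ pc                             # formula (4): reference point at depth 1
    pc_world = (T_cam_to_world @ np.append(pc_cam, 1.0))[:3]   # formula (5)
    e = pc_world - o                                           # formula (7): ray direction
    with np.errstate(divide="ignore"):
        candidates = 0.5 * map_side / np.abs(e)                # per-axis solutions K1, K2, K3
    K = candidates[candidates > 0].min()                       # formula (8): smallest positive solution
    return o + K * e                                           # formulas (6)/(9): point on the boundary
```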
FIG. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit its concrete implementation.
As shown in FIG. 7, the electronic device may include: a processor 702, a communications interface 704, a memory 706, and a communication bus 708.
The processor 702, the communications interface 704, and the memory 706 communicate with one another through the communication bus 708. The communications interface 704 is used to communicate with network elements of other devices, such as clients or other servers. The processor 702 is used to execute a program 710 and may specifically perform the relevant steps of the target tracking method embodiments described above.
Specifically, the program 710 may include program code, and the program code includes computer operation instructions.
The processor 702 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 706 is used to store the program 710. The memory 706 may include high-speed RAM and may also include non-volatile memory, for example at least one disk memory.
The program 710 may specifically be used to cause the processor 702 to execute the target tracking method in any of the above method embodiments.
Those skilled in the art will further appreciate that the steps of the exemplary target tracking method described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the above description has generally described the components and steps of each example in terms of functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution.
Those skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention. The computer software may be stored in a computer-readable storage medium; when executed, the program may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, or the like.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the present invention, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present invention as described above exist; for brevity, they are not provided in detail. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or replace some of the technical features with equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

  1. A target tracking method, comprising:
    collecting position information of a tracking target;
    determining whether the tracking target exceeds a known three-dimensional map;
    if not, planning a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map;
    moving along the first tracking path to track the tracking target;
    if so, generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target;
    planning a second tracking path according to the three-dimensional position of the virtual target point;
    moving along the second tracking path to track the virtual target point.
  2. The method according to claim 1, wherein the determining whether the tracking target exceeds a known three-dimensional map specifically comprises:
    collecting a two-dimensional image containing the tracking target with an image acquisition device, and acquiring the corresponding point cloud information with a depth sensor;
    taking the area occupied by the tracking target on the two-dimensional image as a target frame;
    determining the target pixel corresponding to each piece of the point cloud information on the two-dimensional image;
    when the number of the target pixels within the target frame is greater than a preset threshold, determining that the tracking target does not exceed the known three-dimensional map;
    when the number of the target pixels within the target frame is less than or equal to the preset threshold, determining that the tracking target exceeds the known three-dimensional map.
  3. The method according to claim 2, wherein the determining the target pixel corresponding to each piece of the point cloud information on the two-dimensional image specifically comprises:
    converting the point cloud information, through a first transformation matrix, into three-dimensional position information in the coordinate system of the image acquisition device;
    projecting the three-dimensional position information into the two-dimensional image through the intrinsic parameter matrix of the image acquisition device to obtain the corresponding target pixels.
  4. The method according to claim 2, wherein the generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target specifically comprises:
    acquiring two-dimensional position information of the tracking target in the two-dimensional image;
    generating, according to the two-dimensional position information and the intrinsic parameter matrix of the image acquisition device, a reference point in the same direction as the tracking target;
    determining, from the reference point and the center of the three-dimensional map, the direction in which the tracking target lies;
    calculating the intersection of the ray in the direction of the tracking target with the boundary of the three-dimensional map as the virtual target point.
  5. The method according to claim 4, wherein the determining, from the reference point and the center of the three-dimensional map, the direction in which the tracking target lies specifically comprises:
    converting the reference point, through a second transformation matrix, into three-dimensional position information in the world coordinate system;
    subtracting the three-dimensional position information of the center of the three-dimensional map from that of the reference point to obtain a unit vector of the direction in which the tracking target lies.
  6. The method according to claim 5, wherein the calculating the intersection of the ray in the direction of the tracking target with the boundary of the three-dimensional map as the virtual target point specifically comprises:
    solving for the target distance between the center of the three-dimensional map and the boundary of the three-dimensional map along the direction of the unit vector;
    determining, from the target distance and the unit vector, the three-dimensional position information of the virtual target point in the world coordinate system.
  7. The method according to claim 6, wherein the solving for the target distance between the center of the three-dimensional map and the boundary of the three-dimensional map along the direction of the unit vector specifically comprises:
    calculating the first projection length, on the X axis of the world coordinate system, of the distance between the center of the three-dimensional map and its boundary along the direction of the unit vector;
    calculating the second projection length, on the Y axis of the world coordinate system, of that distance;
    calculating the third projection length, on the Z axis of the world coordinate system, of that distance;
    selecting the shortest of the first projection length, the second projection length, and the third projection length as the target distance.
  8. The method according to claim 1, wherein the moving along the second tracking path to track the virtual target point specifically comprises:
    when moving along the second tracking path, increasing the moving speed until the maximum moving speed is reached or it is determined that the tracking target no longer exceeds the three-dimensional map.
  9. A target tracking apparatus, comprising:
    a collection module for collecting position information of a tracking target;
    a detection module for determining whether the tracking target exceeds a known three-dimensional map;
    a virtual target point generation module for generating, on the boundary of the three-dimensional map, a virtual target point in the same direction as the tracking target;
    a path planning module for planning a first tracking path according to the three-dimensional position of the tracking target in the three-dimensional map when the tracking target does not exceed the known three-dimensional map, and for planning a second tracking path according to the three-dimensional position of the virtual target point when the tracking target exceeds the known three-dimensional map;
    a tracking module for moving along the first tracking path or the second tracking path to track the tracking target.
  10. An electronic device, comprising: a processor and a memory communicatively connected to the processor;
    wherein the memory stores computer program instructions which, when invoked by the processor, cause the processor to perform the target tracking method according to any one of claims 1-8.
  11. A mobile vehicle, comprising:
    a vehicle body provided with a depth sensor for collecting point cloud information and an image acquisition device for collecting two-dimensional image information;
    a driving mechanism for outputting power to drive the vehicle body to move; and
    a controller configured to receive the point cloud information collected by the depth sensor and the two-dimensional image information collected by the image acquisition device, execute the target tracking method according to any one of claims 1-8, and control the mobile vehicle, through the driving mechanism, to keep tracking the tracking target.
PCT/CN2021/101518 2020-06-30 2021-06-22 Target tracking method and apparatus, electronic device and mobile vehicle WO2022001748A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010621183.4A CN111784748B (zh) 2020-06-30 2020-06-30 Target tracking method and apparatus, electronic device and mobile vehicle
CN202010621183.4 2020-06-30

Publications (1)

Publication Number Publication Date
WO2022001748A1 true WO2022001748A1 (zh) 2022-01-06

Family

ID=72760843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101518 WO2022001748A1 (zh) 2020-06-30 2021-06-22 Target tracking method and apparatus, electronic device and mobile vehicle

Country Status (2)

Country Link
CN (1) CN111784748B (zh)
WO (1) WO2022001748A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419152A (zh) * 2022-01-14 2022-04-29 中国农业大学 Target detection and tracking method and system based on multi-dimensional point cloud features
EP4261647A3 (en) * 2022-04-13 2024-01-10 The Boeing Company Aircraft guidance to moving target point
CN117648037A (zh) * 2024-01-29 2024-03-05 北京未尔锐创科技有限公司 Target line-of-sight tracking method and system
US11975869B2 (en) 2022-05-02 2024-05-07 The Boeing Company Lighting system inspection using an unmanned aerial vehicle

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784748B (zh) * 2020-06-30 2023-05-02 深圳市道通智能航空技术股份有限公司 Target tracking method and apparatus, electronic device and mobile vehicle
CN112378397B (zh) * 2020-11-02 2023-10-10 中国兵器工业计算机应用技术研究所 Method and device for tracking a target with an unmanned aerial vehicle, and unmanned aerial vehicle
CN112419417B (zh) * 2021-01-25 2021-05-18 成都翼比特自动化设备有限公司 Unmanned-aerial-vehicle-based photographing point positioning method and related apparatus
CN113066100A (zh) * 2021-03-25 2021-07-02 东软睿驰汽车技术(沈阳)有限公司 Target tracking method, apparatus, device, and storage medium
CN114147664A (zh) * 2021-12-09 2022-03-08 苏州华星光电技术有限公司 Jig replacement method and method for manufacturing an electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180297207A1 (en) * 2017-04-14 2018-10-18 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN109900272A (zh) * 2019-02-25 2019-06-18 浙江大学 Visual positioning and mapping method and apparatus, and electronic device
CN110472553A (zh) * 2019-08-12 2019-11-19 北京易航远智科技有限公司 Target tracking method fusing images and laser point clouds, computing device, and medium
CN111784748A (zh) * 2020-06-30 2020-10-16 深圳市道通智能航空技术有限公司 Target tracking method and apparatus, electronic device and mobile vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887271A (zh) * 2010-07-19 2010-11-17 东莞职业技术学院 Path planning method for a mobile robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180297207A1 (en) * 2017-04-14 2018-10-18 TwoAntz, Inc. Visual positioning and navigation device and method thereof
CN109900272A (zh) * 2019-02-25 2019-06-18 浙江大学 Visual positioning and mapping method and apparatus, and electronic device
CN110472553A (zh) * 2019-08-12 2019-11-19 北京易航远智科技有限公司 Target tracking method fusing images and laser point clouds, computing device, and medium
CN111784748A (zh) * 2020-06-30 2020-10-16 深圳市道通智能航空技术有限公司 Target tracking method and apparatus, electronic device and mobile vehicle

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419152A (zh) * 2022-01-14 2022-04-29 中国农业大学 Target detection and tracking method and system based on multi-dimensional point cloud features
CN114419152B (zh) * 2022-01-14 2024-04-26 中国农业大学 Target detection and tracking method and system based on multi-dimensional point cloud features
EP4261647A3 (en) * 2022-04-13 2024-01-10 The Boeing Company Aircraft guidance to moving target point
US11975869B2 (en) 2022-05-02 2024-05-07 The Boeing Company Lighting system inspection using an unmanned aerial vehicle
CN117648037A (zh) * 2024-01-29 2024-03-05 北京未尔锐创科技有限公司 Target line-of-sight tracking method and system
CN117648037B (zh) * 2024-01-29 2024-04-19 北京未尔锐创科技有限公司 Target line-of-sight tracking method and system

Also Published As

Publication number Publication date
CN111784748B (zh) 2023-05-02
CN111784748A (zh) 2020-10-16

Similar Documents

Publication Publication Date Title
WO2022001748A1 (zh) Target tracking method and apparatus, electronic device and mobile vehicle
CN110967011B (zh) Positioning method, apparatus, device, and storage medium
CN111990929B (zh) Obstacle detection method and apparatus, self-propelled robot, and storage medium
TWI558525B (zh) Robot and control method thereof
WO2018086130A1 (zh) Flight trajectory generation method, control device, and unmanned aerial vehicle
CN111344644B (zh) Techniques for motion-based automatic image capture
US11347238B2 (en) System and method for probabilistic multi-robot SLAM
WO2021078003A1 (zh) Obstacle avoidance method and apparatus for an unmanned vehicle, and unmanned vehicle
WO2016031105A1 (ja) Information processing device, information processing method, and program
CN207115193U (zh) Mobile electronic device for processing tasks in a task area
WO2019001237A1 (zh) Mobile electronic device and method in the mobile electronic device
WO2021027886A1 (zh) UAV flight control method and UAV
WO2021081774A1 (zh) Parameter optimization method and apparatus, control device, and aircraft
WO2020024182A1 (zh) Parameter processing method and apparatus, imaging device, and aircraft
CN115328153A (zh) Sensor data processing method and system, and readable storage medium
CN116416291A (zh) Automatic electronic fence generation method, real-time detection method, and apparatus
WO2020019175A1 (zh) Image processing method and device, imaging apparatus, and unmanned aerial vehicle
CN113701750A (zh) Underground multi-sensor fusion positioning system
WO2021016875A1 (zh) Aircraft landing method, unmanned aerial vehicle, and computer-readable storage medium
WO2020024150A1 (zh) Map processing method, device, and computer-readable storage medium
CN110930506A (zh) Three-dimensional map generation method, mobile device, and computer-readable storage medium
US20220315220A1 (en) Autonomous Aerial Navigation In Low-Light And No-Light Conditions
JP2020012774A (ja) Method for measuring a building
JP7437930B2 (ja) Mobile body and imaging system
JP2022011821A (ja) Information processing device, information processing method, and mobile robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21834341

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21834341

Country of ref document: EP

Kind code of ref document: A1