WO2020187095A1 - Target tracking method and apparatus, and unmanned aerial vehicle - Google Patents

Target tracking method and apparatus, and unmanned aerial vehicle

Info

Publication number
WO2020187095A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
detected
image
tracker
target frame
Prior art date
Application number
PCT/CN2020/078629
Other languages
English (en)
Chinese (zh)
Inventor
崔希鹏
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司 filed Critical 深圳市道通智能航空技术有限公司
Publication of WO2020187095A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/758 Involving statistics of pixels or of feature values, e.g. histogram matching

Definitions

  • This application relates to the technical field of unmanned aerial vehicles, in particular to a target tracking method, device and unmanned aerial vehicle.
  • Intelligent tracking of moving targets using drones has been widely applied, for example to fugitive tracking and abnormal target behavior analysis.
  • Filtering algorithms are often used to track targets because of their fast tracking speed.
  • However, a target tracking method based on a filtering algorithm tends to lose the target when the target is occluded or deformed, especially when tracking the target for a long time.
  • the purpose of the embodiments of the present invention is to provide a target tracking method, device and drone, which can reduce the loss rate of target tracking.
  • an embodiment of the present invention provides a target tracking method, the method including:
  • a tracker to detect the target in the image to be detected, wherein the tracker is obtained by training based on the characteristics of the target;
  • the target is not detected in the image to be detected, input the image to be detected into a preset target detection model based on deep learning to obtain at least one candidate target frame and a category corresponding to the candidate target frame;
  • the determining whether the target is detected in the image to be detected includes:
  • the selecting one candidate target frame from the at least one candidate target frame according to the category and color characteristics of the target includes:
  • the determining whether the target is detected in the feature detection image includes:
  • the method further includes:
  • the position corresponding to the maximum response value is taken as the position of the target in the image to be detected.
  • the feature of the target includes the initial target frame of the target. Then, before detecting the target in the image to be detected by the tracker, the method further includes:
  • the color feature includes a color statistical histogram.
  • the method further includes:
  • a color statistical histogram of the target is obtained based on the initial target frame.
  • the method further includes:
  • if a candidate target frame with the same category as the target and a color-feature similarity greater than the preset similarity threshold is not obtained from the at least one candidate target frame, increase the preset timing value, and obtain a new image to be detected for detection again based on the preset target detection model;
  • if a candidate target frame with the same category as the target and a maximum color-feature similarity greater than the preset similarity threshold is obtained from the at least one candidate target frame, reset the preset timing to zero;
  • if the preset timing value exceeds a preset threshold, the initial target frame is acquired again, and the tracker is updated based on the initial target frame.
  • the tracker is a tracker based on a kernel correlation filtering algorithm.
  • the preset target detection model is a target detection model based on an SSD algorithm.
  • an embodiment of the present invention provides a target tracking device, the device including:
  • Image acquisition module for acquiring the image to be detected
  • the tracker detection module is configured to use a tracker to detect the target in the image to be detected, wherein the tracker is obtained by training based on the characteristics of the target;
  • the judgment module is used to judge whether the target is detected in the image to be detected
  • the target detection module is configured to, if the target is not detected in the image to be detected, input the image to be detected into a preset target detection model based on deep learning to obtain at least one candidate target frame and the category corresponding to the candidate target frame;
  • a candidate target frame selection module configured to select a candidate target frame from the at least one candidate target frame according to the category and color characteristics of the target;
  • the first tracker update module is used to retrain the tracking model based on the selected candidate target frame and update the tracker.
  • the judgment module is specifically used for:
  • the candidate target frame selection module is specifically configured to:
  • the judgment module is also specifically used to:
  • the device further includes:
  • the target position determining module is configured to use the position corresponding to the maximum response value as the position of the target in the image to be detected.
  • the feature of the target includes the initial target frame of the target
  • the device also includes a tracker training module, which is used, before the tracker is used to detect the target in the image to be detected, to:
  • the color feature includes a color statistical histogram.
  • the device further includes:
  • the target color feature acquisition module is configured to obtain a color statistical histogram of the target based on the initial target frame.
  • the device further includes a second tracker update module for:
  • if a candidate target frame with the same category as the target and a color-feature similarity greater than the preset similarity threshold is not obtained from the at least one candidate target frame, increase the preset timing value, and obtain a new image to be detected for detection again based on the preset target detection model;
  • if a candidate target frame with the same category as the target and a maximum color-feature similarity greater than the preset similarity threshold is obtained from the at least one candidate target frame, reset the preset timing to zero;
  • if the preset timing value exceeds a preset threshold, the initial target frame is acquired again, and the tracker is updated based on the initial target frame.
  • the tracker is a tracker based on a kernel correlation filtering algorithm.
  • the preset target detection model is a target detection model based on an SSD algorithm.
  • an embodiment of the present invention provides an unmanned aerial vehicle, the unmanned aerial vehicle including a fuselage, an arm connected to the fuselage, a power system provided on the arm, and a camera device and a tracking chip provided on the fuselage, the camera device and the tracking chip being electrically connected, wherein the camera device is used to obtain the image to be detected, and the tracking chip includes:
  • At least one processor and,
  • a memory communicatively connected with the at least one processor; wherein,
  • the memory stores instructions that can be executed by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the foregoing method.
  • an embodiment of the present invention provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions.
  • When the computer-executable instructions are executed by a drone, the drone is made to execute the above-mentioned method.
  • In the embodiments of the present invention, the target in the image to be detected is first detected by the tracker obtained in advance. If the target is not detected in the image to be detected, the preset target detection model detects the image to be detected and obtains at least one candidate target frame in the image to be detected and a category corresponding to the candidate target frame. Then, a candidate target frame is selected from the at least one candidate target frame according to the color feature and category of the target, and the tracking model is retrained to obtain a new tracker. That is, when the target in the image to be detected cannot be detected by the tracker, the candidate target frame corresponding to the target obtained by the target detection model is used to update the tracker. This can improve the detection ability of the tracker and reduce the loss rate of target tracking.
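The fallback logic summarized above can be sketched as follows. This is a minimal illustration rather than the patented implementation; `Candidate`, `tracker_detect`, and `detect_candidates` are hypothetical stand-ins for the tracker's output and the deep-learning detection model:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Candidate:
    box: Tuple[int, int, int, int]   # (x, y, w, h) of a candidate target frame
    category: str                    # class predicted by the detection model
    similarity: float                # color-feature similarity to the target

def track_frame(tracker_detect: Callable, detect_candidates: Callable,
                target_category: str, threshold: float, frame):
    """One iteration: try the tracker first; on a miss, fall back to the
    detection model and pick the best same-category, color-similar frame.
    Returns (position_or_None, retrain_needed)."""
    pos = tracker_detect(frame)
    if pos is not None:
        return pos, False                     # tracker found the target
    candidates: List[Candidate] = detect_candidates(frame)
    same_cat = [c for c in candidates if c.category == target_category]
    if same_cat:
        best = max(same_cat, key=lambda c: c.similarity)
        if best.similarity > threshold:
            return best.box, True             # retrain the tracker on this frame
    return None, False                        # target lost in this image
```

The `retrain_needed` flag signals the caller to update the tracker with the selected candidate frame, matching the update step described above.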
  • FIG. 1 is a schematic diagram of an application scenario of a target tracking method and device according to an embodiment of the present invention
  • Figure 2 is a schematic structural diagram of an embodiment of the drone of the present invention.
  • Figure 3 is a schematic diagram of a target frame in an embodiment of the present invention.
  • Figure 4 is a schematic structural diagram of an embodiment of the drone of the present invention.
  • FIG. 5 is a schematic flowchart of an embodiment of the target tracking method of the present invention.
  • Figure 6a is a schematic diagram of training a tracker model in an embodiment of the present invention.
  • Fig. 6b is a schematic diagram of using a tracker to detect an image to be detected in an embodiment of the present invention
  • FIG. 7 is a schematic structural diagram of an embodiment of the target tracking device of the present invention.
  • FIG. 8 is a schematic structural diagram of an embodiment of the target tracking device of the present invention.
  • Fig. 9 is a schematic diagram of the hardware structure of the tracking chip or the controller in an embodiment of the unmanned aerial vehicle of the present invention.
  • the target tracking method, device and drone provided by the embodiments of the present invention can be applied to the application scenario shown in FIG. 1. Please refer to FIG. 1.
  • the drone 100 and the target 200 are included.
  • the UAV 100 is used to track the target 200.
  • the UAV 100 may be any suitable unmanned aerial vehicle, including fixed-wing and rotary-wing unmanned aerial vehicles, such as helicopters, quadrotors, and aircraft with other numbers of rotors and/or rotor configurations.
  • the UAV 100 may also be other movable objects, such as a manned aircraft, a model airplane, an unmanned airship, and an unmanned hot air balloon.
  • the target 200 can be any suitable movable or non-movable object, including vehicles, people, animals, buildings, mountains and rivers, etc.
  • the drone 100 includes a fuselage 10, an arm connected to the fuselage 10, a power system provided on the arm, and a control system provided in the fuselage 10.
  • the power system is used to provide thrust, lift, etc. for the flight of the UAV 100.
  • the control system is the core of the UAV 100 and can include multiple functional units, such as a flight control system, a tracking system, a path planning system, and other systems with specific functions.
  • the tracking system is used to obtain the position and tracking distance of the tracking target (that is, the distance between the UAV 100 and the target 200), and the like.
  • the flight control system includes various sensors (such as gyroscopes, accelerometers, etc.), and the flight control system is used to control the flight attitude of the UAV.
  • the path planning system is used to plan the flight path of the UAV based on the location of the tracking target, and instruct the flight control system to control the flight attitude of the UAV to make the UAV fly along the specified path.
  • the tracking system includes a camera device 20 and a tracking chip 30.
  • the camera device 20 and the tracking chip 30 are electrically connected.
  • the camera device 20 is used to capture the image to be detected, and the tracking chip 30 is used to obtain the image to be detected and determine the position of the target in the image to be detected, so as to obtain the true position of the target.
  • the camera device 20 can be a high-definition digital camera or other camera device.
  • the camera device 20 can be set at any suitable location that is convenient for shooting.
  • the camera device 20 is installed on the bottom of the fuselage 10 through a gimbal.
  • the tracking chip 30 can track the target according to the characteristics of the target.
  • the characteristics of the target may be a target frame for frame selection of the target.
  • In Figure 3, the small figure represents a target, and the dashed frame enclosing it represents a target frame.
  • Some application scenarios of the drone 100 also include an electronic device 300, and the target frame can be sent to the drone 100 through the electronic device 300.
  • the electronic device 300 may display a picture taken by the drone 100, and the user can select a target in the picture to obtain an initial target frame, and then upload the initial target frame to the drone 100.
  • the electronic device 300 is, for example, a smart phone, a tablet computer, a computer, a remote control, and the like.
  • the user can interact with the electronic device 300 through any suitable type of one or more user interaction devices, and these user interaction devices may be a mouse, a button, a touch screen, and the like.
  • the drone 100 and the electronic device 300 can establish a communication connection through wireless communication modules (such as a signal receiver and a signal transmitter) respectively provided in each of them, through which data/commands can be uploaded or issued.
  • the initial target frame may also be stored in the storage device or tracking chip 30 of the drone 100 in advance.
  • a tracking model may be trained based on the initial target frame to obtain a tracker, and the tracking chip 30 uses the tracker to detect the target in the image to be detected obtained by the camera 20.
  • the target is included in the image to be detected, and the tracking chip 30 can determine the position of the target in the image to be detected, so as to obtain the true position of the target.
  • In some cases, the target cannot be detected in the image to be detected due to reasons such as the target being occluded or deformed; in this case, the true position of the target cannot be obtained.
  • a pre-obtained target detection model based on deep learning may be used to identify the image to be detected, and to obtain at least one candidate target frame in the image to be detected and a category corresponding to the candidate target frame. Then, a candidate target frame is selected from at least one candidate target frame according to the color feature and category of the target, the tracking model is retrained, and the tracker is updated.
  • the embodiment of the present invention uses the retrieved candidate target frame to update the tracker, which can improve the detection ability of the tracker and reduce the target tracking loss rate.
  • In some embodiments, a separate tracking chip 30 may not be provided, and the methods executed by the tracking chip 30, the flight control system, and the path planning system may be executed by one or more other controllers (please refer to the controller 40 in FIG. 4). That is, the one or more other controllers determine the position of the target in the image to be detected taken by the camera device 20, so as to obtain the true position of the target, and, based on the position of the tracking target, plan the flight path of the UAV and control the flight attitude of the UAV so that the UAV flies along the designated path.
  • FIG. 5 is a schematic flowchart of a target tracking method provided by an embodiment of the present invention.
  • the method may be executed by the drone 100 in FIG. 1 (specifically, in some embodiments, the method is executed by the tracking chip 30 of the tracking system; in other embodiments, the method is executed by the controller 40 of the drone 100). As shown in FIG. 5, the method includes:
  • 101 Obtain the image to be detected. The image to be detected is obtained by the camera device 20 of the drone 100. Before the camera device 20 captures the image to be detected, the drone 100 needs to point the camera device 20 toward the target.
  • 102 Use a tracker to detect the target in the image to be detected, where the tracker is obtained by training based on the characteristics of the target.
  • the tracker may be directly loaded on the drone 100 after being obtained by other devices through training the tracking model.
  • the tracker is obtained by the UAV 100 itself by training a tracking model.
  • In some embodiments, before the tracker is used to detect the target in the image to be detected, the method further includes a step of training the tracker: obtaining an initial target frame, training a tracking model based on the initial target frame, and obtaining the tracker.
  • the initial target frame may be stored in advance on the drone 100, or uploaded by the electronic device 300 to the drone 100.
  • the tracker is a tracker based on the Kernel Correlation Filter (KCF) algorithm.
  • In other embodiments, other correlation filter trackers may also be used. The following takes the KCF tracker as an example to illustrate the principles of tracker training and of using the tracker to detect the target in the image to be detected.
  • the initial target frame (the frame indicated by the dashed line, which is a positive sample) is cyclically shifted to obtain multiple sample frames.
  • the labels corresponding to the sample frames are assigned according to their distance from the positive sample: the closer a sample frame is, the larger its label value.
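The cyclic-shift sample generation and distance-based labelling described above can be sketched as follows. The Gaussian label shape and `sigma` are assumptions for illustration (KCF implementations commonly use Gaussian regression targets); the description only requires that closer shifts get larger labels:

```python
import numpy as np

def cyclic_samples(patch: np.ndarray):
    """All cyclic shifts of the base patch; each shift is one training sample."""
    h, w = patch.shape
    return [np.roll(patch, (dy, dx), axis=(0, 1))
            for dy in range(h) for dx in range(w)]

def gaussian_labels(h: int, w: int, sigma: float = 2.0) -> np.ndarray:
    """Regression targets: the label is largest (1.0) at zero shift and
    decays with the circular shift distance, matching 'the closer, the
    larger the label value'."""
    dy = np.minimum(np.arange(h), h - np.arange(h))   # circular distance per row
    dx = np.minimum(np.arange(w), w - np.arange(w))   # circular distance per column
    d2 = dy[:, None] ** 2 + dx[None, :] ** 2
    return np.exp(-d2 / (2 * sigma ** 2))
```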
  • the dashed frame is first cyclically shifted to obtain multiple frames to be detected.
  • the tracker is used to calculate the response value of each frame to be detected, and the position of the frame to be detected with the largest response value is the position of the target in the image to be detected, so that the true position of the target can be obtained.
  • the frame to be detected with the largest response value should be the frame represented by the bold line.
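The response computation can be illustrated with a minimal Fourier-domain cross-correlation: evaluating all cyclic shifts at once in the frequency domain is what makes correlation-filter trackers fast. This sketch omits the kernel trick and regularization of a full KCF tracker and treats the learned filter as a single template:

```python
import numpy as np

def response_map(template: np.ndarray, patch: np.ndarray) -> np.ndarray:
    """Cross-correlate a template with a search patch in the Fourier domain;
    every cyclic shift of the patch is evaluated implicitly in one pass."""
    F = np.fft.fft2(template)
    G = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(np.conj(F) * G))

def best_shift(template: np.ndarray, patch: np.ndarray):
    """(row, col) of the maximum response = estimated target displacement."""
    r = response_map(template, patch)
    return np.unravel_index(np.argmax(r), r.shape)
```

For a patch that is a circularly shifted copy of the template, the peak of the response map sits exactly at the shift, which is how the position of the target in the image to be detected is recovered.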
  • In some cases, for example when the target is occluded or deformed, the target will not be detected in the image to be detected even though the KCF tracker is used to detect the target.
  • 104 If the target is not detected in the image to be detected, input the image to be detected into a preset target detection model based on deep learning to obtain at least one candidate target frame and the category corresponding to the candidate target frame.
  • the image to be detected is input into the preset target detection model for identification, and multiple candidate target frames of each target in the image to be detected and the categories corresponding to the candidate target frames are obtained.
  • the preset target detection model may be obtained by other devices by training a neural network model based on deep learning, and directly loaded on the drone 100.
  • the preset target detection model is obtained by the drone 100 itself by training a neural network model based on deep learning.
  • the preset target detection model can be obtained by training on a large amount of sample data and the labels (i.e., categories) corresponding to the sample data, for example, by training on the PASCAL VOC data set.
  • the preset target detection model is a network model based on the SSD (Single Shot MultiBox Detector) algorithm.
  • it can also be replaced by other deep learning networks, for example, YOLO (You Only Look Once), Fast R-CNN (Regions with CNN features), etc.
  • 105 Select a candidate target frame from the at least one candidate target frame according to the category and color feature of the target.
  • the candidate target frame corresponding to the target can be selected according to the target category and color characteristics.
  • First, candidate target frames whose category is the same as that of the target are selected from the at least one candidate target frame.
  • Then, from the candidate target frames with the same category, the candidate target frame whose color feature is the most similar to the color feature of the target and whose similarity is greater than a preset similarity threshold is selected. That is, the color feature of each candidate target frame is matched against the color feature of the target, and if the similarity of the most similar candidate target frame is greater than the preset similarity threshold, that candidate target frame is used to update the tracker.
  • A value that gives good matching results in practice can be selected as the preset similarity threshold.
  • the Euclidean distance between the color feature of the target and the color feature of the candidate target frame can be calculated.
  • the color feature includes a color statistical histogram.
  • the color statistical histogram of the target can be obtained directly based on the initial target frame.
  • the color statistical histogram may be a color statistical histogram of part or all of the three channels of R, G, and B.
  • the color value 0-255 can be quantized in steps, for example, quantized to 0-31 in steps of 8.
  • the image corresponding to the initial target frame is cut and divided into m × n small blocks, and the number of occurrences of each color value in each small block is counted; that is, the color statistical histogram of the target is obtained.
  • By comparing the color statistical histogram of the candidate target frame with that of the target, the similarity between the two can be obtained.
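A minimal sketch of the quantized, block-wise color statistical histogram and a Euclidean-distance-based similarity follows. The (4, 4) block grid and the mapping from distance to a (0, 1] similarity score are assumptions for illustration; the description specifies only the quantization (e.g. 0-255 to 0-31 in steps of 8), the m × n blocks, and the use of Euclidean distance:

```python
import numpy as np

def color_histogram(patch: np.ndarray, bins: int = 32, blocks=(4, 4)) -> np.ndarray:
    """Quantize 0-255 color values into `bins` levels (step 256 // bins,
    i.e. step 8 for 32 bins as in the description) and count them per
    spatial block and per R/G/B channel of an H x W x 3 patch."""
    step = 256 // bins
    q = (patch // step).astype(np.intp)        # quantized color values 0..bins-1
    m, n = blocks
    h, w = q.shape[:2]
    counts = []
    for i in range(m):
        for j in range(n):
            block = q[i * h // m:(i + 1) * h // m, j * w // n:(j + 1) * w // n]
            for c in range(3):                 # R, G, B channels
                counts.append(np.bincount(block[..., c].ravel(), minlength=bins))
    v = np.concatenate(counts).astype(float)
    return v / v.sum()                         # normalize for size-independent comparison

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Map the Euclidean distance between two histograms to a (0, 1] score."""
    return 1.0 / (1.0 + float(np.linalg.norm(h1 - h2)))
```

A candidate frame would then be compared against the target by computing `similarity(color_histogram(candidate_patch), color_histogram(target_patch))` and checking it against the preset similarity threshold.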
  • the target in the image to be detected is first detected by the tracker obtained in advance, and if the target is not detected in the image to be detected, the image to be detected is detected through a preset target detection model, Obtain at least one candidate target frame in the image to be detected and the category corresponding to the candidate target frame. Then, a candidate target frame is selected from the at least one candidate target frame according to the color feature and category of the target, and the tracking model is retrained to obtain a new tracker. That is, when the target in the image to be detected cannot be detected by the tracker, the candidate target frame obtained by the target detection model is used to update the tracker. It can improve the detection ability of the tracker and reduce the loss rate of target tracking.
  • the images to be detected are continuous frames, and the detection of the images to be detected is performed continuously.
  • a timing may be preset and assigned an initial value. If a candidate target frame with the same category as the target and a color-feature similarity greater than the preset similarity threshold is not obtained from the at least one candidate target frame output by the preset target detection model, the preset timing value is increased, and a new image to be detected is obtained for re-detection based on the preset target detection model. If a candidate target frame with the same category as the target and a maximum color-feature similarity greater than the preset similarity threshold is obtained from the at least one candidate target frame, the preset timing is cleared.
  • the timing function can be realized by a counter or a timer.
  • the preset threshold can take any suitable value, for example, a value equivalent to 30 seconds or 1 minute.
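The preset-timing bookkeeping can be sketched as a simple counter; `RedetectionTimer` and its interface are illustrative names, not from the patent:

```python
class RedetectionTimer:
    """Count consecutive detection attempts in which no matching candidate
    frame is found; once the count exceeds a threshold, the caller should
    acquire the initial target frame again and rebuild the tracker."""

    def __init__(self, threshold: int):
        self.threshold = threshold   # e.g. a frame count equivalent to 30 s
        self.count = 0

    def update(self, candidate_found: bool) -> bool:
        """Return True when the target should be declared lost."""
        if candidate_found:
            self.count = 0           # reset the preset timing to zero
        else:
            self.count += 1          # increase the preset timing value
        return self.count > self.threshold
```

Whether this is driven by a frame counter or a wall-clock timer is an implementation choice; the description allows either.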
  • an embodiment of the present invention also provides a target tracking device, which can be used for the drone shown in FIG. 1, and the target tracking device 700 includes:
  • the image acquisition module 701 is used to acquire an image to be detected
  • the tracker detection module 702 is configured to use a tracker to detect a target in the image to be detected, wherein the tracker is obtained by training based on the characteristics of the target;
  • the judgment module 703 is used to judge whether the target is detected in the image to be detected
  • the target detection module 704 is configured to, if the target is not detected in the image to be detected, input the image to be detected into a preset target detection model based on deep learning to obtain at least one candidate target frame and the category corresponding to the candidate target frame;
  • Candidate target frame selection module 705, configured to select one candidate target frame from the at least one candidate target frame according to the category and color characteristics of the target;
  • the first tracker update module 706 is configured to retrain the tracking model based on the selected candidate target frame to update the tracker.
  • the target in the image to be detected is first detected by the tracker obtained in advance, and if the target is not detected in the image to be detected, the image to be detected is detected through a preset target detection model, Obtain at least one candidate target frame in the image to be detected and the category corresponding to the candidate target frame. Then, a candidate target frame is selected from the at least one candidate target frame according to the color feature and category of the target, and the tracking model is retrained to obtain a new tracker. That is, when the target in the image to be detected cannot be detected by the tracker, the candidate target frame obtained by the target detection model is used to update the tracker. It can improve the detection ability of the tracker and reduce the loss rate of target tracking.
  • the judgment module 703 is specifically configured to:
  • the candidate target frame selection module 705 is specifically configured to:
  • the judgment module 703 is also specifically configured to:
  • the device further includes:
  • the target position determining module 710 is configured to use the position corresponding to the maximum response value as the position of the target in the image to be detected.
  • the feature of the target includes the initial target frame of the target; the device further includes a tracker training module 707, which is used, before the tracker is used to detect the target in the image to be detected, to:
  • the color feature includes a color statistical histogram.
  • the device further includes:
  • the target color feature obtaining module 708 is configured to obtain a color statistical histogram of the target based on the initial target frame.
  • the device further includes a second tracker update module 709 for:
  • if a candidate target frame with the same category as the target and a color-feature similarity greater than the preset similarity threshold is not obtained from the at least one candidate target frame, increase the preset timing value, and obtain a new image to be detected for detection again based on the preset target detection model;
  • if a candidate target frame with the same category as the target and a maximum color-feature similarity greater than the preset similarity threshold is obtained from the at least one candidate target frame, reset the preset timing to zero;
  • if the preset timing value exceeds a preset threshold, the initial target frame is acquired again, and the tracker is updated based on the initial target frame.
  • the tracker is a tracker based on a kernel correlation filtering algorithm.
  • the preset target detection model is a target detection model based on an SSD algorithm.
  • the target tracking method described in any of the above embodiments can be executed by the tracking chip 30 or the controller 40 in the drone 100, and the tracking chip 30 (please refer to FIG. 2) or the controller 40 (please refer to FIG. 4) may have the hardware structure shown in FIG. 9.
  • the hardware structure includes:
  • One processor 1 is taken as an example in FIG. 9.
  • the processor 1 and the memory 2 may be connected through a bus or in other ways, and the connection through a bus is taken as an example in FIG. 9.
  • the memory 2 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the target tracking method in the embodiments of the present application (for example, the image acquisition module 701, the tracker detection module 702, the judgment module 703, the target detection module 704, the candidate target frame selection module 705, and the first tracker update module 706 shown in FIG. 7).
  • the processor 1 executes various functional applications and data processing of the controller or tracking chip by running the non-volatile software programs, instructions, and modules stored in the memory 2, that is, realizing the target tracking method of the foregoing method embodiment.
  • the memory 2 may include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the controller.
  • the memory 2 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 2 may optionally include storage remotely arranged relative to the processor 1, and this remote storage may be connected to the drone through a network. Examples of such networks include but are not limited to the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the one or more modules are stored in the memory 2, and when executed by the one or more processors 1, the target tracking method in any of the foregoing method embodiments is executed; for example, steps 101 to 106 of the method described in FIG. 5 are executed, and the functions of modules 701-706 in FIG. 7 and modules 701-710 in FIG. 8 are realized.
  • the embodiments of the present application provide a non-volatile computer-readable storage medium. The computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by one or more processors, for example, by the processor 1 in FIG. 9, so that the one or more processors can execute the target tracking method in any of the above-mentioned method embodiments, for example, execute steps 101 to 106 of the method in FIG. 5 described above, and realize the functions of modules 701-706 in FIG. 7 and modules 701-710 in FIG. 8.
  • the device embodiments described above are merely illustrative.
• the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
• each embodiment can be implemented by means of software plus a general-purpose hardware platform, and of course, it can also be implemented by hardware.
• a person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program instructing the relevant hardware.
• the program can be stored in a computer-readable storage medium; when the program is executed, the processes of the above-mentioned method embodiments may be carried out.
  • the storage medium can be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
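The processing performed by the stored modules (steps 101 to 106 executed by the processor 1) can be illustrated with a minimal, self-contained sketch. The `SimpleTracker`, `Candidate`, and `detect` stand-ins below are illustrative assumptions, not the tracker or deep-learning detection model of the embodiments; they only show how the fallback from tracking (102-103) to detection (104), same-category/colour selection (105), and tracker update (106) fit together.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical candidate target frame returned by the detection model."""
    box: tuple       # (x, y, w, h)
    category: str    # class label predicted by the detection model
    hist: list       # normalized colour histogram of the frame contents

def hist_similarity(h1, h2):
    # Histogram intersection: 1.0 means identical colour distributions.
    return sum(min(a, b) for a, b in zip(h1, h2))

class SimpleTracker:
    """Stand-in tracker that reports loss until it is (re)initialised."""
    def __init__(self):
        self.box, self.lost = None, True
    def init(self, image, box):
        self.box, self.lost = box, False
    def update(self, image):
        return None if self.lost else self.box

def track_frame(tracker, detect, image, target_category, target_hist):
    box = tracker.update(image)                      # step 102: run the tracker
    if box is not None:                              # step 103: target detected?
        return box
    candidates = detect(image)                       # step 104: detection model
    same_class = [c for c in candidates if c.category == target_category]
    if not same_class:
        return None                                  # no candidate of the right class
    # step 105: among same-category candidates, pick the closest colour match
    best = max(same_class, key=lambda c: hist_similarity(c.hist, target_hist))
    tracker.init(image, best.box)                    # step 106: update the tracker
    return best.box
```

On a frame where the tracker has lost the target, `track_frame` falls back to the detector and re-initialises the tracker on the best same-category, colour-matching candidate, so subsequent frames are tracked directly again.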

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target tracking method and apparatus, and an unmanned aerial vehicle. The method comprises: obtaining an image to be detected (101); using a tracker to detect a target in the image to be detected (102); determining whether the target is detected in the image to be detected (103); if the target cannot be detected in the image to be detected, inputting the image to be detected into a preset target detection model based on deep learning so as to obtain candidate target frames and the category corresponding to each candidate target frame (104); selecting a candidate target frame from the candidate target frames according to the category and the colour feature of the target (105); and updating the tracker on the basis of the selected candidate target frame (106). When the target in the image to be detected cannot be detected using the tracker, the candidate target frame corresponding to the target obtained by the target detection model is used to update the tracker. The detection capability of the tracker can be improved, and the target tracking loss rate can be reduced.
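The selection step (105) can be read as a colour-feature comparison between the target and each same-category candidate frame. The sketch below is an assumption for illustration: `histogram`, `bhattacharyya`, and `select_candidate` are hypothetical helpers, and a 4-bin grayscale histogram stands in for whatever colour feature the method actually uses.

```python
def histogram(pixels, bins=4):
    """Normalized histogram of 8-bit pixel values (0-255)."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [c / total for c in counts]

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two histograms (1.0 = identical)."""
    return sum((a * b) ** 0.5 for a, b in zip(h1, h2))

def select_candidate(candidates, target_hist):
    """candidates: list of (box, pixels); return the box whose colour
    histogram best matches the target's."""
    return max(candidates,
               key=lambda c: bhattacharyya(histogram(c[1]), target_hist))[0]
```

With such a measure, a candidate frame whose colour distribution matches the previously tracked target outranks same-category distractors, which is what allows the tracker to be re-initialised on the right object.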
PCT/CN2020/078629 2019-03-20 2020-03-10 Target tracking method and apparatus, and unmanned aerial vehicle WO2020187095A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910213970.2 2019-03-20
CN201910213970.2A CN109978045A (zh) 2019-03-20 2019-03-20 Target tracking method and apparatus, and unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
WO2020187095A1 true WO2020187095A1 (fr) 2020-09-24

Family

ID=67079721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/078629 WO2020187095A1 (fr) 2019-03-20 2020-03-10 Target tracking method and apparatus, and unmanned aerial vehicle

Country Status (2)

Country Link
CN (1) CN109978045A (fr)
WO (1) WO2020187095A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686921A (zh) * 2021-01-08 2021-04-20 西安羚控电子科技有限公司 Multi-interference UAV detection and tracking method based on trajectory features
CN112700469A (zh) * 2020-12-30 2021-04-23 武汉卓目科技有限公司 Visual target tracking method and apparatus based on the ECO algorithm and target detection
CN112733741A (zh) * 2021-01-14 2021-04-30 苏州挚途科技有限公司 Traffic sign recognition method and apparatus, and electronic device
CN113808170A (zh) * 2021-09-24 2021-12-17 电子科技大学长三角研究院(湖州) Anti-UAV tracking method based on deep learning
CN113807464A (zh) * 2021-09-29 2021-12-17 东南大学 Target detection method for UAV aerial images based on improved YOLO v5
CN117765031A (zh) * 2024-02-21 2024-03-26 四川盎芯科技有限公司 Image multi-target pre-tracking method and system for edge intelligent devices

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978045A (zh) * 2019-03-20 2019-07-05 深圳市道通智能航空技术有限公司 Target tracking method and apparatus, and unmanned aerial vehicle
CN110569840B (zh) * 2019-08-13 2023-05-16 浙江大华技术股份有限公司 Target detection method and related apparatus
CN110687922B (zh) * 2019-11-08 2022-10-28 湖北经济学院 Visual tracking method for an unmanned aerial vehicle, and unmanned aerial vehicle with visual tracking function
CN111242981A (zh) * 2020-01-21 2020-06-05 北京捷通华声科技股份有限公司 Tracking method and apparatus for fixed-position items, and security device
CN111667505B (zh) * 2020-04-30 2023-04-07 北京捷通华声科技股份有限公司 Method and apparatus for tracking fixed-position items
CN112700478A (zh) * 2020-12-31 2021-04-23 北京澎思科技有限公司 Target tracking method and system, computer-readable storage medium, and program product
CN113256680A (zh) * 2021-05-13 2021-08-13 燕山大学 High-precision target tracking system based on unsupervised learning
CN113470078A (zh) * 2021-07-15 2021-10-01 浙江大华技术股份有限公司 Target tracking method, apparatus, and system
CN114937231B (zh) * 2022-07-21 2022-09-30 成都西物信安智能系统有限公司 Target recognition and tracking method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130301911A1 (en) * 2012-05-08 2013-11-14 Samsung Electronics Co., Ltd Apparatus and method for detecting body parts
CN107066990A (zh) * 2017-05-04 2017-08-18 厦门美图之家科技有限公司 Target tracking method and mobile device
CN108062764A (zh) * 2017-11-30 2018-05-22 极翼机器人(上海)有限公司 Vision-based object tracking method
CN109409354A (zh) * 2017-08-18 2019-03-01 深圳市道通智能航空技术有限公司 Method for determining a target for intelligent UAV following, unmanned aerial vehicle, and remote controller
CN109978045A (zh) * 2019-03-20 2019-07-05 深圳市道通智能航空技术有限公司 Target tracking method and apparatus, and unmanned aerial vehicle

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960446B (zh) * 2017-04-01 2020-04-24 广东华中科技大学工业技术研究院 Integrated water-surface target detection and tracking method for unmanned surface vehicle applications
CN107564034A (zh) * 2017-07-27 2018-01-09 华南理工大学 Multi-target pedestrian detection and tracking method in surveillance video
CN107563313B (zh) * 2017-08-18 2020-07-07 北京航空航天大学 Multi-target pedestrian detection and tracking method based on deep learning
CN107943837B (zh) * 2017-10-27 2022-09-30 江苏理工学院 Video summary generation method based on key-framing of foreground targets
CN107918765A (zh) * 2017-11-17 2018-04-17 中国矿业大学 Moving target detection and tracking system and method
CN108229442B (zh) * 2018-02-07 2022-03-11 西南科技大学 Fast and stable face detection method in image sequences based on MS-KCF
CN108596955B (zh) * 2018-04-25 2020-08-28 Oppo广东移动通信有限公司 Image detection method, image detection apparatus, and mobile terminal

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112700469A (zh) * 2020-12-30 2021-04-23 武汉卓目科技有限公司 Visual target tracking method and apparatus based on the ECO algorithm and target detection
CN112686921A (zh) * 2021-01-08 2021-04-20 西安羚控电子科技有限公司 Multi-interference UAV detection and tracking method based on trajectory features
CN112686921B (zh) * 2021-01-08 2023-12-01 西安羚控电子科技有限公司 Multi-interference UAV detection and tracking method based on trajectory features
CN112733741A (zh) * 2021-01-14 2021-04-30 苏州挚途科技有限公司 Traffic sign recognition method and apparatus, and electronic device
CN113808170A (zh) * 2021-09-24 2021-12-17 电子科技大学长三角研究院(湖州) Anti-UAV tracking method based on deep learning
CN113808170B (zh) * 2021-09-24 2023-06-27 电子科技大学长三角研究院(湖州) Anti-UAV tracking method based on deep learning
CN113807464A (zh) * 2021-09-29 2021-12-17 东南大学 Target detection method for UAV aerial images based on improved YOLO v5
CN113807464B (zh) * 2021-09-29 2022-05-13 东南大学 Target detection method for UAV aerial images based on improved YOLO v5
CN117765031A (zh) * 2024-02-21 2024-03-26 四川盎芯科技有限公司 Image multi-target pre-tracking method and system for edge intelligent devices
CN117765031B (zh) * 2024-02-21 2024-05-03 四川盎芯科技有限公司 Image multi-target pre-tracking method and system for edge intelligent devices

Also Published As

Publication number Publication date
CN109978045A (zh) 2019-07-05

Similar Documents

Publication Publication Date Title
WO2020187095A1 (fr) Target tracking method and apparatus, and unmanned aerial vehicle
US11238605B2 (en) Simultaneous localization and mapping method, computer device, and storage medium
WO2020244649A1 (fr) Procédé et appareil d'évitement d'obstacle, et dispositif électronique
KR102254491B1 (ko) Autonomous flying drone equipped with an intelligent video analysis module
US11287828B2 (en) Obstacle detection method and apparatus and robot using the same
WO2021088684A1 (fr) Procédé d'évitement d'obstacle omnidirectionnel et véhicule aérien sans pilote
US20170344026A1 (en) UAV, UAV flight control method and device
WO2018045976A1 (fr) Procédé de commande de vol pour aéronefs et appareil de commande de vol
CN108121350B (zh) Method for controlling aircraft landing and related apparatus
WO2020135449A1 (fr) Procédé et appareil de génération de point de relais, et véhicule aérien sans pilote
Shen et al. Person tracking and frontal face capture with UAV
WO2019061111A1 (fr) Procédé de réglage de trajet et véhicule aérien sans pilote
WO2022016534A1 (fr) Procédé de commande de vol d'engin volant sans pilote embarqué et engin volant sans pilote embarqué
WO2020233682A1 (fr) Procédé et appareil de photographie circulaire autonome et véhicule aérien sans pilote
US20210325909A1 (en) Method, apparatus and unmanned aerial vehicle for processing depth map
CN110751270A (zh) UAV power-line fault detection method, system, and device
Bondi et al. Near Real-Time Detection of Poachers from Drones in AirSim.
US20190072986A1 (en) Unmanned aerial vehicles
US11106223B2 (en) Apparatus and methods for landing unmanned aerial vehicle
KR102194127B1 (ko) Drone equipped with a MEMS sensor
US11964775B2 (en) Mobile object, information processing apparatus, information processing method, and program
WO2021135823A1 (fr) Procédé et dispositif de commande de vol, et véhicule aérien sans pilote
CN109815861B (zh) User behaviour information statistics method based on face recognition
Lu et al. Target localization with drones using mobile CNNs
JP6360650B1 (ja) Anomaly detection system, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20772549

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20772549

Country of ref document: EP

Kind code of ref document: A1