WO2017096949A1 - Method, control device and system for tracking and photographing a target - Google Patents

Method, control device and system for tracking and photographing a target

Info

Publication number
WO2017096949A1
WO2017096949A1 (PCT/CN2016/096070)
Authority
WO
WIPO (PCT)
Prior art keywords
tracking target
tracking
feature
target
template
Prior art date
Application number
PCT/CN2016/096070
Other languages
English (en)
French (fr)
Inventor
李佐广
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司 filed Critical 深圳市道通智能航空技术有限公司
Priority to EP16872145.4A (EP3373248A4)
Publication of WO2017096949A1
Priority to US16/002,548 (US10782688B2)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64CAEROPLANES; HELICOPTERS
    • B64C39/00Aircraft not otherwise provided for
    • B64C39/02Aircraft not otherwise provided for characterised by special use
    • B64C39/024Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12Target-seeking control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/248Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2504Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00UAVs specially adapted for particular uses or applications
    • B64U2101/30UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2201/00UAVs characterised by their flight controls
    • B64U2201/10UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • the present invention relates to the field of electronic camera technology, and in particular, to a method, a control device and a system for tracking and shooting a target.
  • the invention aims to provide a method, a control device and a system for tracking and shooting a target.
  • when a manually remote-controlled drone is used for tracking and shooting, it is difficult to track targets in high-speed, irregular, or variable-speed motion, and the shooting results are poor.
  • the embodiment of the present invention provides the following technical solutions:
  • an embodiment of the present invention provides a method for tracking and shooting a target, which is suitable for shooting by a drone, and the method includes:
  • the control device controls the camera on the drone to collect images and acquires the collected images
  • based on the calculated motion speed, motion direction, and position in the current image, the drone is controlled to track the tracking target such that the tracking target is always located in the image captured by the camera.
  • the selecting of a tracking target in the image comprises: receiving, via a user interface, a user input signal indicating a selection of a tracking target by a user, and determining the tracking target in the image according to the user input signal; or comprises: performing recognition analysis on the acquired image and determining the moving target in the image as the tracking target according to the analysis result.
  • the method further includes: extracting feature points of the tracking target; recording the extracted feature point set and the number of feature points, and generating a feature template; storing the feature template in a template library.
  • the method further includes: extracting feature points of the tracking target; recording the number of extracted feature points and generating a new feature template; comparing the feature points of the new feature template with the feature points of the feature templates in the template library; if the number of feature points of the new feature template matching those of each feature template in the template library is less than a preset value, storing the new feature template in the template library.
  • the method further comprises: when a tracking failure occurs while controlling the drone to track the tracking target, extracting SIFT features from the current image in which the tracking failure occurred, to obtain the feature point set S1; reading a feature template to obtain the feature point set T1 of the feature template; matching the feature point set T1 with the feature point set S1 and searching for the tracking target according to the matching result; if the tracking target is found, continuing to track the tracking target, otherwise repeating the above process until the tracking target is found.
  • the method further comprises: when a matching error occurs, evaluating the false negatives and false positives that occurred; generating learning samples according to the evaluation results, and updating the feature templates and feature points of the tracking target stored in the template library.
  • the calculating of the moving speed, moving direction, and position in the current image of the tracking target includes: calculating, using an LK optical flow algorithm between images of consecutive adjacent frames, the moving speed and moving direction of the tracking target and its position in the current image.
  • the method further includes:
  • an embodiment of the present invention provides a control device for tracking and shooting a target, which is suitable for shooting by a drone;
  • the control device includes: an acquiring unit, configured to, during the tracking and shooting process, control the photographing device on the drone to collect images and acquire the collected images;
  • a selecting unit, configured to select a tracking target in the image;
  • a calculating unit, configured to calculate, by comparing images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image;
  • a tracking unit, configured to control the drone to track the tracking target according to the calculated motion speed, motion direction, and position in the current image, so that the tracking target is always in the image captured by the photographing device;
  • a photographing control unit, configured to control the photographing device to photograph the tracking target when the tracking target is always in the image captured by the photographing device, or, when the tracking target is always in the image captured by the photographing device and another photographing device is provided on the drone, to control the other photographing device to photograph the tracking target.
  • the selection unit includes: a signal acquisition subunit for acquiring a user input signal indicating a selection of a tracking target by a user, and a target determining subunit for determining the tracking target in the image according to the user input signal; or,
  • the selection unit includes: an analysis subunit for performing recognition analysis on the acquired image, and a target recognition subunit for identifying a moving target in the image as a tracking target according to an analysis result of the analysis subunit.
  • the control device further includes: a feature extracting unit, configured to extract feature points of the tracking target after the selecting unit selects the tracking target; a template generating unit, configured to record the extracted feature point set and the number of feature points and generate a feature template; and a storage unit for storing the feature template in the template library.
  • the feature extraction unit is further configured to extract feature points of the tracking target during the process of controlling the drone to track the tracking target; the template generating unit is further configured to record the number of extracted feature points and generate a new feature template; the control device further comprises a comparison unit, configured to compare the feature points of the new feature template with the feature points of the feature templates in the template library; and the storage unit is further configured to store the new template into the template library when the comparison unit finds that the number of feature points of the new feature template matching those of each feature template in the template library is less than a preset value.
  • the feature extraction unit is further configured to, when the tracking unit fails to track, extract SIFT features from the current image in which the tracking failure occurred, to obtain the feature point set S1; the control device further includes: a reading unit for reading a feature template from the template library to obtain the feature point set T1 of the feature template, and a matching unit for matching the feature point set T1 with the feature point set S1 and searching for the tracking target according to the matching result; and the tracking unit is also used to continue tracking the tracking target when it finds the tracking target.
  • the control device further includes: an evaluation unit, configured to evaluate, when a matching error occurs, the false negatives and false positives that occurred; and an update unit, configured to generate learning samples according to the evaluation results and update the feature templates and feature points of the tracking target stored in the template library.
  • the calculating unit is further configured to calculate a moving speed, a moving direction, and a position in the current image of the tracking target between images of consecutive adjacent image frames by using an LK optical flow algorithm.
  • an embodiment of the present invention provides a system for tracking and shooting a target, including the control device, the drone, and the photographing device mounted on the drone, wherein the control device is used for And controlling the drone to track the tracking target according to the image acquired by the photographing device, so that the tracking target is located in an image captured by the photographing device, and the photographing device is configured to photograph the tracking target.
  • the beneficial effects of the embodiments of the present invention are: a tracking target is selected from the images captured by the capturing device, the drone is controlled to track the tracking target, and the capturing device is controlled to capture the tracking target during tracking; manually remote-controlling the drone to follow a moving target is therefore no longer needed, positioning accuracy is high, moving targets can be tracked and photographed, and high-quality video or photos can be obtained.
  • FIG. 1 is a flowchart of a method for tracking and shooting a target according to a first embodiment of the present invention.
  • FIG. 2 is a flow chart for calculating the moving speed of the tracking target, the moving direction, and its position in the current image in step S130 in the method shown in FIG. 1.
  • FIG. 3 is a flow chart of the detection steps performed by the method of FIG. 1 in the event of a tracking failure.
  • FIG. 4 is a flow chart of learning the tracking target when the tracking target is tracked by the method shown in FIG. 1.
  • FIG. 5 is a schematic structural diagram of a system for tracking and shooting a target according to a second embodiment of the present invention.
  • Figure 6 is a block diagram showing the structure of a control device in the system shown in Figure 5.
  • FIG. 7 is a schematic structural diagram of hardware of an electronic device for performing a method for tracking and shooting a target according to a third embodiment of the present invention.
  • Embodiment 1 of the present invention provides a method for tracking and shooting a target, and the method is suitable for shooting by a drone.
  • a photographing device and a control device are mounted on the drone.
  • the control device can control the drone to perform tracking shooting on the tracking target according to the image captured by the camera, so that the position of the tracking target in the image captured by the camera is stable.
  • the control device can be installed on the drone, or can communicate with the drone and the camera through wireless communication.
  • the photographing device in this embodiment may be a motion camera to photograph a tracking target. It will be understood by those skilled in the art that there may be two camera devices to perform both the positioning shooting and the moving shooting functions. That is, the first photographing device captures the positioning image, the control device determines the tracking target based on the analysis of the positioning image, and the second photographing device captures the moving image for tracking the tracking target.
  • the method flow includes:
  • the control device controls the camera on the drone to capture images and acquires the captured images
  • S140 Control the drone to track the tracking target according to the calculated motion speed, the motion direction, and the position in the current image, so that the tracking target is always located in the image captured by the first camera.
  • in step S120, the manner of selecting a tracking target in the image may be implemented as follows:
  • a user input signal indicating a user's selection of a tracking target is obtained, and a tracking target in the image is determined based on the user input signal.
  • the user can input, via a user input interface, a signal indicating the tracking target to be selected, and the control device can determine the target selected by the user as the tracking target according to the acquired user input signal.
  • the images of consecutive adjacent frames are compared, and the moving target in the image is determined as the tracking target according to the comparison result. Since the scene of the continuous video stream has continuity, the change between consecutive frame images is small when the target is stationary; conversely, if there is motion, it causes significant interframe difference.
  • the adjacent frame difference method uses the difference of two or several consecutive frames of video images to perform moving target detection, thereby realizing automatic selection of moving targets.
  • the adjacent frame difference method is used for moving target detection and suits scenes where the background remains relatively static while the moving target moves relative to the background; therefore, when the drone automatically selects a moving target, the aircraft must be kept relatively stationary (a motion speed of 0, or one small enough to be ignored).
  • the absolute value of the frame difference between corresponding pixels of the k-th frame image and the (k-1)-th frame image is calculated directly, and the motion detection condition is: D(x, y) = 1 if |S(x, y, t) - S(x, y, t - Δt)| > T, and D(x, y) = 0 otherwise (Formula 1)
  • a binary difference image D(x, y) is obtained according to Formula 1 above, where:
  • S(x, y, t) is the gray value of the luminance image sequence at point (x, y) at time t
  • ⁇ t is an integer multiple of the frame interval
  • T is the threshold value, which determines the sensitivity of motion detection.
  • morphological processing is used to perform dilation and erosion, which can filter out most of the noise points and yield a clearer target.
  • region growing is to group pixels with similar properties to form regions.
  • the specific implementation method is: for each region to be segmented, find a seed pixel as the starting point of growth, then merge the pixels in the neighborhood around the seed pixel that have the same or similar properties as the seed pixel (as decided by some predetermined growth or similarity criterion) into the region where the seed pixel is located. These new pixels are treated as new seed pixels and the above process continues until no more pixels satisfy the criterion, at which point a region has fully grown.
  • the test image is detected, and after all regions are formed, the area of all connected regions is calculated, and the region with the largest area is regarded as the target region, and the target region is the tracking target.
  • the largest moving target can be selected as the moving target that needs to be tracked.
  • a bounding rectangle of the tracking target can be superimposed on the acquired image.
  • step S130 may use an LK (Lucas-Kanade) optical flow algorithm to calculate, between images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image.
  • the method uses a target frame (i.e., a bounding rectangle) to represent the tracked object and estimates the motion of the target between images of successive adjacent frames.
  • the principle is as follows: a plurality of pixel points are selected as feature points in the target frame of the previous frame image, and the corresponding positions in the current frame image of the feature points of the previous frame image are searched for in the next frame image. The displacement changes of these feature points between adjacent frame images are then sorted to obtain the median displacement; using this median, the 50% of the feature points whose displacement is below the median are obtained and used as the feature points of the next frame. Proceeding in this way, the purpose of dynamically updating the feature points is achieved.
  • S250 Determine whether the number of optical flow points in the ROI region is within a certain threshold range; if the number of optical flow points is less than the threshold, re-initialize the optical flow points using the feature points of the current frame image.
  • the optical flow method mainly calculates the moving optical flow of the pixel, that is, the velocity, thereby forming an optical flow velocity field. Therefore, the optical flow method needs to preset the number of feature points that need to calculate the optical flow. The more the feature points, the higher the accuracy of the final result.
  • after step S120, the method further includes:
  • tracking failure may occur.
  • the missing target may be re-discovered through a detecting step.
  • parallel processing is performed in a mode in which the tracking unit and the detecting unit are complementary.
  • the tracking unit assumes that the motion of the object between adjacent frame images is limited, and the tracking target is visible, thereby estimating the motion of the tracking target. If the tracking target disappears in the field of view of the camera, tracking will fail.
  • the detection unit assumes that each frame image is independent of each other, and performs a full-image search for each frame image to locate an area where the tracking target may appear, based on the previously detected and learned target model.
  • the specific detection steps include:
  • step S340 Determine whether the tracking target is found; if so, step S350 is performed, otherwise the process returns to step S320.
  • the matching process can be calculated by using the Euclidean distance.
  • a matching error may also occur when performing the detection step.
  • the matching errors that occur in the detection unit are usually of two kinds: false negatives and false positives.
  • the model and key feature points of the tracking target can be updated by the learning unit to avoid similar errors in the future.
  • the specific method is as follows:
  • the learning unit evaluates the false negatives and false positives that may occur during the tracking process
  • learning samples are generated based on the evaluation results, and the "feature templates" and "feature points" of the tracking target stored in the template library are updated.
  • while the tracking target is being tracked, the tracking target can also be learned, as shown in FIG. 4, including:
  • the method further includes:
  • the photographing device is controlled to photograph the tracking target.
  • other camera devices can be set on the drone, and the method further includes:
  • the other camera is controlled to shoot the tracking target.
  • a photographing device and a control device are disposed on the drone; the control device selects a tracking target from the images captured by the photographing device and controls the drone to track and shoot the tracking target, so that the position of the tracking target in the image captured by the camera remains stable during tracking. Therefore, it is no longer necessary to manually remote-control the drone to follow a moving target; positioning accuracy is high, all kinds of moving targets can be tracked and photographed, and high-quality videos or photos can be obtained.
  • the second embodiment of the present invention provides a system for tracking and shooting a target.
  • the system includes a drone 51, a camera 52 mounted on the drone, and a control device 54.
  • the control device 54 is configured to control the drone 51 to track the tracking target according to the image acquired by the imaging device 52, and control the imaging device 52 to capture the tracking target during the tracking process.
  • the control device 54 can be installed on the drone 51, or can communicate with the drone 51 and the camera 52 by wireless communication.
  • control device 54 includes:
  • the acquiring unit 541 is configured to control an image capturing device on the drone to collect an image, and acquire the captured image;
  • a selecting unit 542, configured to select a tracking target among the collected images
  • a calculating unit 543 configured to calculate a moving speed, a moving direction, and a position in the current image of the tracking target by comparing the images of consecutive adjacent frames (for example, according to the LK optical flow algorithm);
  • the tracking unit 544 is configured to control the drone to track the tracking target according to the calculated motion speed, the moving direction, and the position in the current image, so that the position of the tracking target in the image captured by the capturing device remains stable;
  • the shooting control unit 547 is configured to control the shooting device 52 to capture the tracking target when the tracking target is always in the image captured by the capturing device 52, or, when the tracking target is always in the image captured by the capturing device 52 and another camera device is provided on the drone, to control the other camera device to capture the tracking target.
  • when the selection unit 542 selects the tracking target, the selection may be made manually, by the frame difference method, or by connected-region extraction. Accordingly, when the tracking target is selected manually, the selection unit 542 includes a signal acquisition subunit for acquiring a user input signal indicating the user's selection of the tracking target, and a target determination subunit for determining the tracking target in the image according to the user input signal. When the tracking target is selected automatically, the selection unit includes an analysis subunit for performing recognition analysis on the collected image, and a target recognition subunit for identifying, according to the analysis result, the moving target in the image as the tracking target. The recognition analysis of the analysis subunit may include comparative analysis by the frame difference method, or recognition analysis by connected-region extraction.
  • the apparatus further includes a detection unit 545, the detection unit 545 comprising:
  • a feature extraction unit configured to extract a feature point of the tracking target after the selection unit selects the tracking target
  • a template generating unit configured to record the extracted feature point set and the number of feature points and generate a feature template
  • a storage unit for storing the feature template in the template library.
  • the apparatus further includes a learning unit 546, the learning unit 546 comprising:
  • a feature extraction unit configured to extract feature points of the tracking target in the process of tracking the tracking target
  • a template generating unit configured to record the number of the feature points and generate a new feature template
  • a comparison unit configured to compare feature points of the new feature template with feature points of the feature template in the template library
  • the storage unit is configured to store the new template into the template library when the comparison unit finds that the number of feature points of the new feature template matching those of each template in the template library is less than a preset value.
  • the detecting unit 545 further includes a reading unit and a matching unit:
  • the feature extraction unit is further configured to: when the tracking unit fails to track, extract the SIFT feature from the image of the current target tracking failure, to obtain the feature point set S1;
  • a reading unit configured to read a feature template from the template library to obtain a feature point set T1 of the feature template
  • a matching unit configured to match the feature point set T1 and the feature point set S1, and search the tracking target according to the matching result
  • the tracking unit 544 is further configured to continue tracking the tracking target when the tracking target is found.
  • the learning unit 546 further includes an evaluation unit and an update unit.
  • the evaluation unit is configured to evaluate, when the detection unit encounters a matching error, the false negatives and false positives that may occur during the tracking process
  • the update unit is configured to generate learning samples according to the evaluation results and update the "feature templates" and "feature points" of the tracking target in the template library.
  • the photographing device and the control device are set on the drone, and the image captured by the photographing device is acquired, the tracking target is selected, and the drone is controlled to track and shoot the tracking target. Therefore, it is no longer necessary to manually control the drone to track the moving target, and the positioning accuracy is high, and the shooting can be tracked for various moving targets, and high-quality videos or photos can be obtained.
  • FIG. 7 is a schematic diagram of a hardware structure of an electronic device for performing a method for tracking and shooting a target according to a third embodiment of the present invention. As shown in FIG. 7, the electronic device 700 includes:
  • one or more processors 710 and a memory 720; one processor 710 is taken as an example in FIG. 7.
  • the processor 710 and the memory 720 may be connected by a bus or by other means; connection by a bus is taken as the example in FIG. 7.
  • the memory 720 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method for tracking and shooting a target in the embodiments of the present invention (for example, the acquisition unit 541, the selection unit 542, the calculation unit 543, the tracking unit 544, the detection unit 545, the learning unit 546, and the photographing control unit 547 shown in FIG. 6).
  • the processor 710 executes various functional applications and data processing of the server by running non-volatile software programs, instructions, and modules stored in the memory 720, that is, implementing the method of the above method embodiment for tracking and shooting the target.
  • the memory 720 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the control device for tracking and shooting the target, and the like.
  • memory 720 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • the memory 720 may optionally include memory located remotely from the processor 710, which can be connected via a network to the control device that tracks and shoots the target. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the one or more modules are stored in the memory 720 and, when executed by the one or more processors 710, perform the method for tracking and shooting a target in any of the above method embodiments, for example performing the method flow S110 to S140 in FIG. 1 described above, steps S210 to S260 in FIG. 2, detection steps S310 to S350 in FIG. 3, and steps S410-S440 in FIG. 4, implementing the functions of units 541-547 in FIG. 6.
  • the above product can perform the method provided by the third embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the execution method.
  • the electronic device of the third embodiment of the present invention exists in various forms including, but not limited to:
  • Portable entertainment devices: these devices can display and play multimedia content. Such devices include: audio and video players (such as iPod), handheld game consoles, e-books, smart toys, and portable car navigation devices.
  • Servers: devices that provide computing services. A server comprises a processor, hard disk, memory, system bus, etc.; it is similar to a general-purpose computer architecture, but because highly reliable services must be provided, it has higher requirements in terms of processing power, stability, reliability, security, scalability, and manageability.
  • a fourth embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, such as one processor 710 in FIG. 7, cause the one or more processors to perform the method for tracking and shooting a target in any of the foregoing method embodiments, for example to perform the method flow S110 to S140 in FIG. 1 described above.
  • the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, i.e. they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the apparatus and method disclosed herein may also be implemented in other manners.
  • the division into modules and units within a device is only a division by logical function; in actual implementation there may be other divisions, for example multiple units or modules may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • the software product includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Databases & Information Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to the field of electronic imaging technology and provides a method, a control device and a system for tracking and photographing a target. The method includes: during tracking and shooting, a control device controls a photographing device on an unmanned aerial vehicle (UAV) to capture images and acquires the captured images; selecting a tracking target in the images; calculating, by comparing images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image; and, based on the calculated motion speed, motion direction and position in the current image, controlling the UAV to track the tracking target so that the tracking target always remains within the images captured by the photographing device. The present invention can automatically track and photograph a target during aerial photography without manually remote-controlling the UAV to follow a moving target; positioning accuracy is therefore high, moving targets can be tracked and photographed, and high-quality video or photos can be obtained.

Description

Method, Control Device and System for Tracking and Photographing a Target [Technical Field]
The present invention relates to the field of electronic imaging technology, and in particular to a method, a control device and a system for tracking and photographing a target.
[Background Art]
In the prior art, tracking and photographing a target is mostly performed with a photographing device carried by an unmanned aerial vehicle (UAV). When shooting with a UAV, however, an operator must follow the target manually with a remote controller. When the target is moving, the UAV cannot be guaranteed to move in synchrony with the target, the target cannot always be kept at the ideal position in the frame, smooth UAV motion cannot be ensured, and the quality of the captured footage is hard to guarantee.
It is therefore necessary to provide a method, a control device and a system for tracking and photographing a target that achieve intelligent tracking and shooting, overcoming the defects of the prior art, in which manually remote-controlling a UAV makes it difficult to track targets in high-speed, irregular or variable-speed motion and yields poor shooting results.
[Summary of the Invention]
The present invention aims to provide a method, a control device and a system for tracking and photographing a target, overcoming the defects of the prior art, in which manually remote-controlling a UAV makes it difficult to track targets in high-speed, irregular or variable-speed motion and yields poor shooting results.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
In one aspect, an embodiment of the present invention provides a method for tracking and photographing a target, suitable for UAV shooting, the method comprising:
during tracking and shooting, a control device controls a photographing device on the UAV to capture images and acquires the captured images;
selecting a tracking target in the images;
calculating the motion speed and motion direction of the tracking target and its position in the current image;
controlling, based on the calculated motion speed, motion direction and position in the current image, the UAV to track the tracking target so that the tracking target always remains within the images captured by the photographing device.
In some embodiments, selecting a tracking target in the images comprises: receiving, via a user interface, a user input signal indicating the user's selection of a tracking target, and determining the tracking target in the images according to the user input signal; or comprises: performing recognition analysis on the captured images and determining a moving target in the images as the tracking target according to the analysis result.
In some embodiments, after the tracking target is selected in the images, the method further comprises: extracting feature points of the tracking target; recording the extracted feature point set and the number of feature points, and generating a feature template; and storing the feature template in a template library.
In some embodiments, while the UAV is controlled to track the tracking target, the method further comprises: extracting feature points of the tracking target; recording the number of extracted feature points and generating a new feature template; comparing the feature points of the new feature template with those of the feature templates in the template library; and, if the number of feature points of the new feature template matching those of each feature template in the template library is less than a preset value, storing the new feature template in the template library.
In some embodiments, the method further comprises: when a tracking failure occurs while the UAV is controlled to track the tracking target, extracting SIFT features from the current image in which the tracking failure occurred to obtain a feature point set S1; reading a feature template from the template library to obtain the feature point set T1 of that template; matching the feature point set T1 against the feature point set S1 and searching for the tracking target according to the matching result; if the tracking target is found, continuing to track it, otherwise repeating the above process until the tracking target is found.
In some embodiments, the method further comprises: when a matching error occurs, evaluating the false negatives and false positives that occurred; and generating learning samples from the evaluation results and updating the feature templates and feature points of the tracking target stored in the template library.
In some embodiments, calculating the motion speed and motion direction of the tracking target and its position in the current image comprises: using the LK optical flow algorithm to calculate, between images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image.
In some embodiments, the method further comprises:
when the tracking target always remains in the images captured by the photographing device and stays stable, controlling the photographing device to photograph the tracking target; or
when the tracking target always remains in the images captured by the photographing device and another photographing device is provided on the UAV, controlling the other photographing device to photograph the tracking target.
In another aspect, an embodiment of the present invention provides a control device for tracking and photographing a target, suitable for UAV shooting, the control device comprising: an acquiring unit configured to, during tracking and shooting, control a photographing device on the UAV to capture images and acquire the captured images; a selecting unit configured to select a tracking target in the images; a calculating unit configured to calculate, by comparing images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image; a tracking unit configured to control the UAV to track the tracking target based on the calculated motion speed, motion direction and position in the current image, so that the tracking target always remains within the images captured by the photographing device; and a shooting control unit configured to control the photographing device to photograph the tracking target when the tracking target always remains in the images captured by the photographing device, or, when the tracking target always remains in the images captured by the photographing device and another photographing device is provided on the UAV, to control the other photographing device to photograph the tracking target.
In some embodiments, the selecting unit comprises: a signal acquiring subunit for acquiring a user input signal indicating the user's selection of a tracking target, and a target determining subunit for determining the tracking target in the images according to the user input signal; or,
the selecting unit comprises: an analyzing subunit for performing recognition analysis on the captured images, and a target recognizing subunit for recognizing, according to the analysis result of the analyzing subunit, a moving target in the images as the tracking target.
In some embodiments, the control device further comprises: a feature extracting unit for extracting feature points of the tracking target after the selecting unit has selected the tracking target; a template generating unit for recording the extracted feature point set and the number of feature points and generating a feature template; and a storing unit for storing the feature template in a template library.
In some embodiments, the feature extracting unit is further configured to extract feature points of the tracking target while the UAV is controlled to track the tracking target; the template generating unit is further configured to record the number of extracted feature points and generate a new feature template; the control device further comprises a comparing unit for comparing the feature points of the new feature template with those of the feature templates in the template library; and the storing unit is further configured to store the new template in the template library when the comparing unit finds that the number of feature points of the new feature template matching those of each feature template in the template library is less than a preset value.
In some embodiments, the feature extracting unit is further configured to, when the tracking unit fails to track, extract SIFT features from the current image in which the tracking failure occurred to obtain a feature point set S1; the control device further comprises: a reading unit for reading a feature template from the template library to obtain the feature point set T1 of that template, and a matching unit for matching the feature point set T1 against the feature point set S1 and searching for the tracking target according to the matching result; and the tracking unit is further configured to continue tracking the tracking target when it is found.
In some embodiments, the control device further comprises: an evaluating unit for evaluating, when a matching error occurs, the false negatives and false positives that occurred; and an updating unit for generating learning samples from the evaluation results and updating the feature templates and feature points of the tracking target stored in the template library.
In some embodiments, the calculating unit is further configured to use the LK optical flow algorithm to calculate, between images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image.
In a further aspect, an embodiment of the present invention provides a system for tracking and photographing a target, comprising the control device described above, a UAV, and a photographing device mounted on the UAV, wherein the control device is configured to control the UAV, according to the images captured by the photographing device, to track the tracking target so that the tracking target remains within the images captured by the photographing device, and the photographing device is configured to photograph the tracking target.
Compared with the prior art, the beneficial effects of the embodiments of the present invention are as follows: a tracking target is selected from the images captured by the photographing device and the UAV is controlled to track it, while the photographing device is controlled to photograph the tracking target during tracking; manual remote control of the UAV to follow a moving target is therefore no longer needed, positioning accuracy is high, moving targets can be tracked and photographed, and high-quality video or photos can be obtained.
[Brief Description of the Drawings]
FIG. 1 is a flowchart of a method for tracking and photographing a target according to a first embodiment of the present invention.
FIG. 2 is a flowchart of calculating, in step S130 of the method shown in FIG. 1, the motion speed and motion direction of the tracking target and its position in the current image.
FIG. 3 is a flowchart of the detection steps performed by the method shown in FIG. 1 when a tracking failure occurs.
FIG. 4 is a flowchart of learning the tracking target while it is being tracked by the method shown in FIG. 1.
FIG. 5 is a schematic structural diagram of a system for tracking and photographing a target according to a second embodiment of the present invention.
FIG. 6 is a schematic diagram of the unit structure of the control device in the system shown in FIG. 5.
FIG. 7 is a schematic diagram of the hardware structure of an electronic device for performing a method for tracking and photographing a target according to a third embodiment of the present invention.
[Detailed Description]
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely explain the present invention and do not limit it.
Embodiment 1 of the present invention provides a method for tracking and photographing a target, suitable for UAV shooting. In this embodiment, a photographing device and a control device are mounted on the UAV. The control device can, according to the images captured by the photographing device, control the UAV to track and photograph the tracking target so that the position of the tracking target in the images captured by the photographing device remains stable. The control device may be mounted on the UAV, or may communicate with the UAV and the photographing device via wireless communication. The photographing device in this embodiment may be an action camera for photographing the tracking target. Those skilled in the art will understand that there may also be two photographing devices performing the positioning-shooting and motion-shooting functions separately: the first photographing device captures positioning images, from whose analysis the control device determines the tracking target, and the second photographing device captures motion footage for tracking and photographing the tracking target.
For clarity of description, the method of this embodiment is described below from the perspective of the control device. Referring to FIG. 1, the method flow includes:
S110. During tracking and shooting, the control device controls the photographing device on the UAV to capture images and acquires the captured images;
S120. A tracking target is selected in the images;
S130. The motion speed and motion direction of the tracking target and its position in the current image are calculated;
S140. Based on the calculated motion speed, motion direction and position in the current image, the UAV is controlled to track the tracking target so that the tracking target always remains within the images captured by the first photographing device.
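The patent leaves the control law of step S140 unspecified. Purely as an illustration, and not as the patented method, a simple proportional mapping could turn the target's offset from the image center (plus a feed-forward term from its image velocity) into yaw/pitch rate commands; the function name, the gains and the command convention below are all hypothetical assumptions:

```python
def follow_commands(box, velocity, frame_shape,
                    k_yaw=0.002, k_pitch=0.002, k_ff=0.5):
    """Map the target box's offset from the image centre, plus a
    velocity feed-forward term, to yaw/pitch rate commands.
    All gains are hypothetical; `velocity` is in pixels per frame."""
    h, w = frame_shape[:2]
    x, y, bw, bh = box
    err_x = (x + bw / 2) - w / 2       # positive: target right of centre
    err_y = (y + bh / 2) - h / 2       # positive: target below centre
    yaw_rate = k_yaw * (err_x + k_ff * velocity[0])
    pitch_rate = k_pitch * (err_y + k_ff * velocity[1])
    return yaw_rate, pitch_rate
```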
In practice, in step S120, the tracking target may be selected in the image in the following ways:
Approach 1: manual selection
A user input signal indicating the user's selection of a tracking target is acquired, and the tracking target in the image is determined according to the user input signal. The user can input, via a user input interface, a signal indicating the target to be selected, and the control device determines the target selected by the user as the tracking target according to the acquired user input signal.
Approach 2: automatic selection
The captured images are subjected to recognition analysis, and a moving target in the images is determined as the tracking target according to the analysis result.
This in turn includes two approaches: selection by the frame-difference method, and selection by connected-region extraction.
(1) Selection by the frame-difference method
Images of consecutive adjacent frames are compared, and a moving target in the images is determined as the tracking target according to the comparison result. Since the scenes of a continuous video stream are continuous, the change between consecutive frame images is small while the target is stationary; conversely, motion causes a significant inter-frame difference. The adjacent-frame-difference method uses the difference between two or several consecutive frames of a video to detect moving targets, thereby selecting a moving target automatically.
Moving-target detection by the adjacent-frame-difference method suits scenes in which the background remains relatively static while the moving target moves relative to it; therefore, when the UAV selects a moving target automatically, the aircraft must be kept relatively stationary (a motion speed of 0, or one small enough to be negligible).
The specific calculation steps are as follows:
The absolute value of the frame difference between corresponding pixels of the k-th frame image and the (k-1)-th frame image is computed directly, and the decision condition for motion detection is:

D(x, y) = 1, if |S(x, y, t) - S(x, y, t - Δt)| > T; D(x, y) = 0, otherwise    (Formula 1)

A binary difference image D(x, y) is obtained according to Formula 1 above, where S(x, y, t) is the gray value of the luminance image sequence at point (x, y) at time t, Δt is an integer multiple of the frame interval, and T is the threshold, which determines the sensitivity of the motion detection.
After the binary image is obtained, morphological processing is applied, performing dilation and erosion; this filters out most noise points and yields a cleaner target.
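For illustration only, the following is a minimal Python/OpenCV sketch of the adjacent-frame-difference detection and morphological cleanup described above; the patent names no implementation, so the use of OpenCV, the threshold value and the kernel size are assumptions:

```python
import cv2

def frame_diff_mask(prev_gray, cur_gray, T=25):
    """Binary difference image D(x, y): 1 where
    |S(x, y, t) - S(x, y, t - dt)| > T, else 0 (Formula 1)."""
    diff = cv2.absdiff(cur_gray, prev_gray)          # |frame k - frame k-1|
    _, mask = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
    # Morphological erosion and dilation (here: opening, then dilation)
    # filter out most isolated noise points and yield a cleaner target.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.dilate(mask, kernel, iterations=2)
```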
(2) Selection by connected-region extraction
The basic idea of region growing is to gather pixels with similar properties into regions. The specific method is: for each region to be segmented, find a seed pixel as the starting point of growth, then merge the pixels in the neighborhood around the seed that have the same or similar properties as the seed (as judged by some predetermined growth or similarity criterion) into the region containing the seed. These new pixels are then treated as new seed pixels and the process is continued until no further qualifying pixels can be included, at which point a region has fully grown. The test image is then processed: after all regions have formed, the areas of all connected regions are computed, and the region with the largest area is taken as the target region, which is the tracking target.
It should be noted that if several qualifying moving targets appear in practice, the largest moving target may be selected as the moving target to be tracked.
In practice, a bounding rectangle of the tracking target may be superimposed on the captured image.
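As a rough sketch of this selection step, the snippet below picks the largest region and its bounding rectangle. For brevity it uses OpenCV's connected-component labeling in place of the seed-based region growing described above; both produce labeled regions whose areas can be compared, but this substitution, like the function names, is an assumption:

```python
import cv2
import numpy as np

def largest_moving_target(mask):
    """Return the bounding rectangle (x, y, w, h) of the connected
    region with the largest area, or None if no region is found."""
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if num <= 1:                                   # label 0 is the background
        return None
    k = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = (stats[k, cv2.CC_STAT_LEFT], stats[k, cv2.CC_STAT_TOP],
                  stats[k, cv2.CC_STAT_WIDTH], stats[k, cv2.CC_STAT_HEIGHT])
    return x, y, w, h                              # superimpose as target box
```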
Specifically, step S130 may use the LK (Lucas-Kanade) optical flow algorithm to calculate, between images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image.
This method represents the tracked target with a target box (i.e., the bounding rectangle) and estimates the target's motion between images of consecutive adjacent frames.
The principle is as follows: several pixel points in the target box of the previous frame image are selected as feature points, and the positions in the current frame image corresponding to the feature points of the previous frame image are searched for in the next frame image. The displacement changes of these feature points between adjacent frame images are then sorted to obtain the median displacement; using this median, the 50% of the feature points whose displacement is below the median are obtained and used as the feature points for the next frame. Proceeding in this way achieves the goal of dynamically updating the feature points.
The basic steps of optical flow tracking are shown in FIG. 2:
S210. Take contour corner points as the optical flow tracking feature points and initialize the optical flow points points[0] and points[1];
S220. Set the pyramid search window and specify the iteration termination criteria for each pyramid level;
S230. Compute the feature-point optical flow set points[1] between the previous frame image prev_gray and the current frame image gray; once the optical flow of a pyramid level has been computed, add it to the corresponding initial value as the initial input for the next pyramid level's computation;
S240. Swap points[0] with points[1], and prev_gray with gray;
S250. Determine whether the number of optical flow points within the ROI lies within a threshold range; if the number of optical flow points is below the threshold, re-initialize the optical flow points with the feature points of the current frame image;
S260. If the process has not finished, return to step S240 (step 4) and continue; otherwise stop tracking.
It should be noted that the optical flow method essentially computes the motion optical flow, i.e. the velocity, of pixels, thereby forming an optical flow velocity field. The method therefore requires presetting the number of feature points for which optical flow is to be computed: the more feature points, the higher the accuracy of the final result.
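A minimal sketch of one LK tracking step (S230/S240), including the median-displacement filtering described above, might look as follows in Python/OpenCV; the window size, pyramid depth and termination criteria correspond to step S220 but their values are assumptions:

```python
import cv2
import numpy as np

# S220: pyramid search window and per-level iteration termination criteria.
LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def lk_step(prev_gray, gray, points):
    """One tracking step: propagate feature points with pyramidal LK,
    then keep only the points whose displacement is below the median
    (the 50% rule above). Returns the surviving points and the median
    motion vector (speed and direction, in pixels per frame)."""
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None,
                                             **LK_PARAMS)
    ok = status.ravel() == 1
    old, new = points[ok], p1[ok]
    if len(new) == 0:
        return None, None                          # tracking failed this frame
    disp = np.linalg.norm((new - old).reshape(-1, 2), axis=1)
    keep = disp <= np.median(disp)                 # retain the stable half
    velocity = np.median((new - old).reshape(-1, 2)[keep], axis=0)
    return new[keep], velocity
```

Per step S250, when the number of surviving points inside the ROI drops below a threshold, the caller would re-initialize `points`, for example with `cv2.goodFeaturesToTrack` on the current frame.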
In a preferred solution, after step S120 the method further includes:
S121. Extracting feature points of the tracking target;
S122. Recording the extracted feature point set and the number of feature points and generating a feature template;
S123. Storing the feature template in the template library.
In one possible scenario, a tracking failure may occur during tracking; in a preferred solution the lost target can be re-found through a detection step.
In this embodiment, the tracking unit and the detection unit process in parallel in a complementary mode. The tracking unit assumes that the motion of the object between adjacent frame images is limited and that the tracking target is visible, and estimates the target's motion on that basis; if the tracking target disappears from the camera's field of view, tracking fails. The detection unit assumes that every frame image is independent of the others and, based on the target models previously detected and learned, searches each full frame image to locate the regions where the tracking target may appear.
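As an orchestration sketch only, the complementary arrangement could be wired up as below, reusing the hypothetical `lk_step` from the earlier sketch and a `detect_in_full_image` sketched after the detection steps that follow; for clarity the two units run sequentially here, whereas the patent describes them processing in parallel:

```python
def track_with_redetection(frames, init_points, template_library):
    """Complementary mode: LK tracking while the target is visible;
    on failure, a full-image detector searches the learned template
    library until the target is re-acquired (hypothetical helpers)."""
    prev_gray, points = frames[0], init_points
    for gray in frames[1:]:
        velocity = None
        if points is not None:
            points, velocity = lk_step(prev_gray, gray, points)
        if points is None:                         # tracking failure: detect
            points = detect_in_full_image(gray, template_library)
        prev_gray = gray
```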
Referring to FIG. 3, the detection steps specifically include:
S310. Extracting SIFT features from the image in which the current target tracking failed, obtaining a feature point set S1;
S320. Reading one feature template from the template library, obtaining the feature point set T1 of that template;
S330. Matching the feature point set T1 against the feature point set S1 and searching for the tracking target according to the matching result;
S340. Determining whether the tracking target has been found; if so, performing step S350, otherwise continuing with step S320.
S350. Continuing to track the tracking target.
Specifically, when matching the feature point sets T1 and S1, the candidate with the largest number of matched feature points is determined to be the matched moving target, and the matching can be computed using the Euclidean distance.
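A minimal sketch of steps S310-S350, assuming OpenCV's SIFT and brute-force matching under the Euclidean (L2) distance, is shown below; the `Template` record is sketched further on, and the acceptance threshold `MIN_MATCHES` is a hypothetical stand-in for the unspecified decision rule of S340:

```python
import cv2
import numpy as np

MIN_MATCHES = 10                                   # hypothetical acceptance threshold

def detect_in_full_image(gray, template_library):
    """S310-S340: extract SIFT features from the failed frame (set S1)
    and match each stored template's set T1 against it with Euclidean
    (L2) distance; the template with the most matches locates the target."""
    sift = cv2.SIFT_create()
    kp_s1, des_s1 = sift.detectAndCompute(gray, None)
    if des_s1 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # Euclidean distance
    best_matches = []
    for tpl in template_library:                   # tpl.descriptors plays T1
        matches = matcher.match(tpl.descriptors, des_s1)
        if len(matches) > len(best_matches):
            best_matches = matches
    if len(best_matches) < MIN_MATCHES:
        return None                                # not found: repeat (S320)
    pts = [kp_s1[m.trainIdx].pt for m in best_matches]
    return np.float32(pts).reshape(-1, 1, 2)       # re-seed the tracker (S350)
```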
In another possible scenario, a matching error may also occur while the detection step is performed.
The matching errors of the detection unit are usually of two kinds: false negatives and false positives. The learning unit can update the model and key feature points of the tracking target to avoid similar errors in the future. The specific method is as follows:
the learning unit evaluates the false negatives and false positives that may occur during the tracking process;
learning samples are generated from the evaluation results, and the "feature templates" and "feature points" of the tracking target stored in the template library are updated.
In a preferred solution, the tracking target can also be learned while it is being tracked; referring to FIG. 4, this includes:
S410. Extracting feature points of the tracking target;
S420. Recording the number of these feature points and generating a new feature template;
S430. Comparing the feature points of the new feature template with those of the feature templates in the template library;
S440. If the number of points matching the feature points of each template in the template library is less than a preset value, storing the new template in the template library.
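The following sketch illustrates the feature-template records and the "store only if sufficiently novel" rule of S410-S440; the `Template` class (also assumed by the detection sketch above) and the value of `PRESET_MATCHES` are hypothetical, since the patent does not fix a data layout or preset value:

```python
import cv2
import numpy as np
from dataclasses import dataclass

@dataclass
class Template:
    keypoint_count: int        # number of extracted feature points (S420)
    descriptors: np.ndarray    # descriptors of the feature point set

PRESET_MATCHES = 20            # hypothetical "preset value" of matched points

def maybe_store_template(gray, box, template_library):
    """S410-S440: extract feature points inside the target box, build a
    new template, and store it only if it matches every existing
    template in fewer than PRESET_MATCHES points, i.e. it captures a
    genuinely new appearance of the target."""
    x, y, w, h = box
    kp, des = cv2.SIFT_create().detectAndCompute(gray[y:y + h, x:x + w], None)
    if des is None:
        return
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    for tpl in template_library:
        if len(matcher.match(tpl.descriptors, des)) >= PRESET_MATCHES:
            return                     # too similar to a stored template
    template_library.append(Template(len(kp), des))
```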
In one possible scenario, the method further includes:
when the tracking target always remains in the images captured by the photographing device and stays stable, controlling the photographing device to photograph the tracking target.
In another possible scenario, another photographing device may be provided on the UAV, and the method further includes:
when the tracking target always remains in the images captured by the photographing device and stays stable, controlling the other photographing device to photograph the tracking target.
In the method for tracking and photographing a target of this embodiment, a photographing device and a control device are provided on the UAV; the control device selects a tracking target from the images captured by the photographing device and controls the UAV to track and photograph it, so that the position of the tracking target in the images captured by the photographing device remains stable during tracking. It is therefore no longer necessary to manually remote-control the UAV to follow a moving target; positioning accuracy is high, all kinds of moving targets can be tracked and photographed, and high-quality video or photos can be obtained.
Building on the above embodiment, Embodiment 2 of the present invention provides a system for tracking and photographing a target. Referring to FIG. 5, the system includes a UAV 51, a photographing device 52 mounted on the UAV, and a control device 54. The control device 54 is configured to control the UAV 51, according to the images captured by the photographing device 52, to track the tracking target, and to control the photographing device 52 to photograph the tracking target during tracking. In practice, the control device 54 may be mounted on the UAV 51, or may communicate with the UAV 51 and the photographing device 52 via wireless communication.
Referring to FIG. 6, the control device 54 includes:
an acquiring unit 541 for controlling the photographing device on the UAV to capture images and acquiring the captured images;
a selecting unit 542 for selecting a tracking target in the captured images;
a calculating unit 543 for calculating, by comparing images of consecutive adjacent frames (for example, according to the LK optical flow algorithm), the motion speed and motion direction of the tracking target and its position in the current image;
a tracking unit 544 for controlling the UAV to track the tracking target based on the calculated motion speed, motion direction and position in the current image, so that the position of the tracking target in the images captured by the photographing device remains stable;
a shooting control unit 547 for controlling the photographing device 52 to photograph the tracking target when the tracking target always remains in the images captured by the photographing device 52, or, when the tracking target always remains in the images captured by the photographing device 52 and another photographing device is provided on the UAV, for controlling the other photographing device to photograph the tracking target.
In a preferred solution, when the selecting unit 542 selects the tracking target, the selection may be made manually, by the frame-difference method, or by connected-region extraction. Accordingly, when the tracking target is selected manually, the selecting unit 542 includes a signal acquiring subunit for acquiring a user input signal indicating the user's selection of a tracking target, and a target determining subunit for determining the tracking target in the image according to the user input signal. When the tracking target is selected automatically, the selecting unit includes an analyzing subunit for performing recognition analysis on the captured images, and a target recognizing subunit for recognizing, according to the analysis result, a moving target in the image as the tracking target. The recognition analysis of the analyzing subunit may include comparative analysis by the frame-difference method, or recognition analysis by connected-region extraction.
In one possible scenario, the device further includes a detection unit 545, which includes:
a feature extracting unit for extracting feature points of the tracking target after the selecting unit has selected the tracking target;
a template generating unit for recording the extracted feature point set and the number of feature points and generating a feature template;
a storing unit for storing the feature template in the template library.
In one possible scenario, the device further includes a learning unit 546, which includes:
a feature extracting unit for extracting feature points of the tracking target while the tracking target is being tracked;
a template generating unit for recording the number of these feature points and generating a new feature template;
a comparing unit for comparing the feature points of the new feature template with those of the feature templates in the template library;
a storing unit for storing the new template in the template library when the comparing unit finds that the number of feature points of the new feature template matching those of each template in the template library is less than a preset value.
In one possible scenario, the detection unit 545 further includes a reading unit and a matching unit:
the feature extracting unit is further configured to, when the tracking unit fails to track, extract SIFT features from the image in which the current target tracking failed, obtaining a feature point set S1;
the reading unit is configured to read a feature template from the template library, obtaining the feature point set T1 of that template;
the matching unit is configured to match the feature point set T1 against the feature point set S1 and search for the tracking target according to the matching result;
the tracking unit 544 is further configured to continue tracking the tracking target when it is found.
In one possible scenario, the learning unit 546 further includes an evaluating unit and an updating unit. The evaluating unit is configured to evaluate, when a matching error occurs in the detection unit, the false negatives and false positives that may occur during tracking; the updating unit is configured to generate learning samples from the evaluation results and update the "feature templates" and "feature points" of the tracking target in the template library.
In the system for tracking and photographing a target of this embodiment, a photographing device and a control device are provided on the UAV; the tracking target is selected from the images captured by the photographing device and the UAV is controlled to track and photograph it. It is therefore no longer necessary to manually remote-control the UAV to follow a moving target; positioning accuracy is high, all kinds of moving targets can be tracked and photographed, and high-quality video or photos can be obtained.
FIG. 7 is a schematic diagram of the hardware structure of an electronic device for performing a method for tracking and photographing a target according to the third embodiment of the present invention. As shown in FIG. 7, the electronic device 700 includes:
one or more processors 710 and a memory 720; one processor 710 is taken as an example in FIG. 7.
The processor 710 and the memory 720 may be connected by a bus or in other ways; connection by a bus is taken as the example in FIG. 7.
As a non-volatile computer-readable storage medium, the memory 720 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method for tracking and photographing a target in the embodiments of the present invention (for example, the acquiring unit 541, the selecting unit 542, the calculating unit 543, the tracking unit 544, the detection unit 545, the learning unit 546 and the shooting control unit 547 shown in FIG. 6).
By running the non-volatile software programs, instructions and modules stored in the memory 720, the processor 710 executes the various functional applications and data processing of the server, i.e. implements the method for tracking and photographing a target of the above method embodiment.
The memory 720 may include a program storage area and a data storage area, where the program storage area can store an operating system and the application programs required by at least one function, and the data storage area can store data created according to the use of the control device for tracking and photographing a target, and the like. In addition, the memory 720 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 720 may optionally include memory located remotely from the processor 710, which may be connected over a network to the control device for tracking and photographing the target. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 720 and, when executed by the one or more processors 710, perform the method for tracking and photographing a target of any of the above method embodiments, for example performing the method flow S110 to S140 in FIG. 1, steps S210 to S260 in FIG. 2, detection steps S310 to S350 in FIG. 3 and steps S410-S440 in FIG. 4 described above, implementing the functions of units 541-547 in FIG. 6.
The above product can perform the method provided by the third embodiment of the present invention and has the functional modules and beneficial effects corresponding to performing the method. For technical details not described exhaustively in this embodiment, refer to the methods provided by the other embodiments of the present invention.
The electronic device of the third embodiment of the present invention exists in various forms, including but not limited to:
(1) Portable entertainment devices: these devices can display and play multimedia content, and include audio and video players (such as an iPod), handheld game consoles, e-books, smart toys, and portable car navigation devices.
(2) Servers: devices providing computing services. A server comprises a processor, hard disk, memory, system bus and so on; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing power, stability, reliability, security, scalability and manageability are higher.
(3) Other electronic devices with data interaction functions.
The fourth embodiment of the present invention provides a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, such as one processor 710 in FIG. 7, cause the one or more processors to perform the method for tracking and photographing a target of any of the above method embodiments, for example performing the method flow S110 to S140 in FIG. 1, steps S210 to S260 in FIG. 2, detection steps S310 to S350 in FIG. 3 and steps S410-S440 in FIG. 4 described above, implementing the functions of units 541-547 in FIG. 6.
The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be understood that the device and method disclosed in the present invention may also be implemented in other ways. For example, the division into modules and units within a device is only a division by logical function; other divisions are possible in actual implementation, e.g. several units or modules may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; this computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the methods described in the embodiments of the present invention. The aforementioned storage media include various media that can store program code, such as USB flash drives, removable hard disks, read-only memories, random access memories, magnetic disks and optical disks.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (16)

  1. A method for tracking and photographing a target, suitable for UAV shooting, wherein the method comprises:
    during tracking and shooting, a control device controls a photographing device on the UAV to capture images and acquires the captured images;
    selecting a tracking target in the images;
    calculating the motion speed and motion direction of the tracking target and its position in the current image;
    controlling, based on the calculated motion speed, motion direction and position in the current image, the UAV to track the tracking target so that the tracking target always remains within the images captured by the photographing device.
  2. The method according to claim 1, wherein selecting a tracking target in the images comprises:
    acquiring a user input signal indicating the user's selection of a tracking target, and determining the tracking target in the images according to the user input signal; or
    performing recognition analysis on the captured images and determining a moving target in the images as the tracking target according to the analysis result.
  3. The method according to claim 1 or 2, wherein, after the tracking target is selected in the images, the method further comprises:
    extracting feature points of the tracking target;
    recording the extracted feature point set and the number of feature points, and generating a feature template;
    storing the feature template in a template library.
  4. The method according to claim 3, wherein, while the UAV is controlled to track the tracking target, the method further comprises:
    extracting feature points of the tracking target;
    recording the number of extracted feature points and generating a new feature template;
    comparing the feature points of the new feature template with the feature points of the feature templates in the template library;
    if the number of feature points of the new feature template matching those of each feature template in the template library is less than a preset value, storing the new feature template in the template library.
  5. The method according to claim 4, wherein the method further comprises:
    when a tracking failure occurs while the UAV is controlled to track the tracking target, extracting SIFT features from the current image in which the tracking failure occurred, obtaining a feature point set S1;
    reading a feature template from the template library, obtaining the feature point set T1 of the feature template;
    matching the feature point set T1 against the feature point set S1, and searching for the tracking target according to the matching result;
    if the tracking target is found, continuing to track it; otherwise repeating the above process until the tracking target is found.
  6. The method according to claim 5, wherein the method further comprises:
    when a matching error occurs, evaluating the false negatives and false positives that occurred;
    generating learning samples from the evaluation results, and updating the feature templates and feature points of the tracking target stored in the template library.
  7. The method according to claim 6, wherein calculating the motion speed and motion direction of the tracking target and its position in the current image comprises:
    using the LK optical flow algorithm to calculate, between images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image.
  8. The method according to claim 1, wherein the method further comprises:
    when the tracking target always remains in the images captured by the photographing device and stays stable, controlling the photographing device to photograph the tracking target; or
    when the tracking target always remains in the images captured by the photographing device and another photographing device is provided on the UAV, controlling the other photographing device to photograph the tracking target.
  9. A control device for tracking and photographing a target, suitable for UAV shooting, wherein the control device comprises:
    an acquiring unit configured to, during tracking and shooting, control a photographing device on the UAV to capture images and acquire the captured images;
    a selecting unit configured to select a tracking target in the images;
    a calculating unit configured to calculate, by comparing images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image;
    a tracking unit configured to control the UAV to track the tracking target based on the calculated motion speed, motion direction and position in the current image, so that the tracking target always remains in the images captured by the photographing device;
    a shooting control unit configured to control the photographing device to photograph the tracking target when the tracking target always remains in the images captured by the photographing device, or, when the tracking target always remains in the images captured by the photographing device and another photographing device is provided on the UAV, to control the other photographing device to photograph the tracking target.
  10. The control device according to claim 9, wherein
    the selecting unit comprises: a signal acquiring subunit for acquiring a user input signal indicating the user's selection of a tracking target, and a target determining subunit for determining the tracking target in the images according to the user input signal; or,
    the selecting unit comprises: an analyzing subunit for performing recognition analysis on the captured images, and a target recognizing subunit for recognizing, according to the analysis result of the analyzing subunit, a moving target in the images as the tracking target.
  11. The control device according to claim 9 or 10, wherein the control device further comprises:
    a feature extracting unit for extracting feature points of the tracking target after the selecting unit has selected the tracking target;
    a template generating unit for recording the extracted feature point set and the number of feature points and generating a feature template;
    a storing unit for storing the feature template in a template library.
  12. The control device according to claim 11, wherein
    the feature extracting unit is further configured to extract feature points of the tracking target while the UAV is controlled to track the tracking target;
    the template generating unit is further configured to record the number of extracted feature points and generate a new feature template, and wherein
    the control device further comprises: a comparing unit for comparing the feature points of the new feature template with the feature points of the feature templates in the template library, and wherein
    the storing unit is further configured to store the new template in the template library when the comparing unit finds that the number of feature points of the new feature template matching those of each feature template in the template library is less than a preset value.
  13. The control device according to claim 12, wherein
    the feature extracting unit is further configured to, when the tracking unit fails to track, extract SIFT features from the current image in which the tracking failure occurred, obtaining a feature point set S1, and wherein
    the control device further comprises:
    a reading unit for reading a feature template from the template library, obtaining the feature point set T1 of the feature template;
    a matching unit for matching the feature point set T1 against the feature point set S1 and searching for the tracking target according to the matching result, and wherein
    the tracking unit is further configured to continue tracking the tracking target when it is found.
  14. The control device according to claim 13, further comprising:
    an evaluating unit for evaluating, when a matching error occurs, the false negatives and false positives that occurred;
    an updating unit for generating learning samples from the evaluation results and updating the feature templates and feature points of the tracking target stored in the template library.
  15. The control device according to claim 8, wherein the calculating unit is further configured to use the LK optical flow algorithm to calculate, between images of consecutive adjacent frames, the motion speed and motion direction of the tracking target and its position in the current image.
  16. A system for tracking and photographing a target, wherein the system comprises the control device according to any one of claims 8 to 15, a UAV, and a photographing device mounted on the UAV, wherein the control device is configured to control the UAV, according to the images captured by the photographing device, to track the tracking target so that the tracking target remains within the images captured by the photographing device, and the photographing device is configured to photograph the tracking target.
PCT/CN2016/096070 2015-12-10 2016-08-19 Method, control device and system for tracking and photographing a target WO2017096949A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16872145.4A EP3373248A4 (en) 2015-12-10 2016-08-19 Method, control device, and system for tracking and photographing target
US16/002,548 US10782688B2 (en) 2015-12-10 2018-06-07 Method, control apparatus, and system for tracking and shooting target

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510916635.0A 2015-12-10 Control method, control device and system for tracking and photographing a target
CN201510916635.0 2015-12-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/002,548 Continuation US10782688B2 (en) 2015-12-10 2018-06-07 Method, control apparatus, and system for tracking and shooting target

Publications (1)

Publication Number Publication Date
WO2017096949A1 (zh) 2017-06-15

Family

ID=55887642

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096070 WO2017096949A1 (zh) 2015-12-10 2016-08-19 Method, control device and system for tracking and photographing a target

Country Status (4)

Country Link
US (1) US10782688B2 (zh)
EP (1) EP3373248A4 (zh)
CN (1) CN105578034A (zh)
WO (1) WO2017096949A1 (zh)

Families Citing this family (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578034A (zh) 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 一种对目标进行跟踪拍摄的控制方法、控制装置及系统
WO2017219203A1 (zh) * 2016-06-20 2017-12-28 李珂悦 无人飞行器
CN106204549B (zh) * 2016-06-30 2018-09-11 北京文安智能技术股份有限公司 一种基于视频分析的广告牌监测方法、装置及电子设备
CN109416536B (zh) * 2016-07-04 2022-03-22 深圳市大疆创新科技有限公司 用于自动跟踪和导航的系统和方法
CN107622273B (zh) * 2016-07-13 2021-03-19 深圳雷柏科技股份有限公司 一种目标检测和辨识的方法和装置
CN106250938B (zh) * 2016-07-19 2021-09-10 易视腾科技股份有限公司 目标跟踪方法、增强现实方法及其装置
CN106303222A (zh) * 2016-07-29 2017-01-04 广东欧珀移动通信有限公司 防抖处理方法、装置及终端设备
WO2018027339A1 (en) * 2016-08-06 2018-02-15 SZ DJI Technology Co., Ltd. Copyright notice
WO2018027789A1 (zh) * 2016-08-11 2018-02-15 深圳市道通智能航空技术有限公司 一种跟踪和识别的方法、系统及飞行器
CN106370184A (zh) * 2016-08-29 2017-02-01 北京奇虎科技有限公司 无人机自动跟踪拍摄的方法、无人机和移动终端设备
FR3057347B1 (fr) * 2016-10-06 2021-05-28 Univ Aix Marseille Systeme de mesure de la distance d'un obstacle par flux optique
CN106845363A (zh) * 2016-12-27 2017-06-13 广州杰赛科技股份有限公司 巡航拍摄跟踪的方法及装置
CN106874928B (zh) * 2016-12-28 2020-05-19 中国科学院长春光学精密机械与物理研究所 跟踪目标突发关键事件自动判决方法及系统
CN108537726B (zh) * 2017-03-03 2022-01-04 杭州海康威视数字技术股份有限公司 一种跟踪拍摄的方法、设备和无人机
CN106921833A (zh) * 2017-03-31 2017-07-04 思依暄机器人科技(深圳)有限公司 一种拍摄控制方法、装置和动态跟踪拍摄设备
CN107272732A (zh) * 2017-06-12 2017-10-20 广东工业大学 无人飞行装置集群系统
CN109214243A (zh) * 2017-07-03 2019-01-15 昊翔电能运动科技(昆山)有限公司 目标跟踪方法、装置及无人机
CN109215056A (zh) * 2017-07-03 2019-01-15 昊翔电能运动科技(昆山)有限公司 目标追踪方法及装置
CN107590450A (zh) * 2017-09-01 2018-01-16 歌尔科技有限公司 一种运动目标的标记方法、装置和无人机
CN107748860A (zh) * 2017-09-01 2018-03-02 中国科学院深圳先进技术研究院 无人机的目标跟踪方法、装置、无人机及存储介质
CN107577245A (zh) * 2017-09-18 2018-01-12 深圳市道通科技股份有限公司 一种飞行器参数设定方法和装置及计算机可读存储介质
CN109753076B (zh) * 2017-11-03 2022-01-11 南京奇蛙智能科技有限公司 一种无人机视觉追踪实现方法
CN109746909A (zh) * 2017-11-08 2019-05-14 深圳先进技术研究院 一种机器人运动控制方法及设备
CN108010055B (zh) * 2017-11-23 2022-07-12 塔普翊海(上海)智能科技有限公司 三维物体的跟踪系统及其跟踪方法
CN107728637A (zh) * 2017-12-02 2018-02-23 广东容祺智能科技有限公司 一种智能化调整摄影角度的无人机系统
CN108734726A (zh) * 2017-12-04 2018-11-02 北京猎户星空科技有限公司 一种目标跟踪方法、装置、电子设备及存储介质
US10706561B2 (en) * 2017-12-21 2020-07-07 612 Authentic Media Systems and methods to track objects in video
EP3531375B1 (en) * 2017-12-25 2021-08-18 Autel Robotics Co., Ltd. Method and apparatus for measuring distance, and unmanned aerial vehicle
TWI701609B (zh) * 2018-01-04 2020-08-11 緯創資通股份有限公司 影像物件追蹤方法及其系統與電腦可讀取儲存媒體
CN108566513A (zh) * 2018-03-28 2018-09-21 深圳臻迪信息技术有限公司 一种无人机对运动目标的拍摄方法
CN110291775B (zh) * 2018-05-29 2021-07-06 深圳市大疆创新科技有限公司 一种跟踪拍摄方法、设备及存储介质
WO2020014987A1 (zh) * 2018-07-20 2020-01-23 深圳市大疆创新科技有限公司 移动机器人的控制方法、装置、设备及存储介质
CN109509212B (zh) * 2018-09-30 2023-11-24 惠州市德赛西威汽车电子股份有限公司 目标跟踪处理方法、电子设备
CN109618131B (zh) * 2018-11-22 2021-08-24 亮风台(上海)信息科技有限公司 一种用于呈现决策辅助信息的方法与设备
CN109544590B (zh) 2018-11-27 2020-05-15 上海芯仑光电科技有限公司 一种目标跟踪方法及计算设备
CN111383247A (zh) * 2018-12-29 2020-07-07 北京易讯理想科技有限公司 增强金字塔lk光流算法图像跟踪稳定性的方法
CN109948526B (zh) * 2019-03-18 2021-10-29 北京市商汤科技开发有限公司 图像处理方法及装置、检测设备及存储介质
CN111754543B (zh) * 2019-03-29 2024-03-29 杭州海康威视数字技术股份有限公司 图像处理方法、装置及系统
CN110020624B (zh) * 2019-04-08 2023-04-18 石家庄铁道大学 图像识别方法、终端设备及存储介质
JP7305263B2 (ja) * 2019-05-16 2023-07-10 アルパイン株式会社 無人航空機、点検方法および点検プログラム
CN111127518B (zh) * 2019-12-24 2023-04-14 深圳禾苗通信科技有限公司 基于无人机的目标跟踪方法及装置
CN111314609B (zh) * 2020-02-24 2021-07-20 浙江大华技术股份有限公司 一种控制云台追踪摄像的方法及装置
CN112955712A (zh) * 2020-04-28 2021-06-11 深圳市大疆创新科技有限公司 目标跟踪方法、设备及存储介质
CN112184776A (zh) * 2020-09-30 2021-01-05 珠海大横琴科技发展有限公司 一种目标跟踪方法、装置及存储介质
CN113721449A (zh) * 2021-01-05 2021-11-30 北京理工大学 一种多旋翼机控制系统及方法
CN113095141A (zh) * 2021-03-15 2021-07-09 南通大学 一种基于人工智能的无人机视觉学习系统
CN113139985B (zh) * 2021-03-16 2022-09-16 北京理工大学 消除无人机与地面站通信延迟影响的跟踪目标框选方法
CN113489897B (zh) * 2021-06-28 2023-05-26 杭州逗酷软件科技有限公司 图像处理方法及相关装置
CN113791640A (zh) * 2021-09-10 2021-12-14 深圳市道通智能航空技术股份有限公司 一种图像获取方法、装置、飞行器和存储介质
CN113848979B (zh) * 2021-10-12 2023-01-17 苏州大学 基于前馈补偿pid控制的无人机复杂动态目标追踪方法
CN114241008B (zh) * 2021-12-21 2023-03-07 北京航空航天大学 一种适应场景和目标变化的长时区域跟踪方法
CN115200554B (zh) * 2022-07-14 2023-06-27 深圳市水务工程检测有限公司 一种基于图片识别技术的无人机摄影测量监管系统及方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102074016A (zh) 2009-11-24 2011-05-25 杭州海康威视软件有限公司 运动目标自动跟踪的装置和方法
EP3060966B1 (en) * 2014-07-30 2021-05-05 SZ DJI Technology Co., Ltd. Systems and methods for target tracking
US10586102B2 (en) * 2015-08-18 2020-03-10 Qualcomm Incorporated Systems and methods for object tracking

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014049372A1 (en) * 2012-09-28 2014-04-03 Omg Plc Determination of position from images and associated camera positions
CN103149939A (zh) * 2013-02-26 2013-06-12 北京航空航天大学 一种基于视觉的无人机动态目标跟踪与定位方法
CN103455797A (zh) * 2013-09-07 2013-12-18 西安电子科技大学 航拍视频中运动小目标的检测与跟踪方法
CN104036524A (zh) * 2014-06-18 2014-09-10 哈尔滨工程大学 一种改进sift算法的快速目标跟踪方法
CN104574384A (zh) * 2014-12-26 2015-04-29 北京航天控制仪器研究所 一种基于mser和surf特征点匹配的目标丢失再捕获方法
CN105023278A (zh) * 2015-07-01 2015-11-04 中国矿业大学 一种基于光流法的运动目标跟踪方法及系统
CN105578034A (zh) * 2015-12-10 2016-05-11 深圳市道通智能航空技术有限公司 一种对目标进行跟踪拍摄的控制方法、控制装置及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3373248A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019028761A1 (en) * 2017-08-10 2019-02-14 Beijing Airlango Technology, Co., Ltd. TRACKING OBJECT USING DEPTH INFORMATION
US10964106B2 (en) 2018-03-30 2021-03-30 Cae Inc. Dynamically modifying visual rendering of a visual element comprising pre-defined characteristics
US11380054B2 (en) 2018-03-30 2022-07-05 Cae Inc. Dynamically affecting tailored visual rendering of a visual element
CN110245641A (zh) * 2019-06-21 2019-09-17 上海摩象网络科技有限公司 一种目标追踪拍摄方法、装置、电子设备
CN111311642A (zh) * 2020-02-26 2020-06-19 深圳英飞拓科技股份有限公司 一种高速球型摄像机下的目标跟踪优化方法
US11875560B2 (en) 2020-03-18 2024-01-16 Coretronic Corporation Unmanned aerial vehicle and image recognition method thereof
CN112001949A (zh) * 2020-08-13 2020-11-27 地平线(上海)人工智能技术有限公司 确定目标点移动速度的方法、装置、可读存储介质及设备
CN112001949B (zh) * 2020-08-13 2023-12-05 地平线(上海)人工智能技术有限公司 确定目标点移动速度的方法、装置、可读存储介质及设备

Also Published As

Publication number Publication date
EP3373248A1 (en) 2018-09-12
US10782688B2 (en) 2020-09-22
CN105578034A (zh) 2016-05-11
US20180284777A1 (en) 2018-10-04
EP3373248A4 (en) 2018-09-12

Similar Documents

Publication Publication Date Title
WO2017096949A1 (zh) 一种对目标进行跟踪拍摄的方法、控制装置及系统
US20230077355A1 (en) Tracker assisted image capture
CN109426782B (zh) 对象检测方法和用于对象检测的神经网络系统
CN107274433B (zh) 基于深度学习的目标跟踪方法、装置及存储介质
CN110555901B (zh) 动静态场景的定位和建图方法、装置、设备和存储介质
CN110706258B (zh) 对象追踪方法及装置
WO2016034059A1 (zh) 基于颜色-结构特征的目标对象跟踪方法
CN109389086B (zh) 检测无人机影像目标的方法和系统
CN107452015B (zh) 一种具有重检测机制的目标跟踪系统
JP2017091549A (ja) 単一のカメラを用いた移動物体検出のための方法およびシステム
CN111382613B (zh) 图像处理方法、装置、设备和介质
WO2020037881A1 (zh) 运动轨迹绘制方法、装置、设备和存储介质
CN112703533A (zh) 对象跟踪
CN115131420A (zh) 基于关键帧优化的视觉slam方法及装置
CN112149762A (zh) 目标跟踪方法、目标跟踪装置及计算机可读存储介质
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
KR20150082417A (ko) 병렬화가능한 구조에서의 이미지들을 사용하여 표면 엘리먼트들의 로컬 지오메트리 또는 표면 법선들을 초기화 및 솔빙하는 방법
Zhang et al. An optical flow based moving objects detection algorithm for the UAV
CN113763466B (zh) 一种回环检测方法、装置、电子设备和存储介质
KR20140141239A (ko) 평균이동 알고리즘을 적용한 실시간 객체 추적방법 및 시스템
CN110009683B (zh) 基于MaskRCNN的实时平面上物体检测方法
CN103914850A (zh) 一种基于运动匹配的视频自动标注方法及自动标注系统
CN110849380A (zh) 一种基于协同vslam的地图对齐方法及系统
JP2017016592A (ja) 主被写体検出装置、主被写体検出方法及びプログラム
CN114399532A (zh) 一种相机位姿确定方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16872145; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2016872145; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)