WO2017030259A1 - Unmanned aerial vehicle having automatic tracking function and control method thereof
- Publication number
- WO2017030259A1 (PCT/KR2016/001970)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tracking
- unmanned aerial
- aerial vehicle
- image
- unit
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0094—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64C—AEROPLANES; HELICOPTERS
- B64C39/00—Aircraft not otherwise provided for
- B64C39/02—Aircraft not otherwise provided for characterised by special use
- B64C39/024—Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64D—EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
- B64D47/00—Equipment not otherwise provided for
- B64D47/08—Arrangements of cameras
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B15/00—Special procedures for taking photographs; Apparatus therefor
- G03B15/006—Apparatus mounted on flying objects
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/12—Target-seeking control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/446—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0017—Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
- G08G5/0021—Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located in the aircraft
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0047—Navigation or guidance aids for a single aircraft
- G08G5/0069—Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0073—Surveillance aids
- G08G5/0078—Surveillance aids for monitoring traffic from the aircraft
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
- B64U2201/10—UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- The present invention relates to an unmanned aerial vehicle having an automatic tracking function, and a control method thereof; more particularly, to an unmanned aerial vehicle that can recognize and automatically track a target object using external image information input to a camera mounted on the unmanned aerial vehicle, and a control method thereof.
- UAV: unmanned aerial vehicle
- Since an unmanned aerial vehicle according to the prior art is controlled wirelessly through a remote controller (RC) or a smart user terminal, it is inefficient in that a user must always be present. In addition, because the user controls the vehicle manually from the ground, accidents occur frequently due to the user's inexperience, damaging expensive equipment or causing safety accidents.
- RC: remote controller
- A realistic and applicable technology is therefore urgently needed in which the unmanned aerial vehicle autonomously recognizes and automatically tracks the subject being photographed without the user's control from the ground, preventing safety accidents and damage to expensive drones.
- The present invention has been made to solve the above problems. By recognizing the tracking target using external image information input to a camera mounted on the unmanned aerial vehicle and tracking it automatically, the present invention enables autonomous flight of the unmanned aerial vehicle, and thus provides an unmanned aerial vehicle having an automatic tracking function, and a control method thereof, that requires no expensive tracking-guidance device and reduces manufacturing cost.
- An unmanned aerial vehicle having an automatic tracking function according to an embodiment of the present invention is an unmanned aerial vehicle that automatically tracks a subject to be photographed, and includes: an image input unit configured to acquire images of the surroundings of the subject; an object recognition unit that extracts a region of interest from the image acquired through the image input unit, detects a specific region located in the region of interest, measures its coordinates, and recognizes the specific region as a tracking object; an object tracking unit that calculates and tracks the position of the object recognized by the object recognition unit using a tracking-learning-detection (TLD) learning algorithm and generates a corresponding driving command for driving the unmanned aerial vehicle; a motion recognition unit that recognizes motions of the tracking object and generates driving commands corresponding to a photo shooting mode, a video shooting mode, and a return mode; and a driving controller that drives the unmanned aerial vehicle according to the driving commands of the object tracking unit and the motion recognition unit.
- TLD: tracking-learning-detection
- The unmanned aerial vehicle having an automatic tracking function may acquire images of the surroundings of the tracking object while rotating in the direction in which the tracking object moves; if the tracking object is not found in the image, it may be recognized using a single panoramic image obtained while rotating the unmanned aerial vehicle in place.
- The object recognition unit may detect a specific region using Haar-like features in the image acquired through the image input unit, increase the judgment rate of the detected specific region using an AdaBoost learning algorithm, and set the coordinates of the window corresponding to the detected specific region to recognize it as the tracking object.
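As an editorial illustration (not part of the patent disclosure), the Haar-like feature evaluation described above is typically implemented with an integral image, which makes every rectangle sum a constant-time lookup. The function names and the two-rectangle feature below are assumptions chosen for the sketch.

```python
# Minimal sketch of a two-rectangle Haar-like feature evaluated via an
# integral image (summed-area table). The image is a plain list of lists
# of grayscale values; all names here are illustrative.

def integral_image(img):
    """ii[y][x] holds the sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_vertical(ii, x, y, w, h):
    """Upper half minus lower half: responds strongly to horizontal edges."""
    half = h // 2
    return rect_sum(ii, x, y, w, half) - rect_sum(ii, x, y + half, w, half)
```

Because each feature costs only a handful of table lookups regardless of its size, thousands of such features can be evaluated per window, which is what makes detectors of this style fast enough for on-board use.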
- The specific region may be a region corresponding to the upper body and the face.
- The object tracking unit may include: a tracker that can increase tracking performance for the tracking object using an extended Kalman filter; a detector that generates a plurality of search windows over the entire input image and determines whether the tracking object exists in each search window using a stepwise classification algorithm; a learner that can improve the performance of the detector using a semi-supervised learning algorithm on the continuous input images; and an integrator that combines the results of the tracker and the detector to finally determine the position of the tracking object and transfers the corresponding coordinates to the driving controller.
- The classification algorithm applied to the detector may enable real-time processing by using a cascade-structured classification algorithm comprising a variance (dispersion) filter, an ensemble classifier, and a nearest-neighbor classifier.
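The cascade idea above can be sketched as follows; this is an illustrative stand-in, not the patent's classifiers. Each stage internals (the brightness vote, the distance threshold) are placeholder assumptions; the point is the structure, in which cheap stages reject most windows before the expensive nearest-neighbor comparison runs.

```python
# Toy cascade over a flattened grayscale window (a list of pixel values):
# variance filter -> ensemble vote -> nearest-neighbour comparison.
# Most candidate windows fail the cheap first stage, enabling real time.

def variance_filter(window, min_var=5.0):
    n = len(window)
    mean = sum(window) / n
    var = sum((p - mean) ** 2 for p in window) / n
    return var >= min_var              # flat patches are rejected immediately

def ensemble_classifier(window, threshold=0.5):
    # Stand-in for an ensemble vote: fraction of bright pixels.
    votes = sum(1 for p in window if p > 128) / len(window)
    return votes >= threshold

def nearest_neighbour(window, positives, max_dist=60.0):
    def dist(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return any(dist(window, p) <= max_dist for p in positives)

def cascade(window, positives):
    """A window is accepted only if it survives every stage."""
    return (variance_filter(window)
            and ensemble_classifier(window)
            and nearest_neighbour(window, positives))
```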
- The unmanned aerial vehicle having an automatic tracking function according to an embodiment of the present invention may further include a distance measuring unit that stores a scale value for the image data of the tracking object recognized by the object recognition unit and compares the amount of change in the size of the tracking object, thereby maintaining a constant distance between the tracking object and the unmanned aerial vehicle.
- The unmanned aerial vehicle having an automatic tracking function may further include an image storage unit that stores an image file photographed in the photo shooting mode and a movie file photographed in the video shooting mode, corresponding to the operation mode recognized by the motion recognition unit.
- A control method of an unmanned aerial vehicle having an automatic tracking function according to an embodiment of the present invention is a control method of an unmanned aerial vehicle that automatically tracks a subject to be photographed, and includes: a photographing step of photographing the surroundings of the subject; a specific region detection step of detecting a face region and an upper body region in the image acquired in the photographing step; an object recognition step of measuring the coordinates of the window corresponding to the detected face region and upper body region and recognizing it as a tracking object; an object tracking step of controlling the unmanned aerial vehicle so that it tracks the tracking object recognized in the object recognition step at a predetermined distance; a motion recognition step of recognizing motions of the tracking object to determine a photo shooting mode, a video shooting mode, and a return mode; and an unmanned aerial vehicle driving step of driving the unmanned aerial vehicle according to the operation mode recognized in the motion recognition step.
- The photographing step may include: acquiring images of the surroundings of the tracking object while rotating the unmanned aerial vehicle in the direction in which the tracking object moves; and photographing while rotating the unmanned aerial vehicle in place when the tracking object is not found in the image.
- In the specific region detection step, a specific region corresponding to the face region and the upper body region may be detected using Haar-like features in the image acquired in the photographing step, and the judgment rate of the detected specific region may be increased using an AdaBoost learning algorithm.
- In the object tracking step, an extended Kalman filter may be used to increase tracking performance for the tracking object; a plurality of search windows may be generated over the entire input image, and whether the tracking object exists in each search window may be determined using a cascade-structured classification algorithm comprising a variance filter, an ensemble classifier, and a nearest-neighbor classifier, whose performance is improved using a semi-supervised learning algorithm.
- The object tracking step may include: storing a scale value of the image data of the tracking object; detecting and storing the amount of change in the size of the tracking object using the scale value; calculating movement coordinates of the unmanned aerial vehicle from the size change of the tracking object; and tracking while maintaining a constant distance from the tracking object using the movement coordinates of the unmanned aerial vehicle.
- The present invention enables autonomous flight of the unmanned aerial vehicle by recognizing and automatically tracking a specific object using external image information input to the camera mounted on the unmanned aerial vehicle; a separate, expensive tracking-guidance device is therefore unnecessary, which reduces the manufacturing cost of the unmanned aerial vehicle.
- In addition, because the unmanned aerial vehicle is capable of autonomous flight, the present invention does not require the user terminal with which ordinary users have manually operated such vehicles, thereby preventing safety accidents and device damage caused by a user's inexperienced operation.
- Furthermore, because the present invention finds the tracking target using a single panoramic image made by successively joining a plurality of images photographed during rotation, the operation speed and recognition rate may be improved.
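The joining of successive frames into one panorama can be sketched in a very reduced form: consecutive frames taken while rotating overlap horizontally, and stitching amounts to finding that overlap and keeping each new frame's remaining columns. Real image stitching also warps and blends the frames; this grayscale toy only searches for the best column offset, and all names are illustrative.

```python
# Toy horizontal stitching of two grayscale images (lists of pixel rows).

def find_overlap(left, right, min_overlap=1):
    """Number of the right image's leading columns that best match
    the left image's trailing columns (lowest mean squared error)."""
    h, w = len(left), len(left[0])
    best_ov, best_err = min_overlap, float("inf")
    for ov in range(min_overlap, min(w, len(right[0])) + 1):
        err = sum((left[y][w - ov + x] - right[y][x]) ** 2
                  for y in range(h) for x in range(ov))
        err /= ov                      # normalise by overlap width
        if err < best_err:
            best_err, best_ov = err, ov
    return best_ov

def stitch(left, right):
    """Append the right image's non-overlapping columns to the left image."""
    ov = find_overlap(left, right)
    return [lrow + rrow[ov:] for lrow, rrow in zip(left, right)]
```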
- The present invention also provides an unmanned aerial vehicle that can automatically track a specific target without user control, by automatically measuring the coordinates of the face and upper body of a person when they are recognized and identifying them as the tracking object.
- The present invention can also process the image input to the camera mounted on the unmanned aerial vehicle without additional components, so that it not only specifies and automatically tracks the target but also implements a convenient interface for transmitting control commands between the user and the unmanned aerial vehicle.
- FIG. 1 is a schematic diagram illustrating a camera mounted on an unmanned aerial vehicle according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing the configuration of an unmanned aerial vehicle according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating a control method of the unmanned aerial vehicle according to the embodiment of the present invention.
- FIG. 4 is a flowchart for explaining a photographing step of the flowchart illustrated in FIG. 3.
- FIG. 5 is a schematic diagram for explaining an image stitching technique.
- FIG. 6 is a flowchart illustrating a specific region detection step and an object recognition step of the flowchart shown in FIG. 3.
- FIG. 7 is a schematic diagram illustrating the Haar-like feature technique applied to the object recognition unit according to an embodiment of the present invention.
- FIG. 8 is a flowchart illustrating an object tracking step of the flowchart illustrated in FIG. 3.
- FIG. 9 is a flowchart illustrating a stepwise classification algorithm applied to the detector shown in FIG. 8.
- FIG. 10 is a flowchart for describing an operation of the distance measuring unit illustrated in FIG. 2.
- FIG. 11 is a flowchart illustrating an operation recognition step illustrated in FIG. 3.
- The terms "first" and "second" are intended to distinguish one component from another, and the scope of rights should not be limited by these terms. For example, a first component may be named a second component, and similarly, a second component may be named a first component.
- Identification codes (e.g., a, b, c) in each step are used for convenience of description and do not describe the order of the steps; unless a specific order is clearly stated in context, the steps may occur out of the order noted. That is, steps may occur in the specified order, be performed substantially simultaneously, or be performed in reverse order.
- FIG. 1 is a schematic diagram illustrating a camera mounted on an unmanned aerial vehicle according to an embodiment of the present invention.
- An unmanned aerial vehicle having an automatic tracking function relates to an unmanned aerial vehicle that automatically tracks a subject to be photographed.
- A camera 10 is mounted on the front and the bottom of the unmanned aerial vehicle 1; the on-board camera 10 may recognize a specific object and perform automatic tracking by using image information acquired by photographing the subject to be photographed.
- Although the camera 10 is shown mounted on the front and the bottom of the unmanned aerial vehicle 1, it is not limited thereto and may be mounted on the rear or the top of the unmanned aerial vehicle 1 depending on the tracking target and environment.
- FIG. 2 is a block diagram showing the configuration of an unmanned aerial vehicle according to an embodiment of the present invention.
- Referring to FIG. 2, the configuration of the unmanned aerial vehicle according to an embodiment of the present invention is described in detail.
- The unmanned aerial vehicle 1 may include an image input unit 100, an object recognition unit 200, an object tracking unit 300, a motion recognition unit 400, a driving controller 500, a distance measuring unit 600, and an image storage unit 700.
- The image input unit 100, as a means for acquiring images of the surroundings of the subject to be photographed, may refer to the UAV-mounted camera 10 shown in FIG. 1, or may indicate various other image acquisition means for acquiring a peripheral image of the subject.
- An image of the surroundings of the tracking object can be obtained, and if the tracking object is not found in the image, it may be recognized using a single panoramic image acquired while rotating the unmanned aerial vehicle in place.
- The object recognition unit 200 extracts a region of interest from the image acquired through the image input unit 100, detects a specific region located in the region of interest, measures its coordinates, and may recognize the specific region as the tracking object.
- The object recognition unit 200 detects a specific region in the image acquired through the image input unit using Haar-like features, may increase the judgment rate of the specific region detected using an AdaBoost learning algorithm, and may set the coordinates of the window corresponding to the detected specific region to recognize it as the tracking object.
- The specific region may be a region corresponding to the upper body and the face; the Haar-like feature and the AdaBoost learning algorithm applied in the embodiment of the present invention are among the well-known face detection algorithms.
- The Haar-like feature is a technique mainly used for face searching, and there are many prototypes; efficient face detection is achieved by using only those prototypes that express the face well, as selected by the AdaBoost learning algorithm.
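How AdaBoost picks out the few prototypes that "express the face well" can be sketched in miniature (an editorial illustration, not the patent's training procedure): each weak learner is one feature compared against a threshold, and boosting reweights the training examples so that each round selects the feature that best fixes the previous rounds' mistakes. All names and the stump form are assumptions for the example.

```python
# Minimal AdaBoost with decision stumps over precomputed feature vectors.

import math

def train_adaboost(samples, labels, n_features, rounds):
    """samples: list of feature vectors; labels: +1 / -1 per sample."""
    n = len(samples)
    weights = [1.0 / n] * n
    chosen = []                               # (feature index, threshold, alpha)
    for _ in range(rounds):
        best = None                           # (weighted error, feat, thresh, preds)
        for f in range(n_features):
            for thresh in sorted({s[f] for s in samples}):
                preds = [1 if s[f] >= thresh else -1 for s in samples]
                err = sum(w for w, p, y in zip(weights, preds, labels) if p != y)
                if best is None or err < best[0]:
                    best = (err, f, thresh, preds)
        err, f, thresh, preds = best
        err = max(err, 1e-10)                 # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        chosen.append((f, thresh, alpha))
        weights = [w * math.exp(-alpha * y * p)
                   for w, y, p in zip(weights, labels, preds)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return chosen

def predict(chosen, sample):
    """Sign of the alpha-weighted vote of the selected stumps."""
    score = sum(alpha * (1 if sample[f] >= t else -1) for f, t, alpha in chosen)
    return 1 if score >= 0 else -1
```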
- The object tracking unit 300 calculates and tracks the position of the tracking object recognized by the object recognition unit 200 using a tracking-learning-detection (TLD) learning algorithm, and at the same time can generate a driving command for driving the unmanned aerial vehicle accordingly.
- The object tracking unit 300 may include a tracker 310 that increases tracking performance for the tracking object using an extended Kalman filter; a detector 320 that generates a plurality of search windows over the entire input image and determines whether the tracking object exists in each search window using a stepwise classification algorithm; a learner 330 that can improve the performance of the detector using a semi-supervised learning algorithm on the continuous input images; and an integrator 340 that combines the results of the tracker 310 and the detector 320 to finally determine the position of the tracking object and transmits the corresponding coordinates to the driving controller 500.
- The stepwise classification algorithm applied to the detector 320 may enable real-time processing by using a cascade-structured classification algorithm comprising a variance filter 321, an ensemble classifier 322, and a nearest-neighbor classifier 323.
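The filtering performed by the tracker 310 can be illustrated with a pared-down, linear stand-in for the extended Kalman filter: a constant-velocity state [position, velocity] per image axis, predicted each frame and corrected with the detected position. The class name and noise values are assumptions for the sketch, not the patent's filter.

```python
# One-axis constant-velocity Kalman filter (linear sketch of the EKF idea).

class Kalman1D:
    def __init__(self, pos, q=1e-3, r=1.0):
        self.x = [pos, 0.0]                    # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r                  # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]
        P = self.P
        # P = F P F^T + Q  with  F = [[1, dt], [0, 1]]
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # Measurement model H = [1, 0]: we observe position only.
        s = self.P[0][0] + self.r              # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        p00, p01 = (1 - k0) * self.P[0][0], (1 - k0) * self.P[0][1]
        p10 = self.P[1][0] - k1 * self.P[0][0]
        p11 = self.P[1][1] - k1 * self.P[0][1]
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]
```

Running one filter per image axis, the predicted position bridges frames where the detector misses, and the velocity estimate smooths jitter in the detections.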
- The motion recognition unit 400 may recognize motions of the tracking object and generate driving commands corresponding to a photo shooting mode, a video shooting mode, and a return mode.
- The driving controller 500 may drive the unmanned aerial vehicle according to the driving commands generated by the object tracking unit 300 and the motion recognition unit 400.
- The driving commands may include moving direction, moving distance, rotation of the unmanned aerial vehicle, rising and falling in altitude, landing, taking off, taking photographs with the camera, and shooting video.
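The command set listed above can be sketched as a small enumeration dispatched by a driving controller; the member names and handler are illustrative only, not the patent's interface.

```python
# Illustrative driving-command set and a toy controller that records
# what it was asked to do.

from enum import Enum, auto

class DriveCommand(Enum):
    MOVE = auto()          # move in a direction for a distance
    ROTATE = auto()
    CLIMB = auto()         # altitude change, signed
    LAND = auto()
    TAKE_OFF = auto()
    TAKE_PHOTO = auto()
    RECORD_VIDEO = auto()

def drive(command, log, **kwargs):
    """Toy driving controller: appends (command name, parameters) to log."""
    log.append((command.name, kwargs))
    return log[-1]
```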
- The distance measuring unit 600 stores a scale value for the image data of the tracking object recognized by the object recognition unit 200 and compares the amount of change in the size of the tracking object, thereby maintaining a constant distance between the tracking object and the unmanned aerial vehicle.
- the image storage unit 700 may store an image file photographed in the photographing mode and a video file photographed in the video photographing mode corresponding to the operation mode recognized by the motion recognition unit 400.
- FIG. 3 is a flowchart illustrating a control method of the unmanned aerial vehicle according to the embodiment of the present invention.
- The control method of the unmanned aerial vehicle having an automatic tracking function according to an embodiment of the present invention may include a photographing step (S10), a specific region detection step (S20), an object recognition step (S30), an object tracking step (S40), a motion recognition step (S50), and an unmanned aerial vehicle driving step (S60).
- The photographing step (S10) is a step of photographing the surroundings of the subject to be photographed, and may include acquiring images while rotating the unmanned aerial vehicle in the direction in which the tracking object corresponding to the subject moves, and taking pictures while rotating the unmanned aerial vehicle in place when the tracking object is not found in the image.
- the specific region detecting step (S20) may detect a face region and an upper body region from the image acquired in the photographing step.
- the specific region detecting step (S20) detects a specific region corresponding to the face region and the upper body region from the image acquired in the photographing step (S10) using a Haar-like feature, and then applies the AdaBoost learning algorithm to increase the detection accuracy for the detected specific region.
- the object recognition step (S30) may measure the coordinates of the window corresponding to the face and upper body regions detected as the specific region and recognize it as the tracking target object.
- the object tracking step S40 may be a step of controlling the unmanned aerial vehicle so as to track the tracked object recognized in the object recognition step S30 at a predetermined distance.
- the motion recognition step (S50) may recognize the motion of the tracking target object to determine the photo shooting mode, the video shooting mode, or the return mode.
- the unmanned aerial vehicle driving step S60 may be a step of driving the unmanned aerial vehicle corresponding to the operation mode recognized in the operation recognition step.
- FIG. 4 is a flowchart schematically illustrating the photographing step of the flowchart illustrated in FIG. 3, and FIG. 5 is a schematic diagram describing the image stitching technique used in the image synthesis illustrated in FIG. 4.
- the photographing step S10 illustrated in FIG. 3 will be described with reference to FIGS. 4 to 5 as follows.
- in general, tracking is performed while the unmanned aerial vehicle rotates in the direction in which the tracking target object corresponding to the photographing subject moves, acquiring images of the surroundings of the tracking target object.
- when the tracking target object cannot be found in the image, it is recognized again while the unmanned aerial vehicle rotates in place.
- when the object recognition unit 200, which recognizes the tracking target object from the image input to the image input unit 100, does not detect a specific region corresponding to the tracking target object, pictures are taken using gyro sensor information (S11) while the unmanned aerial vehicle rotates in place (S12) so that the tracking target object can be recognized.
- by searching a single panorama composed from the successively photographed images, the computation speed and recognition rate can be increased.
- FIG. 5 illustrates the above-described image stitching technique. According to an embodiment of the present invention, the camera 10 first takes photos continuously while the unmanned aerial vehicle rotates in place at constant rotation-angle intervals measured with the gyro sensor, and the photos are stored in a memory (not shown) of the image input unit 100.
- the continuously photographed images may then be combined using an image stitching technique, which is mainly used to generate a panorama image from all of the continuously photographed images.
- the above-described image stitching technique works as follows: FAST feature points and their feature descriptors are computed for all images obtained by successive photographing; the Hamming distances between the feature descriptors of consecutive images are calculated, and the closest feature points are set as matching candidates; outliers among the matching candidates are removed using normalized cross correlation (NCC) and gyroscope data; the homography between the matching candidates is then computed using RANSAC (RANdom SAmple Consensus) (see paper [3]); finally, the images are synthesized by applying the alpha blending algorithm used to generate panorama images.
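The descriptor-matching step above can be sketched in a few lines. This is an illustrative toy, not the patent's implementation: the 8-bit descriptor values and the brute-force search are invented for clarity. Each binary descriptor from one frame is paired with the nearest descriptor of the next frame by Hamming distance, producing the matching candidates that NCC and RANSAC later filter.

```python
# Match binary feature descriptors between two consecutive frames by
# Hamming distance, as done before the RANSAC homography step.

def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed into ints."""
    return bin(d1 ^ d2).count("1")

def match_candidates(desc_a, desc_b):
    """For each descriptor of image A, pick the closest descriptor of image B."""
    matches = []
    for i, da in enumerate(desc_a):
        j, dist = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                      key=lambda t: t[1])
        matches.append((i, j, dist))
    return matches

# Toy 8-bit descriptors standing in for real BRIEF-style binary descriptors.
a = [0b10110010, 0b01001101]
b = [0b10110011, 0b11110000, 0b01001100]
print(match_candidates(a, b))  # [(0, 0, 1), (1, 2, 1)]
```

In a real pipeline these candidates would still be screened with NCC and gyroscope data, then fed to RANSAC to estimate the homography.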
- FIG. 6 is a flowchart illustrating the specific region detection step and the object recognition step of the flowchart shown in FIG. 3, and FIG. 7 is a schematic diagram illustrating the Haar-like feature technique applied to the object recognition unit according to an embodiment of the present invention.
- in the specific region detecting step (S20) of FIG. 3, a region of interest (ROI) is first extracted from the image acquired by the image input unit 100 (S21), Haar-like features are then computed (S23), and a window corresponding to the face and upper body regions is searched for (S24).
- the Haar-like feature is composed of two or more adjacent rectangular regions with simple features, and its value may be defined as the difference in brightness between the regions.
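The rectangle-difference definition above can be made concrete with an integral image (summed-area table), which is how Haar-like features are normally evaluated in constant time per rectangle. The image and feature geometry below are made-up examples, not values from the patent:

```python
# Two-rectangle Haar-like feature: the brightness sum of one rectangle
# minus that of the adjacent rectangle, computed via an integral image.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle, in O(1) using four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left half minus right half: responds to vertical brightness edges."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Bright-left / dark-right test image -> strongly positive feature value.
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 4, 4))  # (9*8) - (1*8) = 64
```

The detector evaluates thousands of such features per window, which is why the O(1) rectangle sum matters.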
- the object recognition step (S30) of FIG. 3 may set the coordinates of the searched window and recognize it as the tracking target object.
- AdaBoost learning for object recognition is shown (for the algorithm, see reference paper [4]).
- in operation S22, the model produced by the face and upper body model learner is applied.
- the object recognition unit 200 of FIG. 2 may use a classifier to separate the object from the background.
- the classifier may be trained on object samples and background samples using the above-described AdaBoost learning algorithm.
- the AdaBoost learning algorithm constructs a strong classifier by combining weak classifiers: the weights of the samples correctly classified by a weak classifier are lowered while the weights of misclassified samples are raised, and this is repeated to build up the strong classifier. The process can be repeated until the performance of the classifier reaches the target; after measuring the coordinates of the window corresponding to the recognized face and upper body regions, the window is designated as the tracking target object.
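The re-weighting rule described above can be written out directly. A minimal sketch of one boosting round (the sample labels and predictions are invented for illustration):

```python
# One AdaBoost round: misclassified samples get higher weight, correctly
# classified ones lower, and each weak classifier earns a vote weight alpha.

import math

def adaboost_round(weights, y_true, y_pred):
    """Return (weak-classifier vote weight alpha, updated sample weights)."""
    err = sum(w for w, t, p in zip(weights, y_true, y_pred) if t != p)
    alpha = 0.5 * math.log((1 - err) / err)          # weak classifier's vote weight
    new_w = [w * math.exp(-alpha if t == p else alpha)
             for w, t, p in zip(weights, y_true, y_pred)]
    z = sum(new_w)                                   # renormalize to a distribution
    return alpha, [w / z for w in new_w]

y = [+1, +1, -1, -1]
pred = [+1, -1, -1, -1]                  # one mistake, on sample index 1
alpha, w = adaboost_round([0.25] * 4, y, pred)
print(round(alpha, 3), [round(x, 3) for x in w])  # missed sample's weight rises to 0.5
```

The strong classifier is then the sign of the alpha-weighted sum of the weak classifiers' votes; rounds repeat until the target accuracy is reached.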
- FIG. 8 is a flowchart illustrating the object tracking step of the flowchart illustrated in FIG. 3, and FIG. 9 is a flowchart illustrating the stepwise classification algorithm applied to the detector illustrated in FIG. 8.
- in the object tracking step (S40) of the embodiment of the present invention, an extended Kalman filter is used to increase tracking performance for the tracking target object (S41).
- a plurality of search windows is generated over the entire input image, and the presence of the tracking target object in each search window is determined (S42) using a stepwise classification algorithm with a cascade structure comprising a variance filter, an ensemble classifier, and a nearest neighbor classifier.
- performance may be further improved by applying a semi-supervised learning algorithm to the continuously input images (S43).
- the object tracking step (S40) may use a TLD (Tracking-Learning-Detection) learning algorithm (see reference papers [5] and [6]).
- when the tracking target object is detected using the result of the object recognition unit 200 described above, it may be tracked using the TLD learning algorithm.
- the TLD learning algorithm basically comprises a tracker 310, a detector 320, a learner 330, and an integrator 340.
- the tracker 310 may use a median filter, but an extended Kalman filter may be used instead to increase tracking performance.
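As a rough illustration of the filtering idea only: the patent does not give its state model, so the 1-D position model below with a known, fixed velocity is an assumption, under which the extended Kalman filter reduces to the ordinary linear Kalman filter.

```python
# One predict+update cycle of a 1-D Kalman filter: predict the position
# from the motion model, then correct it toward the noisy measurement z.
# Velocity is treated as known and constant for brevity.

def kalman_step(x, v, p, z, q=0.01, r=1.0, dt=1.0):
    """State: position x with variance p; returns the updated (x, v, p)."""
    x_pred = x + v * dt          # predict position from the motion model
    p_pred = p + q               # process noise grows the uncertainty
    k = p_pred / (p_pred + r)    # Kalman gain: trust in the measurement
    x_new = x_pred + k * (z - x_pred)   # correct toward the measurement
    return x_new, v, (1 - k) * p_pred

x, v, p = 0.0, 1.0, 1.0
for z in [1.2, 1.9, 3.1]:        # noisy position measurements
    x, v, p = kalman_step(x, v, p, z)
print(round(x, 2))               # estimate near the last measurement (~3.05)
```

In the tracker's 2-D case the same predict/update structure applies to the bounding-box center, with the gain computed from matrix covariances.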
- the detector 320 may be configured as a stepwise detector.
- the detector 320 may create search windows over the entire input image and determine whether the subject exists in each search window.
- the number of windows for searching all areas of the input image may be determined according to the size of the initial bounding box.
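For intuition, the relationship between the initial bounding-box size and the number of search windows can be sketched as follows; the 10% step size and the image dimensions are assumptions for illustration, not values from the patent:

```python
# Count the sliding windows needed to scan the whole image: the step is
# proportional to the box size, so a larger initial bounding box means
# far fewer windows for the detector to evaluate.

def count_windows(img_w, img_h, box_w, box_h, step_frac=0.1):
    """Slide a box over the image with a step of 10% of the box size."""
    sx = max(1, int(box_w * step_frac))
    sy = max(1, int(box_h * step_frac))
    nx = (img_w - box_w) // sx + 1
    ny = (img_h - box_h) // sy + 1
    return nx * ny

print(count_windows(640, 480, 40, 40))    # small box: 16761 windows
print(count_windows(640, 480, 160, 160))  # large box: 651 windows
```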
- the detector 320 may use a classifier having a cascade structure to determine the presence or absence of the subject in real time in the search window.
- a three-stage classifier comprising a variance filter 321, an ensemble classifier 322, and a nearest neighbor classifier 323 may be used, and the windows that pass all stages are transmitted to the integrator 340.
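The reject-early idea of the cascade can be sketched with two of its three stages; the ensemble classifier is omitted for brevity, and the thresholds, patches, and template below are invented. Stage 1 is a variance filter (flat patches cannot contain the subject); the final stage is a nearest-neighbor check by normalized cross correlation (NCC) against a stored template.

```python
# Cascade sketch: cheap stages reject most windows so the expensive
# nearest-neighbor NCC comparison only runs on promising candidates.

from statistics import mean, pvariance

def ncc(a, b):
    """Normalized cross correlation between two equal-length patches."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def cascade_accepts(patch, template, min_var=4.0, min_ncc=0.9):
    if pvariance(patch) < min_var:           # stage 1: variance filter
        return False
    return ncc(patch, template) >= min_ncc   # final stage: nearest-neighbor NCC

template = [10, 20, 30, 40]
print(cascade_accepts([5, 5, 5, 5], template))      # False: flat, rejected early
print(cascade_accepts([11, 21, 29, 41], template))  # True: close to the template
```

The same ordering principle (cheapest test first) is what makes real-time scanning of thousands of windows feasible.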
- the learner 330 improves the performance of the detector 320 online over the continuously input images.
- it may use a semi-supervised learning algorithm that trains on both labeled and unlabeled input data.
- the P-N learner shown in the figure can estimate classifier errors independently through two types of patches and retrain the classifier using the misclassified examples as training data.
- positive learning patches exploit the temporal structure of the continuous video image, assuming that the subject moves along a continuous path.
- the position of the object in the previous frame may be stored, and the position may be tracked between frames using the tracker 310.
- when the reliability of the patch from the tracker 310 is high, the path may be judged valid; when the detector 320 produces a negative on a valid movement path (a false negative), a positive learning patch is generated, and when a false positive is detected, a negative learning patch is generated.
- when the position of the subject lies on the valid movement path, its surroundings can be turned into negative learning patches.
- the integrator 340 may integrate the results of the tracker 310 and the detector 320 to make the final determination of the subject's position, and then transfer the corresponding coordinates to the driving controller 500.
- FIG. 10 is a flowchart illustrating the operation of the distance measuring unit illustrated in FIG. 2, showing the process for tracking the tracking target object at a predetermined distance in the above-described object tracking step (S40).
- when the image A of the specific region detected by the object recognition unit 200 changes to an image B, a perspective-transformed image B' is generated, image A is likewise perspective-transformed into an image A', and the ratio of the scale value of image A' to the scale value of image B' is calculated.
- using this ratio, the unmanned aerial vehicle can always be kept at the reference distance.
- as the error margin, coordinates within twice the width, length, and height of the unmanned aerial vehicle were treated as the same position.
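Under a pinhole-camera assumption (not stated in the patent, but consistent with comparing scale values), the apparent size of the object varies inversely with its distance, so the stored reference scale and the current scale give both an estimated distance and a forward/backward command. The tolerance value and function below are illustrative assumptions:

```python
# Scale-ratio distance keeping: a bigger apparent object means the UAV is
# too close (back off); a smaller one means it is too far (close in).

def distance_command(ref_scale: float, cur_scale: float, ref_dist: float,
                     tol: float = 0.05):
    """Return (estimated distance, 'forward'/'backward'/'hold')."""
    est_dist = ref_dist * ref_scale / cur_scale   # inverse-size distance model
    if cur_scale > ref_scale * (1 + tol):
        return est_dist, "backward"               # too close: back off
    if cur_scale < ref_scale * (1 - tol):
        return est_dist, "forward"                # too far: close in
    return est_dist, "hold"                       # within tolerance band

print(distance_command(100.0, 125.0, ref_dist=4.0))  # (3.2, 'backward')
print(distance_command(100.0, 101.0, ref_dist=4.0))  # within tolerance: 'hold'
```

The tolerance band plays the role of the error margin described above, preventing the UAV from oscillating around the reference point.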
- FIG. 11 is a flowchart illustrating an operation recognition step illustrated in FIG. 3.
- the motion recognition unit 400 may determine each operation mode.
- the motion recognition unit 400 may use the same determination method as in the object tracking step (S40); for a detailed description, refer to the figures described above.
- when the photo shooting mode is recognized, the driving controller 500 controls the image storage unit 700 so that the image input from the image input unit 100 is compressed and saved as JPEG.
- when the video shooting mode is recognized, video is recorded for the time set by the time management means 410, and the driving controller 500 controls the image storage unit 700 to compress and store the video input from the image input unit 100.
- the present invention enables autonomous flight of the unmanned aerial vehicle by recognizing and automatically tracking a specific object using external image information input to the camera mounted on the unmanned aerial vehicle; since an expensive tracking guidance apparatus is not separately required, the manufacturing cost of the unmanned aerial vehicle can be reduced.
- because the unmanned aerial vehicle of the present invention is capable of autonomous flight, it does not require the user terminal conventionally used by ordinary users for manual operation, thereby preventing safety accidents and device damage caused by inexperienced operation.
- because the present invention finds the tracking target using a single panoramic image made by successively joining a plurality of images photographed during rotation, the computation speed and recognition rate can be improved.
- the present invention provides an unmanned aerial vehicle that can automatically track a specific target without user control, by automatically measuring the coordinates of the region recognized as the face and upper body of a person and designating it as the tracking target object.
- the present invention processes the image input to the camera mounted on the unmanned aerial vehicle without additional hardware, so that an object can be designated and automatically tracked and, at the same time, a convenient interface for transmitting control commands between the user and the unmanned aerial vehicle can be implemented.
- the present invention can be used in the field of an unmanned aerial vehicle having an automatic tracking function and a control method thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Remote Sensing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radar, Positioning & Navigation (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- Automation & Control Theory (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Astronomy & Astrophysics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Analysis (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
Description
Claims (13)
- In an unmanned aerial vehicle that automatically tracks a photographing subject, the unmanned aerial vehicle having an automatic tracking function comprising: an image input unit that acquires images of the surroundings of the photographing subject; an object recognition unit that extracts a region of interest using the image acquired through the image input unit, detects a specific region located within the region of interest, measures its coordinates, and then recognizes the specific region as a tracking target object; an object tracking unit that calculates and tracks the position of the tracking target object recognized by the object recognition unit using a TLD (Tracking Learning Detection) learning algorithm and, at the same time, generates a driving command for driving the unmanned aerial vehicle correspondingly; a motion recognition unit that recognizes a motion of the tracking target object and generates a driving command corresponding to a photo shooting mode, a video shooting mode, and a return mode; and a driving controller that drives the unmanned aerial vehicle according to the respective driving commands of the object tracking unit and the motion recognition unit.
- The unmanned aerial vehicle having an automatic tracking function of claim 1, wherein images of the surroundings of the tracking target object can be acquired while the unmanned aerial vehicle rotates in the direction in which the tracking target object moves, and when the tracking target object is not found in the image, the tracking target object is recognized using a single panoramic image acquired while the unmanned aerial vehicle rotates in place.
- The unmanned aerial vehicle having an automatic tracking function of claim 1, wherein the object recognition unit detects a specific region from the image acquired through the image input unit using a Haar-like feature, increases the detection accuracy for the detected specific region using the AdaBoost learning algorithm, and sets the coordinates of a window corresponding to the detected specific region to recognize it as the tracking target object.
- The unmanned aerial vehicle having an automatic tracking function of claim 3, wherein, when the tracking target object is a person, the specific region is the region corresponding to the face and upper body.
- The unmanned aerial vehicle having an automatic tracking function of claim 1, wherein the object tracking unit comprises: a tracker capable of increasing tracking performance for the tracking target object using an extended Kalman filter; a detector that generates a plurality of search windows over the entire input image and determines whether the tracking target object exists in each search window using a stepwise classification algorithm; a learner capable of improving the performance of the detector using a semi-supervised learning algorithm on continuously input images; and an integrator that integrates the results from the tracker and the detector to make the final determination of the position of the tracking target object and then transfers the corresponding coordinates to the driving controller.
- The unmanned aerial vehicle having an automatic tracking function of claim 5, wherein the stepwise classification algorithm applied to the detector uses a cascade-structured classification algorithm comprising a variance filter, an ensemble classifier, and a nearest neighbor classifier, enabling real-time processing.
- The unmanned aerial vehicle having an automatic tracking function of claim 1, further comprising a distance measuring unit that stores a scale value for the image data of the tracking target object recognized by the object recognition unit and compares the amount of change in the size of the tracking target object, thereby maintaining a constant distance between the tracking target object and the unmanned aerial vehicle.
- The unmanned aerial vehicle having an automatic tracking function of claim 1, further comprising an image storage unit that stores image files photographed in the photo shooting mode and video files photographed in the video shooting mode, corresponding to the operation mode recognized by the motion recognition unit.
- In a control method of an unmanned aerial vehicle that automatically tracks a photographing subject, the control method comprising: a photographing step of photographing the surroundings of the photographing subject; a specific region detection step of detecting a face region and an upper body region from the image acquired in the photographing step; an object recognition step of measuring the coordinates of a window corresponding to the face region and the upper body region detected as the specific region and recognizing it as the tracking target object; an object tracking step of controlling the unmanned aerial vehicle so that the tracking target object recognized in the object recognition step is tracked at a constant distance; a motion recognition step of recognizing a motion of the tracking target object and determining a photo shooting mode, a video shooting mode, and a return mode; and an unmanned aerial vehicle driving step of driving the unmanned aerial vehicle corresponding to the operation mode recognized in the motion recognition step.
- The control method of claim 9, wherein the photographing step comprises: acquiring images of the surroundings of the tracking target object while rotating the unmanned aerial vehicle in the direction in which the tracking target object moves; and photographing while rotating the unmanned aerial vehicle in place when the tracking target object is not found in the image.
- The control method of claim 9, wherein the specific region detection step detects specific regions corresponding to the face region and the upper body region from the image acquired in the photographing step using a Haar-like feature, and increases the detection accuracy for the detected specific region using the AdaBoost learning algorithm.
- The control method of claim 9, wherein the object tracking step increases tracking performance for the tracking target object using an extended Kalman filter, generates a plurality of search windows over the entire input image, determines whether the tracking target object exists in each search window using a cascade-structured stepwise classification algorithm comprising a variance filter, an ensemble classifier, and a nearest neighbor classifier, and improves performance using a semi-supervised learning algorithm on continuously input images.
- The control method of claim 12, wherein the object tracking step comprises: storing a scale value for the image data of the tracking target object; detecting and storing the amount of change in the size of the tracking target object using the scale value; calculating movement coordinates of the unmanned aerial vehicle using the amount of change in the size of the tracking target object; and tracking while keeping the distance to the tracking target object constant using the movement coordinates of the unmanned aerial vehicle.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/556,203 US10303172B2 (en) | 2015-08-19 | 2016-02-27 | Unmanned aerial vehicle having automatic tracking function and method of controlling the same |
CN201680014180.6A CN107406142A (zh) | 2015-08-19 | 2016-02-27 | 具有自动跟踪功能的无人机及其控制方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150116709A KR101645722B1 (ko) | 2015-08-19 | 2015-08-19 | 자동추적 기능을 갖는 무인항공기 및 그 제어방법 |
KR10-2015-0116709 | 2015-08-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017030259A1 true WO2017030259A1 (ko) | 2017-02-23 |
Family
ID=56711391
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/001970 WO2017030259A1 (ko) | 2015-08-19 | 2016-02-27 | 자동추적 기능을 갖는 무인항공기 및 그 제어방법 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10303172B2 (ko) |
KR (1) | KR101645722B1 (ko) |
CN (1) | CN107406142A (ko) |
WO (1) | WO2017030259A1 (ko) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107426548A (zh) * | 2017-09-07 | 2017-12-01 | 四川智慧鹰航空科技有限公司 | 一种超小型低功耗图像传输设备 |
CN108427960A (zh) * | 2018-02-10 | 2018-08-21 | 南京航空航天大学 | 基于改进Online Boosting和卡尔曼滤波器改进的TLD跟踪方法 |
CN108965689A (zh) * | 2017-05-27 | 2018-12-07 | 昊翔电能运动科技(昆山)有限公司 | 无人机拍摄方法及装置、无人机和地面控制装置 |
WO2023106558A1 (ko) * | 2021-12-07 | 2023-06-15 | 한국전자기술연구원 | 영상정보 기반의 벼 도복 자동 판독 장치 및 방법 |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3054336A1 (fr) * | 2016-07-22 | 2018-01-26 | Parrot Drones | Systeme autonome de prise de vues animees par un drone avec poursuite de cible et localisation amelioree de la cible. |
JP2018037886A (ja) * | 2016-08-31 | 2018-03-08 | 株式会社東芝 | 画像配信装置、画像配信システム、および画像配信方法 |
CN108886572B (zh) * | 2016-11-29 | 2021-08-06 | 深圳市大疆创新科技有限公司 | 调整图像焦点的方法和系统 |
JP6794284B2 (ja) * | 2017-01-31 | 2020-12-02 | キヤノン株式会社 | カメラ機能を有する携帯可能な情報処理装置、その表示制御方法、及びプログラム |
US10375289B2 (en) * | 2017-03-31 | 2019-08-06 | Hangzhou Zero Zero Technology Co., Ltd. | System and method for providing autonomous photography and videography |
CN107024725B (zh) * | 2017-05-31 | 2023-09-22 | 湖南傲英创视信息科技有限公司 | 一种大视场微光低空无人机探测装置 |
US11064184B2 (en) | 2017-08-25 | 2021-07-13 | Aurora Flight Sciences Corporation | Aerial vehicle imaging and targeting system |
US10495421B2 (en) | 2017-08-25 | 2019-12-03 | Aurora Flight Sciences Corporation | Aerial vehicle interception system |
CN109814588A (zh) * | 2017-11-20 | 2019-05-28 | 深圳富泰宏精密工业有限公司 | 飞行器以及应用于飞行器的目标物追踪系统和方法 |
WO2019102498A1 (en) * | 2017-11-27 | 2019-05-31 | Nagarajan Bharath | System and method for tracking performance of physical activity of a user |
KR102045667B1 (ko) * | 2018-02-14 | 2019-12-03 | 동국대학교 산학협력단 | 드론의 제어 신호 생성 방법 및 제어 장치 |
US11205274B2 (en) * | 2018-04-03 | 2021-12-21 | Altumview Systems Inc. | High-performance visual object tracking for embedded vision systems |
EP3821313A4 (en) | 2018-07-12 | 2022-07-27 | Terraclear Inc. | SYSTEM AND METHOD FOR IDENTIFICATION AND COLLECTION OF OBJECTS |
CN110832495A (zh) * | 2018-07-13 | 2020-02-21 | 深圳市大疆创新科技有限公司 | 波浪识别方法、装置、计算机可读存储介质和无人飞行器 |
KR102177655B1 (ko) | 2018-11-14 | 2020-11-11 | 이병섭 | Mvs 기반의 무인항공기를 갖춘 객체 추적 시스템 |
CN109636834A (zh) * | 2018-11-22 | 2019-04-16 | 北京工业大学 | 基于tld改进算法的视频车辆目标跟踪算法 |
CN109857128B (zh) * | 2018-12-18 | 2022-07-15 | 丰翼科技(深圳)有限公司 | 无人机视觉定点降落方法、系统、设备及存储介质 |
US10891490B2 (en) | 2019-01-18 | 2021-01-12 | International Business Machines Corporation | Passive approaching object recognition for enhancing security systems |
CN110008919A (zh) * | 2019-04-09 | 2019-07-12 | 南京工业大学 | 基于视觉的四旋翼无人机人脸识别系统 |
WO2020219692A1 (en) * | 2019-04-25 | 2020-10-29 | Nec Laboratories America, Inc. | Tracking indoor objects with inertial sensor measurements |
KR102297683B1 (ko) * | 2019-07-01 | 2021-09-07 | (주)베이다스 | 복수의 카메라들을 캘리브레이션하는 방법 및 장치 |
CN110852146B (zh) * | 2019-09-23 | 2023-05-16 | 合肥赛为智能有限公司 | 一种无人机图像特征点检测方法 |
WO2021231219A1 (en) * | 2020-05-11 | 2021-11-18 | Canon U.S.A., Inc. | An unmanned autonomous vehicle and method for controlling the same |
CN111812096B (zh) * | 2020-06-02 | 2023-07-07 | 国网浙江嘉善县供电有限公司 | 一种绝缘子电弧烧伤的快速定位智能图像检测方法 |
US11792517B2 (en) * | 2020-09-30 | 2023-10-17 | Snap Inc. | Pose tracking for rolling shutter camera |
CN112966149B (zh) * | 2020-10-20 | 2022-02-18 | 圣凯诺服饰有限公司 | 海量数据实时搜索系统 |
CN112414224A (zh) * | 2020-12-10 | 2021-02-26 | 成都微精控科技有限公司 | 一种针对特定目标的空域安防方法及系统 |
US11445121B2 (en) * | 2020-12-29 | 2022-09-13 | Industrial Technology Research Institute | Movable photographing system and photography composition control method |
CN113721449A (zh) * | 2021-01-05 | 2021-11-30 | 北京理工大学 | 一种多旋翼机控制系统及方法 |
CN112866579B (zh) * | 2021-02-08 | 2022-07-01 | 上海巡智科技有限公司 | 数据采集方法、装置及可读存储介质 |
TWI779850B (zh) | 2021-06-24 | 2022-10-01 | 仁寶電腦工業股份有限公司 | 無人機遊戲之渲染方法 |
CN113837097B (zh) * | 2021-09-26 | 2024-05-07 | 南京航空航天大学 | 一种面向视觉目标识别的无人机边缘计算验证系统和方法 |
CN114066936B (zh) * | 2021-11-06 | 2023-09-12 | 中国电子科技集团公司第五十四研究所 | 一种小目标捕获过程中目标可靠性跟踪方法 |
CN114389623B (zh) * | 2022-03-23 | 2022-07-26 | 湖南华诺星空电子技术有限公司 | 一种穿越机识别驱离方法、系统及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009202737A (ja) * | 2008-02-27 | 2009-09-10 | Mitsubishi Heavy Ind Ltd | 無人航空機及び無人航空機システム |
KR20130067847A (ko) * | 2011-12-14 | 2013-06-25 | 한국전자통신연구원 | 무인 항공기를 이용한 공중 정찰 시스템 및 방법 |
KR101313908B1 (ko) * | 2013-05-10 | 2013-10-01 | 위아코퍼레이션 주식회사 | 레이저 레인지 게이트 방식을 이용한 영상 보안 시스템 |
JP2014119828A (ja) * | 2012-12-13 | 2014-06-30 | Secom Co Ltd | 自律飛行ロボット |
US20140211987A1 (en) * | 2013-01-30 | 2014-07-31 | International Business Machines Corporation | Summarizing salient events in unmanned aerial videos |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003264929A1 (en) | 2002-09-23 | 2004-04-08 | Stefan Reich | Measuring and stabilising system for machine-controllable vehicles |
US7789341B2 (en) | 2004-04-14 | 2010-09-07 | Arlton Paul E | Rotary wing aircraft having a non-rotating structural backbone and a rotor blade pitch controller |
KR100842101B1 (ko) | 2007-06-15 | 2008-06-30 | 주식회사 대한항공 | 영상정보를 이용한 무인항공기 자동회수 방법 |
US9026272B2 (en) | 2007-12-14 | 2015-05-05 | The Boeing Company | Methods for autonomous tracking and surveillance |
KR100947990B1 (ko) | 2008-05-15 | 2010-03-18 | 성균관대학교산학협력단 | 차영상 엔트로피를 이용한 시선 추적 장치 및 그 방법 |
JP4896115B2 (ja) * | 2008-11-21 | 2012-03-14 | 三菱電機株式会社 | 空中移動体からの自動追尾撮影装置 |
KR20100129143A (ko) | 2010-04-12 | 2010-12-08 | 서이환 | 낙하산에 의한 무선조종항공기의 착륙장치 |
KR101157484B1 (ko) | 2010-12-14 | 2012-06-20 | 주식회사 대한항공 | 무인항공기 자동회수 방법 |
US9147260B2 (en) * | 2010-12-20 | 2015-09-29 | International Business Machines Corporation | Detection and tracking of moving objects |
JP2013119328A (ja) * | 2011-12-08 | 2013-06-17 | Canon Inc | 自動追尾カメラシステム |
FR2985581B1 (fr) * | 2012-01-05 | 2014-11-28 | Parrot | Procede de pilotage d'un drone a voilure tournante pour operer une prise de vue par une camera embarquee avec minimisation des mouvements perturbateurs |
KR20130009895A (ko) | 2013-01-05 | 2013-01-23 | 이상윤 | 공간정보기술을 이용한 무인항공기 통합네트워크시스템 |
KR20130086192A (ko) | 2013-06-18 | 2013-07-31 | 이상윤 | 스마트안경을 이용한 무인항공기 제어와 조종시스템 |
US20150134143A1 (en) * | 2013-10-04 | 2015-05-14 | Jim Willenborg | Novel tracking system using unmanned aerial vehicles |
CN104777847A (zh) * | 2014-01-13 | 2015-07-15 | 中南大学 | 基于机器视觉和超宽带定位技术的无人机目标追踪系统 |
JP6574938B2 (ja) * | 2014-05-19 | 2019-09-18 | ソニー株式会社 | 飛行装置および撮像装置 |
US9536320B1 (en) * | 2014-12-23 | 2017-01-03 | John H. Prince | Multiple coordinated detectors for examination and ranging |
CN104796611A (zh) * | 2015-04-20 | 2015-07-22 | 零度智控(北京)智能科技有限公司 | 移动终端遥控无人机实现智能飞行拍摄的方法及系统 |
JP6100868B1 (ja) * | 2015-11-09 | 2017-03-22 | 株式会社プロドローン | 無人移動体の操縦方法および無人移動体監視装置 |
- 2015-08-19 KR KR1020150116709A patent/KR101645722B1/ko active IP Right Grant
- 2016-02-27 US US15/556,203 patent/US10303172B2/en not_active Expired - Fee Related
- 2016-02-27 WO PCT/KR2016/001970 patent/WO2017030259A1/ko active Application Filing
- 2016-02-27 CN CN201680014180.6A patent/CN107406142A/zh active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009202737A (ja) * | 2008-02-27 | 2009-09-10 | Mitsubishi Heavy Ind Ltd | 無人航空機及び無人航空機システム |
KR20130067847A (ko) * | 2011-12-14 | 2013-06-25 | 한국전자통신연구원 | 무인 항공기를 이용한 공중 정찰 시스템 및 방법 |
JP2014119828A (ja) * | 2012-12-13 | 2014-06-30 | Secom Co Ltd | 自律飛行ロボット |
US20140211987A1 (en) * | 2013-01-30 | 2014-07-31 | International Business Machines Corporation | Summarizing salient events in unmanned aerial videos |
KR101313908B1 (ko) * | 2013-05-10 | 2013-10-01 | 위아코퍼레이션 주식회사 | 레이저 레인지 게이트 방식을 이용한 영상 보안 시스템 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108965689A (zh) * | 2017-05-27 | 2018-12-07 | 昊翔电能运动科技(昆山)有限公司 | 无人机拍摄方法及装置、无人机和地面控制装置 |
CN107426548A (zh) * | 2017-09-07 | 2017-12-01 | 四川智慧鹰航空科技有限公司 | 一种超小型低功耗图像传输设备 |
CN108427960A (zh) * | 2018-02-10 | 2018-08-21 | 南京航空航天大学 | 基于改进Online Boosting和卡尔曼滤波器改进的TLD跟踪方法 |
CN108427960B (zh) * | 2018-02-10 | 2020-04-21 | 南京航空航天大学 | 基于改进Online Boosting和卡尔曼滤波器改进的TLD跟踪方法 |
WO2023106558A1 (ko) * | 2021-12-07 | 2023-06-15 | 한국전자기술연구원 | 영상정보 기반의 벼 도복 자동 판독 장치 및 방법 |
Also Published As
Publication number | Publication date |
---|---|
US20180046188A1 (en) | 2018-02-15 |
US10303172B2 (en) | 2019-05-28 |
CN107406142A (zh) | 2017-11-28 |
KR101645722B1 (ko) | 2016-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017030259A1 (ko) | 자동추적 기능을 갖는 무인항공기 및 그 제어방법 | |
WO2020085881A1 (en) | Method and apparatus for image segmentation using an event sensor | |
WO2017049816A1 (zh) | 一种控制无人机随脸转动的方法和装置 | |
WO2012005387A1 (ko) | 다중 카메라와 물체 추적 알고리즘을 이용한 광범위한 지역에서의 물체 이동 감시 방법 및 그 시스템 | |
US9892322B1 (en) | Cascade recognition for personal tracking via unmanned aerial vehicle (UAV) | |
WO2020027607A1 (ko) | 객체 탐지 장치 및 제어 방법 | |
US10019624B2 (en) | Face recognition system and face recognition method | |
US20160232410A1 (en) | Vehicle speed detection | |
KR20170022872A (ko) | 자동추적 기능을 갖는 무인항공기 | |
US20130076943A1 (en) | Apparatus and method for image recognition of facial areas in photographic images from a digital camera | |
WO2017034220A1 (en) | Method of automatically focusing on region of interest by an electronic device | |
WO2017018744A1 (ko) | 무인 스마트카를 이용한 공익서비스 시스템 및 방법 | |
WO2021091021A1 (ko) | 화재 검출 시스템 | |
KR101850534B1 (ko) | 적외선 카메라와 마커를 이용한 영상 촬영 시스템, 방법 및 어플리케이션 | |
WO2016024680A1 (ko) | 주행차량의 번호판 인식이 실시간으로 가능한 차량용 블랙박스 | |
WO2021172833A1 (ko) | 물체 인식 장치, 물체 인식 방법 및 이를 수행하기 위한 컴퓨터 판독 가능한 기록 매체 | |
WO2017111257A1 (ko) | 영상 처리 장치 및 영상 처리 방법 | |
WO2016072627A1 (ko) | 전방위 카메라를 이용한 1대 다면 주차장 관리 시스템 및 관리방법 | |
WO2021091053A1 (ko) | 영상의 유사도 분석을 이용한 위치 측정 시스템 및 그 방법 | |
CN109218587A (zh) | 一种基于双目摄像头的图像采集方法及系统 | |
KR101656519B1 (ko) | 확장칼만필터를 이용하여 추적성능을 높인 자동추적 기능을 갖는 무인항공기 | |
WO2023149603A1 (ko) | 다수의 카메라를 이용한 열화상 감시 시스템 | |
WO2017007077A1 (ko) | 감시 방법 | |
WO2023080667A1 (ko) | Ai 기반 객체인식을 통한 감시카메라 wdr 영상 처리 | |
CN113438399B (zh) | 用于无人机的目标导引系统、方法、无人机和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16837201 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 15556203 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.07.2018) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16837201 Country of ref document: EP Kind code of ref document: A1 |