CN108171121A - UAV intelligent tracking method and system - Google Patents

UAV intelligent tracking method and system

Info

Publication number
CN108171121A
CN108171121A
Authority
CN
China
Prior art keywords
pedestrian
tracking
head
shoulder
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711308149.6A
Other languages
Chinese (zh)
Inventor
张翼成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuneec Shanghai Electronic Technology Co Ltd
Original Assignee
Yuneec Shanghai Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuneec Shanghai Electronic Technology Co Ltd filed Critical Yuneec Shanghai Electronic Technology Co Ltd
Priority to CN201711308149.6A
Publication of CN108171121A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention discloses a UAV intelligent tracking method and system. The method includes: receiving a digital image; determining a pedestrian tracking region in the digital image; detecting the pedestrian tracking region to obtain the hand motion trajectory of the pedestrian in the pedestrian tracking region; comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model to identify the gesture motion type of the pedestrian in the pedestrian tracking region; and sending a pedestrian tracking instruction to the UAV according to the gesture motion type. With the embodiments of the present invention, the pedestrian tracking region is determined from the head-shoulder features of the pedestrian, and the UAV is then controlled to track the pedestrian according to the gesture motion type of the pedestrian in the pedestrian tracking region, so the UAV can be controlled to track the tracked object accurately.

Description

UAV intelligent tracking method and system
Technical field
The invention belongs to the technical field of machine vision, and more particularly relates to a UAV intelligent tracking method and system.
Background art
An unmanned aerial vehicle (UAV) is an unmanned aircraft that can be operated through a radio remote-control device and its own program-control device, or operated fully or intermittently and autonomously by an onboard computer. A camera mounted on a UAV can be used in fields such as aerial photography, surveying and mapping, agriculture, and reconnaissance. For the UAV to photograph a tracked object accurately, it must be able to track the tracked object autonomously.
Current autonomous UAV tracking is usually vision-based. The UAV's image data is received at a smart interactive terminal, the user selects the tracking object by drawing a box on the terminal interface, and image processing methods detect the image data of the tracked object in each data frame in real time. The relative position of the tracked object across adjacent data frames is calculated, the image position relationship is converted into an actual three-dimensional position relationship, and the UAV is controlled to track the tracked object according to that three-dimensional position relationship.
Vision-based autonomous tracking requires the user to select the object to be tracked on the smart interactive terminal; this manual selection can introduce human error, so the tracked object cannot be determined accurately.
Summary of the invention
Embodiments of the present invention provide a UAV intelligent tracking method and system in which the tracked object only needs to perform a specified gesture motion, and the UAV can then accurately track the tracked object according to the gesture motion of the tracked object.
In one aspect, an embodiment of the present invention provides a UAV intelligent tracking method, including:
receiving a digital image;
determining a pedestrian tracking region in the digital image;
detecting the pedestrian tracking region to obtain a hand motion trajectory of the pedestrian in the pedestrian tracking region;
comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model to identify a gesture motion type of the pedestrian in the pedestrian tracking region;
sending a pedestrian tracking instruction to the UAV according to the gesture motion type.
In another aspect, an embodiment of the present invention provides a UAV intelligent tracking system, including:
a receiving module for receiving a digital image;
a determining module for determining a pedestrian tracking region in the digital image;
a detection module for detecting the pedestrian tracking region and obtaining a hand motion trajectory of the pedestrian in the pedestrian tracking region;
a comparison module for comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model and identifying a gesture motion type of the pedestrian in the pedestrian tracking region;
a tracking module for sending a pedestrian tracking instruction to the UAV according to the gesture motion type.
In the UAV intelligent tracking method and system of the embodiments of the present invention, the pedestrian tracking region is determined from the pedestrian's head-shoulder features, the gesture motion type of the pedestrian in the pedestrian tracking region is identified, and a tracking instruction is sent to the UAV according to the pedestrian's gesture motion type. When the tracked object makes the specified gesture, the tracked object can be tracked accurately.
Description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed by the embodiments are briefly described below. For those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of the UAV intelligent tracking method provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of the method for determining the pedestrian tracking region provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of the UAV intelligent tracking system provided by an embodiment of the present invention.
Detailed description of embodiments
The features and exemplary embodiments of various aspects of the present invention are described in detail below. To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail with reference to the drawings and specific embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. To those skilled in the art, the present invention may be practiced without some of these specific details. The following description of the embodiments is provided only to give a better understanding of the present invention by showing examples of it.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
To solve the problems of the prior art, embodiments of the present invention provide a UAV intelligent tracking method and system. The UAV intelligent tracking method provided by an embodiment of the present invention is introduced first.
Fig. 1 shows a flow diagram of the UAV intelligent tracking method provided by an embodiment of the present invention. As shown in Fig. 1, the method may include:
S110: receive a digital image.
The digital image uploaded by the UAV is received, and the digital image is processed and recognized.
Illustratively, a camera sensor device is mounted on the UAV gimbal; it digitally images the scene within the camera's field of view and sends the digital image to a server. Here, the UAV gimbal is the support equipment used to mount and fix the camera.
Illustratively, a mobile smart terminal, for example a mobile phone or a camera, is mounted on the UAV gimbal and digitally images the scene within its field of view. The gimbal also carries a digital image communication transmission device, through which the digital image is transmitted. Here, the UAV gimbal is the support equipment used to mount and fix the mobile smart terminal.
S120: determine a pedestrian tracking region in the digital image.
The received digital image is filtered to remove noise that would affect feature extraction, and the image features of the pedestrian's head and shoulders are extracted from the filtered digital image. The image features include Haar features of the image, local binary pattern (LBP) features of the image, and/or histogram of oriented gradients (HOG) features of the image.
The image features of the pedestrian's head and shoulders are compared with the corresponding image features in a head-shoulder training file, and the pedestrian tracking region is determined according to the comparison result. For example, if the Haar features of the pedestrian's head and shoulders in the digital image successfully match the Haar features in the head-shoulder training file, the region occupied by the pedestrian's head and shoulders in the digital image is the pedestrian tracking region.
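As an illustration of this detection step, the following minimal sketch uses an OpenCV cascade classifier to stand in for the head-shoulder training file; the file name, parameters, and choice of OpenCV are assumptions for illustration, not values from the patent.

```python
import cv2

def find_pedestrian_tracking_regions(frame_bgr, cascade_path="head_shoulder.xml"):
    # cascade_path is a hypothetical classifier trained offline on
    # head-shoulder samples (Haar or LBP features).
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Each (x, y, w, h) detection is a candidate pedestrian tracking region.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                    minSize=(32, 32))
```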
In one embodiment of the invention, positive pedestrian head-shoulder image samples and negative pedestrian head-shoulder image samples are built offline; below, the positive head-shoulder image samples are simply called positive samples, and the negative head-shoulder image samples negative samples.
Building the positive and negative samples proceeds as follows: images containing a pedestrian's head and shoulders are saved as positive samples, images of the same pixel size containing no pedestrian head and shoulders are saved as negative samples, and the image features of both are computed. The image features of the positive and negative samples are fed into a head-shoulder trainer for training, which produces the head-shoulder training file. The image features include Haar features, LBP features, and/or HOG features of the image; the feature type used for the positive samples is the same as that used for the negative samples, for example Haar features for both.
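A minimal offline-training sketch of this step, assuming OpenCV is available, HOG is the chosen feature, a linear SVM stands in for the head-shoulder trainer, and all samples have been resized to 64x64 pixels; loading of the sample images is not shown.

```python
import cv2
import numpy as np

# 64x64 detection window; block/cell sizes are common defaults.
hog = cv2.HOGDescriptor(_winSize=(64, 64), _blockSize=(16, 16),
                        _blockStride=(8, 8), _cellSize=(8, 8), _nbins=9)

def train_head_shoulder_model(pos_images, neg_images):
    # Positive and negative samples share the same pixel size (64x64 here).
    feats = [hog.compute(img).ravel() for img in pos_images + neg_images]
    labels = np.array([1] * len(pos_images) + [0] * len(neg_images),
                      dtype=np.int32)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(np.float32(feats), cv2.ml.ROW_SAMPLE, labels)
    svm.save("head_shoulder_train_file.xml")  # the "head-shoulder training file"
    return svm
```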
S130: detect the pedestrian tracking region and obtain the hand motion trajectory of the pedestrian in the pedestrian tracking region.
Edge detection and feature point detection are performed on the pedestrian tracking region. An edge is the set of points where the brightness of the pedestrian tracking region changes sharply; a feature point is a point where the brightness changes sharply or a point of maximum curvature on an edge curve of the pedestrian tracking region.
Edge detection and feature point detection are performed on the pedestrian tracking region of every frame, and all edge points and all feature points of each frame's pedestrian tracking region are determined, forming an edge point set and a feature point set per frame. By comparing the edge point sets and feature point sets across frames, the change between frames reveals the pedestrian motion region within the pedestrian tracking region.
In one embodiment of the invention, to make the edge detection and feature point detection results more accurate, the pedestrian tracking region is preferably smoothed first; smoothing effectively removes noise points in the pedestrian tracking region, so the detection results are more accurate.
It should be noted that in this embodiment, edge detection and feature point detection are used together to improve the accuracy of motion region detection; either edge detection alone or feature point detection alone may also be used. Illustratively, common edge detection methods include the Sobel and Canny edge detection algorithms, and common feature point detection methods include the KLT and SUSAN feature point detection algorithms.
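The following sketch pairs Canny edges with Shi-Tomasi corners (the corner detector behind KLT tracking) as one possible pairing of the methods named above to produce the per-frame edge point set and feature point set; the smoothing kernel and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def edge_and_feature_points(region_gray):
    smoothed = cv2.GaussianBlur(region_gray, (5, 5), 0)  # smoothing, as above
    edges = cv2.Canny(smoothed, 50, 150)                 # edge map
    corners = cv2.goodFeaturesToTrack(smoothed, maxCorners=200,
                                      qualityLevel=0.01, minDistance=5)
    edge_points = np.argwhere(edges > 0)                 # (row, col) of each edge point
    return edge_points, corners
```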
Human skin color recognition is performed on the pedestrian tracking region to remove non-skin areas; what may remain includes the pedestrian's face, hands, and other exposed body parts. Since the pedestrian's face and other exposed parts are not moving regions, rejecting them from the pedestrian motion region leaves the position of the pedestrian's hand.
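One common way to realize the skin-color test is a fixed box in YCrCb space, as sketched below; the thresholds are conventional values assumed for illustration, not taken from the patent.

```python
import cv2
import numpy as np

def skin_mask(region_bgr):
    ycrcb = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # conventional Cr/Cb skin box
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)           # 255 where skin-like
```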
The pedestrian's hand motion trajectory is determined from the hand positions in the current frame's and neighboring frames' pedestrian tracking regions. Specifically, once the pedestrian's hand position in each frame's pedestrian tracking region is determined, arranging the per-frame hand positions in time order yields the pedestrian's hand motion trajectory.
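A minimal sketch of assembling the trajectory, assuming one binary hand mask per frame in time order and using the mask centroid as the per-frame hand position.

```python
import numpy as np

def hand_trajectory(per_frame_hand_masks):
    # Masks are assumed to be in frame (time) order.
    trajectory = []
    for mask in per_frame_hand_masks:
        ys, xs = np.nonzero(mask)
        if len(xs):
            trajectory.append((xs.mean(), ys.mean()))  # hand centroid this frame
    return trajectory  # [(x0, y0), (x1, y1), ...] in time order
```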
S140: compare the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model and identify the gesture motion type of the pedestrian in the pedestrian tracking region.
The hand motion trajectory of the pedestrian in the pedestrian tracking region is compared with the pedestrian gesture motion model, and the gesture motion type of the pedestrian in the pedestrian tracking region is identified according to the model. Gesture motion types may include, for example, "waving", "hand swinging", and "fist clenching".
In one embodiment of the invention, pedestrian gesture image samples are built offline. Arranging the pedestrian's per-frame hand positions in each gesture image sample in time order yields the pedestrian's hand motion trajectory. The set of hand motion trajectories extracted from the gesture image samples is fed into a gesture trainer for training, generating the pedestrian gesture motion model.
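The patent does not specify how the trajectory is compared with the gesture motion model; the sketch below assumes one template trajectory per gesture type and uses nearest-template matching after resampling both trajectories to a fixed length — one simple way the comparison could be realized.

```python
import numpy as np

def resample(traj, n=32):
    # Interpolate a trajectory of (x, y) points to a fixed length n.
    traj = np.asarray(traj, dtype=float)
    idx = np.linspace(0, len(traj) - 1, n)
    return np.stack([np.interp(idx, np.arange(len(traj)), traj[:, d])
                     for d in range(traj.shape[1])], axis=1)

def classify_gesture(traj, models):
    # models: {"waving": template_traj, "hand swinging": ..., "fist clenching": ...}
    t = resample(traj)
    return min(models, key=lambda name: np.linalg.norm(t - resample(models[name])))
```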
S150: send a pedestrian tracking instruction to the UAV according to the gesture motion type.
A pedestrian tracking instruction is sent to the UAV according to the gesture motion type, and the UAV tracks the pedestrian according to the instruction. If the gesture motion type meets the trigger condition, the UAV is triggered to track the tracked pedestrian; if it does not, the UAV is not triggered.
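A sketch of this trigger logic, assuming a whitelist of triggering gesture types and an unspecified send_to_uav transport callable; the command format is an assumption, as the patent does not define one.

```python
TRIGGER_GESTURES = {"waving"}  # illustrative trigger set

def maybe_send_track_instruction(gesture_type, region, send_to_uav):
    # Only gesture types meeting the trigger condition cause a command.
    if gesture_type in TRIGGER_GESTURES:
        send_to_uav({"cmd": "track_pedestrian", "region": region})
```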
In this embodiment of the invention, the digital image processed by the UAV is received, the pedestrian tracking region is determined from the pedestrian's head-shoulder features, the gesture motion type of the pedestrian in the pedestrian tracking region is identified, and a tracking instruction is sent to the UAV according to the gesture motion type to control the UAV to track the pedestrian. Because recognition is based on the pedestrian tracking region and the pedestrian's gesture motion type, the UAV can be accurately controlled to track the tracked pedestrian.
S120, determining a pedestrian tracking region in the digital image, is described in detail below. Fig. 2 shows a flow diagram of the method for determining the pedestrian tracking region. As shown in Fig. 2, the method may include:
S210: filter the digital image.
The received digital image is filtered. A digital image is often polluted by various kinds of noise during formation, transmission, and recording, which affects subsequent image feature extraction, so unneeded parts of the digital image are removed as far as possible during preprocessing. Filtering methods may include, for example: mean filtering, Gaussian filtering, median filtering, and bilateral filtering.
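All four filters named above have standard OpenCV realizations; the sketch below shows them side by side with illustrative kernel sizes, assuming OpenCV as the implementation library.

```python
import cv2

def denoise(img):
    mean_f = cv2.blur(img, (5, 5))                     # mean filter
    gauss_f = cv2.GaussianBlur(img, (5, 5), 0)         # Gaussian filter
    median_f = cv2.medianBlur(img, 5)                  # median filter
    bilateral_f = cv2.bilateralFilter(img, 9, 75, 75)  # bilateral filter
    return gauss_f  # any one of the four can serve as the preprocessing step
```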
S220: extract the head-shoulder features of the pedestrian from the filtered digital image.
The filtered digital image may contain, for example, pedestrians, animals, and/or buildings. Edge and feature point detection on the filtered digital image yields the contour of the pedestrian's head and shoulders in the digital image; common edge detection methods include the Sobel and Canny edge detection algorithms, and common feature point detection methods include the KLT and SUSAN feature point detection algorithms.
The Haar features, LBP features, and/or HOG features of the pedestrian's head and shoulders are extracted from the filtered digital image.
S230: compare the head-shoulder features of the pedestrian in the digital image with the head-shoulder training file and determine the pedestrian tracking region.
The head-shoulder features of the pedestrian in the digital image are compared with a preset head-shoulder training file; if the match succeeds, the region occupied by the pedestrian's head and shoulders in the digital image is the pedestrian tracking region.
In one embodiment of the invention, a large number of pedestrian head-shoulder samples are extracted from a pedestrian database, the image features of the positive samples and negative samples are computed, and the image features of both are fed into a head-shoulder trainer for training, yielding the head-shoulder training file. The image features include Haar features, LBP features, and/or HOG features of the image.
In one embodiment of the invention, the UAV needs to track and photograph passengers at a railway station. A UAV carrying a camera sensor device photographs passengers at the station, and the passenger photos are converted into digital images.
The digital images uploaded by the camera sensor device are received, the received railway station pictures are filtered, and the HOG features of the passengers' heads and shoulders are extracted from the filtered digital images. The extracted HOG features of a passenger's head and shoulders are compared with the HOG features of the head-shoulder training file, and the tracking region is determined from the comparison result: specifically, if the extracted HOG features of the passenger's head and shoulders are consistent with the HOG features in the head-shoulder training file, the region occupied by that passenger's head and shoulders in the picture is the passenger tracking region.
Edge detection and feature point detection are performed on the passenger tracking region to determine the passenger motion region within it; combined with skin color detection, the passenger's hand position is determined. The passenger's hand motion trajectory is determined from the hand positions in each frame's passenger tracking region, and the passenger's gesture motion type is identified according to the pedestrian gesture motion model.
A tracking instruction is sent to the UAV according to the passenger's gesture motion type. For example, with "waving" set as the trigger condition, when a passenger's gesture motion is detected to be "waving", a tracking instruction is sent to the UAV to control it to track the waving passenger.
In one embodiment of the invention, after the pedestrian tracking region is determined, the pedestrian tracking region is enlarged, the enlarged pedestrian tracking region is filtered, and the Haar features, LBP features, and/or HOG features of the head and shoulders of the pedestrian in the filtered region are extracted.
The image features of the head and shoulders of the pedestrian in the enlarged pedestrian tracking region are matched against the image features of the head-shoulder training file. The image features include Haar features, LBP features, and/or HOG features of the image. The matched features must correspond one to one: if Haar features of the pedestrian's head and shoulders are extracted, they are matched against the Haar features of the training file. Matched features may also be combinations of image features, for example LBP features plus HOG features.
If the match succeeds, edge detection and feature point detection are performed on the enlarged pedestrian tracking region, the edge point sets and feature point sets of each frame's pedestrian tracking region are compared, and the motion region within the pedestrian tracking region is determined. Combined with pedestrian skin color recognition, the pedestrian's hand position is found, and the hand motion trajectory is determined from the hand positions in the current and neighboring frames' pedestrian tracking regions. The hand motion trajectory of the pedestrian is compared with the pedestrian gesture motion model, and the gesture motion type is identified according to the model.
A pedestrian tracking instruction is sent to the UAV according to the gesture motion type, and matching of the head-shoulder features of the pedestrian in the enlarged pedestrian tracking region against the head-shoulder training file continues. If the match succeeds, the hand motion trajectory of the pedestrian in the enlarged pedestrian tracking region is again compared with the pedestrian gesture motion model, the gesture motion type of the pedestrian in the tracking region is identified, and the tracking instruction is sent to the UAV again according to the gesture motion type. If the match fails, the enlarged pedestrian tracking region is not processed.
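A sketch of this continued-tracking loop under the stated assumptions: the region is enlarged, re-matched against the head-shoulder training file, and processing stops when the match fails. The match, identify_gesture, and send_cmd callables stand for the steps sketched earlier; the enlargement factor is an assumption.

```python
def enlarge(region, factor=1.5):
    # Grow an (x, y, w, h) box about its center; purely illustrative.
    x, y, w, h = region
    dx, dy = int(w * (factor - 1) / 2), int(h * (factor - 1) / 2)
    return (x - dx, y - dy, int(w * factor), int(h * factor))

def tracking_loop(get_frame, region, match, identify_gesture, send_cmd):
    while True:
        frame = get_frame()
        region = enlarge(region)
        if not match(frame, region):   # head-shoulder match failed:
            break                      # leave the enlarged region unprocessed
        gesture = identify_gesture(frame, region)
        send_cmd(gesture, region)      # send the tracking instruction again
```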
Corresponding to the above method embodiment, an embodiment of the present invention also provides a UAV intelligent tracking device. Fig. 3 shows a structural diagram of the UAV intelligent tracking device provided by an embodiment of the present invention. As shown in Fig. 3, the device includes: a receiving module 310, a determining module 320, a detection module 330, a comparison module 340, and a tracking module 350.
The receiving module 310 is for receiving a digital image.
The determining module 320 is for determining a pedestrian tracking region in the digital image.
The detection module 330 is for detecting the pedestrian tracking region and obtaining the hand motion trajectory of the pedestrian in the pedestrian tracking region.
The comparison module 340 is for comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model and identifying the gesture motion type of the pedestrian in the pedestrian tracking region.
The tracking module 350 is for sending a pedestrian tracking instruction to the UAV according to the gesture motion type.
In one embodiment of the invention, the receiving module 310 is specifically for receiving the digital image uploaded by the UAV and for processing and recognizing the digital image.
Illustratively, a camera sensor device is mounted on the UAV gimbal; it digitally images the scene within the camera's field of view and sends the digital image to a server. Here, the UAV gimbal is the support equipment used to mount and fix the camera.
Illustratively, a mobile smart terminal, for example a mobile phone or a camera, is mounted on the UAV gimbal and digitally images the scene within its field of view. The gimbal also carries a digital image communication transmission device, through which the digital image is transmitted. Here, the UAV gimbal is the support equipment used to mount and fix the mobile smart terminal.
In one embodiment of the invention, the determining module 320 is specifically for filtering the received digital image to remove noise that would affect feature extraction and for extracting the image features of the pedestrian's head and shoulders from the filtered digital image; the image features include Haar features, LBP features, and/or HOG features of the image.
The image features of the pedestrian's head and shoulders are compared with the corresponding image features in the head-shoulder training file, and the pedestrian tracking region is determined according to the comparison result. For example, if the Haar features of the pedestrian's head and shoulders in the digital image successfully match the Haar features in the head-shoulder training file, the region occupied by the pedestrian's head and shoulders in the digital image is the pedestrian tracking region.
In one embodiment of the invention, positive pedestrian head-shoulder image samples and negative pedestrian head-shoulder image samples are built offline; below, these are simply called positive samples and negative samples.
Building the positive and negative samples proceeds as follows: images containing a pedestrian's head and shoulders are saved as positive samples, images of the same pixel size containing no pedestrian head and shoulders are saved as negative samples, and the image features of both are computed. The image features of the positive and negative samples are fed into a head-shoulder trainer for training, which produces the head-shoulder training file. The image features include Haar features, LBP features, and/or HOG features of the image; the feature type used for the positive samples is the same as that used for the negative samples, for example Haar features for both.
In one embodiment of the invention, the detection module 330 is specifically for performing edge detection and feature point detection on the pedestrian tracking region. An edge is the set of points where the brightness of the pedestrian tracking region changes sharply; a feature point is a point where the brightness changes sharply or a point of maximum curvature on an edge curve of the pedestrian tracking region.
Edge detection and feature point detection are performed on the pedestrian tracking region of every frame, and all edge points and all feature points of each frame's pedestrian tracking region are determined, forming an edge point set and a feature point set per frame. By comparing the edge point sets and feature point sets across frames, the change between frames reveals the pedestrian motion region within the pedestrian tracking region.
In one embodiment of the invention, to make the edge detection and feature point detection results more accurate, the pedestrian tracking region is preferably smoothed first; smoothing effectively removes noise points in the pedestrian tracking region, so the detection results are more accurate.
It should be noted that in this embodiment, edge detection and feature point detection are used together to improve the accuracy of motion region detection; either edge detection alone or feature point detection alone may also be used. Illustratively, common edge detection methods include the Sobel and Canny edge detection algorithms, and common feature point detection methods include the KLT and SUSAN feature point detection algorithms.
Human skin color recognition is performed on the pedestrian tracking region to remove non-skin areas; what may remain includes the pedestrian's face, hands, and other exposed body parts. Since the pedestrian's face and other exposed parts are not moving regions, rejecting them from the extracted pedestrian motion region leaves the position of the user's hand.
The pedestrian's hand motion trajectory is determined from the hand positions in the current frame's and neighboring frames' pedestrian tracking regions. Specifically, once the pedestrian's hand position in each frame's pedestrian tracking region is determined, arranging the per-frame hand positions in time order yields the pedestrian's hand motion trajectory.
In one embodiment of the invention, the comparison module 340 is specifically for comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with the pedestrian gesture motion model and identifying the gesture motion type of the pedestrian in the pedestrian tracking region according to the model.
In one embodiment of the invention, the determining module 320 includes a filter unit (not shown) specifically for filtering the received digital image.
In one embodiment of the invention, the determining module 320 includes an extraction unit (not shown) specifically for extracting the head-shoulder features of the pedestrian from the filtered digital image.
In one embodiment of the invention, the determining module 320 includes a comparison unit (not shown) specifically for comparing the head-shoulder features of the pedestrian in the digital image with the head-shoulder training file and determining the pedestrian tracking region.
In one embodiment of the invention, the detection module 330 includes a determination unit (not shown) specifically for determining the hand motion trajectory of the pedestrian according to the edges of the pedestrian tracking region, the feature points of the pedestrian tracking region, and the skin color of the pedestrian in the pedestrian tracking region.
Since the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for related details, refer to the corresponding parts of the method embodiment.
It should also be noted that the exemplary embodiments mentioned in the present invention describe certain methods or systems on the basis of a series of steps or devices. However, the present invention is not limited to the order of the above steps; that is, the steps may be performed in the order mentioned in the embodiments, in an order different from that in the embodiments, or several steps may be performed simultaneously.
The above are only specific embodiments of the present invention. It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here. It should be understood that the protection scope of the present invention is not limited to these embodiments; any person familiar with the technical field can readily conceive various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall fall within the protection scope of the present invention.

Claims (10)

1. A UAV intelligent tracking method, characterized in that the method comprises:
receiving a digital image;
determining a pedestrian tracking region in the digital image;
detecting the pedestrian tracking region to obtain a hand motion trajectory of the pedestrian in the pedestrian tracking region;
comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model to identify a gesture motion type of the pedestrian in the pedestrian tracking region;
sending a pedestrian tracking instruction to a UAV according to the gesture motion type.
2. The tracking method according to claim 1, characterized in that determining a pedestrian tracking region in the digital image comprises:
filtering the digital image;
extracting head-shoulder features of the pedestrian from the filtered digital image;
comparing the head-shoulder features of the pedestrian in the digital image with a head-shoulder training file to determine the pedestrian tracking region.
3. The tracking method according to claim 2, characterized in that, before comparing the head-shoulder features of the pedestrian in the digital image with the head-shoulder training file to determine the pedestrian tracking region, the method further comprises:
building positive pedestrian head-shoulder image samples and negative pedestrian head-shoulder image samples, the positive samples and the negative samples having the same pixel size;
extracting image features of the positive pedestrian head-shoulder image samples and image features of the negative pedestrian head-shoulder image samples;
training, in a head-shoulder trainer, on the image features of the positive pedestrian head-shoulder image samples and the image features of the negative pedestrian head-shoulder image samples to determine the head-shoulder training file.
4. The tracking method according to claim 3, characterized in that
the image features comprise Haar features, LBP features, and/or HOG features of an image.
5. The tracking method according to claim 1, characterized in that detecting the pedestrian tracking region to obtain the hand motion trajectory of the pedestrian in the pedestrian tracking region comprises:
determining the hand motion trajectory of the pedestrian according to edges of the pedestrian tracking region, feature points of the pedestrian tracking region, and the skin color of the pedestrian in the pedestrian tracking region.
6. The tracking method according to claim 1, characterized in that, before comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with the pedestrian gesture motion model to identify the gesture motion type of the pedestrian in the pedestrian tracking region, the method further comprises:
building pedestrian gesture image samples;
extracting hand motion trajectories from the pedestrian gesture image samples;
training, in a gesture trainer, on the extracted hand motion trajectories to determine the pedestrian gesture motion model.
7. The tracking method according to claim 1, characterized in that, after sending the pedestrian tracking instruction to the UAV according to the gesture motion type, the method further comprises:
enlarging the pedestrian tracking region;
matching the head-shoulder features of the pedestrian in the enlarged pedestrian tracking region against the head-shoulder training file;
if the match succeeds, comparing the hand motion trajectory of the pedestrian in the enlarged pedestrian tracking region with the pedestrian gesture motion model to identify the gesture motion type of the pedestrian in the tracking region;
sending the pedestrian tracking instruction to the UAV according to the gesture motion type, and continuing to match until the match fails.
8. A UAV intelligent tracking system, characterized in that the system comprises:
a receiving module for receiving a digital image;
a determining module for determining a pedestrian tracking region in the digital image;
a detection module for detecting the pedestrian tracking region and obtaining a hand motion trajectory of the pedestrian in the pedestrian tracking region;
a comparison module for comparing the hand motion trajectory of the pedestrian in the pedestrian tracking region with a pedestrian gesture motion model and identifying a gesture motion type of the pedestrian in the pedestrian tracking region;
a tracking module for sending a pedestrian tracking instruction to a UAV according to the gesture motion type.
9. The tracking system according to claim 8, characterized in that the determining module comprises a filter unit, an extraction unit, and a comparison unit,
the filter unit being for filtering the digital image;
the extraction unit being for extracting head-shoulder features of the pedestrian from the filtered digital image;
the comparison unit being for comparing the head-shoulder features of the pedestrian in the digital image with a head-shoulder training file to determine the pedestrian tracking region.
10. The tracking system according to claim 8, characterized in that the detection module comprises a determination unit,
the determination unit being for determining the hand motion trajectory of the pedestrian according to edges of the pedestrian tracking region, feature points of the pedestrian tracking region, and the skin color of the pedestrian in the pedestrian tracking region.
CN201711308149.6A 2017-12-11 2017-12-11 UAV intelligent tracking method and system Pending CN108171121A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711308149.6A CN108171121A (en) 2017-12-11 2017-12-11 UAV intelligent tracking method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711308149.6A CN108171121A (en) 2017-12-11 2017-12-11 UAV intelligent tracking method and system

Publications (1)

Publication Number Publication Date
CN108171121A (en) 2018-06-15

Family

ID=62524856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711308149.6A Pending CN108171121A (en) UAV intelligent tracking method and system

Country Status (1)

Country Link
CN (1) CN108171121A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977906A (en) * 2019-04-04 2019-07-05 睿魔智能科技(深圳)有限公司 Gesture identification method and system, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777114A (en) * 2009-01-08 2010-07-14 北京中星微电子有限公司 Intelligent analysis system and intelligent analysis method for video monitoring, and system and method for detecting and tracking head and shoulder
CN103295029A (en) * 2013-05-21 2013-09-11 深圳Tcl新技术有限公司 Interaction method and device of gesture control terminal
CN104392210A (en) * 2014-11-13 2015-03-04 海信集团有限公司 Gesture recognition method
CN104808799A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof
CN106020227A (en) * 2016-08-12 2016-10-12 北京奇虎科技有限公司 Control method and device for unmanned aerial vehicle
KR20170090603A (en) * 2016-01-29 2017-08-08 아주대학교산학협력단 Method and system for controlling drone using hand motion tracking

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777114A (en) * 2009-01-08 2010-07-14 北京中星微电子有限公司 Intelligent analysis system and intelligent analysis method for video monitoring, and system and method for detecting and tracking head and shoulder
CN103295029A (en) * 2013-05-21 2013-09-11 深圳Tcl新技术有限公司 Interaction method and device of gesture control terminal
CN104392210A (en) * 2014-11-13 2015-03-04 海信集团有限公司 Gesture recognition method
CN104808799A (en) * 2015-05-20 2015-07-29 成都通甲优博科技有限责任公司 Unmanned aerial vehicle capable of indentifying gesture and identifying method thereof
KR20170090603A (en) * 2016-01-29 2017-08-08 아주대학교산학협력단 Method and system for controlling drone using hand motion tracking
CN106020227A (en) * 2016-08-12 2016-10-12 北京奇虎科技有限公司 Control method and device for unmanned aerial vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Wenjie et al.: 《日新日进 "挑战杯"大学生课外学术科技竞赛作品集》 (Collected Works of the "Challenge Cup" College Student Extracurricular Academic Science and Technology Competition), 31 May 2015, Beijing University of Technology Press *
Bai Guangqing: 《IP创新怎样赢?》 (How to Win at IP Innovation?), 31 August 2017, Intellectual Property Publishing House *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977906A (en) * 2019-04-04 2019-07-05 睿魔智能科技(深圳)有限公司 Gesture identification method and system, computer equipment and storage medium
CN109977906B (en) * 2019-04-04 2021-06-01 睿魔智能科技(深圳)有限公司 Gesture recognition method and system, computer device and storage medium

Similar Documents

Publication Publication Date Title
US11188734B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
CN106407875B (en) Target's feature-extraction method and device
KR100847136B1 (en) Method and Apparatus for Shoulder-line detection and Gesture spotting detection
US9047507B2 (en) Upper-body skeleton extraction from depth maps
CN110264493B (en) Method and device for tracking multiple target objects in motion state
CN111862296B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, three-dimensional reconstruction system, model training method and storage medium
KR101035055B1 (en) System and method of tracking object using different kind camera
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
CN106909911A (en) Image processing method, image processing apparatus and electronic installation
CN106991654A (en) Human body beautification method and apparatus and electronic installation based on depth
CN104200200B (en) Fusion depth information and half-tone information realize the system and method for Gait Recognition
CN108089695B (en) Method and device for controlling movable equipment
CN111241932A (en) Automobile exhibition room passenger flow detection and analysis system, method and storage medium
CN110569785B (en) Face recognition method integrating tracking technology
EP3819812B1 (en) A method of object re-identification
CN111178252A (en) Multi-feature fusion identity recognition method
JP2021503139A (en) Image processing equipment, image processing method and image processing program
KR101737430B1 (en) A method of detecting objects in the image with moving background
CN107016348A (en) With reference to the method for detecting human face of depth information, detection means and electronic installation
CN108592915A (en) Air navigation aid, shopping cart and navigation system
CN106599873A (en) Figure identity identification method based on three-dimensional attitude information
CN108334870A (en) The remote monitoring system of AR device data server states
CN110175553A (en) The method and device of feature database is established based on Gait Recognition and recognition of face
CN104573628A (en) Three-dimensional face recognition method
CN108171121A (en) UAV Intelligent tracking and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180615