CN104899590B - UAV visual target following method and system - Google Patents

UAV visual target following method and system

Info

Publication number
CN104899590B
CN104899590B · CN201510263209.1A · CN201510263209A
Authority
CN
China
Prior art keywords
region
similarity
point
target
object region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510263209.1A
Other languages
Chinese (zh)
Other versions
CN104899590A (en)
Inventor
蒙山
黄容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ACUS TECHNOLOGIES CO.,LTD.
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201510263209.1A priority Critical patent/CN104899590B/en
Publication of CN104899590A publication Critical patent/CN104899590A/en
Application granted granted Critical
Publication of CN104899590B publication Critical patent/CN104899590B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention, applicable to the field of aircraft technology, provides a UAV visual target following method comprising the following steps. Step A: perform FAST corner extraction on the selected first target region to be followed, generate the weighted FAST corner histogram of the first target region, and use the generated FAST corner histogram to compute the FAST corner-feature similarity between the first target region and each candidate target region to be followed. Step B: for each candidate target region, compute its color-feature similarity to the first target region, then fuse this with its FAST corner-feature similarity, and take the candidate target region in the fusion result with the highest similarity to the first target region as the current following result. By fusing FAST corner features with color features, the invention makes effective use of both the local and the global information of the image, reducing the impact of airborne-sensor noise on target state estimation.

Description

UAV visual target following method and system
Technical field
The invention belongs to the field of aircraft technology, and more particularly to a UAV visual target following method and system. It is applicable to locking onto and following hostile targets in air combat, spotting and locking onto suspect vehicles in urban counter-terrorism, following persons in distress in maritime search and rescue, high-altitude operations, open-pit mine surveying, and similar scenarios.
Background art
With continuing advances in science and technology, UAVs are gradually being applied in many areas, and many UAV tasks include a target-following subtask. UAV visual target following relies mainly on the analysis of image feature information to provide reliable target information. Over recent decades it has seen a series of advances. For example, O. Amidi et al. at the Carnegie Mellon University Robotics Institute proposed a visual-odometry-based method for estimating the UAV's motion state in real time, whose core is to follow a static ground target with a camera mounted on the bottom of the UAV; the University of Southern California robotics research institute designed and implemented a vision-based UAV feature-following control method on a Bergen Industrial Twin helicopter; and Saripalli et al. designed a trajectory planning method based on Hamilton's equations that can follow a moving ground platform in flight. However, the targets followed in these studies are usually static or slow-moving.
Target following methods fall mainly into two broad classes: model-based and model-free. Their main representatives are, respectively, the extended Kalman filter (EKF) and the particle filter (PF):
Model-based filtering methods suit linear or approximately linear systems. They ignore the random distribution characteristics of the system state and noise and only apply a linear transformation around the current state estimate, so the estimates of the mean and covariance of the transformed variables incur large errors and can even cause the filter to diverge.
Model-free filtering methods are Bayesian estimation methods based on Monte Carlo sampling. Free of linearization error and the Gaussian-noise restriction, they make up for the shortcomings of model-based filtering. Their common problem, however, is degeneracy: after several iterations, all but one of the particles carry weights small enough to be negligible.
Both classes of following model realize target detection and following through image feature matching. The image features mainly used are: 1) color features, which are global features of the image; being insensitive to changes in the orientation and size of an image region, they cannot capture the local characteristics of the target well; 2) corner features, which are fast to compute but are affected by illumination and image noise, so their robustness is limited; 3) scale-invariant features (SIFT) and speeded-up robust features (SURF), which have excellent descriptive power but involve complex feature computation, high descriptor dimensionality, and expensive matching, making them hard to apply directly to real-time UAV following scenarios.
Summary of the invention
The technical problem to be solved by the present invention is to provide a UAV visual target following method and system that let a UAV follow a dynamic target stably and achieve a good following effect.
The invention is realized as a UAV visual target following method comprising the following steps:
Step A: perform FAST corner extraction on the selected first target region to be followed, generate the weighted FAST corner histogram of the first target region, and use the generated FAST corner histogram to compute the FAST corner-feature similarity between the first target region and each candidate target region. The first target region is the target region selected for tracking in the first frame of the video stream or image sequence captured by the UAV; each candidate target region is a target region predicted in a subsequent frame of that video stream or image sequence.
Step B: for each candidate target region, compute its color-feature similarity to the first target region, then fuse this with its FAST corner-feature similarity, and take the candidate target region in the fusion result with the highest similarity to the first target region as the current following result.
The present invention also provides a UAV visual target following system comprising:
a similarity computation module, which performs FAST corner extraction on the selected first target region, generates the weighted FAST corner histogram of the first target region, and uses the generated histogram to compute the FAST corner-feature similarity between the first target region and each candidate target region; the first target region is the target region selected for tracking in the first frame of the video stream or image sequence captured by the UAV, and each candidate target region is a target region predicted in a subsequent frame;
a followed-target determination module, which, for each candidate target region, computes its color-feature similarity to the first target region, then fuses this with its FAST corner-feature similarity, and takes the candidate target region in the fusion result with the highest similarity to the first target region as the current following result.
Compared with the prior art, the present invention has the following beneficial effects:
First, the invention fuses FAST corner features with color features, making effective use of both the local and the global information of the image and thereby reducing the impact of airborne-sensor noise on target state estimation.
Second, the 3D point-cloud feature quickly gives a rough pose of a candidate target region, while the 2D image features (corners and color) describe the candidate region more finely and so refine its pose. Fusing the 2D image features with the 3D point-cloud feature lets the UAV determine the followed target quickly and stably.
Finally, the following effect can degrade when the UAV follows a target for a long time. SURF features are used to verify the followed target, improving the UAV's long-term tracking of dynamic targets.
Brief description of the drawings
Fig. 1 is the flow chart of the UAV visual target following method provided by the invention;
Fig. 2 is a flow chart of one implementation of step A in Fig. 1;
Fig. 3 is the flow chart of step B as provided by the first embodiment of the invention;
Fig. 4 is the flow chart of step B as provided by the second embodiment of the invention;
Fig. 5 is the flow chart of the following-effect verification performed during target following;
Fig. 6 is the structural diagram of the UAV visual target following system provided by the invention;
Fig. 7 is the structural diagram of the similarity computation module in Fig. 6;
Fig. 8 is the structural diagram of the followed-target determination module in Fig. 6 as provided by the first embodiment of the invention;
Fig. 9 is the structural diagram of the followed-target determination module in Fig. 6 as provided by the second embodiment of the invention;
Figure 10 is the structural diagram of the following-effect verification module provided by the invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only illustrate the invention and do not limit it.
To control the UAV's flight state so that it tracks a dynamic target better, we propose a UAV visual following method based on hierarchical fusion of multiple image features within a model-free tracking framework: multiple features of the target are acquired, the target is detected using a feature ranking, and it is matched and tracked with the proposed hierarchical multi-feature fusion method. The invention mainly fuses the FAST corner feature, color feature, and SURF feature of the 2D image with the 3D point-cloud edge feature, improving the UAV's target-following effect.
Fig. 1 shows the flow of the UAV visual target following method provided by the invention, detailed as follows:
In step A, FAST corner extraction is performed on the selected first target region to be followed, the weighted FAST corner histogram of the first target region is generated, and the generated FAST corner histogram is used to compute the FAST corner-feature similarity between the first target region and each candidate target region. The first target region is the target region selected for tracking in the first frame of the video stream or image sequence captured by the UAV; each candidate target region is a target region predicted in a subsequent frame of that video stream or image sequence.
In step B, for each candidate target region, its FAST corner-feature similarity and color-feature similarity are fused, and the candidate target region in the fusion result with the highest similarity to the first target region is taken as the current following result.
Fig. 2 shows one implementation of step A, which comprises the following steps:
Step A1: perform multi-scale FAST corner extraction on the first target region, obtaining for each scale a corner set $C^s = \{c_q = (x_q, y_q)\}_{q=1}^{Q}$, where s denotes the image scale, Q the number of corners, q a natural number, $c_q$ the q-th corner, and $(x_q, y_q)$ the pixel position of the q-th corner in the first frame.
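As a concrete illustration, the multi-scale extraction can be sketched with OpenCV as follows; this is a minimal sketch, and the scale set and FAST detection threshold are assumed values, since the patent does not fix them:

```python
import cv2
import numpy as np

def extract_fast_corners(region, scales=(1.0, 0.75, 0.5), threshold=20):
    """Detect FAST corners of a target region at several image scales.

    Returns a dict mapping each scale s to a (Q, 2) array of corner
    positions (x_q, y_q) in that scale's pixel grid.
    """
    detector = cv2.FastFeatureDetector_create(threshold=threshold)
    corners = {}
    for s in scales:
        resized = cv2.resize(region, None, fx=s, fy=s)  # rescale to scale s
        keypoints = detector.detect(resized, None)
        corners[s] = np.array([kp.pt for kp in keypoints])
    return corners
```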
Step A2: using the 3D point-cloud depth information collected for the first target region, estimate the image scale s of each candidate target region.
Step A3: divide the first target region into N cells, the width and height steps of each cell being $x_{step}/s$ and $y_{step}/s$ respectively, where $x_{step}$ is the width and $y_{step}$ the height of the first target region.
Step A4: count the number of corners of the set $C^s$ falling in each cell, $h_n^s$, producing an N-bin corner histogram; then train the corner histogram over several frames to generate the weight of each histogram bin, $W' = \{w'_n\}_{n=1}^{N}$, where $w'_n$ is the weight of the n-th bin of the first target region's histogram at scale s.
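A minimal sketch of the cell counting in steps A3 and A4, assuming an nx-by-ny grid of cells (the patent fixes only the cell steps x_step/s and y_step/s, and the per-bin weights w'_n would be learned over several frames rather than set here):

```python
import numpy as np

def corner_histogram(corners, width, height, nx=4, ny=4):
    """N = nx*ny bin corner histogram of one target region.

    `corners` holds (x, y) positions relative to the region's top-left
    corner; the 4x4 grid is an assumed bin layout.
    """
    hist = np.zeros(nx * ny)
    cell_w, cell_h = width / nx, height / ny
    for x, y in corners:
        ix = min(int(x / cell_w), nx - 1)   # clamp corners on the boundary
        iy = min(int(y / cell_h), ny - 1)
        hist[iy * nx + ix] += 1
    return hist
```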
Step A5: according to the scale s determined from the 3D point-cloud depth information, select the FAST corner histogram of the first target region at that scale as the comparison histogram $H^s = \{h_n^s\}_{n=1}^{N}$.
Step A6: using the Euclidean distance, compute the corner-feature similarity between the first target region and each candidate target region by the formula
$$D_{corner} = \sqrt{\sum_{n=1}^{N} w'_n \,(h_n^c - h_n^s)^2}$$
where $h_n^c$ is the corner count of the n-th histogram bin of the candidate target region at the original scale c and $h_n^s$ is the corner count of the n-th histogram bin of the first target region at scale s.
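Under this reconstruction of the formula, the weighted comparison is a one-liner:

```python
import numpy as np

def corner_distance(hist_target, hist_candidate, weights):
    """Weighted Euclidean distance between two corner histograms
    (smaller means more similar): D = sqrt(sum_n w'_n (h_n^c - h_n^s)^2)."""
    diff = np.asarray(hist_candidate, float) - np.asarray(hist_target, float)
    return float(np.sqrt(np.sum(np.asarray(weights) * diff ** 2)))
```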
In step A, block matching can be used in place of corner-histogram matching; the corner histogram can be one-dimensional instead of the two-dimensional region histogram; and the corner histogram and color histogram can also be fused nonlinearly when computing the particle similarities.
In the present invention, the color feature carries the global information of the image, while the FAST corner feature describes its local information. Under a particle-filter tracking framework, the global and local information of the first target region's image are fused to construct the particle observation likelihood with which the state of the tracked target is estimated, enabling stable tracking to guide the UAV robot. Step B can be realized in two embodiments: embodiment one fuses the color feature with the FAST corner feature, while embodiment two additionally fuses the 3D point-cloud feature centroid on top of the color and FAST corner features.
Referring to Fig. 3, in embodiment one, step B comprises the following steps:
Step B11: compute the color-feature similarity between the first target region's color feature and each candidate target region using the Bhattacharyya distance or a log-linear function. The color feature can be obtained from the RGB information of the image, and a candidate target region's FAST corner feature is obtained by the same process as the first target region's.
The similarity between the first target region's FAST corner histogram and a candidate target region's corner histogram is then redefined as
$$w_{corner} = \exp\!\left(-\frac{D_{corner}^2}{2\sigma_{corner}^2}\right) \qquad (1)$$
where $\sigma_{corner}$ is a preset standard deviation for the FAST corner feature.
The color-feature similarity between the image color feature of the first target region and that of a candidate target region is defined as
$$w_{color} = \exp\!\left(-\frac{D^2}{2\sigma_{color}^2}\right) \qquad (2)$$
where $\sigma_{color}$ is a preset standard deviation for the color feature and D is the Bhattacharyya distance between the color histograms.
Using formula (1), the similarity between the first target region's corner histogram and each candidate target region's corner histogram is computed and normalized, giving the corner similarities $\{\hat w_{corner}^{(i)}\}_{i=1}^{M}$; using formula (2), the similarity between the color histograms of the first target region and each candidate image region is computed and normalized, giving the color similarities $\{\hat w_{color}^{(i)}\}_{i=1}^{M}$, where M is the number of candidate target regions.
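A sketch of formulas (1) and (2) as reconstructed, using OpenCV's Bhattacharyya comparison for the color histograms; the histogram bin counts and the standard deviations are assumed values:

```python
import cv2
import numpy as np

def gaussian_similarity(distance, sigma):
    """Shared kernel of formulas (1) and (2): exp(-d^2 / (2 sigma^2))."""
    return float(np.exp(-distance ** 2 / (2.0 * sigma ** 2)))

def color_similarity(region_a, region_b, sigma_color=0.2):
    """Bhattacharyya distance between RGB histograms, mapped through
    the Gaussian kernel; 8x8x8 bins and sigma_color are assumptions."""
    hists = []
    for img in (region_a, region_b):
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        cv2.normalize(h, h)
        hists.append(h)
    d = cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_BHATTACHARYYA)
    return gaussian_similarity(d, sigma_color)
```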
Step B12: with a linear weighting, fuse the FAST corner-feature similarity and color-feature similarity of each candidate target region, compute the similarity function $w_i$ between the first target region and all candidate target regions, and normalize it to $\hat w_i$.
For each candidate target region the fused similarity is $w_i = k_1 \hat w_{corner}^{(i)} + k_2 \hat w_{color}^{(i)}$ with $k_1 + k_2 = 1$; the similarities of all candidate target regions to the 2D image target region are computed and normalized to $\{\hat w_i\}_{i=1}^{M}$. The FAST corner feature supplies each candidate region's local information and the color feature its global information; fusing the two lets the best candidate target region be selected reliably, reducing the impact of airborne-sensor noise on target state estimation.
Step B13: take the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, use the likelihood to determine the candidate target region with the highest similarity to the first target region, and take it as the current following result.
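Putting embodiment one together, a sketch of the linear fusion and selection; the weight k1 is an assumed hand-tuned value, the patent requiring only a convex combination:

```python
import numpy as np

def fuse_and_select(w_corner, w_color, k1=0.5):
    """Fuse the normalized corner and color similarities of the M
    candidates, w_i = k1*w_corner_i + (1-k1)*w_color_i, renormalize
    them so they can serve as the observation likelihood, and return
    the index of the best candidate target region plus the weights."""
    w = k1 * np.asarray(w_corner) + (1.0 - k1) * np.asarray(w_color)
    w = w / w.sum()
    return int(np.argmax(w)), w
```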
Unlike embodiment one, embodiment two fuses, for each candidate target region, its FAST corner-feature similarity, color-feature similarity, and 3D point-cloud feature similarity, and takes the candidate target region in the fusion result with the highest similarity to the first target region as the current following result.
The 3D point-cloud edge feature quickly gives a rough pose of a candidate target region, while 2D image features such as color and FAST corners describe the candidate region more finely and so refine its pose. Under the particle-filter tracking framework, the 2D image features and the 3D point-cloud feature are further fused to construct the particle observation likelihood with which the target state is estimated, realizing stable UAV target following.
When fusing the 2D image features with the 3D point cloud, the first task is to obtain the 3D point-cloud feature centroid. Its acquisition process is as follows:
First, the color image data and the depth image data are calibrated; the 2D image in this invention is the color image.
Second, the edge points of the followed object are extracted: edge features are extracted from the 3D point-cloud data, and a pass-through filter removes the background edge information, leaving the object's edge points.
Third, clustering is performed. After the background edges are removed, the scene generally retains the edge information of only one or two objects. The number of remaining point-cloud points $N_{points}$ is compared with a preset threshold $T_{points}$. If $N_{points} < T_{points}$, no clustering is needed: all points are treated as a single cluster and $k = N_1$. If $N_{points} \ge T_{points}$, the object edge points are clustered with the k-means algorithm with $k = N_2$, yielding k clusters.
Finally, the centroid projections are computed: the 3D centroid of each cluster is calculated and projected into the 2D image, giving each cluster centroid's position in the 2D image.
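A sketch of the clustering and projection steps, with scikit-learn's k-means standing in for the clustering and a pinhole camera model for the projection; T_points, N2, and the intrinsics fx, fy, cx, cy are assumptions (the patent names T_points, N1, and N2 without fixing their values):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_centroids(edge_points, t_points=500, k=3):
    """Cluster the 3D edge points surviving the pass-through filter.

    Fewer than t_points points are treated as a single cluster;
    otherwise k-means with k = N2 is run. Returns (k, 3) centroids.
    """
    pts = np.asarray(edge_points, dtype=float)   # shape (n, 3)
    if len(pts) < t_points:
        return pts.mean(axis=0, keepdims=True)
    km = KMeans(n_clusters=k, n_init=10).fit(pts)
    return km.cluster_centers_

def project_centroids(centroids, fx, fy, cx, cy):
    """Pinhole projection of the 3D centroids into the 2D color image,
    using the (assumed, calibrated) camera intrinsics."""
    X, Y, Z = centroids[:, 0], centroids[:, 1], centroids[:, 2]
    return np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)
```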
The log-linear function used here measures both the similarity between the target region and the candidate target region determined from the 2D image multi-feature fusion, and the similarity between a 2D candidate target region's center and the 3D point-cloud feature centroid position; the latter is defined as
$$w_W = \exp\!\left(-\frac{D_W^2}{2\sigma_W^2}\right)$$
where $D_W$ is the Euclidean distance between the 3D point-cloud feature centroid and the 2D candidate target region's center, and $\sigma_W$ is a standard deviation.
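In code this is the same Gaussian kernel as formulas (1) and (2), applied to the pixel distance between the two positions; sigma_w (in pixels) is an assumed value:

```python
import numpy as np

def cloud_similarity(candidate_center, centroid_2d, sigma_w=30.0):
    """Log-linear similarity between a candidate region's 2D center
    and the projected 3D centroid: w_W = exp(-D_W^2 / (2 sigma_W^2))."""
    d_w = np.linalg.norm(np.asarray(candidate_center, float)
                         - np.asarray(centroid_2d, float))
    return float(np.exp(-d_w ** 2 / (2.0 * sigma_w ** 2)))
```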
Because the similarity between the target region and the 2D multi-feature candidate region and the similarity between the 2D candidate region's center and the 3D point-cloud feature are measured with different distances, the similarities obtained must each be normalized before the 2D image features and the 3D point-cloud feature are fused into the image observation likelihood.
Referring to Fig. 4, in embodiment two, step B comprises the following steps:
Step B21: determine the 3D point-cloud feature centroid position as follows: if there is only one cluster centroid, take that cluster centroid directly as the 3D point-cloud feature centroid; otherwise, predict the 3D point-cloud feature centroid position with the highest similarity in the previous frame and take the cluster centroid w(x', y') nearest to that position as the 3D point-cloud feature centroid. Here a candidate target is a target generated by motion within the first target region.
Step B22: separately compute the color-feature similarity, FAST corner-feature similarity, and 3D point-cloud feature similarity between the first target region and each candidate target region, and normalize them to obtain the three groups of similarities $\{\hat w_{color}^{(i)}\}$, $\{\hat w_{corner}^{(i)}\}$, and $\{\hat w_{3D}^{(i)}\}$ for $i = 1, \dots, M$, where M is the number of candidate target regions.
Step B23: with a linear weighting, fuse the groups of similarities from step B22 to obtain the image similarity $w_i = k_1 \hat w_{corner}^{(i)} + k_2 \hat w_{color}^{(i)} + k_3 \hat w_{3D}^{(i)}$, where $k_1 + k_2 + k_3 = 1$ and $k_1, k_2, k_3 \in [0, 1]$; compute the similarity of every candidate target region to the first target region and normalize it, giving the image similarities $\{\hat w_i\}_{i=1}^{M}$.
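A sketch of step B23's three-way fusion; the particular weights are assumptions within the stated constraint k1 + k2 + k3 = 1:

```python
import numpy as np

def fuse_three(w_corner, w_color, w_cloud, k=(0.4, 0.4, 0.2)):
    """Image similarity w_i = k1*corner + k2*color + k3*cloud over the
    M candidates (each input already normalized as in step B22),
    followed by renormalization of the fused scores."""
    k1, k2, k3 = k
    assert abs(k1 + k2 + k3 - 1.0) < 1e-9   # convex combination
    w = (k1 * np.asarray(w_corner) + k2 * np.asarray(w_color)
         + k3 * np.asarray(w_cloud))
    return w / w.sum()
```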
Step B24: take the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, use the likelihood to determine the candidate target region with the highest similarity to the first target region, and take it as the current following result.
It must further be considered that the following effect can degrade when the UAV follows a target for a long time. To guarantee stable following, the following result needs to be verified periodically; according to the verification result, if following has failed, the UAV's followed target is re-selected and a new following process starts. SURF features are used to verify the followed target: during following, the following effect is verified once every P frames, and the corresponding action is taken according to the result.
Referring to Fig. 5, the following-effect verification process comprises the following steps:
Step C1: generate the SURF feature set of the first target region.
The SURF features of the first target region are extracted, giving a SURF feature point set (its size being the number of feature points of the 2D image target region), and the descriptor set of these SURF feature points is generated as the verification reference.
Step C2: build the SURF image pyramid of the candidate image. The candidate image obtained from the fused FAST corner and color feature tracking guidance is used to build the SURF image pyramid.
Step C3: from the poses extracted in the first target region's SURF feature point set and the pose of the candidate target region, determine the corresponding poses of the first target region's SURF feature points on the candidate image pyramid, obtaining the key-point set.
This step is based on the inverse of SURF feature extraction: from the poses of the target region's SURF feature points, it predicts the poses on the candidate image pyramid at which SURF feature points (i.e., key points) may exist.
Step C4: for the i-th key point in the key-point set of the candidate image, compute its SURF descriptor and the descriptor set of all its neighboring pixels on the candidate image pyramid.
Step C5: compute the Euclidean distance between the descriptor of the i-th feature point of the first target region and the descriptors corresponding to the candidate image target region's key point. If a distance smaller than the preset threshold $d_{st}$ exists, that candidate-image key point is a SURF feature point and is recorded as a SURF feature match pair; otherwise, the candidate-image key point is not a SURF feature point.
Step C6: from the Euclidean distances of all feature points and key points obtained in step C5, dynamically adjust the effect-comparison threshold $T_{surf}$.
Step C7: in the candidate image, if the ratio of the number of SURF feature match pairs to the number of SURF feature points of the 2D image target region exceeds $T_{surf}$, the UAV's following effect is judged good and following continues; otherwise, following is judged to have failed, the UAV hovers, and a search for the followed target begins.
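A rough sketch of the periodic check, with OpenCV's contrib SURF and brute-force descriptor matching standing in for the pyramid-guided key-point prediction of steps C3 and C4; both thresholds are assumed values, and cv2.xfeatures2d requires the opencv-contrib build:

```python
import cv2

def verify_following(target_region, candidate_region,
                     d_st=0.25, t_surf=0.3):
    """Return True if the candidate region still matches the target.

    Extracts SURF descriptors from both regions, counts descriptor
    matches closer than d_st, and compares the matched fraction with
    the number of target feature points against T_surf (step C7).
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(target_region, None)
    kp_c, des_c = surf.detectAndCompute(candidate_region, None)
    if des_t is None or des_c is None or len(kp_t) == 0:
        return False                       # nothing to verify against
    matches = cv2.BFMatcher(cv2.NORM_L2).match(des_t, des_c)
    pairs = [m for m in matches if m.distance < d_st]
    return len(pairs) / len(kp_t) > t_surf
```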
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be completed by hardware instructed by a program, and the corresponding program can be stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk, or optical disc.
Fig. 6 shows the structure of the UAV visual target following system provided by the invention; for ease of description, only the parts relevant to the invention are shown.
Referring to Fig. 6, the UAV visual target following system comprises a similarity computation module 61 and a followed-target determination module 62. The similarity computation module 61 performs FAST corner extraction on the selected first target region, generates the weighted FAST corner histogram of the first target region, and uses the generated FAST corner histogram to compute the FAST corner-feature similarity between the first target region and each candidate target region; the first target region is the target region selected for tracking in the first frame of the video stream or image sequence captured by the UAV, and each candidate target region is a target region predicted in a subsequent frame of that video stream or image sequence. The followed-target determination module 62, for each candidate target region, computes its color-feature similarity to the first target region, then fuses this with its FAST corner-feature similarity, and takes the candidate target region in the fusion result with the highest similarity to the first target region as the current following result.
Fig. 7 shows the structure of the similarity computation module 61 in Fig. 6, which includes a FAST corner extraction submodule 611, an image scale estimation submodule 612, a target region division submodule 613, a weight generation submodule 614, a comparison-histogram determination submodule 615, and a similarity computation submodule 616. The functions of the submodules are as follows:
The FAST corner extraction submodule 611 performs multi-scale FAST corner extraction on the first target region, obtaining its corner sets $C^s = \{c_q = (x_q, y_q)\}_{q=1}^{Q}$, where Q is the number of corners, q a natural number, $c_q$ the q-th corner, and $(x_q, y_q)$ the pixel position of the q-th corner in the first frame.
The image scale estimation submodule 612 uses the collected 3D point-cloud depth information of the first target region to estimate the image scale s of each candidate target region.
The target region division submodule 613 divides the first target region into N cells, the width and height steps of each cell being $x_{step}/s$ and $y_{step}/s$ respectively, where $x_{step}$ is the width and $y_{step}$ the height of the first target region.
The weight generation submodule 614 counts the number of corners of the set $C^s$ in each cell, $h_n^s$, generates the N-bin corner histogram, then trains it over several frames to generate the weight of each bin, $W' = \{w'_n\}$, where $w'_n$ is the weight of the n-th bin of the first target region's histogram at scale s.
The comparison-histogram determination submodule 615 selects, according to the scale s determined from the 3D point-cloud depth information, the FAST corner histogram of the first target region at that scale as the comparison histogram.
The similarity computation submodule 616 computes, using the Euclidean distance $D_{corner} = \sqrt{\sum_{n=1}^{N} w'_n (h_n^c - h_n^s)^2}$, the corner-feature similarity between the first target region and each candidate target region, where $h_n^c$ is the corner count of the n-th histogram bin of the candidate target region at the original scale c and $h_n^s$ the corner count of the n-th bin of the first target region's histogram at scale s.
Fig. 8 shows the structure of the followed-target determination module 62 provided by the first embodiment of the invention, comprising a first similarity computation submodule 6211, a first fusion submodule 6212, and a first followed-target determination submodule 6213. The first similarity computation submodule 6211 computes the color-feature similarity between the first target region's color feature and a candidate target region using the Bhattacharyya distance or a log-linear function; the first fusion submodule 6212 fuses, with a linear weighting, the FAST corner-feature and color-feature similarities of each candidate target region, computes the similarity function $w_i$ of the first target region with all candidate target regions, and normalizes it to $\hat w_i$; the first followed-target determination submodule 6213 takes the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, determines from the likelihood the candidate target region with the highest similarity to the first target region, and takes it as the current following result.
Fig. 9 shows the structure of the followed-target determination module 62 provided by the second embodiment of the invention, comprising a 3D point-cloud feature centroid determination submodule 6221, a second similarity computation submodule 6222, a second fusion submodule 6223, and a second followed-target determination submodule 6224. The 3D point-cloud feature centroid determination submodule 6221 determines the candidate target cluster centroid position as follows: if there is only one cluster centroid, that centroid is taken directly as the 3D point-cloud feature centroid; otherwise, the 3D point-cloud feature centroid position with the highest predicted similarity in the previous frame is used, and the cluster centroid w(x', y') nearest to that position is taken as the 3D point-cloud feature centroid; here a candidate target is a target generated by motion within the first target region. The second similarity computation submodule 6222 separately computes the color-feature similarity, FAST corner-feature similarity, and 3D point-cloud feature similarity between the first target region and each candidate target region and normalizes them into three groups of similarities, M being the number of candidate target regions. The second fusion submodule 6223 fuses, with a linear weighting, the groups of similarities obtained by the second similarity computation submodule, computes the similarity of all candidate target regions with the first target region, and normalizes it to obtain the image similarity $\hat w_i$. The second followed-target determination submodule 6224 takes the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, determines from the likelihood the candidate target region with the highest similarity to the first target region, and takes it as the current following result.
Figure 10 shows the structure of the following-effect verification module provided by the invention, which periodically verifies the following effect during the following process. It comprises a SURF feature set generation submodule 631, a SURF image pyramid construction submodule 632, a key-point set determination submodule 633, a descriptor computation submodule 634, a SURF feature match determination submodule 635, a threshold adjustment submodule 636, and a judgment submodule 637. The SURF feature set generation submodule 631 generates the SURF feature set of the first target region; the SURF image pyramid construction submodule 632 builds the SURF image pyramid of the candidate image; the key-point set determination submodule 633 determines, from the poses extracted in the first target region's SURF feature point set and the pose of the candidate target region, the corresponding poses of the first target region's SURF feature points on the candidate image pyramid, obtaining the key-point set; the descriptor computation submodule 634 computes the SURF descriptor of the i-th key point of the candidate image and the descriptor set of all its neighboring pixels on the candidate image pyramid; the SURF feature match determination submodule 635 computes the Euclidean distance between the descriptor of the i-th feature point of the first target region and the descriptors corresponding to the candidate image target region's key point and, if a distance smaller than the preset threshold $d_{st}$ exists, records that key point as a SURF feature point forming a SURF feature match pair, otherwise judges it not a SURF feature point; the threshold adjustment submodule 636 dynamically adjusts the effect-comparison threshold $T_{surf}$ from the Euclidean distances of all feature points and key points obtained by the SURF feature match determination submodule; the judgment submodule 637 judges that, in the candidate image, if the ratio of the number of SURF feature match pairs to the number of SURF feature points of the 2D image target region exceeds $T_{surf}$, the following effect is good and following continues, and otherwise that following has failed, whereupon the UAV hovers and begins searching for the followed target.
The above color features, corner features, and 3D point-cloud depth data are all collected from the followed object by the airborne sensors on the UAV. It is worth noting that the modules and submodules of the above embodiments are divided only by functional logic and are not limited to this division, as long as the corresponding functions can be realized; the specific names of the functional units are only for distinguishing them from one another and do not limit the protection scope of the invention. Each module or submodule can be realized in software, in hardware, or in a combination of both.
The above is only a preferred embodiment of the present invention and does not limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in the protection scope of the invention.

Claims (7)

1. A UAV visual target following method, characterized in that the method comprises the following steps:
Step A: perform FAST corner extraction on the selected first target region to be followed, generate the weighted FAST corner histogram of the first target region, and use the generated FAST corner histogram to compute the FAST corner-feature similarity between the first target region and each candidate target region; the first target region being the target region selected for tracking in the first frame of the video stream or image sequence captured by the UAV, and each candidate target region being a target region predicted in a subsequent frame of that video stream or image sequence;
Step B: for each candidate target region, compute its color-feature similarity to the first target region, then fuse this with its FAST corner-feature similarity, and take the candidate target region in the fusion result with the highest similarity to the first target region as the current following result;
wherein step B comprises the following steps:
Step B11: compute the color-feature similarity between the first target region's color feature and each candidate target region using the Bhattacharyya distance or a log-linear function;
Step B12: with a linear weighting, fuse the FAST corner-feature similarity and color-feature similarity of each candidate target region, compute the similarity function $w_i$ between the first target region and all candidate target regions, and normalize it to $\hat w_i$, where M is the number of candidate target regions;
Step B13: take the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, determine from the likelihood the candidate target region with the highest similarity to the first target region, and take it as the current following result;
wherein step B is specifically: for each candidate target region, compute its color-feature similarity and 3D point-cloud feature similarity with the first target region, then fuse its FAST corner-feature similarity, color-feature similarity, and 3D point-cloud feature similarity with the first target region, and take the candidate target region in the fusion result with the highest similarity to the first target region as the current following result;
wherein the 3D point-cloud feature is the edge feature of the 3D point-cloud data, and the 3D point-cloud feature similarity is the similarity between the candidate target region's center position and the 3D point-cloud feature centroid position.
2. The UAV visual target following method of claim 1, characterized in that step A comprises the following steps:
Step A1: perform multi-scale FAST corner extraction on the first target region, obtaining its corner sets $C^s = \{c_q = (x_q, y_q)\}_{q=1}^{Q}$, where s denotes the image scale, Q the number of corners, q a natural number, $c_q$ the q-th corner, and $(x_q, y_q)$ the pixel position of the q-th corner in the first frame;
Step A2: using the 3D point-cloud depth information collected for the first target region, estimate the image scale s of each candidate target region;
Step A3: divide the first target region into N cells, the width and height steps of each cell being $x_{step}/s$ and $y_{step}/s$ respectively, where $x_{step}$ is the width and $y_{step}$ the height of the first target region;
Step A4: count the number of corners of the set $C^s$ in each cell, $h_n^s$, generating an N-bin corner histogram; then train the corner histogram over several frames to generate the weight of each histogram bin, $W' = \{w'_n\}_{n=1}^{N}$, where $w'_n$ is the weight of the n-th bin of the first target region's histogram at scale s;
Step A5: according to the scale s determined from the 3D point-cloud depth information, select the FAST corner histogram of the first target region at that scale as the comparison histogram;
Step A6: using the Euclidean distance, compute the corner-feature similarity between the first target region and each candidate target region by the formula $D_{corner} = \sqrt{\sum_{n=1}^{N} w'_n (h_n^c - h_n^s)^2}$, where $h_n^c$ is the corner count of the n-th histogram bin of the candidate target region at the original scale c and $h_n^s$ the corner count of the n-th bin of the first target region's histogram at scale s.
3. The UAV visual target following method of claim 1, characterized in that step B comprises the following steps:
Step B21: determine the 3D point-cloud feature centroid position as follows: if there is only one cluster centroid, take it directly as the 3D point-cloud feature centroid; otherwise, predict the 3D point-cloud feature centroid position with the highest similarity in the previous frame and take the cluster centroid w(x', y') nearest to that position as the 3D point-cloud feature centroid; a candidate target being a target generated by motion within the first target region;
Step B22: separately compute the color-feature similarity, FAST corner-feature similarity, and 3D point-cloud feature similarity between the first target region and each candidate target region, and normalize them to obtain three groups of similarities, where M is the number of candidate target regions;
Step B23: with a linear weighting, fuse the groups of similarities obtained in step B22, compute the similarity of all candidate target regions with the first target region, and normalize it to obtain the image similarity;
Step B24: take the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, determine from the likelihood the candidate target region with the highest similarity to the first target region, and take it as the current following result.
4. The UAV visual target following method of any one of claims 1 to 3, characterized in that during the following process the method further comprises a periodically executed following-effect verification process, the following-effect verification process comprising the following steps:
Step C1: generate the SURF feature set of the first target region;
Step C2: build the SURF image pyramid of the candidate image;
Step C3: from the poses extracted in the first target region's SURF feature point set and the pose of the candidate target region, determine the corresponding poses of the first target region's SURF feature points on the candidate image pyramid, obtaining the key-point set;
Step C4: for the i-th key point in the key-point set of the candidate image, compute its SURF descriptor and the descriptor set of all its neighboring pixels on the candidate image pyramid;
Step C5: compute the Euclidean distance between the descriptor of the i-th feature point of the first target region and the descriptors corresponding to the candidate image target region's key point; if a distance smaller than the preset threshold $d_{st}$ exists, the candidate-image key point is a SURF feature point and is recorded as a SURF feature match pair; otherwise, it is not a SURF feature point;
Step C6: from the Euclidean distances of all feature points and key points obtained in step C5, dynamically adjust the effect-comparison threshold $T_{surf}$;
Step C7: in the candidate image, if the ratio of the number of SURF feature match pairs to the number of SURF feature points of the 2D image target region exceeds $T_{surf}$, judge the following effect good and continue following; otherwise, judge that following has failed, hover the UAV, and begin searching for the followed target.
5. A UAV visual target following system, characterized by comprising:
a similarity computation module, which performs FAST corner extraction on the selected first target region, generates the weighted FAST corner histogram of the first target region, and uses the generated FAST corner histogram to compute the FAST corner-feature similarity between the first target region and each candidate target region; the first target region being the target region selected for tracking in the first frame of the video stream or image sequence captured by the UAV, and each candidate target region being a target region predicted in a subsequent frame of that video stream or image sequence;
a followed-target determination module, which, for each candidate target region, computes its color-feature similarity to the first target region, then fuses this with its FAST corner-feature similarity, and takes the candidate target region in the fusion result with the highest similarity to the first target region as the current following result;
the followed-target determination module comprising a first similarity computation submodule, a first fusion submodule, and a first followed-target determination submodule, or comprising a 3D point-cloud feature centroid determination submodule, a second similarity computation submodule, a second fusion submodule, and a second followed-target determination submodule;
the first similarity computation submodule computing the color-feature similarity between the first target region's color feature and a candidate target region using the Bhattacharyya distance or a log-linear function;
the first fusion submodule fusing, with a linear weighting, the FAST corner-feature and color-feature similarities of each candidate target region, computing the similarity function $w_i$ of the first target region with all candidate target regions, and normalizing it, M being the number of candidate target regions;
the first followed-target determination submodule taking the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, determining from the likelihood the candidate target region with the highest similarity to the first target region, and taking it as the current following result;
the 3D point-cloud feature centroid determination submodule determining the 3D point-cloud feature centroid position as follows: if there is only one cluster centroid, taking it directly as the 3D point-cloud feature centroid; otherwise, predicting the 3D point-cloud feature centroid position with the highest similarity in the previous frame and taking the cluster centroid w(x', y') nearest to that position as the 3D point-cloud feature centroid; a candidate target being a target generated by motion within the first target region;
the second similarity computation submodule separately computing the color-feature similarity, FAST corner-feature similarity, and 3D point-cloud feature similarity between the first target region and each candidate target region, and normalizing them to obtain three groups of similarities, M being the number of candidate target regions;
the second fusion submodule fusing, with a linear weighting, the obtained groups of similarities, computing the similarity of all candidate target regions with the first target region, normalizing it, and obtaining the image similarity;
the second followed-target determination submodule taking the candidate target regions' similarity functions $w_i$ as the observation likelihood for the target, determining from the likelihood the candidate target region with the highest similarity to the first target region, and taking it as the current following result.
6. The UAV visual target following system of claim 5, characterized in that the similarity computation module comprises a FAST corner extraction submodule, an image scale estimation submodule, a target region division submodule, a weight generation submodule, a comparison-histogram determination submodule, and a similarity computation submodule;
wherein the FAST corner extraction submodule performs multi-scale FAST corner extraction on the first target region, obtaining its corner sets $C^s = \{c_q = (x_q, y_q)\}_{q=1}^{Q}$, where s denotes the image scale, Q the number of corners, q a natural number, $c_q$ the q-th corner, and $(x_q, y_q)$ the pixel position of the q-th corner in the first frame;
the image scale estimation submodule uses the collected 3D point-cloud depth information of the first target region to estimate the image scale s of each candidate target region;
the target region division submodule divides the first target region into N cells, the width and height steps of each cell being $x_{step}/s$ and $y_{step}/s$ respectively, $x_{step}$ being the width and $y_{step}$ the height of the first target region;
the weight generation submodule counts the number of corners of the set $C^s$ in each cell, $h_n^s$, generates the N-bin corner histogram, then trains it over several frames to generate the weight of each bin, $W' = \{w'_n\}$, $w'_n$ being the weight of the n-th bin of the first target region's histogram at scale s;
the comparison-histogram determination submodule selects, according to the scale s determined from the 3D point-cloud depth information, the FAST corner histogram of the first target region at that scale as the comparison histogram;
the similarity computation submodule computes, using the Euclidean distance $D_{corner} = \sqrt{\sum_{n=1}^{N} w'_n (h_n^c - h_n^s)^2}$, the corner-feature similarity between the first target region and each candidate target region, $h_n^c$ being the corner count of the n-th histogram bin of the candidate target region at the original scale c, $h_n^s$ the corner count of the n-th bin of the first target region's histogram at scale s, and $w'_n$ the weight of the n-th bin at scale s.
7. The UAV visual target following system of claim 5 or 6, further comprising a following-effect verification module, configured to periodically verify the following effect during the following process; the following-effect verification module comprises:
a SURF feature set generation submodule, configured to generate the SURF feature set of the first target region;
a SURF image pyramid construction submodule, configured to build the SURF image pyramid of the candidate image;
a key point set determination submodule, configured to determine, according to the pose extracted from the SURF feature point set of the first target region and the pose of the candidate target region, the pose corresponding to the SURF feature point set of the first target region on the candidate image pyramid, obtaining a key point set;
a descriptor computation submodule, configured to compute the SURF feature descriptor of the i-th key point of the key point set in the candidate image, together with the descriptor set of all neighboring pixels of that point on the candidate image pyramid;
a SURF feature matching pair determination submodule, configured to compute the Euclidean distance d_i between the descriptor of the i-th feature point of the first target region and the descriptor corresponding to a key point of the candidate image target region; if there exists a d_i smaller than a preset threshold d_st, the key point of the candidate image target region is a SURF feature point and the pair is recorded as a SURF feature matching pair; otherwise, the candidate image key point is not a SURF feature point;
a threshold adjustment submodule, configured to dynamically adjust the effect comparison threshold T_surf according to the Euclidean distances between all the feature points and key points determined by the SURF feature matching pair determination submodule;
a judging submodule, configured to determine, in the candidate image, that the UAV is following the target well and should continue following the target if the ratio of the number of SURF feature matching pairs to the number of SURF feature points of the 2D image target region exceeds T_surf; otherwise, to determine that the UAV has failed to follow the target, whereupon the UAV hovers and begins searching for the target to be followed.
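Again for illustration only, here is a minimal sketch of the periodic verification of claim 7, assuming opencv-contrib's SURF implementation is available (SURF is patented and must be enabled in the build), substituting brute-force descriptor matching for the claim's pyramid-guided key-point projection, and using placeholder values for d_st and T_surf (the claim adjusts T_surf dynamically).

```python
# Minimal sketch of the claim-7 following-effect check (assumptions noted above).
import cv2

def following_ok(target_gray, candidate_gray, d_st=0.25, t_surf=0.5):
    """Return True if enough SURF matching pairs survive the distance test."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, desc_t = surf.detectAndCompute(target_gray, None)
    kp_c, desc_c = surf.detectAndCompute(candidate_gray, None)
    if desc_t is None or desc_c is None or len(kp_t) == 0:
        return False                         # no usable features: treat as lost
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(desc_t, desc_c)  # one nearest candidate per target descriptor
    good = [m for m in matches if m.distance < d_st]  # SURF feature matching pairs
    # Claim 7: keep following iff matched pairs / target feature count exceeds T_surf.
    return len(good) / len(kp_t) > t_surf
```

When the function returns False, the claimed behavior is for the UAV to hover and begin searching for the target again; the sketch only reports the boolean outcome.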
CN201510263209.1A 2015-05-21 2015-05-21 A kind of unmanned plane sensation target follower method and system Active CN104899590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510263209.1A CN104899590B (en) 2015-05-21 2015-05-21 A kind of unmanned plane sensation target follower method and system

Publications (2)

Publication Number Publication Date
CN104899590A CN104899590A (en) 2015-09-09
CN104899590B true CN104899590B (en) 2019-08-09

Family

ID=54032244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510263209.1A Active CN104899590B (en) 2015-05-21 2015-05-21 A kind of unmanned plane sensation target follower method and system

Country Status (1)

Country Link
CN (1) CN104899590B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718895A (en) * 2016-01-22 2016-06-29 张健敏 Unmanned aerial vehicle based on visual characteristics
CN105894542B (en) * 2016-04-26 2019-06-11 深圳大学 A kind of online method for tracking target and device
CN106096516A (en) * 2016-06-01 2016-11-09 常州漫道罗孚特网络科技有限公司 The method and device that a kind of objective is followed the tracks of
CN106843278B (en) * 2016-11-24 2020-06-19 腾讯科技(深圳)有限公司 Aircraft tracking method and device and aircraft
US10409276B2 (en) * 2016-12-21 2019-09-10 Hangzhou Zero Zero Technology Co., Ltd. System and method for controller-free user drone interaction
CN106952295A (en) * 2017-03-17 2017-07-14 公安部第三研究所 A kind of implementation method of the rotor wing unmanned aerial vehicle pursuit movement target of view-based access control model
CN108874269B (en) * 2017-05-12 2020-12-29 北京臻迪科技股份有限公司 Target tracking method, device and system
CN107341814B (en) * 2017-06-14 2020-08-18 宁波大学 Four-rotor unmanned aerial vehicle monocular vision range measurement method based on sparse direct method
CN107390704B (en) * 2017-07-28 2020-12-04 西安因诺航空科技有限公司 IMU attitude compensation-based multi-rotor unmanned aerial vehicle optical flow hovering method
CN109472995B (en) * 2017-09-07 2021-01-15 广州极飞科技有限公司 Method and device for planning flight area of unmanned aerial vehicle and remote controller
CN107943072B (en) * 2017-11-13 2021-04-09 深圳大学 Unmanned aerial vehicle flight path generation method and device, storage medium and equipment
CN108399642B (en) * 2018-01-26 2021-07-27 上海深视信息科技有限公司 General target following method and system fusing rotor unmanned aerial vehicle IMU data
CN108564787A (en) * 2018-05-31 2018-09-21 北京理工大学 Traffic observation procedure, system and equipment based on Floating Car method
CN109146919B (en) * 2018-06-21 2020-08-04 全球能源互联网研究院有限公司 Tracking and aiming system and method combining image recognition and laser guidance
CN109086724B (en) * 2018-08-09 2019-12-24 北京华捷艾米科技有限公司 Accelerated human face detection method and storage medium
CN110244772B (en) * 2019-06-18 2021-12-03 中国科学院上海微系统与信息技术研究所 Navigation following system and navigation following control method of mobile robot
CN111325770B (en) * 2020-02-13 2023-12-22 中国科学院自动化研究所 RGBD camera-based target following method, system and device
CN111372122B (en) * 2020-02-27 2022-03-15 腾讯科技(深圳)有限公司 Media content implantation method, model training method and related device
CN113075937B (en) * 2021-03-17 2022-12-02 北京理工大学 Control method for capturing target by unmanned aerial vehicle based on target acceleration estimation
CN114659450B (en) * 2022-03-25 2023-11-14 北京小米机器人技术有限公司 Robot following method, device, robot and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392228A (en) * 2014-12-19 2015-03-04 中国人民解放军国防科学技术大学 Unmanned aerial vehicle image target class detection method based on conditional random field model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"结合尺度空间FAST角点检测器和SURF描绘器的图像特征";王飞宇 等;《液晶与现实》;20140831;第29卷(第4期);第598-604页 *
"自适应融合角点特征的Camshift目标跟踪";陈丽君 等;《计算机工程与应用》;20141231;第178-182页 *
"融合角点特征与颜色特征的Mean-Shift目标跟踪算法";宋丹 等;《系统工程与电子技术》;20120131;第34卷(第1期);第199-203页 *
"面向无人机影像的目标特征跟踪方法研究";张辰 等;《红外技术》;20150331;第37卷(第3期);第224-228页 *

Also Published As

Publication number Publication date
CN104899590A (en) 2015-09-09

Similar Documents

Publication Publication Date Title
CN104899590B (en) A kind of unmanned plane sensation target follower method and system
Chuang et al. Underwater fish tracking for moving cameras based on deformable multiple kernels
Chen et al. A deep learning approach to drone monitoring
Kolsch et al. Fast 2d hand tracking with flocks of features and multi-cue integration
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
Li et al. Adaptive pyramid mean shift for global real-time visual tracking
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN105022982B (en) Hand motion recognition method and apparatus
CN109919981A (en) A kind of multi-object tracking method of the multiple features fusion based on Kalman filtering auxiliary
CN103426179B (en) A kind of method for tracking target based on mean shift multiple features fusion and device
Rivera et al. Background modeling through statistical edge-segment distributions
CN107798691B (en) A kind of unmanned plane independent landing terrestrial reference real-time detection tracking of view-based access control model
CN110276785A (en) One kind is anti-to block infrared object tracking method
CN108009494A (en) A kind of intersection wireless vehicle tracking based on unmanned plane
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
Morimitsu et al. Exploring structure for long-term tracking of multiple objects in sports videos
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
Cancela et al. Unsupervised trajectory modelling using temporal information via minimal paths
CN109448023A (en) A kind of satellite video Small object method for real time tracking of combination space confidence map and track estimation
CN113340312A (en) AR indoor live-action navigation method and system
Li et al. Weak moving object detection in optical remote sensing video with motion-drive fusion network
Qi et al. Alpine skiing tracking method based on deep learning and correlation filter
Chu et al. Target tracking via particle filter and convolutional network
Lu et al. A particle filter without dynamics for robust 3d face tracking
Sun et al. Automatic annotation of web videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220324

Address after: 518000 room 216, building 4, Shenzhen International Software Park, No. 2, Gaoxin Zhonger Road, Nanshan District, Shenzhen, Guangdong Province

Patentee after: ACUS TECHNOLOGIES CO.,LTD.

Address before: 518060 No. 3688 Nanhai Road, Shenzhen, Guangdong, Nanshan District

Patentee before: SHENZHEN University

TR01 Transfer of patent right