CN107273929A - UAV autonomous landing method based on a deep synergetic neural network - Google Patents

UAV autonomous landing method based on a deep synergetic neural network

Info

Publication number
CN107273929A
CN107273929A
Authority
CN
China
Prior art keywords
vector
unmanned plane
measured
neural network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710446953.4A
Other languages
Chinese (zh)
Inventor
孟继成
魏源璋
杨涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201710446953.4A
Publication of CN107273929A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a UAV autonomous landing method based on a deep synergetic neural network, comprising: collecting target images and preprocessing them as training samples; converting the training samples into vectors and constructing the dynamical equation; training the neural network with the vectors and the dynamical equation to obtain a pseudo-inverse matrix; collecting test images and preprocessing them as samples to be tested; converting the samples to be tested into vectors to be tested; obtaining the output pattern to be tested from the vectors to be tested and the pseudo-inverse matrix; and controlling whether the UAV lands according to the output pattern to be tested. The invention collects ground-target images as training samples and trains a synergetic neural network on them so that the network acquires learning ability. When the UAV needs to land, the trained synergetic neural network recognizes the image of the target landing point and judges whether a landing point exists in the area below the flying UAV, so that the UAV can autonomously perceive its surroundings during flight and recognize the target landing point.

Description

UAV autonomous landing method based on a deep synergetic neural network
Technical field
The present invention relates to the field of UAV intelligent recognition technology, and in particular to a UAV autonomous landing method based on a deep synergetic neural network.
Background technology
Rotary-wing UAVs require little space for take-off and landing and offer strong control and attitude-holding capability in environments densely packed with obstacles. They therefore have broad application prospects in both military and civilian fields, for example UAV patrol, automatic reconnaissance, scientific data collection and video surveillance. Using UAVs for these tasks can substantially reduce cost and improve the safety of operating personnel.
UAVs are currently flown with a hand-held remote controller or with a computer installed at a ground station. With the continuing development of machine learning, carriers with autonomous navigation capability, such as intelligent robots and UAVs, have attracted extensive attention. Autonomous pinpoint landing is a basic technology for autonomous UAV flight: only with an autonomous fixed-point landing system can a UAV return and recharge on its own, reduce energy consumption, and keep operating autonomously. Existing UAV autonomous landing relies on a landing-pad marker that is recognized by the on-board camera. Because lighting, the environment and camera quality all affect the pixel grey values of the captured image, the landing-pad image must undergo carefully tuned threshold segmentation, which restricts the range of places where the UAV can land.
Summary of the invention
The technical problem to be solved by the present invention is to provide a UAV autonomous landing method based on a deep synergetic neural network that combines a synergetic neural network with a deep-learning algorithm, so that the UAV can autonomously perceive its surroundings during flight and recognize the target landing point, thereby expanding the range of places where the UAV can land.
The technical solution adopted by the present invention to solve the above technical problem is as follows: a UAV autonomous landing method based on a deep synergetic neural network, comprising the following steps:
S1. Collect target images and preprocess them as training samples;
S2. Convert the training samples into vectors, and construct the dynamical equation:
ξ_k(n+1) - ξ_k(n) = γ·(λ_k - D + B·ξ_k^2(n))·ξ_k(n)
where ξ_k(0) = v_k^+·q and D = (B + C)·Σ_{k'} ξ_{k'}^2(n); q is the sample vector; v_k is the initial prototype vector; v_k^+ is the adjoint vector of v_k; k indexes one of the k classes; k' is the summation index over the classes; n is the iteration number; γ is the iteration step size; ξ_k(n) is the order parameter at iteration n and represents the projection of q onto v_k in the least-squares sense; λ_k is the attention parameter, which is greater than zero; B and C are constant coefficients;
S3. Train the neural network with the vectors and the dynamical equation, and obtain the pseudo-inverse matrix;
S4. Collect test images and preprocess them as samples to be tested;
S5. Convert the samples to be tested into vectors to be tested;
S6. Obtain the output pattern to be tested from the vectors to be tested and the pseudo-inverse matrix;
S7. Control whether the UAV lands according to the output pattern to be tested.
The beneficial effects of the invention are as follows: ground-target images are collected as training samples, a synergetic neural network is constructed, and the network is trained on those samples so that it acquires learning ability. When the UAV needs to land, the trained synergetic neural network recognizes the vector to be tested derived from the image of the target landing point, so the method can judge whether a landing point exists in the area below the flying UAV. The UAV can thus autonomously perceive its surroundings during flight and recognize the target landing point, which expands the range of places where it can land.
On the basis of the above technical solution, the present invention can be further improved as follows.
Further, step S1 comprises the following steps:
S11. Obtain the target images collected by the camera on the UAV, the target images covering three categories: perform landing, forbid landing, and keep hovering;
S12. According to the category of the current image, place target images of the same category into the same sample image set;
S13. Crop the sample images in each sample image set until all sample images have the same resolution;
S14. Convert the cropped sample images into grey-scale images and use them as the training samples.
The beneficial effect of this further scheme is as follows: the images captured by the camera are colour images and cannot be fed directly into the subsequent neural network, so they must first be preprocessed into grey-scale images. A colour image also carries far more information than the subsequent processing needs; only cues available in the grey-scale image, such as texture, brightness and contrast, are required. Using grey-scale or binary images for pattern recognition therefore greatly reduces the amount of computation and speeds up the method.
Further, step S2 comprises the following steps:
S21. Convert each training sample into a column vector of its pixel grey values;
S22. Apply zero-mean and normalization processing to the column vector to obtain the vector.
The beneficial effect of this further scheme is as follows: the image information captured by the camera is converted into vectors that can be used for computation, which facilitates the subsequent learning and training of the neural network.
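Purely as an illustration of steps S13, S14, S21 and S22 (not part of the patent itself), a minimal Python/NumPy sketch of the preprocessing and vectorization might look as follows; the use of OpenCV, the 64x64 target resolution and the function name are assumptions.

    import cv2
    import numpy as np

    def image_to_vector(image_bgr, size=(64, 64)):
        """Resize a colour image, convert it to grey-scale, and return a
        zero-mean, normalized column vector (sketch of S13, S14, S21, S22)."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # S14: grey-scale conversion
        gray = cv2.resize(gray, size)                        # S13: common resolution (assumed 64x64)
        q = gray.astype(np.float64).reshape(-1, 1)           # S21: column vector of grey values
        q -= q.mean()                                        # S22: zero mean
        norm = np.linalg.norm(q)
        return q / norm if norm > 0 else q                   # S22: normalization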
Further, step S3 comprises the following steps:
S31. From the training samples of each category, select one vector as the initial prototype vector, and use the remaining vectors to train the neural network;
S32. Use any one of the remaining vectors of each category as the initial vector to obtain the initial order parameters, and then obtain the stable order parameters through the dynamical equation;
S33. Compute the output pattern:
q_i(n) = ξ_k(n)·V^T,  V = (v_1, v_2, ..., v_i)
where q_i(n) is the output pattern; ξ_k(n) is the stable order parameter; v_1 to v_i are the prototype vectors corresponding to the initial vectors; V is the prototype matrix composed of the prototype vectors; V^T is the transpose of the prototype matrix;
Judge whether the output pattern is identical to the initial vector:
(1) if they are identical, keep the prototype vector as the new prototype vector;
(2) if they differ, take the average of the initial vector and the prototype vector as the new prototype vector;
S34. Repeat steps S32 to S33 until all remaining vectors have been read, obtaining a new prototype matrix composed of the new prototype vectors;
S35. Repeat steps S32 to S34 until none of the values in the new prototype matrix changes any more, and obtain the pseudo-inverse of that matrix.
The beneficial effect of this further scheme is as follows: the images of the target landing points are processed into training samples, all training samples are fed into the neural network, and the deep synergetic neural network is trained on them, yielding a neural network with learning ability.
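For illustration only, the pseudo-inverse of the prototype matrix mentioned in step S35 can be computed with standard linear algebra; the matrix shapes and placeholder values below are assumptions (one prototype column vector per category).

    import numpy as np

    # Hypothetical prototype matrix: each column is one prototype vector v_k of
    # dimension d, one column per class (here 3 classes: land, forbid, hover).
    d, num_classes = 4096, 3
    V = np.random.randn(d, num_classes)      # placeholder prototypes for illustration

    # Moore-Penrose pseudo-inverse V+ such that V+ @ V is approximately the identity.
    V_plus = np.linalg.pinv(V)

    # The rows of V+ act as the adjoint vectors v_k^+; the initial order parameters
    # of a sample vector q are then xi0 = V_plus @ q.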
Further, the method of obtaining the samples to be tested is identical to the method of obtaining the training samples.
The beneficial effect of this further scheme is as follows: it ensures that the test sample images and the training sample images have the same recognizable features.
Further, the method of obtaining the vectors to be tested is identical to the method of obtaining the training vectors.
The beneficial effect of this further scheme is as follows: it ensures that the vectors to be tested and the training vectors have the same computational properties.
Further, step S6 comprises the following steps:
S61. Compute the initial order parameters to be tested using the pseudo-inverse matrix and the vector to be tested;
S62. If the largest modulus among the initial order parameters to be tested is smaller than a preset threshold, return to step S4 and continue collecting test images; otherwise, perform step S63;
S63. Evolve the initial order parameters to be tested with the dynamical equation to obtain the stable order parameters to be tested;
S64. Obtain the output pattern to be tested from the stable order parameters to be tested.
The beneficial effect of this further scheme is as follows: the category of the output pattern to be tested can be judged from the stable order parameters to be tested, giving the UAV a basis for its landing decision.
Further, in step S7 the output pattern to be tested corresponds to one of three flags: perform landing, forbid landing, or keep hovering:
If the output pattern to be tested is the perform-landing flag, the ground station system outputs instruction 0001 and the UAV lands at the target position;
If the output pattern to be tested is the forbid-landing flag, the ground station system outputs instruction 0000 and the UAV continues cruising;
If the output pattern to be tested is the keep-hovering flag, the ground station system outputs instruction 0010 and the UAV hovers at its current position.
The beneficial effect of this further scheme is as follows: the UAV performs the corresponding action, performing landing, forbidding landing or keeping hovering, according to the instruction output by the ground station system, thereby achieving autonomous landing.
Compared with the prior art, the present invention also has the following advantages:
1. Target images are recognized by combining synergetics with deep learning, which applies synergetics theory to neural networks and provides another powerful method for training neural networks.
2. The synergetics-based recognition method starts from the image as a whole and recognizes global features. By exploiting the holistic control that synergetics theory exerts over a system and fusing synergetics with a deep neural network, a more efficient, better-fitting deep learning network and algorithm is obtained.
3. Applying the deep synergetic neural network to the autonomous landing process enables the UAV to effectively perceive the landing environment and recognize the target landing point, improving the safety and accuracy of UAV landing and finally realizing the autonomous landing process.
Brief description of the drawings
Fig. 1 is a flowchart of the invention;
Fig. 2 is a structural schematic diagram of the neural network.
Embodiment
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples given are intended only to explain the invention, not to limit its scope.
A UAV autonomous landing method based on a deep synergetic neural network, as shown in Fig. 1 and Fig. 2, comprises the following steps:
1. Image acquisition and preprocessing
Images of the landing target are collected by the camera and used as sample images. The sample images cover three categories: perform landing, forbid landing, and keep hovering, and images of the same category are grouped together manually. All sample images are cropped so that every sample image has the same resolution, and the cropped sample images are then converted into grey-scale images to serve as the training samples.
2. Constructing the dynamical equation
Each training sample is converted into a column vector of its pixel grey values, and the column vector is zero-meaned and normalized to obtain the vector q.
The order parameter ξ_k(n) is introduced; it represents the projection of the vector q onto v_k in the least-squares sense, with ξ_k(0) = v_k^+·q. The dynamical equation is as follows:
ξ_k(n+1) - ξ_k(n) = γ·(λ_k - D + B·ξ_k^2(n))·ξ_k(n)    (2)
where D = (B + C)·Σ_{k'} ξ_{k'}^2(n); q is the sample vector; v_k is the initial prototype vector; v_k^+ is the adjoint vector of v_k; k indexes one of the k classes; k' is the summation index over the classes; n is the iteration number; γ is the iteration step size; λ_k is the attention parameter, which is greater than zero; B and C are constant coefficients.
The k categories correspond to the three classes: perform landing, forbid landing, and keep hovering.
The above steps follow the synergetics method.
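As an illustrative sketch of iterating the dynamical equation above (not the patent's implementation), the order-parameter evolution can be written in NumPy as follows; the step size, attention parameter and coefficient values are assumed defaults.

    import numpy as np

    def evolve_order_parameters(xi0, lam=1.0, B=1.0, C=1.0, gamma=0.2, n_iter=200):
        """Iterate the dynamical equation
            xi_k(n+1) - xi_k(n) = gamma * (lambda_k - D + B*xi_k(n)**2) * xi_k(n),
            D = (B + C) * sum over k' of xi_k'(n)**2,
        until the order parameters stop changing (a sketch; gamma, lam, B and C
        are assumed values, not taken from the patent)."""
        xi = np.asarray(xi0, dtype=np.float64).copy()
        for _ in range(n_iter):
            D = (B + C) * np.sum(xi ** 2)
            xi_new = xi + gamma * (lam - D + B * xi ** 2) * xi
            if np.allclose(xi_new, xi, atol=1e-9):
                break
            xi = xi_new
        return xi  # in the stable state one component tends to 1 and the rest to 0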
3. Training the deep synergetic neural network
(1) From the training samples of each category, one vector q is selected as the initial prototype vector v_k. The initial prototype vectors are not substituted into the evolution of formula (2); the remaining vectors q are used to train the neural network and are substituted into the evolution of formula (2), and the initial prototype vectors v_k are continually corrected according to the evolution results.
(2) One of the remaining vectors q of each category is selected as the initial vector q_i(0), and the initial order parameters ξ_k(0) are obtained from q_i(0) through ξ_k(0) = v_k^+·q_i(0). The initial order parameters ξ_k(0) are substituted into the dynamical equation and iterated n times; this is the evolution process, which continues until the order parameters no longer change, yielding the stable order parameters ξ_k(n).
(3) The output pattern is computed:
q_i(n) = ξ_k(n)·V^T,  V = (v_1, v_2, ..., v_i)
where q_i(n) is the output pattern; ξ_k(n) is the stable order parameter; v_1 to v_i are the prototype vectors corresponding to the initial vectors; V is the prototype matrix composed of the prototype vectors.
The output pattern q_i(n) is compared with the initial vector q_i(0):
(i) if they are identical, the classification is correct and the prototype vector v_i is used directly as the new prototype vector v_i';
(ii) if they differ, the classification is wrong and the average of the initial vector q_i(0) and its corresponding prototype vector v_i is used as the new prototype vector v_i'.
(4) Steps (2) to (3) of step 3 are repeated until all remaining vectors q have been read, and a new prototype matrix V' is obtained from the new prototype vectors v_i', V' = (v_1', v_2', ..., v_i').
(5) Steps (2) to (4) of step 3 are repeated so that the new prototype matrix V' keeps being updated until none of its values changes any more; for example, when the entries 1, 2, 3 and 4 of the resulting new prototype matrix V' no longer change, the final new prototype matrix V' has been produced, and the pseudo-inverse matrix of this new prototype matrix V' is obtained.
The above steps follow the deep-learning method: the landing-target images are first classified manually, the neural network is then trained repeatedly with the ground-target images, and finally the stable pseudo-inverse matrix of the new prototype matrix V' is obtained, which gives the neural network its learning ability.
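A compact, purely illustrative sketch of the training loop of steps 3(1) to 3(5) is given below; it assumes the vectorization and evolution helpers sketched earlier, and it uses the winning order-parameter index as a practical proxy for the patent's comparison of the output pattern with the initial vector.

    import numpy as np

    def train_prototypes(vectors_by_class, evolve, max_rounds=50, tol=1e-6):
        """vectors_by_class: list (one entry per class) of lists of column vectors
        of shape (d, 1); evolve: order-parameter evolution routine, e.g. the
        evolve_order_parameters sketch above. Returns V' and its pseudo-inverse."""
        prototypes = [cls[0].copy() for cls in vectors_by_class]        # step 3(1)
        remaining = [cls[1:] for cls in vectors_by_class]
        for _ in range(max_rounds):                                     # step 3(5): repeat until stable
            V_old = np.hstack(prototypes)
            for k, vecs in enumerate(remaining):                        # step 3(4): read all remaining vectors
                for q0 in vecs:
                    V = np.hstack(prototypes)                           # current prototype matrix
                    xi0 = (np.linalg.pinv(V) @ q0).ravel()              # step 3(2): initial order parameters
                    xi_n = evolve(xi0)                                  # step 3(2): stable order parameters
                    q_out = V @ xi_n.reshape(-1, 1)                     # step 3(3): output pattern
                    # The patent compares q_out with q0; the winning class index is
                    # used here as a proxy for that comparison (an interpretation).
                    if int(np.argmax(np.abs(xi_n))) != k:               # case (ii): misclassified
                        prototypes[k] = (q0 + prototypes[k]) / 2.0      # average becomes the new prototype
                    # case (i): classified correctly, prototype kept unchanged
            V_new = np.hstack(prototypes)
            if np.allclose(V_new, V_old, atol=tol):                     # no value changes any more
                break
        V_final = np.hstack(prototypes)
        return V_final, np.linalg.pinv(V_final)                         # V' and its pseudo-inverse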
4. Recognizing the ground target with the deep synergetic neural network
(1) In this embodiment a ground-station system is provided, and two cameras are mounted on the underside of the UAV to form a binocular camera. The binocular camera collects images of the ground in real time and uploads them to the UAV ground-station system. The collected image information is processed in the same way as in step 1 and converted into the vector to be tested q'.
(2) Using the vector to be tested q' and the pseudo-inverse matrix obtained in step 3(5), the initial order parameters to be tested ξ_k(0)' are obtained through ξ_k(0)' = V^+·q', where V^+ is the pseudo-inverse matrix.
(3) A threshold α = 0.4 is set and the largest modulus among the initial order parameters to be tested ξ_k(0)' is found. If this largest modulus is smaller than the threshold α, there is no landing point below the UAV and the camera continues collecting test images; if the largest modulus among ξ_k(0)' is greater than or equal to the threshold α, the following step (4) is performed.
(4) The initial order parameters to be tested ξ_k(0)' are evolved with the dynamical equation until one order-parameter component equals 1 and the remaining components equal 0, yielding the stable order parameters to be tested ξ_k(n)'.
(5) According to the component of the stable order parameters to be tested ξ_k(n)' whose modulus equals 1, the category of the output pattern to be tested q_i(n)' is judged.
The above steps use the neural network with learning ability to recognize the ground-target image and finally obtain the category of the output pattern to be tested q_i(n)'.
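A minimal sketch of this recognition stage is given below; it reuses the threshold α = 0.4 from step 4(3), while the class ordering, the class names and the evolve argument (the order-parameter evolution routine sketched above) are assumptions.

    import numpy as np

    CLASSES = ("perform landing", "forbid landing", "keep hovering")  # assumed ordering

    def recognize(q_test, V_plus, evolve, alpha=0.4):
        """Steps 4(2) to 4(5): compute the initial order parameters, apply the
        threshold, evolve to a stable state, and return the class (a sketch)."""
        xi0 = (V_plus @ q_test).ravel()            # 4(2): initial order parameters
        if np.max(np.abs(xi0)) < alpha:            # 4(3): no landing point in view
            return None                            # keep collecting test images
        xi_n = evolve(xi0)                         # 4(4): stable order parameters
        winner = int(np.argmax(np.abs(xi_n)))      # 4(5): component closest to 1
        return CLASSES[winner]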
5. Autonomous landing
The processed ground-image information from step 4 is sent back to the ground-station system and fed into the deep synergetic neural network trained in step 3. According to the category of the resulting output pattern to be tested q_i(n)', the image of the region below the flying UAV is classified. The output pattern to be tested q_i(n)' corresponds to one of the three flags, perform landing, forbid landing or keep hovering, so the UAV executes one of the following three action commands:
(i) If the perform-landing flag is recognized, the ground station system outputs instruction 0001, computes the landing-target position information from the UAV coordinates, and guides the UAV to land at the target position with the aid of the recognition system.
(ii) If the forbid-landing flag is recognized, the ground station system outputs instruction 0000, and the UAV keeps its current flight state and continues cruising.
(iii) If the keep-hovering flag is recognized, the ground station system outputs instruction 0010, and the UAV stops cruising, hovers at its current position, and keeps its coordinates unchanged.
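For completeness, the flag-to-instruction mapping above can be expressed as a small lookup table; the instruction codes 0001, 0000 and 0010 come from the embodiment, while the function and dictionary names are assumptions.

    # Hypothetical mapping from the recognized class to the ground-station instruction.
    INSTRUCTIONS = {
        "perform landing": "0001",   # land at the target position
        "forbid landing":  "0000",   # keep cruising
        "keep hovering":   "0010",   # hover at the current position
    }

    def ground_station_command(recognized_class):
        """Return the instruction code for the recognized class, or None if no
        landing point was found and the UAV should keep collecting images."""
        if recognized_class is None:
            return None
        return INSTRUCTIONS[recognized_class]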
The present invention first classifies the target images manually into the three categories of perform landing, forbid landing and keep hovering, and then trains the neural network with the categorized target images so that the network can automatically recognize the three categories. During flight the UAV continuously collects landing-target images in real time, and through the recognition performed by the neural network it can autonomously judge whether a landing point exists. The UAV can thus autonomously perceive its surroundings during flight and recognize the target landing point, which expands the range of places where it can land.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (8)

1. A UAV autonomous landing method based on a deep synergetic neural network, characterized by comprising the following steps:
S1. Collect target images and preprocess them as training samples;
S2. Convert the training samples into vectors, and construct the dynamical equation:
ξ_k(n+1) - ξ_k(n) = γ·(λ_k - D + B·ξ_k^2(n))·ξ_k(n)
where ξ_k(0) = v_k^+·q and D = (B + C)·Σ_{k'} ξ_{k'}^2(n); q is the sample vector; v_k is the initial prototype vector; v_k^+ is the adjoint vector of v_k; k indexes one of the k classes; k' is the summation index over the classes; n is the iteration number; γ is the iteration step size; ξ_k(n) is the order parameter at iteration n and represents the projection of q onto v_k in the least-squares sense; λ_k is the attention parameter, which is greater than zero; B and C are constant coefficients;
S3. Train the neural network with the vectors and the dynamical equation, and obtain the pseudo-inverse matrix;
S4. Collect test images and preprocess them as samples to be tested;
S5. Convert the samples to be tested into vectors to be tested;
S6. Obtain the output pattern to be tested from the vectors to be tested and the pseudo-inverse matrix;
S7. Control whether the UAV lands according to the output pattern to be tested.
2. The UAV autonomous landing method based on a deep synergetic neural network according to claim 1, characterized in that step S1 comprises the following steps:
S11. Obtain the target images collected by the camera on the UAV, the target images covering three categories: perform landing, forbid landing, and keep hovering;
S12. According to the category of the current image, place target images of the same category into the same sample image set;
S13. Crop the sample images in each sample image set until all sample images have the same resolution;
S14. Convert the cropped sample images into grey-scale images and use them as the training samples.
3. The UAV autonomous landing method based on a deep synergetic neural network according to claim 1, characterized in that step S2 comprises the following steps:
S21. Convert each training sample into a column vector of its pixel grey values;
S22. Apply zero-mean and normalization processing to the column vector to obtain the vector.
4. The UAV autonomous landing method based on a deep synergetic neural network according to claim 2, characterized in that step S3 comprises the following steps:
S31. From the training samples of each category, select one vector as the initial prototype vector, and use the remaining vectors to train the neural network;
S32. Use any one of the remaining vectors of each category as the initial vector to obtain the initial order parameters, and then obtain the stable order parameters through the dynamical equation;
S33. Compute the output pattern:
q_i(n) = ξ_k(n)·V^T,  V = (v_1, v_2, ..., v_i)
where q_i(n) is the output pattern; ξ_k(n) is the stable order parameter; v_1 to v_i are the prototype vectors corresponding to the initial vectors; V is the prototype matrix composed of the prototype vectors; V^T is the transpose of the prototype matrix;
Judge whether the output pattern is identical to the initial vector:
(1) if they are identical, keep the prototype vector as the new prototype vector;
(2) if they differ, take the average of the initial vector and the prototype vector as the new prototype vector;
S34. Repeat steps S32 to S33 until all remaining vectors have been read, obtaining a new prototype matrix composed of the new prototype vectors;
S35. Repeat steps S32 to S34 until none of the values in the new prototype matrix changes any more, and obtain the pseudo-inverse of that matrix.
5. The UAV autonomous landing method based on a deep synergetic neural network according to claim 1, characterized in that the method of obtaining the samples to be tested is identical to the method of obtaining the training samples.
6. The UAV autonomous landing method based on a deep synergetic neural network according to claim 1, characterized in that the method of obtaining the vectors to be tested is identical to the method of obtaining the training vectors.
7. The UAV autonomous landing method based on a deep synergetic neural network according to claim 1, characterized in that step S6 comprises the following steps:
S61. Compute the initial order parameters to be tested using the pseudo-inverse matrix and the vector to be tested;
S62. If the largest modulus among the initial order parameters to be tested is smaller than a preset threshold, return to step S4 and continue collecting test images; otherwise, perform step S63;
S63. Evolve the initial order parameters to be tested with the dynamical equation to obtain the stable order parameters to be tested;
S64. Obtain the output pattern to be tested from the stable order parameters to be tested.
8. The UAV autonomous landing method based on a deep synergetic neural network according to claim 1, characterized in that in step S7 the output pattern to be tested corresponds to one of three flags: perform landing, forbid landing, or keep hovering:
If the output pattern to be tested is the perform-landing flag, the ground station system outputs instruction 0001 and the UAV lands at the target position;
If the output pattern to be tested is the forbid-landing flag, the ground station system outputs instruction 0000 and the UAV continues cruising;
If the output pattern to be tested is the keep-hovering flag, the ground station system outputs instruction 0010 and the UAV hovers at its current position.
CN201710446953.4A 2017-06-14 2017-06-14 UAV autonomous landing method based on a deep synergetic neural network Pending CN107273929A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710446953.4A CN107273929A (en) 2017-06-14 2017-06-14 UAV autonomous landing method based on a deep synergetic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710446953.4A CN107273929A (en) 2017-06-14 2017-06-14 UAV autonomous landing method based on a deep synergetic neural network

Publications (1)

Publication Number Publication Date
CN107273929A true CN107273929A (en) 2017-10-20

Family

ID=60066701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710446953.4A Pending CN107273929A (en) UAV autonomous landing method based on a deep synergetic neural network

Country Status (1)

Country Link
CN (1) CN107273929A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020001398A1 (en) * 2000-06-28 2002-01-03 Matsushita Electric Industrial Co., Ltd. Method and apparatus for object recognition
CN102662931A (en) * 2012-04-13 2012-09-12 Xiamen University Semantic role labeling method based on a synergetic neural network
CN105512680A (en) * 2015-12-02 2016-04-20 Beihang University Multi-view SAR image target recognition method based on a deep neural network
CN106127146A (en) * 2016-06-22 2016-11-16 University of Electronic Science and Technology of China UAV flight-path guidance method based on gesture recognition
CN106168808A (en) * 2016-08-25 2016-11-30 Nanjing University of Posts and Telecommunications Automatic cruising method and system for a rotor UAV based on deep learning
CN106503665A (en) * 2016-10-26 2017-03-15 University of Electronic Science and Technology of China Face recognition method based on a synergetic neural network
CN106650806A (en) * 2016-12-16 2017-05-10 Peking University Shenzhen Graduate School Cooperative deep network model method for pedestrian detection

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHUIPIN GOU et al.: "Image Recognition Using Synergetic Neural Network", Second International Symposium on Neural Networks *
YUANZHANG WEI et al.: "A New Method of Deep Synergetic Neural Network for Face Recognition", ICIIP *
ZONG-HUI SHEN et al.: "A New Method of Object Recognition Based on Deep Synergetic Neural Network", 2016 International Conference on Applied Mechanics, Mechanical and Materials Engineering *
张美璟: "License plate character recognition based on a synergetic neural network" (基于协同神经网络的车牌字符识别), Computer Knowledge and Technology (电脑知识与技术) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108153334A (en) * 2017-12-01 2018-06-12 Nanjing University of Aeronautics and Astronautics Visual autonomous return and landing method and system for unmanned helicopter without cooperative target
CN108153334B (en) * 2017-12-01 2020-09-25 Nanjing University of Aeronautics and Astronautics Visual autonomous return and landing method and system for unmanned helicopter without cooperative target
CN110196601A (en) * 2018-02-26 2019-09-03 Beijing Jingdong Shangke Information Technology Co., Ltd. Unmanned aerial vehicle (UAV) control method, apparatus, system and computer readable storage medium
CN109885091A (en) * 2019-03-21 2019-06-14 North China Electric Power University (Baoding) UAV autonomous flight control method and system
CN110231829A (en) * 2019-06-20 2019-09-13 Shanghai University Reinforcement-learning-based autonomous landing method for a small unmanned gyroplane using data fusion
CN110231829B (en) * 2019-06-20 2022-01-07 Shanghai University Intensive learning small unmanned gyroplane autonomous landing method based on data fusion
CN112947526A (en) * 2021-03-12 2021-06-11 Huazhong University of Science and Technology Unmanned aerial vehicle autonomous landing method and system

Similar Documents

Publication Publication Date Title
CN107273929A (en) UAV autonomous landing method based on a deep synergetic neural network
CN107103164B (en) Distribution method and device for unmanned aerial vehicle to execute multiple tasks
CN108229587B (en) Autonomous transmission tower scanning method based on hovering state of aircraft
CN108921200A (en) Method, apparatus, equipment and medium for classifying to Driving Scene data
CN109445456A (en) A kind of multiple no-manned plane cluster air navigation aid
CN107481292A (en) The attitude error method of estimation and device of vehicle-mounted camera
CN110991502B (en) Airspace security situation assessment method based on category activation mapping technology
CN108197584A (en) A kind of recognition methods again of the pedestrian based on triple deep neural network
Koutras et al. Autonomous and cooperative design of the monitor positions for a team of uavs to maximize the quantity and quality of detected objects
CN107886099A (en) Synergetic neural network and its construction method and aircraft automatic obstacle avoiding method
CN107480591A (en) Flying bird detection method and device
Zhang et al. A bionic dynamic path planning algorithm of the micro UAV based on the fusion of deep neural network optimization/filtering and hawk-eye vision
Rojas-Perez et al. Real-time landing zone detection for UAVs using single aerial images
CN110866548A (en) Infrared intelligent matching identification and distance measurement positioning method and system for insulator of power transmission line
JP2020047272A (en) Learning method and learning device for detecting lane on the basis of cnn, and testing method and testing device using the same
Chen et al. Integrated air-ground vehicles for uav emergency landing based on graph convolution network
Rojas-Perez et al. A temporal CNN-based approach for autonomous drone racing
Wu et al. Multi-objective reinforcement learning for autonomous drone navigation in urban areas with wind zones
Naso et al. Autonomous flight insurance method of unmanned aerial vehicles Parot Mambo using semantic segmentation data
Zwick et al. Sensor Model-Based Trajectory Optimization for UAVs Using Nonlinear Model Predictive Control
Prates et al. Autonomous 3-D aerial navigation system for precision agriculture
Wu et al. Planning efficient and robust behaviors for model-based power tower inspection
Dong et al. Research on enemy aircraft bank angle estimation for autonomous air combat
Xiang et al. A Study of Autonomous Landing of UAV for Mobile Platform
Estlin et al. Enabling autonomous science for a Mars rover

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2017-10-20)