KR101813697B1 - Unmanned aerial vehicle flight control system and method using deep learning - Google Patents


Info

Publication number
KR101813697B1
Authority
KR
South Korea
Prior art keywords
uav
posture
neural network
control module
deep learning
Prior art date
Application number
KR1020150183932A
Other languages
Korean (ko)
Other versions
KR20170074539A (en)
Inventor
최영식
황금별
Original Assignee
한국항공대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국항공대학교산학협력단 filed Critical 한국항공대학교산학협력단
Priority to KR1020150183932A priority Critical patent/KR101813697B1/en
Publication of KR20170074539A publication Critical patent/KR20170074539A/en
Application granted granted Critical
Publication of KR101813697B1 publication Critical patent/KR101813697B1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/08 Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808 Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C27/00 Rotorcraft; Rotors peculiar thereto
    • B64C27/006 Safety devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C27/00 Rotorcraft; Rotors peculiar thereto
    • B64C27/04 Helicopters
    • B64C27/08 Helicopters with two or more rotors
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C AEROPLANES; HELICOPTERS
    • B64C39/00 Aircraft not otherwise provided for
    • B64C39/02 Aircraft not otherwise provided for characterised by special use
    • B64C39/024 Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/08 Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808 Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • G05D1/0816 Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft to ensure stability
    • G05D1/0825 Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft to ensure stability using mathematical models
    • B64C2201/141
    • B64C2700/6292
    • B64C2700/6294

Abstract

A UAV flight control method using deep learning includes obtaining the current posture of a UAV from a posture prediction module, obtaining from a position control module a target value including a target posture and the magnitude of a target thrust of the UAV, forming a deep learning neural network based on the current posture and the target value, and controlling the posture of the UAV based on the deep learning neural network, wherein the step of forming the deep learning neural network performs learning of the deep learning neural network based on the current posture and the target value obtained while the UAV hovers normally in conjunction with the position control module.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a UAV flight control system and method using deep learning.

More particularly, the present invention relates to a method for controlling the flight of a UAV using deep learning, such as automatically recovering from a flight fault of the UAV or maintaining hovering.

A UAV typically has a position controller for smooth and stable movement and hovering, and the position controller must be able to accurately estimate the current position of the UAV in order to operate according to its intended algorithm. UAVs generally use GPS for reliable position estimation, and computer vision has also come into wide use in recent years.

Prior-art techniques that measure the position of a UAV based on GPS or computer vision have the problem that, when GPS reception fails or the computer vision system does not operate properly, the UAV cannot maintain its position and drifts in one direction, eventually falling, so that a temporary flight fault occurs.

In this way, a failure of position sensing, such as a loss of GPS reception or a failure of the computer vision position estimate, directly affects the stability of the flight and causes problems such as airframe damage, human injury, and property damage.

The background technology of the present application is disclosed in Korean Patent Registration No. 10-1472392 (Registered on Apr. 20, 2014).

The present invention has been made to solve the above-mentioned problems of the prior art, and an object of the present invention is to provide a UAV flight control method using deep learning that can solve problems such as falling, airframe damage, human injury, and property damage when the position controller of the UAV does not operate stably.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a UAV flight control method using deep learning that enables the UAV to hover safely when the position controller does not operate stably.

It should be understood, however, that the technical scope of the embodiments of the present invention is not limited to the above-described technical problems, and other technical problems may exist.

As a technical means for accomplishing the above technical object, a UAV flight control method using deep learning according to an embodiment of the present invention includes acquiring the current posture of the UAV from a posture prediction module, acquiring from a position control module a target value including a target posture and the magnitude of a target thrust, forming a deep learning neural network based on the current posture and the target value, and controlling the posture of the UAV based on the deep learning neural network, wherein the step of forming the deep learning neural network performs learning of the deep learning neural network based on the current posture and the target value acquired while the UAV hovers normally in conjunction with the position control module.

The step of forming the deep learning neural network may perform the learning by setting the current posture acquired in the normal hovering state as the input of the deep learning neural network and the target value acquired in the normal hovering state as the output of the deep learning neural network. Further, in the step of forming the deep learning neural network, sensor measurements from, for example, a distance sensor and an accelerometer may be added as neural network inputs for more accurate learning.

Also, in the step of controlling the posture of the UAV, when the position control module operates abnormally, the UAV may be controlled to take a hovering posture based on the current posture output from the posture prediction module and the target value output by the deep learning neural network learned corresponding to the current posture.

Also, the step of controlling the posture of the UAV may control the posture using the current position and the current posture predicted based on an extended Kalman filter when the position control module operates normally, and may control the posture using the current posture predicted based on the extended Kalman filter when the position control module operates abnormally.
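For illustration, the switching between these two cases can be sketched as follows. This is a minimal sketch in Python with hypothetical interfaces (`attitude_setpoint`, `position_controller`, `dnn`); the specification does not define a programming API, so all names and signatures here are assumptions.

```python
import numpy as np

def attitude_setpoint(position_ok, ekf_state, position_controller, dnn):
    """Select the attitude-control target depending on position-sensing health.

    position_ok         -- True while GPS / computer vision operate normally
    ekf_state           -- extended-Kalman-filter estimates: 'position' (3,)
                           and 'attitude' quaternion (4,)
    position_controller -- callable(position) -> 5-vector
                           [target posture quaternion (4), target thrust (1)]
    dnn                 -- callable(attitude) -> same 5-vector, produced by
                           the learned deep learning neural network
    """
    if position_ok:
        # Normal operation: the position controller uses the EKF position.
        return np.asarray(position_controller(ekf_state["position"]))
    # Abnormal operation: only the EKF attitude is trusted; the learned
    # network supplies a hovering target posture and thrust.
    return np.asarray(dnn(ekf_state["attitude"]))
```

In normal operation the target value comes from the position control module; on a position-sensing fault the learned network's output replaces it, which is the fallback the claims describe.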

Further, the step of forming the deep learning neural network may calculate, as the output of the deep learning neural network, the target value at a first time based on the current posture obtained at the first time and the target value obtained at a second time before the first time. In this case, when a sensor measurement from, for example, a distance sensor or an accelerometer is added to the neural network input for more accurate learning, the sensor measurement obtained at the first time can be used.

Further, the step of forming the deep learning neural network may calculate the target value at the first time based on the current posture obtained at the first time, the target value obtained at the second time, and a weight and an activation function applied to each of them.

In addition, the step of forming the deep learning neural network may include updating the weights based on an output error of the deep learning neural network calculated using a cost function.

The updating of the weights may update the weights based on a gradient descent method such that the output error is minimized.

Meanwhile, a UAV flight control system using deep learning according to an embodiment of the present invention includes a posture prediction module for calculating the current posture of a UAV, a position control module for calculating a target value including the target posture of the UAV and the magnitude of the target thrust, a deep learning neural network generation module for forming a deep learning neural network based on the current posture and the target value, and a posture control module for controlling the posture of the UAV based on the deep learning neural network, wherein the deep learning neural network generation module performs learning of the deep learning neural network based on the current posture and the target value acquired while the UAV hovers normally in conjunction with the position control module.

The deep learning neural network generation module may perform the learning by setting the current posture obtained in the normal hovering state as the input of the deep learning neural network and the target value obtained in the normal hovering state as the output of the deep learning neural network.

When the position control module operates abnormally, the posture control module may control the UAV to take a hovering posture based on the current posture output from the posture prediction module and the target value output by the deep learning neural network learned corresponding to the current posture.

The posture control module controls the posture using the current position and the current posture of the UAV predicted based on an extended Kalman filter when the position control module operates normally, and controls the posture using the current posture predicted based on the extended Kalman filter when the position control module operates abnormally.

Also, the deep learning neural network generation module may calculate the target value at a first time as the output of the deep learning neural network, based on the current posture obtained at the first time and the target value obtained at a second time before the first time. In this case, when a sensor measurement from, for example, a distance sensor or an accelerometer is added to the neural network input for more accurate learning, the sensor measurement obtained at the first time can be used.

The deep learning neural network generation module may calculate the target value at the first time based on the current posture obtained at the first time, the target value obtained at the second time, and a weight and an activation function applied to each of them.

Also, the deep learning neural network generation module may update the weights based on an output error of the deep learning neural network calculated using a cost function.

In addition, the deep learning neural network generation module may update the weights based on a gradient descent method so that the output error is minimized.

The above-described task solution is merely exemplary and should not be construed as limiting the present disclosure. In addition to the exemplary embodiments described above, there may be additional embodiments in the drawings and the detailed description of the invention.

According to the present invention, since the learning of the deep learning neural network for attitude control of the UAV is performed based on the current posture and the target value of the UAV acquired while the UAV hovers normally in conjunction with the position control module, the posture of the UAV can be automatically restored, or hovering can be maintained, based on the learned deep learning neural network in the event of a flight fault.

According to the above-mentioned solution of the present invention, by controlling the UAV to take a hovering posture based on the target value output of the learned deep learning neural network when the position control module operates abnormally, problems such as falling, airframe damage, human injury, and property damage can be solved.

FIG. 1 is a diagram schematically illustrating the configuration of a UAV flight control system using deep learning according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating the data flow when the position control module operates normally in a UAV flight control system using deep learning according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating the data flow when the position control module operates abnormally in a UAV flight control system using deep learning according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating the configuration of the deep learning neural network in a UAV flight control system using deep learning according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the configuration of the deep learning neural network in a UAV flight control system using deep learning according to an embodiment of the present invention, unfolded over time.
FIG. 6 is a diagram illustrating the process of updating the weights of the deep learning neural network in a UAV flight control system using deep learning according to an embodiment of the present invention.
FIG. 7 is a flowchart illustrating a UAV flight control method using deep learning according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.

Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed therebetween.

Throughout this specification, when a member is referred to as being located "on", "above", "under", or "below" another member, this includes not only the case where the two members are in contact with each other but also the case where another member is interposed between the two members.

Throughout this specification, when an element is described as "including" another element, this means that it may further include other elements rather than excluding them, unless specifically stated otherwise.

The present disclosure relates to a UAV flight control technique using deep learning that safely recovers the position and posture of the UAV and maintains a hovering (idle) state when position sensing fails, for example, when GPS reception is lost or the computer vision position estimate fails.

Herein, a UAV (Unmanned Aerial Vehicle) is an aircraft controlled by remote operation, and may be an MAV (Micro Aerial Vehicle).

FIG. 1 is a schematic diagram illustrating the configuration of a UAV flight control system using deep learning according to an embodiment of the present invention, FIG. 2 is a diagram illustrating the data flow when the position control module operates normally in the UAV flight control system, and FIG. 3 is a diagram illustrating the data flow when the position control module operates abnormally in the UAV flight control system.

Referring to FIGS. 2 and 3, the UAV flight control system 100 using deep learning according to an embodiment of the present invention operates differently depending on whether the position control module 120 operates normally as shown in FIG. 2 or abnormally as shown in FIG. 3.

The state where the position control module 120 operates normally means that position measurement sensors such as an accelerometer, a speedometer, a geomagnetic sensor, GPS, or computer vision operate normally so that the position information of the UAV can be acquired; on the other hand, the state where the position control module 120 operates abnormally means a state in which the position information of the UAV cannot be acquired due to an abnormal condition such as a GPS failure or a failure, damage, or error of the position control module 120.

As shown in FIG. 2, learning of the deep learning neural network can be performed through the deep learning neural network generation module 130 while the position control module 120 operates normally. The data used for learning of the deep learning neural network may be the current posture, the current position, the target posture, and the magnitude of the target thrust of the UAV, acquired while the UAV hovers with the position control module 120 operating normally.

Also as shown in FIG. 2, the posture control module 140 can control the posture of the UAV based on the current posture output from the posture prediction module 110 and on the target posture and the magnitude of the target thrust output from the position control module 120, which takes the current position output from the posture prediction module 110 into account; at the same time, the deep learning neural network generation module 130 can perform learning of the deep learning neural network.

As shown in FIG. 3, when the position control module 120 operates abnormally, the posture control module 140 can control the UAV to take a hovering posture based on the current posture output from the posture prediction module 110 and the target value output by the learned deep learning neural network corresponding to the current posture.

In the case of FIG. 3, when the position control module 120 operates abnormally, for example, because of a GPS reception failure or a failure of the computer vision position estimate, the UAV flight control system 100 using deep learning according to an embodiment of the present invention controls the UAV to take a hovering posture based on the target value output of the learned deep learning neural network 135 (i.e., the target posture and the magnitude of the target thrust generated in response to the current posture), so that the UAV can hover safely without falling.

Referring to FIG. 1, a UAV flight control system 100 using deep learning according to an embodiment of the present invention includes a posture prediction module 110, a position control module 120, a deep learning neural network generation module 130, and a posture control module 140.

The posture predicting module 110 can calculate the current posture of the UAV.

The posture prediction module 110 can predict the current position and the current posture of the UAV based on an extended Kalman filter, using measurements from sensors such as an accelerometer and computer vision.

The posture prediction module 110 can predict the current position of the UAV as information including an X coordinate, a Y coordinate, and a Z coordinate, and can predict the current posture of the UAV as information such as roll, pitch, and yaw. Roll means rotation about the fore-and-aft (longitudinal) axis, pitch means the up-and-down motion of the nose of the UAV, and yaw means the angle by which the UAV is turned to one side.
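For illustration, roll, pitch, and yaw angles can be converted to the quaternion posture representation used later in this description. This standard ZYX-convention conversion is not part of the patent disclosure; it is shown only to connect the two representations.

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw angles (radians, ZYX convention) to a
    unit quaternion (q0, q1, q2, q3) = (w, x, y, z)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    q0 = cr * cp * cy + sr * sp * sy
    q1 = sr * cp * cy - cr * sp * sy
    q2 = cr * sp * cy + sr * cp * sy
    q3 = cr * cp * sy - sr * sp * cy
    return (q0, q1, q2, q3)
```

For example, a level posture (all angles zero) maps to the identity quaternion (1, 0, 0, 0), which corresponds to the hovering posture.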

The posture prediction module 110 may produce the current position and the current posture of the UAV as its outputs.

The current position of the UAV predicted by the posture prediction module 110 may be transferred to the position control module 120, and the current posture of the UAV predicted by the posture prediction module 110 may be transferred to the posture control module 140.

The position control module 120 may calculate the target value including the target posture of the UAV and the magnitude of the target thrust.

The position control module 120 can calculate the target posture and the magnitude of the target thrust of the UAV based on the current position of the UAV received from the posture prediction module 110 and a predetermined target position and target yaw angle of the UAV.

That is, the position control module 120 may take as inputs the current position of the UAV received from the posture prediction module 110 and the predetermined target position and target yaw angle of the UAV, and may produce as outputs the target posture and the magnitude of the target thrust.
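The patent does not disclose the internal control law of the position control module. One common choice, shown here purely as a hypothetical sketch, is a PD controller that maps the position error to a small-angle target attitude (as a quaternion) plus a thrust magnitude; all gains and the small-angle mapping are illustrative assumptions.

```python
import numpy as np

def position_controller(pos, vel, target_pos, target_yaw,
                        kp=1.0, kd=0.6, hover_thrust=0.5):
    """Hypothetical PD position controller: maps position error to a
    small-angle target attitude quaternion plus a thrust magnitude.
    pos, vel, target_pos are 3-vectors in a world frame (x, y, z-up)."""
    err = target_pos - pos
    acc = kp * err - kd * vel                 # desired acceleration
    # Small-angle mapping: x/y acceleration demands pitch/roll tilt
    pitch_des = np.clip(acc[0], -0.3, 0.3)
    roll_des = np.clip(-acc[1], -0.3, 0.3)
    thrust_des = hover_thrust + 0.3 * acc[2]  # z error adjusts thrust
    # Small-angle quaternion approximation, then normalize
    q = np.array([1.0, roll_des / 2, pitch_des / 2, target_yaw / 2])
    q /= np.linalg.norm(q)
    return np.concatenate([q, [thrust_des]])  # 5-dim target value
```

At the target position with zero velocity and zero yaw this returns the level hovering target (identity quaternion and hover thrust), matching the module's described role of producing the target posture and thrust.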

The deep learning neural network generation module 130 can form a deep learning neural network based on the current posture output from the posture prediction module 110 and the target value output from the position control module 120 (i.e., the target posture and the magnitude of the target thrust).

The deep learning neural network generation module 130 can perform learning of the deep learning neural network based on the current posture and the target value acquired while the UAV hovers normally in conjunction with the position control module 120. This can be more easily understood with reference to FIG. 2.

The current position, the current posture, and the target value (i.e., the target posture and the magnitude of the target thrust) used for learning of the deep learning neural network may be acquired while the position control module 120 operates normally and the UAV is hovering. Here, the state where the position control module 120 operates normally may refer to a state where the position information of the UAV can be obtained through position measurement means such as GPS or computer vision.

The deep learning neural network generation module 130 can perform learning of the deep learning neural network by setting the current posture obtained while the UAV hovers normally as the input of the deep learning neural network and the target value obtained while the UAV hovers normally as the output of the deep learning neural network.

In this case, the UAV may further include a sensor unit having a distance sensor, an accelerometer, and the like, and the deep learning neural network generation module 130 may additionally use the measurements of the distance sensor, the accelerometer, and the like as inputs of the deep learning neural network.

The posture control module 140 can control the posture of the UAV.

The posture control module 140 can control the posture of the UAV using the current position and the current posture of the UAV predicted based on an extended Kalman filter when the position control module 120 operates normally, and can control the posture of the UAV using the current posture predicted based on the extended Kalman filter when the position control module 120 operates abnormally.

When the position control module 120 operates abnormally as shown in FIG. 3, the posture control module 140 can control the UAV to take a hovering posture based on the current posture output from the posture prediction module 110 and the target value output (i.e., the target posture and the magnitude of the target thrust) of the learned deep learning neural network corresponding to the current posture.

Here, the state where the position control module 120 operates abnormally means a state in which the position information of the UAV cannot be acquired through position measurement means such as GPS or computer vision, for example, because of an abnormal state (e.g., failure, damage, or measurement error) of the position control module 120 or of the position measurement means.

Hereinafter, a method of constructing the deep learning neural network formed by the deep learning neural network generation module 130 will be described in more detail.

The deep learning neural network generated by the deep learning neural network generation module 130 may have an input as shown in Equation 1 and an output as shown in Equation 2.

[Equation 1]

x(t) = [q0(t), q1(t), q2(t), q3(t)]^T

[Equation 2]

o(t) = [q0,des(t), q1,des(t), q2,des(t), q3,des(t), Tdes(t)]^T

Here, x(t) is the input of the deep learning neural network at time t, expressed as a four-dimensional vector: the current posture (q0, q1, q2, q3) of the UAV expressed as a quaternion. o(t) is the output of the deep learning neural network at time t, expressed as a five-dimensional vector consisting of the target posture (q0,des, q1,des, q2,des, q3,des) and the magnitude of the target thrust (Tdes).

The input/output data used for learning of the deep learning neural network (i.e., the current posture, the target posture, and the magnitude of the target thrust) serve as the learning samples, and can be acquired from the posture prediction module 110 and the position control module 120.

The deep learning neural network generated by the deep learning neural network generation module 130 may be generated based on a Recurrent Neural Network (RNN). An RNN is a method of constructing a deep neural network for learning time-sequence data: the neural network is formed by repeating the process in which the output of a specific neuron at time t is fed back in as the input of another neuron at time t+1.

Since the deep learning neural network generation module 130 performs learning of the deep learning neural network based on data acquired sequentially over time (i.e., the current posture, the target posture, and the magnitude of the target thrust) while the UAV hovers normally, the RNN configuration method may be suitable.

FIG. 4 is a diagram illustrating the configuration of the deep learning neural network in a UAV flight control system 100 using deep learning according to an embodiment of the present invention.

Referring to FIG. 4, the deep learning neural network generated by the deep learning neural network generation module 130 may, in one embodiment, be formed of two layers. x(t) means the current posture of the UAV at time t, and o(t-1) means the target posture and the magnitude of the target thrust at time t-1.

U(t) is a weight matrix for feed-forwarding from x(t) to o(t) and may have a size of 5x4 in the example of FIG. 4. V(t) is a weight matrix for feed-forwarding from o(t-1) to o(t) and may have a size of 5x5 in the example of FIG. 4. In this case, the output o(t) of the deep learning neural network at the current time may be as shown in Equation 3.

[Equation 3]

o(t) = f( U(t) x(t) + V(t) o(t-1) )

In this case, f is an activation function. In one embodiment, a logistic function, a hyperbolic tangent (tanh) function, or the like can be used.

If f is a logistic function, it can be defined as Equation (4), and when f is a hyperbolic tangent function, it can be defined as Equation (5).

[Equation 4]

f(x) = 1 / (1 + e^(-x))

[Equation 5]

f(x) = tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
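Equations 3 to 5 can be illustrated with the following minimal sketch in NumPy. The matrix sizes follow the example of FIG. 4; the random initial weights and the sample input values are illustrative only and are not part of the patent disclosure.

```python
import numpy as np

def logistic(x):
    """Logistic activation of Equation 4."""
    return 1.0 / (1.0 + np.exp(-x))

def rnn_forward(x_t, o_prev, U, V, f=np.tanh):
    """One feed-forward step of the recurrent network of Equation 3:
    o(t) = f(U x(t) + V o(t-1)).
    x_t    -- current posture quaternion, shape (4,)
    o_prev -- previous target value, shape (5,)
    U      -- input weight matrix, shape (5, 4)
    V      -- recurrent weight matrix, shape (5, 5)
    f      -- activation (logistic of Eq. 4 or tanh of Eq. 5)
    """
    return f(U @ x_t + V @ o_prev)

# Example with the dimensions of FIG. 4
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(5, 4))
V = rng.normal(scale=0.1, size=(5, 5))
x_t = np.array([1.0, 0.0, 0.0, 0.0])   # current posture (quaternion)
o_prev = np.zeros(5)                   # previous target value
o_t = rnn_forward(x_t, o_prev, U, V)   # new 5-dim target value
```

The recurrence term V o(t-1) is what lets the network exploit the time-sequence structure of the hovering data, as the RNN description above explains.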

FIG. 5 is a diagram in which the configuration of the deep learning neural network in a UAV flight control system using deep learning according to an embodiment of the present invention is unfolded over time.

Referring to FIG. 5, k means that learning of the deep learning neural network is performed over k time steps, and the amount of sequential data learned over time can be determined according to the size of k.

According to Equations 1 to 3, the deep learning neural network generation module 130 can calculate the target value o(t) at the first time (e.g., time t) as the output of the deep learning neural network, based on the current posture x(t) obtained at the first time and the target value o(t-1) obtained at the second time (e.g., time t-1).

In this case, although the input of the deep learning neural network in Equation 1 considers only the current posture of the UAV acquired at the first time, the present invention is not limited thereto: sensor measurements obtained through a distance sensor, an accelerometer, and the like may additionally be considered as inputs of the deep learning neural network. That is, the deep learning neural network generation module 130 can calculate the target value at the first time by setting, as the inputs of the deep learning neural network, the current posture obtained at the first time (e.g., time t), the sensor measurements obtained at the first time, and the target value (i.e., the target posture and the magnitude of the target thrust) obtained at the second time (e.g., time t-1).

The deep learning neural network generation module 130 can calculate the target value o(t) at the first time based on the current posture x(t) obtained at the first time, the target value o(t-1) obtained at the second time, the weights (e.g., U(t), V(t)) applied to each of them, and the activation function f. The process of calculating the target value o(t) at the first time in the deep learning neural network generation module 130 may be referred to as a feed-forward process.

The deep learning neural network generation module 130 may update the weights (e.g., U, V) each time learning data is sequentially acquired, and may update the weights based on the output error of the deep learning neural network calculated using a cost function.

Also, the deep learning neural network generation module 130 may update the weights based on a gradient descent method so that the output error of the deep learning neural network is minimized. The process of calculating the output error of the neural network and updating the weights may be referred to as a feedback process, which can be more easily understood through the following description.

FIG. 6 is a diagram illustrating the process of updating the weights of the deep learning neural network in a UAV flight control system using deep learning according to an embodiment of the present invention.

In FIG. 6, the solid lines represent the feed-forward step and the dotted lines represent the feedback step.

The feed-forward step and the feedback step are performed for learning of the deep learning neural network. In the feed-forward step, the deep learning neural network generation module 130 receives the sample data and calculates the output of the deep learning neural network using the weights and the activation function.

As described above, the feed-forward step includes calculating the target value o(t) at time t as the output of the deep learning neural network, based on the current posture x(t) obtained at time t and the target value o(t-1) obtained at time t-1.

In the feedback step, the deep learning neural network generation module 130 calculates the error between the target output of the sample and the output of the neural network, and updates the weights of the current neural network accordingly.

In one embodiment, the deep learning neural network generation module 130 may update the weights based on back propagation through time (BPTT), a weight updating method that propagates the error backward through time steps.

For the weight update using BPTT, the deep learning neural network generation module 130 may calculate the output error of the deep learning neural network using a cost function, and the cost function may be defined as in Equation (6).

Equation (6):

C = \frac{1}{2} \sum_{p=1}^{P} \left( d_p(t) - y_p(t) \right)^2

In this case, P represents the number of samples of the learning data (that is, pairs of the current posture, the target posture, and the magnitude of the target thrust), and d_p(t) represents the desired output corresponding to the output y(t) of the deep learning neural network at time t for the p-th sample.
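A direct reading of Equation (6) as a squared-error cost can be sketched as follows (the factor 1/2 and the sum over samples follow the usual squared-error form and are assumptions where the original equation image is not recoverable):

```python
import numpy as np

def cost(d, y):
    """C = 1/2 * sum_p (d_p(t) - y_p(t))^2: squared error between the
    desired outputs d_p(t) and the network outputs y_p(t) over P samples."""
    d, y = np.asarray(d, dtype=float), np.asarray(y, dtype=float)
    return 0.5 * np.sum((d - y) ** 2)
```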

The weight U and the weight V in Equation (3) can be updated in a direction that minimizes the output error of the neural network, which is calculated based on the cost function. To this end, the deep learning neural network generation module 130 may use gradient descent to update the weights. The weights can be updated based on Equations (7) to (12) below.

Equation (7):

\Delta U = -\eta \, \frac{\partial C}{\partial U}

Equation (8):

\Delta V = -\eta \, \frac{\partial C}{\partial V}

At this time, \Delta U and \Delta V represent the weight update amounts, and \eta represents the learning rate.

As described with reference to FIG. 5, the weight U and the weight V must be updated k times from time t-k+1 to time t, and the weight U and the weight V can be updated repeatedly as shown in Equations (9) and (10).

Equation (9):

\Delta U = \eta \sum_{\tau = t-k+1}^{t} e(\tau) \, x(\tau)^{\top}

Equation (10):

\Delta V = \eta \sum_{\tau = t-k+1}^{t} e(\tau) \, o(\tau - 1)^{\top}

Here, e(t) denotes the error propagated from the cost function C at time t, and e(t) can be calculated based on Equations (11) and (12).

Equation (11):

e(t) = \left( d(t) - y(t) \right) \odot f'\!\left( U x(t) + V o(t-1) \right)

Equation (12):

e(\tau - 1) = \left( V^{\top} e(\tau) \right) \odot f'\!\left( U x(\tau - 1) + V o(\tau - 2) \right), \quad \tau = t, \ldots, t-k+2
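The feedback pass described above can be sketched as one truncated-BPTT update (a minimal illustration assuming o(t) = tanh(U·x(t) + V·o(t-1)); the exact recursion in the original equation images is not recoverable, so the standard BPTT form is used, with the cost taken at the last unrolled step only):

```python
import numpy as np

def forward(U, V, xs):
    """Unrolled feed-forward pass: o(tau) = tanh(U x(tau) + V o(tau-1))."""
    o, os = np.zeros(U.shape[0]), []
    for x in xs:
        o = np.tanh(U @ x + V @ o)
        os.append(o)
    return os

def bptt_update(U, V, xs, os, d_t, eta=0.05):
    """Propagate the error e(t) from the cost C = 1/2 (d(t) - o(t))^2
    back over the k unrolled steps t-k+1..t and apply the gradient
    descent updates Delta-U = -eta dC/dU and Delta-V = -eta dC/dV."""
    dU, dV = np.zeros_like(U), np.zeros_like(V)
    e = (d_t - os[-1]) * (1.0 - os[-1] ** 2)   # e(t) at the last step
    for step in range(len(xs) - 1, -1, -1):
        o_prev = os[step - 1] if step > 0 else np.zeros_like(os[0])
        dU += np.outer(e, xs[step])            # gradient term for U
        dV += np.outer(e, o_prev)              # gradient term for V
        e = (V.T @ e) * (1.0 - o_prev ** 2)    # error at previous step
    return U + eta * dU, V + eta * dV

rng = np.random.default_rng(1)
U, V = 0.1 * rng.normal(size=(2, 3)), 0.1 * rng.normal(size=(2, 2))
xs = [rng.normal(size=3) for _ in range(4)]    # k = 4 time steps
d_t = np.array([0.3, -0.2])                    # desired output d(t)

os = forward(U, V, xs)
err_before = 0.5 * np.sum((d_t - os[-1]) ** 2)
U2, V2 = bptt_update(U, V, xs, os, d_t)
err_after = 0.5 * np.sum((d_t - forward(U2, V2, xs)[-1]) ** 2)
```

One update step along the negative gradient reduces the output error, consistent with the gradient descent direction of Equations (7) and (8).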

Referring to FIGS. 4 to 6, the deep learning neural network according to one embodiment of the present invention is formed of two layers. However, the present invention is not limited thereto; in other embodiments, the deep learning neural network may be formed of three layers, four layers, or more, through which the deep learning neural network generation module 130 can form a deeper neural network.

Hereinafter, the operation flow of the present invention will be briefly described based on the details described above.

FIG. 7 is a flowchart illustrating a UAV flight control method using deep learning according to an embodiment of the present invention.

Referring to FIG. 7, in step S710, the UAV flight control system 100 using deep learning according to an embodiment of the present invention may acquire the current posture of the UAV from the posture prediction module 110.

In step S710, the current position and the current posture of the UAV, calculated on the basis of an extended Kalman filter, may be obtained from the posture prediction module 110. The current position may include information such as the X, Y, and Z coordinates, and the current posture may include information such as the roll, pitch, and yaw angles.

Next, in step S720, the UAV flight control system 100 using deep learning according to an embodiment of the present invention may acquire a target value including the target posture and the magnitude of the target thrust of the UAV from the position control module 120.

Next, in step S730, the UAV flight control system 100 using deep learning according to an embodiment of the present invention may form a deep learning neural network by means of the deep learning neural network generation module 130, based on the current posture and the target value.

In step S730, the deep learning neural network generation module 130 may perform the learning of the deep learning neural network based on the current posture and the target value obtained in a state where the UAV is hovering normally in conjunction with the position control module 120.

In step S730, the deep learning neural network generation module 130 may perform the learning by setting the current posture acquired in the state where the UAV is hovering normally as the input of the deep learning neural network, and setting the target value obtained in that state as the output of the deep learning neural network.

In step S730, the deep learning neural network generation module 130 may calculate the target value at the first time as the output of the deep learning neural network, based on the current posture obtained at the first time (e.g., time t) and the target value obtained at the second time (e.g., time t-1), using the weights and the activation function of the neural network.

In addition, in step S730, the deep learning neural network generation module 130 may update the weights based on the output error of the deep learning neural network calculated using the cost function. At this time, the weights may be updated based on gradient descent so that the output error is minimized.

Next, in step S740, the UAV flight control system 100 using deep learning according to an embodiment of the present invention may control the posture of the UAV by means of the posture control module 140.

In step S740, when the position control module 120 operates abnormally, the posture control module 140 may control the UAV to take a hovering posture based on the current posture output from the posture prediction module 110 and the corresponding target value output of the learned deep learning neural network.

In step S740, if the position control module 120 operates normally (i.e., in the case of FIG. 2), the posture control module 140 may control the posture of the UAV using the current position and the current posture of the UAV predicted based on the extended Kalman filter; when the position control module 120 operates abnormally (i.e., in the case of FIG. 3), the posture may be controlled using the current posture predicted based on the extended Kalman filter.
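The branch in step S740 can be sketched as follows (a minimal illustration; the function and argument names are hypothetical, and the learned network is represented by any callable that maps the current posture to a target value):

```python
def control_attitude(position_ok, ekf_position, ekf_posture, neural_net):
    """Select the control source for step S740: position control based on
    the EKF-predicted position and posture when the position control
    module operates normally, or the learned deep learning neural
    network's target value (hovering posture) when it operates abnormally."""
    if position_ok:
        # normal operation (FIG. 2): position control using EKF estimates
        return {"mode": "position_control",
                "position": ekf_position, "posture": ekf_posture}
    # abnormal operation (FIG. 3, e.g., position information unavailable):
    # the network maps the EKF-predicted posture to a hover target
    return {"mode": "neural_hover", "target": neural_net(ekf_posture)}

normal = control_attitude(True, (1.0, 2.0, 3.0), (0.0, 0.0, 0.1), lambda p: p)
fallback = control_attitude(False, None, (0.0, 0.0, 0.1), lambda p: p)
```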

The UAV flight control method using deep learning according to one embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the present invention or those known and available to those skilled in the art of computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.

It will be understood by those of ordinary skill in the art that the foregoing description of the embodiments is for illustrative purposes, and that the embodiments can easily be modified into other specific forms without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may also be implemented in a combined form.

The scope of the present invention is defined by the appended claims rather than the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be construed as being included within the scope of the present invention.

100: UAV flight control system using deep learning
110: posture prediction module 120: position control module
130: Deep Learning Neural Network Generation Module 140: Posture Control Module

Claims (17)

1. A UAV flight control method using deep learning, the method comprising:
acquiring a current posture of a UAV from a posture prediction module that calculates the current posture and a current position when the UAV is hovering in a state where position information of the UAV can be acquired and a position control module operates normally;
acquiring, from the position control module, a target value including a target posture and a magnitude of a target thrust of the UAV, the target value being calculated, when the UAV is hovering in the state where the position control module operates normally, by taking into consideration the current position of the UAV predicted by the posture prediction module and a set target position and target yaw angle of the UAV;
forming a deep learning neural network by performing learning with the acquired current posture set as an input and the acquired target value set as an output; and
controlling the posture of the UAV based on the deep learning neural network when the position control module operates abnormally because the position information of the UAV cannot be acquired,
wherein the controlling of the posture of the UAV comprises controlling the UAV to take a hovering posture, when the position control module operates abnormally, based on a target value output of the learned deep learning neural network corresponding to the current posture output from the posture prediction module, and
wherein the forming of the deep learning neural network comprises performing the learning by calculating a target value at a first time as an output of the deep learning neural network, based on the current posture obtained at the first time and the target value obtained at a second time before the first time, when the UAV is hovering in the state where the position control module operates normally.
2. (Deleted)

3. (Deleted)

4. The method according to claim 1, wherein the controlling of the posture of the UAV comprises:
controlling the posture using the current position and the current posture of the UAV predicted based on an extended Kalman filter when the position control module operates normally; and
controlling the posture using the current posture predicted based on the extended Kalman filter when the position control module operates abnormally.
5. (Deleted)

6. The method according to claim 1, wherein the forming of the deep learning neural network comprises calculating the target value at the first time based on a weight and an activation function for each of the current posture obtained at the first time and the target value obtained at the second time.
7. The method according to claim 6, wherein the forming of the deep learning neural network further comprises updating the weight based on an output error of the deep learning neural network calculated using a cost function.
8. The method according to claim 7, wherein the updating of the weight comprises updating the weight based on a gradient descent such that the output error is minimized.
9. A UAV flight control system using deep learning, the system comprising:
a posture prediction module configured to calculate a current posture and a current position of a UAV when the UAV is hovering in a state where position information of the UAV can be acquired and a position control module operates normally;
the position control module configured to calculate, when the UAV is hovering in the state where the position control module operates normally, a target value including a target posture and a magnitude of a target thrust of the UAV by taking into consideration the current position of the UAV predicted by the posture prediction module and a set target position and target yaw angle of the UAV;
a deep learning neural network generation module configured to form a deep learning neural network by performing learning with the acquired current posture set as an input and the acquired target value set as an output; and
a posture control module configured to control the posture of the UAV based on the deep learning neural network when the position control module operates abnormally because the position information of the UAV cannot be acquired,
wherein the posture control module controls the UAV to take a hovering posture, when the position control module operates abnormally, based on a target value output of the learned deep learning neural network corresponding to the current posture output from the posture prediction module, and
wherein the deep learning neural network generation module performs the learning by calculating a target value at a first time as an output of the deep learning neural network, based on the current posture obtained at the first time and the target value obtained at a second time before the first time, when the UAV is hovering while the position control module operates normally.
10. (Deleted)

11. (Deleted)

12. The system according to claim 9, wherein the posture control module controls the posture using the current position and the current posture of the UAV predicted based on an extended Kalman filter when the position control module operates normally, and controls the posture using the current posture predicted based on the extended Kalman filter when the position control module operates abnormally.
13. (Deleted)

14. The system according to claim 9, wherein the deep learning neural network generation module calculates the target value at the first time based on a weight and an activation function for each of the current posture obtained at the first time and the target value obtained at the second time.
15. The system according to claim 14, wherein the deep learning neural network generation module updates the weight based on an output error of the deep learning neural network calculated using a cost function.
16. The system according to claim 15, wherein the deep learning neural network generation module updates the weight based on a gradient descent such that the output error is minimized.
17. A computer-readable recording medium having recorded thereon a program for executing the method according to any one of claims 1, 4, and 6 to 8.
KR1020150183932A 2015-12-22 2015-12-22 Unmanned aerial vehicle flight control system and method using deep learning KR101813697B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150183932A KR101813697B1 (en) 2015-12-22 2015-12-22 Unmanned aerial vehicle flight control system and method using deep learning


Publications (2)

Publication Number Publication Date
KR20170074539A KR20170074539A (en) 2017-06-30
KR101813697B1 true KR101813697B1 (en) 2017-12-29

Family

ID=59279485

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150183932A KR101813697B1 (en) 2015-12-22 2015-12-22 Unmanned aerial vehicle flight control system and method using deep learning

Country Status (1)

Country Link
KR (1) KR101813697B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101981624B1 (en) * 2018-10-16 2019-05-23 엘아이지넥스원 주식회사 Low-observable target detection apparatus using artificial intelligence based on big data and method thereof
CN110488861A (en) * 2019-07-30 2019-11-22 北京邮电大学 Unmanned plane track optimizing method, device and unmanned plane based on deeply study
KR102194238B1 (en) 2020-06-24 2020-12-22 국방과학연구소 System and method for estimating a danger of aircraft loss of stability
KR20210039156A (en) * 2019-10-01 2021-04-09 주식회사 엘지유플러스 Unmanned aerial apparatus and operating method thereof

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102021384B1 (en) * 2017-11-14 2019-09-20 주식회사 셈웨어 Method for creating dynamic modeling of drone based on artificial intelligence
KR102067997B1 (en) * 2017-12-18 2020-01-20 한밭대학교 산학협력단 Apparatus and method for wireless location using deep learning
CN108216233B (en) * 2017-12-28 2019-10-15 北京经纬恒润科技有限公司 A kind of scaling method and device of self-adaption cruise system control parameter
KR102592830B1 (en) 2018-12-05 2023-10-23 현대자동차주식회사 Apparatus and method for predicting sensor fusion target in vehicle and vehicle including the same
US20200301446A1 (en) * 2019-02-25 2020-09-24 Aero Knowhow Limited Tilt-Wing Aircraft
CN110806756B (en) * 2019-09-10 2022-08-02 西北工业大学 Unmanned aerial vehicle autonomous guidance control method based on DDPG
CN110879602B (en) * 2019-12-06 2023-04-28 安阳全丰航空植保科技股份有限公司 Unmanned aerial vehicle control law parameter adjusting method and system based on deep learning
CN114684293B (en) * 2020-12-28 2023-07-25 成都启源西普科技有限公司 Robot walking simulation algorithm
KR102549001B1 (en) * 2021-08-19 2023-06-27 한국로봇융합연구원 An apparatus of structuring input data for a deep neural network that estimates the pose from accelerometers and gyroscopes sensor information and a method thereof
CN116880538B (en) * 2023-09-06 2024-01-09 杭州牧星科技有限公司 High subsonic unmanned plane large maneuvering flight control system and method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006312344A (en) * 2005-05-06 2006-11-16 Kenzo Nonami Autonomous flight control device and autonomous flight control method for small size unmanned helicopter


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101981624B1 (en) * 2018-10-16 2019-05-23 엘아이지넥스원 주식회사 Low-observable target detection apparatus using artificial intelligence based on big data and method thereof
CN110488861A (en) * 2019-07-30 2019-11-22 北京邮电大学 Unmanned plane track optimizing method, device and unmanned plane based on deeply study
KR20210039156A (en) * 2019-10-01 2021-04-09 주식회사 엘지유플러스 Unmanned aerial apparatus and operating method thereof
KR102309042B1 (en) * 2019-10-01 2021-10-05 주식회사 엘지유플러스 Unmanned aerial apparatus and operating method thereof
KR102194238B1 (en) 2020-06-24 2020-12-22 국방과학연구소 System and method for estimating a danger of aircraft loss of stability

Also Published As

Publication number Publication date
KR20170074539A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
KR101813697B1 (en) Unmanned aerial vehicle flight control system and method using deep learning
JP6739463B2 (en) Multi-sensor fusion for stable autonomous flight in indoor and outdoor environments with a rotary wing miniature vehicle (MAV)
Shen et al. Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV
Wu Coordinated path planning for an unmanned aerial-aquatic vehicle (UAAV) and an autonomous underwater vehicle (AUV) in an underwater target strike mission
EP3158417B1 (en) Sensor fusion using inertial and image sensors
US11693373B2 (en) Systems and methods for robust learning-based control during forward and landing flight under uncertain conditions
CN108919640B (en) Method for realizing self-adaptive multi-target tracking of unmanned aerial vehicle
JP6884685B2 (en) Control devices, unmanned systems, control methods and programs
WO2016187760A1 (en) Sensor fusion using inertial and image sensors
WO2016187757A1 (en) Sensor fusion using inertial and image sensors
Ludington et al. Augmenting UAV autonomy
EP3734394A1 (en) Sensor fusion using inertial and image sensors
Raja et al. PFIN: An efficient particle filter-based indoor navigation framework for UAVs
Tzoumanikas et al. Fully autonomous micro air vehicle flight and landing on a moving target using visual–inertial estimation and model‐predictive control
JP2007317165A (en) Method, apparatus, and program for planning operation of autonomous mobile robot, method for controlling autonomous mobile robot using method, recording medium thereof, and program for controlling autonomous mobile robot
Shamsudin et al. Identification of an unmanned helicopter system using optimised neural network structure
CN111624875A (en) Visual servo control method and device and unmanned equipment
Outeiro et al. Multiple-model control architecture for a quadrotor with constant unknown mass and inertia
Elisha et al. Active online visual-inertial navigation and sensor calibration via belief space planning and factor graph based incremental smoothing
Watanabe et al. Optimal 3-D guidance from a 2-D vision sensor
Bellini et al. Information driven path planning and control for collaborative aerial robotic sensors using artificial potential functions
Lee et al. Autopilot design for unmanned combat aerial vehicles (UCAVs) via learning-based approach
CN113692560A (en) Control method and device for movable platform, movable platform and storage medium
JP2020107013A (en) Optimal route generation system
Outeiro et al. MMAE/LQR Yaw Control System of a Quadrotor for Constant Unknown Inertia

Legal Events

Date Code Title Description
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant