CN115953700A - Unmanned aerial vehicle task identification method based on fusion of map task elements and state features - Google Patents


Info

Publication number
CN115953700A
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
task
flight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211661737.9A
Other languages
Chinese (zh)
Inventor
张祖耀
殷汶鑫
董佳琦
Current Assignee
Chengdu Furui Kongtian Technology Co ltd
Original Assignee
Chengdu Furui Kongtian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Furui Kongtian Technology Co., Ltd.
Priority to CN202211661737.9A
Publication of CN115953700A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an unmanned aerial vehicle (UAV) task identification method that fuses map task elements with flight-state features. Task-element mapping of the flight map is first completed, the mapping result being the share of each of five task classes in every grid cell. Feature data describing the UAV's flight state are then obtained. For historical data with a recorded task, the task type is used directly as the sample label Y; for historical data without a recorded task type, the feature samples are clustered and the clustering results are mapped to task types. The feature and sample data are fed into a deep-learning neural network for training, and the network structure and parameters are optimized. New feature data are then input to predict the task, the prediction result being the probability of each of the five task classes. Finally, the class shares of the flight-map task elements are fused with the output class probabilities to obtain a fusion probability that links the environmental space with the flight-state features. The invention improves the comprehensiveness, objectivity, and accuracy of task identification for UAVs in the airspace.

Description

Unmanned aerial vehicle task identification method with fusion of map task elements and state features
Technical Field
The invention belongs to the technical field of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle task identification method with fusion of map task elements and state features.
Background
As the UAV industry matures, civil UAVs are no longer barred from controlled airspace. The commercialization of UAVs has brought explosive growth in the types and numbers of civil UAVs. Because different types of UAVs carry distinct task attributes, efficiently and accurately identifying the task a UAV is currently executing in the airspace provides a reference for inferring its behavioral intent and avoiding potential conflict risks, can further improve China's capability to supervise civil UAVs, and offers technical support for rapidly integrating UAVs into the airspace supervision system.
At present, UAV flight-monitoring technology is relatively mature: technical means such as radar and ADS-B detect, receive, and collect flight-state feature data (position coordinates, speed, attitude, timestamps) of both cooperative and non-cooperative UAVs, providing reliable data support for UAV task identification.
Besides the UAV's own flight characteristics, the environmental space around the UAV is also an important reference. UAVs executing different tasks may nevertheless exhibit similar flight characteristics at a given moment, so it is difficult to judge a UAV's current task objectively and accurately from flight-data features alone.
In the prior art, publication CN114169367A, a deep-learning-based method and system for identifying low-altitude, small, slow UAVs, detects and processes radio signals, performs feature extraction with a UAV classification and identification algorithm, and identifies the UAV type using flight-signal frequency-band feature parameters and a deep convolutional neural network. Publication CN110958200A, a UAV identification method based on radio characteristics, builds a UAV signal-feature library, identifies and extracts a UAV's signal features, and finally compares UAV signals against the library to identify the UAV model.
Both inventions classify and identify UAV types and models by processing signal features from the UAV's communications. Although each UAV type has distinctive radio-signal characteristics that allow reliable classification, these methods remain limited for the task-level identification scenario addressed by the present scheme. First, UAVs of the same model may still execute different tasks depending on user needs; moreover, as the civil UAV industry develops, it is increasingly common for multiple UAV types, and even other electronic devices, to share the same signal band, which inevitably causes severe co-channel interference and poses great challenges to hardware design and anti-interference capability. Second, a database matching signal features to models must be established, which means models outside the database cannot be effectively identified, limiting the generalization of the identification results.
The paper "UAV behavior-intention classification method based on appearance characteristics" (Science and Technology Innovation and Application, Vol. 11, No. 14, 2021, pp. 139-142) divides civil UAVs, based on the appearance of their payloads, into five categories (monitoring and surveying, emergency rescue, transportation, agriculture and forestry plant protection, and aerial photography and mapping) and builds a UAV behavior-pattern discrimination database from these appearance features. Such methods usually require photoelectric detection to collect video or image data of the UAV, which is then processed to extract appearance features and judge the UAV type. However, photoelectric detection is affected by objective factors such as the environment, obstacles, and equipment detection capability; the identification error is large, all-weather undifferentiated monitoring of UAVs in the airspace is difficult, and the method has obvious limitations.
Disclosure of Invention
Aiming at problems such as the limited feature dimensions and the lack of multi-dimensional task-identification capability in civil UAV task identification, which lead to poor generalization and low accuracy, the invention proposes a task-oriented multi-feature extraction method and a method for partitioning task elements on the UAV flight map. It integrates two feature dimensions, the UAV's flight state and its environmental space, to achieve effective task-level identification and classification of UAVs.
The specific technical scheme of the invention is as follows:
An unmanned aerial vehicle task identification method fusing map task elements and state features, characterized in that:
First, the monitored area must be equipped with radar and ADS-B devices so that flight-state characteristics of UAVs in the airspace, such as position coordinates, speed, and timestamps, can be effectively captured.
UAV tasks are divided into five classes: terminal logistics, aerial photography and mapping, emergency rescue, agriculture and forestry plant protection, and industrial inspection; feature functions are constructed for the five task classes and the flight-state characteristics. When the maximum of the five task-class recognition probabilities is below 50%, the task is reported as unidentified.
The identification method comprises the following specific steps:
step 1: according to the land utilization property of the flight space of the unmanned aerial vehicle, the task element mapping of the flight map is completed, each grid can have several task attributes at the same time, and the mapping result is the proportion M of 5 types of tasks of each grid k ={m k1 ,m k2 ,m k3 ,m k4 ,m k5 },M k Class 5 task distribution showing the kth grid, { m k1 ,...,m k5 The occupation ratios of 5 types of tasks of terminal logistics, aerial photography surveying and mapping, emergency rescue, agriculture and forestry plant protection and industrial inspection in a task grid are respectively set;
Step 2: Collect the UAV's flight-state historical data and preprocess it to obtain feature data X_i = {x_i1, x_i2, x_i3, x_i4, x_i5}, where X_i denotes one feature vector constructed from the UAV's historical data and x_i1, ..., x_i5 denote the five constructed features;
Step 3: For historical data with a corresponding task, use the task type directly as the sample label Y_i; for historical data without a recorded task type, cluster the feature samples X and map the clustering results to task types, so that every piece of feature data has a corresponding label value;
Step 4: Feed the feature data and sample data into a deep-learning neural network with five input and five output neurons for training, and optimize the network structure and parameters;
Step 5: Using the trained network parameters, input new feature data to predict the task. The prediction result is the probability O_j = {o_j1, o_j2, o_j3, o_j4, o_j5} of the five task classes, where O_j denotes the neural-network prediction vector for UAV j and o_j1, ..., o_j5 denote the predicted probabilities of the five task classes;
Step 6: Fuse the class shares of the flight-map task elements with the output class probabilities to obtain the fusion probability linking the environmental space with the flight-state features:
FP_jk = {α·o_j1 + β·m_k1, α·o_j2 + β·m_k2, α·o_j3 + β·m_k3, α·o_j4 + β·m_k4, α·o_j5 + β·m_k5}
where FP_jk denotes the fusion-probability vector over tasks for UAV j operating over grid k, and α and β are fusion coefficients weighting the environmental space and the flight state.
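For illustration, step 6 reduces to a weighted element-wise combination of the two five-class vectors. A minimal Python sketch, with illustrative probability vectors and assumed fusion coefficients α = 0.6 and β = 0.4 (the patent does not fix their values):

```python
# Hedged sketch of step 6. The vectors and the coefficients alpha/beta
# are illustrative values only; the patent leaves them to be chosen.
def fuse(o_j, m_k, alpha=0.6, beta=0.4):
    """Element-wise fusion FP_jk = alpha*O_j + beta*M_k over the task classes."""
    return [alpha * o + beta * m for o, m in zip(o_j, m_k)]

# Example: the network favors class 2 (aerial photography and mapping),
# and the grid's land use also leans toward that class.
o_j = [0.05, 0.60, 0.10, 0.15, 0.10]   # predicted probabilities O_j
m_k = [0.10, 0.50, 0.10, 0.20, 0.10]   # grid task-element shares M_k
fp = fuse(o_j, m_k)
best = max(range(5), key=lambda c: fp[c])   # index of the fused top class
```

With α + β = 1 and both input vectors normalized, the fused vector again sums to 1 and can be read as a probability distribution over the five task classes.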
The five task classes are respectively:
A. end logistics:
In the terminal-logistics scenario, the origin and destination of current terminal-logistics UAVs are freight warehouses or transfer stations, and given supporting infrastructure the UAV flies en route along the filed route plan;
B. aerial photography and mapping:
Flight is mostly within visual line of sight;
C. emergency rescue:
In the emergency-rescue scenario, the UAV's origin and destination are at emergency-response departments, and it may hover over residential areas;
D. Agriculture and forestry plant protection:
In the agriculture-and-forestry plant-protection scenario, the flight area basically lies within agricultural and forestry areas;
E. industrial inspection:
In the industrial-inspection scenario, the UAV flies along routes and at speeds planned in advance according to the pipeline layout, and the flight area lies where pipeline lines are concentrated.
Further, the following feature functions are constructed from the UAV's flight characteristics in the five task scenarios:
(1) Yaw degree
The yaw degree is the deviation of the UAV's actual track from its filed planned route. From the coordinates of filed planned-route node i and the UAV's current actual coordinates, the distance d_i between the two points is calculated as:
d_i = 1000 · R · arccos(sin y'_i sin y_i + cos y'_i cos y_i cos(x'_i - x_i))
where R is the mean earth radius, 6370 km; (x'_i, y'_i) is the longitude and latitude of the UAV's actual node i, and (x_i, y_i) is that of the filed planned-route node i. A yaw counter C_i is set according to the deviation distance:
C_i = 1 if d_i exceeds a set deviation threshold, otherwise C_i = 0 (the exact piecewise expression is rendered as an image in the original publication).
The overall yaw degree over the UAV's flight is then computed from the counters C_i (the formula is rendered as an image in the original publication).
(2) Horizontal accumulated range
The horizontal accumulated range is the accumulated flight distance along the UAV's track; it is an important reference for the UAV's working area and reflects the coverage of the current task (the formula is rendered as an image in the original publication).
(3) Average hover-time ratio
The average hover-time ratio is the ratio of hover time to total time per flight, averaged over the UAV's flights; it represents the proportion of flight time spent hovering and reflects the UAV's hovering behavior (the formula is rendered as an image in the original publication).
Here th_i denotes the time the UAV is in a hovering state during the i-th flight, and T_i denotes the duration of the i-th flight;
(4) Horizontal distance change rate
The horizontal distance change rate is the speed of the UAV's spatial position change in the horizontal dimension and reflects the horizontal component of its velocity; it is calculated per flight leg i (the formula is rendered as an image in the original publication).
(5) Vertical distance change rate
The vertical distance change rate is the speed of the UAV's spatial position change in the vertical dimension and reflects the vertical component of its velocity; it is calculated per flight leg i (the formula is rendered as an image in the original publication).
Here h'_{i+1} and h'_i denote the UAV's vertical height at node i+1 and node i respectively, and T_i denotes the elapsed time of the i-th leg.
Aiming at problems in the task-identification scenario such as incomplete features, complex identification methods, and low efficiency, the invention innovatively takes both the UAV's flight state and its environmental space as reference factors for task identification and computes a fusion probability, improving the comprehensiveness, objectivity, and accuracy of UAV task identification in the airspace.
Drawings
FIG. 1 is a flight map grid task element map of an embodiment;
fig. 2 is a flow chart of the present invention.
Detailed Description
The specific technical scheme of the invention is explained below with reference to the drawings.
The invention provides an unmanned aerial vehicle task identification method fusing map task elements and state features, as follows:
First, the monitored area must be equipped with radar and ADS-B devices so that flight-state characteristics of UAVs in the air, such as position coordinates, speed, and timestamps, can be effectively captured.
The invention divides UAV tasks into five classes: terminal logistics, aerial photography and mapping, emergency rescue, agriculture and forestry plant protection, and industrial inspection, and constructs feature functions for the five task classes and flight-state characteristics. When the maximum of the five task-class recognition probabilities is below 50%, the task is reported as unidentified.
A. Terminal logistics:
In the terminal-logistics scenario, the UAV's coverage is mostly within 5 km and generally does not exceed 20 km. The origin and destination of current terminal-logistics UAVs are freight warehouses or transfer stations, and given supporting infrastructure the UAV flies en route along the filed route plan.
B. Aerial photography and mapping:
In the aerial-photography-and-mapping scenario, to guarantee shooting angle, clarity, and continuity, the UAV must perform actions such as climbing, descending, and moving with small attitude changes, and even hovering, mainly flying over public scenes such as parks, squares, and roads. Meanwhile, because real-time video and image transmission is demanding, the UAV mostly flies within visual line of sight.
C. Emergency rescue:
In the emergency-rescue scenario, the UAV's origin and destination are at emergency-response departments such as hospitals and fire stations, and it may hover over residential areas.
D. Agriculture and forestry plant protection:
In the agriculture-and-forestry plant-protection scenario, the flight height and route are usually designed in advance to ensure even spraying, and the flight area basically lies within agricultural and forestry areas.
E. Industrial inspection:
In the industrial-inspection scenario, the UAV flies along routes and at speeds planned in advance according to the pipeline layout, and the flight area mostly lies where pipeline lines are concentrated.
The following feature functions are constructed from the UAV's flight characteristics in the five task scenarios:
(1) Yaw degree
The yaw degree is the deviation of the UAV's actual track from its filed planned route. From the coordinates of the filed planned-route node and the UAV's current actual coordinates, the distance d_i between the two points is calculated as:
d_i = 1000 · R · arccos(sin y'_i sin y_i + cos y'_i cos y_i cos(x'_i - x_i))
where R is the mean earth radius, 6370 km; (x'_i, y'_i) is the longitude and latitude of the UAV's actual node i, and (x_i, y_i) is that of the filed planned-route node i. A yaw counter C_i is set according to the deviation distance:
C_i = 1 if d_i exceeds a set deviation threshold, otherwise C_i = 0 (the exact piecewise expression is rendered as an image in the original publication).
The overall yaw degree over the UAV's flight is then computed from the counters C_i (the formula is rendered as an image in the original publication).
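For illustration, the yaw-degree feature can be sketched in Python. The great-circle distance follows the formula above; the counter's exact piecewise form and its threshold are rendered only as images in the patent, so the indicator form and the 100 m value below are assumptions, and the overall yaw degree is taken here as the fraction of deviated nodes:

```python
import math

R_EARTH_KM = 6370.0  # mean earth radius used in the patent's distance formula

def node_distance_m(actual, planned):
    """d_i = 1000*R*arccos(sin y' sin y + cos y' cos y cos(x' - x)),
    with (x, y) = (longitude, latitude) in radians; result in metres."""
    (x_a, y_a), (x_p, y_p) = actual, planned
    cos_d = (math.sin(y_a) * math.sin(y_p)
             + math.cos(y_a) * math.cos(y_p) * math.cos(x_a - x_p))
    cos_d = max(-1.0, min(1.0, cos_d))  # guard against rounding drift
    return 1000.0 * R_EARTH_KM * math.acos(cos_d)

def yaw_degree(actual_nodes, planned_nodes, threshold_m=100.0):
    """Fraction of route nodes whose deviation d_i exceeds threshold_m.
    Both the indicator counter C_i and the 100 m threshold are assumptions."""
    counters = [1 if node_distance_m(a, p) > threshold_m else 0
                for a, p in zip(actual_nodes, planned_nodes)]
    return sum(counters) / len(counters)
```

A one-degree latitude offset yields roughly 111 km with this earth radius, so such a node counts as deviated under any reasonable threshold.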
(2) Horizontal accumulated range
The horizontal accumulated range is the accumulated flight distance along the UAV's track; it is an important reference for the UAV's working area and reflects the coverage of the current task (the formula is rendered as an image in the original publication).
(3) Average hover-time ratio
The average hover-time ratio is the ratio of hover time to total time per flight, averaged over the UAV's flights; it represents the proportion of flight time spent hovering and reflects the UAV's hovering behavior (the formula is rendered as an image in the original publication).
Here th_i denotes the time the UAV hovers (i.e., altitude unchanged) during the i-th flight, and T_i denotes the elapsed time of the i-th flight.
(4) Horizontal distance change rate
The horizontal distance change rate is the speed of the UAV's spatial position change in the horizontal dimension and reflects the horizontal component of its velocity; it is calculated per flight leg i (the formula is rendered as an image in the original publication).
(5) Vertical distance change rate
The vertical distance change rate is the speed of the UAV's spatial position change in the vertical dimension and reflects the vertical component of its velocity; it is calculated per flight leg i (the formula is rendered as an image in the original publication).
Here h'_{i+1} and h'_i denote the UAV's vertical height at node i+1 and node i respectively, and T_i denotes the elapsed time of the i-th leg.
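For illustration, feature functions (2) through (5) can be sketched from a track of (x, y, h, t) nodes, with planar coordinates and altitude in metres and timestamps in seconds. The exact formulas are rendered only as images in the patent, so these reconstructions from the prose descriptions, including the 1 m hover tolerance, are assumptions:

```python
import math

def horizontal_leg(a, b):
    """Horizontal distance of one leg between nodes a and b."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def accumulated_range_m(track):
    # (2) Horizontal accumulated range: sum of per-leg horizontal distances.
    return sum(horizontal_leg(a, b) for a, b in zip(track, track[1:]))

def hover_time_ratio(track, tol_m=1.0):
    # (3) Hover-time ratio: hover time th (position and altitude nearly
    # unchanged; the tolerance tol_m is an assumption) over total time T.
    total = track[-1][3] - track[0][3]
    hover = sum(b[3] - a[3] for a, b in zip(track, track[1:])
                if horizontal_leg(a, b) <= tol_m and abs(b[2] - a[2]) <= tol_m)
    return hover / total if total > 0 else 0.0

def horizontal_rate(a, b):
    # (4) Horizontal distance change rate of one leg: d_i / T_i.
    return horizontal_leg(a, b) / (b[3] - a[3])

def vertical_rate(a, b):
    # (5) Vertical distance change rate of one leg: (h'_{i+1} - h'_i) / T_i.
    return (b[2] - a[2]) / (b[3] - a[3])
```

These per-leg and per-track quantities, together with the yaw degree, would form the five-element feature vector X_i of step 2.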
Fig. 2 is a flow chart of the entire technical solution. The technical scheme comprises the following specific steps:
step 1: according to the land utilization property of the unmanned aerial vehicle flight space, task element mapping of a flight map is completed, each grid may have several task attributes at the same time, and the mapping result is the ratio M of 5 types of tasks of each grid k ={m k1 ,m k2 ,m k3 ,m k4 ,m k5 },M k Class 5 task distribution indicating the kth grid, { m } k1 ,...,m k5 The occupation ratios of 5 types of tasks of terminal logistics, aerial photography surveying and mapping, emergency rescue, agriculture and forestry plant protection and industrial inspection in a task grid are respectively set;
Step 2: Collect the UAV's flight-state historical data and preprocess it to obtain feature data X_i = {x_i1, x_i2, x_i3, x_i4, x_i5}, where X_i denotes one feature vector constructed from the UAV's historical data and x_i1, ..., x_i5 denote the five constructed features;
Step 3: For historical data with a corresponding task, use the task type directly as the sample label Y_i; for historical data without a recorded task type, cluster the feature samples X and map the clustering results to task types, so that every piece of feature data has a corresponding label value;
Step 4: Feed the feature data and sample data into a deep-learning neural network with five input and five output neurons for training, and optimize the network structure and parameters;
Step 5: Using the trained network parameters, input new feature data to predict the task. The prediction result is the probability O_j = {o_j1, o_j2, o_j3, o_j4, o_j5} of the five task classes, where O_j denotes the neural-network prediction vector for UAV j and o_j1, ..., o_j5 denote the predicted probabilities of the five task classes;
Step 6: Fuse the class shares of the flight-map task elements with the output class probabilities to obtain the fusion probability linking the environmental space with the flight-state features:
FP_jk = {α·o_j1 + β·m_k1, α·o_j2 + β·m_k2, α·o_j3 + β·m_k3, α·o_j4 + β·m_k4, α·o_j5 + β·m_k5}
where FP_jk denotes the fusion-probability vector over tasks for UAV j operating over grid k, and α and β are fusion coefficients weighting the environmental space and the flight state.
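For illustration, steps 3 through 6 can be strung together in Python. The patent's clustering step and five-in/five-out deep network are replaced here by a nearest-centroid stand-in with a softmax over negative distances, purely to keep the pipeline runnable without ML libraries; all sample data, class indices, and fusion coefficients are illustrative:

```python
# Hedged end-to-end sketch of steps 3-6. CentroidTaskModel is a stand-in
# for the patent's deep network, not the patented method itself.
import math

def centroid(rows):
    """Mean feature vector of a list of equal-length feature rows."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def softmax(scores):
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class CentroidTaskModel:
    """Illustrative stand-in for steps 3-5 (pseudo-labeling and prediction)."""

    def fit(self, X, y):
        # Group labeled feature vectors by task class and store class centroids.
        self.centroids = {c: centroid([x for x, yc in zip(X, y) if yc == c])
                          for c in sorted(set(y))}
        return self

    def pseudo_label(self, x):
        # Step 3 stand-in: map an unlabeled sample to the nearest task class.
        return min(self.centroids, key=lambda c: math.dist(x, self.centroids[c]))

    def predict_proba(self, x):
        # Step 5 stand-in: class probabilities O_j via softmax of -distance.
        return softmax([-math.dist(x, self.centroids[c])
                        for c in sorted(self.centroids)])

def fuse(o_j, m_k, alpha=0.6, beta=0.4):
    # Step 6: FP_jk = alpha*O_j + beta*M_k, element-wise over the classes.
    return [alpha * o + beta * m for o, m in zip(o_j, m_k)]
```

In the patent's full method, step 4 would instead train a deep network on the labeled and pseudo-labeled features, and step 5 would take its softmax output as O_j; the fusion of step 6 is unchanged.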

Claims (3)

1. An unmanned aerial vehicle task identification method based on fusion of map task elements and state features, characterized in that:
first, the monitored area must be equipped with radar and ADS-B devices so that flight-state characteristics of UAVs in the airspace, such as position coordinates, speed, and timestamps, can be effectively captured;
UAV tasks are divided into five classes of terminal logistics, aerial photography and mapping, emergency rescue, agriculture and forestry plant protection, and industrial inspection, and feature functions are constructed for the five task classes and flight-state characteristics; when the maximum of the five task-class recognition probabilities is below 50%, the task is reported as unidentified;
the identification method comprises the following specific steps:
step 1: according to the land utilization property of the flight space of the unmanned aerial vehicle, the task element mapping of the flight map is completed, each grid can have several task attributes at the same time, and the mapping result is the proportion M of 5 types of tasks of each grid k ={m k1 ,m k2 ,m k3 ,m k4 ,m k5 },M k Class 5 task distribution showing the kth grid, { m k1 ,...,m k5 The occupation ratios of 5 types of tasks of terminal logistics, aerial photography surveying and mapping, emergency rescue, agriculture and forestry plant protection and industrial inspection in a task grid are respectively set;
step 2: collecting historical data of flight state of unmanned aerial vehicle, preprocessing and obtaining characteristic dataX i ={x i1 ,x i2 ,x i3 ,x i4 ,x i5 },X i Some feature vector, { x, { representing a structure from historical data of drones i1 ,...,x i5 Denotes 5 constructional features, respectively;
step 3: for historical data with a corresponding task, use the task type directly as the sample label Y_i; for historical data without a recorded task type, cluster the feature samples X and map the clustering results to task types, ensuring every piece of feature data has a corresponding label value;
step 4: feed the feature data and sample data into a deep-learning neural network with five input and five output neurons for training, and optimize the network structure and parameters;
step 5: using the trained network parameters, input new feature data to predict the task, the prediction result being the probability O_j = {o_j1, o_j2, o_j3, o_j4, o_j5} of the five task classes, where O_j denotes the neural-network prediction vector for UAV j and o_j1, ..., o_j5 denote the predicted probabilities of the five task classes;
step 6: fuse the class shares of the flight-map task elements with the output class probabilities to obtain the fusion probability linking the environmental space with the flight-state features:
FP_jk = {α·o_j1 + β·m_k1, α·o_j2 + β·m_k2, α·o_j3 + β·m_k3, α·o_j4 + β·m_k4, α·o_j5 + β·m_k5}
where FP_jk denotes the fusion-probability vector over tasks for UAV j operating over grid k, and α and β are fusion coefficients weighting the environmental space and the flight state.
2. The unmanned aerial vehicle task identification method based on fusion of map task elements and state features according to claim 1, wherein the five task classes are respectively:
A. end logistics:
in the terminal-logistics scenario, the origin and destination of current terminal-logistics UAVs are freight warehouses or transfer stations, and given supporting infrastructure the UAV flies en route along the filed route plan;
B. aerial photography and mapping:
flight is mostly within visual line of sight;
C. emergency rescue:
in the emergency-rescue scenario, the UAV's origin and destination are at emergency-response departments, and it may hover over residential areas;
D. and (3) agriculture and forestry plant protection:
in the agriculture-and-forestry plant-protection scenario, the flight area basically lies within agricultural and forestry areas;
E. industrial inspection:
in the industrial-inspection scenario, the UAV flies along routes and at speeds planned in advance according to the pipeline layout, and the flight area lies where pipeline lines are concentrated.
3. The unmanned aerial vehicle task identification method based on fusion of map task elements and state features according to claim 2, wherein the following feature functions are constructed from the UAV's flight characteristics in the five task scenarios:
(1) Yaw degree
The yaw degree is the deviation of the UAV's actual track from its filed planned route. From the coordinates of filed planned-route node i and the UAV's current actual coordinates, the distance d_i between the two points is calculated as:
d_i = 1000 · R · arccos(sin y'_i sin y_i + cos y'_i cos y_i cos(x'_i - x_i))
where R is the mean earth radius, 6370 km; (x'_i, y'_i) is the longitude and latitude of the UAV's actual node i, and (x_i, y_i) is that of the filed planned-route node i; a yaw counter C_i is set according to the deviation distance:
C_i = 1 if d_i exceeds a set deviation threshold, otherwise C_i = 0 (the exact piecewise expression is rendered as an image in the original publication).
The overall yaw degree over the unmanned aerial vehicle's flight range is then computed as:

[Formula image: overall yaw degree, aggregated from the yaw counters C_i; not reproduced in the text]
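For illustration, the node-distance formula above can be evaluated directly. The following is a minimal Python sketch, not part of the claim: the function name and the degree-input convention are assumptions, and R is taken as 6370 km as stated in the claim.

```python
import math

EARTH_RADIUS_KM = 6370.0  # mean Earth radius R, as stated in the claim

def node_distance_m(actual, planned):
    """Distance d_i (in metres) between the actual UAV node (x_i', y_i')
    and the planned route node (x_i, y_i), via the claim's spherical law
    of cosines.  Inputs are (longitude, latitude) pairs in degrees."""
    x_a, y_a = map(math.radians, actual)
    x_p, y_p = map(math.radians, planned)
    cos_d = (math.sin(y_a) * math.sin(y_p)
             + math.cos(y_a) * math.cos(y_p) * math.cos(x_a - x_p))
    cos_d = max(-1.0, min(1.0, cos_d))  # clamp floating-point rounding error
    return 1000.0 * EARTH_RADIUS_KM * math.acos(cos_d)
```

For example, two nodes one degree of longitude apart on the equator come out at roughly 111.2 km, consistent with this choice of radius.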
(2) Horizontal accumulated voyage
The horizontal accumulated range is the cumulative flight distance along the unmanned aerial vehicle's track; it is an important indicator of the vehicle's working area and reflects the coverage of the current task:

[Formula image: horizontal accumulated range, summed over track segments; not reproduced in the text]
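Since the formula image is not reproduced, the following Python sketch shows one plausible reading of the horizontal accumulated range: the sum of great-circle distances between consecutive track nodes. The function name and input layout are assumptions.

```python
import math

R_M = 6370_000.0  # mean Earth radius in metres, per the claim

def cumulative_range_m(track):
    """Horizontal accumulated range: summed great-circle distance over
    consecutive nodes of the track [(lon, lat), ...] given in degrees.
    One plausible reading of the claim's (unreproduced) formula."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(track, track[1:]):
        la1, la2 = math.radians(y1), math.radians(y2)
        cos_d = (math.sin(la1) * math.sin(la2)
                 + math.cos(la1) * math.cos(la2)
                 * math.cos(math.radians(x2 - x1)))
        total += R_M * math.acos(max(-1.0, min(1.0, cos_d)))
    return total
```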
(3) Average hover time ratio
The average hovering time ratio is the ratio of hovering time to total time, averaged over the unmanned aerial vehicle's flight legs; it represents the proportion of flight time spent in the hovering state and reflects the hovering behaviour of the unmanned aerial vehicle:

[Formula image: average hovering time ratio; not reproduced in the text]

t_h,i denotes the time the unmanned aerial vehicle spends hovering during the i-th leg, and T_i denotes the total time of the i-th leg;
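A direct reading of this definition can be sketched as follows (illustrative Python; the function and parameter names are assumptions):

```python
def mean_hover_ratio(hover_times, leg_times):
    """Average hovering time ratio: the mean over flight legs of
    t_h,i / T_i, where t_h,i is the hovering time and T_i the total
    time of the i-th leg.  Legs with non-positive duration are skipped."""
    ratios = [t_h / T for t_h, T in zip(hover_times, leg_times) if T > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0
```

With hovering times of 10 s and 0 s over legs of 100 s and 50 s, the ratio is (0.1 + 0.0) / 2 = 0.05.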
(4) Rate of change of horizontal distance
The horizontal distance change rate is the speed at which the spatial position of the unmanned aerial vehicle changes in the horizontal dimension, i.e. the horizontal component of its velocity. For the i-th leg it is computed as:

[Formula image: horizontal distance change rate of the i-th leg; not reproduced in the text]
(5) Rate of change of vertical distance
The vertical distance change rate is the speed at which the spatial position of the unmanned aerial vehicle changes in the vertical dimension, i.e. the vertical component of its velocity. For the i-th leg it is computed as:

[Formula image: vertical distance change rate of the i-th leg; not reproduced in the text]

h′_{i+1} and h′_i denote the vertical heights of the unmanned aerial vehicle at nodes i+1 and i, respectively; T_i denotes the elapsed time of the i-th leg.
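The two rate features can be sketched together in Python. Since the formula images are not reproduced, this is one plausible reading, not the claimed implementation: horizontal great-circle displacement between consecutive nodes divided by T_i, and the absolute height change divided by T_i. Names and the absolute-value convention are assumptions.

```python
import math

R_M = 6370_000.0  # mean Earth radius in metres, per the claim

def leg_rates(p_i, p_next, h_i, h_next, T_i):
    """Horizontal and vertical distance change rates of the i-th leg.
    p_i/p_next are (lon, lat) pairs in degrees, h_i/h_next are vertical
    heights h'_i and h'_{i+1} in metres, T_i is the leg time in seconds.
    Returns (v_horizontal, v_vertical) in m/s."""
    la1, la2 = math.radians(p_i[1]), math.radians(p_next[1])
    cos_d = (math.sin(la1) * math.sin(la2)
             + math.cos(la1) * math.cos(la2)
             * math.cos(math.radians(p_next[0] - p_i[0])))
    v_h = R_M * math.acos(max(-1.0, min(1.0, cos_d))) / T_i
    v_z = abs(h_next - h_i) / T_i
    return v_h, v_z
```

A purely vertical leg (same horizontal node, height 0 m to 50 m over 10 s) yields a horizontal rate of 0 m/s and a vertical rate of 5 m/s.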
CN202211661737.9A 2022-12-23 2022-12-23 Unmanned aerial vehicle task identification method based on fusion of map task elements and state features Pending CN115953700A (en)


Publications (1)

Publication Number Publication Date
CN115953700A 2023-04-11

Family

ID=87290231



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination