CN112541608A - Unmanned aerial vehicle takeoff point prediction method and device - Google Patents

Unmanned aerial vehicle takeoff point prediction method and device

Info

Publication number: CN112541608A (application CN202010101564.XA; granted as CN112541608B)
Authority: CN (China)
Prior art keywords: point, position point, flying, data set, prediction model
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 陈杰, 李坚强, 程艳燕
Current assignee: Shenzhen Zhongke Baotai Aerospace Technology Co ltd
Original assignee: Shenzhen Zhongke Baotai Technology Co ltd
Application filed by Shenzhen Zhongke Baotai Technology Co ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The embodiments of the application belong to the technical field of unmanned aerial vehicles and disclose a method and a device for predicting the takeoff point of an unmanned aerial vehicle, wherein the method comprises the following steps: acquiring geographic position information of a position point to be predicted; and inputting the geographic position information of the position point to be predicted into a pre-trained flying point prediction model to obtain an output result of the flying point prediction model, wherein the output result represents whether the position point to be predicted can be used as the flying point of the unmanned aerial vehicle. According to the embodiments of the application, the geographic position information of the position point to be predicted is acquired and then input into the trained flying point prediction model, and the flying point prediction model can judge, according to the input geographic position information, whether the position point can be used as the flying point of the unmanned aerial vehicle, thereby reducing the labor cost and time cost of determining the flying point of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle takeoff point prediction method and device
Technical Field
The application belongs to the technical field of unmanned aerial vehicles, and particularly relates to a method and a device for predicting a flying point of an unmanned aerial vehicle.
Background
With the continuous development of unmanned aerial vehicle technology, the applications of unmanned aerial vehicles are becoming more and more extensive.
When the unmanned aerial vehicle takes off, a suitable takeoff point needs to be determined first, and then the unmanned aerial vehicle takes off from that point. At present, a takeoff point suitable for the unmanned aerial vehicle is generally found manually, and the specific process includes: manually searching for an open area and detecting the satellite positioning signal strength of each position point in the open area by using the unmanned aerial vehicle; and then selecting a takeoff point according to the satellite positioning signal strength: if the satellite positioning signal strength is strong, the takeoff environment is good and the position point is suitable for the unmanned aerial vehicle to take off; conversely, if the satellite positioning signal strength is weak, the position point is not suitable for taking off. When no position point in one area is suitable for taking off, the next area needs to be searched, and the operation is repeated until a suitable takeoff point is found.
As can be seen from the above, the existing way of determining an unmanned aerial vehicle flying point consumes a large amount of labor cost and time cost.
Disclosure of Invention
The embodiment of the application provides an unmanned aerial vehicle takeoff point prediction method and device, and aims to solve the problem that a large amount of labor cost and time cost are consumed in an existing unmanned aerial vehicle takeoff point determination mode.
In a first aspect, an embodiment of the present application provides an unmanned aerial vehicle departure point prediction method, including:
acquiring geographical position information of a position point to be predicted;
and inputting the geographic position information of the position point to be predicted into a pre-trained flying point prediction model to obtain an output result of the flying point prediction model, wherein the output result represents whether the position point to be predicted can be used as the flying point of the unmanned aerial vehicle.
According to the embodiment of the application, the geographical position information of the position point is obtained, and then the information is input into the trained flying point prediction model, and the flying point prediction model can determine whether the position point can be used as the flying point of the unmanned aerial vehicle according to the input geographical position information, so that the labor cost and the time cost in the process of determining the flying point of the unmanned aerial vehicle are reduced.
In a possible implementation manner of the first aspect, before obtaining the geographic location information of the location point to be predicted, the method further includes:
acquiring an unlabeled data set, wherein the unlabeled data set comprises geographic position information of at least one position point;
randomly selecting at least one position point from the unmarked data set, and taking the geographical position information of the selected position point as a support set;
and training a pre-constructed flying point prediction model based on the unmarked data set and the support set.
In a possible implementation manner of the first aspect, training a pre-constructed flying point prediction model based on the unlabeled data set and the support set includes:
a manual labeling step: labeling the unlabeled data set in an active learning manner to obtain a manually labeled data set, wherein the manually labeled data set comprises geographic position information of a position point, a picture of the position point, satellite positioning signal strength of the position point, and a label of the position point, the label of the position point representing whether the position point can be used as an unmanned aerial vehicle flying point;
training: training a pre-constructed flying point prediction model by using the artificially labeled data set according to the support set and the predetermined hyper-parameters, and calculating a loss value;
a judging step: if the loss value does not tend to be stable, after updating the hyper-parameter by using a gradient descent algorithm, circularly executing the manual labeling step, the training step and the judging step until the loss value tends to be stable; and if the loss value tends to be stable, obtaining a trained flying point prediction model.
In a possible implementation manner of the first aspect, labeling the unlabeled data set in an active learning manner to obtain a manually labeled data set includes:
selecting a target position point from the unmarked data set according to the uncertainty of each position point;
acquiring a picture of the target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as an unmanned aerial vehicle flying point;
and forming the artificially labeled data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point and the label of the target position point.
In a possible implementation manner of the first aspect, selecting a target location point from the unlabeled dataset according to uncertainty of each location point includes:
calculating the conditional entropy of each position point in the unmarked data set;
calculating the expected entropy of each position point according to the conditional entropy;
and selecting a target position point according to the expected entropy.
In a possible implementation manner of the first aspect, calculating an expected entropy of each location point according to the conditional entropy includes:
calculating the expected entropy of each of the location points by

E_{y_x^(H)}[ H[y_z^(H) | y_t, y_x^(H)] ]

wherein x represents a location point in the region R_t that has not been manually labeled but is a candidate for manual labeling; E represents expectation; R_t represents the region of interest; z represents a location point in the region R_t that is assumed to be selected and manually labeled; y_z^(H) indicates whether the location point z can be used as a flying point; y_x^(H) indicates whether the location point x can be used as a flying point; y_t represents the training data; H[y_z^(H) | y_t, y_x^(H)] represents the conditional entropy,

H[y_z^(H) | y_t, y_x^(H)] = - Σ_{y_z^(H)} p(y_z^(H) | y_t, y_x^(H)) log p(y_z^(H) | y_t, y_x^(H))

wherein y_z^(H) indicates whether the acquired location point z can be used as a flying point; x represents a non-acquired location point within the region R_t; y_x^(H) indicates whether the non-acquired location point x can be used as a flying point; y_t represents the training data.
In a possible implementation manner of the first aspect, training a pre-constructed flying point prediction model by using the artificially labeled data set according to the support set and the hyperparameters, and calculating a loss value includes:
dividing the manually marked data set into a training sample set and a testing sample set;
inputting the training sample set and the test sample set into a pre-constructed flying point prediction model, and performing model training by using the hyper-parameter, the support set, the picture of the position point and the satellite positioning signal intensity of the position point;
and calculating the loss value according to the output result of the pre-constructed flying point prediction model and the label of the position point.
In one possible implementation manner of the first aspect, the flying point prediction model predicts whether each of the location points can be used as a flying point by

ŷ_* = sign(μ_*),  where  p(f_* | x_*, y, U) = N(μ_*, σ_*²)

wherein x_* denotes a predicted location point; ŷ_* indicates whether the predicted location point can be taken as a flying point, ŷ_* ∈ {+1, -1}, +1 indicating that it can be taken as a flying point and -1 indicating that it cannot; y represents the manual annotation data; U denotes the support set; μ_* represents the mean; σ_*² represents the variance.
In a second aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to any one of the above first aspects.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to any one of the above first aspects.
In a fourth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the method of any one of the first aspect.
It is understood that the beneficial effects of the second to fourth aspects can be seen from the description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without inventive effort.
Fig. 1 is a schematic block diagram of a flow of a method for predicting a departure point of an unmanned aerial vehicle according to an embodiment of the present disclosure;
FIG. 2 is a schematic block diagram of a process of training a takeoff point prediction model according to an embodiment of the present disclosure;
fig. 3 is a schematic block diagram of a specific flow of step S203 provided in the present application;
FIG. 4 is a schematic block diagram of a flow of a manual annotation process provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of an active learning model provided herein;
fig. 6 is a block diagram of a structure of an unmanned aerial vehicle departure point prediction apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
According to the unmanned aerial vehicle flying point prediction scheme, the geographical position information of the position point is input into the flying point prediction model trained in advance, and the flying point prediction model outputs the result of whether the position point can be used as the flying point, so that the labor cost and the time cost are reduced.
In the embodiments of the application, a flying point prediction model is first constructed, and training data are then collected; the flying point prediction model is trained using the training data; after the model training is finished, the geographic position information of the position point to be predicted is input into the trained flying point prediction model, and the flying point prediction model outputs a judgment result indicating whether the position point can be used as the takeoff point of the unmanned aerial vehicle.
Compared with the existing unmanned aerial vehicle flying point determining mode, the unmanned aerial vehicle flying point determining mode can reduce labor cost and time cost.
The technical scheme provided by the embodiment of the application is explained in detail below.
Referring to fig. 1, a schematic block diagram of a flow of a method for predicting a departure point of an unmanned aerial vehicle provided in an embodiment of the present application may include the following steps:
and step S101, acquiring the geographical position information of the position point to be predicted.
It should be noted that the geographic location information refers to the longitude and latitude of the location point to be predicted, and the manner of acquiring the longitude and latitude may be arbitrary, specifically including, but not limited to, GPS and BeiDou.
Step S102: inputting the geographic position information of the position point to be predicted into a pre-trained flying point prediction model to obtain an output result of the flying point prediction model, wherein the output result represents whether the position point to be predicted can be used as the flying point of the unmanned aerial vehicle.
It should be noted that the output result of the takeoff point prediction model may represent whether the position point can be used as the judgment result of the takeoff point of the unmanned aerial vehicle. In some embodiments, the output result of the takeoff prediction model may include a picture of the position point to be predicted, in addition to whether the position point to be predicted can be used as a result of takeoff of the unmanned aerial vehicle. Of course, in other embodiments, the flying point prediction model may not output the picture of the position point to be predicted. The pictures output by the flying spot prediction model can play an auxiliary role, can be selectively output or not output, and can be set according to requirements. The outputted picture of the position point to be predicted can be the picture characteristic information of the position point to be predicted.
The picture characteristic information of the position point to be predicted is obtained by the flying point prediction model according to the picture features memorized during previous training. The picture characteristic information can provide topographic and geomorphic information about the position point; for example, it may indicate that the position point to be predicted is located on a grassland or a lake. In general, some special landforms cannot serve as unmanned aerial vehicle flying points; for example, if the position point to be predicted is in a lake, a person cannot stand on the lake to launch the unmanned aerial vehicle.

The takeoff point prediction model may be, but is not limited to, a Heterogeneous Multi-Output Gaussian Process model (HMOGP); that is, the geographic position information of a position point, the picture of the position point, the satellite positioning signal strength of the position point, and the label of the position point may be input into the heterogeneous multi-output Gaussian process model to train it, the inputs being heterogeneous because the picture of the position point and the geographic position information do not belong to the same data type. Therefore, according to the embodiments of the application, the geographic position information of the position point to be predicted is acquired and input into the trained flying point prediction model, which can determine from the input geographic position information whether the position point can be used as a flying point of the unmanned aerial vehicle, thereby reducing the labor cost and time cost of determining the flying point.
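As a minimal sketch of this inference step, a trained model can be treated as a scorer over geographic coordinates whose sign gives the judgment result; the scorer used in the usage line below is an illustrative stand-in, not the patent's actual HMOGP model:

```python
def predict_takeoff_point(score_fn, latitude, longitude):
    """Feed the geographic position of the point to be predicted to a
    trained scorer; map the score to +1 (usable as a takeoff point)
    or -1 (not usable)."""
    return +1 if score_fn(latitude, longitude) >= 0.0 else -1

# illustrative scorer: positive score north of latitude 22.0
result = predict_takeoff_point(lambda lat, lon: lat - 22.0, 22.5, 114.0)
```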
The following describes the training process of the flying point prediction model.
Referring to fig. 2, a schematic block diagram of a process for training a flying point prediction model provided in an embodiment of the present application may include the following steps:
step S201, obtaining an unmarked data set, wherein the unmarked data set comprises geographic position information of at least one position point.
In specific application, the geographic position information of all position points in the region of interest can be collected in advance, and the geographic position information of all position points in the region of interest forms the unmarked data set.
Step S202, at least one position point is randomly selected from the unlabeled data set, and the geographical position information of the selected position point is used as a support set.
The number of the selected position points may be set as needed, and is not limited herein. For example, the unlabeled data set includes the geographic location information of 300 location points, 100 location points are randomly selected from the unlabeled data set, and the geographic location information of the selected 100 location points forms the support set.
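The random selection in step S202 can be sketched as follows; the 300 synthetic latitude/longitude pairs mirror the example above and are purely illustrative:

```python
import random

def build_support_set(unlabeled_points, k, seed=0):
    """Randomly select k location points from the unlabeled data set;
    the geographic position information of the selected points forms
    the support set (step S202)."""
    return random.Random(seed).sample(unlabeled_points, k)

# 300 synthetic (latitude, longitude) points; 100 sampled as the support set
points = [(22.5 + i * 1e-4, 114.0 + i * 1e-4) for i in range(300)]
support = build_support_set(points, 100)
```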
And S203, training a pre-constructed flying point prediction model based on the unmarked data set and the support set.
In specific application, the initial hyper-parameter of the takeoff point prediction model can be determined, a proper Gaussian kernel function is selected according to requirements, and the parameter of the Gaussian kernel function is the initial hyper-parameter of the model. And continuously updating the hyper-parameters through a gradient descent algorithm in the iterative training process of the model. And obtaining the optimal hyper-parameter when the model training is completed.
Referring to a specific flow schematic block diagram of step S203 shown in fig. 3, the step S203 may include a manual labeling step, a training step, and a determining step, which correspond to steps S301, S302, and S303, respectively. The process of training the pre-constructed flying point prediction model based on the unlabeled data set and the support set may include:
s301, labeling unmarked data sets by using an active learning mode to obtain artificially labeled data sets; the artificially marked data set comprises geographic position information of the position point, a picture of the position point, satellite positioning signal intensity of the position point and a label of the position point, wherein the label of the position point represents whether the position point can be used as an unmanned aerial vehicle flying starting point.
In some embodiments, referring to the schematic flow chart of the manual labeling process shown in fig. 4, the step S301 may include:
step S401, according to the uncertainty of each position point, selecting a target position point from the unmarked data set.
It should be noted that the magnitude of the uncertainty can be measured by the expected entropy of the position point: the larger the expected entropy, the larger the uncertainty. The selection process of the target position point may include:
the first step is as follows: and calculating the conditional entropy of each position point in the unmarked data set.
Specifically, the calculation formula of the conditional entropy of the position point may be as follows:
H[y_z^(H) | y_t, y_x^(H)] = - Σ_{y_z^(H) ∈ {+1, -1}} p(y_z^(H) | y_t, y_x^(H)) log p(y_z^(H) | y_t, y_x^(H))   (Equation 1)

wherein y_z^(H) indicates whether the acquired position point z can be used as a flying point, y_z^(H) ∈ {+1, -1}, where -1 indicates that it cannot be taken as a flying point and +1 indicates that it can; x represents a non-acquired position point within the region R_t; y_x^(H) indicates whether the non-acquired position point x can be used as a flying point; y_t represents the training data.
The second step is that: according to the conditional entropy, the expected entropy of each position point is calculated.
Specifically, the expected entropy calculation formula of the position point is specifically as follows:
E_{y_x^(H)}[ H[y_z^(H) | y_t, y_x^(H)] ]   (Equation 2)

wherein x represents a position point in the region R_t that has not been manually labeled but is a candidate for manual labeling; E represents expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to be selected and manually labeled; y_z^(H) indicates whether the position point z can be used as a flying point; y_x^(H) indicates whether the position point x can be used as a flying point; y_t represents the training data; H[y_z^(H) | y_t, y_x^(H)] represents the conditional entropy.
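For a binary label (+1 = can be a flying point, -1 = cannot), the conditional and expected entropy above reduce to simple sums; the probabilities in the sketch below are illustrative placeholders, not values produced by the patent's model:

```python
import math

def binary_entropy(p):
    """Entropy (in nats) of a binary label with P(+1) = p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def expected_entropy(p_x, p_z_given_x):
    """Expectation over the hypothetical label of x of the conditional
    entropy of z: sum over outcomes of x of P(outcome) * H(z | outcome).
    p_x maps each outcome of x (+1/-1) to its probability;
    p_z_given_x maps each outcome of x to P(z = +1 | that outcome)."""
    return sum(p * binary_entropy(p_z_given_x[outcome])
               for outcome, p in p_x.items())

# z stays maximally uncertain whatever x turns out to be -> high score
score = expected_entropy({+1: 0.5, -1: 0.5}, {+1: 0.5, -1: 0.5})
```

A point whose label would remain uncertain even after observing x scores the maximum log 2 nats, so it is a strong candidate for manual labeling.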
The third step: and selecting a target position point according to the expected entropy.
After the expected entropy of each position point is calculated, a position point with a larger expected entropy, namely a larger uncertainty, can be selected as the target position point.
In specific application, an expected entropy threshold may be set, and position points whose expected entropy is larger than the threshold are selected as target position points; alternatively, a number threshold n may be set, that is, the n position points with the highest expected entropy are selected as target position points. Either threshold can be set according to requirements.
Of course, in some other embodiments, the user may select a location point with a larger uncertainty as a target location point instead of setting the threshold value.
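Both selection options just described can be sketched in one helper; the function and variable names are illustrative:

```python
def select_target_points(points, expected_entropies, n=None, threshold=None):
    """Pick target points either by an expected-entropy threshold or as
    the n points with the highest expected entropy (i.e. uncertainty)."""
    ranked = sorted(zip(points, expected_entropies),
                    key=lambda pair: pair[1], reverse=True)
    if threshold is not None:
        return [p for p, e in ranked if e > threshold]
    return [p for p, _ in ranked[:n]]

# top-2 selection among three candidate points
targets = select_target_points(["a", "b", "c"], [0.10, 0.69, 0.45], n=2)
```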
Step S402, obtaining a picture of the target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as an unmanned aerial vehicle flying starting point.
Specifically, after a target position point with high uncertainty is selected, a person may be sent to the target position point to take a picture of it (for example, if the target position point is a grassland, a picture of the grassland is taken), manually measure the satellite positioning signal strength of the target position point, and manually mark whether the target position point can be used as a flying point (namely, the label of the target position point).
Step S403, forming a manually labeled data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point, and the label of the target position point.
Specifically, after the satellite positioning signal strength, the tag and the picture of each target position point are acquired, the above manually labeled data set is formed.
To better describe the process of manual labeling using active learning, the following description will be made with reference to the schematic diagram of the active learning model shown in fig. 5.
As shown in fig. 5, the active learning model consists of C, Q, L, S, U five parts. C refers to a classifier, which here may refer to a Heterogeneous Multiple Output Gaussian Process (HMOGP) model in the embodiments of the present application; q represents a query function; u represents an unlabeled data set (i.e., latitude and longitude information of all location points within the region of interest); s represents a supervisor, which can label the unmarked data set; and L represents a manual labeling sample data set obtained by manual labeling.
In particular, the classifier is used to select a plurality of position points with large uncertainty from the unlabeled data set (the uncertainty calculation is described above). Personnel then arrive at the selected position points to take pictures, manually measure the satellite positioning signal strength of each position point, and mark whether each position point can be used as a flying point. The position information of each position point, the corresponding picture, the corresponding satellite positioning signal strength value, and the result of whether it can be used as a flying point then form the manually labeled data set.
And S302, training a pre-constructed flying point prediction model by using a manually labeled data set according to the support set and the predetermined hyper-parameters, and calculating a loss value.
It is understood that the above hyper-parameter refers to a parameter obtained by selecting a suitable gaussian kernel function according to actual needs. This step may include:
the first step is as follows: the manually labeled data set is divided into a training sample set and a testing sample set.
Specifically, a training data set (i.e., a manually labeled data set) is randomly divided into training samples and testing samples, where the training samples and the testing samples include different data. Training samples are used to train a flying point prediction model, e.g., an HMOGP model; the test samples are used to test the accuracy of the model. Typically, the number of training samples is greater than the number of test samples.
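The random split can be sketched as follows; the 80/20 ratio is an illustrative choice, since the text only requires the training set to be larger than the test set:

```python
import random

def split_labeled_dataset(labeled, train_fraction=0.8, seed=0):
    """Randomly divide the manually labeled data set into disjoint
    training and test sample sets, with the training set larger."""
    shuffled = labeled[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train_samples, test_samples = split_labeled_dataset(list(range(100)))
```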
The second step is that: inputting the training sample set and the test sample set into a pre-constructed flying point prediction model, and performing model training by using the hyper-parameters, the support set, the pictures of the position points and the satellite positioning signal intensity of the position points.
Specifically, the training sample may be input to the prediction model first, then the test sample may be input to the prediction model, and iterative training may be performed to obtain the trained prediction model finally. The training samples and the test samples each include geographical location information of a location point, a corresponding picture, a satellite positioning signal strength value, and a result (i.e., a label) of whether the location point can be a departure point.
In the process of training the flying point prediction model, the model outputs a prediction result at each training iteration, and the prediction result indicates whether the input position point can serve as a flying point. The flying point prediction model makes this determination by Equation 3 below.
(Equation 3 is rendered as an image in the original document.)

where x* represents the position point to be predicted; ŷ* ∈ {+1, −1} indicates whether the predicted position point can serve as a flying point, +1 meaning it can and −1 meaning it cannot; y represents the manually labeled data; u represents the support set; μ represents the mean; and σ² represents the variance.

The mean and the variance can be calculated by Equation 4.

(Equation 4 is rendered as an image in the original document.)
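Since Equations 3 and 4 appear only as images, the following is a hedged sketch of a generic Gaussian-process-style binary prediction: the label is the sign of the posterior mean, and a probit link turns the mean and variance into a class probability. This is a standard GP-classification construction, not necessarily the patent's exact HMOGP formula.

```python
import math

def predict_takeoff(mean, variance):
    """Generic GP-style binary prediction from a posterior mean and
    variance: label = sign(mean) in {+1, -1}, probability via the
    probit link Phi(mean / sqrt(1 + variance)). Illustrative only."""
    p = 0.5 * (1.0 + math.erf(mean / math.sqrt(2.0 * (1.0 + variance))))
    label = 1 if mean >= 0 else -1  # +1: usable as a takeoff point
    return label, p

label, p = predict_takeoff(mean=1.2, variance=0.5)  # confident positive point
```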
It should be noted that, during training, the output of the takeoff point prediction model may contain only the result of whether the position point can serve as a takeoff point of the unmanned aerial vehicle, or it may additionally contain a picture of the position point. In the latter case, the picture is one processed by the model; that is, the model outputs the features it has extracted from the picture.
The third step: and calculating a loss value according to the output result of the pre-constructed flying point prediction model and the label of the position point.
It can be understood that, in the model training process, the takeoff point prediction model outputs the prediction result of the currently input position point, and determines whether the position point can be used as the takeoff point of the unmanned aerial vehicle. After obtaining the output result of the model, a loss value between the output result and the label of the position point is calculated.
The loss value of the flying point prediction model is calculated by Equation 5.

(Equation 5 is rendered as an image in the original document.)
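Equation 5 is likewise an image; as a stand-in, a loss between a batch of raw model outputs and the ±1 labels can be computed with, for example, a mean hinge loss. This choice of loss is an assumption, not the patent's actual objective.

```python
def hinge_loss(scores, labels):
    """Mean hinge loss over a batch: mean of max(0, 1 - y * f(x)),
    where labels y are +1/-1 and scores are raw model outputs.
    A stand-in for the patent's Equation 5, which is not shown."""
    assert len(scores) == len(labels)
    return sum(max(0.0, 1.0 - y * s) for s, y in zip(scores, labels)) / len(scores)

loss = hinge_loss([0.8, -1.5, 0.2], [1, -1, -1])
```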
Step S303: if the loss value has not stabilized, update the hyper-parameter by using a gradient descent algorithm and then execute the manual labeling step, the training step, and the judging step again, i.e., return to step S301 and iterate steps S301 to S303 until the loss value stabilizes; if the loss value has stabilized, proceed to step S304.
And step S304, obtaining a trained flying point prediction model.
In a specific application, after the loss value is calculated, it must be determined whether the loss value has stabilized. Stable means that the loss value no longer changes beyond a small range. If the loss value no longer changes, the model has converged, model training is judged complete, and the trained model and the optimal hyper-parameters are obtained. Conversely, if the loss value has not stabilized, the model has not converged; a new hyper-parameter is calculated by the gradient descent algorithm, and steps S301 to S303 are iterated with the new hyper-parameter until the loss value stabilizes.
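The stop-when-stable loop described above can be sketched on a toy objective; the tolerance, learning rate, and quadratic loss are illustrative assumptions.

```python
def train_until_stable(loss_fn, grad_fn, theta, lr=0.1, tol=1e-4, max_iter=500):
    """Repeat: compute the loss; if it no longer changes beyond `tol`
    (the 'loss tends to be stable' test), stop; otherwise update the
    hyper-parameter theta by gradient descent and train again."""
    prev = float("inf")
    loss = float("inf")
    for _ in range(max_iter):
        loss = loss_fn(theta)
        if abs(prev - loss) < tol:  # loss stable -> model converged
            break
        prev = loss
        theta = theta - lr * grad_fn(theta)
    return theta, loss

# Toy objective (theta - 2)^2 with gradient 2*(theta - 2), minimum at theta = 2.
theta, final_loss = train_until_stable(lambda t: (t - 2.0) ** 2,
                                       lambda t: 2.0 * (t - 2.0), theta=0.0)
```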
After the trained flying point prediction model is obtained, the geographic position information of a position point to be predicted can be input into the model, and the model's output indicates whether that position point can serve as a takeoff point of the unmanned aerial vehicle, thereby improving the efficiency of searching for unmanned aerial vehicle takeoff points.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the method for predicting the takeoff point of the unmanned aerial vehicle in the above embodiment, fig. 6 shows a structural block diagram of the device for predicting the takeoff point of the unmanned aerial vehicle provided in the embodiment of the present application, and for convenience of description, only the parts related to the embodiment of the present application are shown.
Referring to fig. 6, the apparatus includes:
a location point information obtaining module 61, configured to obtain geographic location information of a location point to be predicted;
and the prediction module 62 is configured to input the geographic position information of the position point to be predicted into the pre-trained takeoff point prediction model, obtain an output result of the takeoff point prediction model, and indicate whether the position point to be predicted can be used as a takeoff point of the unmanned aerial vehicle.
In a possible implementation manner, the apparatus further includes:
the data set acquisition module is used for acquiring an unmarked data set, wherein the unmarked data set comprises the geographic position information of at least one position point;
a support set determining module, configured to randomly select at least one location point from an unlabeled data set, and use geographic location information of the selected location point as a support set;
and the training module is used for training a pre-constructed flying point prediction model based on the unmarked data set and the support set.
In a possible implementation manner, the training module is specifically configured to perform the following steps:
manual labeling: labeling the unmarked data set by using an active learning mode to obtain a manually labeled data set; the data set marked manually comprises geographic position information of the position point, a picture of the position point, satellite positioning signal intensity of the position point and a label of the position point, and the label of the position point represents whether the position point can be used as an unmanned aerial vehicle flying starting point or not;
training: training a pre-constructed flying point prediction model by using a manually labeled data set according to a support set and a predetermined hyper-parameter, and calculating a loss value;
a judging step: if the loss value does not tend to be stable, after updating the hyper-parameters by using a gradient descent algorithm, circularly executing a manual labeling step, a training step and a judging step until the loss value tends to be stable; and if the loss value tends to be stable, obtaining a trained flying point prediction model.
In a possible implementation manner, the training module is specifically configured to:
selecting a target position point from the unmarked data set according to the uncertainty of each position point;
acquiring a picture of a target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as an unmanned aerial vehicle flying-off point or not;
and forming an artificially labeled data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point and the label of the target position point.
In a possible implementation manner, the training module is specifically configured to:
calculating the conditional entropy of each position point in the unmarked data set;
calculating expected entropies of all the position points according to the conditional entropies;
and selecting a target position point according to the expected entropy.
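The three sub-steps above (conditional entropy, expected entropy, target selection) can be sketched with a Bernoulli entropy standing in for the patent's GP-based conditional entropy; that substitution, and the function names, are assumptions.

```python
import math

def bernoulli_entropy(p):
    """Entropy (in nats) of a binary prediction with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def pick_target_point(point_probs):
    """Return the index of the position point whose takeoff-point
    label is most uncertain (highest entropy)."""
    entropies = [bernoulli_entropy(p) for p in point_probs]
    return max(range(len(entropies)), key=entropies.__getitem__)

# The point with probability 0.55 is closest to maximal uncertainty.
target = pick_target_point([0.9, 0.55, 0.02, 0.7])
```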
In a possible implementation manner, the training module is specifically configured to:
calculate the expected entropy of each of the position points by the expected-entropy formula (rendered as an image in the original document), i.e., the expectation over y_x^(H) of the conditional entropy H[y_z^(H) | y_t, y_x^(H)];

wherein x represents a position point in the region R_t that has not been manually labeled but is selected for manual labeling; E represents the expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to have been selected and manually labeled; y_z^(H) indicates whether the position point z can serve as a flying point; y_x^(H) indicates whether the position point x can serve as a flying point; y_t represents the training data; and H[y_z^(H) | y_t, y_x^(H)] represents the conditional entropy.

The conditional entropy is calculated by its own formula (also rendered as an image in the original document), wherein y_z^(H) indicates whether the acquired position point z can serve as a flying point; x represents a non-acquired location point within the region R_t; y_x^(H) indicates whether the non-acquired position point x can serve as a flying point; and y_t represents the training data.
In a possible implementation manner, the training module is specifically configured to:
dividing the manually marked data set into a training sample set and a testing sample set;
inputting a training sample set and a test sample set into a pre-constructed flying point prediction model, and performing model training by using a hyper-parameter, a support set, a picture of a position point and the satellite positioning signal intensity of the position point;
and calculating a loss value according to the output result of the pre-constructed flying point prediction model and the label of the position point.
In one possible implementation, the takeoff point prediction model predicts whether each position point can serve as a flying point by Equation 3 (rendered as an image in the original document);

wherein x* represents the predicted position point; ŷ* ∈ {+1, −1} indicates whether the predicted position point can serve as a flying point, +1 indicating that it can and −1 indicating that it cannot; y represents the manually labeled data; u represents the support set; μ represents the mean; and σ² represents the variance.
The unmanned aerial vehicle flying point prediction device has the function of implementing the unmanned aerial vehicle flying point prediction method described above. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software comprises one or more modules corresponding to the function, and the modules may be software and/or hardware.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/modules, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and reference may be made to the part of the embodiment of the method specifically, and details are not described here.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70, a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, the processor 70 implementing the steps in any of the various method embodiments described above when executing the computer program 72.
The terminal device 7 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will appreciate that fig. 7 is only an example of the terminal device 7 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components, and may, for example, further include input/output devices, network access devices, and the like.
The processor 70 may be a Central Processing Unit (CPU); it may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, the memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. In other embodiments, the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 7. Further, the memory 71 may include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps that can be implemented in the above method embodiments.
The embodiments of the present application provide a computer program product, which, when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An unmanned aerial vehicle takeoff point prediction method is characterized by comprising the following steps:
acquiring geographical position information of a position point to be predicted;
and inputting the geographic position information of the position point to be predicted into a pre-trained flying point prediction model to obtain an output result of the flying point prediction model, wherein the output result represents whether the position point to be predicted can be used as the flying point of the unmanned aerial vehicle.
2. The method of claim 1, prior to obtaining geographic location information for a location point to be predicted, further comprising:
acquiring an unlabeled data set, wherein the unlabeled data set comprises geographic position information of at least one position point;
randomly selecting at least one position point from the unmarked data set, and taking the geographical position information of the selected position point as a support set;
and training a pre-constructed flying point prediction model based on the unmarked data set and the support set.
3. The method of claim 2, wherein training a pre-constructed flying point prediction model based on the unlabeled data set and the support set comprises:
manual labeling: labeling the unmarked data set by using an active learning mode to obtain a manually labeled data set; the data set of the artificial marking comprises geographic position information of a position point, a picture of the position point, satellite positioning signal intensity of the position point and a label of the position point, wherein the label of the position point represents whether the position point can be used as an unmanned aerial vehicle flying point or not;
training: training a pre-constructed flying point prediction model by using the artificially labeled data set according to the support set and the predetermined hyper-parameters, and calculating a loss value;
a judging step: if the loss value does not tend to be stable, after updating the hyper-parameter by using a gradient descent algorithm, circularly executing the manual labeling step, the training step and the judging step until the loss value tends to be stable; and if the loss value tends to be stable, obtaining a trained flying point prediction model.
4. The method of claim 3, wherein annotating the unlabeled data set using active learning to obtain an artificially labeled data set comprises:
selecting a target position point from the unmarked data set according to the uncertainty of each position point;
acquiring a picture of the target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as an unmanned aerial vehicle flying point;
and forming the artificially labeled data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point and the label of the target position point.
5. The method of claim 4, wherein selecting a target location point from the unlabeled dataset based on the uncertainty of each location point comprises:
calculating the conditional entropy of each position point in the unmarked data set;
calculating the expected entropy of each position point according to the conditional entropy;
and selecting a target position point according to the expected entropy.
6. The method of claim 5, wherein calculating the expected entropy of each of the position points based on the conditional entropy comprises:

calculating the expected entropy of each of the position points by the expected-entropy formula (rendered as an image in the original document), i.e., the expectation over y_x^(H) of the conditional entropy H[y_z^(H) | y_t, y_x^(H)];

wherein x represents a position point in the region R_t that has not been manually labeled but is selected for manual labeling; E represents the expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to have been selected and manually labeled; y_z^(H) indicates whether the position point z can serve as a flying point; y_x^(H) indicates whether the position point x can serve as a flying point; y_t represents the training data; and H[y_z^(H) | y_t, y_x^(H)] represents the conditional entropy;

wherein the conditional entropy is calculated by its own formula (also rendered as an image in the original document), in which y_z^(H) indicates whether the acquired position point z can serve as a flying point, x represents a non-acquired location point within the region R_t, y_x^(H) indicates whether the non-acquired position point x can serve as a flying point, and y_t represents the training data.
7. The method of claim 3, wherein training a pre-constructed flying point prediction model by using the artificially labeled data set according to the support set and the hyper-parameters, and calculating a loss value, comprises:
dividing the manually marked data set into a training sample set and a testing sample set;
inputting the training sample set and the test sample set into a pre-constructed flying point prediction model, and performing model training by using the hyper-parameter, the support set, the picture of the position point and the satellite positioning signal intensity of the position point;
and calculating the loss value according to the output result of the pre-constructed flying point prediction model and the label of the position point.
8. The method according to any one of claims 3 to 7, wherein the flying point prediction model predicts whether each position point can serve as a flying point by Equation 3 (rendered as an image in the original document);

wherein x* represents the predicted position point; ŷ* ∈ {+1, −1} indicates whether the predicted position point can serve as a flying point, +1 indicating that it can and −1 indicating that it cannot; y represents the manually labeled data; u represents the support set; μ represents the mean; and σ² represents the variance.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202010101564.XA 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device Active CN112541608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010101564.XA CN112541608B (en) 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device


Publications (2)

Publication Number Publication Date
CN112541608A true CN112541608A (en) 2021-03-23
CN112541608B CN112541608B (en) 2023-10-20

Family

ID=75013312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010101564.XA Active CN112541608B (en) 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device

Country Status (1)

Country Link
CN (1) CN112541608B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100017051A1 (en) * 2008-07-15 2010-01-21 Astrium Gmbh Method of Automatically Determining a Landing Runway
CN103336863A (en) * 2013-06-24 2013-10-02 北京航空航天大学 Radar flight path observation data-based flight intention recognition method
CN103942229A (en) * 2013-01-22 2014-07-23 日电(中国)有限公司 Destination prediction device and method
EP3115858A1 (en) * 2014-03-07 2017-01-11 State Grid Corporation of China (SGCC) Centralized monitoring system and monitoring method for unmanned aerial vehicle to patrol power transmission line
CN107444665A (en) * 2017-07-24 2017-12-08 长春草莓科技有限公司 A kind of unmanned plane Autonomous landing method
CN108334099A (en) * 2018-01-26 2018-07-27 上海深视信息科技有限公司 A kind of efficient unmanned plane human body tracing method
US20180267524A1 (en) * 2016-05-24 2018-09-20 Wuhan University Of Science And Technology Air-ground heterogeneous robot system path planning method based on neighborhood constraint
CN109543994A (en) * 2018-11-20 2019-03-29 广东电网有限责任公司 A kind of unmanned plane dispositions method and device
CN110197475A (en) * 2018-10-31 2019-09-03 国网宁夏电力有限公司检修公司 Insulator automatic recognition system, method and application in a kind of transmission line of electricity
CN110209195A (en) * 2019-06-13 2019-09-06 浙江海洋大学 The tele-control system and control method of marine unmanned plane
CN110751106A (en) * 2019-10-23 2020-02-04 南京航空航天大学 Unmanned aerial vehicle target detection method and system
CN110766038A (en) * 2019-09-02 2020-02-07 深圳中科保泰科技有限公司 Unsupervised landform classification model training and landform image construction method
CN110794437A (en) * 2019-10-31 2020-02-14 深圳中科保泰科技有限公司 Satellite positioning signal strength prediction method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yan Wenli, Jiang Zhendong: "Early-warning satellite function simulation system", Journal of System Simulation, no. 06 *
Ma Qi; Sun Xiaojun; Zhang Yang; Jiang Yuchen: "Detection and recognition of low-altitude UAVs based on infrared images", Journal of Projectiles, Rockets, Missiles and Guidance, no. 03 *

Also Published As

Publication number Publication date
CN112541608B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
EP3385919B1 (en) Method of processing passage record and device
CN106681996B (en) The method and apparatus for determining interest region in geographic range, point of interest
Surabuddin Mondal et al. Modeling of spatio-temporal dynamics of land use and land cover in a part of Brahmaputra River basin using Geoinformatic techniques
CN106767835B (en) Positioning method and device
CN108540988B (en) Scene division method and device
CN102375987B (en) Image processing device and image feature vector extracting and image matching method
CN110766038A (en) Unsupervised landform classification model training and landform image construction method
CN109949063A (en) Address determination method and apparatus, electronic device, and readable storage medium
EP3944218A1 (en) Information processing device, information processing method, and program
CN110794437B (en) Satellite positioning signal strength prediction method and device
CN111831935A (en) Interest point ordering method and device, electronic equipment and storage medium
Chen et al. A subpixel mapping algorithm combining pixel-level and subpixel-level spatial dependences with binary integer programming
CN110781256B (en) Method and device for determining POI matched with Wi-Fi based on sending position data
CN110457706B (en) Point-of-interest name selection model training method, using method, device and storage medium
CN113298042B (en) Remote sensing image data processing method and device, storage medium and computer equipment
CN110263250A (en) Recommendation model generation method and device
CN108875901B (en) Neural network training method and universal object detection method, device and system
CN110096609A (en) Source of houses searching method, device, equipment and computer readable storage medium
CN108830302B (en) Image classification method, training method, classification prediction method and related device
CN112541608A (en) Unmanned aerial vehicle takeoff point prediction method and device
CN111104965A (en) Vehicle target identification method and device
Zhang et al. Wild plant data collection system based on distributed location
CN111831827B (en) Data processing method and device, electronic equipment and storage medium
CN111782980B (en) Mining method, device, equipment and storage medium for map interest points

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220214

Address after: 518000 2515, building 2, Huilong business center, North Station community, Minzhi street, Longhua District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Zhongke Baotai Aerospace Technology Co.,Ltd.

Address before: Room 1101-1102, building 1, Changfu Jinmao building, No.5, Shihua Road, free trade zone, Fubao street, Futian District, Shenzhen, Guangdong 518000

Applicant before: Shenzhen Zhongke Baotai Technology Co.,Ltd.

GR01 Patent grant