CN112541608B - Unmanned aerial vehicle departure point prediction method and device - Google Patents


Info

Publication number
CN112541608B
CN112541608B (application number CN202010101564.XA)
Authority
CN
China
Prior art keywords
position point
flying spot
point
training
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010101564.XA
Other languages
Chinese (zh)
Other versions
CN112541608A (en)
Inventor
陈杰
李坚强
程艳燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhongke Baotai Aerospace Technology Co ltd
Original Assignee
Shenzhen Zhongke Baotai Aerospace Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhongke Baotai Aerospace Technology Co., Ltd.
Priority to CN202010101564.XA
Publication of CN112541608A
Application granted
Publication of CN112541608B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/28 - Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Operations Research (AREA)
  • Artificial Intelligence (AREA)
  • Tourism & Hospitality (AREA)
  • Game Theory and Decision Science (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Navigation (AREA)

Abstract

The embodiments of the application relate to the technical field of unmanned aerial vehicles and disclose a method and a device for predicting the flying spot (takeoff point) of an unmanned aerial vehicle. The method comprises the following steps: obtaining geographic position information of a position point to be predicted; and inputting the geographic position information of the position point to be predicted into a pre-trained flying spot prediction model to obtain an output result of the flying spot prediction model, wherein the output result represents whether the position point to be predicted can be used as a flying spot of the unmanned aerial vehicle. By obtaining the geographic position information of the position point to be predicted and inputting it into the trained flying spot prediction model, the flying spot prediction model can judge, from the input geographic position information, whether the position point can be used as a flying spot of the unmanned aerial vehicle, which reduces the labor cost and time cost of determining the flying spot of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle departure point prediction method and device
Technical Field
The application belongs to the technical field of unmanned aerial vehicles, and in particular relates to a method and a device for predicting the flying spot of an unmanned aerial vehicle.
Background
With the continuous development of unmanned aerial vehicle technology, unmanned aerial vehicle applications are becoming more and more widespread.
When an unmanned aerial vehicle takes off, a suitable takeoff point must first be determined, and the unmanned aerial vehicle then takes off from that point. At present, a flying spot suitable for the takeoff of an unmanned aerial vehicle is generally found manually. The specific process is as follows: an open area is searched for manually, and the satellite positioning signal strength at each position point in the open area is measured with the unmanned aerial vehicle; a flying spot is then selected according to the satellite positioning signal strength, where a strong satellite positioning signal indicates a good takeoff environment and a position point suitable for takeoff, and a weak satellite positioning signal indicates a position point unsuitable for takeoff. If none of the position points in one area is suitable for takeoff, the next area has to be searched, and the operation is repeated until a suitable takeoff point is found.
As can be seen from the above, the existing method for determining the flying spot of an unmanned aerial vehicle consumes a great deal of labor cost and time cost.
Disclosure of Invention
The embodiments of the application provide a method and a device for predicting the flying spot of an unmanned aerial vehicle, so as to solve the problem that the existing method for determining the flying spot of an unmanned aerial vehicle consumes a large amount of labor cost and time cost.
In a first aspect, an embodiment of the present application provides a method for predicting a flying spot of an unmanned aerial vehicle, including:
obtaining geographic position information of a position point to be predicted;
and inputting the geographic position information of the position point to be predicted into a pre-trained flying spot prediction model to obtain an output result of the flying spot prediction model, wherein the output result represents whether the position point to be predicted can be used as a flying spot of the unmanned aerial vehicle.
In the embodiments of the application, the geographic position information of a position point is acquired and then input into the trained flying spot prediction model, so that the flying spot prediction model can determine, from the input geographic position information, whether the position point can be used as a flying spot of the unmanned aerial vehicle, thereby reducing the labor cost and time cost of determining the flying spot of the unmanned aerial vehicle.
In a possible implementation manner of the first aspect, before obtaining the geographical location information of the location point to be predicted, the method further includes:
Acquiring an unlabeled data set, wherein the unlabeled data set comprises geographic position information of at least one position point;
randomly selecting at least one position point from the unlabeled data set, and taking geographic position information of the selected position point as a support set;
and training a pre-constructed flying spot prediction model based on the unlabeled data set and the support set.
In a possible implementation manner of the first aspect, training a pre-built flying spot prediction model based on the unlabeled dataset and the support set includes:
manual labeling: labeling the unlabeled data set by using an active learning mode to obtain a manually labeled data set; the manually marked data set comprises geographic position information of a position point, pictures of the position point, satellite positioning signal intensity of the position point and labels of the position point, and the labels of the position point represent whether the position point can be used as a flying spot of an unmanned aerial vehicle;
training: training a pre-constructed flying spot prediction model by using the manually marked data set according to the support set and a pre-determined super parameter, and calculating a loss value;
Judging: if the loss value does not tend to be stable, after updating the super parameter by using a gradient descent algorithm, circularly executing the manual labeling step, the training step and the judging step until the loss value tends to be stable; and if the loss value tends to be stable, obtaining a flying spot prediction model after training is completed.
In a possible implementation manner of the first aspect, labeling the unlabeled dataset by using an active learning manner to obtain a manually labeled dataset includes:
selecting a target position point from the unlabeled data set according to the uncertainty of each position point;
acquiring a picture of the target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as a flying spot of an unmanned aerial vehicle;
and forming the manually marked data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point and the label of the target position point.
In a possible implementation manner of the first aspect, selecting a target location point from the unlabeled dataset according to uncertainty of each location point includes:
Calculating the conditional entropy of each position point in the unlabeled data set;
calculating expected entropy of each position point according to the conditional entropy;
and selecting a target position point according to the expected entropy.
In a possible implementation manner of the first aspect, calculating the expected entropy of each location point according to the conditional entropy includes:
calculating the expected entropy of each position point by

$$\mathbb{E}_{y_x^{(H)}}\!\left[\sum_{z\in R_t} H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]\right];$$

wherein x represents a position point in the region R_t that has not been manually labeled but is a candidate to be manually labeled; \mathbb{E} represents the expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to have been selected and manually labeled; y_z^{(H)} indicates whether the position point z can be used as a flying spot; y_x^{(H)} indicates whether the position point x can be used as a flying spot; y_t represents the training data; and H[y_z^{(H)} | y_t, y_x^{(H)}] represents the conditional entropy

$$H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]=-\sum_{y_z^{(H)}\in\{+1,-1\}} p\!\left(y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right)\log p\!\left(y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right),$$

wherein y_z^{(H)} indicates whether the acquired position point z can be used as a flying spot; x represents a non-acquired position point in the region R_t; y_x^{(H)} indicates whether the non-acquired position point x can be used as a flying spot; and y_t represents the training data.
In a possible implementation manner of the first aspect, according to the support set and the super parameter, training a pre-constructed flying spot prediction model by using the manually marked data set, and calculating a loss value, including:
Dividing the manually marked data set into a training sample set and a test sample set;
inputting the training sample set and the test sample set into a pre-constructed flying spot prediction model, and performing model training by using the super parameters, the support set, the pictures of the position points and the satellite positioning signal strength of the position points;
and calculating the loss value according to the output result of the pre-constructed flying spot prediction model and the label of the position point.
In a possible implementation manner of the first aspect, the flying spot prediction model predicts whether each position point can be used as a flying spot through the predictive distribution

$$p\!\left(y_*^{(H)}\,\middle|\,x_*,\,y,\,u\right)=\mathcal{N}\!\left(\mu_*,\ \sigma_*^{2}\right);$$

wherein x_* represents the position point to be predicted; y_*^{(H)} indicates whether the predicted position point can be used as a takeoff point, y_*^{(H)} ∈ {+1, -1}, where +1 represents that it can be used as a takeoff point and -1 represents that it cannot; y represents the manually labeled data; u represents the support set; \mu_* represents the mean; and \sigma_*^{2} represents the variance.
In a second aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the method according to any one of the first aspects when executing the computer program.
In a third aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a method as in any of the first aspects above.
In a fourth aspect, an embodiment of the application provides a computer program product for, when run on a terminal device, causing the terminal device to perform the method of any of the first aspects above.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic block flow diagram of a method for predicting a flying spot of an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 2 is a schematic block diagram of a flying-spot prediction model training process provided by an embodiment of the present application;
fig. 3 is a schematic block diagram of a specific flow of step S203 provided in the present application;
FIG. 4 is a schematic block diagram of a manual labeling process according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an active learning model according to the present application;
fig. 6 is a block diagram of a unmanned plane flying spot predicting device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
According to the unmanned aerial vehicle flying spot prediction scheme provided by the embodiments of the application, the geographic position information of a position point is input into a pre-trained flying spot prediction model, and the flying spot prediction model outputs a result indicating whether the position point can be used as a flying spot, thereby reducing labor cost and time cost.
In the embodiments of the application, a flying spot prediction model is first constructed and training data are collected; the flying spot prediction model is then trained with the training data; after the model training is completed, the geographic position information of a position point to be predicted is input into the trained flying spot prediction model, and the flying spot prediction model outputs a judgment result of whether the position point can be used as a takeoff point of the unmanned aerial vehicle.
Compared with the existing way of determining the flying spot of an unmanned aerial vehicle, this scheme can reduce labor cost and time cost.
The technical scheme provided by the embodiment of the application is explained in detail below.
Referring to fig. 1, a schematic flow block diagram of a method for predicting a flying spot of an unmanned aerial vehicle according to an embodiment of the present application may include the following steps:
step S101, geographical position information of a position point to be predicted is obtained.
It should be noted that the geographic position information refers to the longitude and latitude of the position point to be predicted. The longitude and latitude of the position point to be predicted may be collected in any manner, specifically including, but not limited to, GPS and BeiDou.
Step S102, geographical position information of the position points to be predicted is input into a pre-trained flying spot prediction model, an output result of the flying spot prediction model is obtained, and the output result represents whether the position points to be predicted can be used as flying spots of the unmanned aerial vehicle.
It should be noted that the output result of the flying spot prediction model represents the judgment of whether the position point can be used as a flying spot of the unmanned aerial vehicle. In some embodiments, in addition to the result of whether the position point to be predicted can be used as a takeoff point of the unmanned aerial vehicle, the output result of the flying spot prediction model may also include a picture of the position point to be predicted. Of course, in other embodiments, the flying spot prediction model may not output a picture of the position point to be predicted. The picture output by the flying spot prediction model plays an auxiliary role; it may be output or not, as required. The output picture of the position point to be predicted may be the picture feature information of the position point to be predicted.
The picture feature information of the position point to be predicted is generated by the flying spot prediction model from the picture features it has learned and memorized during training. The picture feature information can provide terrain information of the position point; for example, it may characterize the location of the position point to be predicted as a lawn or a lake. In general, some special terrains cannot be used as the flying spot of an unmanned aerial vehicle; for example, if the position point to be predicted is located on a lake, a person generally cannot stand there to launch the unmanned aerial vehicle.

The flying spot prediction model may be, but is not limited to, a heterogeneous multi-output Gaussian process (Heterogeneous Multi-Output Gaussian Process, HMOGP) model. That is, the geographic position information of a position point, the picture of the position point, the satellite positioning signal strength of the position point and the label of the position point may be input into the heterogeneous multi-output Gaussian process model to train the model, even though the picture of a position point and its geographic position information are not data of the same type. After the model is trained, it can output multiple results; for example, when the geographic position information of the position point to be predicted is input into the trained heterogeneous multi-output Gaussian process model, the model outputs the picture of the position point to be predicted and the result of whether the position point to be predicted can be used as a takeoff point of the unmanned aerial vehicle. It can be seen that, in the embodiments of the application, by acquiring the geographic position information of a position point to be predicted and inputting it into the trained flying spot prediction model, the flying spot prediction model can determine whether the position point can be used as a flying spot of the unmanned aerial vehicle from the input geographic position information, thereby reducing the labor cost and time cost of determining the flying spot of the unmanned aerial vehicle.
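To make the data layout concrete, the following sketch shows one possible record format for a training position point and the shape of a prediction call. The LocationSample class, its field names and the model.predict interface are illustrative assumptions, not the patent's API.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class LocationSample:
    """One position point as used for training the flying spot prediction model (assumed layout)."""
    longitude: float
    latitude: float
    picture: Optional[np.ndarray] = None      # photo of the point (training input only)
    signal_strength: Optional[float] = None   # satellite positioning signal strength (training input only)
    label: Optional[int] = None               # +1: usable as a flying spot, -1: not usable

def predict_flying_spot(model, longitude, latitude):
    """At prediction time only the geographic position is supplied; the model returns the
    takeoff judgement and, optionally, the picture features it associates with the point."""
    label, picture_features = model.predict(np.array([[longitude, latitude]]))  # hypothetical API
    return label, picture_features
```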
The following describes the training process of the flying spot prediction model.
Referring to fig. 2, a schematic flow diagram of a flying-spot prediction model training process according to an embodiment of the present application may include the following steps:
step S201, obtaining an unlabeled data set, where the unlabeled data set includes geographic location information of at least one location point.
In a specific application, the geographic position information of all the position points in the region of interest can be collected in advance, and the geographic position information of all the position points in the region of interest forms the unlabeled data set.
Step S202, randomly selecting at least one position point from an unlabeled data set, and taking geographical position information of the selected position point as a support set.
The number of selected position points may be set according to the need, and is not limited herein. For example, the unlabeled dataset includes geographic location information of 300 location points, 100 location points are randomly selected from the unlabeled dataset, and the geographic location information of the selected 100 location points forms the support set.
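For the 300-point example above, the random selection of the support set might look like the following sketch; the coordinate range (roughly the Shenzhen area) and the array layout are assumptions made only for illustration.

```python
import numpy as np

# unlabeled_set: one row per position point, columns = (longitude, latitude); synthetic example data
unlabeled_set = np.random.uniform(low=[113.8, 22.4], high=[114.3, 22.8], size=(300, 2))

# Randomly pick 100 position points; their geographic coordinates form the support set.
support_idx = np.random.choice(len(unlabeled_set), size=100, replace=False)
support_set = unlabeled_set[support_idx]
```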
And step 203, training a pre-constructed flying spot prediction model based on the unlabeled data set and the support set.
In a specific application, the initial super parameters of the flying spot prediction model may be determined first: a suitable Gaussian kernel function is selected according to the requirements, and the parameters of the Gaussian kernel function are the initial super parameters of the model. During the iterative training of the model, the super parameters are continuously updated by a gradient descent algorithm, and the optimal super parameters are obtained when the model training is completed.
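For intuition about how the super parameters (the Gaussian kernel parameters) can be updated by gradient descent, the sketch below fits a standard single-output Gaussian process with an RBF kernel by minimizing the negative log marginal likelihood. It is a simplification of the heterogeneous multi-output model; the finite-difference gradients, learning rate and noise level are assumptions chosen for brevity.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale, variance):
    """Squared-exponential (RBF) kernel; length_scale and variance are the hyperparameters."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def negative_log_marginal_likelihood(theta, X, y, noise=1e-2):
    """Standard GP regression objective; theta = (log length_scale, log variance)."""
    length_scale, variance = np.exp(theta)
    K = rbf_kernel(X, X, length_scale, variance) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

def fit_hyperparameters(X, y, steps=200, lr=0.05, eps=1e-4):
    """Gradient descent on the objective, with finite-difference gradients for brevity."""
    theta = np.zeros(2)  # initial hyperparameters: length_scale = variance = 1
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            t_plus, t_minus = theta.copy(), theta.copy()
            t_plus[i] += eps
            t_minus[i] -= eps
            grad[i] = (negative_log_marginal_likelihood(t_plus, X, y)
                       - negative_log_marginal_likelihood(t_minus, X, y)) / (2 * eps)
        theta -= lr * grad
    return np.exp(theta)  # optimised (length_scale, variance)
```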
Referring to the specific flow schematic block diagram of step S203 shown in fig. 3, step S203 may include a manual labeling step, a training step, and a judging step, which correspond to S301, S302, and S303, respectively. The training of the pre-constructed flying spot prediction model based on the unlabeled data set and the support set may include:
step S301, labeling the unlabeled data set by using an active learning mode to obtain a manually labeled data set; the manually marked data set comprises geographic position information of the position points, pictures of the position points, satellite positioning signal intensity of the position points and labels of the position points, and the labels of the position points represent whether the position points can be used as flying spots of the unmanned aerial vehicle.
In some embodiments, referring to the schematic block diagram of the manual labeling process shown in fig. 4, the step S301 may include:
And S401, selecting a target position point from the unlabeled data set according to the uncertainty of each position point.
The magnitude of the uncertainty can be determined by the expected entropy of the position point: the larger the expected entropy, the larger the uncertainty. The process of selecting the target position points may include:
the first step: and calculating the conditional entropy of each position point in the unlabeled data set.
Specifically, the conditional entropy of a position point may be calculated as follows:

$$H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]=-\sum_{y_z^{(H)}\in\{+1,-1\}} p\!\left(y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right)\log p\!\left(y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right)$$

wherein y_z^{(H)} indicates whether the acquired position point z can be used as a flying spot, y_z^{(H)} ∈ {+1, -1}, where -1 represents that it cannot be used as a takeoff point and +1 represents that it can; x represents a non-acquired position point in the region R_t; y_x^{(H)} indicates whether the non-acquired position point x can be used as a flying spot; and y_t represents the training data.
And a second step of: and calculating expected entropy of each position point according to the conditional entropy.
Specifically, the expected entropy of a position point is calculated as follows:

$$\mathbb{E}_{y_x^{(H)}}\!\left[\sum_{z\in R_t} H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]\right]$$

wherein x represents a position point in the region R_t that has not been manually labeled but is a candidate to be manually labeled; \mathbb{E} represents the expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to have been selected and manually labeled; y_z^{(H)} indicates whether the position point z can be used as a flying spot; y_x^{(H)} indicates whether the position point x can be used as a flying spot; y_t represents the training data; and H[y_z^{(H)} | y_t, y_x^{(H)}] represents the conditional entropy defined above.
And a third step of: and selecting a target position point according to the expected entropy.
After the expected entropy of each position point is calculated, position points with a larger expected entropy, i.e. with larger uncertainty, can be selected as target position points.
In a specific application, an expected entropy threshold may be preset according to requirements, and position points whose expected entropy exceeds the threshold are selected as target position points; alternatively, a number threshold n may be set, i.e. the n position points with the largest expected entropy are selected as target position points.
Of course, in other embodiments, no threshold may be set, and the user may instead select position points with larger uncertainty as target position points as desired.
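The selection step can be sketched as follows, under the assumption that the model exposes predict_proba and condition_on helpers that return P(y = +1) before and after conditioning on a hypothetical label of the candidate x. These helper names are placeholders; the score follows the expected-entropy formulas given above, and the n highest-scoring (most uncertain) points are selected as in the text.

```python
import numpy as np

def binary_entropy(p):
    """Entropy of a Bernoulli variable with P(+1) = p (natural logarithm)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_entropy(model, x, region_points, train_data):
    """Expectation over the unknown label of x of the total conditional entropy
    of the remaining points z in the region, as in the formula above."""
    p_x = model.predict_proba(x, train_data)               # P(y_x = +1 | y_t), hypothetical helper
    total = 0.0
    for label, weight in ((+1, p_x), (-1, 1.0 - p_x)):
        cond = model.condition_on(x, label, train_data)    # hypothetical: add (x, label) to the data
        p_z = model.predict_proba(region_points, cond)     # P(y_z = +1 | y_t, y_x)
        total += weight * np.sum(binary_entropy(p_z))
    return total

def select_targets(model, unlabeled_points, region_points, train_data, n=5):
    """Pick the n candidates with the largest expected entropy (largest uncertainty)."""
    scores = np.array([expected_entropy(model, x, region_points, train_data)
                       for x in unlabeled_points])
    return np.argsort(scores)[-n:][::-1]   # indices of the n most uncertain points
```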
Step S402, obtaining a picture of a target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as a flying spot of the unmanned aerial vehicle.
Specifically, after the target position points with large uncertainty are selected, a collector can go to each target position point and take a picture of it; for example, if the target position point is located on a grassland, a picture of the grassland is taken. The collector also measures the satellite positioning signal strength of the target position point and manually labels whether each target position point can be used as a flying spot (i.e. the label of the target position point).
Step S403, forming a manually labeled dataset based on the geographic location information of each target location point, the picture of the target location point, the satellite positioning signal strength of the target location point and the label of the target location point.
Specifically, after the satellite positioning signal intensity, the labels and the pictures of each target position point are acquired, the manually marked data set is formed.
In order to better describe the process of manually labeling by active learning, the following description will be made with reference to the schematic diagram of the active learning model shown in fig. 5.
As shown in fig. 5, the active learning model is composed of five parts: C, Q, L, S and U. C refers to the classifier, which in the embodiment of the application is the heterogeneous multi-output Gaussian process (HMOGP) model; Q represents the query function; U represents the unlabeled data set (i.e. the longitude and latitude information of all position points in the region of interest); S represents the supervisor, which can label the unlabeled data set; and L represents the manually labeled sample data set obtained by manual labeling.
Specifically, the classifier is used to select a number of position points with large uncertainty from the unlabeled data set; the calculation of the uncertainty is described above. Collectors then go to each selected position point to take pictures, manually measure the satellite positioning signal strength of each position point, and label whether each position point can be used as a flying spot. The geographic position information of each position point, the corresponding picture, the corresponding satellite positioning signal strength value and the result of whether the point can be used as a flying spot then together form the manually labeled data set.
Step S302, training a pre-constructed flying spot prediction model by using a manually marked data set according to the support set and a pre-determined super parameter, and calculating a loss value.
It will be appreciated that the above-mentioned hyper-parameters refer to parameters obtained by selecting a suitable gaussian kernel function according to actual needs. This step may include:
the first step: the manually labeled data set is divided into a training sample set and a test sample set.
Specifically, the training data set (i.e. the manually labeled data set) is randomly divided into a training sample set and a test sample set, which contain different data. The training samples are used to train the flying spot prediction model, e.g. the HMOGP model; the test samples are used to test the accuracy of the model. Typically, the number of training samples is greater than the number of test samples.
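A minimal sketch of the random split described above; the 80/20 ratio and the fixed seed are assumptions, since the text only requires the training set to be larger than the test set.

```python
import numpy as np

def split_dataset(labeled_records, train_ratio=0.8, seed=0):
    """Randomly split the manually labeled records into disjoint training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labeled_records))
    cut = int(train_ratio * len(labeled_records))
    train = [labeled_records[i] for i in idx[:cut]]
    test = [labeled_records[i] for i in idx[cut:]]
    return train, test
```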
And a second step of: and inputting the training sample set and the test sample set into a pre-constructed flying spot prediction model, and performing model training by using the super parameters, the support set, the pictures of the position points and the satellite positioning signal strength of the position points.
Specifically, the training samples can be input into the prediction model first, then the test samples are input into the prediction model, and the training is iterated, so that a trained prediction model is finally obtained. Both the training samples and the test samples include the geographic position information of the position points, the corresponding pictures, the satellite positioning signal strength values, and the labels indicating whether the points can be used as flying spots.
In the flying spot prediction model training process, each time the flying spot prediction model is trained it outputs a prediction result, and the prediction result represents whether the input position point can be used as a flying spot. The flying spot prediction model can determine whether the input position point can be used as a flying spot by the following equation 3, i.e. the predictive distribution

$$p\!\left(y_*^{(H)}\,\middle|\,x_*,\,y,\,u\right)=\mathcal{N}\!\left(\mu_*,\ \sigma_*^{2}\right)\qquad\text{(equation 3)}$$

wherein x_* represents the predicted position point; y_*^{(H)} indicates whether the predicted position point can be used as a takeoff point, y_*^{(H)} ∈ {+1, -1}, where +1 represents that it can be used as a takeoff point and -1 represents that it cannot; y represents the manually labeled data; u represents the support set; \mu_* represents the mean; and \sigma_*^{2} represents the variance.
The mean and the variance are calculated from the posterior of the Gaussian process by equation 4.
In the flying spot prediction model training process, the output result of the flying spot prediction model may only include whether the location point can be used as the result of the flying spot of the unmanned aerial vehicle, or may also include whether the location point can be used as the result of the flying spot of the unmanned aerial vehicle and the picture of the location point, where the picture of the location point is the picture processed by the flying spot prediction model, that is, the feature of the picture extracted by the flying spot prediction model is used as the output.
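Equation 4 itself is not reproduced in the text. As an illustration only, the sketch below computes the mean and variance from the standard single-output Gaussian process posterior and turns them into the +1/-1 takeoff judgement; the single-output simplification, the decision-by-sign rule and the noise level are assumptions rather than the patent's exact formulas.

```python
import numpy as np
from math import erf, sqrt

def gp_posterior(X_train, y_train, X_star, kernel, noise=1e-2):
    """Standard single-output GP posterior mean and variance at the query points X_star.
    `kernel(A, B)` must return the covariance matrix between the rows of A and B,
    e.g. the RBF kernel sketched earlier with fixed hyperparameters."""
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = kernel(X_train, X_star)                 # shape (n_train, n_star)
    K_ss = kernel(X_star, X_star)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_star.T @ alpha
    v = np.linalg.solve(L, K_star)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

def predict_takeoff(mean, var):
    """Turn the predictive mean/variance into the +1/-1 takeoff judgement:
    +1 (can be used as a flying spot) when the predictive mean is positive,
    with the normal CDF of mean/std as an approximate confidence."""
    std = np.sqrt(np.maximum(var, 1e-12))
    prob = np.array([0.5 * (1.0 + erf(m / (s * sqrt(2)))) for m, s in zip(mean, std)])
    labels = np.where(mean > 0, +1, -1)
    return labels, prob
```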
And a third step of: and calculating a loss value according to the output result of the pre-constructed flying spot prediction model and the label of the position point.
It can be understood that in the model training process, the flying spot prediction model outputs the prediction result of the currently input position point, and determines whether the position point can be used as the flying spot of the unmanned aerial vehicle. After the output result of the model is obtained, a loss value between the output result and the label of the position point is calculated.
The loss value of the flying spot prediction model is given by equation 5, which compares the output result of the model with the label of the position point.
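Equation 5 is likewise not reproduced. Since the text only states that the loss compares the model output with the label of the position point, the sketch below uses the negative log-likelihood of the +1/-1 labels under the predicted probabilities; this choice of loss is an assumption, not necessarily the patent's equation 5.

```python
import numpy as np

def takeoff_loss(prob_positive, labels):
    """Negative log-likelihood of the +1/-1 labels given P(y = +1); an assumed
    stand-in for equation 5, which only needs the model output and the label."""
    prob_positive = np.clip(prob_positive, 1e-12, 1 - 1e-12)
    p_of_label = np.where(labels == +1, prob_positive, 1.0 - prob_positive)
    return -np.mean(np.log(p_of_label))
```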
Step S303, if the loss value does not tend to be stable, after updating the super parameter by using a gradient descent algorithm, circularly executing a manual labeling step, a training step and a judging step, namely returning to the step S301, and iterating the steps S301 to S303 until the loss value tends to be stable; if the loss value tends to be stable, the process proceeds to step S304.
And step S304, obtaining a flying spot prediction model after training.
In a specific application, after the loss value is calculated, it is necessary to determine whether the loss value tends to be stable. "Stable" here means that the loss value no longer changes beyond a small range. If the loss value no longer changes, the model tends to converge, the model training is judged to be completed, and the trained model and the optimal super parameters are obtained. Otherwise, if the loss value does not tend to be stable, the model has not converged; a new super parameter is calculated by the gradient descent algorithm, and steps S301 to S303 are iterated with the new super parameter until the loss value tends to be stable.
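Putting steps S301 to S304 together, the following sketch shows the outer loop: label new points, retrain, check whether the loss has stabilised, and otherwise update the super parameters by gradient descent. The label_round and train_round callables are placeholders for the labeling and training components sketched above; the stopping tolerance and learning rate are assumptions.

```python
import numpy as np

def train_until_stable(model, theta, label_round, train_round,
                       max_rounds=50, tol=1e-3, lr=0.05):
    """Outer loop of steps S301-S303. `label_round(model)` stands for the active-learning
    manual-labeling step and returns newly labeled samples; `train_round(model, data, theta)`
    stands for one training pass and returns (loss, gradient of the loss w.r.t. theta)."""
    labeled_data = []
    previous_loss = None
    for _ in range(max_rounds):
        labeled_data += label_round(model)                     # S301: manual labeling via active learning
        loss, grad = train_round(model, labeled_data, theta)   # S302: train with current super parameters
        # S303: judge whether the loss has stabilised (no longer changes beyond `tol`).
        if previous_loss is not None and abs(previous_loss - loss) < tol:
            break                                              # S304: training finished
        theta = theta - lr * np.asarray(grad)                  # gradient-descent update of the super parameters
        previous_loss = loss
    return model, theta
```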
After the flying spot prediction model is obtained after training is completed, geographical position information of the position point to be predicted can be input into the flying spot prediction model, an output result of the flying spot prediction model is obtained, whether the position point can be used as a flying spot of the unmanned aerial vehicle can be judged, and efficiency of searching the flying spot of the unmanned aerial vehicle is improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the unmanned aerial vehicle flying-spot predicting method described in the above embodiments, fig. 6 shows a block diagram of the unmanned aerial vehicle flying-spot predicting device provided in the embodiment of the present application, and for convenience of explanation, only the portions related to the embodiment of the present application are shown.
Referring to fig. 6, the apparatus includes:
a location point information obtaining module 61, configured to obtain geographical location information of a location point to be predicted;
the prediction module 62 is configured to input geographical location information of a location point to be predicted into a pre-trained flying spot prediction model, obtain an output result of the flying spot prediction model, and characterize whether the location point to be predicted can be used as a flying spot of the unmanned aerial vehicle.
In one possible implementation manner, the apparatus further includes:
the data set acquisition module is used for acquiring an unlabeled data set, wherein the unlabeled data set comprises geographic position information of at least one position point;
the support set determining module is used for randomly selecting at least one position point from the unlabeled data set and taking the geographic position information of the selected position point as a support set;
and the training module is used for training a pre-constructed flying spot prediction model based on the unlabeled data set and the support set.
In one possible implementation manner, the training module is specifically configured to perform the following steps:
manual labeling: labeling the unlabeled data set by using an active learning mode to obtain a manually labeled data set; the manually marked data set comprises geographic position information of the position points, pictures of the position points, satellite positioning signal strength of the position points and labels of the position points, and the labels of the position points represent whether the position points can be used as flying points of the unmanned aerial vehicle;
training: training a pre-constructed flying spot prediction model by using a manually marked data set according to the support set and a pre-determined super parameter, and calculating a loss value;
Judging: if the loss value does not tend to be stable, after updating the super parameter by using a gradient descent algorithm, circularly executing a manual labeling step, a training step and a judging step until the loss value tends to be stable; and if the loss value tends to be stable, obtaining the flying spot prediction model after training is completed.
In one possible implementation manner, the training module is specifically configured to:
selecting a target position point from the unlabeled data set according to the uncertainty of each position point;
acquiring a picture of a target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as a flying spot of the unmanned aerial vehicle;
and forming a manually marked data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point and the label of the target position point.
In one possible implementation manner, the training module is specifically configured to:
calculating the conditional entropy of each position point in the unlabeled data set;
calculating expected entropy of each position point according to the conditional entropy;
and selecting a target position point according to the expected entropy.
In one possible implementation manner, the training module is specifically configured to:
calculate the expected entropy of each position point by

$$\mathbb{E}_{y_x^{(H)}}\!\left[\sum_{z\in R_t} H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]\right];$$

wherein x represents a position point in the region R_t that has not been manually labeled but is a candidate to be manually labeled; \mathbb{E} represents the expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to have been selected and manually labeled; y_z^{(H)} indicates whether the position point z can be used as a flying spot; y_x^{(H)} indicates whether the position point x can be used as a flying spot; y_t represents the training data; and H[y_z^{(H)} | y_t, y_x^{(H)}] represents the conditional entropy

$$H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]=-\sum_{y_z^{(H)}\in\{+1,-1\}} p\!\left(y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right)\log p\!\left(y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right),$$

wherein y_z^{(H)} indicates whether the acquired position point z can be used as a flying spot; x represents a non-acquired position point in the region R_t; y_x^{(H)} indicates whether the non-acquired position point x can be used as a flying spot; and y_t represents the training data.
In one possible implementation manner, the training module is specifically configured to:
dividing the manually marked data set into a training sample set and a test sample set;
inputting a training sample set and a test sample set into a pre-constructed flying spot prediction model, and performing model training by using super parameters, a support set, pictures of position points and satellite positioning signal strength of the position points;
and calculating a loss value according to the output result of the pre-constructed flying spot prediction model and the label of the position point.
In one possible implementation manner, the flying spot prediction model predicts whether each position point can be used as a flying spot through the predictive distribution

$$p\!\left(y_*^{(H)}\,\middle|\,x_*,\,y,\,u\right)=\mathcal{N}\!\left(\mu_*,\ \sigma_*^{2}\right);$$

wherein x_* represents the predicted position point; y_*^{(H)} indicates whether the predicted position point can be used as a takeoff point, y_*^{(H)} ∈ {+1, -1}, where +1 represents that it can be used as a takeoff point and -1 represents that it cannot; y represents the manually labeled data; u represents the support set; \mu_* represents the mean; and \sigma_*^{2} represents the variance.
The unmanned aerial vehicle flying spot prediction device has the function of implementing the above unmanned aerial vehicle flying spot prediction method. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function, and the modules may be software and/or hardware.
It should be noted that, because the content of information interaction and execution process between the above devices/modules is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70, a memory 71 and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, the processor 70 implementing the steps of any of the various method embodiments described above when executing the computer program 72.
The terminal device 7 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The terminal device may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 7 and is not limiting of the terminal device 7, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 70 may be a central processing unit (Central Processing Unit, CPU) and the processor 70 may be other general purpose processors, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), an off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may in some embodiments be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. The memory 71 may in other embodiments also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing an operating system, application programs, boot loader (BootLoader), data, other programs, etc., such as program codes of the computer program. The memory 71 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, recording medium, computer Memory, read-Only Memory (ROM), random access Memory (RAM, random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. The unmanned aerial vehicle departure point prediction method is characterized by comprising the following steps of:
acquiring an unlabeled data set, wherein the unlabeled data set comprises geographic position information of at least one position point;
randomly selecting at least one position point from the unlabeled data set, and taking geographic position information of the selected position point as a support set;
Training a pre-constructed flying spot prediction model based on the unlabeled data set and the support set;
obtaining geographic position information of a position point to be predicted;
inputting the geographical position information of the position point to be predicted into a pre-trained flying spot prediction model to obtain an output result of the flying spot prediction model, wherein the output result represents whether the position point to be predicted can be used as a flying spot of an unmanned aerial vehicle;
based on the unlabeled data set and the support set, training a pre-constructed flying spot prediction model, wherein the training comprises the following steps:
manual labeling: labeling the unlabeled data set by using an active learning mode to obtain a manually labeled data set; the manually marked data set comprises geographic position information of a position point, pictures of the position point, satellite positioning signal intensity of the position point and labels of the position point, and the labels of the position point represent whether the position point can be used as a flying spot of an unmanned aerial vehicle;
training: training a pre-constructed flying spot prediction model by using the manually marked data set according to the support set and a pre-determined super parameter, and calculating a loss value;
Judging: if the loss value does not tend to be stable, after updating the super parameter by using a gradient descent algorithm, circularly executing the manual labeling step, the training step and the judging step until the loss value tends to be stable; and if the loss value tends to be stable, obtaining a flying spot prediction model after training is completed.
2. The method of claim 1, wherein labeling the unlabeled dataset using active learning to obtain a manually labeled dataset comprises:
selecting a target position point from the unlabeled data set according to the uncertainty of each position point;
acquiring a picture of the target position point, satellite positioning signal intensity of the target position point and a label of the target position point, wherein the label of the target position point represents whether the target position point can be used as a flying spot of an unmanned aerial vehicle;
and forming the manually marked data set based on the geographic position information of each target position point, the picture of the target position point, the satellite positioning signal strength of the target position point and the label of the target position point.
3. The method of claim 2, wherein selecting a target location point from the unlabeled dataset based on the uncertainty of each location point, comprises:
Calculating the conditional entropy of each position point in the unlabeled data set;
calculating expected entropy of each position point according to the conditional entropy;
and selecting a target position point according to the expected entropy.
4. A method as claimed in claim 3, wherein calculating the expected entropy of each of the location points based on the conditional entropy comprises:
calculating the expected entropy of each position point by

$$\mathbb{E}_{y_x^{(H)}}\!\left[\sum_{z\in R_t} H\!\left[y_z^{(H)}\,\middle|\,y_t,\,y_x^{(H)}\right]\right];$$

wherein x represents a position point in the region R_t that has not been manually labeled but is a candidate to be manually labeled; \mathbb{E} represents the expectation; R_t represents the region of interest; z represents a position point in the region R_t that is assumed to have been selected and manually labeled; y_z^{(H)} indicates whether the position point z can be used as a flying spot; y_x^{(H)} indicates whether the position point x can be used as a flying spot; y_t represents the training data; and H[y_z^{(H)} | y_t, y_x^{(H)}] represents the conditional entropy;
wherein, in the conditional entropy, y_z^{(H)} indicates whether the acquired position point z can be used as a flying spot; x represents a non-acquired position point in the region R_t; y_x^{(H)} indicates whether the non-acquired position point x can be used as a flying spot; and y_t represents the training data.
5. The method of claim 1, wherein training a pre-constructed departure point prediction model using the artificially labeled dataset based on the support set and the super parameter and calculating a loss value comprises:
Dividing the manually marked data set into a training sample set and a test sample set;
inputting the training sample set and the test sample set into a pre-constructed flying spot prediction model, and performing model training by using the super parameters, the support set, the pictures of the position points and the satellite positioning signal strength of the position points;
and calculating the loss value according to the output result of the pre-constructed flying spot prediction model and the label of the position point.
6. The method of any one of claims 1 to 5, wherein the departure point prediction model predicts whether each position point can serve as a departure point by
p(y_* \mid x_*, D, S) = \mathcal{N}\left( y_* ; \mu, \sigma^2 \right);
wherein x_* denotes the position point to be predicted; y_* denotes whether the predicted position point can serve as a departure point, y_* \in \{+1, -1\}, where +1 means the point can serve as a departure point and -1 means it cannot; D denotes the manually labeled data; S denotes the support set; \mu denotes the mean; and \sigma^2 denotes the variance.
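Assuming, as the mean/variance notation in claim 6 suggests, a Gaussian predictive distribution over a latent score at the query point, a sketch of turning that posterior into a {+1, -1} decision might be:

import math

def predict_departure_point(mu, sigma2, threshold=0.5):
    """Map the posterior mean/variance at a query point to a label in {+1, -1}.

    mu, sigma2: posterior mean and variance produced by the trained model
    (their computation is not shown here). The probit-style squashing below
    is a common Gaussian-process-classification choice, not taken from the patent.
    """
    p_positive = 0.5 * (1.0 + math.erf(mu / math.sqrt(2.0 * (1.0 + sigma2))))
    return +1 if p_positive >= threshold else -1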
7. An unmanned aerial vehicle departure point prediction apparatus, characterized in that it comprises:
a position point information acquisition module, configured to acquire geographic position information of a position point to be predicted;
a prediction module, configured to input the geographic position information of the position point to be predicted into a pre-trained departure point prediction model to obtain an output result of the departure point prediction model, wherein the output result indicates whether the position point to be predicted can serve as a departure point of the unmanned aerial vehicle;
a data set acquisition module, configured to acquire an unlabeled data set, wherein the unlabeled data set comprises geographic position information of at least one position point;
a support set determination module, configured to randomly select at least one position point from the unlabeled data set and take the geographic position information of the selected position point as a support set;
and a training module, configured to train a pre-constructed departure point prediction model based on the unlabeled data set and the support set;
wherein the training module is specifically configured to perform:
a manual labeling step: labeling the unlabeled data set by means of active learning to obtain a manually labeled data set, wherein the manually labeled data set comprises geographic position information of a position point, a picture of the position point, the satellite positioning signal strength of the position point and a label of the position point, and the label of the position point indicates whether the position point can serve as a departure point of an unmanned aerial vehicle;
a training step: training the pre-constructed departure point prediction model using the manually labeled data set according to the support set and predetermined hyperparameters, and calculating a loss value;
a judging step: if the loss value has not yet stabilized, updating the hyperparameters by a gradient descent algorithm and then repeating the manual labeling step, the training step and the judging step until the loss value stabilizes; if the loss value has stabilized, obtaining the trained departure point prediction model.
8. The apparatus of claim 7, wherein the training module is specifically configured to:
select target position points from the unlabeled data set according to the uncertainty of each position point;
acquire a picture of each target position point, the satellite positioning signal strength at the target position point and a label of the target position point, wherein the label of the target position point indicates whether the target position point can serve as a departure point of an unmanned aerial vehicle;
and form the manually labeled data set from the geographic position information, the picture, the satellite positioning signal strength and the label of each target position point.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 6.
CN202010101564.XA 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device Active CN112541608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010101564.XA CN112541608B (en) 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010101564.XA CN112541608B (en) 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device

Publications (2)

Publication Number Publication Date
CN112541608A CN112541608A (en) 2021-03-23
CN112541608B true CN112541608B (en) 2023-10-20

Family

ID=75013312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010101564.XA Active CN112541608B (en) 2020-02-19 2020-02-19 Unmanned aerial vehicle departure point prediction method and device

Country Status (1)

Country Link
CN (1) CN112541608B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008033235A1 (en) * 2008-07-15 2010-03-11 Astrium Gmbh Method for automatically determining a runway
CN106020189B (en) * 2016-05-24 2018-10-16 武汉科技大学 Vacant lot heterogeneous robot system paths planning method based on neighborhood constraint

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942229A (en) * 2013-01-22 2014-07-23 日电(中国)有限公司 Destination prediction device and method
CN103336863A (en) * 2013-06-24 2013-10-02 北京航空航天大学 Radar flight path observation data-based flight intention recognition method
EP3115858A1 (en) * 2014-03-07 2017-01-11 State Grid Corporation of China (SGCC) Centralized monitoring system and monitoring method for unmanned aerial vehicle to patrol power transmission line
CN107444665A (en) * 2017-07-24 2017-12-08 长春草莓科技有限公司 A kind of unmanned plane Autonomous landing method
CN108334099A (en) * 2018-01-26 2018-07-27 上海深视信息科技有限公司 A kind of efficient unmanned plane human body tracing method
CN110197475A (en) * 2018-10-31 2019-09-03 国网宁夏电力有限公司检修公司 Insulator automatic recognition system, method and application in a kind of transmission line of electricity
CN109543994A (en) * 2018-11-20 2019-03-29 广东电网有限责任公司 A kind of unmanned plane dispositions method and device
CN110209195A (en) * 2019-06-13 2019-09-06 浙江海洋大学 The tele-control system and control method of marine unmanned plane
CN110766038A (en) * 2019-09-02 2020-02-07 深圳中科保泰科技有限公司 Unsupervised landform classification model training and landform image construction method
CN110751106A (en) * 2019-10-23 2020-02-04 南京航空航天大学 Unmanned aerial vehicle target detection method and system
CN110794437A (en) * 2019-10-31 2020-02-14 深圳中科保泰科技有限公司 Satellite positioning signal strength prediction method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Detection and recognition method for low-altitude UAVs based on infrared images; Ma Qi; Sun Xiaojun; Zhang Yang; Jiang Yuchen; Journal of Projectiles, Rockets, Missiles and Guidance (Issue 03); full text *
Early warning satellite function simulation system; Yan Wenli, Jiang Zhendong; Journal of System Simulation (Issue 06); full text *
Ma Qi; Sun Xiaojun; Zhang Yang; Jiang Yuchen. Detection and recognition method for low-altitude UAVs based on infrared images. Journal of Projectiles, Rockets, Missiles and Guidance. (Issue 03), full text. *

Also Published As

Publication number Publication date
CN112541608A (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN108280477B (en) Method and apparatus for clustering images
EP3385919B1 (en) Method of processing passage record and device
CN110413905B (en) Method, device and equipment for acquiring road alignment and storage medium
CN110765882B (en) Video tag determination method, device, server and storage medium
CN110689043A (en) Vehicle fine granularity identification method and device based on multiple attention mechanism
EP3944218A1 (en) Information processing device, information processing method, and program
CN110794437B (en) Satellite positioning signal strength prediction method and device
CN113298042B (en) Remote sensing image data processing method and device, storage medium and computer equipment
Hao et al. Methodology for optimizing quadrat size in sparse vegetation surveys: A desert case study from the Tarim Basin
Song et al. Small UAV-based multi-temporal change detection for monitoring cultivated land cover changes in mountainous terrain
Kargah-Ostadi et al. Automated real-time roadway asset inventory using artificial intelligence
Li et al. Extracting high-precision vehicle motion data from unmanned aerial vehicle video captured under various weather conditions
CN112748453B (en) Road side positioning method, device, equipment and storage medium
CN104615620B (en) Map search kind identification method and device, map search method and system
CN112541608B (en) Unmanned aerial vehicle departure point prediction method and device
Chandio et al. An approach for map-matching strategy of GPS-trajectories based on the locality of road networks
Çalışkan et al. Forest road extraction from orthophoto images by convolutional neural networks
CN111104965A (en) Vehicle target identification method and device
CN114821353A (en) Radar feature matching-based crop planting area rapid extraction method and system
CN113240340B (en) Soybean planting area analysis method, device, equipment and medium based on fuzzy classification
CN114610938A (en) Remote sensing image retrieval method and device, electronic equipment and computer readable medium
CN113326877A (en) Model training method, data processing method, device, apparatus, storage medium, and program
CN112579793B (en) Model training method, POI label detection method and device
CN117710833B (en) Mapping geographic information data acquisition method and related device based on cloud computing
Fedorov Exploiting public web content to enhance environmental monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220214

Address after: 518000 2515, building 2, Huilong business center, North Station community, Minzhi street, Longhua District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Zhongke Baotai Aerospace Technology Co.,Ltd.

Address before: Room 1101-1102, building 1, Changfu Jinmao building, No.5, Shihua Road, free trade zone, Fubao street, Futian District, Shenzhen, Guangdong 518000

Applicant before: Shenzhen Zhongke Baotai Technology Co.,Ltd.

GR01 Patent grant