CN113327461B - Cooperative unmanned aerial vehicle detection method, device and equipment - Google Patents


Info

Publication number
CN113327461B
CN113327461B (application number CN202110883686.3A)
Authority
CN
China
Prior art keywords
data
target
unmanned aerial
aerial vehicle
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110883686.3A
Other languages
Chinese (zh)
Other versions
CN113327461A (en)
Inventor
王滨 (Wang Bin)
张峰 (Zhang Feng)
王星 (Wang Xing)
史治国 (Shi Zhiguo)
陈积明 (Chen Jiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202110883686.3A priority Critical patent/CN113327461B/en
Publication of CN113327461A publication Critical patent/CN113327461A/en
Application granted granted Critical
Publication of CN113327461B publication Critical patent/CN113327461B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 5/00: Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G 5/0073: Surveillance aids
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/565: Conversion or adaptation of application format or content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a cooperative unmanned aerial vehicle detection method, device, and equipment. The method includes: training an initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set into the target data normalization model to obtain normalized data corresponding to the sample data; generating a second data set from the normalized data corresponding to the plurality of sample data in the first data set; training an initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model, and extracting model parameters from the candidate model; sending the model parameters to a service center end so that the service center end generates a target unmanned aerial vehicle detection model based on the model parameters; and acquiring the target unmanned aerial vehicle detection model from the service center end and using it to detect whether an unmanned aerial vehicle is present in a target scene. Through this technical scheme, the accuracy of unmanned aerial vehicle detection can be improved and data privacy is protected.

Description

Cooperative unmanned aerial vehicle detection method, device and equipment
Technical Field
The application relates to the field of unmanned aerial vehicle detection, in particular to a cooperative unmanned aerial vehicle detection method, device and equipment.
Background
With the large-scale use of drones, some drones can disrupt normal social operations. For certain target scenes (no-fly zones such as airports and special facility areas), drones must be prohibited from flying, yet for various reasons a drone may still appear there. It is therefore necessary to detect whether a drone is present in the target scene so that it can be managed.
Because a target scene is large, a single data holding end cannot detect drones across all of its regions. A target scene therefore usually deploys multiple data holding ends, each of which independently collects data from the scene and sends it to a service center end. The service center end performs drone detection based on the data sent by all data holding ends and analyzes whether a drone is present in the target scene.
However, in this approach every data holding end must transmit all of its data to the service center end in real time, placing heavy pressure on the data communication link. Moreover, the data privacy of each data holding end is not protected, and data leakage may occur; that is, there is a potential safety hazard.
Disclosure of Invention
The application provides a cooperative unmanned aerial vehicle detection method, applied to a system including a service center end and a plurality of data holding ends. Applied at any data holding end, the method includes:
acquiring an initial data normalization model and an initial unmanned aerial vehicle detection model;
training the initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set into the target data normalization model to obtain normalized data corresponding to the sample data; wherein the target data normalization model includes at least a plurality of hidden layers, and the normalized data is the output data of a target hidden layer among the plurality of hidden layers;
generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model, and extracting model parameters from the candidate unmanned aerial vehicle detection model;
sending the model parameters to the service center end so that the service center end generates a target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding ends;
and acquiring the target unmanned aerial vehicle detection model from the service center end, and using it to detect whether an unmanned aerial vehicle is present in a target scene.
Illustratively, training the initial data normalization model based on the first data set to obtain the target data normalization model includes: adding noise to the sample data in the first data set to obtain noisy data; inputting the noisy data into the initial data normalization model to obtain denoised data; determining a loss value between the denoised data and the sample data, and adjusting the network parameters of the initial data normalization model based on the loss value to obtain an adjusted data normalization model; and determining whether the adjusted data normalization model has converged.
If not, the adjusted data normalization model is taken as the initial data normalization model, and the operation of inputting the noisy data into the initial data normalization model to obtain denoised data is performed again.
If so, the adjusted data normalization model is taken as the target data normalization model.
Illustratively, the target data normalization model sequentially includes an input layer, K first hidden layers, one second hidden layer, K third hidden layers, and an output layer, where K is a positive integer and the target hidden layer is the second hidden layer. For each first hidden layer, the length of its input data is greater than the length of its output data; for the second hidden layer, the length of its input data is greater than the length of its output data; and for each third hidden layer, the length of its input data is smaller than the length of its output data.
Illustratively, generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set includes: for each normalized data corresponding to the plurality of sample data in the first data set, determining a data type corresponding to the normalized data, the data type being an image type, an audio type, or a radio frequency type; determining a target position and an extension position based on the data type, adding the normalized data at the target position, and adding extension data at the extension position to obtain target data corresponding to the normalized data; taking the tag value of the sample data corresponding to the normalized data as the tag value of the target data, the tag value indicating whether the target data corresponds to the presence or absence of an unmanned aerial vehicle; and generating the second data set based on the target data corresponding to each normalized data and the tag value of each target data.
Illustratively, extracting model parameters from the candidate unmanned aerial vehicle detection model and sending the model parameters to the service center end includes: inputting each target data in the second data set into the candidate unmanned aerial vehicle detection model to obtain a first detection result corresponding to the target data; determining a first accuracy corresponding to the candidate unmanned aerial vehicle detection model based on the first detection result and the tag value of each target data; inputting each target data in the second data set into the initial unmanned aerial vehicle detection model to obtain a second detection result corresponding to the target data; determining a second accuracy corresponding to the initial unmanned aerial vehicle detection model based on the second detection result and the tag value of each target data; and, if the first accuracy is greater than the second accuracy, extracting model parameters from the candidate unmanned aerial vehicle detection model and sending them to the service center end.
Illustratively, the service center end generating the target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding ends includes: for each data holding end, acquiring a quality score corresponding to that data holding end, where a higher quality score indicates better performance of the candidate unmanned aerial vehicle detection model trained by that data holding end; and determining a weighting coefficient corresponding to the data holding end based on its quality score, where a higher quality score yields a larger weighting coefficient.
Target parameters are then determined based on the model parameters sent by each data holding end and the corresponding weighting coefficients, and the target unmanned aerial vehicle detection model is generated based on the target parameters.
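As an illustrative sketch of this weighted aggregation (not part of the claimed embodiments): the mapping from quality scores to weighting coefficients is an assumption here, since the text requires only that higher scores yield larger coefficients; normalizing the scores is one simple way to satisfy that.

```python
import numpy as np

def aggregate_parameters(params_list, quality_scores):
    """Combine model parameters from several data holding ends into
    target parameters, weighting each end by a coefficient derived
    from its quality score (higher score, larger coefficient).
    Normalizing the scores into weights is an illustrative choice."""
    scores = np.asarray(quality_scores, dtype=float)
    weights = scores / scores.sum()
    target = {}
    for name in params_list[0]:
        target[name] = sum(w * p[name] for w, p in zip(weights, params_list))
    return target

# Three data holding ends, each reporting one parameter tensor.
params = [{"w": np.array([1.0, 2.0])},
          {"w": np.array([3.0, 4.0])},
          {"w": np.array([5.0, 6.0])}]
agg = aggregate_parameters(params, quality_scores=[1.0, 1.0, 2.0])  # third end trusted most
```

With scores 1, 1, 2 the weights become 0.25, 0.25, 0.5, so the third end's parameters dominate the target parameters.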
Illustratively, detecting whether an unmanned aerial vehicle is present in the target scene by using the target unmanned aerial vehicle detection model includes: inputting the data to be detected of the target scene into the target data normalization model to obtain normalized data, the normalized data being the output data of the target hidden layer of the target data normalization model;
determining a target position and an extension position based on the data type corresponding to the normalized data, adding the normalized data at the target position, and adding extension data at the extension position to obtain target data;
and inputting the target data into the target unmanned aerial vehicle detection model to obtain an unmanned aerial vehicle detection result indicating whether an unmanned aerial vehicle is present in the target scene.
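The three-stage detection flow above (normalize, expand to target data, detect) can be sketched as follows; the three callables are stand-ins for the target data normalization model, the position-based expansion, and the target unmanned aerial vehicle detection model, none of which are specified concretely in this text.

```python
def detect_drone(raw_data, data_type, normalize, expand, detect):
    """End-to-end inference at a data holding end: normalize the data
    to be detected, expand it to target data by type-dependent
    positioning, and run the target drone detection model. The three
    callables are placeholders for models this text does not specify."""
    normalized = normalize(raw_data)      # output of the target hidden layer
    target = expand(normalized, data_type)
    return detect(target)                 # True: drone present in the scene

result = detect_drone(
    raw_data=[0.5, 0.5],
    data_type="image",
    normalize=lambda x: [v * 2 for v in x],        # placeholder model
    expand=lambda n, t: n + [0.0] * (2 * len(n)),  # image slot first
    detect=lambda d: sum(d) > 1.0,                 # placeholder detector
)
```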
Illustratively, the method further includes: the service center end receives the unmanned aerial vehicle detection results sent by the plurality of data holding ends, each detection result being a first value (indicating that an unmanned aerial vehicle is present in the target scene) or a second value (indicating that no unmanned aerial vehicle is present); the service center end then determines a weight value corresponding to each data holding end and determines a target detection result based on the detection result sent by each data holding end and the corresponding weight value.
if the target detection result is larger than a threshold value, determining that the unmanned aerial vehicle exists in the target scene;
and if the target detection result is not greater than the threshold value, determining that no unmanned aerial vehicle exists in the target scene.
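A minimal sketch of this weighted fusion follows. Normalizing the weighted sum by the total weight and using 0.5 as the threshold are both assumptions; the text requires only that the target detection result be compared to a threshold.

```python
def fuse_detections(results, weights, threshold=0.5):
    """Fuse per-end detection results (1: drone present, 0: absent)
    into a target detection result via a weighted sum. Normalizing by
    the total weight and the 0.5 threshold are illustrative choices."""
    score = sum(r * w for r, w in zip(results, weights)) / sum(weights)
    return score > threshold  # True: an unmanned aerial vehicle is present

# Two of three data holding ends report a drone; their combined weight wins.
present = fuse_detections([1, 1, 0], weights=[0.4, 0.4, 0.2])
```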
The application provides a cooperative unmanned aerial vehicle detection device, applied to a system including a service center end and a plurality of data holding ends. Applied at any data holding end, the device includes:
an acquisition module, used for acquiring an initial data normalization model and an initial unmanned aerial vehicle detection model;
a training module, used for training the initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set into the target data normalization model to obtain normalized data corresponding to the sample data, wherein the target data normalization model includes at least a plurality of hidden layers and the normalized data is the output data of a target hidden layer among the plurality of hidden layers; generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; and training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model;
the sending module is used for extracting model parameters from the candidate unmanned aerial vehicle detection model after obtaining the candidate unmanned aerial vehicle detection model, and sending the model parameters to the service center end so that the service center end can generate a target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding ends;
the acquisition module is further used for acquiring the target unmanned aerial vehicle detection model from the service center end and detecting whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
The application provides a data holding end, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to perform the following steps:
acquiring an initial data normalization model and an initial unmanned aerial vehicle detection model;
training the initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set into the target data normalization model to obtain normalized data corresponding to the sample data; wherein the target data normalization model includes at least a plurality of hidden layers, and the normalized data is the output data of a target hidden layer among the plurality of hidden layers;
generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model, and extracting model parameters from the candidate unmanned aerial vehicle detection model;
sending the model parameters to a service center end so that the service center end generates a target unmanned aerial vehicle detection model based on the model parameters sent by a plurality of data holding ends;
and acquiring the target unmanned aerial vehicle detection model from the service center terminal, and detecting whether the unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
According to the technical scheme, in the embodiments of the application, the data holding end normalizes data of different dimensions into a unified data format and then trains the unmanned aerial vehicle detection model issued by the service center end. The trained model parameters are sent to the service center end, which updates the unmanned aerial vehicle detection model based on those parameters and sends the updated model to each data holding end. Each data holding end then performs unmanned aerial vehicle detection based on the updated model, which improves detection capability and accuracy, can be applied to various unmanned aerial vehicle detection scenes, and improves the robustness of detection. In this manner, each data holding end sends only model parameters, not data, to the service center end, reducing the pressure on the communication link. Because no data sharing is needed, the data privacy of each data holding end is protected, data leakage is avoided, and potential safety hazards are eliminated.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them.
Fig. 1 is a schematic flowchart of a cooperative drone detection method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a cooperative drone detection method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a cooperative unmanned aerial vehicle detection apparatus according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Moreover, depending on the context, the word "if" as used may be interpreted as "upon", "when", or "in response to determining".
The embodiment of the application provides a cooperative unmanned aerial vehicle detection method, which can be applied to a system including a service center end and a plurality of data holding ends, and can be applied at any data holding end. Fig. 1 is a schematic flow diagram of the method, which may include:
step 101, obtaining an initial data specification model and an initial unmanned aerial vehicle detection model.
For example, the initial data specification model may be obtained from the service center side, or the stored initial data specification model may be obtained locally from the data holding side, which is not limited to this.
For another example, the initial drone detection model may be obtained from the service center, or the stored initial drone detection model may be obtained locally from the data holding end, which is not limited to this.
Step 102, training the initial data specification model based on the first data set to obtain a target data specification model (for convenience of distinguishing, the trained data specification model is referred to as a target data specification model).
For example, in step 102, the target data specification model may be obtained by training as follows:
step 1021, adding noise to the sample data (e.g. each sample data) in the first data set to obtain noisy data, where the noisy data is a noisy data.
Step 1022, inputting the noisy data into the initial data normalization model to obtain denoised data.
For example, after the noisy data is input into the initial data normalization model, the model processes the noisy data, and the processed data is referred to as denoised data.
Step 1023, determining a loss value between the denoised data and the sample data, and adjusting the network parameters of the initial data normalization model based on the loss value to obtain an adjusted data normalization model.
For example, a loss function may be configured in advance whose inputs are the denoised data and the sample data (for example, the distance between them) and whose output is a loss value. After the denoised data corresponding to the sample data is obtained, the sample data and the denoised data can be substituted into the loss function to obtain the loss value between them.
After the loss value is obtained, the network parameters of the initial data normalization model may be adjusted based on it; the adjustment method is not limited, and a gradient descent method may be used, for example. The adjustment objective is to make the loss value between the denoised data and the sample data smaller and smaller, so that the denoised data ultimately approaches the sample data.
Step 1024, determining whether the adjusted data normalization model has converged.
For example, if the loss value is smaller than a preset threshold (which may be configured empirically, such as a value greater than but close to 0), it is determined that the adjusted data normalization model has converged; otherwise, it has not converged.
If not, step 1025 may be performed; if so, step 1026 may be performed.
Step 1025, taking the adjusted data normalization model as the initial data normalization model and returning to the operation of inputting the noisy data into the initial data normalization model to obtain denoised data. That is, the noisy data is input into the adjusted data normalization model to obtain new denoised data, the loss value between the new denoised data and the sample data is determined, and so on.
Step 1026, taking the adjusted data normalization model as the target data normalization model. The training process of the initial data normalization model is then finished, and the trained target data normalization model is obtained.
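Steps 1021 to 1026 describe a denoising training loop: noise in, reconstruction out, loss-driven adjustment until convergence. A minimal sketch with a toy linear model follows; the noise scale, learning rate, convergence threshold, and iteration cap are all illustrative assumptions, as the text fixes none of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_denoiser(samples, lr=0.05, threshold=1e-3, max_iters=2000):
    """Toy linear data normalization model trained by denoising:
    add noise (step 1021), reconstruct (step 1022), compute the loss
    and adjust parameters by gradient descent (step 1023), and stop
    when the loss falls below a preset threshold (step 1024).
    All hyperparameters here are illustrative assumptions."""
    dim = samples.shape[1]
    W = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))  # initial model
    loss = np.inf
    for _ in range(max_iters):
        noisy = samples + 0.1 * rng.standard_normal(samples.shape)
        denoised = noisy @ W
        loss = np.mean((denoised - samples) ** 2)
        if loss < threshold:   # converged: adjusted model becomes the target model
            break
        grad = 2 * noisy.T @ (denoised - samples) / len(samples)
        W -= lr * grad         # adjust network parameters
    return W, loss

samples = rng.standard_normal((32, 4))
model, final_loss = train_denoiser(samples)
```

With a noise scale of 0.1 the reconstruction loss cannot reach zero, so in practice the loop runs until the loss settles near the noise floor; a real implementation would use a deep model rather than a single weight matrix.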
In summary, the target data normalization model can be obtained; step 102 is completed, and the subsequent steps are executed.
Step 103, inputting the sample data in the first data set into the target data normalization model to obtain the normalized data corresponding to the sample data. For example, the target data normalization model may include at least a plurality of hidden layers, and the normalized data may be the output data of a target hidden layer among the plurality of hidden layers.
In a possible embodiment, the target data normalization model sequentially includes an input layer, K first hidden layers, one second hidden layer, K third hidden layers, and an output layer, where K may be a positive integer. On this basis, the target hidden layer may be the second hidden layer; that is, the normalized data may be the output data of the second hidden layer. For example, for each sample data in the first data set, the sample data is input into the input layer; the output data of the input layer is input into the first of the first hidden layers, the output of each first hidden layer is input into the next, and so on, until the output data of the last first hidden layer is input into the second hidden layer. The second hidden layer processes this data, and its output data is the normalized data.
For example, for each first hidden layer, the length of its input data is greater than the length of its output data; for the second hidden layer, the length of its input data is greater than the length of its output data; and for each third hidden layer, the length of its input data is smaller than the length of its output data.
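These length constraints form an hourglass: K contracting layers, a bottleneck (the target hidden layer whose output is the normalized data), then K expanding layers restoring the input length. The sketch below lays out such a schedule; the geometric shrink/grow step and the concrete sizes are illustrative assumptions.

```python
def hourglass_layer_sizes(input_len, k, bottleneck_len):
    """Per-layer output lengths of the data normalization model: input
    layer, K first hidden layers (each shrinking), one second hidden
    layer (the bottleneck whose output is the normalized data), K third
    hidden layers (each growing), and an output layer restoring
    input_len. The evenly spaced shrink/grow schedule is an assumption."""
    step = (input_len - bottleneck_len) // (k + 1)
    shrink = [input_len - step * (i + 1) for i in range(k)]  # first hidden layers
    grow = shrink[::-1]                                      # third hidden layers
    return [input_len] + shrink + [bottleneck_len] + grow + [input_len]

sizes = hourglass_layer_sizes(input_len=128, k=3, bottleneck_len=32)
```

With input_len=128, K=3, and a bottleneck of 32, this yields layer lengths 128, 104, 80, 56, 32, 56, 80, 104, 128, satisfying the shrinking and growing constraints above.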
Step 104, generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set. For example, in step 104, the second data set may be generated as follows:
step 1041, determining, for each normalized data corresponding to a plurality of sample data in the first data set, a data type corresponding to the normalized data, where the data type may be an image type (indicating that the sample data corresponding to the normalized data is image data), an audio type (indicating that the sample data corresponding to the normalized data is audio data), or a radio frequency type (indicating that the sample data corresponding to the normalized data is radio frequency data). Of course, the above are just a few examples of the data type, and no limitation is made to this data type.
Step 1042, determining a target position and an extension position based on the data type, adding the normalized data at the target position, and adding extension data (which may be configured empirically, for example all 0s or all 1s; this is not limited) at the extension position to obtain the target data corresponding to the normalized data.
For example, assume there are three data types in total (image, audio, and radio frequency) and the data length of the normalized data is M bits; the target data is then 3M bits. The sequential relationship of the data types can be configured in advance, for example: image type, audio type, radio frequency type.
On this basis, if the data type is the image type, the first M bits of the target data are the target position and the last 2M bits are the extension position: normalized data of length M bits is added at the target position, and all 0s of length 2M bits are added at the extension position, thereby obtaining the target data.
If the data type is the audio type, the first M bits of the target data are an extension position, the middle M bits are the target position, and the last M bits are an extension position: normalized data of length M bits is added at the target position and all 0s are added at the extension positions, thereby obtaining the target data.
If the data type is the radio frequency type, the first 2M bits of the target data are the extension position and the last M bits are the target position: all 0s are added at the extension position and the normalized data is added at the target position.
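This position layout can be sketched as follows, assuming the preconfigured type order image, audio, radio frequency, and all-0 extension data (the text also allows all 1s):

```python
import numpy as np

TYPE_ORDER = ["image", "audio", "rf"]  # sequential relationship configured in advance

def to_target_data(normalized, data_type):
    """Place the M-bit normalized data in the slot assigned to its data
    type and zero-fill the remaining 2M extension positions, yielding
    3M-bit target data. All-0 extension data is one allowed option."""
    m = len(normalized)
    slot = TYPE_ORDER.index(data_type)
    target = np.zeros(3 * m, dtype=normalized.dtype)
    target[slot * m:(slot + 1) * m] = normalized
    return target

t = to_target_data(np.array([1.0, 2.0]), "audio")
```

For M = 2, audio-type normalized data [1.0, 2.0] becomes target data [0, 0, 1, 2, 0, 0]: the middle M bits hold the normalized data and the remaining 2M bits are extension positions.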
Step 1043, taking a tag value of sample data corresponding to the normalized data as a tag value of the target data, where the tag value is used to indicate whether the target data corresponds to the existence of the unmanned aerial vehicle or the absence of the unmanned aerial vehicle.
For example, for each sample data in the first data set, the sample data has a corresponding tag value, and the tag value may be a first value or a second value, if the tag value is the first value, it indicates that the sample data corresponds to the presence of the unmanned aerial vehicle, and if the tag value is the second value, it indicates that the sample data corresponds to the absence of the unmanned aerial vehicle. On this basis, after the sample data is input to a target data standard model, standard data is obtained, and target data corresponding to the standard data is obtained, a tag value corresponding to the sample data can be used as a tag value corresponding to the target data, the tag value can be a first value or a second value, if the tag value is the first value, it indicates that an unmanned aerial vehicle exists corresponding to the target data, and if the tag value is the second value, it indicates that the unmanned aerial vehicle does not exist corresponding to the target data.
Step 1044, generating a second data set based on the target data corresponding to the normalized data and the tag value of each target data; that is, the second data set includes the target data and the tag values of the target data.
In summary, a second data set can be obtained, step 104 is completed, and the subsequent steps are performed.
Step 105, training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model. For example, the second data set may include a plurality of target data and the tag value of each target data; the initial unmanned aerial vehicle detection model may be trained based on the second data set, with no limitation on the training process, and the trained model is referred to as a candidate unmanned aerial vehicle detection model.
Step 106, extracting model parameters from the candidate unmanned aerial vehicle detection model and sending the model parameters to the service center end. Because multiple data holding ends each send their model parameters to the service center end, the service center end generates the target unmanned aerial vehicle detection model based on the model parameters sent by all the data holding ends.
For each data holding end, after obtaining the candidate unmanned aerial vehicle detection model, the data holding end may input each target data in the second data set to the candidate unmanned aerial vehicle detection model to obtain a first detection result corresponding to that target data. Based on the first detection result and the tag value of each target data, a first accuracy corresponding to the candidate unmanned aerial vehicle detection model is determined (e.g., the ratio of the number of correct first detection results to the total number of first detection results). Similarly, each target data in the second data set is input to the initial unmanned aerial vehicle detection model (i.e., the unmanned aerial vehicle detection model that has not yet been trained) to obtain a second detection result corresponding to that target data, and a second accuracy corresponding to the initial unmanned aerial vehicle detection model is determined based on the second detection result and the tag value of each target data.
If the first accuracy is greater than the second accuracy, the model parameters can be extracted from the candidate unmanned aerial vehicle detection model, and the model parameters are sent to the service center side. Or if the first accuracy is not greater than the second accuracy, the model parameters in the candidate unmanned aerial vehicle detection model do not need to be sent to the service center side.
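The accuracy comparison that gates the parameter upload can be sketched as follows (hypothetical helper names; `predict` stands in for a model's classification function mapping target data to a tag value):

```python
# A holder only uploads its trained parameters when the candidate model beats
# the untrained initial model on the holder's own labelled second data set.

def accuracy(predict, dataset):
    """Fraction of (target_data, tag_value) pairs the model classifies correctly."""
    correct = sum(1 for x, tag in dataset if predict(x) == tag)
    return correct / len(dataset)

def should_upload(candidate_predict, initial_predict, dataset):
    """Return True if the candidate model's parameters should be sent."""
    first_accuracy = accuracy(candidate_predict, dataset)   # candidate model
    second_accuracy = accuracy(initial_predict, dataset)    # untrained initial model
    return first_accuracy > second_accuracy
```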
For example, the service center may generate the target drone detection model by: and for each data holding end, acquiring a quality score corresponding to the data holding end, wherein the higher the quality score is, the better the performance of the candidate unmanned aerial vehicle detection model trained by the data holding end is. Then, the weighting coefficient corresponding to the data holding end is determined based on the quality score corresponding to the data holding end, and the higher the quality score is, the larger the weighting coefficient corresponding to the data holding end is. Then, the target parameter is determined based on the model parameter sent by each data holding terminal and the weighting coefficient corresponding to each data holding terminal, that is, the target parameter is obtained by performing weighting operation based on the model parameter and the weighting coefficient. Then, a target drone detection model is generated based on the target parameters, for example, the target parameters are used to replace model parameters in the initial drone detection model (i.e., the initial drone detection model in step 101), so as to obtain the target drone detection model.
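The quality-score-weighted parameter fusion described above can be sketched as follows (a minimal illustration; normalizing the quality scores into weighting coefficients that sum to 1 is an assumption, since the patent only requires that a higher quality score yield a larger weighting coefficient):

```python
# The target parameters are the weighted sum of the model parameters uploaded
# by each data holding end, weighted by coefficients derived from quality scores.

def fuse_parameters(holder_params, quality_scores):
    """holder_params: list of equal-length parameter vectors, one per holder."""
    total = sum(quality_scores)
    weights = [s / total for s in quality_scores]  # higher score -> larger weight
    dim = len(holder_params[0])
    target = [0.0] * dim
    for params, w in zip(holder_params, weights):
        for i in range(dim):
            target[i] += w * params[i]
    return target
```

The resulting target parameters would then replace the model parameters in the initial unmanned aerial vehicle detection model to produce the target model.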
Step 107, acquiring the target unmanned aerial vehicle detection model from the service center end, and detecting whether an unmanned aerial vehicle exists in the target scene by using the target unmanned aerial vehicle detection model.
For example, after obtaining the target unmanned aerial vehicle detection model, the service center side may send the target unmanned aerial vehicle detection model to each data holding side, that is, the data holding side may obtain the target unmanned aerial vehicle detection model from the service center side, and after obtaining the target unmanned aerial vehicle detection model, the data holding side may detect whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
For each data holding end, during the unmanned aerial vehicle detection process, whether an unmanned aerial vehicle exists in the target scene can be detected based on the target unmanned aerial vehicle detection model, for example, by the following steps:
Step 1071, inputting the to-be-detected data of the target scene into the target data standard model to obtain normalized data, where the normalized data is the output data of a target hidden layer of the target data standard model.
For example, to-be-detected data of the target scene, such as image data, audio data, or radio frequency data, may be obtained, and the to-be-detected data is input to the target data specification model to obtain normalized data corresponding to the to-be-detected data; this is similar to step 103 and is not repeated here.
Step 1072, determining a target position and an extended position based on the data type corresponding to the normalized data, adding the normalized data to the target position, and adding the extended data to the extended position to obtain the target data.
For example, after obtaining the normalized data, the normalized data may be converted into target data corresponding to the data to be detected, and the implementation process may refer to step 1041 and step 1042, which is not described repeatedly herein.
Step 1073, inputting the target data corresponding to the to-be-detected data into the target unmanned aerial vehicle detection model to obtain an unmanned aerial vehicle detection result, which indicates whether an unmanned aerial vehicle exists in the target scene. For example, after the target data is input, the target unmanned aerial vehicle detection model processes it (the processing is not limited here) to produce an unmanned aerial vehicle detection result, which may be a first value or a second value: a first value (such as 1) indicates that an unmanned aerial vehicle exists in the target scene, and a second value (such as 0) indicates that no unmanned aerial vehicle exists in the target scene.
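Steps 1071 to 1073 at a single data holding end can be sketched as follows (all three callables are hypothetical stand-ins for the trained target data specification model, the expansion step, and the target unmanned aerial vehicle detection model; the 0.5 decision boundary on the model's score is an assumption):

```python
FIRST_VALUE, SECOND_VALUE = 1, 0  # drone present / drone absent

def detect_drone(raw_sample, data_type, norm_model, expand, detect_model):
    """Run the three-step detection pipeline at one data holding end."""
    norm_data = norm_model(raw_sample)          # step 1071: hidden-layer output
    target_data = expand(norm_data, data_type)  # step 1072: pad to target form
    score = detect_model(target_data)           # step 1073: query detection model
    return FIRST_VALUE if score >= 0.5 else SECOND_VALUE
```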
In a possible implementation manner, for each data holding end, after obtaining the unmanned aerial vehicle detection result, the data holding end may further send the unmanned aerial vehicle detection result to the service center end, so that the service center end determines the target detection result based on the unmanned aerial vehicle detection results sent by the multiple data holding ends.
For example, the service center end can receive the unmanned aerial vehicle detection results sent by the multiple data holding ends; each detection result may be a first value or a second value, where a first value indicates that an unmanned aerial vehicle exists in the target scene and a second value indicates that it does not. A weight value corresponding to each data holding end is determined, and a target detection result is determined based on the unmanned aerial vehicle detection result sent by each data holding end and the weight value corresponding to each data holding end, for example, through a weighting operation. On this basis, if the target detection result is greater than a threshold, it is determined that an unmanned aerial vehicle exists in the target scene; if the target detection result is not greater than the threshold, it is determined that no unmanned aerial vehicle exists in the target scene.
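The weighted fusion of per-holder detection results at the service center end can be sketched as:

```python
# Weighted vote over first/second values (1 = drone, 0 = no drone), compared
# against a threshold to reach the final target detection result.

def fuse_detections(results, weights, threshold):
    """results: per-holder detection values; returns the final decision."""
    score = sum(r * w for r, w in zip(results, weights))
    return score > threshold  # True: drone exists in the target scene
```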
In the above embodiment, the unmanned aerial vehicle detection is taken as an example, and in practical application, in addition to detecting whether an unmanned aerial vehicle exists in a target scene, other types of objects may be detected, as long as the service center side and the plurality of data holding sides jointly realize cooperative detection. For example, in the intelligent transportation field, the plurality of data holding terminals may be a plurality of cameras, the unmanned aerial vehicle detection model may be replaced with a vehicle detection model (the structure of the vehicle detection model is the same as or different from that of the unmanned aerial vehicle detection model), and the data specification model is kept unchanged, on this basis, whether a vehicle exists in the target scene may be detected by using the embodiment.
For another example, in the robot field, the plurality of data holding terminals may be a plurality of robots, the unmanned aerial vehicle detection model may be replaced with an article detection model, and the data specification model is kept unchanged, on the basis, whether articles (express packages, barcodes, and the like) exist in the target scene may be detected by using the embodiment.
For example, in the field of ship recognition, the plurality of data holding terminals may be a plurality of cameras, the unmanned aerial vehicle detection model may be replaced with a ship detection model, and the data specification model is kept unchanged.
For another example, in the field of forest fire recognition, the plurality of data holding terminals may be a plurality of cameras, the unmanned aerial vehicle detection model may be replaced with a forest fire detection model, and the data normative model is kept unchanged.
Of course, the above are only a few detection examples, and the present invention is not limited thereto, and any type of object may be detected by using the above embodiments, and the unmanned aerial vehicle detection model is replaced with a corresponding type of object detection model, for example, the unmanned aerial vehicle detection model is replaced with a license plate detection model (for detecting a license plate), a vehicle type detection model (for detecting a vehicle type), a fingerprint detection model (for detecting a fingerprint), a face detection model (for detecting a face), and the like, and the type of the detection model is not limited.
According to the technical scheme, in the embodiment of the application, each data holding end normalizes data of different dimensions into a unified data format, trains the unmanned aerial vehicle detection model issued by the service center end, and sends the trained model parameters to the service center end. The service center end updates the unmanned aerial vehicle detection model based on these model parameters and sends the updated model to each data holding end, which then performs unmanned aerial vehicle detection with it. This improves the unmanned aerial vehicle detection capability and accuracy, can be applied to various unmanned aerial vehicle detection scenarios, and improves the robustness of unmanned aerial vehicle detection. In this manner, each data holding end sends only model parameters, not data, to the service center end, which reduces the pressure on the communication link, avoids the need for data sharing, protects the data privacy of each data holding end, and prevents data leakage and potential security risks.
The following describes the cooperative unmanned aerial vehicle detection method in detail with reference to a specific application scenario.
In order to detect whether the unmanned aerial vehicle exists in the target scene, an unmanned aerial vehicle detection method based on images, an unmanned aerial vehicle detection method based on audio or an unmanned aerial vehicle detection method based on radio frequency can be adopted. For example, a plurality of data holding terminals are deployed in a target scene, and for each data holding terminal, the data holding terminal may be a terminal device for acquiring image data, a terminal device for acquiring audio data, or a terminal device for acquiring radio frequency data. On the basis, each data holding end independently collects data of a target scene and sends the data to the service center end. And the service center side carries out unmanned aerial vehicle detection based on the data sent by all the data holding sides and analyzes whether the unmanned aerial vehicle exists in the target scene.
However, each data holding end needs to send all of its data to the service center end in real time, which puts heavy pressure on the data communication link. The data privacy of each data holding end cannot be protected, data leakage at a data holding end may occur, and potential security risks exist. Moreover, data of different types are not interoperable, results cannot be reused, and multi-type data cannot be effectively fused; for example, it is impossible to analyze whether a drone is present based on both image data and audio data. Different interference factors also reduce detection accuracy: birds and lighting differences reduce the accuracy of image-based detection, complex radio frequency environments reduce the accuracy of radio-frequency-based detection, and environments with numerous and noisy sound sources reduce the accuracy of audio-based detection.
In view of the above problems, the embodiment of the present application provides a cooperative unmanned aerial vehicle detection method that combines detection means of different dimensions without requiring the data holding ends to share data. That is, unmanned aerial vehicle detection can be performed based on image data, audio data, and radio frequency data together, protecting the privacy of the data holding ends while also improving the accuracy of unmanned aerial vehicle detection.
In the embodiment of the application, each data holding end normalizes data of different dimensions into a unified data format, trains the unmanned aerial vehicle detection model issued by the service center end, and sends the trained model parameters to the service center end. The service center end updates the unmanned aerial vehicle detection model by fusing the model parameters of all data holding ends through a parameter fusion method and iterates until the test results of the detection model stabilize. The updated model is finally sent to the data holding ends, which perform unmanned aerial vehicle detection with it, improving both the unmanned aerial vehicle detection capability and the accuracy of unmanned aerial vehicle detection.
In the embodiment of the application, in order to ensure the privacy of each data holding end, each data holding end does not upload original data, and only uploads model parameters. The raw data of each data holding terminal includes, but is not limited to, tagged image data, tagged audio data, and tagged radio frequency data, as well as image data, audio data, and radio frequency data collected in real time. And each data holding end carries out data standardization on own original data, trains the unmanned aerial vehicle detection model with a unified structure, and uploads model parameters of the trained unmanned aerial vehicle detection model.
In the embodiment of the application, to ensure that different detection methods can be combined uniformly, data normalization and unified training are performed. Although the data types of the different detection methods differ, the data carries the same underlying information, namely whether an unmanned aerial vehicle exists. This information can therefore be extracted through data normalization so that a single unmanned aerial vehicle detection model can be trained uniformly, effectively combining the different detection methods (such as the image-based, audio-based, and radio-frequency-based unmanned aerial vehicle detection methods).
The embodiment of the application provides a cooperative unmanned aerial vehicle detection method which can comprise a cooperative training process and a cooperative detection process. In the collaborative training process, a target data standard model and a candidate unmanned aerial vehicle detection model can be obtained through training for each data holding end, and a target unmanned aerial vehicle detection model can be obtained through training for a service center end. In the cooperative detection process, for each data holding end, unmanned detection can be performed based on a target data specification model and a target unmanned detection model.
For example, in the cooperative training process, each data holding end needs to train only if its held data differs from the others'; when several data holding ends hold the same data, only one of them needs to train. For instance, if the image data sets held by data holding end A and data holding end B differ, both A and B need to participate in the cooperative training; if they are the same, only A needs to participate, and B directly uses A's training result. Likewise, if data holding end C holds image data and data holding end D holds audio data, both C and D need to participate in the cooperative training.
Referring to fig. 2, a schematic flow chart of a cooperative drone detection method is shown, where the method may include:
step 201, the service center side obtains a data specification model (for distinguishing convenience, the data specification model may be recorded as an initial data specification model) and an unmanned aerial vehicle detection model (for distinguishing convenience, the unmanned aerial vehicle detection model may be recorded as an initial unmanned aerial vehicle detection model), sends the initial data specification model to each data holding side, and sends the initial unmanned aerial vehicle detection model to each data holding side.
For example, the service center may pre-configure an initial data specification Model, without limitation to the structure of the initial data specification Model, and the initial data specification Model may be denoted as Model _ Norm.
The role of the initial data specification model may include, but is not limited to: extracting the same-dimension information from data in different formats (such as image data, audio data, and radio frequency data), namely whether an unmanned aerial vehicle is present; and normalizing data in different formats into the same representation so that it can be input into the unmanned aerial vehicle detection model.
The initial data specification model may be a network model constructed by a principal component analysis method, or may be a network model constructed by a self-encoder, and the structure of the initial data specification model is not limited. For example, the initial data specification model is exemplified as a stacked denoising self-encoder, the stacked denoising self-encoder is a neural network structure, the neural network is trained by comparing the difference between input data and output data, and the stacked denoising self-encoder has the characteristics of stronger noise resistance, stronger robustness of model training results and the like.
The initial data specification model may include an input layer, K first hidden layers, 1 second hidden layer, K third hidden layers, and an output layer in this order, that is, there are odd number of hidden layers, and K is a positive integer. For each first hidden layer, the length of the input data of the first hidden layer is greater than the length of the output data of the first hidden layer. For the second hidden layer, the length of the input data of the second hidden layer is greater than the length of the output data of the second hidden layer. For each third hidden layer, the length of the input data of the third hidden layer is smaller than the length of the output data of the third hidden layer. Subsequently, taking K as 2 as an example, the initial data specification model sequentially includes an input layer, a hidden layer 1, a hidden layer 2, a hidden layer 3, a hidden layer 4, a hidden layer 5, and an output layer, and certainly, the value of K may also be 1, 3, 4, and the like, which is not limited to this. In this application scenario, hidden layers 1 and 2 are first hidden layers, hidden layer 3 is a second hidden layer, and hidden layers 4 and 5 are third hidden layers.
The number of neurons in each layer in the initial data specification model may be: [ M1, M2, M3, M4, M5, M6, M7], that is, the number of neurons in the input layer is M1, the number of neurons in the hidden layer 1 is M2, the number of neurons in the hidden layer 2 is M3, the number of neurons in the hidden layer 3 is M4, the number of neurons in the hidden layer 4 is M5, the number of neurons in the hidden layer 5 is M6, and the number of neurons in the output layer is M7.
Among the above neuron numbers of each layer, M1 is greater than M2, M2 is greater than M3, M3 is greater than M4, M4 is less than M5, M5 is less than M6, and M6 is less than M7, for example, the neuron numbers of each layer in the initial data specification model are: [65536, 256, 128, 96, 128, 256, 65536], although the above values are merely examples.
In the initial data normative model of the data holding ends of different data types, the number of neurons in other layers may be different or the same, except that the number of neurons in the hidden layer 3 is the same. For example, the number of neurons in the hidden layer 3 of the initial data normative model is 96 for the data holding side of the image data, the data holding side of the audio data, and the data holding side of the radio frequency data. However, the number of neurons may be different for the input layer, the hidden layer 1, the hidden layer 2, the hidden layer 4, the hidden layer 5, and the output layer.
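The symmetric encoder-decoder layout described above can be sketched with scaled-down layer widths (random, untrained weights; the tanh nonlinearity is an illustrative assumption, since the patent does not fix the activation function):

```python
import numpy as np

# Scaled-down sketch of the stacked denoising autoencoder layout: symmetric
# layer widths [M1..M7] with the narrow second hidden layer (hidden layer 3)
# in the middle. Real widths such as [65536, 256, 128, 96, 128, 256, 65536]
# are shrunk here for illustration.

rng = np.random.default_rng(0)
sizes = [16, 8, 4, 3, 4, 8, 16]  # input, h1, h2, h3 (middle), h4, h5, output
weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x, upto=None):
    """Run through all weight matrices, or stop after the first `upto` of them."""
    h = np.asarray(x, dtype=float)
    for w in weights[:upto]:
        h = np.tanh(h @ w)  # simple nonlinearity for the sketch
    return h
```

Stopping after three weight matrices yields the middle hidden layer's output, which is what the normalization step later extracts as the normalized data.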
For example, the service center may pre-configure an initial unmanned aerial vehicle detection Model, and the structure of the initial unmanned aerial vehicle detection Model is not limited, and the initial unmanned aerial vehicle detection Model may be denoted as Model _ Detect.
The role of the initial drone detection model may include, but is not limited to: judging whether an unmanned aerial vehicle exists according to the input data. The initial unmanned aerial vehicle detection model can be a network model constructed using logistic regression, a deep neural network, a convolutional neural network, or a recurrent neural network.
The initial unmanned aerial vehicle detection model can sequentially comprise an input layer, a hidden layer 1, a hidden layer 2, a hidden layer 3 and an output layer, the number of the hidden layers is not limited, and the number of the hidden layers can be configured at will.
The number of neurons in each layer in the initial unmanned aerial vehicle detection model may be: [ N1, N2, N3, N4, N5], that is, the number of neurons in the input layer is N1, the number of neurons in the hidden layer 1 is N2, the number of neurons in the hidden layer 2 is N3, the number of neurons in the hidden layer 3 is N4, and the number of neurons in the output layer is N5.
N1 may be M4 × p, where p represents the total number of data types, and if there are 3 data types, N1 is M4 × 3, for example, when M4 is 96, N1 is 288 (96 × 3), N5 may be 1, and N2, N3, and N4 may be arbitrarily set, which is not limited. For example, the number of neurons in each layer in the initial unmanned aerial vehicle detection model is: [288 (96 × 3), 1024, 1024, 1024, 1], although the above values are merely examples.
In the initial unmanned aerial vehicle detection models of the data holding ends with different data types, the number of neurons in each layer may be different or the same, and the following description will be given by taking the example that the number of neurons in each layer is the same.
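The detection model's layer widths can be sketched as follows (untrained random weights; the ReLU and sigmoid choices are illustrative assumptions, since the patent does not fix the activation functions):

```python
import numpy as np

# Sketch of the detection model layout: the input width N1 equals the
# normalized-code width M4 times the number of data types p (96 * 3 = 288),
# and the single output neuron scores drone presence.

M4, p = 96, 3
sizes = [M4 * p, 1024, 1024, 1024, 1]  # [N1, N2, N3, N4, N5]
rng = np.random.default_rng(1)
weights = [rng.standard_normal((a, b)) * 0.01
           for a, b in zip(sizes[:-1], sizes[1:])]

def detect_score(target_data):
    """Return a presence score in (0, 1) for one 288-length target vector."""
    h = np.asarray(target_data, dtype=float)
    for w in weights[:-1]:
        h = np.maximum(h @ w, 0.0)  # ReLU hidden layers (an assumption)
    return 1.0 / (1.0 + np.exp(-(h @ weights[-1])[0]))  # sigmoid output
```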
Step 202, aiming at each data holding end, the data holding end trains the initial data standard model based on the sample data in the first data set to obtain a trained target data standard model, and data standard is carried out on the sample data in the first data set based on the target data standard model to obtain target data.
For example, the data holder may construct a first data set, which may include sample data (e.g., a plurality of sample data) and a tag value corresponding to each sample data. The sample data may be image data, audio data, or radio frequency data, and the tag value may be a first value or a second value. If the tag value is a first value (e.g., 1), it indicates that the sample data corresponds to the existence of the unmanned aerial vehicle, and if the tag value is a second value (e.g., 0), it indicates that the sample data corresponds to the nonexistence of the unmanned aerial vehicle.
In one possible implementation, step 202 may be implemented by:
Step 2021, for each sample data in the first data set (hereinafter taking image Data_Source as an example), the image data is processed into an array Data_Input_Norm with a length of M1. For example, since the number of neurons in each layer of the initial data specification model is [M1, M2, M3, M4, M5, M6, M7] in sequence, the image data is processed into an array Data_Input_Norm of length M1; for example, when M1 is 65536, the image data can be processed into an array Data_Input_Norm of length 65536.
Step 2022, adding Noise to the array Data _ Input _ Norm corresponding to each sample Data to obtain noisy Data, for example, adding Noise to each position of the array Data _ Input _ Norm to obtain noisy Data _ Input _ Norm _ Noise.
For example, the noise may be a value configured empirically, or may be noise generated based on characteristic information (such as at least one of an IP address, a MAC address, and a sequence number) of the data holding end, and the value of the noise is not limited. For example, random number seed may be generated based on the IP address, MAC address, and serial number of the Data holder, Noise may be determined according to the random number seed, and then Noise may be added to the array Data _ Input _ Norm to obtain noisy Data _ Input _ Norm _ Noise.
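The seed derivation and noise addition of step 2022 can be sketched as follows (the SHA-256-based seed and the Gaussian noise are illustrative assumptions; the patent only requires that a random number seed be generated from the holder's IP address, MAC address, and serial number, and that noise be added at every position of the array):

```python
import hashlib
import random

def holder_seed(ip, mac, serial):
    """Derive a deterministic 64-bit seed from holder characteristics."""
    digest = hashlib.sha256(("%s|%s|%s" % (ip, mac, serial)).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def add_noise(data_input_norm, seed, scale=0.01):
    """Add small zero-mean noise at every position of the array."""
    noise_rng = random.Random(seed)
    return [v + noise_rng.gauss(0.0, scale) for v in data_input_norm]
```

Deriving the seed from per-holder characteristics makes each holder's noise pattern distinct yet reproducible on that holder.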
Step 2023, inputting each noisy data to the initial data normative model based on the noisy data corresponding to each sample data in the first data set, i.e. a plurality of noisy data, to obtain the de-noised data.
For example, for each noisy data, the noisy data of length M1 is input to the input layer of the initial data specification model; the input layer processes it to obtain output data of length M1, which is input to hidden layer 1. Hidden layer 1 processes the data to obtain output data of length M2, which is input to hidden layer 2, and so on, until output data of length M6 is input to the output layer; the output layer processes the data to obtain denoised data of length M7, namely the denoised data Data_Output_Norm_Noise corresponding to the noisy data.
Step 2024, by comparing the difference (i.e., the loss value) between the denoised Data_Output_Norm_Noise corresponding to the sample data and the Data_Input_Norm corresponding to the sample data, the network parameters of the initial data specification model are continuously adjusted. When the difference between Data_Output_Norm_Noise and Data_Input_Norm stabilizes (i.e., the loss value is smaller than the preset threshold), the adjustment of the network parameters of the initial data specification model can be stopped, and the trained target data specification model is obtained.
For example, the loss values of Data _ Output _ Norm _ Noise and Data _ Input _ Norm are calculated, and the network parameters of the initial Data normalization model are adjusted based on the loss values to obtain an adjusted Data normalization model, wherein the adjustment target of the network parameters is to make the loss values of Data _ Output _ Norm _ Noise and Data _ Input _ Norm smaller and smaller, and finally make Data _ Output _ Norm _ Noise and Data _ Input _ Norm approach. It is determined whether the adjusted data specification model has converged. And if not, taking the adjusted data specification model as an initial data specification model, and returning to execute the operation of inputting each noisy data to the initial data specification model to obtain the de-noised data. If so, taking the adjusted data standard Model as a target data standard Model, marking as Model _ Norm _ Final, and storing the trained target data standard Model.
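The stopping rule above can be sketched as follows (`train_step` is a hypothetical callable that performs one parameter update and returns the current denoised output; mean squared error is an assumed choice of loss):

```python
import numpy as np

def train_until_stable(train_step, data_input, threshold, max_iters=1000):
    """Iterate until the reconstruction loss falls below `threshold`."""
    data_input = np.asarray(data_input, dtype=float)
    loss = float("inf")
    for i in range(max_iters):
        denoised = train_step(data_input)
        loss = float(np.mean((denoised - data_input) ** 2))  # MSE loss value
        if loss < threshold:  # the difference has stabilized
            return i + 1, loss
    return max_iters, loss
```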
Step 2025, based on the target data normative Model _ Norm _ Final, inputting each sample data in the first data set to the target data normative Model, and obtaining the normative data corresponding to the sample data.
For example, the target data specification model sequentially includes an input layer, K first hidden layers, 1 second hidden layer, K third hidden layers, and an output layer, and for each first hidden layer, the length of input data of the first hidden layer is greater than the length of output data of the first hidden layer; for the second hidden layer, the length of the input data of the second hidden layer is greater than the length of the output data of the second hidden layer; for each third hidden layer, the length of the input data of the third hidden layer is smaller than the length of the output data of the third hidden layer.
The structure of the target data specification model may be the same as the structure of the initial data specification model, e.g., the target data specification model comprises, in order, an input layer, a hidden layer 1, a hidden layer 2, a hidden layer 3, a hidden layer 4, a hidden layer 5, and an output layer. For example, the number of neurons in each layer in the target data specification model may be, in turn: [ M1, M2, M3, M4, M5, M6, M7], such as [65536, 256, 128, 96, 128, 256, 65536 ].
For the first data set, each sample data may be converted into an array Data_Input_Norm of length M1, resulting in a new data set; assuming the total number of sample data is N, the new data set is Data_Input_Norm = [Data_Input_Norm_1, Data_Input_Norm_2, …, Data_Input_Norm_N]. Then each array Data_Input_Norm in the new data set is input into the target data normalization model Model_Norm_Final, and the output data of the target hidden layer among the plurality of hidden layers is taken as the normalized data. For example, the target hidden layer is the hidden layer at the middle position (i.e., the second hidden layer, namely hidden layer 3), so the output data of hidden layer 3 can be extracted as the normalized data; its length may be M4, e.g., the normalized data may be an array of length 96.
The N arrays Data_Input_Norm are input into the target data normalization model Model_Norm_Final to obtain N pieces of normalized data, denoted DataSet_Norm = [Data_Norm_1, Data_Norm_2, …, Data_Norm_N], where each piece of normalized data is an array of length 96.
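To make the data flow concrete, the following minimal sketch uses untrained random weights standing in for Model_Norm_Final, with layer widths scaled down from the patent's [65536, 256, 128, 96, 128, 256, 65536] example for brevity; the function name and the tanh nonlinearity are illustrative assumptions. It shows how the output of the middle hidden layer (length M4 = 96) is extracted as the normalized data:

```python
import numpy as np

# Scaled-down layer widths for illustration; the patent's example uses
# [65536, 256, 128, 96, 128, 256, 65536].
layer_sizes = [1024, 256, 128, 96, 128, 256, 1024]

rng = np.random.default_rng(0)
# Random weights stand in for the trained Model_Norm_Final parameters.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward_collect(x, weights):
    """Forward pass that records the activation of every layer after the input."""
    activations = []
    for w in weights:
        x = np.tanh(x @ w)  # placeholder nonlinearity
        activations.append(x)
    return activations

sample = rng.standard_normal(layer_sizes[0])  # one Data_Input_Norm array
acts = forward_collect(sample, weights)
normalized = acts[2]  # output of hidden layer 3, the middle (second) hidden layer
print(normalized.shape)  # (96,)
```

At detection time (step 205), the same extraction is applied to the data to be detected instead of sample data.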
Step 2026, expanding the normalized data corresponding to each sample data to obtain target data corresponding to the sample data, and constructing a second data set based on the target data corresponding to each sample data.
For example, for each piece of normalized data, the data type corresponding to the normalized data may be determined; the data type may be an image type, an audio type, a radio frequency type, or the like. A target position and an extended position are determined based on the data type, the normalized data is added at the target position, and extended data (e.g., all 0s) is added at the extended position, so as to obtain the target data corresponding to the normalized data.
Each sample data in the first data set has a corresponding tag value, so the tag value of the sample data corresponding to the normalized data can be used as the tag value of the target data. The tag value may be a first value or a second value: the first value indicates that an unmanned aerial vehicle is present for the target data, and the second value indicates that no unmanned aerial vehicle is present.
In summary, the target data corresponding to each sample data is obtained, and the target data has a corresponding tag value, and then a second data set is constructed, where the second data set includes the target data and the tag value of the target data.
For example, the N pieces of normalized data DataSet_Norm = [Data_Norm_1, Data_Norm_2, …, Data_Norm_N] are expanded to obtain N pieces of target data, and a second data set is constructed based on the N pieces of target data; that is, the second data set includes the target data and the tag values of the target data. The second data set is denoted DataSet_Norm_Expand = [[Data_Norm_1_Expand, Label_1], [Data_Norm_2_Expand, Label_2], …, [Data_Norm_N_Expand, Label_N]]. Data_Norm_1_Expand represents the target data corresponding to sample data 1, Label_1 represents the tag value of that target data, and so on.
Assuming, for each piece of target data, that the normalized data is an array of length 96 and the total number of data types is 3, the target data may be an array of 288 (96 × 3) bits. For image data, bits 1 to 96 hold the normalized data and bits 97 to 288 are all 0. For audio data, bits 1 to 96 are all 0, bits 97 to 192 hold the normalized data, and bits 193 to 288 are all 0. For radio frequency data, bits 1 to 192 are all 0 and bits 193 to 288 hold the normalized data.
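The type-dependent expansion described above can be sketched as follows; the type ordering (image, audio, radio frequency) follows the bit layout in the example, while the function name and constants are illustrative only:

```python
import numpy as np

NORM_LEN = 96
TYPES = ["image", "audio", "rf"]  # ordering assumed from the example layout

def expand(norm_data, data_type):
    """Place the length-96 normalized array in its type-specific slot of a
    length-288 target array; all other positions stay 0 (the extended data)."""
    assert len(norm_data) == NORM_LEN
    target = np.zeros(NORM_LEN * len(TYPES))
    offset = TYPES.index(data_type) * NORM_LEN  # target position by data type
    target[offset:offset + NORM_LEN] = norm_data
    return target

audio_target = expand(np.ones(96), "audio")
print(audio_target[96:192].sum())  # 96.0 -- bits 97 to 192 hold the data
```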
At this point, step 202 is completed, the target data normative model is obtained through training, and the subsequent steps can be executed.
Step 203: the data holding end trains the initial unmanned aerial vehicle detection model based on the second data set to obtain a trained candidate unmanned aerial vehicle detection model. During training, the input of the initial unmanned aerial vehicle detection model is the target data in the second data set, and the output is the tag value (indicating whether an unmanned aerial vehicle is present) corresponding to that target data. After training is completed, the model parameters of the candidate unmanned aerial vehicle detection model are sent to the service center end.
For example, the data holding end may construct a second data set, which may include target data (e.g., a plurality of pieces of target data) and the tag value corresponding to each piece of target data. The tag value may be a first value or a second value: the first value indicates that an unmanned aerial vehicle is present for the target data, and the second value indicates that no unmanned aerial vehicle is present.
For example, the data holding end may train the initial unmanned aerial vehicle detection model using the second data set, comparing the output of the model with the tag value (e.g., Label_1) in the second data set and continuously adjusting the network parameters of the model accordingly.
When the outputs of the initial unmanned aerial vehicle detection model stabilize against the tag values in the second data set, training is finished and adjustment of the network parameters stops, yielding a trained candidate unmanned aerial vehicle detection model, denoted Model_Detect_Tran.
For example, after the candidate unmanned aerial vehicle detection model is obtained, each piece of target data in the second data set may be input into the candidate model to obtain a first detection result corresponding to that target data; a first accuracy corresponding to the candidate model is then determined based on the first detection result and the tag value of each piece of target data. Specifically, if the first detection result corresponding to a piece of target data matches (e.g., is the same as) its tag value, the correct count is incremented by 1; if it does not match (e.g., differs from the tag value), the error count is incremented by 1. After all target data in the second data set are processed, the correct count and error count are obtained, and the ratio of the correct count to the total (correct count + error count) is the first accuracy.
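The correct/total accuracy computation described above amounts to the following sketch, in which the detection results and tag values are hypothetical:

```python
def accuracy(results, labels):
    """Fraction of detection results that match the tag values
    (correct count / (correct count + error count))."""
    correct = sum(r == l for r, l in zip(results, labels))
    return correct / len(labels)

# Hypothetical detection results vs. tag values (1 = drone present, 0 = absent).
labels  = [1, 0, 1, 1, 0]
results = [1, 0, 0, 1, 0]
print(accuracy(results, labels))  # 0.8
```

The same function computes both the first accuracy (candidate model) and the second accuracy (initial model); only the `results` argument changes.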
Similarly, each target data in the second data set can be input to the initial unmanned aerial vehicle detection model to obtain a second detection result corresponding to the target data; based on the second detection result corresponding to each target data and the label value of each target data, the second accuracy corresponding to the initial unmanned aerial vehicle detection model can be determined.
On this basis, if the first accuracy is greater than the second accuracy, the performance of the candidate unmanned aerial vehicle detection model is better than that of the initial model, so model parameters can be extracted from the candidate model and sent to the service center end. The model parameters are the parameters to be adjusted in the unmanned aerial vehicle detection model, i.e., the parameter values that may change. For example, if the parameter A in the initial unmanned aerial vehicle detection model is adjusted during training, then the model parameter is the parameter value of A in the candidate model, that is, the adjusted value of A finally determined by the training process.
If the first accuracy is not greater than the second accuracy, the performance of the initial unmanned aerial vehicle detection model is better than that of the candidate model, so the model parameters of the candidate model need not be sent to the service center end and are not uploaded. Even without uploading model parameters, the data holding end can inform the service center end of the result of this iteration, namely 'False', indicating that no better model parameters were found.
Step 204: the service center end generates a target unmanned aerial vehicle detection model based on the model parameters sent by the data holding ends, and sends the target unmanned aerial vehicle detection model to each data holding end.
In one possible implementation, step 204 may be implemented by:
Step 2041: the service center end obtains a quality score corresponding to each data holding end. The higher the quality score, the better the performance of the candidate unmanned aerial vehicle detection model trained by that data holding end; the lower the quality score, the worse that performance.
For example, for each data holding end, the service center end may obtain scene information and/or total amount of sample data corresponding to the data holding end, and determine a quality score corresponding to the data holding end based on the scene information and/or the total amount of sample data. Of course, the scene information and the total amount of sample data are only two examples of determining the quality score, which is not limited to this, and the quality score may also be determined based on other feature information.
For example, when the quality score corresponding to a data holding end is determined based on scene information: for a data holding end performing unmanned aerial vehicle detection based on image data, if there are many birds in its scene (e.g., the number of birds is greater than a preset threshold), its quality score is low; if there are few or no birds (e.g., the number of birds is not greater than the preset threshold), its quality score is high. Likewise, for a data holding end performing unmanned aerial vehicle detection based on audio data, if there is much noise in its scene (e.g., near an airport), its quality score is low; if there is little or no noise, its quality score is high. And for a data holding end performing unmanned aerial vehicle detection based on radio frequency data, if there are many radio frequency signals in its scene, its quality score is low; if there are few or no radio frequency signals, its quality score is high.
Of course, the above is only an example of determining the quality score based on scene information; the method is not limited, as long as the quality score is higher when the performance of the candidate unmanned aerial vehicle detection model in that scene is better.
For example, when the quality score corresponding to a data holding end is determined based on the total amount of sample data: the data holding end trains the candidate unmanned aerial vehicle detection model on the second data set, and the total amount of target data in the second data set equals the total amount of sample data in the first data set. Therefore, the more sample data in the first data set, the higher the quality score of the data holding end; the less sample data, the lower the quality score. Of course, this is only an example of determining the quality score based on the total amount of sample data, and the determination method is not limited.
To sum up, the service center may obtain the quality score corresponding to each data holding end, and the obtaining manner of the quality score is not limited, as long as the performance of the candidate unmanned aerial vehicle detection model trained by the data holding end is better, the quality score corresponding to the data holding end is higher, which is not described herein again.
Step 2042: for each data holding end, the service center end determines the weighting coefficient corresponding to that data holding end based on its quality score. The higher the quality score, the larger the weighting coefficient; the lower the quality score, the smaller the weighting coefficient.
For example, all data holding ends are sorted by their quality scores, e.g., from high to low or from low to high; the high-to-low order is used as an example here. Based on the sorted data holding ends, the weighting coefficient of the first data holding end is weight1, that of the second is weight2, and so on; if there are P data holding ends, the weighting coefficient of the P-th data holding end is weightP.
For another example, a mapping relationship between quality score sections and weighting coefficients is configured in advance: the higher the quality score section, the larger the weighting coefficient; the lower the section, the smaller the coefficient. After the quality score of a data holding end is obtained, the quality score section containing that score is determined, and the weighting coefficient of that section is used as the weighting coefficient of the data holding end.
For another example, the weighting coefficient may be calculated from the quality scores: the weighting coefficient of a given data holding end is its quality score divided by the sum of the quality scores of all data holding ends, that is:

weight_n = quality_n / (quality_1 + quality_2 + ⋯ + quality_P)

where quality_n represents the quality score of the n-th data holding end and P represents the total number of data holding ends.
Of course, the above methods are only examples and are not limiting, as long as a higher quality score yields a larger weighting coefficient and a lower quality score yields a smaller one.
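As one concrete instance, the quality-score-proportional weighting of the third example can be sketched as follows (the scores and function name are hypothetical):

```python
def weights_from_scores(scores):
    """weight_n = quality_n / (quality_1 + quality_2 + ... + quality_P)."""
    total = sum(scores)
    return [q / total for q in scores]

scores = [3.0, 1.0, 1.0]  # hypothetical quality scores for P = 3 data holding ends
w = weights_from_scores(scores)
print(w)  # [0.6, 0.2, 0.2]
```

By construction the coefficients sum to 1, which simplifies the weighted average in step 2043.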
Step 2043: the service center end determines a target parameter based on the model parameters sent by each data holding end and the weighting coefficient corresponding to each data holding end; that is, the target parameter is obtained through a weighted operation.
For example, the target parameter is determined using the following formula: (W1 × weight1 + W2 × weight2 + … + WP × weightP) / (weight1 + weight2 + … + weightP). W1 represents the model parameter sent by the first data holding end and weight1 its weighting coefficient; W2 represents the model parameter sent by the second data holding end and weight2 its weighting coefficient; …; WP represents the model parameter sent by the P-th data holding end and weightP its weighting coefficient. For another example, the target parameter is determined using: (W1 × weight1 + W2 × weight2 + … + WP × weightP + WC × weightC) / (weight1 + weight2 + … + weightP + weightC), where WC represents the model parameter in the initial unmanned aerial vehicle detection model (i.e., the initial model in step 201) and weightC represents the weighting coefficient corresponding to the initial model, which may be configured empirically.
Of course, the above is only an example of determining the target parameter, and the determination method is not limited.
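A minimal sketch of the weighted aggregation in step 2043, using the first formula (without the WC term); the per-holder parameter values and weights are hypothetical:

```python
def aggregate(params, weights):
    """Target parameter = (W1*weight1 + ... + WP*weightP) / (weight1 + ... + weightP)."""
    numerator = sum(w * p for w, p in zip(weights, params))
    return numerator / sum(weights)

# Hypothetical values of one model parameter as trained by three data holding ends.
params = [0.9, 1.1, 1.0]
weights = [0.6, 0.2, 0.2]
print(round(aggregate(params, weights), 4))  # 0.96
```

The second formula of step 2043 is the same computation with the initial model's parameter WC and its coefficient weightC appended to the two lists.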
Step 2044: the service center end generates the target unmanned aerial vehicle detection model based on the target parameters; that is, the target parameters replace the model parameters in the initial unmanned aerial vehicle detection model, yielding the target unmanned aerial vehicle detection model.
For example, a model parameter a1, a model parameter b1, and a model parameter c1 exist in the initial unmanned aerial vehicle detection model, after the service center determines a target parameter a2 corresponding to the model parameter a1, a target parameter b2 corresponding to the model parameter b1, and a target parameter c2 corresponding to the model parameter c1, the target parameter a2 replaces the model parameter a1 in the initial unmanned aerial vehicle detection model, the target parameter b2 replaces the model parameter b1 in the initial unmanned aerial vehicle detection model, and the target parameter c2 replaces the model parameter c1 in the initial unmanned aerial vehicle detection model.
At this point, step 204 is completed and the target unmanned aerial vehicle detection model is obtained; the subsequent steps, i.e., cooperative detection, can then be performed based on this model. Alternatively, the target unmanned aerial vehicle detection model is taken as a new initial unmanned aerial vehicle detection model and sent to each data holding end, and the above process is executed again: each data holding end retrains a candidate unmanned aerial vehicle detection model, and so on. When the performance of the candidate model retrained by a data holding end is lower than that of the initial model, that data holding end no longer sends model parameters to the service center end. Once no data holding end sends model parameters, the service center end takes the most recent unmanned aerial vehicle detection model as the target unmanned aerial vehicle detection model and performs the subsequent steps, i.e., cooperative detection, based on it.
At this point, the training process of the unmanned aerial vehicle detection model is complete. The trained model is the target unmanned aerial vehicle detection model, denoted Model_Detect_Final. The service center end can send the target unmanned aerial vehicle detection model to each data holding end, and each data holding end performs cooperative detection with it; that is, the cooperative model training process ends and the cooperative detection process begins.
Illustratively, with continuing reference to fig. 2, the cooperative detection process may further include the following steps:
Step 205: each data holding end collects data to be detected (such as image data, audio data, or radio frequency data) of a target scene and converts the data to be detected into target data.
For example, the data holding end may periodically collect the data to be detected of the target scene, for example, for each data holding end, the data to be detected of the target scene is collected every 1 minute.
After obtaining the data to be detected of the target scene, the data holding end may input it into the target data normalization model to obtain the corresponding normalized data, which may be the output data of the target hidden layer of the model. The implementation is similar to step 2025, except that the sample data is replaced by the data to be detected, and is not repeated here.
After the normalized data corresponding to the data to be detected is obtained, the normalized data can be expanded to obtain the target data corresponding to the data to be detected. For example, the target position and the extended position are determined based on the data type corresponding to the normalized data, the normalized data is added at the target position, the extended data is added at the extended position, and the target data is obtained.
Step 206, the data holding end inputs the target data corresponding to the data to be detected into a target unmanned aerial vehicle detection model to obtain an unmanned aerial vehicle detection result, and the unmanned aerial vehicle detection result is used for representing that an unmanned aerial vehicle exists in a target scene or the unmanned aerial vehicle does not exist in the target scene. For example, the detection result of the unmanned aerial vehicle may be a first value or a second value, if the detection result of the unmanned aerial vehicle is the first value (e.g., 1), it indicates that the unmanned aerial vehicle exists in the target scene, and if the detection result of the unmanned aerial vehicle is the second value (e.g., 0), it indicates that the unmanned aerial vehicle does not exist in the target scene.
Thus, each data holding end can determine the unmanned aerial vehicle detection result corresponding to its data to be detected, and then determine whether an unmanned aerial vehicle is present in the target scene.
For each data holding end, after the unmanned aerial vehicle detection result is obtained, it may be sent to the service center end; that is, the service center end may obtain the unmanned aerial vehicle detection results sent by the data holding ends and aggregate them, as described in the subsequent steps.
Step 207: the service center end receives the unmanned aerial vehicle detection results sent by the data holding ends and determines a target detection result based on them.
In one possible implementation, step 207 may be implemented by:
step 2071, the service center determines a weight value corresponding to each data holding end. For example, the weight value corresponding to each data holding end may be configured in advance. Alternatively, the weight value corresponding to each data holding end may be determined based on the quality score corresponding to each data holding end, the determination manner is as shown in step 2041 and step 2042, and the weight coefficient corresponding to the data holding end is replaced with the weight value, which is not repeated herein.
Step 2072: the service center end determines a target detection result based on the unmanned aerial vehicle detection result sent by each data holding end and the weight value corresponding to each data holding end; that is, the target detection result is obtained through a weighted operation. For example, the target detection result can be determined by the following formula: (E1 × k1 + E2 × k2 + … + EP × kP) / (k1 + k2 + … + kP). In this formula, E1 represents the unmanned aerial vehicle detection result sent by the first data holding end, which may be a first value (e.g., 1) or a second value (e.g., 0), and k1 represents the weight value corresponding to the first data holding end; E2 represents the detection result sent by the second data holding end and k2 its weight value; …; EP represents the detection result sent by the P-th data holding end and kP its weight value.
Step 2073, if the target detection result is greater than the threshold (which may be configured according to experience, such as a value between 0 and 1, for example, 0.5, 0.6, 0.7, 0.8, and the like, without limitation), the service center determines that the unmanned aerial vehicle exists in the target scene. And if the target detection result is not greater than the threshold value, the service center end determines that the unmanned aerial vehicle does not exist in the target scene. Therefore, the service center end can collect all unmanned aerial vehicle detection results to obtain a target detection result, and whether the unmanned aerial vehicle exists in a target scene is analyzed based on the target detection result.
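Steps 2072 and 2073 together amount to a weighted vote followed by a threshold test; a minimal sketch with hypothetical detection results, weight values, and function name:

```python
def fuse(results, weights, threshold=0.5):
    """Weighted vote over per-holder drone detection results (1 = present,
    0 = absent); the scene is flagged when the score exceeds the threshold."""
    score = sum(e * k for e, k in zip(results, weights)) / sum(weights)
    return score, score > threshold

# Two of three data holding ends report a drone; the lowest-weighted one does not.
score, drone_present = fuse([1, 1, 0], [0.5, 0.3, 0.2])
print(round(score, 4), drone_present)  # 0.8 True
```

Because the detection results are 0 or 1, the fused score always lies in [0, 1], so a threshold between 0 and 1 (e.g., 0.5) is meaningful.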
At this point, the cooperative detection process is completed, that is, the cooperative detection process is executed once per cycle.
In a possible implementation, the unmanned aerial vehicle detection model can be updated, and the trigger condition for updating can be configured freely. For example, the update may be triggered at a fixed time period, such as once a month or once a week. For another example, the update is triggered based on the number of false alarms: according to the results of steps 206 and 207, each data holding end separately accumulates its false alarm count (e.g., when the unmanned aerial vehicle detection result of the data holding end does not match the target detection result of the service center end, the false alarm count is incremented by 1). For each data holding end, if the ratio of its false alarm count to its total report count reaches a threshold Threshold_Update_1 (which may be configured empirically, e.g., 0.1), that data holding end applies to initiate an update of the unmanned aerial vehicle detection model. If the ratio of the number of data holding ends applying for an update to the total number of data holding ends reaches a threshold Threshold_Update_2 (which may be configured empirically, e.g., 0.8), the unmanned aerial vehicle detection model is updated.
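The two-level false-alarm trigger described above can be sketched as follows; the counts, thresholds, and function names are illustrative:

```python
def holder_wants_update(false_alarms, total_reports, thr1=0.1):
    """A data holding end applies for an update when its false-alarm ratio
    reaches Threshold_Update_1 (assumed 0.1 here)."""
    return false_alarms / total_reports >= thr1

def should_update(holder_requests, thr2=0.8):
    """The model is updated when the fraction of data holding ends applying
    reaches Threshold_Update_2 (assumed 0.8 here)."""
    return sum(holder_requests) / len(holder_requests) >= thr2

# Hypothetical false-alarm counts out of 100 reports for five data holding ends.
requests = [holder_wants_update(fa, 100) for fa in (15, 12, 9, 30, 11)]
print(requests, should_update(requests))  # [True, True, False, True, True] True
```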
When the unmanned aerial vehicle detection model needs to be updated, the target unmanned aerial vehicle detection model can be taken as the initial unmanned aerial vehicle detection model and sent to each data holding end; each data holding end then iterates steps 203 and 204 using its existing labeled data to update the model, a new target unmanned aerial vehicle detection model is obtained, and the new model is sent to each data holding end. The existing labeled data of a data holding end may include the target data in the second data set and their tag values, as well as data to be detected and their target detection results (i.e., tag values, such as the first value or the second value); the labeled data is not limited.
According to the technical scheme above, cooperative training and cooperative detection of unmanned aerial vehicles can be realized, and the privacy of each data holding end's data is ensured through data normalization and model training at the data holding ends and distribution and aggregation at the service center end. Through the data normalization model, different detection methods can be combined to cooperatively train the same detection model, and training and detection of the unmanned aerial vehicle detection model can be completed uniformly. The method can be applied to various unmanned aerial vehicle detection scenarios, improving the robustness of the detection method.
Based on the same concept as the above method, an embodiment of the present application provides a cooperative unmanned aerial vehicle detection apparatus, applied to any data holding end in a system including a service center end and a plurality of data holding ends. Fig. 3 is a schematic structural diagram of the apparatus, which includes:
the acquiring module 31 is used for acquiring an initial data specification model and an initial unmanned aerial vehicle detection model; a training module 32, configured to train the initial data normative model based on a first data set to obtain a target data normative model; inputting the sample data in the first data set to the target data standard model to obtain standard data corresponding to the sample data; wherein the target data specification model at least comprises a plurality of hidden layers, and the specified data is output data of a target hidden layer in the plurality of hidden layers; generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; training the initial unmanned aerial vehicle detection model based on a second data set to obtain a candidate unmanned aerial vehicle detection model; the sending module 33 is configured to extract model parameters from the candidate unmanned aerial vehicle detection model after obtaining the candidate unmanned aerial vehicle detection model, and send the model parameters to the service center side, so that the service center side generates a target unmanned aerial vehicle detection model based on the model parameters sent by the multiple data holding sides; the obtaining module 31 is further configured to obtain the target unmanned aerial vehicle detection model from the service center, and detect whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
For example, the training module 32 trains the initial data normative model based on a first data set, and when obtaining the target data normative model, the training module is specifically configured to: adding noise to the sample data in the first data set to obtain noise-added data; inputting the noisy data to an initial data specification model to obtain denoised data; determining a loss value between the de-noised data and the sample data, and adjusting network parameters of an initial data standard model based on the loss value to obtain an adjusted data standard model; determining whether the adjusted data specification model has converged; if not, the adjusted data specification model is used as an initial data specification model, and the operation of inputting the noisy data to the initial data specification model to obtain the de-noised data is returned to be executed; and if so, taking the adjusted data specification model as a target data specification model.
For example, when generating the second data set by using the plurality of normalized data corresponding to the plurality of sample data in the first data set, the training module 32 is specifically configured to: for each normalized data corresponding to the plurality of sample data in the first data set, determine the data type corresponding to the normalized data, the data type being an image type, an audio type, or a radio frequency type; determine a target position and an extended position based on the data type, add the normalized data at the target position, and add extension data at the extended position to obtain target data corresponding to the normalized data; take the label value of the sample data corresponding to the normalized data as the label value of the target data, the label value indicating whether the target data corresponds to the presence or the absence of an unmanned aerial vehicle; and generate the second data set based on the target data corresponding to each normalized data and the label value of each target data.
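As a toy illustration of the placement step above, the sketch below assumes a fixed target-data length and one hard-coded slot per data type; the slot boundaries, the overall length, and the zero-valued extension data are invented for the example and are not taken from the patent.

```python
import numpy as np

TARGET_LEN = 24                                              # assumed overall length
SLOTS = {"image": (0, 8), "audio": (8, 16), "rf": (16, 24)}  # assumed target positions

def to_target_data(normalized, data_type, label):
    """Place the normalized data at the target position for its data
    type, fill the remaining extended positions with extension data
    (zeros here), and carry over the sample's label value
    (1 = drone present, 0 = drone absent) to the target data."""
    start, end = SLOTS[data_type]
    target = np.zeros(TARGET_LEN)      # extension data everywhere else
    target[start:end] = normalized     # normalized data at its target position
    return target, label

# Second data set: one (target data, label value) pair per normalized sample.
second_set = [
    to_target_data(np.ones(8), "audio", 1),    # audio sample, drone present
    to_target_data(np.full(8, 2.0), "rf", 0),  # RF sample, drone absent
]
```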
For example, when extracting model parameters from the candidate unmanned aerial vehicle detection model and sending the model parameters to the service center side, the sending module 33 is specifically configured to: input each target data in the second data set to the candidate unmanned aerial vehicle detection model to obtain a first detection result corresponding to the target data; determine a first accuracy corresponding to the candidate unmanned aerial vehicle detection model based on the first detection result corresponding to each target data and the label value of each target data; input each target data in the second data set to the initial unmanned aerial vehicle detection model to obtain a second detection result corresponding to the target data; determine a second accuracy corresponding to the initial unmanned aerial vehicle detection model based on the second detection result corresponding to each target data and the label value of each target data; and, if the first accuracy is greater than the second accuracy, extract the model parameters from the candidate unmanned aerial vehicle detection model and send the model parameters to the service center side.
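The accept-or-discard gate described above — upload the candidate's parameters only when its accuracy on the second data set beats the initial model's — can be sketched in plain Python; the stand-in models and the tiny data set are, of course, hypothetical.

```python
def accuracy(model_fn, dataset):
    """Fraction of target data whose detection result equals its label value."""
    hits = sum(1 for x, label in dataset if model_fn(x) == label)
    return hits / len(dataset)

def maybe_upload(candidate_fn, initial_fn, dataset):
    """Return True when the candidate model's (first) accuracy on the
    second data set exceeds the initial model's (second) accuracy,
    i.e. when the model parameters should be sent to the service
    center side."""
    return accuracy(candidate_fn, dataset) > accuracy(initial_fn, dataset)

# Hypothetical stand-ins: three labelled target data, a "trained"
# detector that reads the first feature, and an untrained detector.
dataset = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]
candidate = lambda x: x[0]
initial = lambda x: 0
```

On this toy data the candidate scores 3/3 against the initial model's 1/3, so its parameters would be uploaded.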
For example, when detecting whether an unmanned aerial vehicle exists in the target scene by using the target unmanned aerial vehicle detection model, the acquiring module 31 is specifically configured to: input the data to be detected of the target scene into the target data normalization model to obtain normalized data, the normalized data being the output data of the target hidden layer of the target data normalization model; determine a target position and an extended position based on the data type corresponding to the normalized data, add the normalized data at the target position, and add extension data at the extended position to obtain target data; and input the target data into the target unmanned aerial vehicle detection model to obtain an unmanned aerial vehicle detection result, the unmanned aerial vehicle detection result indicating that an unmanned aerial vehicle exists or does not exist in the target scene.
Based on the same application concept as the above method, an embodiment of the present application provides a data holding side, comprising a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to perform the following steps:
acquiring an initial data normalization model and an initial unmanned aerial vehicle detection model;
training the initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set to the target data normalization model to obtain normalized data corresponding to the sample data; wherein the target data normalization model comprises at least a plurality of hidden layers, and the normalized data is the output data of a target hidden layer among the plurality of hidden layers;
generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model, and extracting model parameters from the candidate unmanned aerial vehicle detection model;
sending the model parameters to the service center side, so that the service center side generates a target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides;
and acquiring the target unmanned aerial vehicle detection model from the service center side, and detecting whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
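On the service center side, the claims describe fusing the received parameters with quality-score-based weighting coefficients. The following is a minimal sketch, for illustration only, assuming the coefficients are simply the quality scores normalized to sum to one (the patent only requires that a higher score yield a larger coefficient) and that model parameters are flat lists of floats:

```python
def aggregate(params_by_holder, quality_scores):
    """Quality-weighted parameter fusion on the service center side:
    normalize the quality scores into weighting coefficients (higher
    score -> larger coefficient) and form the target parameters as the
    coefficient-weighted sum of each holder's model parameters."""
    total = sum(quality_scores.values())
    coeffs = {h: s / total for h, s in quality_scores.items()}
    n = len(next(iter(params_by_holder.values())))
    target = [0.0] * n
    for holder, params in params_by_holder.items():
        for i, p in enumerate(params):
            target[i] += coeffs[holder] * p
    return target

# Two hypothetical data holding sides, each contributing a two-parameter model.
params = {"A": [1.0, 2.0], "B": [3.0, 4.0]}
scores = {"A": 1.0, "B": 3.0}
```

With holder B's quality score three times holder A's, B's parameters dominate the target parameters: `aggregate(params, scores)` returns `[2.5, 3.5]`.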
Based on the same application concept as the above method, embodiments of the present application further provide a machine-readable storage medium storing a plurality of computer instructions; when the computer instructions are executed by a processor, the cooperative unmanned aerial vehicle detection method disclosed in the above examples of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disc (e.g., an optical disc, a DVD, etc.), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above device is described as being divided into various units by function. Of course, when the present application is implemented, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A cooperative unmanned aerial vehicle detection method, applied to a system comprising a service center side and a plurality of data holding sides, the method being applied to any one of the data holding sides and comprising:
acquiring an initial data normalization model and an initial unmanned aerial vehicle detection model;
training the initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set to the target data normalization model to obtain normalized data corresponding to the sample data; wherein the target data normalization model comprises at least a plurality of hidden layers, and the normalized data is the output data of a target hidden layer among the plurality of hidden layers;
generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model, and extracting model parameters from the candidate unmanned aerial vehicle detection model;
sending the model parameters to the service center side, so that the service center side generates a target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides; wherein the generating, by the service center side, of the target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides comprises: for each data holding side, acquiring a quality score corresponding to the data holding side, a higher quality score indicating better performance of the candidate unmanned aerial vehicle detection model trained by the data holding side; determining a weighting coefficient corresponding to the data holding side based on the quality score corresponding to the data holding side, a higher quality score corresponding to a larger weighting coefficient for the data holding side; and determining target parameters based on the model parameters sent by each data holding side and the weighting coefficient corresponding to each data holding side, and generating the target unmanned aerial vehicle detection model based on the target parameters;
and acquiring the target unmanned aerial vehicle detection model from the service center side, and detecting whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
2. The method of claim 1, wherein training the initial data normalization model based on the first data set to obtain the target data normalization model comprises:
adding noise to the sample data in the first data set to obtain noise-added data; inputting the noise-added data to the initial data normalization model to obtain de-noised data;
determining a loss value between the de-noised data and the sample data, and adjusting the network parameters of the initial data normalization model based on the loss value to obtain an adjusted data normalization model;
determining whether the adjusted data normalization model has converged;
if not, taking the adjusted data normalization model as the initial data normalization model, and returning to the operation of inputting the noise-added data to the initial data normalization model to obtain de-noised data;
and if so, taking the adjusted data normalization model as the target data normalization model.
3. The method of claim 1, wherein
the target data normalization model sequentially comprises an input layer, K first hidden layers, one second hidden layer, K third hidden layers and an output layer, K being a positive integer, and the target hidden layer is the second hidden layer; and
for each first hidden layer, the length of the input data of the first hidden layer is greater than the length of its output data; for the second hidden layer, the length of the input data of the second hidden layer is greater than the length of its output data; and for each third hidden layer, the length of the input data of the third hidden layer is smaller than the length of its output data.
4. The method of claim 1, wherein generating the second data set by using the plurality of normalized data corresponding to the plurality of sample data in the first data set comprises:
for each normalized data corresponding to the plurality of sample data in the first data set, determining the data type corresponding to the normalized data, the data type being an image type, an audio type, or a radio frequency type; determining a target position and an extended position based on the data type, adding the normalized data at the target position, and adding extension data at the extended position to obtain target data corresponding to the normalized data;
taking the label value of the sample data corresponding to the normalized data as the label value of the target data, the label value indicating whether the target data corresponds to the presence or the absence of an unmanned aerial vehicle; and
generating the second data set based on the target data corresponding to each normalized data and the label value of each target data.
5. The method of claim 4, wherein extracting model parameters from the candidate unmanned aerial vehicle detection model and sending the model parameters to the service center side comprises:
inputting each target data in the second data set to the candidate unmanned aerial vehicle detection model to obtain a first detection result corresponding to the target data; determining a first accuracy corresponding to the candidate unmanned aerial vehicle detection model based on the first detection result corresponding to each target data and the label value of each target data;
inputting each target data in the second data set to the initial unmanned aerial vehicle detection model to obtain a second detection result corresponding to the target data; determining a second accuracy corresponding to the initial unmanned aerial vehicle detection model based on the second detection result corresponding to each target data and the label value of each target data; and
if the first accuracy is greater than the second accuracy, extracting the model parameters from the candidate unmanned aerial vehicle detection model, and sending the model parameters to the service center side.
6. The method of claim 1, wherein
detecting whether an unmanned aerial vehicle exists in the target scene by using the target unmanned aerial vehicle detection model comprises:
inputting the data to be detected of the target scene into the target data normalization model to obtain normalized data, the normalized data being the output data of the target hidden layer of the target data normalization model;
determining a target position and an extended position based on the data type corresponding to the normalized data, adding the normalized data at the target position, and adding extension data at the extended position to obtain target data; and
inputting the target data into the target unmanned aerial vehicle detection model to obtain an unmanned aerial vehicle detection result, the unmanned aerial vehicle detection result indicating that an unmanned aerial vehicle exists or does not exist in the target scene.
7. The method of claim 6, further comprising:
receiving, by the service center side, the unmanned aerial vehicle detection results sent by the plurality of data holding sides, each unmanned aerial vehicle detection result being a first value indicating that an unmanned aerial vehicle exists in the target scene or a second value indicating that no unmanned aerial vehicle exists in the target scene;
determining a weight value corresponding to each data holding side, and determining a target detection result based on the unmanned aerial vehicle detection result sent by each data holding side and the weight value corresponding to each data holding side;
if the target detection result is greater than a threshold, determining that an unmanned aerial vehicle exists in the target scene; and
if the target detection result is not greater than the threshold, determining that no unmanned aerial vehicle exists in the target scene.
8. A cooperative unmanned aerial vehicle detection device, applied to a system comprising a service center side and a plurality of data holding sides, the device being applied to any one of the data holding sides and comprising:
an acquiring module, configured to acquire an initial data normalization model and an initial unmanned aerial vehicle detection model;
a training module, configured to: train the initial data normalization model based on a first data set to obtain a target data normalization model; input the sample data in the first data set to the target data normalization model to obtain normalized data corresponding to the sample data, wherein the target data normalization model comprises at least a plurality of hidden layers and the normalized data is the output data of a target hidden layer among the plurality of hidden layers; generate a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; and train the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model; and
a sending module, configured to extract model parameters from the candidate unmanned aerial vehicle detection model after the candidate unmanned aerial vehicle detection model is obtained, and send the model parameters to the service center side, so that the service center side generates a target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides; wherein the generating, by the service center side, of the target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides comprises: for each data holding side, acquiring a quality score corresponding to the data holding side, a higher quality score indicating better performance of the candidate unmanned aerial vehicle detection model trained by the data holding side; determining a weighting coefficient corresponding to the data holding side based on the quality score corresponding to the data holding side, a higher quality score corresponding to a larger weighting coefficient for the data holding side; and determining target parameters based on the model parameters sent by each data holding side and the weighting coefficient corresponding to each data holding side, and generating the target unmanned aerial vehicle detection model based on the target parameters;
wherein the acquiring module is further configured to acquire the target unmanned aerial vehicle detection model from the service center side, and detect whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
9. A data holding side, comprising: a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor;
wherein the processor is configured to execute the machine-executable instructions to perform the following steps:
acquiring an initial data normalization model and an initial unmanned aerial vehicle detection model;
training the initial data normalization model based on a first data set to obtain a target data normalization model; inputting the sample data in the first data set to the target data normalization model to obtain normalized data corresponding to the sample data; wherein the target data normalization model comprises at least a plurality of hidden layers, and the normalized data is the output data of a target hidden layer among the plurality of hidden layers;
generating a second data set by using a plurality of normalized data corresponding to a plurality of sample data in the first data set; training the initial unmanned aerial vehicle detection model based on the second data set to obtain a candidate unmanned aerial vehicle detection model, and extracting model parameters from the candidate unmanned aerial vehicle detection model;
sending the model parameters to the service center side, so that the service center side generates a target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides; wherein the generating, by the service center side, of the target unmanned aerial vehicle detection model based on the model parameters sent by the plurality of data holding sides comprises: for each data holding side, acquiring a quality score corresponding to the data holding side, a higher quality score indicating better performance of the candidate unmanned aerial vehicle detection model trained by the data holding side; determining a weighting coefficient corresponding to the data holding side based on the quality score corresponding to the data holding side, a higher quality score corresponding to a larger weighting coefficient for the data holding side; and determining target parameters based on the model parameters sent by each data holding side and the weighting coefficient corresponding to each data holding side, and generating the target unmanned aerial vehicle detection model based on the target parameters;
and acquiring the target unmanned aerial vehicle detection model from the service center side, and detecting whether an unmanned aerial vehicle exists in a target scene by using the target unmanned aerial vehicle detection model.
CN202110883686.3A 2021-08-03 2021-08-03 Cooperative unmanned aerial vehicle detection method, device and equipment Active CN113327461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110883686.3A CN113327461B (en) 2021-08-03 2021-08-03 Cooperative unmanned aerial vehicle detection method, device and equipment

Publications (2)

Publication Number Publication Date
CN113327461A CN113327461A (en) 2021-08-31
CN113327461B (en) 2021-11-23

Family

ID=77426846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110883686.3A Active CN113327461B (en) 2021-08-03 2021-08-03 Cooperative unmanned aerial vehicle detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN113327461B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009525A (en) * 2017-12-25 2018-05-08 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
CN109285348A (en) * 2018-10-26 2019-01-29 深圳大学 A kind of vehicle behavior recognition methods and system based on two-way length memory network in short-term
CN110084094A (en) * 2019-03-06 2019-08-02 中国电子科技集团公司第三十八研究所 A kind of unmanned plane target identification classification method based on deep learning
CN110766090A (en) * 2019-10-30 2020-02-07 腾讯科技(深圳)有限公司 Model training method, device, equipment, system and storage medium
US10679509B1 (en) * 2016-09-20 2020-06-09 Amazon Technologies, Inc. Autonomous UAV obstacle avoidance using machine learning from piloted UAV flights
CN112287896A (en) * 2020-11-26 2021-01-29 山东捷讯通信技术有限公司 Unmanned aerial vehicle aerial image target detection method and system based on deep learning

Similar Documents

Publication Publication Date Title
US20230084869A1 (en) System for simplified generation of systems for broad area geospatial object detection
CN109583322B (en) Face recognition deep network training method and system
CN107609572B (en) Multi-modal emotion recognition method and system based on neural network and transfer learning
CN111295689B (en) Depth aware object counting
CN109359666A (en) A kind of model recognizing method and processing terminal based on multiple features fusion neural network
US10558186B2 (en) Detection of drones
CN113344220B (en) User screening method, system and equipment based on local model gradient in federated learning and storage medium
CN111401105B (en) Video expression recognition method, device and equipment
CN112200123B (en) Hyperspectral open set classification method combining dense connection network and sample distribution
Utebayeva et al. Multi-label UAV sound classification using Stacked Bidirectional LSTM
CN112288700A (en) Rail defect detection method
CN111291773A (en) Feature identification method and device
CN114419363A (en) Target classification model training method and device based on label-free sample data
KR20220094967A (en) Method and system for federated learning of artificial intelligence for diagnosis of depression
CN110348434A (en) Camera source discrimination method, system, storage medium and calculating equipment
CN113743443B (en) Image evidence classification and recognition method and device
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN113327461B (en) Cooperative unmanned aerial vehicle detection method, device and equipment
CN109992679A (en) A kind of classification method and device of multi-medium data
CN109101984B (en) Image identification method and device based on convolutional neural network
CN112395952A (en) A unmanned aerial vehicle for rail defect detection
Rakowski et al. Frequency-aware CNN for open set acoustic scene classification
CN116189286A (en) Video image violence behavior detection model and detection method
CN114330650A (en) Small sample characteristic analysis method and device based on evolutionary element learning model training
CN114119970A (en) Target tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant