CN113505851A - Multitasking method for intelligent aircraft - Google Patents


Info

Publication number
CN113505851A
CN113505851A (application CN202110852719.8A)
Authority
CN
China
Prior art keywords
data
modulation
identification
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110852719.8A
Other languages
Chinese (zh)
Other versions
CN113505851B (en)
Inventor
周军
岑华峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110852719.8A priority Critical patent/CN113505851B/en
Publication of CN113505851A publication Critical patent/CN113505851A/en
Application granted granted Critical
Publication of CN113505851B publication Critical patent/CN113505851B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The invention discloses a multitasking method for an intelligent aircraft. A neural network is set up in the intelligent aircraft, training data are obtained, the neural network is trained on the training data, and an image recognition task and a modulation recognition task are then executed simultaneously on the trained neural network. The neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation and identification module, and the training data comprise image recognition training data and modulation recognition training data. The intelligent aircraft can therefore process a plurality of tasks simultaneously with a single neural network instead of carrying a plurality of programs, which reduces its size.

Description

Multitasking method for intelligent aircraft
Technical Field
The invention belongs to the technical field of intelligent aircraft, and particularly relates to a multitasking method for an intelligent aircraft.
Background
Intelligent aircraft are mainly used for military reconnaissance and strike missions, and their common processing tasks include image recognition and modulation recognition. However, current intelligent aircraft are still at the perception stage of machine intelligence: the machine cannot, as humans do, take in and jointly process many coherent or incoherent streams of information, and most intelligent aircraft still solve a given problem with a dedicated program.
In the prior art, an intelligent aircraft usually carries a plurality of programs so that it can process tasks such as image recognition and modulation recognition at the same time. Carrying a plurality of programs, however, increases the volume of the aircraft and thereby greatly increases its manufacturing cost and difficulty.
Therefore, how to reduce the number of programs carried by an intelligent aircraft, and hence its size, while still allowing it to process a plurality of tasks simultaneously, is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to solve the technical problem that, in the prior art, an intelligent aircraft must carry a plurality of programs in order to process a plurality of tasks simultaneously, and provides a multitasking method for an intelligent aircraft.
The technical scheme of the invention is as follows: a multitasking method for an intelligent aircraft comprises the following steps:
s1, establishing a neural network in the intelligent aircraft;
s2, acquiring training data, and training the neural network through the training data;
and S3, simultaneously executing an image recognition task and a modulation recognition task based on the trained neural network.
The neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation and identification module which are connected in sequence, and the training data comprise image recognition training data and modulation recognition training data.
Further, the step S2 specifically includes the following sub-steps:
s21, preprocessing the training data through the data preprocessing module to obtain first processed data, wherein the preprocessing specifically includes data dimension conversion of the image recognition training data and the modulation recognition training data, and the first processed data includes first image recognition training data and first modulation recognition training data;
s22, performing secondary processing on the first processed data through the neural network input module to obtain second processed data, wherein the secondary processing is specifically to convert the first image recognition training data and the first modulation recognition training data into an image recognition training set and a modulation recognition training set with the same data dimension, and mix the image recognition training set and the modulation recognition training set with the same data dimension to obtain second processed data;
s23, carrying out nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
and S24, separating the mixed output data through the neural network separation and identification module to obtain an image recognition training result and a modulation recognition training result, and outputting the image recognition training result and the modulation recognition training result.
Further, before performing data dimension conversion on the image recognition training data in step S21, the method further includes performing random cropping, random flipping and normalization on the image recognition training data.
Further, the data dimension conversion of the modulation recognition training data in step S21 specifically comprises performing matrix conversion on the modulation recognition training data to obtain 2 × 32 × 32 modulation recognition matrix training data, and then normalizing the matrix training data to obtain the first modulation recognition training data.
Further, the image recognition training data and the modulation recognition training data in the training data are acquired according to a preset ratio.
Further, the second processing data is specifically multiple batches of data, the proportion of the image recognition training set and the modulation recognition training set in each batch of data is the preset proportion, and the image recognition training set and the modulation recognition training set in each batch of data have corresponding designated positions.
Further, the step S24 specifically includes the following sub-steps:
s241, determining all data corresponding to the image recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the image recognition task into a tensor corresponding to the image recognition task; determining all data corresponding to the modulation recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the modulation recognition task into a tensor corresponding to the modulation recognition task;
s242, identifying the tensor corresponding to the image identification task through the image identification separation layer in the neural network separation and identification module to obtain an image identification training result, and outputting the image identification training result; and simultaneously, identifying the tensor corresponding to the modulation identification task through a modulation identification separation layer in the neural network separation identification module to obtain a modulation identification training result, and outputting the modulation identification training result.
Further, the step S3 specifically includes the following sub-steps:
s31, acquiring image identification data and modulation identification data in real time;
s32, preprocessing the image identification data and the modulation identification data through the data preprocessing module to obtain first image identification data and first modulation identification data;
s33, carrying out secondary processing on the first image identification data and the first modulation identification data through the neural network input module to obtain an image identification data set and a modulation identification data set, and mixing the image identification data set and the modulation identification data set according to a specified position to obtain second processed data;
s34, carrying out nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
s35, separating and identifying the mixed output data through the neural network separating and identifying module to obtain an image identification result and a modulation identification result, and outputting the image identification result and the modulation identification result.
Compared with the prior art, the invention has the following beneficial effects:
(1) according to the invention, the neural network is set in the intelligent aircraft, then the training data is obtained, the neural network is trained through the training data, and then the image recognition task and the modulation recognition task are simultaneously executed based on the trained neural network, wherein the neural network comprises the data preprocessing module, the neural network input module, the neural network main body module and the neural network separation and identification module, the training data comprises the image recognition training data and the modulation recognition training data, so that the intelligent aircraft can simultaneously process a plurality of tasks by using one neural network without carrying a plurality of programs, and the size of the intelligent aircraft is reduced.
(2) According to the invention, the image recognition training data and the modulation recognition training data are acquired according to a preset ratio and mixed according to the same preset ratio, so that no padding data needs to be added during mixing, the ratio of image recognition to modulation recognition training data in each batch equals the preset ratio, and the situation that some data cannot be filled into a batch according to the preset ratio during training is avoided.
(3) According to the invention, designated positions are set when mixing the image recognition training data and the modulation recognition training data, so that the two kinds of data can be separated again at recognition time and identified by their corresponding separation layers to obtain the corresponding training results. The image recognition training data and the modulation recognition training data can therefore be processed by the same neural network, each yielding its corresponding recognition result.
Drawings
Fig. 1 is a schematic flowchart illustrating a multitasking method for an intelligent aircraft according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a preset ratio and an assigned position of image recognition training data and modulation recognition training data in each batch of data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As described in the background art, when a prior-art intelligent aircraft needs to perform a plurality of processing tasks, it usually carries a plurality of processing programs. This increases the corresponding motherboard hardware and other facilities and therefore the volume of the aircraft, whereas the design requirement for an intelligent aircraft is: the smaller, the better.
Therefore, the present application provides a multitasking method for an intelligent aircraft, and as shown in fig. 1, a flowchart of the multitasking method for the intelligent aircraft provided in the embodiment of the present application is shown, where the method includes the following steps:
and step S1, establishing a neural network in the intelligent aircraft.
And step S2, acquiring training data, and training the neural network through the training data.
In the embodiment of the application, the neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation identification module which are sequentially connected, and the training data comprises image recognition training data and modulation recognition training data.
In this embodiment, the step S2 specifically includes the following sub-steps:
s21, preprocessing the training data through the data preprocessing module to obtain first processed data, wherein the preprocessing specifically includes data dimension conversion of the image recognition training data and the modulation recognition training data, and the first processed data includes first image recognition training data and first modulation recognition training data;
s22, performing secondary processing on the first processed data through the neural network input module to obtain second processed data, wherein the secondary processing is specifically to convert the first image recognition training data and the first modulation recognition training data into an image recognition training set and a modulation recognition training set with the same data dimension, and mix the image recognition training set and the modulation recognition training set with the same data dimension to obtain second processed data;
s23, carrying out nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
and S24, separating the mixed output data through the neural network separation and identification module to obtain an image recognition training result and a modulation recognition training result, and outputting the image recognition training result and the modulation recognition training result.
In this embodiment of the application, before performing data dimension conversion on the image recognition training data in step S21, the method further includes performing random cropping, random flipping, and normalization on the image recognition training data.
In a specific application scenario, data enhancement can effectively improve the image recognition result. The image recognition training data are therefore subjected to random cropping, random flipping and normalization, where random cropping means padding the original image with zero-valued pixels on all sides and then cropping a region of the padded image at a random location. The number of padding rounds and the cropping size are configurable; when processing CIFAR-10 images, the number of padding rounds is 4 and the cropping size is 32 × 32.
There are a number of ways to flip randomly; here random horizontal flipping is used, i.e. the image is mirrored in the horizontal direction with probability p.
The normalization operation is as follows:
x' = (x - μ) / σ
where μ is the mean of the total samples, σ is the standard deviation of the total samples, and x is a particular sample data in the total samples.
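The augmentation and normalization steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function names and the default values of pad, crop size and flip probability p are assumptions (the patent only fixes 4 padding rounds and a 32 × 32 crop for CIFAR-10).

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, pad=4, crop=32, p=0.5):
    """Pad the image with zero pixels, randomly crop back to crop x crop,
    then flip horizontally with probability p. img: array of shape (H, W, C)."""
    h, w, c = img.shape
    padded = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=img.dtype)
    padded[pad:pad + h, pad:pad + w] = img          # zero padding around the original
    top = rng.integers(0, padded.shape[0] - crop + 1)
    left = rng.integers(0, padded.shape[1] - crop + 1)
    out = padded[top:top + crop, left:left + crop]  # random crop
    if rng.random() < p:                            # mirror horizontally with probability p
        out = out[:, ::-1]
    return out

def standardize(x, mu, sigma):
    """The normalization in the text: x' = (x - mu) / sigma."""
    return (x - mu) / sigma
```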
This operation significantly improves the robustness of image recognition. The modulation recognition data are converted, through a dimension conversion operation, into samples whose number of data points per channel matches that of the image recognition data (the modulation samples have 2 channels where the image samples have 3).
The modulation recognition data set adopted during training is 2018.01.OSC.0001; each sample in this data set has 1024 × 2 data points, i.e. 1024 points sampled on each of the I and Q channels. Each sample in the image recognition training data has 3 × 32 × 32 data points, i.e. each of the three channels R, G and B holds 32 × 32 = 1024 points. With these two data sets it therefore suffices to apply a dimension exchange followed by a reshape to each modulation recognition sample, where reshape is the function that transforms a given matrix into a matrix of a specified dimension. The data dimension conversion can be viewed as a matrix transposition: the original sample is first transposed into a 2 × 1024 matrix and then reshaped into 2 × 32 × 32 data.
If the number of data points per channel differs between the task data sets, the images in the image recognition training data can be scaled by bilinear interpolation.
After data dimension conversion, i.e. preprocessing, the modulation recognition data set has two channels (the I and Q signals) and the image recognition data set has three channels (R, G and B), with 32 × 32 = 1024 points per channel; the modulation recognition data are then also normalized. Through these preprocessing steps, two originally quite different kinds of data become training sets with the same data dimension in each channel, laying the foundation for the subsequent neural network input layer.
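The transpose-then-reshape conversion of one modulation sample can be sketched in a few lines (a hypothetical illustration; the variable names are not from the patent):

```python
import numpy as np

# One modulation sample: 1024 points on each of the I and Q channels,
# stored as a 1024 x 2 matrix in the data set.
sample = np.arange(1024 * 2, dtype=np.float32).reshape(1024, 2)

x = sample.T               # matrix transposition: 1024 x 2 -> 2 x 1024 (channels first)
x = x.reshape(2, 32, 32)   # reshape each channel's 1024 points into a 32 x 32 "image"
```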
In this embodiment of the application, the data dimension conversion of the modulation recognition training data in step S21 specifically comprises performing matrix conversion on the modulation recognition training data to obtain 2 × 32 × 32 modulation recognition matrix training data, and then normalizing the matrix training data to obtain the first modulation recognition training data.
Most neural network training today is carried out with the BP algorithm, which consists of two parts: forward propagation of the signal and backward propagation of the error.
1. Forward propagation refers to the nonlinear operations performed, from the input layer to the output layer, between the input data of each layer and that layer's weights.
2. Backward propagation computes the error loss from the difference between the actual and theoretical outputs and transmits the error backwards from the output layer to the input layer; each layer then updates its weights according to the gradient descent algorithm.
The loss in this method is computed with the cross entropy, whose formula is:
loss(x, class) = -x[class] + log(Σ_j exp(x[j]))
where class is the theoretical classification result (taking natural-number values 1, 2, 3, …), x is a tensor of dimension [1, C], and C is the number of classes.
The gradient descent formula is as follows:
θ_i := θ_i - α · ∂J(θ)/∂θ_i
where J(θ) is the computed loss, θ_i is a weight of a certain layer in the neural network, and α is the set learning rate.
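The cross-entropy loss and one gradient-descent weight update can be worked through on a single-layer softmax classifier. This is an illustrative sketch only: the layer sizes, learning rate and data are assumed, not taken from the patent.

```python
import numpy as np

def cross_entropy(x, cls):
    """loss(x, class) = -x[class] + log(sum_j exp(x[j])); x: logits of shape (C,)."""
    return -x[cls] + np.log(np.exp(x).sum())

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
theta = rng.standard_normal((3, 8)) * 0.01   # weights of one layer: C=3 classes, 8 features
alpha = 0.05                                 # set learning rate
x, cls = rng.standard_normal(8), 1           # one sample and its theoretical class

logits = theta @ x                           # forward propagation
loss = cross_entropy(logits, cls)            # error between actual and theoretical output
grad = np.outer(softmax(logits), x)          # back-propagated gradient dJ/dtheta
grad[cls] -= x                               # = (softmax(logits) - onehot(cls)) x^T
theta -= alpha * grad                        # update: theta_i <- theta_i - alpha * dJ/dtheta_i
```

A single step along the exact gradient with this small learning rate lowers the loss on the same sample, which is the behaviour the update rule is meant to produce.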
As can be expected, if each gradient-descent correction were derived from a single sample, the descent direction would be controlled entirely by that sample, and successive corrections could easily conflict with or even completely reverse each other. In most machine learning, therefore, a certain number of samples are collected into a batch, and each gradient-descent step uses one batch as its data source; the batch better represents the sample population and greatly reduces the risk of conflicting corrections.
In the fusion-network experiment of image recognition and modulation recognition carried out with this method, the first image recognition training data in the first processed data have dimension 3 × 32 × 32 and the first modulation recognition training data have dimension 2 × 32 × 32. The first processed data are fed to the neural network input module, whose convolutional layers convert the first image recognition training data and the first modulation recognition training data into data of identical dimension 64 × 32 × 32.
In addition, for every direction correction of the gradient descent to be influenced by the data of several tasks simultaneously, each batch in the gradient-descent process must, by the theory above, contain data of all the tasks. And because the network is a multi-task mixed neural network, by the gradient-descent property the influence of the different tasks on the model can be adjusted through the data ratio of the tasks within each batch: adjusting the ratio makes the model lean more towards solving one task while weighting the others less, improving the result of one specific task at the price of some accuracy on the others.
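One way the input module could map both data kinds onto a common 64 × 32 × 32 dimension is a task-specific convolution per branch. This is a hypothetical sketch (the patent does not specify kernel sizes or layer counts); padding=1 with a 3 × 3 kernel keeps the 32 × 32 spatial size.

```python
import torch
import torch.nn as nn

# Assumed input-module sketch: one convolution per task maps the
# 3 x 32 x 32 image samples and the 2 x 32 x 32 modulation samples
# onto a shared 64 x 32 x 32 representation.
img_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)  # image branch
mod_conv = nn.Conv2d(2, 64, kernel_size=3, padding=1)  # modulation branch

img = torch.randn(4, 3, 32, 32)   # first image recognition training data
mod = torch.randn(4, 2, 32, 32)   # first modulation recognition training data

img_feat = img_conv(img)          # 4 x 64 x 32 x 32
mod_feat = mod_conv(mod)          # 4 x 64 x 32 x 32, identical dimension
mixed = torch.cat([img_feat, mod_feat], dim=0)  # now mixable into one batch
```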
In this embodiment of the application, the image recognition training data and the modulation recognition training data in the training data are obtained according to a preset ratio, the second processing data is specifically multiple batches of data, the ratio of the image recognition training set to the modulation recognition training set in each batch of data is the preset ratio, and the image recognition training set and the modulation recognition training set in each batch of data have corresponding designated positions.
For each batch of data, the composition of the image recognition training set and the modulation recognition training set is as follows:
each batch must contain a sample size batch _ size that is a multiple of the number of groups grouped by a predetermined ratio (e.g., a batch is divided into N + M groups if the predetermined ratio of a task to B task is N: M). After that, according to a preset proportion, N a task data and M B task data are spliced into a new tensor, and the tensors are continuously integrated until the sample size of the tensor reaches a specified batch _ size, as shown in fig. 2, a preset proportion and a specified position schematic diagram of an image recognition training set and a modulation recognition training set in each batch of data according to the embodiment of the present invention are shown, electromagnetic data in fig. 2 is also a modulation recognition training set, and image data is also an image recognition training set. In this way we make the proportion of each task in the batch the same as specified in advance.
The scale of the training data must also be the scale specified in the symbol batch. By the method, the proportion of each batch can be the same as the specified proportion, the situation that the data of one task is filled up to the last several batches when the active data are filled up, and the data of some tasks have a lot of situations can be avoided, namely the generation of redundant data is avoided.
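The batch-composition rule above can be sketched as a small helper (an assumed illustration; `mix_batches` and its arguments are not names from the patent):

```python
import numpy as np

def mix_batches(task_a, task_b, n, m, batch_size):
    """Interleave task-A and task-B samples in groups of n + m so that every
    batch holds them at fixed ('designated') positions in the preset ratio n:m.
    batch_size must be a multiple of n + m; the two pools should already be
    sized in the ratio n:m, as the text requires of the training data."""
    assert batch_size % (n + m) == 0
    groups = batch_size // (n + m)
    a, b = iter(task_a), iter(task_b)
    batches = []
    while True:
        batch, positions = [], []
        try:
            for _ in range(groups):          # one group = n A-samples then m B-samples
                for _ in range(n):
                    batch.append(next(a))
                    positions.append('A')
                for _ in range(m):
                    batch.append(next(b))
                    positions.append('B')
        except StopIteration:                # a pool ran out: stop, no padding added
            break
        batches.append((np.stack(batch), positions))
    return batches
```

The `positions` list records the designated position of each sample, which is what the separation and identification module later uses to split the mixed output back into per-task tensors.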
In addition, when the neural network is trained, the preset proportion can be adjusted for training for multiple times.
In this embodiment, the step S24 specifically includes the following sub-steps:
s241, determining all data corresponding to the image recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the image recognition task into a tensor corresponding to the image recognition task; determining all data corresponding to the modulation recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the modulation recognition task into a tensor corresponding to the modulation recognition task;
s242, identifying the tensor corresponding to the image identification task through the image identification separation layer in the neural network separation and identification module to obtain an image identification training result, and outputting the image identification training result; and simultaneously, identifying the tensor corresponding to the modulation identification task through a modulation identification separation layer in the neural network separation identification module to obtain a modulation identification training result, and outputting the modulation identification training result.
In a specific application scenario, the image recognition training set and the modulation recognition training set are combined in order, according to the designated positions and the preset ratio, by the neural network input module, so that every model update, i.e. every gradient-descent step, is influenced by all tasks. The data dimension of the second processed data is [batch_size, 64, 32, 32]. The second processed data are sent to the neural network main body and, through complex nonlinear operations, yield mixed output data of dimension [batch_size, 512]. The output results corresponding to each task are then extracted from the mixed output data according to the previously designated positions and spliced into a unified tensor. Assuming a two-task fusion neural network with a mixing ratio of 1 : 1, the tensor dimension for each task is [batch_size/2, 512]; the separated data of each task are sent to that task's separation layer, finally yielding the corresponding recognition results with dimensions [batch_size/2, 1, class1] and [batch_size/2, 1, class2], where class1 and class2 are the respective numbers of classes.
That is, all data of a given task are extracted according to the positions designated during mixing, spliced into a unified tensor, and sent to the separation layer corresponding one-to-one to that task. The separation layers are all fully connected layers and identify the separated data of the different tasks independently. The data obtained after the neural network separation and identification module have dimension [batch_size_mask, class], where batch_size_mask is the sample size of the specific task within one batch and class is the number of classes required by that task. The category to be recognized is determined from the maximum value in the final output tensor: the output tensor of a neural network typically expresses the similarity of the object to be classified to each class (for example, a tensor of dimension 1 × 10 holds one similarity value per class). The maximum value indicates the greatest similarity, so the class corresponding to the maximum value is the predicted class.
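The separation-and-identification step can be sketched with random placeholders for the trained weights (everything here is an assumed illustration; in the method the separation layers are trained fully connected layers, not random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed output of the network body: batch_size x 512, with tasks interleaved
# at the positions designated during mixing.
batch_size, feat, class1, class2 = 4, 512, 10, 24
mixed_out = rng.standard_normal((batch_size, feat))
positions = ['A', 'B', 'A', 'B']          # designated positions (A: image, B: modulation)

img_rows = [i for i, t in enumerate(positions) if t == 'A']
mod_rows = [i for i, t in enumerate(positions) if t == 'B']
img_tensor = mixed_out[img_rows]          # spliced image-task tensor: (batch_size/2) x 512
mod_tensor = mixed_out[mod_rows]          # spliced modulation-task tensor

W_img = rng.standard_normal((feat, class1))   # image separation layer (fully connected)
W_mod = rng.standard_normal((feat, class2))   # modulation separation layer

img_pred = (img_tensor @ W_img).argmax(axis=1)  # class with the largest similarity score
mod_pred = (mod_tensor @ W_mod).argmax(axis=1)
```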
And step S3, simultaneously executing an image recognition task and a modulation recognition task based on the trained neural network.
In this embodiment, step S3 specifically includes the following sub-steps:
S31, acquiring image recognition data and modulation recognition data in real time;
S32, preprocessing the image recognition data and the modulation recognition data through the data preprocessing module to obtain first image recognition data and first modulation recognition data;
S33, performing secondary processing on the first image recognition data and the first modulation recognition data through the neural network input module to obtain an image recognition data set and a modulation recognition data set, and mixing the two data sets according to the specified positions to obtain second processed data;
S34, performing a nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
S35, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image recognition result and a modulation recognition result, and outputting both results.
Specifically, the trained neural network parameters are fixed. During actual operation, the image recognition data and the modulation recognition data are collected and processed following the same flow as in training, except that they need not be mixed according to the preset ratio, only according to the specified positions; when the mixed output data are obtained, all data corresponding to the image recognition task or the modulation recognition task are separated according to those specified positions.
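The inference flow of step S3 can be sketched as follows. The trained, now-fixed network modules are represented here by toy stand-in functions; all shapes, layer sizes, and the position convention (image rows first, modulation rows after) are assumptions for illustration only.

```python
import numpy as np

def run_inference(image_batch, mod_batch, body_fn, img_head, mod_head):
    """Mix the two tasks' preprocessed batches at specified positions,
    run the shared network body once, then separate and identify per task."""
    n_img = len(image_batch)
    # At inference no preset ratio is required; only the positions of
    # each task's rows in the mixed batch must be known.
    mixed = np.concatenate([image_batch, mod_batch], axis=0)
    img_pos = np.arange(n_img)
    mod_pos = np.arange(n_img, len(mixed))

    features = body_fn(mixed)                      # shared nonlinear operation
    img_result = img_head(features[img_pos]).argmax(axis=1)
    mod_result = mod_head(features[mod_pos]).argmax(axis=1)
    return img_result, mod_result

# Toy stand-ins for the trained (fixed-parameter) modules.
rng = np.random.default_rng(1)
W_body = rng.standard_normal((32, 16))
W_img = rng.standard_normal((16, 10))
W_mod = rng.standard_normal((16, 4))
body = lambda x: np.tanh(x @ W_body)
img_head = lambda f: f @ W_img
mod_head = lambda f: f @ W_mod

img_pred, mod_pred = run_inference(rng.standard_normal((3, 32)),
                                   rng.standard_normal((5, 32)),
                                   body, img_head, mod_head)
print(img_pred.shape, mod_pred.shape)  # (3,) (5,)
```

A single forward pass through the shared body thus serves both tasks, and the per-task results are recovered purely by row position.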
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to the specifically recited embodiments and examples. Those skilled in the art may make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (9)

1. A multitasking method for an intelligent aircraft, characterized by comprising the following steps:
s1, establishing a neural network in the intelligent aircraft;
s2, acquiring training data, and training the neural network through the training data;
and S3, simultaneously executing an image recognition task and a modulation recognition task based on the trained neural network.
2. The intelligent aircraft multitasking processing method according to claim 1, wherein the neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation and identification module, which are connected in sequence, and the training data comprises image recognition training data and modulation recognition training data.
3. The intelligent aircraft multitasking processing method according to claim 2, wherein said step S2 specifically comprises the following substeps:
S21, preprocessing the training data through the data preprocessing module to obtain first processed data, wherein the preprocessing specifically comprises performing data dimension conversion on the image recognition training data and the modulation recognition training data, and the first processed data comprises first image recognition training data and first modulation recognition training data;
S22, performing secondary processing on the first processed data through the neural network input module to obtain second processed data, wherein the secondary processing specifically comprises converting the first image recognition training data and the first modulation recognition training data into an image recognition training set and a modulation recognition training set with the same data dimension, and mixing the image recognition training set and the modulation recognition training set to obtain the second processed data;
S23, performing a nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
and S24, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image recognition training result and a modulation recognition training result, and outputting the image recognition training result and the modulation recognition training result.
4. The intelligent aircraft multitasking processing method according to claim 3, wherein said step S21, before performing data dimension conversion on said image recognition training data, further comprises performing random cropping, random flipping and normalization processing on said image recognition training data.
5. The intelligent aircraft multitasking processing method according to claim 3, wherein the data dimension conversion of the modulation recognition training data in step S21 specifically comprises performing matrix conversion on the modulation recognition training data to obtain 2 × 32 modulation recognition matrix training data, and then normalizing the modulation recognition matrix training data to obtain the first modulation recognition training data.
6. The intelligent aircraft multitasking processing method according to claim 3, wherein said training data includes image recognition training data and modulation recognition training data, which are obtained according to a preset ratio.
7. The intelligent aircraft multitasking processing method according to claim 6, wherein the second processed data specifically comprises a plurality of batches of data, the ratio of the image recognition training set to the modulation recognition training set in each batch of data is the preset ratio, and the image recognition training set and the modulation recognition training set have corresponding specified positions in each batch of data.
8. The intelligent aircraft multitasking processing method according to claim 7, wherein said step S24 specifically comprises the following substeps:
S241, determining all data corresponding to the image recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the image recognition task into a tensor corresponding to the image recognition task; determining all data corresponding to the modulation recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the modulation recognition task into a tensor corresponding to the modulation recognition task;
S242, identifying the tensor corresponding to the image recognition task through an image recognition separation layer in the neural network separation and identification module to obtain an image recognition training result, and outputting the image recognition training result; and simultaneously, identifying the tensor corresponding to the modulation recognition task through a modulation recognition separation layer in the neural network separation and identification module to obtain a modulation recognition training result, and outputting the modulation recognition training result.
9. The intelligent aircraft multitasking processing method according to claim 2, wherein said step S3 specifically comprises the following substeps:
S31, acquiring image recognition data and modulation recognition data in real time;
S32, preprocessing the image recognition data and the modulation recognition data through the data preprocessing module to obtain first image recognition data and first modulation recognition data;
S33, performing secondary processing on the first image recognition data and the first modulation recognition data through the neural network input module to obtain an image recognition data set and a modulation recognition data set, and mixing the two data sets according to the specified position to obtain second processed data;
S34, performing a nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
and S35, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image recognition result and a modulation recognition result, and outputting the image recognition result and the modulation recognition result.
CN202110852719.8A 2021-07-27 2021-07-27 Multitasking method for intelligent aircraft Active CN113505851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852719.8A CN113505851B (en) 2021-07-27 2021-07-27 Multitasking method for intelligent aircraft


Publications (2)

Publication Number Publication Date
CN113505851A 2021-10-15
CN113505851B CN113505851B (en) 2023-01-31

Family

ID=78014367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110852719.8A Active CN113505851B (en) 2021-07-27 2021-07-27 Multitasking method for intelligent aircraft

Country Status (1)

Country Link
CN (1) CN113505851B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009525A (en) * 2017-12-25 2018-05-08 Beihang University UAV ground-specific target recognition method based on convolutional neural networks
CN108182427A (en) * 2018-01-30 2018-06-19 University of Electronic Science and Technology of China Face recognition method based on deep learning model and transfer learning
CN111814963A (en) * 2020-07-17 2020-10-23 Institute of Microelectronics, Chinese Academy of Sciences Image recognition method based on deep neural network model parameter modulation
CN112613581A (en) * 2020-12-31 2021-04-06 South China Institute of Software Engineering, Guangzhou University Image recognition method, system, computer equipment and storage medium
CN112887239A (en) * 2021-02-15 2021-06-01 Qingdao University of Science and Technology Method for rapidly and accurately identifying underwater acoustic signal modulation mode based on deep hybrid neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CAO YONGFENG et al.: "Research on Computer Intelligent Image Recognition Technology Based on GA-BP Neural Network", Applied Laser *
GAN JUNYING et al.: "Face Recognition Based on BP Neural Network", Systems Engineering and Electronics *

Also Published As

Publication number Publication date
CN113505851B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
US20230186056A1 (en) Grabbing detection method based on rp-resnet
CN108171701B (en) Significance detection method based on U network and counterstudy
CN110147834A (en) Fine granularity image classification method based on rarefaction bilinearity convolutional neural networks
EP3843004A1 (en) Portrait segmentation method, model training method and electronic device
CN111178312B (en) Face expression recognition method based on multi-task feature learning network
CN113642445B (en) Hyperspectral image classification method based on full convolution neural network
CN111680739A (en) Multi-task parallel method and system for target detection and semantic segmentation
CN114863539A (en) Portrait key point detection method and system based on feature fusion
CN114863229A (en) Image classification method and training method and device of image classification model
CN111160378A (en) Depth estimation system based on single image multitask enhancement
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN112016592B (en) Domain adaptive semantic segmentation method and device based on cross domain category perception
CN113505851B (en) Multitasking method for intelligent aircraft
CN110490876B (en) Image segmentation method based on lightweight neural network
CN116796287A (en) Pre-training method, device, equipment and storage medium for graphic understanding model
CN114494782B (en) Image processing method, model training method, related device and electronic equipment
CN111126173A (en) High-precision face detection method
WO2022111231A1 (en) Cnn training method, electronic device, and computer readable storage medium
CN115359294A (en) Cross-granularity small sample learning method based on similarity regularization intra-class mining
CN114708434A (en) Cross-domain remote sensing image semantic segmentation method based on adaptation and self-training in iterative domain
CN114581789A (en) Hyperspectral image classification method and system
Pang et al. PTRSegNet: A Patch-to-Region Bottom-Up Pyramid Framework for the Semantic Segmentation of Large-Format Remote Sensing Images
CN115861605A (en) Image data processing method, computer equipment and readable storage medium
CN111241924B (en) Face detection and alignment method, device and storage medium based on scale estimation
CN111931773B (en) Image recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant