CN113505851B - Multitasking method for intelligent aircraft - Google Patents

Multitasking method for intelligent aircraft

Info

Publication number
CN113505851B
Authority
CN
China
Prior art keywords
data
modulation
identification
neural network
image
Prior art date
Legal status
Active
Application number
CN202110852719.8A
Other languages
Chinese (zh)
Other versions
CN113505851A (en)
Inventor
周军 (Zhou Jun)
岑华峰 (Cen Huafeng)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority application: CN202110852719.8A
Publication of application: CN113505851A
Application granted; publication of grant: CN113505851B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a multitasking method for an intelligent aircraft. A neural network is set up in the intelligent aircraft, training data are obtained, and the neural network is trained on those data; an image recognition task and a modulation recognition task are then executed simultaneously with the trained network. The neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation and identification module, and the training data comprise image recognition training data and modulation recognition training data. The intelligent aircraft can therefore process several tasks simultaneously with a single neural network instead of carrying several programs, which reduces its size.

Description

Multitasking method for intelligent aircraft
Technical Field
The invention belongs to the technical field of intelligent aircrafts, and particularly relates to a multitasking method of an intelligent aircraft.
Background
Intelligent aircraft are mainly applied to military reconnaissance and strike operations, and their common processing tasks include an image recognition task and a modulation recognition task. However, current intelligent aircraft are still at the perception stage of intelligent-machine development: the machine cannot, like a human, receive and process multiple coherent or incoherent streams of information at the same time, and most intelligent aircraft still use a dedicated program to solve each given problem.
In the prior art, an intelligent aircraft usually carries several programs so that it can handle tasks such as image recognition and modulation recognition at the same time. Carrying several programs, however, increases the volume of the intelligent aircraft and thereby greatly raises its manufacturing cost and difficulty.
Therefore, how to reduce the number of programs carried by the intelligent aircraft, and hence its size, while still allowing it to process several tasks simultaneously, is the technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to solve the technical problem that, in the prior art, an intelligent aircraft must carry several programs in order to process several tasks simultaneously, and provides a multitasking method for an intelligent aircraft.
The technical scheme of the invention is as follows: a multitasking method for an intelligent aircraft comprises the following steps:
s1, establishing a neural network in the intelligent aircraft;
s2, acquiring training data, and training the neural network through the training data;
and S3, simultaneously executing an image recognition task and a modulation recognition task based on the trained neural network.
The neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation and identification module which are connected in sequence, and the training data comprise image recognition training data and modulation recognition training data.
Further, the step S2 specifically includes the following sub-steps:
s21, preprocessing the training data through the data preprocessing module to obtain first processed data, wherein the preprocessing specifically includes data dimension conversion of the image recognition training data and the modulation recognition training data, and the first processed data includes first image recognition training data and first modulation recognition training data;
s22, performing secondary processing on the first processed data through the neural network input module to obtain second processed data, wherein the secondary processing is specifically to convert the first image recognition training data and the first modulation recognition training data into an image recognition training set and a modulation recognition training set with the same data dimension, and mix the image recognition training set and the modulation recognition training set with the same data dimension to obtain second processed data;
s23, carrying out nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
and S24, separating the mixed output data through the neural network separation and identification module to obtain an image recognition training result and a modulation recognition training result, and outputting the image recognition training result and the modulation recognition training result.
Further, before performing data dimension conversion on the image recognition training data in step S21, the method further includes performing random cropping, random flipping and normalization processing on the image recognition training data.
Further, the step S21 of performing data dimension conversion on the modulation recognition training data specifically includes performing matrix conversion on the modulation recognition training data to obtain 2 × 32 × 32 modulation recognition matrix training data, and then performing normalization processing on the modulation recognition matrix training data to obtain the first modulation recognition training data.
Further, the image recognition training data and the modulation recognition training data in the training data are acquired according to a preset ratio.
Further, the second processed data is multiple batches of data; the ratio of the image recognition training set to the modulation recognition training set in each batch of data equals the preset ratio, and the image recognition training set and the modulation recognition training set occupy corresponding designated positions in each batch of data.
Further, the step S24 specifically includes the following sub-steps:
s241, determining all data corresponding to the image recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the image recognition task into a tensor corresponding to the image recognition task; determining all data corresponding to the modulation recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the modulation recognition task into a tensor corresponding to the modulation recognition task;
s242, identifying the tensor corresponding to the image identification task through the image identification separation layer in the neural network separation identification module to obtain an image identification training result, and outputting the image identification training result; and simultaneously, identifying the tensor corresponding to the modulation identification task through a modulation identification separation layer in the neural network separation identification module to obtain a modulation identification training result, and outputting the modulation identification training result.
Further, the step S3 specifically includes the following sub-steps:
s31, acquiring image identification data and modulation identification data in real time;
s32, preprocessing the image identification data and the modulation identification data through the data preprocessing module to obtain first image identification data and first modulation identification data;
s33, carrying out secondary processing on the first image identification data and the first modulation identification data through the neural network input module to obtain an image identification data set and a modulation identification data set, and mixing the image identification data set and the modulation identification data set according to a specified position to obtain second processing data;
s34, carrying out nonlinear operation on the second processing data through the neural network main body module to obtain mixed output data;
s35, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image identification result and a modulation identification result, and outputting the image identification result and the modulation identification result.
Compared with the prior art, the invention has the following beneficial effects:
(1) According to the invention, the neural network is set in the intelligent aircraft, then the training data is obtained, the neural network is trained through the training data, and then the image recognition task and the modulation recognition task are simultaneously executed based on the trained neural network, wherein the neural network comprises the data preprocessing module, the neural network input module, the neural network main body module and the neural network separation and identification module, the training data comprises the image recognition training data and the modulation recognition training data, so that the intelligent aircraft can simultaneously process a plurality of tasks by using one neural network without carrying a plurality of programs, and the size of the intelligent aircraft is reduced.
(2) According to the invention, the image recognition training data and the modulation recognition training data in the training data are obtained according to a preset ratio and are mixed according to that same ratio, so that no extra data has to be inserted during mixing, the data ratio of image recognition training data to modulation recognition training data in each batch of data equals the preset ratio, and the situation where part of the data cannot fill a batch at the preset ratio during training is avoided.
(3) According to the invention, the designated position is set during mixing to mix the image recognition training data and the modulation recognition training data, so that the image recognition training data and the modulation recognition training data can be separated during recognition, and the corresponding recognition training results can be obtained by recognizing through the corresponding separation layer, so that the image recognition training data and the modulation recognition training data can be processed through the same neural network, and the corresponding recognition results can be obtained.
Drawings
Fig. 1 is a schematic flowchart illustrating a multitasking method for an intelligent aircraft according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a preset ratio and an assigned position of image recognition training data and modulation recognition training data in each batch of data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As described in the background art, when a prior-art intelligent aircraft needs to perform several processing tasks, it usually carries several processing programs, which requires more facilities such as corresponding motherboard hardware and therefore makes the intelligent aircraft larger, whereas the design requirement for an intelligent aircraft is: the smaller, the better.
Therefore, the present application provides a multitasking method for an intelligent aircraft, and as shown in fig. 1, a flowchart of the multitasking method for the intelligent aircraft provided in the embodiment of the present application is shown, where the method includes the following steps:
s1, establishing a neural network in the intelligent aircraft.
And S2, acquiring training data, and training the neural network through the training data.
In the embodiment of the application, the neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation identification module which are sequentially connected, and the training data comprises image recognition training data and modulation recognition training data.
In the embodiment of the present application, the step S2 specifically includes the following sub-steps:
s21, preprocessing the training data through the data preprocessing module to obtain first processed data, wherein the preprocessing specifically includes data dimension conversion of the image recognition training data and the modulation recognition training data, and the first processed data includes first image recognition training data and first modulation recognition training data;
s22, performing secondary processing on the first processed data through the neural network input module to obtain second processed data, wherein the secondary processing specifically comprises converting the first image recognition training data and the first modulation recognition training data into an image recognition training set and a modulation recognition training set with the same data dimension, and mixing the image recognition training set and the modulation recognition training set with the same data dimension to obtain second processed data;
s23, carrying out nonlinear operation on the second processing data through the neural network main body module to obtain mixed output data;
and S24, separating the mixed output data through the neural network separation and identification module to obtain an image recognition training result and a modulation recognition training result, and outputting the image recognition training result and the modulation recognition training result.
In this embodiment of the application, before performing data dimension conversion on the image recognition training data in step S21, the method further includes performing random cropping, random flipping, and normalization processing on the image recognition training data.
In a specific application scenario, data augmentation can effectively improve the image recognition result. We therefore apply the following operations to the image recognition training data: random cropping, random flipping and standardization. Random cropping means first padding the original image on all sides with zero-valued pixels and then cropping a region of the padded image at random; the number of padding turns and the crop size are configurable. When processing a CIFAR-10 image, 4 turns of padding are used and the crop size is 32 × 32.
There are a number of ways to flip randomly. A random horizontal flip is used here, i.e. the image is flipped in the horizontal direction with probability p.
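A minimal numpy sketch of this augmentation pipeline (the padding width of 4, the 32 × 32 crop and the flip probability follow the CIFAR-10 settings above; the function names are illustrative):

```python
import numpy as np

def random_crop(img, pad=4):
    """Zero-pad `pad` pixels on every side, then crop back to the original
    size at a random offset (the '4 padding turns, 32x32 crop' setting)."""
    c, h, w = img.shape
    padded = np.pad(img, ((0, 0), (pad, pad), (pad, pad)), mode="constant")
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[:, top:top + h, left:left + w]

def random_hflip(img, p=0.5):
    """Flip the image in the horizontal direction with probability p."""
    return img[:, :, ::-1] if np.random.rand() < p else img

img = np.random.rand(3, 32, 32)   # one CIFAR-10-like sample, C x H x W
aug = random_hflip(random_crop(img))
print(aug.shape)                  # (3, 32, 32)
```

The crop keeps the output size equal to the input size, so augmented and original samples can share one training pipeline.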
The normalization operation is as follows:
x′ = (x − μ) / σ
where μ is the mean of the total samples, σ is the standard deviation of the total samples, and x is a particular sample data in the total samples.
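The standardization above can be sketched in numpy (illustrative; μ and σ are taken over the whole sample set, as defined):

```python
import numpy as np

def standardize(x, mu, sigma):
    """The standardization above: x' = (x - mu) / sigma."""
    return (x - mu) / sigma

samples = np.random.rand(100, 3, 32, 32)  # the "total samples"
mu = samples.mean()                       # mean of the total samples
sigma = samples.std()                     # standard deviation of the total samples
normed = standardize(samples, mu, sigma)
print(abs(normed.mean()) < 1e-9, abs(normed.std() - 1) < 1e-9)  # True True
```

After the operation the data has zero mean and unit standard deviation, which is what makes the subsequent training less sensitive to the input scale.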
By this operation, the robustness of image recognition is significantly improved. For the modulation recognition data, a dimension conversion operation turns each sample into one whose number of data points is 2/3 of that of an image recognition sample.
During training, the modulation recognition data set is 2018.01.OSC.0001; each sample in that data set has 1024 × 2 data points, i.e. the I and Q paths are each sampled at 1024 points. Each sample in the image recognition training data has 32 × 32 × 3 data points, i.e. the R, G, B channels each hold 32 × 32 = 1024 data points. So when using these two data sets, only the sample dimensions of the modulation recognition training set need to be reworked with reshape after exchanging the dimensions; the reshape function transforms a given matrix into a matrix of the specified dimensions. The data dimension conversion can be viewed as a matrix transpose: after the original data is converted into a 2 × 1024 matrix by transposition, reshape converts it into 2 × 32 × 32 data.
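The transpose-then-reshape conversion can be sketched in numpy (the array here is random stand-in data, not actual samples from the 2018.01.OSC.0001 data set):

```python
import numpy as np

iq = np.random.randn(1024, 2)           # one modulation sample: 1024 points on each of I and Q
chan_first = iq.T                       # exchange the dimensions -> a 2 x 1024 matrix
sample = chan_first.reshape(2, 32, 32)  # reshape -> 2 x 32 x 32, same per-channel size
                                        # as a 3 x 32 x 32 image
print(sample.shape)                     # (2, 32, 32)
```

The transpose puts the two I/Q channels first, matching the channels-first layout of the image data, and reshape only reinterprets the 1024 points per channel as a 32 × 32 grid without changing any values.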
If the channel data-point counts of the different task data sets differ, the images in the image recognition training data can be scaled by bilinear interpolation.
After data dimension conversion, i.e. preprocessing, the modulation recognition data set has two channels (the I and Q signals) and the image recognition data set has three channels (R, G, B), each channel holding 32 × 32 = 1024 points; after that, the modulation recognition data is also normalized. Through these preprocessing steps, two originally quite different kinds of data become training sets with the same data dimension in every channel, laying the foundation for the processing in the subsequent neural network input layer.
In this embodiment of the application, performing data dimension conversion on the modulation recognition training data in step S21 specifically includes performing matrix conversion on the modulation recognition training data to obtain 2 × 32 × 32 modulation recognition matrix training data, and then performing normalization processing on the modulation recognition matrix training data to obtain the first modulation recognition training data.
Most of the neural network training today is achieved by the BP algorithm. The BP algorithm consists of two parts, forward propagation of the signal and backward propagation of the error.
1. Forward propagation refers to the nonlinear operations performed, from the input layer to the output layer, between each layer's input data and that layer's weights.
2. Back propagation computes the error loss from the difference between the actual output and the theoretical output and transmits the error backwards from the output layer to the input layer; each layer then updates its weights according to the gradient descent algorithm:
the algorithm for calculating the loss used in the method adopts cross entropy, and the formula is as follows:
loss(x, class) = −x[class] + log( Σⱼ exp(x[j]) )
where class represents the theoretical classification result (taking natural numbers 1, 2, 3, … as values), x is a tensor of dimension [1, C], and C is the number of classes.
The gradient descent formula is as follows:
θᵢ ← θᵢ − α · ∂J(θ)/∂θᵢ
where J(θ) is the loss we compute, θᵢ is a weight of a certain layer in the neural network, and α is the set learning rate.
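The loss and the update rule above can be sketched together (illustrative numpy; the softmax-minus-one-hot gradient used here is the standard derivative of this cross entropy with respect to the scores, not something stated in the text):

```python
import numpy as np

def cross_entropy(x, cls):
    """The cross entropy above: loss = -x[cls] + log(sum_j exp(x[j]))."""
    return -x[cls] + np.log(np.exp(x).sum())

def gd_step(theta, grad, alpha=0.1):
    """One gradient-descent update: theta_i <- theta_i - alpha * dJ/dtheta_i."""
    return theta - alpha * grad

x = np.array([2.0, 0.5, -1.0])         # raw network scores for C = 3 classes
before = cross_entropy(x, 0)
softmax = np.exp(x) / np.exp(x).sum()  # gradient of the loss w.r.t. the scores
grad = softmax.copy()
grad[0] -= 1.0                         # ... is softmax(x) - one_hot(cls)
after = cross_entropy(gd_step(x, grad), 0)
print(after < before)                  # True: the update reduces the loss
```

A small step along the negative gradient lowers the loss, which is exactly the correction that back propagation applies layer by layer.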
As one would expect, if every gradient-descent correction were derived from a single sample, the descent direction would be controlled entirely by that sample, and two successive corrections could easily conflict or even point in opposite directions. Therefore, in most machine learning a certain number of samples is selected to form a batch of data, and the batch is used as the data source for gradient descent; the batch better represents the sample population and greatly reduces the risk of conflicting correction directions.
In a fusion-network experiment applying this method to image recognition and modulation recognition, the first image recognition training data in the first processed data has dimension 3 × 32 × 32 and the first modulation recognition training data has dimension 2 × 32 × 32. The first processed data is input to the neural network input module, where a convolutional layer converts the first image recognition training data and the first modulation recognition training data into data with exactly the same data dimension, 64 × 32 × 32.
In addition, so that every directional correction of the network's gradient descent is influenced by the data of several tasks at once, each batch in the gradient-descent process must, according to the above theory, contain data from all of the tasks. And because the network mixes several tasks, the gradient-descent property means that the influence of the different tasks on the model can be tuned by adjusting the data ratio of each task within each batch of data: by adjusting this ratio the model can be made to favour one task and de-emphasise the others, improving a specific task's result at the cost of some accuracy on the other tasks.
In this embodiment of the application, the image recognition training data and the modulation recognition training data in the training data are obtained according to a preset ratio, the second processing data is specifically multiple batches of data, the ratio of the image recognition training set to the modulation recognition training set in each batch of data is the preset ratio, and the image recognition training set and the modulation recognition training set in each batch of data have corresponding designated positions.
For each batch of data, the composition of the image recognition training set and the modulation recognition training set is as follows:
each batch must contain a sample size batch _ size that is a multiple of the number of groups grouped by a predetermined ratio (e.g., a batch is divided into N + M groups if the predetermined ratio of a task to B task is N: M). Then, according to a predetermined ratio, N a task data and M B task data are spliced into a new tensor, and the tensors are continuously integrated until the sample size reaches a predetermined batch _ size, as shown in fig. 2, a schematic diagram of a predetermined ratio and a predetermined position of an image recognition training set and a modulation recognition training set in each batch of data according to the embodiment of the present invention is shown, the electromagnetic data in fig. 2 is also referred to as a modulation recognition training set, and the image data is also referred to as an image recognition training set. In this way, we make the proportion of each task in batch the same as that specified in advance.
The overall proportion of the training data must also equal the proportion specified for each batch. In this way the ratio within every batch stays equal to the specified ratio, which avoids the situation where, once the data of one task runs out, the last few batches are filled only with the remaining tasks' data; that is, the generation of redundant data is avoided.
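The batch-assembly rule above can be sketched as follows (an illustrative helper; the function name, the feature width of 4 and the 1:1 example ratio are assumptions for the sketch):

```python
import numpy as np

def mix_batches(task_a, task_b, n, m, batch_size):
    """Assemble batches in which every group of n + m consecutive samples
    holds n task-A samples followed by m task-B samples; these fixed
    in-group positions are the 'designated positions' used later to
    separate the mixed output."""
    assert batch_size % (n + m) == 0
    groups = batch_size // (n + m)
    batches, ia, ib = [], 0, 0
    while ia + groups * n <= len(task_a) and ib + groups * m <= len(task_b):
        parts = []
        for _ in range(groups):
            parts.append(task_a[ia:ia + n]); ia += n
            parts.append(task_b[ib:ib + m]); ib += m
        batches.append(np.concatenate(parts))
    return batches

a = np.zeros((8, 4))   # task-A samples (all zeros, easy to spot)
b = np.ones((8, 4))    # task-B samples (all ones)
batch = mix_batches(a, b, n=1, m=1, batch_size=8)[0]
print(batch[:, 0])     # [0. 1. 0. 1. 0. 1. 0. 1.]
```

Because the loop stops as soon as either task can no longer fill a full group, every emitted batch keeps exactly the preset ratio and no batch is padded with leftover single-task data.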
In addition, when the neural network is trained, the preset proportion can be adjusted for training for multiple times.
In the embodiment of the present application, the step S24 specifically includes the following sub-steps:
s241, determining all data corresponding to an image recognition task in the mixed output data according to the specified position, and splicing all data corresponding to the image recognition task into a tensor corresponding to the image recognition task; determining all data corresponding to the modulation recognition task in the mixed output data according to the specified position, and splicing all the data corresponding to the modulation recognition task into a tensor corresponding to the modulation recognition task;
s242, identifying the tensor corresponding to the image identification task through the image identification separation layer in the neural network separation and identification module to obtain an image identification training result, and outputting the image identification training result; and simultaneously, identifying the tensor corresponding to the modulation identification task through a modulation identification separation layer in the neural network separation identification module to obtain a modulation identification training result, and outputting the modulation identification training result.
In a specific application scenario, the neural network input module combines the image recognition training set and the modulation recognition training set in order, according to the designated positions and the preset ratio, so that every model update, i.e. every gradient-descent step, is influenced by all tasks. The data dimension of the second processed data is [batch_size, 64, 32, 32]. The second processed data is sent to the neural network main body and, through complex nonlinear operations, yields mixed output data of dimension [batch_size, 512]. The output results corresponding to each task are then extracted in reverse from the mixed output data according to the previously designated positions and spliced into one uniform tensor per task. Assuming a two-task fusion neural network with a mixing ratio of 1:1, the tensor dimension for each task is [batch_size/2, 512]; the separated data corresponding to each task is sent to the separation layer corresponding to that task, producing the final recognition results with dimensions [batch_size/2, 1, class1] and [batch_size/2, 1, class2], where class1 and class2 are the respective numbers of classes.
That is, all data belonging to a given task is extracted according to the positions designated during mixing and spliced into one uniform tensor, which is sent to the separation layer corresponding to that task. The separation layers all consist of fully connected layers and recognize the separated data of the different tasks independently. The data obtained after the neural network separation and identification module has dimension [batch_size_mask, class], where batch_size_mask is the sample count of one specific task within a batch and class is the number of classes required by that task. The class to be recognized is determined by the maximum value in the final output tensor, because the network's output tensor expresses how similar the object to be classified is to each class: for example, for an output of dimension 1 × 10, each value expresses the similarity of the target to one specific class. The maximum value marks the most similar class, so the class corresponding to the maximum value is the predicted class.
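The separate-then-identify step can be sketched as follows (illustrative numpy; the class counts 10 and 24 and the single weight matrix per "separation layer" are stand-ins for the fully connected layers described above):

```python
import numpy as np

def separate(mixed, n, m):
    """Recover each task's rows from the mixed output using the designated
    positions (each group of n + m rows holds n task-A rows, then m task-B
    rows), splicing every task's rows into one uniform tensor."""
    pos = np.arange(len(mixed)) % (n + m)
    return mixed[pos < n], mixed[pos >= n]

rng = np.random.default_rng(0)
mixed = rng.standard_normal((8, 512))         # mixed output data, [batch_size, 512]
W_img = rng.standard_normal((512, 10))        # image separation layer (class1 = 10)
W_mod = rng.standard_normal((512, 24))        # modulation separation layer (class2 = 24)
img_rows, mod_rows = separate(mixed, 1, 1)    # 1:1 mixing ratio -> [batch_size/2, 512] each
img_pred = (img_rows @ W_img).argmax(axis=1)  # class of the maximum value = predicted class
mod_pred = (mod_rows @ W_mod).argmax(axis=1)
print(img_rows.shape, img_pred.shape)         # (4, 512) (4,)
```

Because the designated positions are fixed at mixing time, the same index pattern deterministically routes each row of the shared network's output to the right task-specific layer.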
And S3, simultaneously executing an image recognition task and a modulation recognition task based on the trained neural network.
In the embodiment of the present application, the step S3 specifically includes the following sub-steps:
s31, acquiring image identification data and modulation identification data in real time;
s32, preprocessing the image identification data and the modulation identification data through the data preprocessing module to obtain first image identification data and first modulation identification data;
s33, carrying out secondary processing on the first image identification data and the first modulation identification data through the neural network input module to obtain an image identification data set and a modulation identification data set, and mixing the image identification data set and the modulation identification data set according to a specified position to obtain second processing data;
s34, carrying out nonlinear operation on the second processing data through the neural network main body module to obtain mixed output data;
s35, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image identification result and a modulation identification result, and outputting the image identification result and the modulation identification result.
Specifically, the trained neural network parameters are fixed. In actual operation, the image recognition data and the modulation recognition data are collected and processed following the same flow as in training, except that they need not be mixed according to the preset ratio: they only need to be mixed according to the designated positions, and when the mixed output data is obtained, all data corresponding to the image recognition task or the modulation recognition task is separated out according to those designated positions.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of protection is not limited to the specifically recited embodiments and examples. Those skilled in the art, having the benefit of this disclosure, may make numerous modifications and changes without departing from the scope of the invention.

Claims (2)

1. A multitasking method for an intelligent aircraft, characterized in that the method comprises the following steps:
s1, establishing a neural network in the intelligent aircraft;
s2, acquiring training data, and training the neural network through the training data:
S21, preprocessing the training data through the data preprocessing module to obtain first processed data, wherein image recognition training data and modulation recognition training data in the training data are obtained according to a preset proportion, the preprocessing specifically comprises data dimension conversion of the image recognition training data and the modulation recognition training data, and the first processed data comprises first image recognition training data and first modulation recognition training data;
the data dimension conversion of the modulation recognition training data specifically comprises performing matrix conversion on the modulation recognition training data to obtain 2 × 32 modulation recognition matrix training data, and then performing standardization processing on the modulation recognition matrix training data to obtain the first modulation recognition training data;
S22, performing secondary processing on the first processed data through the neural network input module to obtain second processed data, wherein the secondary processing specifically comprises converting the first image recognition training data and the first modulation recognition training data into an image recognition training set and a modulation recognition training set with the same data dimension, and mixing the image recognition training set and the modulation recognition training set with the same data dimension to obtain the second processed data;
the second processed data is specifically multi-batch data, the proportion of the image recognition training set to the modulation recognition training set in each batch of data is the preset proportion, and the image recognition training set and the modulation recognition training set in each batch of data have corresponding specified positions;
S23, performing a nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
S24, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image recognition training result and a modulation recognition training result, and outputting the image recognition training result and the modulation recognition training result:
S241, determining all data corresponding to an image recognition task in the mixed output data according to the specified positions, and splicing all data corresponding to the image recognition task into a tensor corresponding to the image recognition task; meanwhile, determining all data corresponding to a modulation recognition task in the mixed output data according to the specified positions, and splicing all data corresponding to the modulation recognition task into a tensor corresponding to the modulation recognition task;
S242, identifying the tensor corresponding to the image recognition task through an image recognition separation layer in the neural network separation and identification module to obtain the image recognition training result, and outputting the image recognition training result; simultaneously, identifying the tensor corresponding to the modulation recognition task through a modulation recognition separation layer in the neural network separation and identification module to obtain the modulation recognition training result, and outputting the modulation recognition training result;
s3, simultaneously executing an image recognition task and a modulation recognition task based on the trained neural network:
the neural network comprises a data preprocessing module, a neural network input module, a neural network main body module and a neural network separation identification module which are sequentially connected, and the training data comprises image recognition training data and modulation recognition training data
The step of simultaneously executing the image recognition task and the modulation recognition task specifically comprises the following sub-steps:
S31, acquiring image recognition data and modulation recognition data in real time;
S32, preprocessing the image recognition data and the modulation recognition data through the data preprocessing module to obtain first image recognition data and first modulation recognition data;
S33, performing secondary processing on the first image recognition data and the first modulation recognition data through the neural network input module to obtain an image recognition data set and a modulation recognition data set, and mixing the image recognition data set and the modulation recognition data set according to the specified positions to obtain second processed data;
S34, performing a nonlinear operation on the second processed data through the neural network main body module to obtain mixed output data;
S35, separating and identifying the mixed output data through the neural network separation and identification module to obtain an image recognition result and a modulation recognition result, and outputting the image recognition result and the modulation recognition result.
2. The intelligent aircraft multitasking method according to claim 1, wherein before performing data dimension conversion on the image recognition training data in the step S21, the method further comprises performing random cropping, random flipping and standardization on the image recognition training data.
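A minimal sketch of the modulation-data preprocessing recited in claim 1 follows. The 64-sample record length and the global zero-mean/unit-variance standardization are assumptions; the claim fixes only the 2 × 32 target shape and that standardization follows the matrix conversion:

```python
import numpy as np

def preprocess_modulation(record, eps=1e-8):
    """Perform the claimed data dimension conversion on one modulation
    record: matrix conversion to 2 x 32 (assumed here to be e.g. the I and
    Q channels of a 64-sample record), then standardization to zero mean
    and unit variance."""
    mat = np.asarray(record, dtype=np.float64).reshape(2, 32)
    return (mat - mat.mean()) / (mat.std() + eps)

raw = np.arange(64, dtype=np.float64)  # hypothetical 64-sample record
out = preprocess_modulation(raw)
print(out.shape)  # (2, 32)
```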
CN202110852719.8A 2021-07-27 2021-07-27 Multitasking method for intelligent aircraft Active CN113505851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110852719.8A CN113505851B (en) 2021-07-27 2021-07-27 Multitasking method for intelligent aircraft

Publications (2)

Publication Number Publication Date
CN113505851A CN113505851A (en) 2021-10-15
CN113505851B true CN113505851B (en) 2023-01-31

Family

ID=78014367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110852719.8A Active CN113505851B (en) 2021-07-27 2021-07-27 Multitasking method for intelligent aircraft

Country Status (1)

Country Link
CN (1) CN113505851B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182427A (en) * 2018-01-30 2018-06-19 University of Electronic Science and Technology of China A face recognition method based on a deep learning model and transfer learning
CN112613581A (en) * 2020-12-31 2021-04-06 South China Institute of Software Engineering, Guangzhou University Image recognition method, system, computer equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009525B (en) * 2017-12-25 2018-10-12 Beihang University A ground-specific target recognition method for unmanned aerial vehicles based on convolutional neural networks
CN112887239B (en) * 2021-02-15 2022-04-26 Qingdao University of Science and Technology Method for rapidly and accurately identifying underwater acoustic signal modulation mode based on deep hybrid neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Face Recognition Based on BP Neural Network; Gan Junying et al.; Systems Engineering and Electronics; 2003-01-20 (No. 01); 113-115 *
Research on Computer Intelligent Image Recognition Technology Based on GA-BP Neural Network; Cao Yongfeng et al.; Applied Laser; 2017-02-15 (No. 01); 139-143 *

Similar Documents

Publication Publication Date Title
US20230186056A1 (en) Grabbing detection method based on rp-resnet
EP3843004A1 (en) Portrait segmentation method, model training method and electronic device
CN112634276A (en) Lightweight semantic segmentation method based on multi-scale visual feature extraction
CN110147834A (en) Fine granularity image classification method based on rarefaction bilinearity convolutional neural networks
EP3989104A1 (en) Facial feature extraction model training method and apparatus, facial feature extraction method and apparatus, device, and storage medium
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN111476133B (en) Unmanned driving-oriented foreground and background codec network target extraction method
US20230162477A1 (en) Method for training model based on knowledge distillation, and electronic device
CN111680739A (en) Multi-task parallel method and system for target detection and semantic segmentation
CN111241924A (en) Face detection and alignment method and device based on scale estimation and storage medium
CN114863539A (en) Portrait key point detection method and system based on feature fusion
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN112016592B (en) Domain adaptive semantic segmentation method and device based on cross domain category perception
CN113505851B (en) Multitasking method for intelligent aircraft
CN117475150A (en) Efficient semantic segmentation method based on SAC-UNet
CN116796287A (en) Pre-training method, device, equipment and storage medium for graphic understanding model
CN114494782B (en) Image processing method, model training method, related device and electronic equipment
CN116311349A (en) Human body key point detection method based on lightweight neural network
CN114627370A (en) Hyperspectral image classification method based on TRANSFORMER feature fusion
CN115861605A (en) Image data processing method, computer equipment and readable storage medium
CN113963390A (en) Deformable convolution combined incomplete human face image restoration method based on generation countermeasure network
Pang et al. PTRSegNet: A Patch-to-Region Bottom-Up Pyramid Framework for the Semantic Segmentation of Large-Format Remote Sensing Images
CN117409431B (en) Multi-mode large language model training method, electronic equipment and storage medium
CN113723289B (en) Image processing method, device, computer equipment and storage medium
CN116071625B (en) Training method of deep learning model, target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant