CN115952466A - Communication radiation source cross-mode identification method based on multi-mode information fusion - Google Patents

Communication radiation source cross-mode identification method based on multi-mode information fusion

Info

Publication number
CN115952466A
CN115952466A
Authority
CN
China
Prior art keywords
mode
information
graph
radiation source
modal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210750915.9A
Other languages
Chinese (zh)
Inventor
利强
李晓帆
潘晔
林静然
胡全
邵怀宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210750915.9A priority Critical patent/CN115952466A/en
Publication of CN115952466A publication Critical patent/CN115952466A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a cross-mode identification method for communication radiation sources based on multi-modal information fusion. Multi-modal features are represented as nodes of an inter-modal relationship graph, and the edges between the modal nodes are constructed from a prior model. Information is exchanged between modal nodes through a graph convolutional neural network, and node information is fused by a graph-contraction technique, overcoming the traditional algorithm's over-reliance on electromagnetic-data features and allowing the multi-modal information to participate effectively in cross-mode identification of the communication radiation source. Compared with traditional algorithms, cross-mode recognition capability is greatly improved while same-mode recognition capability remains essentially unchanged. Using graph contraction and the multi-modal information of the communication radiation source, the inter-modal relationship graph is contracted into a single feature vector, realizing cross-mode identification of the communication radiation source based on multi-modal information fusion.

Description

Communication radiation source cross-mode identification method based on multi-mode information fusion
Technical Field
The invention relates to the fields of communications and artificial intelligence, and in particular to a cross-mode recognition technique for communication radiation sources based on multi-modal information fusion.
Background
A communication radiation source mode refers to the combination of settings a communication radiation source uses to realize communication, such as the channel access mode, fixed frequency, transmission rate, and modulation pattern. A communication radiation source often has multiple modes. In general, it operates in a few fixed modes; in emergency situations, such as wartime, other operating modes may be selected. The task of cross-mode identification is to identify the type of a communication radiation source from data recorded in an emergency mode, using only data recorded in its normal modes.
Typical communication radiation source identification methods include expert-system-based methods and data-driven methods.
The expert-system-based method builds a professional communication radiation source database, extracts features of the radiation source using expert domain knowledge, and identifies the radiation source within the same mode. This approach depends on domain experts, and system performance is largely determined by the manually extracted expert knowledge. For variable cross-mode identification, it is difficult to extract the decision knowledge manually.
The data-driven method trains a neural network on signal data of known modes, so that signals in known modes can be classified, completing same-mode identification, while cross-mode identification is approximated by fitting. This method requires a large amount of electromagnetic data from known modes. Its recognition capability on same-mode data is excellent, but on cross-mode data, where such data are lacking, recognition capability degrades substantially.
Disclosure of Invention
To address the insufficient cross-mode identification capability of existing methods based on single-modality data, the invention introduces the multi-modal information of the communication radiation source, constructs inter-modal relationship-graph structured data, and fuses the modal information whose cross-mode characteristics are stable with the traditional communication data, providing a novel cross-mode identification method for communication radiation sources.
The technical solution adopted by the invention is a communication radiation source cross-mode identification method based on multi-modal information fusion, comprising the following steps:
Step one: construct and train a convolutional neural network as the multi-modal information fusion network to perform multi-modal feature extraction and cross-mode sample radiation-source classification. During training, a set of same-mode communication radiation source samples is used as the training set; it contains both multi-modal and single-modality samples, and the multi-modal samples include at least communication data information and modulation information, and optionally image information and text information.
Step two: extract the modal features. The modulation feature is composed of the modulation parameters; text information is mapped into text features by a text-space embedding method; communication-data features and image features are extracted from the communication data information and image information by an intermediate layer of the convolutional neural network.
Step three: construct the inter-modal relationship graph from the prior model. Each extracted feature is a node; every feature node has an edge to the modulation-feature node, and when there are two or more text features, every pair of distinct text-feature nodes is connected by an edge.
Step four: sort the nodes of the inter-modal relationship graph by degree in descending order (the more edges a node has, the higher its degree), and group nodes of equal degree into a sub-graph region. Through multiple layers of graph convolution and multiple graph contractions, the multi-modal information fusion network contracts the nodes of the relationship graph into a single node that serves as the final cross-mode data vector: each graph-convolution layer exchanges feature information between nodes, and each graph contraction merges the node of the currently highest degree with the sub-graph region of the second-highest degree into one node. Finally, a fully connected layer reduces the dimension of the final cross-mode data vector.
Step five: input the cross-mode data vector into the softmax layer of the multi-modal information fusion network; the softmax layer outputs a decision vector, realizing cross-mode identification of the communication radiation source.
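The degree ordering and equal-degree grouping described in step four can be sketched as follows. This is a minimal illustration, not the patented implementation; the modality names are illustrative stand-ins, and the edge rule follows step three (every node connects to the modulation node, text nodes connect pairwise):

```python
from collections import defaultdict

nodes = ["modulation", "text1", "text2", "comm_data", "image"]

# Step-three edge rule: every feature node shares an edge with the
# modulation node; text-feature nodes are additionally connected pairwise.
edges = [("modulation", n) for n in nodes if n != "modulation"]
edges.append(("text1", "text2"))

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Step-four grouping: bucket equal-degree nodes into sub-graph regions,
# ordered from highest degree to lowest.
regions = defaultdict(list)
for n in nodes:
    regions[degree[n]].append(n)
ordered_regions = [regions[d] for d in sorted(regions, reverse=True)]
print(ordered_regions)
# [['modulation'], ['text1', 'text2'], ['comm_data', 'image']]
```

The modulation node (degree 4) ranks first, the two text nodes (degree 2) form the second-highest region, and the communication-data and image nodes (degree 1) form the last region, matching the ordering used later for graph contraction.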
The invention first collects the multi-modal information of the communication radiation source and extracts the features of each modality with a corresponding feature-extraction algorithm. The multi-modal features are represented as nodes of an inter-modal relationship graph, and the edge relations between the modal nodes are constructed from a prior model. Information is exchanged between modal nodes through a graph convolutional neural network, and node information is fused by a graph-contraction technique, overcoming the traditional algorithm's over-reliance on electromagnetic-data features and allowing the multi-modal information to participate effectively in cross-mode identification of the communication radiation source. Compared with traditional algorithms, cross-mode recognition capability is greatly improved while same-mode recognition capability remains essentially unchanged.
The inter-modal relationship graph of the invention consists of multi-modal features such as traditional communication-data features, modulation features, image features, and text features. The modal features serve as the nodes of the graph-structured data, and the inter-modal prior model supplies its edges. Graph contraction means that the two endpoints of an edge in the graph are gradually merged into one node through node-information exchange and fusion. Using graph contraction and the multi-modal information of the communication radiation source, the graph-structured data of the inter-modal relationship graph is contracted into a single feature vector, realizing cross-mode identification of the communication radiation source based on multi-modal information fusion.
The invention has the beneficial effects that:
1. Using only same-mode electromagnetic data and the other modal information of the communication radiation source, the method achieves excellent same-mode identification accuracy and respectable cross-mode identification accuracy.
2. Modal robustness. The method allows an identified sample to lack part of its modal information; good cross-modal identification accuracy is still obtained when text and image information are partially or entirely absent.
3. Strong extensibility. Under the prior model, not only the modalities above but also other multi-modal information can be introduced to participate in constructing the inter-modal relationship graph, and more modal information can be fused through additional graph-convolution and graph-contraction operations.
FIG. 1 is a diagram of the relationship between multiple modalities of an embodiment.
FIG. 2 is a flow chart of an embodiment method.
Detailed Description
The embodiment can be divided into five steps, as shown in fig. 2:
In step one, a convolutional neural network is used as the multi-modal information fusion network: it extracts features from the communication data information, modulation information, image information, and text information; compresses the modal information using the resulting communication-data, modulation, image, and text features; represents the multi-modal features as several one-dimensional vectors; and finally classifies the radiation-source type of a cross-mode sample. During training, a set of same-mode communication radiation source samples is used as the training set, which contains both multi-modal and single-modality samples; the multi-modal samples include at least communication data information and modulation information, and optionally image information and text information. Multi-modal and single-modality samples each account for half of the training set.
Using prior knowledge, a sample-center high-dimensional vector g_l is set for each mode type, where l is the mode-type index. A convolutional neural network is used, with hidden-layer weights denoted M_i, bias coefficients b_i, and activation functions activate_i, where i indexes the hidden layers from 1 to last. The network is then trained iteratively as follows:
1) Randomly initialize the hidden-layer weights M_i and bias coefficients b_i;
2) Compute the hidden-layer outputs layer by layer, output_i = activate_i(output_{i-1} × M_i + b_i) with output_0 = X, up to the last hidden-layer output output_last, where X is an input training sample;
3) Compute the network loss loss(output_last, g_l);
4) Update the hidden-layer weights M_i and bias coefficients b_i by back-propagating the loss.
Steps 2) to 4) are repeated until the loss reaches a preset threshold; the network has then converged, training is complete, and the network parameters are fixed. The trained convolutional neural network can classify the radiation-source type of a cross-mode sample.
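The layer-by-layer forward pass and class-center loss above can be sketched in NumPy. The layer sizes, the tanh activation, and the squared-error loss against the center vector g_l are illustrative assumptions (the patent does not fix these), and a real implementation would also back-propagate the loss to update M_i and b_i:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 4]                       # input, hidden, and output dims (assumed)
# 1) random initialization of hidden-layer weights M_i; zero biases b_i
Ms = [rng.normal(size=(a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    out = x                              # output_0 = X
    for M, b in zip(Ms, bs):
        out = np.tanh(out @ M + b)       # output_i = activate_i(output_{i-1} x M_i + b_i)
    return out                           # output_last

x = rng.normal(size=8)                   # one input training sample X
g_l = np.ones(4)                         # assumed class-center vector g_l
loss = np.sum((forward(x) - g_l) ** 2)   # 3) loss(output_last, g_l)
```

In training, steps 2)–4) would repeat, updating `Ms` and `bs` from the gradient of `loss`, until the loss falls below the preset threshold.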
In step two, text information, when available, is mapped into feature vectors by the text-space embedding method. The modulation feature is composed of the modulation parameters. Communication-data and image features are extracted with the convolutional neural network: for the image information G, the intermediate-layer output of the trained convolutional neural network serves as its image feature vector.
For the communication data information s, splicing and cutting are first applied to obtain the i-th information segment s_i, and a short-time Fourier transform yields the time-frequency matrix X_i(m, k), where m and k index the columns and rows of the time-frequency matrix, k ranging from 0 to N−1 and m from 0 to M−1. Write X_i(m, k) = [T_0, T_1, T_2, …, T_{M−1}], where each column T_m = [a_{0,m}, a_{1,m}, …, a_{N−1,m}]^T, ^T denotes transposition, and a_{k,m} is the k-th element of column T_m. Each column T_m is transformed into a column P_m (the element-wise transform is given in the original only as formula images, not reproduced here), yielding S_i(m, k) = [P_0, P_1, P_2, …, P_{M−1}]. The intermediate layer of the trained convolutional neural network is then applied to S_i(m, k) to extract the communication-data feature vector.
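A minimal sketch of the segment-and-STFT preprocessing, assuming illustrative segment, window, and hop lengths; the patent's column transform from T_m to P_m is given only as unrendered formula images and is not reproduced here:

```python
import numpy as np

def stft_matrix(segment, win_len=32, hop=16):
    """Return a time-frequency magnitude matrix: one row per frame, win_len bins."""
    window = np.hanning(win_len)
    frames = [
        segment[start:start + win_len] * window
        for start in range(0, len(segment) - win_len + 1, hop)
    ]
    return np.abs(np.fft.fft(np.stack(frames), axis=1))

# Toy communication signal, spliced and cut into segments s_i (assumed lengths).
signal = np.sin(2 * np.pi * 0.1 * np.arange(256))
segments = np.split(signal, 4)
X = [stft_matrix(s) for s in segments]   # one time-frequency matrix X_i per segment
print(X[0].shape)  # (3, 32)
```

Each X_i here would then be transformed column-wise into S_i before being fed to the convolutional neural network's intermediate layer.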
In step three, the inter-modal relationship graph x is constructed from the knowledge of inter-modal relations in the prior model and the feature-extracted modalities.
Feature extraction reduces the complexity of the multi-modal information. The modal features are filled into the corresponding nodes of the inter-modal relationship graph according to the modal relations of the prior model, and the corresponding nodes are connected by edges: each feature is a node, every feature node has an edge to the modulation-feature node, and when there are two or more text features, every pair of text-feature nodes is connected by an edge.
As shown in fig. 1, the multi-modal information of the embodiment comprises a modulation feature, a communication-data feature, an image feature, and two text features. The text-feature-1 node, text-feature-2 node, communication-data-feature node, and image-feature node all have edges to the modulation-feature node, and there is an edge between the text-feature-1 node and the text-feature-2 node.
Modalities other than the modulation feature and the communication-data feature are not required to construct the inter-modal relationship graph x; their absence is allowed.
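The prior-model edge rule can be sketched as an adjacency-matrix builder. Modality names are illustrative, and the optional image modality is simply omitted from the input below to show that missing modalities are allowed:

```python
import numpy as np

def build_relation_graph(modalities):
    """Edges: every node <-> the modulation node; text nodes pairwise."""
    n = len(modalities)
    A = np.zeros((n, n), dtype=int)
    mod = modalities.index("modulation")
    texts = [i for i, m in enumerate(modalities) if m.startswith("text")]
    for i in range(n):
        if i != mod:
            A[i, mod] = A[mod, i] = 1       # every feature node -- modulation node
    for i in texts:
        for j in texts:
            if i != j:
                A[i, j] = 1                 # text feature nodes pairwise
    return A

# Image modality absent: the graph is still well defined.
A = build_relation_graph(["modulation", "comm_data", "text1", "text2"])
print(A)
```

The resulting symmetric matrix encodes the same edges as fig. 1, minus the missing image node.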
In step four, the graph convolution and graph contraction of the convolutional neural network fuse the multi-modal information into a single vector that serves as the cross-mode data sample, enabling identification of the communication radiation source:
After the inter-modal relationship graph x is constructed, a first graph-convolution layer with parameter h_θ completes the exchange of feature information between the modal nodes:

y = σ(U h_θ U^T x)

where σ is the activation function and U is the orthogonal eigenvector matrix of the graph Laplacian matrix. The nodes are sorted by degree in descending order: x = (x_m, x_{w1}, x_{w2}, x_c, x_p). Since every node has an edge to the modulation-feature node, the modulation-feature node x_m has the highest degree. The text-feature nodes have an edge to each other in addition to the modulation-feature node, so the text-feature-1 node x_{w1} and the text-feature-2 node x_{w2} share the same degree, rank 2nd and 3rd, and together form the second-highest sub-graph region. The communication-data-feature node x_c and the image-feature node x_p, which have edges only to the modulation-feature node, rank 4th and 5th and form another sub-graph region.
Each graph contraction merges the node of the currently highest degree with the sub-graph region of the second-highest degree, until a single node remains. First, the highest-degree modulation-feature node x_m and the second-highest sub-graph region {x_{w1}, x_{w2}} are contracted, and the convolutional neural network outputs y = (y_i, y_c, y_p), where y_i is the output of the network's i-th graph-convolution layer and graph contraction applied to x_m, x_{w1}, and x_{w2}. Next, the now highest-degree node y_i and the second-highest sub-graph region {y_c, y_p} undergo graph convolution and graph contraction in turn, and through multiple graph-convolution layers and multiple graph contractions the network output is finally contracted into a single node. Several fully connected layers then reduce the dimension of the final node feature vector y_i so that it can serve as the cross-mode data vector, which is decided by the softmax layer and used to compute the loss against the sample label. Samples constructed in this way are highly robust: the same-mode samples used to train the convolutional neural network need not possess all modalities, and partial modal information is allowed to be absent.
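One spectral graph-convolution layer of the form y = σ(U h_θ U^T x), followed by a graph contraction that merges the highest-degree node with a sub-graph region, can be sketched as below. The ReLU activation, the all-ones spectral filter, and the feature-averaging merge rule are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def graph_conv(x, A, h_theta):
    """x: (n, d) node features, A: (n, n) adjacency, h_theta: (n,) spectral filter."""
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    _, U = np.linalg.eigh(L)                  # orthogonal eigenbasis of L
    return np.maximum(0.0, U @ np.diag(h_theta) @ U.T @ x)   # sigma = ReLU

def contract(x, top, region):
    """Merge node `top` and the nodes in `region` into one averaged node."""
    merged = x[[top] + region].mean(axis=0, keepdims=True)
    keep = [i for i in range(len(x)) if i != top and i not in region]
    return np.vstack([merged, x[keep]])

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # toy 3-node relation graph
x = np.arange(6.0).reshape(3, 2)                   # toy node features
y = graph_conv(x, A, h_theta=np.ones(3))           # exchange node information
z = contract(y, top=0, region=[1])                 # 3 nodes -> 2 nodes
print(z.shape)  # (2, 2)
```

Repeating convolution and contraction until one node remains yields the final feature vector that the fully connected and softmax layers operate on.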
In step five, the convolutional neural network inputs the cross-mode data vector to the softmax layer, which outputs a decision vector, realizing cross-mode identification of the communication radiation source.
The invention exploits the cross-mode stability of the non-electromagnetic modalities together with their same-mode and cross-mode recognition capability, and realizes multi-modal information fusion for communication radiation sources under the guidance of the prior model through graph-convolution and graph-contraction techniques. Cross-mode recognition capability is greatly improved over the traditional convolutional neural network, while the same-mode recognition capability of the multi-modal information fusion network remains at the same level as the traditional network.

Claims (5)

1. A communication radiation source cross-mode identification method based on multi-modal information fusion, characterized by comprising the following steps:
step one: constructing and training a convolutional neural network as a multi-modal information fusion network to perform multi-modal feature extraction and cross-mode sample radiation-source classification; during training, a set of same-mode communication radiation source samples being used as the training set, the training set containing both multi-modal and single-modality samples, and the multi-modal samples comprising at least communication data information and modulation information, and optionally image information and text information;
step two: extracting the modal features: the modulation feature being composed of the modulation parameters; text information being mapped into text features by a text-space embedding method; and communication-data features and image features being extracted from the communication data information and image information by an intermediate layer of the convolutional neural network;
step three: constructing the inter-modal relationship graph from the prior model, wherein each extracted feature is a node, every feature node has an edge to the modulation-feature node, and when there are two or more text features, every pair of distinct text-feature nodes is connected by an edge;
step four: sorting the nodes of the inter-modal relationship graph by degree in descending order, a node's degree being higher the more edges it has; grouping feature nodes of equal degree into a sub-graph region; the multi-modal information fusion network contracting the nodes of the inter-modal relationship graph into a single node serving as the final cross-mode data vector through multiple layers of graph convolution and multiple graph contractions, each graph-convolution layer exchanging node feature information and each graph contraction merging the node of the currently highest degree with the sub-graph region of the second-highest degree into one node; and finally reducing the dimension of the final cross-mode data vector with a fully connected layer;
step five: inputting the cross-mode data vector into the softmax layer of the multi-modal information fusion network, the softmax layer outputting a decision vector to realize cross-mode identification of the communication radiation source.
2. The method of claim 1, wherein the multi-modal information fusion network is trained by:
setting, from prior knowledge, a sample-center high-dimensional vector g_l for each communication radiation source mode type, where l is the mode-type index; denoting the hidden-layer weights of the network as M_i, the bias coefficients as b_i, and the activation functions as activate_i, where i indexes the hidden layers from 1 to last; and iterating as follows:
1) randomly initializing the hidden-layer weights M_i and bias coefficients b_i;
2) computing the hidden-layer outputs layer by layer, output_i = activate_i(output_{i-1} × M_i + b_i) with output_0 = X, up to the last hidden-layer output output_last, where X is an input training sample;
3) computing the network loss loss(output_last, g_l);
4) updating the hidden-layer weights M_i and bias coefficients b_i by back-propagating the loss;
steps 2) to 4) being repeated until the loss reaches a preset threshold, completing the training of the convolutional neural network.
3. The method of claim 1, wherein multi-modal and single-modality samples each constitute half of the training sample set.
4. The method of claim 1, wherein the communication data information is spliced and cut into a plurality of information segments before being input into the convolutional neural network, and a short-time Fourier transform is used to obtain a time-frequency matrix X_i(m, k) for each information segment, where m and k index the columns and rows of the time-frequency matrix, k ranging from 0 to N−1 and m from 0 to M−1; each column of the time-frequency matrix X_i(m, k) is T_m = [a_{0,m}, a_{1,m}, …, a_{N−1,m}]^T, where ^T denotes transposition and a_{k,m} is the k-th element of column T_m; a transformed column P_m is then computed from each T_m [transform formula given in the original only as an image]; the M columns P_m form the transformed matrix S_i(m, k) = [P_0, P_1, P_2, …, P_{M−1}], which is input into the convolutional neural network.
5. The method of claim 1, wherein the graph convolution completes the exchange of node feature information by

y = σ(U h_θ U^T x)

where x is the input inter-modal relationship graph, h_θ is the graph-convolution parameter, σ is the activation function, U is the orthogonal eigenvector matrix of the graph Laplacian matrix, ^T denotes transposition, and y is the graph-convolution output.
CN202210750915.9A 2022-06-28 2022-06-28 Communication radiation source cross-mode identification method based on multi-mode information fusion Pending CN115952466A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210750915.9A CN115952466A (en) 2022-06-28 2022-06-28 Communication radiation source cross-mode identification method based on multi-mode information fusion


Publications (1)

Publication Number Publication Date
CN115952466A true CN115952466A (en) 2023-04-11

Family

ID=87286369


Country Status (1)

Country Link
CN (1) CN115952466A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370933A (en) * 2023-10-31 2024-01-09 中国人民解放军总医院 Multi-mode unified feature extraction method, device, equipment and medium
CN117370933B (en) * 2023-10-31 2024-05-07 中国人民解放军总医院 Multi-mode unified feature extraction method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination