WO2021044041A1 - A neural network for identifying radio technologies - Google Patents

A neural network for identifying radio technologies

Info

Publication number
WO2021044041A1
Authority
WO
WIPO (PCT)
Prior art keywords
computer
implemented method
radio
neural network
data samples
Prior art date
Application number
PCT/EP2020/074880
Other languages
French (fr)
Inventor
Adnan Shahid
Jaron Fontaine
Eli De Poorter
Ingrid Moerman
BOTERO Miguel Hernando CAMELO
Steven Latré
Original Assignee
Imec Vzw
Universiteit Antwerpen
Universiteit Gent
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imec Vzw, Universiteit Antwerpen, Universiteit Gent filed Critical Imec Vzw
Priority to CN202080062600.4A priority Critical patent/CN114341886A/en
Priority to EP20765290.0A priority patent/EP4026059A1/en
Priority to KR1020227010874A priority patent/KR20220053662A/en
Priority to US17/639,521 priority patent/US20220300824A1/en
Publication of WO2021044041A1 publication Critical patent/WO2021044041A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W48/00Access restriction; Network selection; Access point selection
    • H04W48/16Discovering, processing access restriction or access information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Abstract

Example embodiments describe a computer-implemented method for providing a neural network for identifying radio technologies (200-203) employed in an environment, the neural network comprising an autoencoder having an encoder, and a classifier, the method comprising the steps of sensing a radio spectrum of the environment thereby obtaining a set of data samples, labelling a subset of the data samples by a respective radio technology thereby obtaining labelled data samples, training the autoencoder in an unsupervised way by unlabelled data samples, training the classifier in a supervised way by the labelled data samples, and providing the neural network by coupling the output of an encoder network of the autoencoder to an input of the classifier.

Description

A NEURAL NETWORK FOR IDENTIFYING RADIO TECHNOLOGIES
Field of the Invention
[01] The present invention generally relates to the field of identifying radio technologies employed by nodes for operating in an environment comprising one or more wireless networks that share a radio spectrum.
Background of the Invention
[02] Radio spectrum has become extremely crowded due to the advent of non-collaborative radio technologies that share the same spectrum. In this coexisting environment, interference is one of the critical challenges and, if left unsolved, it leads to performance degradation. Recognizing or identifying a radio technology that accesses the spectrum is fundamental to defining spectrum management policies that mitigate interference.
[03] Cognitive radio, CR, has emerged as an enabling technology that provides support for dynamic spectrum access, DSA. It refers to the capability of sharing the spectrum among multiple technologies in an opportunistic manner. One of the critical problems that DSA faces is to identify whether another technology is accessing the same spectrum and then take appropriate measures to combat the performance degradation due to interference. This problem is termed the Technology Recognition, TR, problem, and it refers to identifying radio signals of wireless technologies without requiring any signal pre-processing such as channel estimation or timing and frequency synchronization.
[04] Traditionally, TR is done by domain experts, who use carefully designed hand-crafted rules to extract features from the radio signals. In contrast, state-of-the-art approaches based on machine learning methods may extract features directly from raw input data and perform recognition tasks on those features automatically. [05] However, state-of-the-art approaches for technology recognition using machine learning are based on supervised learning, which requires an extensive labelled data set to perform well. If the technologies and their environment are entirely unknown, the labelling task becomes time-consuming and challenging.
[06] It is therefore an object of the present invention to alleviate the above drawback and to provide an improved solution for identifying radio technologies in an environment comprising one or more wireless networks.
Summary of the Invention
[07] This object is achieved, in a first aspect, by a computer-implemented method for providing a neural network for identifying radio technologies employed in an environment, the neural network comprising an autoencoder and a classifier, the method comprising the steps of:
- sensing a radio spectrum of the environment thereby obtaining a set of data samples;
- labelling a subset of the data samples by a respective radio technology thereby obtaining labelled data samples;
- training the autoencoder in an unsupervised way by unlabelled data samples;
- training the classifier in a supervised way by the labelled data samples; and
- providing the neural network by coupling the output of an encoder network of the autoencoder to the input of the classifier.
[08] The environment comprises a plurality of nodes which operate in the environment. The nodes are, for example, user terminals, access points, gateways and/or base stations. A node that belongs to a wireless network uses one or more wireless radio technologies. In addition, a plurality of wireless networks may exist and work independently of each other. The wireless networks may operate on a same or partially overlapping spectrum.
[09] As a first step, the environment is scanned by sensing a radio spectrum. That is, the part of the electromagnetic spectrum which is of interest is sensed for the presence of wireless signals. The sensing results in a set of data samples, which will be further processed.
[10] The spectrum sensing is, for example, performed by capturing in-phase and quadrature, IQ, samples and may be performed using Software Defined Radio, SDR, platforms. Prior to further processing steps, the samples may, according to an embodiment, be transformed depending on the model that will subsequently be trained. For example, the IQ samples, which are a time-domain representation of the radio signals, may be transformed into other domains such as frequency or time-frequency.
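As an illustration of such a transformation, the short Python sketch below converts an IQ capture into a frequency-domain spectrum and a time-frequency spectrogram. The file name and sample rate are assumptions chosen for the example; the snippet is not part of the patented method itself.

```python
# Minimal sketch (illustration only): transforming captured IQ samples, a time-domain
# representation, into frequency and time-frequency representations.
import numpy as np
from scipy import signal

fs = 10e6                                   # assumed sample rate of the capture, in Hz
iq = np.load("capture.npy")                 # hypothetical complex-valued IQ recording

# Frequency-domain representation: magnitude spectrum of one window of samples.
window = iq[:1024]
spectrum = np.abs(np.fft.fftshift(np.fft.fft(window)))

# Time-frequency representation: two-sided spectrogram of the complex capture.
f, t, sxx = signal.spectrogram(iq, fs=fs, nperseg=256, return_onesided=False)
```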
[11] Next, a part or a subset of the data samples is selected and subsequently labelled in terms of a respective radio technology. In other words, a part of the data samples is chosen as being representative samples of the radio technologies and labelled. Preferably, domain expert knowledge, alone or in combination with pseudo-labelling, among other techniques, may be used here. The labelled data samples with the associated labels may further be stored together with the other unselected and unlabelled samples.
[12] The labelling may be performed by indicating to which class a given captured or sensed data sample belongs. Such a class label may be the name of a technology, or may be more expressive and comprise information about the spectrum utilized over time, central frequencies, duty cycle, or other information that may be related to the sample.
[13] The storage may, for example, be performed in two databases. A first database then comprises a sample database, and a second database comprises a label database. Data samples, for example in the form of IQ samples, are stored in the sample database, while the label database may be used for storing the labels of a subset of the set of samples. Depending on the type of data, transformed or not, and a training step, the databases may be connected to one or more blocks.
[14] Further, to provide the neural network, a training is performed in two steps. First, an autoencoder is trained in an unsupervised way with the unlabelled data samples. An autoencoder is a neural network that is trained to copy its input to its output. An autoencoder is composed of two parts, an encoder and a decoder. The weights of the trained autoencoder are locked to preserve the important features that are learned during the unsupervised learning step. Second, after the unsupervised learning, a classifier is trained in a supervised way using the labelled data samples. During the supervised learning, the encoder is used as a feature extractor. This provides an initial bootstrapping on the classification task. Optionally, a fine-tuning step may be performed by, for example, retraining all the layers in the classifier to increase the accuracy of the resulting model. In that case, the previously locked weights of the trained autoencoder may be unlocked.
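The two training steps may be sketched roughly as follows, assuming a Keras implementation. Here `build_encoder`, `build_decoder` and `build_classifier_head` are hypothetical helpers standing in for the layer stacks sketched later in the description, and `x_u`, `x_s`, `y_s` denote the unlabelled samples and the small labelled subset; this is a sketch under those assumptions, not the exact implementation of the patent.

```python
# Rough sketch of the two-step semi-supervised training described above (Keras assumed).
# build_encoder(), build_decoder() and build_classifier_head() are hypothetical helpers;
# x_u are unlabelled samples, x_s and y_s the small labelled subset (integer labels).
import tensorflow as tf
from tensorflow.keras import Model

encoder = build_encoder()                               # maps the input to a compact code
decoder = build_decoder()                               # reconstructs the input from the code

# Step 1: train the autoencoder in an unsupervised way on unlabelled samples.
autoencoder = Model(encoder.input, decoder(encoder.output))
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_u, x_u, epochs=20, batch_size=128)

encoder.trainable = False                               # lock the learned feature extractor

# Step 2: train the classifier in a supervised way on the few labelled samples.
classifier = Model(encoder.input, build_classifier_head()(encoder.output))
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(x_s, y_s, epochs=20, batch_size=128)

# Optional fine-tuning: unlock the encoder weights and retrain all layers end to end.
encoder.trainable = True
classifier.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy", metrics=["accuracy"])
classifier.fit(x_s, y_s, epochs=5, batch_size=128)
```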
[15] Finally, after the training steps, the neural network is provided and is able to identify the technologies on which it was trained, even in different unknown and dynamic environments.
[16] In the supervised learning step of the neural network, only a limited number of labelled data samples are needed. This makes the labelling task less time-consuming compared to state-of-the-art machine learning methods for technology recognition. Thus, this semi-supervised learning approach to technology recognition, which separates the feature extraction from the classification task in the neural network architecture, maximizes the use of unlabelled data. Furthermore, domain expert knowledge is only required when labelling a few representative examples.
[17] Another advantage is that even unknown radio technologies may be identified or recognized, without needing expert knowledge for either modelling signals of the environment or selecting required features such as modulation scheme, duty cycle, power level, etc., thereof.
[18] According to an embodiment, the classifier comprises the encoder and a classification block.
[19] The classification block is, for example, a SoftMax layer which is preceded by convolutional and/or dense layers to increase the accuracy of the classifier. Further, a non-normalized output of the classifier may be mapped to a probability distribution over the predicted radio technologies.
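For illustration, the mapping from non-normalized outputs (logits) to a probability distribution can be written as a small SoftMax function; the example scores are arbitrary.

```python
# Illustration only: a SoftMax maps non-normalized classifier outputs (logits) to a
# probability distribution over the predicted radio technologies.
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)        # subtract the maximum for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.1, -0.3, 0.7])    # hypothetical scores for three technologies
probs = softmax(logits)                # non-negative values that sum to 1.0
```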
[20] According to an embodiment, the autoencoder comprises a convolutional neural network, CNN.
[21] While traditional deep neural networks, DNNs, are built by connecting a series of fully connected layers, a CNN connects the neurons of a given layer, called a convolutional layer, with only a small number of neurons of the next layer to reduce the computational complexity of learning. Preferably, in this embodiment the data samples comprise IQ samples as an input. Other types of input may be used as well, such as, for example, fast Fourier transform, FFT, samples.
[22] According to an embodiment, the encoder comprises two convolutional layers with rectified linear unit, ReLU, activation function, each layer followed by a batch normalization and a dropout layer for regularization.
[23] Downsampling in the autoencoder may be performed by using strided convolution or max-pooling layers. Further, the dropout layers allow the autoencoder, or preferably a deep autoencoder, DAE, to behave as a denoising DAE to improve its capacity as a feature extractor.
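A minimal Keras sketch of such an encoder is given below; the one-dimensional tensor layout, the input length and the code size of 128 are assumptions chosen for illustration, with downsampling done here by strided convolutions.

```python
# Minimal sketch of the encoder described above (Keras assumed): two convolutional
# layers with ReLU activation, each followed by batch normalization and a dropout
# layer, with downsampling via strided convolution. The 1-D layout and input length
# are assumptions; the IQ stream is treated as 1024 samples with 2 channels (I and Q).
from tensorflow.keras import layers, models

def build_encoder(input_length=1024, code_size=128):
    return models.Sequential([
        layers.Input(shape=(input_length, 2)),
        layers.Conv1D(64, 3, strides=4, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.4),
        layers.Conv1D(64, 3, strides=4, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.4),
        layers.Flatten(),
        layers.Dense(code_size),        # compact code later used as extracted features
    ], name="encoder")
```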
[24] According to an embodiment, the radio technologies comprise at least one of the group of 5G; 5G New Radio, NR; Long Term Evolution, LTE; Private LTE; Citizens Broadband Radio Service, CBRS; MulteFire; LTE-Licensed Assisted Access, LTE-LAA; Narrowband-Internet of Things, NB-IoT; Enhanced machine type communication, eMTC; 802.11ax; Wi-Fi 6; 802.11ah; 802.11af; 802.11p; vehicle to vehicle, V2V; vehicle to infrastructure, V2I; ZigBee; Bluetooth; WiMAX; GSM.
[25] In other words, a plurality of radio technologies may be identified by the neural network architecture. Further, besides the 5G and legacy wireless technologies, the neural network may be trained to identify any type of wireless radio technology in the radio spectrum, thus even unknown technologies may be identified. [26] According to a second aspect, the invention relates to the neural network according to the method of the first aspect.
[27] The neural network may, for example, be trained with data samples captured from a range of environments. This allows identifying technologies in various unknown and dynamic environments.
[28] According to a third aspect, the invention relates to a computer-implemented method for identifying radio technologies in an environment by the neural network according to the second aspect.
[29] According to an embodiment, the computer-implemented method further comprises the step of changing a centre frequency of one of the radio technologies based on the identified radio technologies.
[30] According to an embodiment, the computer-implemented method further comprises the step of assigning a collision-free time slot for transmission based on the identified radio technologies.
[31] In other words, the computer-implemented method may employ different strategies to avoid simultaneous use of the same radio spectrum, and/or to make shared use thereof in an efficient manner.
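A toy sketch of such strategies is shown below; the candidate channels, the slot count and the structure of the identification result are assumptions for illustration, not part of the claimed method.

```python
# Toy sketch only: two coexistence strategies based on the identified technologies,
# namely moving to a free centre frequency and picking a collision-free time slot.
# The data structures and the numbers of channels/slots are assumptions.
def make_spectrum_decision(identified, candidate_freqs_hz, occupied_slots, n_slots=10):
    used_freqs = {info["centre_freq_hz"] for info in identified.values()}
    free_freqs = [f for f in candidate_freqs_hz if f not in used_freqs]
    free_slots = [s for s in range(n_slots) if s not in occupied_slots]
    return {
        "centre_freq_hz": free_freqs[0] if free_freqs else None,  # change centre frequency
        "tx_slot": free_slots[0] if free_slots else None,         # collision-free time slot
    }

decision = make_spectrum_decision(
    identified={"Wi-Fi 6": {"centre_freq_hz": 2.412e9}},
    candidate_freqs_hz=[2.412e9, 2.437e9, 2.462e9],
    occupied_slots={0, 1},
)
```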
[32] According to a fourth aspect, the invention relates to a data processing system comprising means for carrying out the method according to the first and/or third aspect.
[33] According to a fifth aspect, the invention relates to a node for operating in a wireless network configured to identify radio technologies employed in an environment by the computer-implemented method according to the third aspect.
[34] According to a sixth aspect, the invention relates to a computer program product comprising computer-executable instructions for causing a node to perform at least the steps of the computer-implemented method according to the third aspect. [35] According to a seventh aspect, the invention relates to a computer readable storage medium comprising the computer program product according to the sixth aspect.
Brief Description of the Drawings
[36] Some example embodiments will now be described with reference to the accompanying drawings.
[37] Fig. 1 illustrates a semi-supervised algorithm implemented using a deep autoencoder according to an embodiment of the invention; [38] Fig. 2 illustrates a spectrum manager configured to recognize radio technologies;
[39] Fig. 3 illustrates two wireless networks each using a different radio technology; [40] Fig. 4 illustrates time and time-frequency signatures of wireless technologies;
[41] Fig. 5 illustrates a workflow of a semi-supervised learning approach according to an embodiment of the invention; and [42] Fig. 6 shows an example embodiment of a suitable computing system for performing one or several steps in embodiments of the invention.
Detailed Description of Embodiment(s)
[43] In Fig. 3 two networks are illustrated. A first network comprises nodes 300-305 which are configured to communicate with each other through a first radio technology. The illustration further comprises a second network comprising nodes 310-311 which are likewise configured to communicate with each other through a second radio technology. The nodes 300-305 are not configured to communicate with the nodes 310-311, although they share a same or partially overlapping radio spectrum. Thus, both networks interfere and compete. Other networks may be present as well, which likewise compete and interfere. Thus, Fig. 3 illustrates an environment 320 wherein different radio technologies are present for wireless communication purposes. Different radio technologies are further illustrated in Fig. 2. The nodes or agents 200-203 each represent a radio technology which may operate in the environment 320.
[44] A radio technology may further be characterized through the time and time-frequency signatures of the wireless technologies to be recognized. This is illustrated in Fig. 4, wherein two distinct radio technologies 401 and 402 are shown. Radio technology 401 is, for example, deployed by nodes 300-305 and radio technology 402 is deployed by nodes 310-311.
[45] A spectrum manager 210 will identify the different radio technologies 200-203 operating in the environment 320. The results of the spectrum manager 210, i.e. the technology recognition, may then be used for making spectrum decisions 211. The goal of the spectrum manager 210 is to assist the unknown wireless technologies 200-203 in making spectrum decisions 211 by first identifying them and then performing frequency domain analysis. In order to enable this, the spectrum manager 210 executes the following tasks in the listed order: training 214, validation 213, frequency domain analysis 212, and spectrum decision 211. In this illustrative embodiment, the focus will now be on the training 214 and validation 213 steps to enable technology recognition for cognitive radio systems.
[46] The training 214 task is used to train a model in a semi-supervised 215 way with raw in-phase and quadrature, IQ, samples of a number of radios 200-203 using a deep autoencoder, DAE. Once the model is trained 214, it may identify the unknown wireless technologies 200-203 in the validation task 213. In the frequency domain analysis task 212, frequency domain analysis of the identified technologies 200-203 is done by extracting spectrum occupancy information of the technologies 200-203. Finally, in the spectrum decision task 211, the radio uses the extracted spectrum occupancy information to define actions, such as changing the frequencies of the radios 200-203 and/or assigning a collision-free time slot for transmissions, so that a fair coexistence may be realized. Once the spectrum decisions are made, they are notified to the radios 200-203 via, for example, control channels.
[47] To formulate the technology recognition problem, consider a communication system in which a received signal r(t) may be represented as follows:

r(t) = s(t) ∗ h(t) + w(t)

wherein s(t) is the original transmitted signal, h(t) is the time-varying impulse response of the transmit channel, ∗ denotes convolution, and w(t) represents additive white Gaussian noise, AWGN, with zero mean and variance σ². In modern digital communication systems, the transmitted signal s(t) is modelled as follows:

s(t) = i(t) + j·q(t)

where s(t) is called the quadrature signal or IQ samples, and i(t) and q(t) are termed the in-phase and quadrature components, respectively.
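As a numerical illustration of this signal model, the sketch below generates a toy QPSK waveform, passes it through a short channel impulse response and adds AWGN; the waveform and channel taps are assumptions chosen only to make the example concrete.

```python
# Numerical illustration of the signal model: r(t) = s(t) * h(t) + w(t), with s(t)
# built from in-phase and quadrature components. The QPSK waveform and the channel
# taps are assumptions, not values taken from the patent.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = constellation[rng.integers(0, 4, size=n)]  # s(t) = i(t) + j*q(t)
h = np.array([0.9, 0.3 + 0.2j, 0.1j])          # short channel impulse response
sigma2 = 0.05                                  # noise variance
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r = np.convolve(s, h, mode="same") + w         # received signal
iq = np.stack([r.real, r.imag], axis=-1)       # IQ samples fed to the recognizer
```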
[48] Given a classification problem with an input vector set X and a corresponding set of target variables Y, the objective is to find a function f that predicts y ∈ Y given a new value x ∈ X, where y takes one of L class labels:

f : X → Y, with y ∈ {y1, ..., yL}

Let X = {x1, ..., xN} and Y = {y1, ..., yN} be a set of N examples of radio technologies and their corresponding labels, respectively, where xi ∈ X and yi ∈ Y for all i = 1, ..., N. In semi-supervised learning, SSL, the set X is divided into two subsets: a labelled subset Xs = {x1, ..., xs}, for which the corresponding labels Ys = {y1, ..., ys} are provided, and an unlabelled subset Xu = {xs+1, ..., xN}, for which no labels are provided, such that X = Xs ∪ Xu.
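For illustration, such a split might look as follows in code; the array shapes and the number of labelled examples are arbitrary assumptions.

```python
# Illustration of the SSL split described above: X is divided into a small labelled
# subset (Xs, Ys) and a large unlabelled subset Xu, with X = Xs ∪ Xu.
import numpy as np

rng = np.random.default_rng(0)
N, s = 10_000, 500
X = rng.standard_normal((N, 1024, 2)).astype(np.float32)  # N captured examples (I/Q channels)
Y = rng.integers(0, 17, size=N)                           # labels would come from the label database

idx = rng.permutation(N)
X_s, Y_s = X[idx[:s]], Y[idx[:s]]                         # few labelled, representative examples
X_u = X[idx[s:]]                                          # the rest stays unlabelled
```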
[49] To use SSL algorithms for recognition, it is further required that the knowledge acquired about the distribution of the examples from the unlabelled data set, i.e. p(x), is useful to infer p(y|x). Otherwise, semi-supervised learning may decrease the performance of the supervised classifier by misguiding it during the learning process. SSL uses unlabelled data to learn valuable information about the data, and then uses it to fine-tune a classifier with a reduced number of labels. Through the invention, the technology recognition system can be used even when the environment 320 is entirely unknown and no information is provided at all.
[50] Sensing and capturing of over-the-air radio signals in the form of IQ samples is performed using Software Defined Radio, SDR, platforms. Next, the invention decouples the feature extraction, performed via unsupervised learning, from the classification task, performed via supervised learning, while keeping the high expressiveness of deep learning, DL, models. The overall workflow of the semi-supervised learning approach of the invention is illustrated in Fig. 5.
[51] In a first step 500, the spectrum is sensed by capturing IQ samples, which are further processed by the subsequent steps 501-505. Next, depending on the model to be trained, the original IQ samples, which are a time-domain representation of the radio signals, may be transformed 501 into other domains, such as frequency or time-frequency. When the IQ sample representation is used directly, no further processing is required.
[52] In the next step 502, the data is labelled. In this step, two sub-steps are performed, namely sample selection and labelling of the samples. The architecture of the invention is semi-supervised, thus making it important to select representative samples of the radio technologies that need to be identified. Here, domain expert knowledge, alone or in combination with pseudo-labelling, may be used. The samples and the labels associated with the labelled samples are further stored 503.
[53] The data storage 503 block comprises two databases, namely a sample database and a label database. IQ samples are stored in the sample database, while the label database is used for storing the labels of a reduced set of examples. Depending on the kind of data and the training strategy, the databases are connected to one or more blocks, namely the supervised learning 510, the unsupervised learning 511, and the batch system 512 blocks.
[54] In offline training, the input data is created by selecting a portion of the data from the sample database via a predefined strategy, for example uniform random selection. [55] For online training, on the other hand, the input may be provided by the batch system 512, which takes data from the data storage 503 and uses it for retraining the model.
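The two input strategies can be sketched as follows; `sample_db` is a hypothetical NumPy array standing in for the sample database, and the fraction and batch size are arbitrary.

```python
# Sketch of the two input strategies above: offline training on a uniformly sampled
# portion of the sample database, and online retraining on successive batches.
# sample_db is a hypothetical NumPy array standing in for the sample database 503.
import numpy as np

def offline_training_set(sample_db, fraction=0.3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = int(len(sample_db) * fraction)
    idx = rng.choice(len(sample_db), size=n, replace=False)   # uniform random selection
    return sample_db[idx]

def online_batches(sample_db, batch_size=128):
    for start in range(0, len(sample_db), batch_size):        # batch system feeding retraining
        yield sample_db[start:start + batch_size]
```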
[56] The semi-supervised technology recognition classification block 504 receives the sensed data and performs the classification task. The block 504 also receives a limited labelled data set from the data labelling system block 502. Based on the labelled and unlabelled data sets, different learning algorithms may be used in the supervised 510 and unsupervised 511 learning blocks, which interact to perform the SSL task.
[57] Finally, in the technology recognition block 505, the proposed architecture indicates which class a given captured sample belongs to. This may, for example, be the name of the technology, but may also be more expressive and comprise information about the spectrum utilized over time, central frequencies, duty cycle, etc.
[58] The proposed workflow of the invention is flexible to support a range of SSL algorithms, training methods, and input types. The selection of the semi-supervised approach mainly depends on various factors including the amount of available data, the number of labels, the complexity of the radio signals to be identified, and the need for offline or online training capabilities, etc.
[59] The SSL TR block illustrated in Fig. 5 may be implemented using a DAE 130 as illustrated in Fig. 1. The DAE 130 is composed of two parts, an encoder 120 that maps h = f(x), where h is known as the code, and a decoder 121 that produces a reconstruction r = g(h).
[60] As an input 110 for the DAE 130, IQ samples or any transformation of the radio signals of the different radio technologies are provided. Next, the encoder 120 comprises a first convolutional layer 101, for example with a 3x3 filter kernel, 64 feature maps, 4x4 strides and a dropout of 0.4. The second convolutional layer 102 comprises a 3x3 filter kernel, 64 feature maps, 4x4 strides and a dropout of 0.4. Next, there is a fully connected layer 103 of 1x128 neurons. Next, there is a first transpose convolutional layer 104 comprising a 3x3 filter kernel, 64 feature maps, 1x4 strides and a dropout of 0.4, and a second transpose convolutional layer 105 comprising a 3x3 filter kernel, 64 feature maps, 1x4 strides and a dropout of 0.4. The output 112 of the DAE 120-121 is further used by the classifier 123, which comprises a fully connected layer 106 of 1x128 neurons and a Softmax layer 107 comprising 1x17 neurons. The number of convolutional layers, feature maps, strides, dropout, filter size, etc. are termed hyperparameters in machine learning terms, and for each specific case a different combination of them may be used. The modelling by the DAE 120-121 is performed through unsupervised learning with unlabelled examples and by the classifier 123 through supervised learning with representative labelled examples. The specific parameters of each layer may be determined using a hyperparameter sweep. The encoder configuration of the invention generates an intermediate code of size 128, i.e. a reduction factor of 16x. Similarly, the decoder part follows the same pattern but in reverse order, replacing the convolutional layers with transposed convolutional layers. The DAE 130 comprises 1M trainable parameters. The autoencoder is trained using batches of size 128, the Adam optimizer with a learning rate of 0.0004, and binary cross-entropy as the loss function for reconstruction. The supervised part of the architecture is composed of the encoder part of the DAE in addition to two dense layers, one with 128 neurons, and the second one with 17 neurons and a SoftMax activation layer for classification. The resulting model has 500k and 18k trainable parameters in phase 1 and phase 2, respectively. The model is trained using the same parameters as the DAE, except that the loss function is categorical cross-entropy and the learning rate is reduced to 0.004. Finally, the output 111 is generated.
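For illustration only, the sketch below assembles a DAE and classifier along these lines in Keras, reusing the `build_encoder` sketch given earlier. The one-dimensional tensor layout, the input length of 1024 IQ samples and the sigmoid output layer are assumptions; the 3-tap kernels, 64 feature maps, stride of 4, dropout of 0.4, code size of 128, the 128-neuron dense layer, the 17-way SoftMax, batch size 128, the Adam optimizer and the two loss functions follow the figures given above. It is a sketch of the described architecture, not the authors' exact implementation.

```python
# Sketch of the DAE and classifier described above (Keras assumed), reusing the
# build_encoder() sketch given earlier. The 1-D layout and sigmoid output layer are
# assumptions; the remaining hyperparameters follow the description.
from tensorflow.keras import Model, layers, models, optimizers

def build_decoder(code_size=128):
    return models.Sequential([
        layers.Input(shape=(code_size,)),
        layers.Dense(64 * 64, activation="relu"),
        layers.Reshape((64, 64)),
        layers.Conv1DTranspose(64, 3, strides=4, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.4),
        layers.Conv1DTranspose(64, 3, strides=4, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.4),
        layers.Conv1D(2, 3, padding="same", activation="sigmoid"),  # reconstructed input
    ], name="decoder")

def build_classifier_head(n_classes=17):
    return models.Sequential([
        layers.Dense(128, activation="relu"),           # fully connected layer (128 neurons)
        layers.Dense(n_classes, activation="softmax"),  # 17-way SoftMax layer
    ], name="classifier_head")

encoder = build_encoder()            # two strided conv layers plus the 128-unit code layer
decoder = build_decoder()

dae = Model(encoder.input, decoder(encoder.output), name="dae")
dae.compile(optimizer=optimizers.Adam(4e-4), loss="binary_crossentropy")
# dae.fit(x_u, x_u, batch_size=128, epochs=...)         # unsupervised phase on [0, 1]-scaled inputs

classifier = Model(encoder.input, build_classifier_head()(encoder.output), name="classifier")
classifier.compile(optimizer=optimizers.Adam(4e-4), loss="categorical_crossentropy")
# classifier.fit(x_s, y_s_onehot, batch_size=128, epochs=...)  # supervised phase on one-hot labels
```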
[61] Thus, differently formulated, for SSL the DAE 130 provides a two-step training process. First, the DAE 130, which is composed of the encoder 120 and the decoder 121, is trained in an unsupervised way using only Xu. Secondly, after the unsupervised learning, a training is performed by a classifier 123 using an encoder 106 together with a Softmax classifier 107 in a supervised way using the reduced labelled data set Xs.
[62] During the supervised training, the encoder 106 is used as a feature extractor for the Softmax classifier 107. This step provides an initial bootstrapping on the classification task. Then, a fine-tune step is performed, that is, all layers in 123 are retrained in order to increase the accuracy of the resulting model. [63] Fig. 6 shows a suitable computing system 600 enabling the implementation of embodiments of the method for identifying radio technologies in an environment according to the invention. Computing system 600 may in general be formed as a suitable general-purpose computer and comprise a bus 610, a processor 602, a local memory 604, one or more optional input interfaces 614, one or more optional output interfaces 616, a communication interface 612, a storage element interface 606, and one or more storage elements 608. Bus 610 may comprise one or more conductors that permit communication among the components of the computing system 600. Processor 602 may include any type of conventional processor or microprocessor that interprets and executes programming instructions. Local memory 604 may include a random-access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 602 and/or a read only memory (ROM) or another type of static storage device that stores static information and instructions for use by processor 602. Input interface 614 may comprise one or more conventional mechanisms that permit an operator or user to input information to the computing device 600, such as a keyboard 620, a mouse 630, a pen, voice recognition and/or biometric mechanisms, a camera, etc. Output interface 616 may comprise one or more conventional mechanisms that output information to the operator or user, such as a display 640, etc. Communication interface 612 may comprise any transceiver-like mechanism such as for example one or more Ethernet interfaces that enables computing system 600 to communicate with other devices and/or systems, for example with one or more of the other nodes 300-305 or 310-311. The communication interface 612 of computing system 600 may be connected to such another computing system by means of a local area network (LAN) or a wide area network (WAN) such as for example the internet. Storage element interface 606 may comprise a storage interface such as for example a Serial Advanced Technology Attachment (SATA) interface or a Small Computer System Interface (SCSI) for connecting bus 610 to one or more storage elements 608, such as one or more local disks, for example SATA disk drives, and for controlling the reading and writing of data to and/or from these storage elements 608. Although the storage element(s) 608 above is/are described as a local disk, in general any other suitable computer-readable media such as a removable magnetic disk, optical storage media such as a CD-ROM or DVD-ROM disk, solid state drives, flash memory cards, and so on, could be used. Computing system 600 could thus correspond to a node in the embodiments illustrated by Fig. 2 or Fig. 3.
[64] As used in this application, the term "circuitry" may refer to one or more or all of the following:
(a) hardware-only circuit implementations such as implementations in only analog and/or digital circuitry and
(b) combinations of hardware circuits and software, such as (as applicable):
(i) a combination of analog and/or digital hardware circuit(s) with software/firmware and
(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and
(c) hardware circuit(s) and/or processor(s), such as microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
[65] Although the present invention has been illustrated by reference to specific embodiments, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied with various changes and modifications without departing from the scope thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the scope of the claims are therefore intended to be embraced therein. [66] It will furthermore be understood by the reader of this patent application that the words "comprising" or "comprise" do not exclude other elements or steps, that the words "a" or "an" do not exclude a plurality, and that a single element, such as a computer system, a processor, or another integrated unit may fulfil the functions of several means recited in the claims. Any reference signs in the claims shall not be construed as limiting the respective claims concerned. The terms "first", "second", "third", "a", "b", "c", and the like, when used in the description or in the claims are introduced to distinguish between similar elements or steps and are not necessarily describing a sequential or chronological order. Similarly, the terms "top", "bottom", "over", "under", and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and embodiments of the invention are capable of operating according to the present invention in other sequences, or in orientations different from the one(s) described or illustrated above.

Claims

1. A computer-implemented method for providing a neural network (130) for identifying radio technologies (200-203) employed in an environment (320), the neural network (130) comprising an autoencoder (120, 121) having an encoder (120), and a classifier (123), the method comprising the steps of:
- sensing a radio spectrum of the environment (320) thereby obtaining a set of data samples;
- labelling a subset of the data samples by a respective radio technology (200-203) thereby obtaining labelled data samples;
- training the autoencoder (120, 121) in an unsupervised way by unlabelled data samples;
- training the classifier (123) in a supervised way by the labelled data samples; and
- providing the neural network (130) by coupling the output (112) of an encoder network of the autoencoder (120, 121) to the input of the classifier (123).
2. The computer-implemented method according to claim 1, wherein the set of data samples comprises in-phase and quadrature, IQ, samples.
3. The computer-implemented method according to claim 2, further comprising the step of:
- transforming the IQ samples from a time domain to a frequency domain.
4. The computer-implemented method according to one of the claims 1 to 3, wherein the classifier (123) comprises the encoder (120) and a classification block (123).
5. The computer-implemented method according to one of the preceding claims, wherein the autoencoder (120, 121) comprises a convolutional neural network.
6. The computer-implemented method according to one of the preceding claims, wherein the encoder (120) comprises two convolutional layers with rectified linear unit, ReLU, activation function, each layer followed by a batch normalization and a dropout layer for regularization.
7. The computer-implemented method according to one of the preceding claims, wherein the radio technologies (200-203) comprise at least one of the group of 5G; 5G New Radio, NR; Long Term Evolution, LTE; Private LTE; Citizens Broadband Radio Service, CBRS; MulteFire; LTE Licensed Assisted Access, LTE-LAA; Narrowband Internet of Things, NB-IoT; enhanced Machine Type Communication, eMTC; 802.11ax; Wi-Fi 6; 802.11ah; 802.11af; 802.11p; vehicle to vehicle, V2V; vehicle to infrastructure, V2I; ZigBee; Bluetooth; WiMAX; GSM.
8. A computer-implemented method comprising identifying radio technologies (200-203) employed in an environment (320) by a neural network obtained by the computer-implemented method according to any one of claims 1 to 7.
9. The computer-implemented method according to claim 8, further comprising the step of:
- changing a centre frequency of one of the radio technologies (200-203) based on the identified radio technologies.
10. The computer-implemented method according to claim 8 or 9, further comprising the step of:
- assigning a collision-free time slot for transmission based on the identified radio technologies (200-203).
11. A data processing system comprising means for carrying out the method according to one of the claims 1 to 10.
12. A computer program product comprising computer-executable instructions for causing a data processing system to perform at least the steps of the computer-implemented method according to one of the claims 1 to 10.
13. A computer readable storage medium comprising the computer program product according to claim 12.
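For illustration only, and not forming part of the claims: the sketch below shows one way the claimed building blocks could be assembled, assuming TensorFlow/Keras and NumPy with 1-D convolutions over frequency-domain magnitude vectors. All filter counts, kernel sizes, dropout rates and function names are placeholder assumptions; only the overall structure mirrors the claims (FFT preprocessing as in claim 3, an encoder with two convolutional ReLU layers each followed by batch normalization and dropout as in claim 6, and a softmax classification block coupled to the encoder output as in claim 1).

```python
# Illustrative sketch only (not part of the claims). Assumes TensorFlow/Keras
# and NumPy; all layer sizes and rates are arbitrary placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def iq_to_frequency_domain(iq: np.ndarray) -> np.ndarray:
    """Claim 3: transform complex IQ samples (batch, n) from time to frequency domain."""
    spectrum = np.fft.fftshift(np.fft.fft(iq, axis=-1), axes=-1)
    return np.abs(spectrum)[..., np.newaxis].astype("float32")  # shape (batch, n, 1)


def build_encoder(input_len: int) -> tf.keras.Model:
    """Claim 6: two convolutional ReLU layers, each followed by batch
    normalization and a dropout layer for regularization."""
    inputs = tf.keras.Input(shape=(input_len, 1))
    x = layers.Conv1D(32, 7, padding="same", activation="relu")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.3)(x)
    return models.Model(inputs, x, name="encoder")


def build_autoencoder(encoder: tf.keras.Model, input_len: int) -> tf.keras.Model:
    """Claims 1 and 5: convolutional autoencoder trained on unlabelled data;
    the single-layer decoder here is a minimal stand-in for a mirrored decoder."""
    inputs = tf.keras.Input(shape=(input_len, 1))
    decoded = layers.Conv1D(1, 5, padding="same")(encoder(inputs))
    return models.Model(inputs, decoded, name="autoencoder")


def build_technology_classifier(encoder, input_len, num_technologies):
    """Claim 1: couple the encoder output to a softmax classification block."""
    inputs = tf.keras.Input(shape=(input_len, 1))
    x = layers.Flatten()(encoder(inputs))
    outputs = layers.Dense(num_technologies, activation="softmax")(x)
    return models.Model(inputs, outputs, name="classifier")
```

Training would then follow the claimed order: fit the autoencoder on the unlabelled frequency-domain samples with a reconstruction loss, then fit the classifier on the labelled subset while reusing the encoder weights.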
PCT/EP2020/074880 2019-09-06 2020-09-04 A neural network for identifying radio technologies WO2021044041A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202080062600.4A CN114341886A (en) 2019-09-06 2020-09-04 Neural network for identifying radio technology
EP20765290.0A EP4026059A1 (en) 2019-09-06 2020-09-04 A neural network for identifying radio technologies
KR1020227010874A KR20220053662A (en) 2019-09-06 2020-09-04 Neural network to identify radio technologies
US17/639,521 US20220300824A1 (en) 2019-09-06 2020-09-04 A neural network for identifying radio technologies

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP19195811.5A EP3789922A1 (en) 2019-09-06 2019-09-06 A neural network for identifying radio technologies
EP19195811.5 2019-09-06

Publications (1)

Publication Number Publication Date
WO2021044041A1 true WO2021044041A1 (en) 2021-03-11

Family

ID=67875288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/074880 WO2021044041A1 (en) 2019-09-06 2020-09-04 A neural network for identifying radio technologies

Country Status (5)

Country Link
US (1) US20220300824A1 (en)
EP (2) EP3789922A1 (en)
KR (1) KR20220053662A (en)
CN (1) CN114341886A (en)
WO (1) WO2021044041A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095378B (en) * 2021-03-26 2022-04-05 重庆邮电大学 Wireless network device identification method, computer device and readable storage medium
CN113255451B (en) * 2021-04-25 2023-04-07 西北工业大学 Method and device for detecting change of remote sensing image, electronic equipment and storage medium
CN115276855B (en) * 2022-06-16 2023-09-29 宁波大学 Spectrum sensing method based on ResNet-CBAM
CN115276854B (en) * 2022-06-16 2023-10-03 宁波大学 ResNet-CBAM-based energy spectrum sensing method for randomly arriving and leaving main user signal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NATHAN E WEST ET AL: "Deep Architectures for Modulation Recognition", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 27 March 2017 (2017-03-27), XP080755827, DOI: 10.1109/DYSPAN.2017.7920754 *
O'SHEA TIMOTHY J ET AL: "Unsupervised representation learning of structured radio communication signals", 2016 FIRST INTERNATIONAL WORKSHOP ON SENSING, PROCESSING AND LEARNING FOR INTELLIGENT MACHINES (SPLINE), IEEE, 6 July 2016 (2016-07-06), pages 1 - 5, XP032934625, DOI: 10.1109/SPLIM.2016.7528397 *
TIMOTHY J O SHEA ET AL: "Convolutional Radio Modulation Recognition Networks", 10 June 2016 (2016-06-10), XP055633462, Retrieved from the Internet <URL:https://ia902808.us.archive.org/32/items/arxiv-1602.04105/1602.04105.pdf> [retrieved on 20191017] *
WANG YU ET AL: "Data-Driven Deep Learning for Automatic Modulation Recognition in Cognitive Radios", IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 68, no. 4, 1 April 2019 (2019-04-01), pages 4074 - 4077, XP011719704, ISSN: 0018-9545, [retrieved on 20190416], DOI: 10.1109/TVT.2019.2900460 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240917A (en) * 2021-05-08 2021-08-10 林兴叶 Traffic management system applying deep neural network to intelligent traffic
CN113240917B (en) * 2021-05-08 2022-11-08 广州隧华智慧交通科技有限公司 Traffic management system applying deep neural network to intelligent traffic
CN113723556A (en) * 2021-09-08 2021-11-30 中国人民解放军国防科技大学 Modulation mode identification method based on entropy weighting-multi-mode domain antagonistic neural network

Also Published As

Publication number Publication date
EP3789922A1 (en) 2021-03-10
US20220300824A1 (en) 2022-09-22
CN114341886A (en) 2022-04-12
EP4026059A1 (en) 2022-07-13
KR20220053662A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
US20220300824A1 (en) A neural network for identifying radio technologies
Wu et al. Incremental classifier learning with generative adversarial networks
Zhou et al. A robust modulation classification method using convolutional neural networks
CN109840531B (en) Method and device for training multi-label classification model
CN114092820B (en) Target detection method and moving target tracking method applying same
Muhlbaier et al. Learn++.NC: Combining Ensemble of Classifiers With Dynamically Weighted Consult-and-Vote for Efficient Incremental Learning of New Classes
Camelo et al. A semi-supervised learning approach towards automatic wireless technology recognition
CN113435509B (en) Small sample scene classification and identification method and system based on meta-learning
Zhang et al. Modulation classification in multipath fading channels using sixth‐order cumulants and stacked convolutional auto‐encoders
CN112910811B (en) Blind modulation identification method and device under unknown noise level condition based on joint learning
US20190370219A1 (en) Method and Device for Improved Classification
Aswolinskiy et al. Time series classification in reservoir-and model-space
CN112232395B (en) Semi-supervised image classification method for generating countermeasure network based on joint training
CN111178543B (en) Probability domain generalization learning method based on meta learning
Lin et al. Modulation recognition using signal enhancement and multistage attention mechanism
Gong et al. Multi-task based deep learning approach for open-set wireless signal identification in ISM band
JP2020140466A (en) Training data expansion apparatus, method, and program
CN115147632A (en) Image category automatic labeling method and device based on density peak value clustering algorithm
Kong et al. Waveform recognition in multipath fading using autoencoder and CNN with Fourier synchrosqueezing transform
Ali et al. Modulation format identification using supervised learning and high-dimensional features
CN116151319A (en) Method and device for searching neural network integration model and electronic equipment
Alejo et al. An improved dynamic sampling back-propagation algorithm based on mean square error to face the multi-class imbalance problem
CN116883786A (en) Graph data augmentation method, device, computer equipment and readable storage medium
CN115878984A (en) Vibration signal processing method and device based on supervised contrast learning
CN117523218A (en) Label generation, training of image classification model and image classification method and device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20765290; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20227010874; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2020765290; Country of ref document: EP; Effective date: 20220406)