CN115499103B - Blind identification method for convolutional codes - Google Patents

Blind identification method for convolutional codes

Info

Publication number: CN115499103B (application number CN202211146485.6A)
Authority: CN (China)
Prior art keywords: network model, coding, starting point, identification, layer
Legal status: Active (granted)
Other language: Chinese (zh)
Other version: CN115499103A
Inventors: 刘杰, 鲍雁飞, 杨健, 房珊瑶, 马钰, 朱宇轩
Current assignee: 32802 Troops Of People's Liberation Army Of China
Original assignee: 32802 Troops Of People's Liberation Army Of China
Application filed by 32802 Troops Of People's Liberation Army Of China
Priority: CN202211146485.6A
Published as CN115499103A (application) and CN115499103B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00: Arrangements for detecting or preventing errors in the information received
    • H04L1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056: Systems characterized by the type of code used
    • H04L1/0059: Convolutional codes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The invention discloses a blind identification method for convolutional codes, comprising the following steps: acquiring convolutional code information and a training convolutional code sequence; processing the training convolutional code sequence to obtain training data samples; processing the training data samples to obtain a coding structure identification training set and verification set; selecting different bit error rates and different sample sequence starting points for each combination of code length and storage series, and processing the training data samples to obtain a coding sequence starting point identification training set and verification set; constructing a coding structure identification network model and a coding sequence starting point identification network model; and acquiring a convolutional code sequence to be identified and processing it with the two models to obtain a coding structure identification result and a coding starting point identification result. The method achieves high identification accuracy with constant computational complexity and requires only a short data sequence, giving it clear advantages.

Description

Blind identification method for convolutional codes
Technical Field
The invention relates to the technical field of code identification, in particular to a blind convolutional code identification method.
Background
In modern digital communication systems, error correction coding is usually applied to the information sequence before modulation so that the receiving end can detect and correct data errors, ensuring stable and reliable transmission. Whether in cognitive radio systems in the civil field or in non-cooperative signal interception in the military field, blind identification of error correction codes plays an increasingly important role. It has therefore become a hot topic in communication bit-stream analysis, attracting wide attention from scholars at home and abroad and producing a number of research results.
The (n, 1, m) convolutional code is one of the earliest error correction coding techniques, and is widely used in data transmission of telephone, satellite and wireless channels because of its mature coding and decoding technique and strong error correction capability.
Aiming at the blind identification of (n, 1, m) convolutional codes, the methods in the published literature have two main defects. First, their error tolerance is poor. Examples include the basis reduction algorithm derived from the key module equation, the Euclidean algorithm based on successive polynomial division, and the matrix analysis method based on rank deficiency and column vector distribution after row reduction of the received sequence matrix. In a real environment, especially under non-cooperative conditions, noise, interference and signal preprocessing errors often leave many bit errors in the binary coded sequence obtained after demodulation and de-interleaving, which limits the practicability of these methods. Second, many methods can only identify part of the parameters, and only when considerable prior information is already known: the basis reduction algorithm, the Euclidean algorithm and the Walsh-Hadamard Transform (WHT) algorithm all identify the generator polynomial only when the code length, code constraint length and codeword starting point are known. At present only the matrix analysis method can identify an (n, 1, m) convolutional code relatively completely; other methods either combine the matrix analysis method for preprocessing, which limits the overall identification accuracy, or intercept the coded sequence at different starting points and lengths and verify parameters such as the code length through multiple iterations, which makes the whole process cumbersome.
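Several of the classical approaches above manipulate generator polynomials over GF(2). As a self-contained illustration (not part of the patent), the Euclidean algorithm by successive polynomial division can be sketched with polynomials stored as integer bitmasks, where bit i is the coefficient of x^i:

```python
def gf2_poly_deg(p: int) -> int:
    """Degree of a GF(2) polynomial in bitmask form; -1 for the zero polynomial."""
    return p.bit_length() - 1

def gf2_poly_mod(a: int, b: int) -> int:
    """Remainder of a divided by b over GF(2) (XOR-based long division)."""
    db = gf2_poly_deg(b)
    while gf2_poly_deg(a) >= db:
        a ^= b << (gf2_poly_deg(a) - db)
    return a

def gf2_poly_gcd(a: int, b: int) -> int:
    """Greatest common divisor of two GF(2) polynomials (Euclidean algorithm)."""
    while b:
        a, b = b, gf2_poly_mod(a, b)
    return a

# Example: the (2,1,2) code generators g1 = 1 + x + x^2 (0b111) and
# g2 = 1 + x^2 (0b101) are coprime, so their GCD is 1.
print(gf2_poly_gcd(0b111, 0b101))  # 1
```

When the two branch polynomials share a nontrivial common factor the code is catastrophic, which is one reason such GCD computations appear in classical identification methods.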
Disclosure of Invention
The invention aims to solve this technical problem by providing a blind identification method and device for convolutional codes. The method uses common (n, 1, m) convolutional codes to generate coded data at different bit error rates, randomly divides the coded data into segments of a set length, and feeds them into a deep residual neural network for supervised learning. The method can identify the code length, coding constraint length and codeword starting point without any other prior information, and has excellent fault tolerance.
In order to solve the technical problems, the embodiment of the invention discloses a blind convolutional code identification method, which comprises the following steps:
S1. Acquire convolutional code information and a training convolutional code sequence, where the convolutional code information comprises the code length, the number of storage levels, the sample sequence starting point and the bit error rate;
S2. Process the training convolutional code sequence to obtain training data samples;
S3. Process the training data samples to obtain a coding structure identification training set and a coding structure identification verification set;
S4. Process the training data samples to obtain a coding sequence starting point identification training set and a coding sequence starting point identification verification set;
S5. Construct a coding structure identification network model and train it with the coding structure identification training set and verification set to obtain a target coding structure identification network model with optimal parameters;
S6. Construct a coding sequence starting point identification network model and train it with the coding sequence starting point identification training set and verification set to obtain a target coding sequence starting point identification network model with optimal parameters;
S7. Acquire a convolutional code sequence to be identified and process it with the target coding structure identification network model with optimal parameters to obtain a coding structure identification result;
S8. Process the coding structure identification result with the target coding sequence starting point identification network model with optimal parameters to obtain a coding starting point identification result.
As an optional implementation manner, in the embodiment of the present invention, the training convolutional code is an (n, 1, m) convolutional code, where n is the code length, 1 is the number of information bits input per coding step, and m is the number of storage levels;
the acquiring a training convolutional code sequence includes:
setting different code lengths and storage series for the convolutional code, and combining the code lengths and storage series to obtain the training convolutional code sequence.
As an optional implementation manner, in an embodiment of the present invention, the processing the training convolutional code sequence to obtain training data samples includes:
numbering the training convolutional code sequence to obtain a numbered training convolutional code sequence;
for each combination of code length and storage series, selecting different bit error rates and different sample sequence starting points (counted from the codeword starting point), constructing 30000 data samples of 200-bit length for each combination, and randomly adding error bits to obtain the training data samples.
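The sample construction step above can be sketched as follows. This is a minimal illustration assuming a standard (n, 1, m) feed-forward convolutional encoder; the generator polynomials, helper names and parameter defaults are illustrative assumptions, not taken from the patent:

```python
import random

def conv_encode(bits, generators, m):
    """Encode a bit list with an (n,1,m) convolutional code. `generators` holds
    one GF(2) polynomial per output branch as m+1 coefficients [g0, ..., gm]."""
    state = [0] * m                      # shift register contents
    out = []
    for b in bits:
        taps = [b] + state               # current input bit plus register
        for g in generators:
            out.append(sum(t & c for t, c in zip(taps, g)) % 2)
        state = [b] + state[:-1]         # shift the register
    return out

def make_sample(generators, m, length=200, ber=0.0, rng=random):
    """One training sample: `length` coded bits with errors injected at rate `ber`."""
    n = len(generators)
    info = [rng.randrange(2) for _ in range(length // n + m)]
    coded = conv_encode(info, generators, m)[:length]
    return [b ^ 1 if rng.random() < ber else b for b in coded]

# Example: the common (2,1,2) code with g1 = 1+x+x^2 and g2 = 1+x^2.
sample = make_sample([[1, 1, 1], [1, 0, 1]], m=2, length=200, ber=0.05)
print(len(sample))  # 200
```

Repeating this for each (code length, storage series, bit error rate, starting point) combination would yield the 30000-sample sets the text describes.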
In an embodiment of the present invention, constructing the coding structure identification network model and training it with the coding structure identification training set and verification set to obtain a target coding structure identification network model with optimal parameters includes:
constructing the coding structure identification network model, training it with the coding structure identification training set, and tuning its parameters with the coding structure identification verification set to obtain the target coding structure identification network model with optimal parameters.
As an optional implementation manner, in an embodiment of the present invention, constructing the coding sequence starting point identification network model and training it with the coding sequence starting point identification training set and verification set to obtain a target coding sequence starting point identification network model with optimal parameters includes:
constructing the coding sequence starting point identification network model, training it with the coding sequence starting point identification training set, and tuning its parameters with the coding sequence starting point identification verification set to obtain the target coding sequence starting point identification network model with optimal parameters.
In an embodiment of the present invention, acquiring the convolutional code sequence to be identified and processing it with the target coding structure identification network model with optimal parameters to obtain a coding structure identification result includes:
acquiring the convolutional code sequence to be identified and processing it to obtain a coding structure data sample to be identified;
and inputting the coding structure data sample to be identified into the target coding structure identification network model with optimal parameters to obtain the coding structure identification result.
As an optional implementation manner, in an embodiment of the present invention, processing the coding structure identification result with the target coding sequence starting point identification network model with optimal parameters to obtain a coding starting point identification result includes:
processing the coding structure identification result to obtain a coding starting point data sample to be identified, and inputting this data sample into the target coding sequence starting point identification network model with optimal parameters to obtain the coding starting point identification result.
As an optional implementation manner, in an embodiment of the present invention, the processing the training data samples to obtain a coding sequence start point identification training set and a coding sequence start point identification verification set includes:
for each combination of code length and storage level, selecting 10 bit error rates and n sample sequence starting points, giving n × 10 × 30000 training data samples per combination and 21 sub-sample sets in total; the 21 sub-sample sets are uniformly mixed and then randomly divided in a 4:1 ratio to obtain the coding sequence starting point identification training set and the coding sequence starting point identification verification set.
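The mixing and 4:1 random split described above can be sketched as follows. This is a minimal illustration; the use of Python lists, the toy sub-sample sets and the fixed seed are assumptions for demonstration, not details from the patent:

```python
import random

def mix_and_split(subsample_sets, train_ratio=0.8, seed=0):
    """Uniformly mix sub-sample sets, then split 4:1 into training/verification sets."""
    pool = [s for subset in subsample_sets for s in subset]  # flatten all sets
    random.Random(seed).shuffle(pool)                        # uniform mixing
    cut = int(len(pool) * train_ratio)                       # 4:1 boundary
    return pool[:cut], pool[cut:]

# Toy example: 21 sub-sample sets of 100 samples each stand in for the real ones.
sets = [[(i, j) for j in range(100)] for i in range(21)]
train, val = mix_and_split(sets)
print(len(train), len(val))  # 1680 420
```

In practice the same split would be applied to the full n × 10 × 30000 sample pool per combination; shuffling before splitting keeps the bit error rates and starting points evenly represented in both sets.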
As an optional implementation manner, in an embodiment of the present invention, the coding structure identification network model includes a first coding structure identification network model, a second coding structure identification network model, a third coding structure identification network model, a fourth coding structure identification network model, and a fifth coding structure identification network model;
the output end of the first coding structure identification network model is connected with the input end of the second coding structure identification network model; the output end of the second coding structure identification network model is connected with the input end of the third coding structure identification network model; the output end of the third coding structure identification network model is connected with the input end of the fourth coding structure identification network model; the output end of the fourth coding structure identification network model is connected with the input end of the fifth coding structure identification network model;
the first coding structure identification network model is a convolution kernel of 7×7 and 64 dimensions;
the second coding structure identification network model comprises 3 second coding structure identification sub-network models, and the 3 second coding structure identification sub-network models are in a series connection relationship; the second coding structure identification sub-network model comprises a first layer with a convolution kernel of 1×1 and 64 dimensions, a second layer with a convolution kernel of 3×3 and 64 dimensions, and a third layer with a convolution kernel of 1×1 and 256 dimensions, and the first layer, the second layer and the third layer are in a series relation;
the third coding structure identification network model comprises 4 third coding structure identification sub-network models, and the 4 third coding structure identification sub-network models are in a series connection relationship; the third coding structure identification sub-network model comprises a first layer with a convolution kernel of 1×1 and 128 dimensions, a second layer with a convolution kernel of 3×3 and 128 dimensions, and a third layer with a convolution kernel of 1×1 and 512 dimensions, and the first layer, the second layer and the third layer are in a series relation;
the fourth coding structure identification network model comprises 6 fourth coding structure identification sub-network models, and the 6 fourth coding structure identification sub-network models are in a series connection relationship; the fourth coding structure identification sub-network model comprises a first layer with a convolution kernel of 1×1 and 256 dimensions, a second layer with a convolution kernel of 3×3 and 256 dimensions, and a third layer with a convolution kernel of 1×1 and 1024 dimensions, and the first layer, the second layer and the third layer are in a series relation;
the fifth coding structure identification network model comprises 3 fifth coding structure identification sub-network models, and the 3 fifth coding structure identification sub-network models are in a series connection relationship; the fifth coding structure identification sub-network model comprises a first layer with a convolution kernel of 1×1 and 512 dimensions, a second layer with a convolution kernel of 3×3 and 512 dimensions, and a third layer with a convolution kernel of 1×1 and 2048 dimensions, and the first layer, the second layer and the third layer are in a series relation.
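The coding structure identification network described above (a 7×7, 64-dimension stem followed by stages of 3, 4, 6 and 3 three-convolution bottleneck units) coincides with the well-known ResNet-50 bottleneck layout; that comparison is an observation offered here, not a claim of the patent. A quick arithmetic check of the layer count:

```python
# Counting the convolutional layers of the coding structure network as described.
stem = 1                          # the 7x7, 64-dimension convolution
stages = {                        # (bottleneck units per stage, convs per unit)
    "stage2": (3, 3),             # 1x1/64,  3x3/64,  1x1/256
    "stage3": (4, 3),             # 1x1/128, 3x3/128, 1x1/512
    "stage4": (6, 3),             # 1x1/256, 3x3/256, 1x1/1024
    "stage5": (3, 3),             # 1x1/512, 3x3/512, 1x1/2048
}
convs = stem + sum(units * per_unit for units, per_unit in stages.values())
print(convs)  # 49
```

The 49 convolutional layers plus a final classifier layer give the 50 weighted layers of a ResNet-50-style deep residual network, matching the "deep residual neural network" named in the summary.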
As an optional implementation manner, in an embodiment of the present invention, the coding sequence start point identification network model includes a first coding sequence start point identification network model, a second coding sequence start point identification network model, a third coding sequence start point identification network model, a fourth coding sequence start point identification network model, and a fifth coding sequence start point identification network model;
the output end of the first coding sequence starting point identification network model is connected with the input end of the second coding sequence starting point identification network model; the output end of the second coding sequence starting point identification network model is connected with the input end of the third coding sequence starting point identification network model; the output end of the third coding sequence starting point identification network model is connected with the input end of the fourth coding sequence starting point identification network model; the output end of the fourth coding sequence starting point identification network model is connected with the input end of the fifth coding sequence starting point identification network model;
the first coding sequence starting point identification network model is a convolution kernel of 7×7 and 64 dimensions;
the second coding sequence starting point identification network model comprises 2 second coding sequence starting point identification sub-network models, and the 2 second coding sequence starting point identification sub-network models are in a series connection relationship;
The third coding sequence starting point identification network model comprises 2 third coding sequence starting point identification sub-network models, and the 2 third coding sequence starting point identification sub-network models are in a series connection relationship;
the fourth coding sequence starting point identification network model comprises 2 fourth coding sequence starting point identification sub-network models, and the 2 fourth coding sequence starting point identification sub-network models are in a series connection relationship;
the fifth coding sequence starting point identification network model comprises 2 fifth coding sequence starting point identification sub-network models, and the 2 fifth coding sequence starting point identification sub-network models are in a series connection relationship.
The second coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 64 dimensions and a second layer with a convolution kernel of 3×3 and 64 dimensions, and the first layer and the second layer are in a series connection relationship;
the third coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 128 dimensions and a second layer with a convolution kernel of 3×3 and 128 dimensions, and the first layer and the second layer are in a series connection relationship;
the fourth coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 256 dimensions and a second layer with a convolution kernel of 3×3 and 256 dimensions, and the first layer and the second layer are in a series connection relationship;
the fifth coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 512 dimensions and a second layer with a convolution kernel of 3×3 and 512 dimensions, and the first layer and the second layer are in a series connection relationship.
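The coding sequence starting point identification network described above (a 7×7, 64-dimension stem followed by four stages of 2 two-convolution basic units at widths 64, 128, 256 and 512) coincides with the well-known ResNet-18 basic-block layout; again this comparison is an observation, not a claim of the patent. The corresponding layer count:

```python
# Counting the convolutional layers of the starting point network as described.
stem = 1                                       # the 7x7, 64-dimension convolution
stages = [(2, 2, 64), (2, 2, 128),             # (units, convs per unit, width)
          (2, 2, 256), (2, 2, 512)]
convs = stem + sum(units * per_unit for units, per_unit, _ in stages)
print(convs)  # 17
```

The 17 convolutional layers plus a final classifier layer give the 18 weighted layers of a ResNet-18-style network, a lighter model than the coding structure network, which is plausible since the starting point task only has to distinguish n candidate offsets.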
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the key to identifying an (n, 1, m) convolutional code is identifying the code length n, the codeword starting point and the storage level m. Addressing the poor error tolerance of existing methods and their need for substantial prior information, the blind identification method first uses common (n, 1, m) convolutional codes to generate coded data at different bit error rates, then randomly divides the coded data into segments of the set length and feeds them into a deep residual neural network for supervised learning. The method can identify the code length, coding constraint length and codeword starting point without other prior information, and its fault tolerance is clearly superior to that of traditional methods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a blind convolutional code recognition method disclosed in the embodiment of the invention;
FIG. 2 is a schematic diagram of the calculation result of a non-systematic convolutional code generator polynomial according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the computational complexity of 21 convolutional codes disclosed in an embodiment of the present invention;
fig. 4 is a schematic diagram of data lengths required for 21 coding modes according to an embodiment of the present invention.
Detailed Description
In order to make the present invention better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the list of steps or elements but may, in the alternative, include other steps or elements not expressly listed or inherent to such process, method, article, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses a blind identification method for convolutional codes, which can acquire a training convolutional code sequence and convolutional code information; process the training convolutional code sequence to obtain training data samples; process the training data samples to obtain a coding structure identification training set and verification set; select different bit error rates and different sample sequence starting points for each combination of code length and storage series, and process the training data samples to obtain a coding sequence starting point identification training set and verification set; and construct a coding structure identification network model and a coding sequence starting point identification network model, acquire a convolutional code sequence to be identified, and process it with the two models to obtain a coding structure identification result and a coding starting point identification result.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of the blind convolutional code identification method according to an embodiment of the present invention. The method described in fig. 1 can be applied to wireless communication systems and the field of communication countermeasures; the embodiment of the invention is not limited thereto. As shown in fig. 1, the blind convolutional code identification method may include the following operations:
s1, acquiring convolutional code information and a training convolutional code sequence, wherein the convolutional code information comprises a code length, a storage level number, a sample sequence starting point and a bit error rate;
s2, processing the training convolution code sequence to obtain a training data sample;
s3, processing the training data sample to obtain a coding structure identification training set and a coding structure identification verification set;
s4, processing the training data sample to obtain a coding sequence starting point identification training set and a coding sequence starting point identification verification set;
s5, constructing a coding structure identification network model, and training the coding structure identification network model by utilizing a coding structure identification training set and a coding structure identification verification set to obtain a target coding structure identification network model with optimal parameters;
s6, constructing a coding sequence starting point recognition network model, and training the coding sequence starting point recognition network model by using a coding sequence starting point recognition training set and a coding sequence starting point recognition verification set to obtain a target coding sequence starting point recognition network model with optimal parameters;
S7, acquiring a convolutional code sequence to be identified, and processing the convolutional code sequence to be identified by utilizing a target coding structure identification network model with optimal parameters to obtain a coding structure identification result;
s8, utilizing the target coding sequence starting point identification network model with optimal parameters to process the coding structure identification result, and obtaining a coding starting point identification result.
Optionally, the coding structure recognition network model includes a first coding structure recognition network model, a second coding structure recognition network model, a third coding structure recognition network model, a fourth coding structure recognition network model, and a fifth coding structure recognition network model;
the output end of the first coding structure identification network model is connected with the input end of the second coding structure identification network model; the output end of the second coding structure identification network model is connected with the input end of the third coding structure identification network model; the output end of the third coding structure identification network model is connected with the input end of the fourth coding structure identification network model; the output end of the fourth coding structure identification network model is connected with the input end of the fifth coding structure identification network model;
The first coding structure identification network model is a convolution layer with a 7×7 convolution kernel and 64 dimensions;
the second coding structure identification network model comprises 3 second coding structure identification sub-network models connected in series; each second coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 64 dimensions, a second layer with a 3×3 convolution kernel and 64 dimensions, and a third layer with a 1×1 convolution kernel and 256 dimensions, the first layer, the second layer and the third layer being connected in series;
the third coding structure identification network model comprises 4 third coding structure identification sub-network models connected in series; each third coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 128 dimensions, a second layer with a 3×3 convolution kernel and 128 dimensions, and a third layer with a 1×1 convolution kernel and 512 dimensions, the first layer, the second layer and the third layer being connected in series;
the fourth coding structure identification network model comprises 6 fourth coding structure identification sub-network models connected in series; each fourth coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 256 dimensions, a second layer with a 3×3 convolution kernel and 256 dimensions, and a third layer with a 1×1 convolution kernel and 1024 dimensions, the first layer, the second layer and the third layer being connected in series;
The fifth coding structure identification network model comprises 3 fifth coding structure identification sub-network models connected in series; each fifth coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 512 dimensions, a second layer with a 3×3 convolution kernel and 512 dimensions, and a third layer with a 1×1 convolution kernel and 2048 dimensions, the first layer, the second layer and the third layer being connected in series.
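The three-layer 1×1 / 3×3 / 1×1 sub-networks described above follow the ResNet-50 bottleneck layout. A minimal PyTorch sketch of one such sub-network is given below, using one-dimensional convolutions (per the Conv1D modification described in the specification); the batch normalization, ReLU activation and projection shortcut are standard ResNet assumptions not stated explicitly here, and the class and variable names are illustrative:

```python
import torch
import torch.nn as nn

class Bottleneck1D(nn.Module):
    """One 1x1 -> 3x3 -> 1x1 sub-network (1-D variant), with residual shortcut."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, mid_ch, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm1d(mid_ch)
        self.conv2 = nn.Conv1d(mid_ch, mid_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(mid_ch)
        self.conv3 = nn.Conv1d(mid_ch, out_ch, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # projection shortcut when the channel counts differ
        self.shortcut = (nn.Conv1d(in_ch, out_ch, kernel_size=1, bias=False)
                         if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        identity = self.shortcut(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return self.relu(out + identity)

# e.g. the second coding structure identification network model:
# 3 sub-networks in series with 64/64/256 dimensions
stage2 = nn.Sequential(Bottleneck1D(64, 64, 256),
                       Bottleneck1D(256, 64, 256),
                       Bottleneck1D(256, 64, 256))
```

The later stages (128/128/512, 256/256/1024, 512/512/2048) would be built the same way with their respective dimension counts.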
Optionally, the coding sequence start point identification network model includes a first coding sequence start point identification network model, a second coding sequence start point identification network model, a third coding sequence start point identification network model, a fourth coding sequence start point identification network model, and a fifth coding sequence start point identification network model;
the output end of the first coding sequence starting point identification network model is connected with the input end of the second coding sequence starting point identification network model; the output end of the second coding sequence starting point identification network model is connected with the input end of the third coding sequence starting point identification network model; the output end of the third coding sequence starting point identification network model is connected with the input end of the fourth coding sequence starting point identification network model; the output end of the fourth coding sequence starting point identification network model is connected with the input end of the fifth coding sequence starting point identification network model;
The first coding sequence starting point identification network model is a convolution layer with a 7×7 convolution kernel and 64 dimensions;
the second coding sequence starting point identification network model comprises 2 second coding sequence starting point identification sub-network models connected in series;
the third coding sequence starting point identification network model comprises 2 third coding sequence starting point identification sub-network models connected in series;
the fourth coding sequence starting point identification network model comprises 2 fourth coding sequence starting point identification sub-network models connected in series;
the fifth coding sequence starting point identification network model comprises 2 fifth coding sequence starting point identification sub-network models connected in series.
The second coding sequence starting point identification sub-network model comprises a first layer with a 3×3 convolution kernel and 64 dimensions and a second layer with a 3×3 convolution kernel and 64 dimensions, the first layer and the second layer being connected in series;
the third coding sequence starting point identification sub-network model comprises a first layer with a 3×3 convolution kernel and 128 dimensions and a second layer with a 3×3 convolution kernel and 128 dimensions, the first layer and the second layer being connected in series;
the fourth coding sequence starting point identification sub-network model comprises a first layer with a 3×3 convolution kernel and 256 dimensions and a second layer with a 3×3 convolution kernel and 256 dimensions, the first layer and the second layer being connected in series;
the fifth coding sequence starting point identification sub-network model comprises a first layer with a 3×3 convolution kernel and 512 dimensions and a second layer with a 3×3 convolution kernel and 512 dimensions, the first layer and the second layer being connected in series.
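The two-layer 3×3 / 3×3 sub-networks of the starting-point identification models follow the ResNet-18 basic-block layout. A minimal PyTorch sketch of one such sub-network in its one-dimensional form is shown below; as before, the batch normalization, ReLU and residual shortcut are standard ResNet assumptions, and the names are illustrative:

```python
import torch
import torch.nn as nn

class BasicBlock1D(nn.Module):
    """One two-layer 3x3 sub-network (1-D variant) with a residual shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

# e.g. the fifth coding sequence starting point identification network model:
# 2 sub-networks in series, 512 dimensions
stage5 = nn.Sequential(BasicBlock1D(512), BasicBlock1D(512))
```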
Optionally, considering the data characteristics of convolutional coding, the invention contemplates using convolutional neural networks (Convolutional Neural Network, CNN) to classify and identify the coding parameters. A CNN is a type of neural network dedicated to processing data with a grid-like structure, such as time-series data and image data. Compared with the traditional fully-connected neural network, sparse connections, weight sharing and equivariant representations greatly reduce the training complexity of the model. A standard convolutional neural network is mainly composed of convolution layers, pooling layers and fully-connected layers. The convolution layer is the core layer of the CNN: it scans the input data with convolution kernels of different sizes to complete feature extraction. The pooling layer compresses the data, greatly reducing the number of parameters involved in model computation and improving computational efficiency. The fully-connected layer further compresses the features extracted by convolution and pooling and then completes the classification function of the model. In general, the number of convolution layers and fully-connected layers whose parameters are updated during training is referred to as the network depth. For the l-th convolutional layer, let the input feature maps be $x_i^{l-1}$ and the convolution kernels be $K_{ij}$; the j-th output feature map can be expressed as

$$x_j^l = g\Big(\sum_{i \in M_j} x_i^{l-1} * K_{ij} + b_j\Big)$$

where $*$ denotes the convolution operation, $M_j$ denotes the set of input feature maps, $b_j$ denotes the bias, and $g(\cdot)$ is a nonlinear activation function.
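A minimal NumPy sketch of this layer relation in the one-dimensional case is given below (function and variable names are hypothetical; `np.convolve` performs the flipped-kernel convolution of the formula):

```python
import numpy as np

def conv1d_layer(x_maps, kernels, biases, g=np.tanh):
    """One 1-D convolution layer: the j-th output map is
    g( sum_{i in M_j} x_i * K_ij + b_j ), with '*' a 'valid' convolution."""
    n_in = len(x_maps)
    n_out = len(biases)
    out = []
    for j in range(n_out):
        acc = sum(np.convolve(x_maps[i], kernels[i][j], mode="valid")
                  for i in range(n_in))
        out.append(g(acc + biases[j]))
    return out

# two 8-sample input maps, one output map, kernel length 3,
# identity activation so the raw sums are visible
x = [np.ones(8), np.arange(8.0)]
K = [[np.array([1.0, 0.0, -1.0])], [np.array([0.5, 0.5, 0.5])]]
b = [0.0]
y = conv1d_layer(x, K, b, g=lambda v: v)
```

Each output map has length 8 − 3 + 1 = 6, consistent with the feature-map length formula used later in the complexity analysis.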
Optionally, to further improve the recognition effect, the invention adopts the Resnet model among CNNs for training. Compared with an ordinary convolutional neural network, the Resnet model passes the input directly to the network output through shortcut connections, so that the learning target becomes the residual between the expected output and the input; this avoids the performance-degradation problem of deep convolutional neural network models and yields better performance.
Optionally, in view of the difficulty of the problem, the network structures of Resnet-18 and Resnet-50 in Pytorch can be improved to construct Resnet network models suitable for channel coding: the Resnet-50 model is used to identify the coding structure, and Resnet-18 is used to identify the coding sequence starting point. In the original networks, the 1×1, 3×3 and 7×7 two-dimensional convolutions Conv2D are changed to one-dimensional convolutions Conv1D, yielding the coding structure identification network model and the coding sequence starting point identification network model.
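One way to carry out this Conv2D → Conv1D modification is to walk an existing module tree and rebuild each two-dimensional convolution as its one-dimensional analogue, keeping the channel counts and taking the side length of the square kernel. The sketch below is one possible interpretation, not the patent's actual implementation; it also swaps `BatchNorm2d` for `BatchNorm1d`, an assumption, and does not copy trained weights:

```python
import torch.nn as nn

def conv2d_to_conv1d(conv2d: nn.Conv2d) -> nn.Conv1d:
    """Rebuild a 2-D convolution as the analogous 1-D convolution."""
    pad = conv2d.padding[0] if isinstance(conv2d.padding, tuple) else conv2d.padding
    return nn.Conv1d(conv2d.in_channels, conv2d.out_channels,
                     kernel_size=conv2d.kernel_size[0],
                     stride=conv2d.stride[0],
                     padding=pad,
                     bias=conv2d.bias is not None)

def to_1d(module: nn.Module) -> nn.Module:
    """Recursively replace every Conv2d/BatchNorm2d with its 1-D counterpart."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, conv2d_to_conv1d(child))
        elif isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.BatchNorm1d(child.num_features))
        else:
            to_1d(child)
    return module

# e.g. converting a ResNet-style 7x7 stem to its 1-D form
stem = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False),
                     nn.BatchNorm2d(64))
stem_1d = to_1d(stem)
```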
Optionally, the invention selects 21 common codes with code lengths of 2, 3 and 4 and coding storage levels of 1 to 7 for constructing the data; the specific coding parameters are shown in Table 1. Analysis shows that after preprocessing such as demodulation, the bit error rate and the codeword starting point ultimately affect the recognition effect, so the data set can conveniently be constructed with computer simulation software. For each combination of code length n and storage level m, 10 bit error rates (0, i.e. no errors, and 0.01 to 0.09) and n sample sequence starting points (codeword starting points 0 to n-1) are selected, and 30000 pieces of 200-bit data are constructed under each combination, with error bits added randomly according to the length of the coding sequence and the bit error rate. Meanwhile, considering the correlation between adjacent bits in a convolutional coding sequence, and to ensure that the data samples are mutually independent, the data are not intercepted sequentially from a single piece of coded data. Instead, a data sequence of 1000 bits is randomly generated each time, bit errors are then added randomly according to the bit error rate, and finally 200 bits are intercepted from the sequence according to the set data starting point. With the above settings, the samples used for actual training are shown in Table 2.
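The sample-construction procedure described above can be sketched as follows. The specification uses Matlab for data generation; this Python version only illustrates the steps (fresh random message, random bit flips at the given error rate, 200-bit interception from the chosen starting point), and the `encode` argument is a placeholder for an actual (n, 1, m) convolutional encoder, here replaced by a toy rate-1/2 repetition stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(encode, n, ber, start, seq_len=1000, sample_len=200):
    """Build one training sample: encode a fresh random message,
    flip bits at the given error rate, cut 200 bits from `start`."""
    msg = rng.integers(0, 2, seq_len // n)       # random information bits
    code = encode(msg)                            # coded sequence, ~seq_len bits
    flips = rng.random(code.size) < ber           # random error positions
    noisy = code ^ flips.astype(code.dtype)       # apply bit errors
    return noisy[start:start + sample_len]

# toy stand-in encoder, just to exercise the pipeline
toy_encode = lambda m: np.repeat(m, 2)
s = make_sample(toy_encode, n=2, ber=0.05, start=1)
```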
Table 1 coding parameters
(table provided as an image in the original document)
Table 2 training samples
(table provided as an image in the original document)
Optionally, when training the coding structure recognition model, all data are fed into the network for training, the total number of samples being 1.89×10⁷; when training the coding sequence starting point recognition model, the n sample sequence starting points under the corresponding code length and coding storage level, i.e. n×10×30000 samples under the 10 bit error rates, are selected for training, so 21 recognition models need to be trained. For both kinds of network training, the data are randomly divided in a 4:1 ratio into a training set and a verification set.
For the 21 (n, 1, m) convolutional codes shown in Table 1, they are numbered sequentially from 1 to 21; for each numbered convolutional code, training samples are generated with Matlab software according to the coding parameters shown in Table 1;
optionally, after uniformly mixing all samples, randomly dividing data according to a ratio of 4:1 to obtain a coding structure identification training set and a verification set; respectively selecting n sample sequence starting points under corresponding code length and coding storage series, and n multiplied by 10 multiplied by 30000 samples under 10 bit error rates to obtain 21 sub-sample sets, and randomly dividing data according to the ratio of 4:1 after uniformly mixing the 21 sub-sample sets to obtain a coding sequence starting point identification training set and a verification set;
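The uniform mixing and 4:1 division described above amount to a shuffled split; a minimal sketch (names hypothetical) is:

```python
import numpy as np

def split_4_to_1(samples, labels, seed=0):
    """Shuffle the mixed sample set and divide it 4:1 into a
    training set and a verification set."""
    idx = np.random.default_rng(seed).permutation(len(samples))
    cut = len(samples) * 4 // 5
    train, val = idx[:cut], idx[cut:]
    return (samples[train], labels[train]), (samples[val], labels[val])

# 100 dummy 200-bit samples with distinct labels
X = np.zeros((100, 200))
y = np.arange(100)
(train_X, train_y), (val_X, val_y) = split_4_to_1(X, y)
```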
optionally, constructing a coding structure recognition network model and a coding sequence starting point recognition network model based on the Pytorch framework respectively, inputting training set samples into respective networks for training, and continuously adjusting network parameters according to the condition of the verification set until the result on the verification set is optimal;
Optionally, for a convolutional code sequence to be identified, the sequence is divided into 200-bit segments to obtain a plurality of data samples, the data samples are input into the coding structure identification network model, and the coding structure to be identified is determined from the statistical result; the coding sequence is then divided from the starting point at intervals of the code length n to obtain a plurality of data samples, which are input into the coding sequence starting point identification model of the corresponding coding structure, and the coding starting point to be identified is determined from the statistical result, completing the identification.
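The two-stage identification flow can be sketched as below. Reading "according to the statistical result" as a majority vote over per-sample predictions is an interpretation, and the models here are stand-ins mapping a batch of 200-bit samples to per-sample class labels; the second-stage re-segmentation at code-length intervals is also simplified away:

```python
import numpy as np

def identify(sequence, structure_model, start_models, sample_len=200):
    """Two-stage sketch: split into 200-bit samples, majority-vote the
    coding structure, then query that structure's starting-point model."""
    n_samples = len(sequence) // sample_len
    batch = np.reshape(sequence[:n_samples * sample_len], (n_samples, sample_len))
    # stage 1: vote over per-sample structure predictions
    structure = np.bincount(structure_model(batch)).argmax()
    # stage 2: starting-point model for that structure, same voting rule
    start = np.bincount(start_models[structure](batch)).argmax()
    return structure, start

seq = np.zeros(1000, dtype=int)
structure, start = identify(
    seq,
    structure_model=lambda b: np.full(len(b), 3),    # pretend every sample says code #3
    start_models={3: lambda b: np.full(len(b), 1)})  # and starting point 1
```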
Example two
The advantages of the method of the invention are illustrated by comparing the adaptability to the bit error rate, the computational complexity, the requirement on data length, and the influence of the coding sequence starting point, the code length and the code memory length on identification.
(1) Comparison of identification performance
First, the adaptability of the method to the bit error rate is compared with that of the matrix-analysis-based method and the maximum-likelihood-detection-based method in terms of identification performance. The (3,1,5) convolutional code is selected for study; because the data amounts required by the different methods are not the same, for objectivity of comparison the optimal recognition performance of each method is considered here (i.e., each method meets the conditions required for identification). The following can be concluded:
(1) Under ideal conditions, the identification accuracy of the method is basically equivalent to that of the Walsh-Hadamard transform (Walsh-Hadamard Transform, WHT) algorithm and obviously superior to that of the method based on maximum likelihood detection, reaching an identification probability above 90% even at a bit error rate of 0.09;
(2) as the code length increases, the two comparison methods are more easily affected by bit errors, so their recognition rates decrease.
Meanwhile, it should be noted that the prior knowledge of coding parameters required by the different methods also differs. The method of the invention and the Walsh-Hadamard transform (Walsh-Hadamard Transform, WHT) algorithm need no prior knowledge: the received sequence can be input directly into the identification model to identify the corresponding code length, code memory length, generator polynomial and other parameters. The method based on maximum likelihood detection, by contrast, requires knowledge of the code length, the order of the check polynomial, the coding starting point and so on; otherwise an effective equation set cannot be established for identification. Such prior knowledge is often difficult to acquire in practical application scenarios, so the maximum-likelihood-detection-based method is more limited in application. Fig. 2 is a schematic diagram of a calculation result of a non-systematic convolutional code generator polynomial according to an embodiment of the present invention.
(2) Comparison of computational complexity
The computational load of the method of the invention lies mainly in the network training process; however, because the network model can be trained in advance, and actual identification often requires near-real-time processing, the comparison here considers only the computation of the identification (i.e., test) process, against the Walsh-Hadamard transform (Walsh-Hadamard Transform, WHT) algorithm and the method based on maximum likelihood detection.
The computational complexity of the method mainly comes from the convolution layers of the coding type identification network and can be expressed as

$$O\Big(\sum_{l=1}^{D} M_l \cdot K_l \cdot C_{l-1} \cdot C_l\Big)$$

where D is the number of convolution layers of the neural network, $K_l$ is the length of the one-dimensional convolution kernel, and $M_l$ is the length of the one-dimensional feature map output by each convolution kernel, determined by four parameters — the length $M_{l-1}$ of the feature map output by the previous convolution layer, the convolution kernel length, the padding length P and the convolution stride S — according to

$$M_l = (M_{l-1} - K_l + 2P_l)/S_l + 1$$

$C_{l-1}$ is the number of input channels of each convolution kernel, i.e., the number of channels output by the previous convolution layer, and $C_l$ is the number of channels output by the current convolution layer. The network comprises 49 convolution layers, with convolution kernel lengths of 1 or 5, padding lengths P of 1 or 3, and convolution strides S of 1 or 2; the computational complexity required for identification can be calculated from these parameters. The computational complexity of the Walsh-Hadamard transform algorithm is $O(25L^3/14)$, where L takes the value 49; the computational complexity of the method based on maximum likelihood detection is $O(Nn(K+1)2^{n(K+1)})$, where n is the code length, N is the number of rows of the coefficient matrix of the established equation set, and K is the highest order of the check polynomial. The computational complexity for the 21 convolutional codes in the invention is shown in Fig. 3. It can be seen that the computational complexity of the method of the invention remains constant; it is greater than that of the two conventional methods when the coding parameters are small, but as the coding parameters increase, the computational complexity of the method based on maximum likelihood detection gradually exceeds that of the method of the invention.
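The feature-map length recursion and the per-layer cost can be computed directly; the sketch below does so, with the parameter values in the example call (length-200 input, kernel 7, padding 3, stride 2, 1 → 64 channels) chosen for illustration rather than taken from the patent's exact layer table:

```python
def feature_len(M_prev, K, P, S):
    """M_l = (M_{l-1} - K_l + 2*P_l) / S_l + 1 (integer division)."""
    return (M_prev - K + 2 * P) // S + 1

def conv_complexity(layers, M0):
    """Sum over layers of M_l * K_l * C_{l-1} * C_l multiply-accumulates.
    Each layer is a tuple (K, P, S, C_in, C_out)."""
    total, M = 0, M0
    for K, P, S, C_in, C_out in layers:
        M = feature_len(M, K, P, S)
        total += M * K * C_in * C_out
    return total

# illustrative first layer: length-200 input, kernel 7, padding 3, stride 2
ops = conv_complexity([(7, 3, 2, 1, 64)], M0=200)
```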
(3) Demand for data length
The data for network training of the identification method of the invention can be generated by computer simulation; the amount of data required during identification is the same as the sample length selected during network training, constant at 200, whereas for the Walsh-Hadamard transform (Walsh-Hadamard Transform, WHT) algorithm and the method based on maximum likelihood detection it is often related to the coding parameters. The WHT algorithm must establish an analysis matrix whose number of columns is greater than the coding constraint length n(m+1); under the 21 coding modes selected herein, the number of columns is 30, the number of rows is 35, and the starting-point interval of each row is 12, so the amount of data required to establish the analysis matrix is at least 438. For the method based on maximum likelihood detection, the required data amount is related to the bit error rate and the weight of the equation solution (i.e., the number of 1s in the solution vector): the higher the bit error rate and the greater the weight of the equation solution, the more data is required. Taking the order of the check polynomial as (formula given as an image in the original), the weight of the equation solution as (image), and the bit error rate as 0.09, the required data amount is (image). The data length required for the 21 coding modes is calculated and the result is shown in Fig. 4. It can be seen that the data length required by the identification method of the invention is smaller than that of the two comparison methods, especially the method based on maximum likelihood detection, where the difference can reach hundreds of times for larger code lengths and code memory lengths.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially, or in the part contributing to the prior art, in the form of a software product stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disc memory, tape memory, or any other medium that can be used to carry or store data in a computer-readable form.
Finally, it should be noted that the blind convolutional code identification method disclosed in the embodiments of the invention is only a preferred embodiment of the invention, used only to illustrate the technical solution of the invention and not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. A method for blind identification of convolutional codes, the method comprising:
s1, acquiring convolutional code information and a training convolutional code sequence, wherein the convolutional code information comprises a code length, a storage level number, a sample sequence starting point and a bit error rate;
the training convolutional codes are (n, 1, m) convolutional codes, where n is the code length, 1 is the number of information bits input at each coding step, and m is the number of storage levels;
the acquiring a training convolutional code sequence includes:
setting the code length and the storage series of different convolutional codes, and combining the code length and the storage series to obtain a training convolutional code sequence;
S2, processing the training convolution code sequence to obtain a training data sample, wherein the step comprises the following steps:
numbering the training convolutional code sequence to obtain a numbered training convolutional code sequence;
under each group of code length n and storage level m, selecting 10 bit error rates (0, and 0.01 to 0.09) and n sample sequence starting points (codeword starting points 0 to n-1), constructing 30000 pieces of 200-bit data under each combination, and randomly adding error bits, to obtain the training data samples;
s3, processing the training data sample to obtain a coding structure identification training set and a coding structure identification verification set;
s4, processing the training data sample to obtain a coding sequence starting point recognition training set and a coding sequence starting point recognition verification set, wherein the method comprises the following steps:
for each code length and storage level, selecting the training data samples of the n sample sequence starting points under 10 bit error rates, i.e. n×10×30000 samples, to obtain 21 sub-sample sets; after uniformly mixing, randomly dividing the 21 sub-sample sets in a 4:1 ratio to obtain the coding sequence starting point identification training set and the coding sequence starting point identification verification set;
s5, constructing a coding structure identification network model, and training the coding structure identification network model by utilizing a coding structure identification training set and a coding structure identification verification set to obtain a target coding structure identification network model with optimal parameters;
S6, constructing a coding sequence starting point recognition network model, and training the coding sequence starting point recognition network model by using a coding sequence starting point recognition training set and a coding sequence starting point recognition verification set to obtain a target coding sequence starting point recognition network model with optimal parameters;
s7, acquiring a convolutional code sequence to be identified, and processing the convolutional code sequence to be identified by utilizing a target coding structure identification network model with optimal parameters to obtain a coding structure identification result;
s8, utilizing the target coding sequence starting point identification network model with optimal parameters to process the coding structure identification result, and obtaining a coding starting point identification result.
2. The method for blind recognition of convolutional codes according to claim 1, wherein constructing the code structure recognition network model, performing code structure recognition network model training using the code structure recognition training set and the code structure recognition verification set to obtain a target code structure recognition network model with optimal parameters, comprises:
constructing a coding structure identification network model, training the coding structure identification network model by using a coding structure identification training set, and performing parameter adjustment on the coding structure identification network model by using a coding structure identification verification set to obtain a target coding structure identification network model with optimal parameters.
3. The method for blind recognition of convolutional codes according to claim 1, wherein constructing a code sequence start point recognition network model, training the code sequence start point recognition network model by using a code sequence start point recognition training set and a code sequence start point recognition verification set to obtain a target code sequence start point recognition network model with optimal parameters, comprises:
constructing a coding sequence starting point recognition network model, training the coding sequence starting point recognition network model by using a coding sequence starting point recognition training set, and performing parameter adjustment on the coding sequence starting point recognition network model by using a coding sequence starting point recognition verification set to obtain a target coding sequence starting point recognition network model with optimal parameters.
4. The method for blind recognition of convolutional codes according to claim 1, wherein the obtaining the convolutional code sequence to be recognized, processing the convolutional code sequence to be recognized by using a target code structure recognition network model with optimal parameters to obtain a code structure recognition result, comprises:
acquiring a convolution code sequence to be identified, and processing the convolution code sequence to be identified to obtain a data sample to be identified with a coding structure;
And inputting the data sample to be identified of the coding structure into the target coding structure identification network model with the optimal parameters to obtain a coding structure identification result.
5. The blind convolutional code recognition method according to claim 1, wherein the processing the code structure recognition result by using the target code sequence start point recognition network model with optimal parameters to obtain a code start point recognition result comprises:
and processing the coding structure identification result to obtain a coding start point to-be-identified data sample, and inputting the coding start point to-be-identified data sample into the target coding sequence start point identification network model with optimal parameters to obtain a coding start point identification result.
6. The blind convolutional code recognition method of claim 1, wherein the code structure recognition network model comprises a first code structure recognition network model, a second code structure recognition network model, a third code structure recognition network model, a fourth code structure recognition network model, and a fifth code structure recognition network model;
the output end of the first coding structure identification network model is connected with the input end of the second coding structure identification network model; the output end of the second coding structure identification network model is connected with the input end of the third coding structure identification network model; the output end of the third coding structure identification network model is connected with the input end of the fourth coding structure identification network model; the output end of the fourth coding structure identification network model is connected with the input end of the fifth coding structure identification network model;
The first coding structure identification network model is a convolution layer with a 7×7 convolution kernel and 64 dimensions;
the second coding structure identification network model comprises 3 second coding structure identification sub-network models connected in series; each second coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 64 dimensions, a second layer with a 3×3 convolution kernel and 64 dimensions, and a third layer with a 1×1 convolution kernel and 256 dimensions, the first layer, the second layer and the third layer being connected in series;
the third coding structure identification network model comprises 4 third coding structure identification sub-network models connected in series; each third coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 128 dimensions, a second layer with a 3×3 convolution kernel and 128 dimensions, and a third layer with a 1×1 convolution kernel and 512 dimensions, the first layer, the second layer and the third layer being connected in series;
the fourth coding structure identification network model comprises 6 fourth coding structure identification sub-network models connected in series; each fourth coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 256 dimensions, a second layer with a 3×3 convolution kernel and 256 dimensions, and a third layer with a 1×1 convolution kernel and 1024 dimensions, the first layer, the second layer and the third layer being connected in series;
The fifth coding structure identification network model comprises 3 fifth coding structure identification sub-network models connected in series; each fifth coding structure identification sub-network model comprises a first layer with a 1×1 convolution kernel and 512 dimensions, a second layer with a 3×3 convolution kernel and 512 dimensions, and a third layer with a 1×1 convolution kernel and 2048 dimensions, the first layer, the second layer and the third layer being connected in series.
7. The blind convolutional code recognition method of claim 1 wherein the code sequence start recognition network model comprises a first code sequence start recognition network model, a second code sequence start recognition network model, a third code sequence start recognition network model, a fourth code sequence start recognition network model, and a fifth code sequence start recognition network model;
the output end of the first coding sequence starting point identification network model is connected with the input end of the second coding sequence starting point identification network model; the output end of the second coding sequence starting point identification network model is connected with the input end of the third coding sequence starting point identification network model; the output end of the third coding sequence starting point identification network model is connected with the input end of the fourth coding sequence starting point identification network model; the output end of the fourth coding sequence starting point identification network model is connected with the input end of the fifth coding sequence starting point identification network model;
The first coding sequence starting point identification network model is 7 multiplied by 7, and 64 dimensions;
the second coding sequence starting point identification network model comprises 2 second coding sequence starting point identification sub-network models, and the 2 second coding sequence starting point identification sub-network models are in a series connection relationship;
the third coding sequence starting point identification network model comprises 2 third coding sequence starting point identification sub-network models, and the 2 third coding sequence starting point identification sub-network models are in a series connection relationship;
the fourth coding sequence starting point identification network model comprises 2 fourth coding sequence starting point identification sub-network models, and the 2 fourth coding sequence starting point identification sub-network models are in a series connection relationship;
the fifth coding sequence starting point identification network model comprises 2 fifth coding sequence starting point identification sub-network models, and the 2 fifth coding sequence starting point identification sub-network models are in a series connection relationship;
the second coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 64 dimensions, a second layer with a convolution kernel of 3×3 and 64 dimensions, and the first layer and the second layer are in a series connection relationship;
the third coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 128 dimensions, a second layer with a convolution kernel of 3×3 and 128 dimensions, and the first layer and the second layer are in a series connection relationship;
the fourth coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 256 dimensions, a second layer with a convolution kernel of 3×3 and 256 dimensions, and the first layer and the second layer are in a series connection relationship;
the fifth coding sequence starting point identification sub-network model comprises a first layer with a convolution kernel of 3×3 and 512 dimensions, a second layer with a convolution kernel of 3×3 and 512 dimensions, and the first layer and the second layer are in a series connection relationship.
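The five chained start-point identification models above form a conventional convolutional stack: one 7×7/64 stem followed by four stages, each stacking two sub-models of paired 3×3 convolutions at 64, 128, 256 and 512 dimensions. The following sketch enumerates that layout as configuration data; it is an illustration of the claimed structure, and the names and config format are assumptions, not from the patent.

```python
# Hypothetical sketch of the coding sequence start-point identification
# network described in claim 7: a 7x7/64 stem, then four stages, each with
# 2 sub-models, each sub-model = two 3x3 convolution layers of equal width.
# The (kernel, channels, sub_model_count) tuple format is an assumption.

def start_point_network_config():
    """Return the per-stage (kernel, channels, sub-model count) layout."""
    stages = [("conv7x7", 64, 1)]            # first model: 7x7 kernel, 64 dims
    for channels in (64, 128, 256, 512):     # second..fifth models, in series
        stages.append(("conv3x3_pair", channels, 2))  # 2 sub-models per model
    return stages

config = start_point_network_config()
# total 3x3 conv layers = 4 stages x 2 sub-models x 2 layers = 16
total_3x3 = sum(2 * reps for kernel, _, reps in config if kernel == "conv3x3_pair")
```

Walking the list in order reproduces the output-to-input chaining of the five models stated in the claim.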
CN202211146485.6A 2022-09-20 2022-09-20 Blind identification method for convolutional codes Active CN115499103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211146485.6A CN115499103B (en) 2022-09-20 2022-09-20 Blind identification method for convolutional codes

Publications (2)

Publication Number Publication Date
CN115499103A CN115499103A (en) 2022-12-20
CN115499103B true CN115499103B (en) 2023-05-12

Family

ID=84470848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211146485.6A Active CN115499103B (en) 2022-09-20 2022-09-20 Blind identification method for convolutional codes

Country Status (1)

Country Link
CN (1) CN115499103B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102244520A (en) * 2010-05-11 2011-11-16 中国电子科技集团公司第三十六研究所 Blind recognition method of convolutional coding parameters
CN103312457A (en) * 2013-05-09 2013-09-18 西安电子科技大学 Totally blind recognition method for coding parameter of convolutional code
CN103401650A (en) * 2013-08-08 2013-11-20 山东大学 Blind identification method for (n, 1 and m) convolutional code with error codes
CN111490853A (en) * 2020-04-15 2020-08-04 成都海擎科技有限公司 Channel coding parameter identification method based on deep convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001286842A1 (en) * 2000-09-06 2002-03-22 Motorola, Inc. Soft-output error-trellis decoder for convolutional codes

Also Published As

Publication number Publication date
CN115499103A (en) 2022-12-20

Similar Documents

Publication Publication Date Title
US7917835B2 (en) Memory system and method for use in trellis-based decoding
CN106998208B (en) Code word construction method of variable length Polar code
CN112332860B (en) LDPC (Low Density parity check) code sparse check matrix reconstruction method and system
CN110098839B (en) Blind identification method for non-systematic convolutional code coding parameters under high error code
CN112232171A (en) Remote sensing image information extraction method and device based on random forest and storage medium
CN107612656A (en) A kind of Gaussian approximation method for simplifying suitable for polarization code
CN115499103B (en) Blind identification method for convolutional codes
CN113206808B (en) Channel coding blind identification method based on one-dimensional multi-input convolutional neural network
CN104243095A (en) Code word type blind identification method for convolutional code and linear block code
CN110535560A (en) A kind of polarization code combines coding and interpretation method
CN117194219A (en) Fuzzy test case generation and selection method, device, equipment and medium
CN117093830A (en) User load data restoration method considering local and global
CN116667859A (en) LDPC code parameter identification method
CN114490618B (en) Ant-lion algorithm-based data filling method, device, equipment and storage medium
CN115695564A (en) Efficient transmission method for data of Internet of things
CN112712855B (en) Joint training-based clustering method for gene microarray containing deletion value
CN112821895B (en) Code identification method for realizing high error rate of signal
CN114528810A (en) Data code generation method and device, electronic equipment and storage medium
CN115396064A (en) Detection decoding method and device, computer equipment and readable storage medium
CN114997490A (en) Construction method, prediction method, device and equipment of temperature profile prediction model
CN115293246A (en) Ultra-wideband non-line-of-sight signal identification method and device and computer equipment
CN111506691A (en) Track matching method and system based on depth matching model
Zhang et al. Deep learning for blind detection of Interleaver and scrambler
CN112737733A (en) Channel coding code pattern recognition method based on one-dimensional convolutional neural network
CN103974388A (en) Method and device for fusing data of wireless sensor network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant