CN109936423A - Training method and apparatus for a fountain code identification model, and recognition method - Google Patents


Info

Publication number
CN109936423A
CN109936423A (application CN201910183728.5A)
Authority
CN
China
Prior art keywords
fountain codes
model
data
sample set
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910183728.5A
Other languages
Chinese (zh)
Other versions
CN109936423B (en)
Inventor
刘桥平
高兴宇
柴旭荣
邱昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN201910183728.5A priority Critical patent/CN109936423B/en
Publication of CN109936423A publication Critical patent/CN109936423A/en
Application granted granted Critical
Publication of CN109936423B publication Critical patent/CN109936423B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

Embodiments of the present invention provide a training method and apparatus for a fountain code identification model, and a recognition method. The training method includes: obtaining a fountain code sample set, the fountain code sample set containing samples encoded with a fountain code and samples not encoded with a fountain code; inputting the fountain code sample set into a preset first model for training to obtain a first target model; modulating the fountain code sample set to obtain a modulation scheme sample set; inputting the modulation scheme sample set into a preset second model for training to obtain a second target model; and constructing a fountain code identification model from the first target model and the second target model. The present invention addresses the current difficulty of automatically identifying fountain codes in non-cooperative transmission.

Description

Training method and apparatus for a fountain code identification model, and recognition method
Technical field
The present invention relates to the field of communication technology, and in particular to a training method and apparatus for a fountain code identification model, and a recognition method.
Background art
Software-defined radio (Software Defined Radio, SDR) emerged at the beginning of the 21st century. Software radio replaces dedicated digital circuits with highly programmable digital signal processing (Digital Signal Processing, DSP) devices, making the system's hardware structure and functions relatively independent. Different communication functions can therefore be realized in software on a relatively general-purpose hardware platform, with the working frequency, system bandwidth, modulation scheme, source coding and so on all programmable. This greatly enhances system flexibility and also creates an urgent demand for non-cooperative reception.
Currently used machine-learning approaches assume cooperative reception, and broadly fall into two classes of schemes:
First, after obtaining IQ (In-phase/Quadrature, IQ) data, features must be extracted from the IQ data, mainly time-domain or transform-domain feature parameters. Time-domain features include instantaneous amplitude, instantaneous frequency and instantaneous phase; transform-domain features include the power spectrum, spectral correlation function, time-frequency distributions and other statistical parameters. This class of prior art achieves low modulation-recognition accuracy, particularly for QAM16 and QAM64. Extracting features also requires substantial communication-domain expertise, and manually extracted features discard some of the information in the raw data.
Second, modulation features are extracted automatically with convolutional neural networks by directly stacking convolutional layers; prediction accuracy actually declines once such a network is deepened, so the network structure is usually limited to fewer than 8 layers.
The identification models of both prior-art classes share the following defect: fountain-code communication is assumed to be cooperative reception, so only the modulation scheme is identified, and it is difficult to identify fountain codes under non-cooperative transmission.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a training method and apparatus for a fountain code identification model, and a recognition method, which solve the current difficulty of automatically identifying fountain codes in non-cooperative transmission.
In a first aspect, the present application provides, through an embodiment, the following technical solution:
A training method for a fountain code identification model, comprising:
obtaining a fountain code sample set, the fountain code sample set containing samples encoded with a fountain code and samples not encoded with a fountain code;
inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
modulating the fountain code sample set to obtain a modulation scheme sample set;
inputting the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
constructing a fountain code identification model from the first target model and the second target model, wherein the second target model is used to identify the modulation scheme of IQ data, and the first target model is used to identify fountain codes in original encoded data, the original encoded data being the data obtained by demodulating the IQ data with the demodulation scheme identified by the second target model.
Preferably, the obtaining of the fountain code sample set comprises:
obtaining a fountain code data set;
attaching a first label to the samples in the fountain code data set that are encoded with a fountain code;
attaching a second label to the samples in the fountain code data set that are not encoded with a fountain code;
taking the fountain code data set carrying the first and second labels as the fountain code sample set.
Preferably, the modulating of the fountain code sample set to obtain a modulation scheme sample set comprises:
modulating each datum in the fountain code sample set under different modulation schemes and multiple signal-to-noise ratios to obtain a modulation scheme data set;
adding a corresponding modulation-scheme label to each datum in the modulation scheme data set to obtain the modulation scheme sample set.
Preferably, the first model and the second model are deep residual network models.
Preferably, the inputting of the fountain code sample set into the preset first model for training to obtain the first target model comprises:
inputting the training samples in the fountain code sample set into the first model for training;
determining, from the test samples in the fountain code sample set, whether the accuracy of the trained first model meets a preset value;
if not, adjusting the order of the convolution kernels and the number of inception layers of the first model, and continuing to input the training samples in the fountain code sample set into the adjusted first model for training;
if so, taking the trained first model as the first target model.
Preferably, 50% of the samples in the fountain code sample set are encoded with a fountain code, and 50% of the samples in the fountain code sample set are not encoded with a fountain code.
In a second aspect, based on the same inventive concept, the present application provides, through an embodiment, the following technical solution:
A training apparatus for a fountain code identification model, comprising:
a fountain code sample set acquisition module, configured to obtain a fountain code sample set containing samples encoded with a fountain code and samples not encoded with a fountain code;
a first training module, configured to input the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
a modulation scheme sample set acquisition module, configured to modulate the fountain code sample set to obtain a modulation scheme sample set;
a second training module, configured to input the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
an identification model construction module, configured to construct a fountain code identification model from the first target model and the second target model, wherein the second target model is used to identify the modulation scheme of IQ data, and the first target model is used to identify fountain codes in original encoded data, the original encoded data being the data obtained by demodulating the IQ data with the demodulation scheme identified by the second target model.
Preferably, the fountain code sample set acquisition module is further configured to:
obtain a fountain code data set;
attach a first label to the samples in the fountain code data set that are encoded with a fountain code;
attach a second label to the samples in the fountain code data set that are not encoded with a fountain code;
take the fountain code data set carrying the first and second labels as the fountain code sample set.
Preferably, the modulation scheme sample set acquisition module is further configured to:
modulate each datum in the fountain code sample set under different modulation schemes and multiple signal-to-noise ratios to obtain a modulation scheme data set;
add a corresponding modulation-scheme label to each datum in the modulation scheme data set to obtain the modulation scheme sample set.
In a third aspect, based on the same inventive concept, the present application provides, through an embodiment, the following technical solution:
A method for recognizing non-cooperatively received fountain codes, wherein the fountain code identification model of the first aspect is applied to the recognition method, and the recognition method comprises:
receiving IQ data;
inputting the IQ data into the second target model for identification, and if the identification succeeds, obtaining the modulation scheme corresponding to the IQ data;
demodulating the IQ data according to the modulation scheme to obtain original encoded data;
inputting the original encoded data into the first target model for identification, and if the original encoded data is recognized as fountain-coded, decoding it to obtain the original data corresponding to the IQ data.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
In the training method for a fountain code identification model provided by the present invention, the first model is trained with a fountain code sample set to obtain a first target model; the fountain code sample set is then modulated to obtain a modulation scheme sample set, which guarantees that the fountain code sample set and the modulation scheme sample set come from the same data source. Therefore, after the second target model obtained by training on the modulation scheme sample set identifies the modulation scheme of IQ data, the first target model can further identify fountain codes in the original encoded data obtained by demodulating the IQ data. That is, the fountain code identification model constructed from the first target model and the second target model can automatically identify fountain codes in IQ data, improving identification accuracy. No feature extraction from the IQ data is required before identifying fountain codes, which reduces the dependence of automatic modulation recognition and fountain code identification on communication-domain expertise; moreover, the IQ data being identified may be non-cooperatively received data.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope; those of ordinary skill in the art may derive other relevant drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the training method for a fountain code identification model provided by the first embodiment of the present invention;
Fig. 2 is a block diagram of the method for recognizing non-cooperatively received fountain codes provided by the second embodiment of the present invention;
Fig. 3 is a functional block diagram of the training apparatus for a fountain code identification model provided by the third embodiment of the present invention;
Fig. 4 is a structural block diagram of an illustrative training apparatus for a fountain code identification model provided by the fourth embodiment of the present invention;
Fig. 5 is a structural block diagram of the computer-readable storage medium provided by the fifth embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should also be noted that similar reference numbers and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. In the description of the present invention, the terms "first", "second" and the like are used only to distinguish the description and are not to be understood as indicating or implying relative importance.
First embodiment
Referring to Fig. 1, a training method for a fountain code identification model is provided in this embodiment. Specifically, the method comprises:
Step S10: obtaining a fountain code sample set containing samples encoded with a fountain code and samples not encoded with a fountain code;
Step S20: inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
Step S30: modulating the fountain code sample set to obtain a modulation scheme sample set;
Step S40: inputting the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
Step S50: constructing a fountain code identification model from the first target model and the second target model, wherein the second target model is used to identify the modulation scheme of IQ data, and the first target model is used to identify fountain codes in original encoded data, the original encoded data being the data obtained by demodulating the IQ data with the demodulation scheme identified by the second target model.
In step S10, the fountain code sample set may be divided into two parts: the first part serves as the training samples for model training, and the second part serves as the test samples for testing. For example, the first part may account for 60%, 65%, 75% or 80% of the fountain code sample set, with the second part accounting for the corresponding 40%, 35%, 25% or 20%.
Both parts contain samples encoded with a fountain code and samples not encoded with a fountain code, which guarantees that the model learns the features of both fountain-coded and non-fountain-coded samples. Further, 50% of the samples in the fountain code sample set may be fountain-coded and the other 50% non-fountain-coded; when dividing into training and test samples, the fountain-coded and non-fountain-coded samples may likewise each account for 50% within the training samples and within the test samples. Dividing the fountain code sample set in this way ensures similar numbers of positive and negative samples during model training, improving the trained first model's accuracy in identifying fountain codes.
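A minimal sketch of the balanced division just described, assuming samples are (data, label) pairs with label 1 for fountain-coded samples; the function name and fractions are illustrative, not from the patent:

```python
import random

def balanced_split(samples, train_frac=0.75, seed=0):
    """Split (data, label) pairs into training and test parts while keeping
    the 50/50 fountain / non-fountain ratio inside each part."""
    rng = random.Random(seed)
    pos = [s for s in samples if s[1] == 1]   # fountain-coded samples
    neg = [s for s in samples if s[1] == 0]   # non-fountain-coded samples
    rng.shuffle(pos)
    rng.shuffle(neg)
    k_pos = int(len(pos) * train_frac)
    k_neg = int(len(neg) * train_frac)
    train = pos[:k_pos] + neg[:k_neg]
    test = pos[k_pos:] + neg[k_neg:]
    return train, test

# 100 positive and 100 negative toy samples, 75/25 train/test split.
samples = [((i,), 1) for i in range(100)] + [((i,), 0) for i in range(100)]
train, test = balanced_split(samples)
```

Because the split is performed per class, each part inherits the 50/50 class ratio of the full set.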
The samples encoded with a fountain code and the samples not encoded with a fountain code need to be labeled to distinguish them. In step S10, the acquisition steps are as follows:
1. Obtain a fountain code data set containing samples encoded with a fountain code and samples not encoded with a fountain code.
2. Attach a first label to the samples in the fountain code data set that are encoded with a fountain code; the first label may be a specific character, an ID or the like, e.g. the first label may be 1.
3. Attach a second label to the samples in the fountain code data set that are not encoded with a fountain code; the second label may likewise be a specific character, an ID or the like, e.g. the second label may be 0.
4. Take the fountain code data set carrying the first and second labels as the fountain code sample set.
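The four labeling steps above can be sketched as follows (the function name is illustrative, and raw samples are shown as plain bit lists rather than real encoded streams):

```python
def build_fountain_sample_set(fountain_coded, non_fountain_coded):
    """Attach the first label (1) to fountain-coded samples and the
    second label (0) to non-fountain-coded samples."""
    labeled = [(bits, 1) for bits in fountain_coded]
    labeled += [(bits, 0) for bits in non_fountain_coded]
    return labeled

sample_set = build_fountain_sample_set(
    fountain_coded=[[1, 0, 1]],
    non_fountain_coded=[[0, 1, 1], [1, 1, 0]],
)
```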
Step S20: inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model.
In step S20, the neural network model may be a convolutional neural network model, a recurrent neural network, a deep neural network, etc. Preferably, the first model is a deep residual network model (RESNET) within the convolutional-neural-network family. The RESNET network structure introduces identity shortcut connections; the key structural difference is that x_{i+1} gains an x_i component: x_{i+1} = F(x_i) becomes x_{i+1} = F(x_i) + x_i. The final neural network model can have depth N >= 14 (whereas a plain CNN network model is limited to fewer than 8 layers), and the resulting improvement in identification accuracy can meet engineering requirements.
For example:
When the input is x, the output is F(x).
Let the output of layer 1 be x_1, the output of layer 2 be x_2, and so on, with x_i the output of layer i.
For a CNN without identity shortcut connections, the input of layer i+1 is the output of layer i, so x_{i+1} = F(x_i).
For a RESNET with identity shortcut connections, the input of layer i+1 is likewise the output of layer i, but the output of layer i is additionally superimposed onto the output of layer i+1 (the identity shortcut), i.e. x_{i+1} = F(x_i) + x_i.
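The two update rules can be demonstrated numerically with a minimal sketch; here a ReLU of a matrix product stands in for F (the patent's actual layers are convolutional), and all names are illustrative:

```python
import numpy as np

def plain_layer(x, W):
    """Layer without shortcut: x_{i+1} = F(x_i), with F(x) = ReLU(W x)."""
    return np.maximum(W @ x, 0.0)

def residual_layer(x, W):
    """Layer with identity shortcut: x_{i+1} = F(x_i) + x_i."""
    return np.maximum(W @ x, 0.0) + x

x = np.array([1.0, -2.0])
W = np.eye(2)
plain = plain_layer(x, W)      # the negative component is zeroed by the ReLU
res = residual_layer(x, W)     # the shortcut re-adds the input, preserving it
```

The shortcut lets each layer learn only a correction to its input, which is what permits the deeper networks mentioned above.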
This embodiment is illustrated taking the first model to be a deep residual network model (RESNET) within the convolutional-neural-network family. During model training, the initial parameters are first set, and the training samples are then input for training. After each round of training, the trained model is tested with the test samples, and it is judged whether the accuracy reaches a preset value (e.g. 90%, 99%, 99.5%, etc.); if not, the hyperparameters of the RESNET model (e.g. the order of the convolution kernels and the number of inception layers) are further adjusted, until the accuracy of the test result meets or exceeds the preset value. Specifically, the following steps are included:
1. Input the training samples in the fountain code sample set into the first model for training.
2. From the test samples in the fountain code sample set, determine whether the accuracy of the trained first model meets the preset value.
3. If not, adjust the order of the convolution kernels and the number of inception layers of the first model, and continue to input the training samples in the fountain code sample set into the adjusted first model for training. When adjusting hyperparameters, monitoring indicators such as the loss and the accuracy during training can be observed to judge what training state the current model is in, so that hyperparameters are adjusted in time. Adjusting the hyperparameters guarantees that a first target model meeting the accuracy requirement is obtained quickly.
4. If so, take the trained first model as the first target model.
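The train / test / adjust loop in steps 1-4 can be sketched generically; every callable below is an illustrative placeholder, not part of the patent:

```python
def train_until_accurate(train_fn, evaluate_fn, adjust_fn, hyperparams,
                         preset_value=0.99, max_rounds=20):
    """Train, test against the preset accuracy value, and adjust the
    hyperparameters (e.g. convolution-kernel order, inception layer count)
    until the accuracy requirement is met."""
    for _ in range(max_rounds):
        model = train_fn(hyperparams)
        if evaluate_fn(model) >= preset_value:
            return model, hyperparams
        hyperparams = adjust_fn(hyperparams)
    raise RuntimeError("preset accuracy not reached")

# Toy stand-ins: "accuracy" rises with the layer count until it passes 0.99.
model, hp = train_until_accurate(
    train_fn=lambda hp: hp["layers"],
    evaluate_fn=lambda m: min(m / 16, 1.0),
    adjust_fn=lambda hp: {"layers": hp["layers"] + 2},
    hyperparams={"layers": 8},
)
```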
It should be noted that the training processes of other neural network models may follow existing training tools and are not repeated here.
Step S30: modulating the fountain code sample set to obtain a modulation scheme sample set.
In step S30, the modulation scheme sample set is the data obtained after modulating the fountain code sample set with certain modulation schemes. The division of the modulation scheme sample set into training and test samples, and the proportion of fountain-coded and non-fountain-coded samples within the two parts, may be prepared with specific reference to the corresponding embodiment for the fountain code sample set, and are not repeated here.
Step S30 guarantees that the modulation scheme sample set and the fountain code sample set come from the same data source. The second target model, obtained by training the second model with the modulation scheme sample set, can therefore recognize modulated IQ data containing fountain codes.
Further, step S30 may include the following implementation:
1. Modulate each datum in the fountain code sample set under different modulation schemes and multiple signal-to-noise ratios to obtain a modulation scheme data set.
2. Add a corresponding modulation-scheme label to each datum in the modulation scheme data set to obtain the modulation scheme sample set.
In this embodiment, the modulation schemes may include one or more of the following: BPSK, QPSK, 8PSK, PAM4, QAM16, QAM64, GFSK and CPFSK; the signal-to-noise ratios include any one or more of: -20, -18, -16, -14, -12, -10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16 and 18.
For example, if 8 modulation schemes and 20 signal-to-noise ratios are used in this embodiment, with 1000 samples per class and each sample consisting of 128 consecutive IQ pairs (IQ data having two components), then modulating under the modulation schemes and signal-to-noise ratios produces 8*20*1000*128*2 groups of float32 data. A corresponding modulation-scheme label is then added to each group of generated data to obtain the modulation scheme sample set; the modulation-scheme label may be a specific character or an ID, e.g. the labels of the 8 modulation-scheme classes may be, in order: 0, 1, 2, 3, 4, 5, 6, 7.
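The dataset size in the example above can be checked directly; the scheme-to-label mapping simply mirrors the ordering given in the text:

```python
n_mods, n_snrs, n_samples = 8, 20, 1000   # schemes, SNR levels, samples/class
iq_pairs, components = 128, 2             # 128 consecutive IQ pairs, 2 floats each

total_floats = n_mods * n_snrs * n_samples * iq_pairs * components
total_bytes = total_floats * 4            # float32 occupies 4 bytes

schemes = ["BPSK", "QPSK", "8PSK", "PAM4", "QAM16", "QAM64", "GFSK", "CPFSK"]
mod_labels = {m: i for i, m in enumerate(schemes)}   # BPSK -> 0 ... CPFSK -> 7
```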
It should be noted that the ordering of step S30 relative to steps S10 and S20 is not restricted.
Step S40: inputting the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model.
In step S40, the neural network model may likewise be a convolutional neural network model, a recurrent neural network, a deep neural network, etc. Preferably, in this embodiment the second model is a deep residual network model (RESNET) within the convolutional-neural-network family. The training of the second model may refer to the training process of the first model above and is not described again here.
Finally, through step S50, the first target model and the second target model can be constructed into the fountain code identification model. The second target model identifies the modulation scheme of the IQ data; demodulating the IQ data with the identified modulation scheme yields the original encoded data, and the first target model then identifies fountain codes in the original encoded data, i.e. determines whether the original encoded data is fountain-coded.
In conclusion a kind of training method of fountain codes identification model provided by the invention is by using fountain codes sample set First model is trained and obtains first object model;Then fountain codes sample set is modulated, obtains modulation methods style This collection ensure that fountain codes sample set and modulation system sample set from same data source.Therefore, pass through modulation system sample set The second object module that training obtains can further pass through first object model after identifying the modulation system of I/Q data The identification that fountain codes can be carried out to the piginal encoded data obtained after demodulation I/Q data, i.e., by first object model and the second target The fountain codes identification model of model construction can carry out automatic identification to the fountain codes in I/Q data, improve accuracy of identification.Into Without carrying out feature extraction to I/Q data before the identification of row fountain codes, reduce the identification of automatic Modulation mode and fountain codes for The dependence of communication speciality domain knowledge;The data that the I/Q data identified simultaneously can receive for non-cooperating.
Second embodiment
Referring to Fig. 2, a method for recognizing non-cooperatively received fountain codes is provided; the fountain code identification model of the first embodiment can be applied to this recognition method. Specifically, the recognition method comprises:
Step S101: receiving IQ data;
Step S102: inputting the IQ data into the second target model for identification, and if the identification succeeds, obtaining the modulation scheme corresponding to the IQ data;
Step S103: demodulating the IQ data according to the modulation scheme to obtain original encoded data;
Step S104: inputting the original encoded data into the first target model for identification, and if the original encoded data is recognized as fountain-coded, decoding it to obtain the original data corresponding to the IQ data.
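Steps S101-S104 chain into a simple pipeline; in this sketch the two trained models, the demodulator and the fountain decoder are all illustrative callables, not implementations from the patent:

```python
def recognize_fountain_code(iq_data, second_model, first_model,
                            demodulate, decode):
    """S101-S104: identify the modulation scheme, demodulate with it, then
    test the original encoded data for a fountain code and decode if found."""
    modulation = second_model(iq_data)          # S102
    if modulation is None:
        return None                             # modulation not identified
    encoded = demodulate(iq_data, modulation)   # S103
    if not first_model(encoded):                # S104: not fountain-coded
        return None
    return decode(encoded)

# Toy stand-ins exercising the control flow only.
result = recognize_fountain_code(
    iq_data=[0.1, 0.9],
    second_model=lambda iq: "QPSK",
    first_model=lambda bits: True,
    demodulate=lambda iq, mod: [1, 0, 1],
    decode=lambda bits: "payload",
)
```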
Regarding the method in this embodiment, each of the terms mentioned has been explained in detail in the first embodiment and is not explained again here.
Meanwhile, for the beneficial effects produced by the method in this embodiment, reference may be made specifically to the method described in the first embodiment; they are not explained again here.
3rd embodiment
Referring to Fig. 3, a training apparatus 300 for a fountain code identification model is provided in this embodiment. Specifically, the apparatus 300 comprises:
a fountain code sample set acquisition module 301, configured to obtain a fountain code sample set containing samples encoded with a fountain code and samples not encoded with a fountain code;
a first training module 302, configured to input the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
a modulation scheme sample set acquisition module 303, configured to modulate the fountain code sample set to obtain a modulation scheme sample set;
a second training module 304, configured to input the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model;
an identification model construction module 305, configured to construct a fountain code identification model from the first target model and the second target model, wherein the second target model is used to identify the modulation scheme of IQ data, and the first target model is used to identify fountain codes in original encoded data, the original encoded data being the data obtained by demodulating the IQ data with the demodulation scheme identified by the second target model.
As an optional implementation, the fountain code sample set acquisition module 301 is further configured to:
obtain a fountain code data set;
attach a first label to the samples in the fountain code data set that are encoded with a fountain code;
attach a second label to the samples in the fountain code data set that are not encoded with a fountain code;
take the fountain code data set carrying the first and second labels as the fountain code sample set.
As an optional implementation, the modulation scheme sample set acquisition module 303 is further configured to:
modulate each datum in the fountain code sample set under different modulation schemes and multiple signal-to-noise ratios to obtain a modulation scheme data set;
add a corresponding modulation-scheme label to each datum in the modulation scheme data set to obtain the modulation scheme sample set.
Regarding the apparatus in this embodiment, the modules mentioned and their functions may refer specifically to the elaboration in the first embodiment and are not explained again here.
Fourth embodiment
Based on the same inventive concept, as shown in Fig. 4, this embodiment provides a training apparatus 400 for a fountain code identification model, including a memory 410, a processor 420, and a computer program 411 stored on the memory 410 and runnable on the processor 420; when executing the computer program 411, the processor 420 implements the following steps:
obtaining a fountain code sample set containing samples encoded with a fountain code and samples not encoded with a fountain code; inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model; modulating the fountain code sample set to obtain a modulation scheme sample set; inputting the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model; constructing a fountain code identification model from the first target model and the second target model, wherein the second target model is used to identify the modulation scheme of IQ data, and the first target model is used to identify fountain codes in original encoded data, the original encoded data being the data obtained by demodulating the IQ data with the demodulation scheme identified by the second target model.
In a specific implementation, when the processor 420 executes the computer program 411, any implementation of the first embodiment (or the third embodiment) may be realized; details are not repeated here.
5th embodiment
Based on the same inventive concept, as shown in FIG. 5, this embodiment provides a computer-readable storage medium 500 on which a computer program 511 is stored. When executed by a processor, the computer program 511 performs the following steps:
obtaining a fountain code sample set, the fountain code sample set including samples encoded with fountain codes and samples not encoded with fountain codes; inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model; modulating the fountain code sample set to obtain a modulation scheme sample set; inputting the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model; and constructing a fountain code recognition model from the first target model and the second target model, wherein the second target model is used to identify a modulation scheme of I/Q data, the first target model is used to identify fountain codes in original encoded data, and the original encoded data is data obtained by demodulating the I/Q data using the demodulation scheme identified by the second target model.
In a specific implementation, when the computer program 511 is executed by a processor, any implementation of the first embodiment (or the second embodiment) may be realized; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of devices, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the method functions of the present invention are implemented in the form of software function modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk. It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.

The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that those familiar with this technical field can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A training method for a fountain code recognition model, comprising:
obtaining a fountain code sample set, the fountain code sample set including samples encoded with fountain codes and samples not encoded with fountain codes;
inputting the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
modulating the fountain code sample set to obtain a modulation scheme sample set;
inputting the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model; and
constructing a fountain code recognition model from the first target model and the second target model, wherein the second target model is used to identify a modulation scheme of I/Q data, the first target model is used to identify fountain codes in original encoded data, and the original encoded data is data obtained by demodulating the I/Q data using the demodulation scheme identified by the second target model.
2. The method according to claim 1, wherein obtaining the fountain code sample set comprises:
obtaining a fountain code data set;
labeling the samples in the fountain code data set that are encoded with fountain codes with a first label;
labeling the samples in the fountain code data set that are not encoded with fountain codes with a second label; and
taking the fountain code data set labeled with the first label and the second label as the fountain code sample set.
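The labelling in claim 2 amounts to attaching a binary tag to each sample. A minimal sketch, with 1 and 0 standing in for the first and second labels and all names being assumptions:

```python
def build_fountain_sample_set(fountain_encoded, not_encoded):
    """Tag fountain-encoded samples with a first label (1) and the
    remaining samples with a second label (0)."""
    sample_set = [(s, 1) for s in fountain_encoded]
    sample_set += [(s, 0) for s in not_encoded]
    return sample_set
```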
3. The method according to claim 1, wherein modulating the fountain code sample set to obtain the modulation scheme sample set comprises:
modulating each piece of data in the fountain code sample set according to different modulation schemes and multiple signal-to-noise ratios to obtain a modulation scheme data set; and
adding a corresponding modulation scheme label to each piece of data in the modulation scheme data set to obtain the modulation scheme sample set.
4. The method according to claim 1, wherein the first model and the second model are deep residual network models.
5. The method according to claim 4, wherein inputting the fountain code sample set into the preset first model for training to obtain the first target model comprises:
inputting the training samples in the fountain code sample set into the first model for training;
determining, according to the test samples in the fountain code sample set, whether the accuracy of the trained first model meets a preset value;
if not, adjusting the order of the convolution kernels and the number of inception layers of the first model, and continuing to input the training samples in the fountain code sample set into the adjusted first model for training; and
if so, taking the trained first model as the first target model.
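The train-test-adjust loop of claim 5 may be sketched as follows. The adjustment of the convolution kernel order and the number of inception layers is abstracted into an adjust() callback, and the model interface (train/accuracy), the round limit, and all names are assumptions for illustration only.

```python
def train_to_accuracy(model, train_samples, test_samples,
                      target_accuracy, adjust, max_rounds=10):
    """Train, test against the preset accuracy, and re-adjust the model's
    hyperparameters until the target is met."""
    for _ in range(max_rounds):
        model.train(train_samples)
        if model.accuracy(test_samples) >= target_accuracy:
            return model              # this becomes the first target model
        model = adjust(model)         # e.g. change kernel order / inception depth
    raise RuntimeError("accuracy target not reached within max_rounds")
```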
6. The method according to claim 1, wherein the samples encoded with fountain codes account for 50% of the fountain code sample set, and the samples not encoded with fountain codes account for 50% of the fountain code sample set.
7. A training device for a fountain code recognition model, comprising:
a fountain code sample set obtaining module, configured to obtain a fountain code sample set, the fountain code sample set including samples encoded with fountain codes and samples not encoded with fountain codes;
a first training module, configured to input the fountain code sample set into a preset first model for training to obtain a first target model, wherein the first model is a neural network model;
a modulation scheme sample set obtaining module, configured to modulate the fountain code sample set to obtain a modulation scheme sample set;
a second training module, configured to input the modulation scheme sample set into a preset second model for training to obtain a second target model, wherein the second model is a neural network model; and
a recognition model construction module, configured to construct a fountain code recognition model from the first target model and the second target model, wherein the second target model is used to identify a modulation scheme of I/Q data, the first target model is used to identify fountain codes in original encoded data, and the original encoded data is data obtained by demodulating the I/Q data using the demodulation scheme identified by the second target model.
8. The device according to claim 7, wherein the fountain code sample set obtaining module is further configured to:
obtain a fountain code data set;
label the samples in the fountain code data set that are encoded with fountain codes with a first label;
label the samples in the fountain code data set that are not encoded with fountain codes with a second label; and
take the fountain code data set labeled with the first label and the second label as the fountain code sample set.
9. The device according to claim 7, wherein the modulation scheme sample set obtaining module is further configured to:
modulate each piece of data in the fountain code sample set according to different modulation schemes and multiple signal-to-noise ratios to obtain a modulation scheme data set; and
add a corresponding modulation scheme label to each piece of data in the modulation scheme data set to obtain the modulation scheme sample set.
10. A method for recognizing non-cooperatively received fountain codes, wherein the fountain code recognition model according to any one of claims 1-8 is applied to the method, the method comprising:
receiving I/Q data;
inputting the I/Q data into the second target model for recognition, and, if recognition succeeds, obtaining a modulation scheme corresponding to the I/Q data;
demodulating the I/Q data according to the modulation scheme to obtain original encoded data; and
inputting the original encoded data into the first target model for recognition, and, if the original encoded data is recognized as fountain-coded, decoding it to obtain the original data corresponding to the I/Q data.
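The recognition flow of claim 10 may be sketched as a short pipeline. The demodulate and fountain_decode functions and the two target-model objects are assumed interfaces, not part of the disclosure; returning None signals failed recognition at either stage.

```python
def recognize_fountain_codes(iq_data, second_target, first_target,
                             demodulate, fountain_decode):
    """Claim-10 flow: modulation recognition -> demodulation ->
    fountain code recognition -> decoding."""
    scheme = second_target.predict(iq_data)    # identify modulation scheme
    if scheme is None:                         # modulation not recognised
        return None
    encoded = demodulate(iq_data, scheme)      # recover original encoded data
    if not first_target.predict(encoded):      # not fountain-coded
        return None
    return fountain_decode(encoded)            # original data
```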
CN201910183728.5A 2019-03-12 2019-03-12 Training method, device and recognition method of fountain code recognition model Active CN109936423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910183728.5A CN109936423B (en) 2019-03-12 2019-03-12 Training method, device and recognition method of fountain code recognition model

Publications (2)

Publication Number Publication Date
CN109936423A 2019-06-25
CN109936423B 2021-11-30

Family

ID=66986983


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110263271A1 (en) * 2008-09-26 2011-10-27 Christian Hoymann Techniques for Uplink Cooperation of Access Nodes
US20160156488A1 (en) * 2014-01-06 2016-06-02 Panasonic Corporation Wireless communication device and wireless communication method
CN108282263A * 2017-12-15 2018-07-13 Xidian University Coded modulation joint recognition method based on a one-dimensional deep residual lightweight network
CN108600135A * 2018-04-27 2018-09-28 Institute of Computing Technology, Chinese Academy of Sciences Signal modulation scheme recognition method
CN108616470A * 2018-03-26 2018-10-02 Tianjin University Modulation signal recognition method based on convolutional neural networks
WO2018176889A1 * 2017-03-27 2018-10-04 South China University of Technology Method for automatically identifying modulation mode for digital communication signal


Non-Patent Citations (3)

Title
ALAIN SULTAN, "Compilation of all Rel-13 WIDs", 3GPP TSG Meeting #73, SP-160685/CP-160559/RP-161800. *
WASSIM et al., "Performance of AdaBoost classifier in recognition of superposed modulations for MIMO TWRC with physical-layer network coding", 2017 25th International Conference on Software, Telecommunications and Computer Networks (SoftCOM). *
ZHAO Jiwei, "Research on Modulation Recognition Technology Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology, I136-217. *


Similar Documents

Publication Publication Date Title
CN108282263B Coded modulation joint recognition method based on a one-dimensional deep residual lightweight network
CN110647456B Fault prediction method, system, and related device for storage equipment
CN112464837A Shallow-sea underwater acoustic communication signal modulation recognition method and system based on small data samples
CN109784368A Method and apparatus for determining application program classification
CN111724400A Automatic video matting method and system
CN106021556A Address information processing method and device
CN111382803A Feature fusion method based on deep learning
CN117078057A Assessment method, device, equipment, and medium for a carbon emission control policy
CN109936423A Training method, device, and recognition method of a fountain code recognition model
CN117236788B Water resource scheduling optimization method and system based on artificial intelligence
CN109599123B Audio bandwidth extension method and system based on genetic-algorithm optimization of model parameters
CN105447477B Formula recognition method and device based on a formula library
CN105678771A Method and device for determining quality scores of images
CN115358473A Power load prediction method and prediction system based on deep learning
CN112580598B Radio signal classification method based on multichannel DiffPool
CN114943260A Method, device, equipment, and storage medium for identifying traffic scenes
Ho et al. A wavelet-based method for classification of binary digitally modulated signals
CN112200275A Artificial neural network quantification method and device
CN105824871A Picture detection method and equipment
CN115422264B Time-series data processing method, device, equipment, and readable storage medium
CN104184623B Method and device for generating a signal reporting code
Azad et al. Robust speech filter and voice encoder parameter estimation using the phase–phase correlator
CN115908998B Training method of a water depth data recognition model, and water depth data recognition method and device
CN108416056A Conditional-dependency-based correlation learning method, apparatus, equipment, and medium
Ahmadi et al. Symbol-based modulation classification using a combination of fuzzy clustering and hierarchical clustering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant