CN111652108A - Anti-interference signal identification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111652108A
Authority
CN
China
Prior art keywords
signal
neural network
training
label
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010469377.7A
Other languages
Chinese (zh)
Other versions
CN111652108B (en)
Inventor
马钰
王沙飞
房珊瑶
鲍雁飞
杨健
田震
肖庆正
刘杰
朱宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
32802 Troops Of People's Liberation Army Of China
Original Assignee
32802 Troops Of People's Liberation Army Of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 32802 Troops Of People's Liberation Army Of China filed Critical 32802 Troops Of People's Liberation Army Of China
Priority to CN202010469377.7A priority Critical patent/CN111652108B/en
Publication of CN111652108A publication Critical patent/CN111652108A/en
Application granted granted Critical
Publication of CN111652108B publication Critical patent/CN111652108B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an anti-interference signal identification method and device, computer equipment, and a storage medium. Neural networks are trained by occluding part of the data in each signal instance, with one network per label, so that each network accurately predicts the occluded part of signal instances carrying its own label while hardly predicting the occluded part of instances carrying other labels. To identify a signal, part of it is occluded and each neural network predicts the occluded part; the signal is then identified as the label corresponding to the network whose prediction of the occluded part is most accurate. The invention solves the problems that traditional intelligent signal identification methods lack interpretability, are easily disturbed by external noise, and have low robustness. By establishing the mapping relations among the data within an instance signal, the method gains practical physical significance and interpretability, and its interference resistance is greatly improved.

Description

Anti-interference signal identification method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a signal identification method and apparatus, a computer device, and a storage medium.
Background
With the development of science and technology, artificial intelligence has been widely applied, with particularly strong progress in signal identification and processing. Taking the recognition of handwritten-digit image signals as an example, a computer device may use a traditional multi-layer perceptron model to recognize such a signal and obtain the category of the handwritten digit contained in the image. The digits in the images exist objectively in nature: the structural features of their shapes have practical physical significance, and the objects bearing the digits, such as license plates, book page numbers, or characters in books, likewise exist objectively, so a digital image signal is necessarily connected with specific things in nature. However, the traditional multi-layer perceptron model lacks interpretability: humans cannot interpret the features extracted by the neural network or understand its recognition process. Moreover, when the acquired or received handwritten-digit image signal is disturbed by noise, the recognition accuracy may drop rapidly as the noise intensity increases, so the robustness of such models is low.
Disclosure of Invention
The main purpose of the invention is to keep a recognition accuracy usable in practical engineering even when the collected or received signal is disturbed, avoiding the technical problem that the recognition accuracy of traditional algorithms drops rapidly as interference strength increases until they become unusable in practice.
In view of the foregoing, it is desirable to provide an anti-interference signal identification method, apparatus, computer device, and storage medium.
In a first aspect, the present invention provides an anti-interference signal identification method, including:
acquiring signal instances, wherein signals sharing the same physical characteristics correspond to one label, giving N labels in total;
storing signals with the same label together to form a data set, giving N data sets in total;
occluding part of the data in the signal instances and training N neural networks, one per label, so that when a network predicts the occluded part of a signal instance carrying its own label the similarity between the predicted signal and the true signal is high, while when it predicts the occluded part of an instance carrying any other label the similarity is low;
acquiring a signal to be identified;
occluding part of the signal to be identified and predicting the occluded part with each of the N neural networks;
and identifying the signal to be identified as the label corresponding to the neural network that predicts the occluded part most accurately.
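The identification procedure above can be sketched in a few lines. This is a minimal illustration under assumed interfaces, not the patent's implementation: `models` is a hypothetical mapping from label to a prediction function that fills in the occluded part, and `mask` is a boolean array marking the occluded region.

```python
import numpy as np

def identify(signal, models, mask):
    """Return the label of the model that predicts the occluded part
    of `signal` most accurately (smallest prediction error)."""
    visible = np.where(mask, 0.0, signal)   # occluded part zeroed: model input
    truth = np.where(mask, signal, 0.0)     # ground truth: occluded part only
    errors = {}
    for label, predict in models.items():
        predicted = predict(visible)        # model's guess for the occluded part
        errors[label] = float(np.linalg.norm(predicted - truth))
    return min(errors, key=errors.get)      # smallest error wins
```

A signal is thus assigned the label whose dedicated predictor reconstructs the hidden region best.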
The similarity between the predicted signal and the true signal is measured by computing the distance between the prediction vector and the true-signal vector, or the distance between the features of the two signals; the smaller the distance, the more similar the predicted signal is to the true signal. Let the prediction result be the vector x' and the true signal the vector x. The similarity d(x', x) is represented by the angle θ between the vectors: the angle between two identical signals is 0, and the angle reflects the similarity between the signals. Identical signals are maximally similar and give the smallest angle; the more dissimilar the signals, the larger the angle. The similarity d(x', x) is calculated as:
$$ d(x', x) = \theta = \arccos\left( \frac{\sum_{i=1}^{n} x_i x_i'}{\sqrt{\sum_{i=1}^{n} x_i^2}\,\sqrt{\sum_{i=1}^{n} (x_i')^2}} \right) $$
wherein x_i represents the i-th element of the vector x, x'_i represents the i-th element of the vector x', and n represents the number of elements in x and x'. Other vector distances may also be used to represent similarity, for example the absolute-value distance, the Euclidean distance, or the Chebyshev distance. Similarly, signal features of the predicted and true signals can be extracted first and the distance between the features used to measure similarity, for example similarity measures based on image-signal color features, image-signal texture features, radar-signal pulse description words, or time-frequency analysis features of sound or communication signals.
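As a concrete sketch of the angle-based similarity described above (standard cosine geometry, not code from the patent):

```python
import math

def cosine_angle(x_pred, x_true):
    """Similarity d(x', x): the angle in radians between the prediction
    vector and the true-signal vector. Identical (or parallel) signals
    give angle 0; the more dissimilar the signals, the larger the angle."""
    dot = sum(a * b for a, b in zip(x_pred, x_true))
    norm_p = math.sqrt(sum(a * a for a in x_pred))
    norm_t = math.sqrt(sum(b * b for b in x_true))
    # clamp to [-1, 1] to guard against floating-point drift
    cos = max(-1.0, min(1.0, dot / (norm_p * norm_t)))
    return math.acos(cos)
```

Any of the other distances mentioned (absolute-value, Euclidean, Chebyshev) could be substituted without changing the identification logic.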
The training of the N neural networks comprises:
training a common base neural network on all acquired signal instances, so that the base network accurately predicts the occluded part of any signal instance;
keeping the trained parameters of the base network fixed, and adding new neural network layers that control function expression to form a controlled neural network;
training N controlled neural networks, one on each of the N label-specific instance sets, so that each controlled network accurately predicts the occluded part only for signals of its own label;
the N controlled networks thus yield N prediction models, one per label.
Adding new neural network layers for controlling function expression to the base neural network comprises:
adding a new layer in front of each layer of the base network, with the same number of nodes as the layer that follows it;
initializing each new layer so that it does not alter the mapping established by the original base network, i.e. in the untrained initial state the controlled network computes exactly the same function as the original base network;
training only the newly added layers' parameters on the label-specific set; as training increasingly strengthens the controlled network's prediction of the occluded part of instances carrying its label, its ability to predict the occluded parts of instances carrying other labels gradually weakens until accurate prediction is no longer possible.
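The identity initialization of the added control layers can be illustrated as follows. This is a sketch under assumed details (a linear control layer and a tanh base layer); the patent does not fix these choices:

```python
import numpy as np

class ControlLayer:
    """Trainable layer inserted in front of a frozen base layer.
    Initialized to the identity map, so before any training the
    controlled network computes exactly the base network's function."""
    def __init__(self, n):
        self.W = np.eye(n)        # identity weights: no effect initially
        self.b = np.zeros(n)
    def __call__(self, x):
        return self.W @ x + self.b

class ControlledLayer:
    """One frozen base layer with its control layer in front."""
    def __init__(self, W_base, b_base):
        self.W, self.b = W_base, b_base               # frozen base parameters
        self.control = ControlLayer(W_base.shape[1])  # matches base input width
    def __call__(self, x):
        return np.tanh(self.W @ self.control(x) + self.b)
```

Only the `ControlLayer` parameters would be updated during label-specific training; the base weights stay fixed.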
Training a neural network to predict the occluded part of a signal comprises:
placing a rectangular mask over a randomly chosen part of the signal instance;
setting the part of the instance inside the rectangular mask to zero to form the input part of a training sample;
setting the part of the instance outside the rectangular mask to zero to form the output part of the training sample;
forming one training sample from this input part and output part;
and training the neural network to fit the input and output parts of the training samples.
The length and width of the rectangular mask are generated at random.
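The masking procedure can be sketched for a 2-D signal instance such as a 28x28 image; the array shapes and the RNG usage here are illustrative assumptions:

```python
import numpy as np

def make_training_pair(signal_2d, rng):
    """Build one (input, output) training pair by occluding a random
    rectangle: the input has the rectangle's interior zeroed, the output
    has everything outside the rectangle zeroed."""
    h, w = signal_2d.shape
    rh = int(rng.integers(1, h + 1))          # random rectangle height
    rw = int(rng.integers(1, w + 1))          # random rectangle width
    top = int(rng.integers(0, h - rh + 1))    # random position
    left = int(rng.integers(0, w - rw + 1))
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + rh, left:left + rw] = True
    x_in = np.where(mask, 0.0, signal_2d)     # inside the mask zeroed -> input
    y_out = np.where(mask, signal_2d, 0.0)    # outside the mask zeroed -> output
    return x_in, y_out
```

Input and output partition the instance, so fitting such pairs forces the network to learn the mapping from the visible region to the hidden one.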
In a second aspect, the present invention provides an anti-interference signal identification apparatus, the apparatus comprising:
the acquisition module, used to acquire labeled signal instances and the signal to be identified, each label corresponding to a specific physical characteristic; the acquisition module outputs the labeled signal instances to the training module and the signal to be identified to the classification module;
the training module, used to call the labeled signal instances in the acquisition module, train the intelligent prediction models used in the prediction module, and output the trained model parameters to the prediction module;
the prediction module, used to call the intelligent prediction models trained in the training module and predict the occluded parts of signals for the identification module to call; each prediction model corresponds to a different label and accurately predicts the occluded part of signal instances carrying that label, while the similarity between its prediction and the true signal is low when it predicts the occluded part of instances carrying other labels;
the identification module, used to receive the signal to be identified from the classification module, call the prediction module to predict the occluded part of that signal, and output the prediction results to the classification module;
and the classification module, used to call the signal to be identified in the acquisition module, transmit it to the identification module, call the prediction results in the identification module, and classify the signal to be identified according to the prediction accuracy of the different prediction models.
The prediction module consists of controlled neural network submodules; the base neural network submodule is the basic building block of each controlled submodule and accurately predicts the occluded part of any signal instance. Each controlled submodule is built on the base submodule and accurately predicts the occluded part of signal instances carrying its specific label, while the similarity between its prediction and the true signal is low when it predicts the occluded part of instances carrying other labels.
Each controlled neural network submodule is built on the base neural network submodule by adding a new layer in front of each layer of the base submodule, with the same number of nodes as the layer that follows it. Each new layer is initialized so that it does not alter the mapping established by the original base network, i.e. in the untrained initial state the controlled network computes exactly the same function as the original base network. By training only the newly added layers' parameters on the label-specific set, the controlled network's prediction of the occluded part of instances carrying its label is continuously strengthened while its ability to predict the occluded parts of instances carrying other labels gradually weakens.
In a third aspect, the present invention provides a computer device comprising a memory, a general-purpose computing processor, and an intelligent-training computing processor; the memory stores a computer program, and when the general-purpose computing processor and the intelligent-training computing processor execute the program, the following steps are implemented:
acquiring signal instances, wherein signals sharing the same physical characteristics correspond to one label, giving N labels in total;
storing signals with the same label together to form a data set, giving N data sets in total;
occluding part of the data in the signal instances and training N neural networks, one per label, so that when a network predicts the occluded part of a signal instance carrying its own label the similarity between the predicted signal and the true signal is high, while when it predicts the occluded part of an instance carrying any other label the similarity is low;
acquiring a signal to be identified;
occluding part of the signal to be identified and predicting the occluded part with each of the N neural networks;
and identifying the signal to be identified as the label corresponding to the neural network that predicts the occluded part most accurately.
The training of the N neural networks comprises:
training a common base neural network on all acquired signal instances, so that the base network accurately predicts the occluded part of any signal instance;
keeping the trained parameters of the base network fixed, and adding new neural network layers that control function expression to form a controlled neural network;
training N controlled neural networks, one on each of the N label-specific instance sets, so that each controlled network accurately predicts the occluded part only for signals of its own label;
the N controlled networks thus yield N prediction models, one per label.
Adding new neural network layers for controlling function expression to the base neural network comprises:
adding a new layer in front of each layer of the base network, with the same number of nodes as the layer that follows it;
initializing each new layer so that it does not alter the mapping established by the original base network, i.e. in the untrained initial state the controlled network computes exactly the same function as the original base network;
training only the newly added layers' parameters on the label-specific set; as training increasingly strengthens the controlled network's prediction of the occluded part of instances carrying its label, its ability to predict the occluded parts of instances carrying other labels gradually weakens until accurate prediction is no longer possible.
Training a neural network to predict the occluded part of a signal comprises:
placing a rectangular mask over a randomly chosen part of the signal instance;
setting the part of the instance inside the rectangular mask to zero to form the input part of a training sample;
setting the part of the instance outside the rectangular mask to zero to form the output part of the training sample;
forming one training sample from this input part and output part;
and training the neural network to fit the input and output parts of the training samples.
In a fourth aspect, the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs the following steps:
acquiring signal instances, wherein signals sharing the same physical characteristics correspond to one label, giving N labels in total;
storing signals with the same label together to form a data set, giving N data sets in total;
occluding part of the data in the signal instances and training N neural networks, one per label, so that when a network predicts the occluded part of a signal instance carrying its own label the similarity between the predicted signal and the true signal is high, while when it predicts the occluded part of an instance carrying any other label the similarity is low;
acquiring a signal to be identified;
occluding part of the signal to be identified and predicting the occluded part with each of the N neural networks;
and identifying the signal to be identified as the label corresponding to the neural network that predicts the occluded part most accurately.
The training of the N neural networks comprises:
training a common base neural network on all acquired signal instances, so that the base network accurately predicts the occluded part of any signal instance;
keeping the trained parameters of the base network fixed, and adding new neural network layers that control function expression to form a controlled neural network;
training N controlled neural networks, one on each of the N label-specific instance sets, so that each controlled network accurately predicts the occluded part only for signals of its own label;
the N controlled networks thus yield N prediction models, one per label.
Adding new neural network layers for controlling function expression to the base neural network comprises:
adding a new layer in front of each layer of the base network, with the same number of nodes as the layer that follows it;
initializing each new layer so that it does not alter the mapping established by the original base network, i.e. in the untrained initial state the controlled network computes exactly the same function as the original base network;
training only the newly added layers' parameters on the label-specific set; as training increasingly strengthens the controlled network's prediction of the occluded part of instances carrying its label, its ability to predict the occluded parts of instances carrying other labels gradually weakens until accurate prediction is no longer possible.
Training a neural network to predict the occluded part of a signal comprises:
placing a rectangular mask over a randomly chosen part of the signal instance;
setting the part of the instance inside the rectangular mask to zero to form the input part of a training sample;
setting the part of the instance outside the rectangular mask to zero to form the output part of the training sample;
forming one training sample from this input part and output part;
and training the neural network to fit the input and output parts of the training samples.
According to the anti-interference signal identification method and apparatus, computer device, and storage medium, the computer device acquires the signal instances and the signal to be identified, occludes the signal to be identified and predicts the occluded part with the trained prediction models, and then identifies the signal as the label corresponding to the neural network that predicts the occluded part most accurately, obtaining the identification result. During training the prediction models learn the mapping relations among the data points within the signal instances, and the model for a specific label predicts well only on instances of that label. Accurate prediction, verified by comparing the prediction with the original signal to be identified, therefore shows that the signal shares a similar internal mapping relation with the labeled instances, and the identification model finally attaches the corresponding label to the signal. The identification process can thus be explained to a certain extent and understood by humans, solving the technical problems that the traditional approach of recognizing signals directly with a multi-layer perceptron lacks interpretability and has weak interference resistance.
The invention achieves the following effects:
The computer device acquires the instance signals and the signal to be identified, occludes the signal to be identified, predicts the occluded part with the trained prediction models, and identifies the signal as the label corresponding to the neural network that predicts the occluded part most accurately, obtaining the identification result. During training the prediction models learn the mapping relations among the data inside the instance signals, and the model for a specific label predicts well only on instance signals of that label. Accurate prediction therefore shows that the signal to be identified shares a similar internal mapping relation with the labeled instance signals; the identification model finally attaches the corresponding label to the signal, so the identification process can be explained to a certain extent and understood by humans. This solves the technical problems that recognizing signals directly with a neural network lacks interpretability and has weak interference resistance.
In a specific embodiment of handwritten-digit image signal recognition, the method is compared with the classical nearest-neighbor algorithm and the traditional multi-layer perceptron. The results show that even under high-intensity noise interference the method still achieves a recognition accuracy of about 67%, roughly 28 percentage points higher than the nearest-neighbor method and roughly 50 percentage points higher than the traditional multi-layer perceptron. The accuracy of signal recognition under noise interference is thus greatly improved, the method is highly robust, and the anti-interference capability of signal recognition is enhanced. In addition, because the mapping relations among the data within an instance signal learned by the trained prediction models have actual physical significance and interpretability, the robustness of the recognition results output by the identification model is greatly improved.
Drawings
FIG. 1 is a diagram of the internal structure of a computer device in one implementation of the method of the present invention;
FIG. 2 is a schematic flow chart of the anti-interference signal identification method provided by the present invention;
FIG. 3 is a schematic flow chart of an anti-interference signal identification method according to an embodiment of the present invention;
FIG. 4 is a flow diagram illustrating one possible implementation of adding a new neural network layer for controlling function expression in the underlying neural network in an embodiment of the present invention;
FIG. 5 is a schematic flow chart illustrating one possible implementation of "training a neural network to predict an occluded portion of a signal" in an embodiment of the present invention;
FIG. 6 is a schematic diagram of a basic neural network for predicting the occluded part of the handwritten digital image signal in the MNIST handwritten digit set;
FIG. 7 shows the results of predicting the occluded parts of training image signals with the trained base neural network; the prediction of the occluded part is shown inside each box;
FIG. 8 is a schematic diagram of a controlled neural network for predicting occluded portions of a handwritten digital image signal in an MNIST handwritten digit collection;
FIG. 9 is a diagram illustrating the prediction of the occluded part of a training image signal labeled 6 using 10 trained neural networks, respectively;
FIG. 10 shows a test image signal of the digit 1 from the MNIST handwritten-digit test set after perturbation by noise at levels 25, 51, 76, 102, 127, 153, 178, 204, 229, and 255, respectively;
FIG. 11 is a schematic diagram of a multi-layered perceptron for recognizing handwritten digital image signals from a set of MNIST handwritten digits;
FIG. 12 is a robustness-analysis graph comparing the method of the present invention, the nearest-neighbor method, and the traditional multi-layer perceptron on the same training image signals, with test-image disturbance levels of 0, 25, 51, 76, 102, 127, 153, 178, 204, 229, and 255, respectively;
FIG. 13 is a schematic structural diagram of an anti-interference signal identification apparatus according to an embodiment of the present invention.
Examples
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The invention does not preprocess the signal with noise reduction, filtering, or similar means. Filtering and denoising obviously improve signal quality, but it should be emphasized that filtering has its limits and may damage or even destroy important characteristic information in the original signal. The method performs signal identification without damaging the original signal, retaining all of its characteristic information, and achieves good anti-interference capability and robustness without resorting to noise reduction, filtering, or other means that may damage the original signal.
The anti-interference signal identification method provided by the embodiment of the application can be applied to the computer equipment shown in fig. 1. The computer equipment comprises a general-purpose computing processor, an intelligent training computing processor, a memory, a network interface, a display device, an input device and an output device which are connected through a bus. Wherein the general purpose computing processor of the computer device is configured to provide general purpose computing and control capabilities. The intelligent training calculation processor of the computer equipment is used for providing intelligent model training and reasoning calculation acceleration capability. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the labeled signal instances, the prediction models, the recognition models and the classification models in the following embodiments, and the specific description of the prediction models, the recognition models and the classification models is referred to the specific description in the following embodiments. The network interface of the computer device may be used to communicate with other devices outside over a network connection. Optionally, the computer device may be a server, a desktop, a personal digital assistant, other terminal devices such as a tablet computer, a mobile phone, and the like, or a cloud or a remote server, and the specific form of the computer device is not limited in the embodiment of the present application. 
The display device of the computer device may be a display screen, such as a liquid crystal display screen or an electronic ink display screen, the input device of the computer device may be a touch layer covered on the display screen, or may be a key, a track ball or a touch pad arranged on a casing of the computer device, or may be an external keyboard, a touch pad or a mouse, or may be a device for transmitting external physical information into the computer, such as: cameras, pressure sensing devices, microphone devices, and the like. The output device of the computer equipment is a device which can transmit information, such as a display, a loudspeaker, a vibration device and the like, the loudspeaker can output human voice, the vibration device can specifically vibrate to express information, and the like. Of course, the input device, the output device and the display device may not belong to a part of the computer device, and may be external devices of the computer device.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific examples.
Example 1: the specific embodiment below may be adapted to incorporate different identification signal types. Embodiments of the present application will be described below with reference to the accompanying drawings.
It should be noted that the execution subjects of the following method embodiments may be an anti-interference image signal recognition device, an anti-interference sound signal recognition device, an anti-interference communication signal recognition device, an anti-interference radar signal recognition device, and the like, respectively, and the information acquisition device in the apparatus may be a camera, a microphone, a communication terminal, a radar receiver, and the like, which is not limited in this embodiment; the apparatus can be implemented as part or all of the computer device by software, hardware or a combination of software and hardware. The following method embodiments are described by taking the execution subject as the computer device as an example.
Fig. 2 is a schematic flowchart of a signal identification method according to an embodiment. The embodiment relates to a specific process for classifying signals to be recognized by computer equipment by adopting a prediction model, a recognition model and a classification model.
As shown in fig. 2, the method for identifying signals with interference resistance includes:
S10, acquiring signal instances, wherein signals with the same physical characteristics correspond to one label, there being N labels in total.
Specifically, the computer device obtains the signal instance with the tag information, which may be a signal instance for reading the tag information stored in its own storage device; or receiving the signal example of the tagged information sent by other equipment; or may be a tagged signal instance obtained after pre-processing the original tagged signal instance. Alternatively, the preprocessing may be up-sampling, down-sampling, clipping, normalizing, or the like, of the signal. Optionally, as a specific processing manner, the preprocessing may further be to perform affine transformation on the original signal instance by using a spatial transformation network, so as to implement geometric correction on the original signal instance, and obtain the signal instance with the tag information. The computer device may perform various warping operations on the tagged signal instance, including but not limited to graphics stretching and graphics compressing, etc. Alternatively, the example signal may include a handwritten digital image signal, a human image signal, an animal image signal, and may also include image signals of other objects, and may not only be limited to the image signal, but also be applied to other physical signals, such as: a sound signal, a communication signal, a radar signal, etc., and the present embodiment is not limited thereto.
S20, storing the signals with the same label together to form a data set, and attaching to the data set a label consistent with the signals it contains; there being N labels, there are N sets;
Specifically, the computer device acquires the signal instances with label information and stores the signal instances together according to their labels. With N kinds of labels, N signal instance data sets are formed.
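As a minimal sketch of the grouping in S20 (assuming the instances arrive as (signal, label) pairs; all names here are illustrative, not from the original):

```python
from collections import defaultdict

def group_by_label(labeled_instances):
    """Group (signal, label) pairs into one data set per label (step S20).

    Returns a dict mapping each of the N labels to the list of signal
    instances that carry that label.
    """
    sets = defaultdict(list)
    for signal, label in labeled_instances:
        sets[label].append(signal)
    return dict(sets)
```

With N distinct labels in the input, the returned dict has exactly N sets, each holding only instances of its own label.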
S30, occluding part of the data in the signal instances and training N neural networks, each corresponding to one label, such that when each neural network predicts the occluded part of signal instances with the same label, the similarity between the prediction result signal and the real signal is high, while when it predicts the occluded part of signal instances with other labels, the similarity between the prediction result signal and the real signal is low; the input data and output data used to train the neural networks both come from signals with actual physical meaning, and what the neural networks fit and describe is the objective relation between the internal parts of the signal instances;
specifically, the computer device trains N neural networks using N sets of signal instances. It should be noted that, in the training learning process, the corresponding mapping relationship between the data points inside the signal instance is learned, and the prediction model of the specific label only has good prediction capability for the signal instance of the specific label. The N neural networks constitute N specific predictive models. The neural network of each prediction model obtains a higher similarity between the prediction result signal and the real signal when predicting the occluded part of the signal instance with the same label, and the neural network of each prediction model obtains a lower similarity between the prediction result signal and the real signal when predicting the occluded part of the signal instance with other labels. When the computer blocks the signal instance, the signal instance can be in various blocking shapes, such as rectangle, circle, any shape and the like; the size and position of the occlusion region may also be randomly selected; only one region may be occluded at a time, or multiple regions may be occluded. The computer may perform various occlusions on the signals, which are not limited to size, position, number, shape, and the like.
S40, acquiring a signal to be identified;
specifically, the most important difference between this step and the example of acquiring the signal in S10 is that the signal to be identified is acquired without tag information and is the signal to be identified. The computer equipment acquires the signal to be identified, and can read the signal to be identified stored in the storage equipment of the computer equipment; or receiving signals to be identified sent by other equipment; but may also be a signal obtained after pre-processing the original signal. Alternatively, the preprocessing may be up-sampling, down-sampling, clipping, normalizing, or the like, of the signal. Optionally, as a specific processing manner, the preprocessing may also be to perform affine transformation on the original signal by using a spatial transformation network, so as to implement geometric correction on the original signal, and obtain a signal to be processed. The computer device may perform various warping operations on the signal including, but not limited to, graphics stretching, graphics compressing, and the like. Alternatively, the example signal may include a handwritten digital image signal, a human image signal, an animal image signal, and may also include image signals of other objects, and may not only be limited to the image signal, but also be applied to other physical signals, such as: a sound signal, a communication signal, a radar signal, etc., and the present embodiment is not limited thereto.
S50, blocking part of the signals to be recognized, respectively predicting the blocked part of the signals to be recognized by using N trained neural networks, and storing the predicted signals;
in particular, the computer device uses N trained neural networks, or so-called N predictive models, to predict the occluded part of the signal to be identified. It should be noted that, when the computer masks the signal to be recognized, the same masking manner as that used in training the model may be adopted, or a new masking manner may be adopted; the shielding shape can be various, and only one can be selected, such as rectangle, circle, any shape and the like; the size and the position of the occlusion area can be randomly selected, and also can be a fixed group or a plurality of groups; only one region may be occluded at a time, or multiple regions may be occluded. The computer may perform various occlusions on the signals, which are not limited to size, position, number, shape, and the like. The computer may perform one or more of occlusion and prediction, and finally store the predicted signal.
S60, identifying the signal to be identified as the label corresponding to the neural network that predicts the occluded part most accurately.
Specifically, the computer device may perform similarity comparison between the stored prediction signal and the signal to be identified, where the similarity comparison may use one prediction signal, or may use multiple prediction signals with different occlusion positions or patterns. And finally, identifying the signal to be identified as the label corresponding to the neural network with the most accurate prediction of the shielded part. The method for measuring the prediction accuracy may be a similarity accumulation averaging method, a similarity weighted accumulation averaging method, or other methods, and this embodiment is not limited.
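The decision rule of S50-S60 with cumulative averaging can be sketched as follows (a minimal illustration, not the patented implementation; `models`, `occlude` and `distance` are placeholder names for the N prediction models, the occlusion procedure and the similarity measure, where smaller distance means more similar):

```python
def identify(signal, models, occlude, distance, num_trials=30):
    """Label a signal as the tag of the model that predicts its occluded
    parts most accurately (steps S50-S60), by similarity accumulation
    and averaging.

    models   -- dict: label -> prediction function (occluded signal -> prediction)
    occlude  -- function returning one randomly occluded copy of the signal
    distance -- similarity measure; smaller means more similar
    """
    trials = [occlude(signal) for _ in range(num_trials)]
    scores = {}
    for label, predict in models.items():
        # Accumulated average distance over all occlusion trials.
        total = sum(distance(signal, predict(t)) for t in trials)
        scores[label] = total / num_trials
    # The most accurate (smallest average distance) model's label wins.
    return min(scores, key=scores.get)
```

A weighted accumulation, as the text mentions, would simply replace the uniform sum with per-trial weights.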
On the basis of the above embodiment, optionally, the training of the N neural networks includes two steps of training a basic neural network and training a controlled neural network, and finally a required prediction model is formed; one possible implementation of the foregoing S30 is shown in fig. 3, and includes:
S31, training a common basic neural network on all the acquired signal instances, so that the basic neural network can accurately predict the occluded parts of the signal instances.
Specifically, the trained neural network model is a basic neural network. The basic neural network may be formed by mixing a convolutional layer (Convolution), a void convolutional layer (related Convolution), and a deconvolution layer (De-Convolution), and the number of types may include one or more of them, and the total number of layers may be three, four, five, or other layers. Specifically, the computer device may train the basic neural network to accurately predict the signal of the blocked part after the obtained signal instances block part of the signal instances in sequence.
S32, keeping the trained parameters of the basic neural network unchanged, and adding new neural network layers for controlling function expression to the basic neural network to form a controlled neural network.
Specifically, after the basic neural network is trained, the computer device keeps the parameters of the basic neural network unchanged, a new neural network layer for controlling function expression is added between layers of the basic neural network, a control neural network layer can be added between each layer of the basic neural network, or a control neural network layer can be added between partial layers, and the number of layers of each control layer can be one, two, three or other layers. The newly added neural network should be able to form a controlled neural network by setting an initial state, which initially maintains the same or similar predictive power as the original basic neural network.
S33, respectively training N controlled neural networks on N signal instance sets with different labels, so that each controlled neural network can only accurately predict the shielded part of a specific label signal;
specifically, the computer device trains control layer parameters of the N controlled neural networks on the signal instance sets with different labels by adopting the same method as the training of the basic neural network, and the parameters of the basic neural network are kept unchanged in the period. Through the strengthening training, the prediction capability of the controlled neural network on the expression mapping relation of a specific label signal example is enhanced, and the prediction capability of the controlled neural network on the expression mapping relation of other label signals is forgotten and weakened. So that each controlled neural network can only accurately predict the occluded part of a specific tag signal.
And S34, the N controlled networks correspond to the N prediction models to obtain N prediction models.
Specifically, the computer device stores the obtained N controlled neural networks, each controlled neural network corresponds to one prediction model, and the obtained N prediction models correspond to N categories respectively.
Optionally, in the foregoing implementation, one possible implementation manner of "adding a new neural network layer for controlling function expression in the basic neural network" may be as shown in fig. 4, and includes:
S321, adding a new neural network layer in front of each layer of the basic neural network, with the same number of nodes as the subsequent network layer;
specifically, the number of layers of the new neural network added by the computer device may be one layer, two layers, or other number of layers.
S322, setting the initial state of the newly added neural network layer to be a state that does not affect the mapping relation established by the original basic neural network, namely, the function of the controlled neural network is consistent with the function of the original basic neural network in the untrained initial state;
specifically, the computer device sets the newly added neural network layer parameter to w ═ w0,…,wn]Wherein n is the number of nodes of the subsequent network layer; the operation is performed by setting the signal x originally accessed into the back layer neural network to [ x ═ x0,…,xn]Transformed into a result of bitwise multiplication [ x ]0w0,…,xnwn]。
S323, on the specific set, training newly added neural network layer parameters, and gradually weakening the capability of the controlled neural network to predict the shielded part with other label signal examples along with continuously strengthening the training of the controlled neural network to predict the shielded part of the specific label signal example, so that accurate prediction cannot be carried out.
Specifically, on a specific label signal instance set, the computer device continuously strengthens and trains newly added control neural network layer parameters (control layer parameters for short) so that the signal prediction capability of the controlled neural network on a specific label is maintained or enhanced, thereby weakening or eliminating the prediction capability on other label signals. Because the parameters of the control layer are set to be not influenced by the functions of the original neural network when the parameters of the control layer are initial, the newly added control layer can be trained together or layer by layer; the algorithm can be executed for multiple times, a new control layer is added to a controlled neural network, the trained parameters are kept unchanged, and only the newly added control layer parameters are intensively trained.
Alternatively, in all the above embodiments, one possible implementation manner of "training the neural network to predict the occluded part of the signal" may be as shown in fig. 5, including:
S311, randomly framing a part of the signal instance with a rectangular occlusion box;
Specifically, the size of the rectangular box randomly generated by the computer device is kept within the range the neural network can predict: too large a box and the neural network cannot predict accurately; too small a box and, although prediction is accurate, the prediction ability of the neural network cannot be evaluated.
S312, setting the part of the signal instance inside the rectangular occlusion box to zero, as the input part of the training data;
S313, setting the part of the signal instance outside the rectangular occlusion box to zero, as the output part of the training data;
S314, forming one piece of training data comprising an input part and an output part;
S315, training the neural network to fit the input and output parts of the training data.
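Steps S311-S314 can be sketched for a 2-D signal as follows (a minimal illustration assuming numpy arrays; the 7-14 side range is the one Example 2 later uses for 28 × 28 MNIST images, and all names are illustrative):

```python
import numpy as np

def make_training_pair(signal, rng, min_side=7, max_side=14):
    """Build one (input, output) training pair from a 2-D signal
    instance (steps S311-S314): a random rectangle is occluded, the
    input is the signal with the rectangle zeroed, and the output
    keeps only the rectangle's contents."""
    h, w = signal.shape
    bh = rng.integers(min_side, max_side + 1)   # occlusion box height
    bw = rng.integers(min_side, max_side + 1)   # occlusion box width
    top = rng.integers(0, h - bh + 1)
    left = rng.integers(0, w - bw + 1)

    mask = np.zeros_like(signal, dtype=bool)
    mask[top:top + bh, left:left + bw] = True

    x = np.where(mask, 0.0, signal)   # inside box zeroed -> network input
    y = np.where(mask, signal, 0.0)   # outside box zeroed -> target output
    return x, y
```

Note that input and output are complementary: summed, they reconstruct the original signal, so fitting input to output trains the network to predict the occluded content from its surroundings.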
Example 2: the invention is implemented to recognize the handwritten digital image signal by taking the recognition of the handwritten digital image signal in the MNIST data set as an example. The MNIST data set (Mixed National Institute of Standards and technology database) is a large handwritten digital database collected and collated by the National Institute of Standards and technology, containing a training set of 60,000 examples and a test set of 10,000 examples.
S10, acquiring the first 500 handwritten digital image signals in the MNIST training data set as signal examples, wherein each image signal handwritten with the same number corresponds to a digital label which is provided with 10 types of numbers 0-9;
S20, storing the image signals with the same label together to form a data set, and labeling the data set with a label consistent with the image signals it contains. The labels are the digits 0-9, 10 kinds in total, forming 10 data sets denoted S0, S1, …, S9, corresponding to the digits 0, 1, …, 9, respectively. Set Si stores only the signal instances with digital label i. Among the 500 example image signals, 62 carry the label 0, 51 carry the label 1, and so on, with |S0| + |S1| + … + |S9| = 500;
S30, blocking partial data inside the signal examples, training 10 neural networks, wherein each neural network corresponds to a different digital label, so that each neural network can accurately predict the blocked part of the signal examples with the same digital label, meanwhile, when the blocked part of the signal examples with other labels is predicted, the similarity between the predicted result signal and a real signal is low, the input data and the output data of the trained neural networks are both from signals with actual physical meanings, and the fitting and description of the neural networks are the relation objectively existing among the internal parts of the signal examples;
S31, training a common basic neural network on all the acquired handwritten digital image signal instances, so that the basic neural network can accurately predict the occluded parts of the signal instances; the structure of the basic neural network is shown in FIG. 6;
S311, randomly framing a part of the image signal instance with a rectangular occlusion box;
Specifically, the size of the rectangular box randomly generated by the computer device is kept within the range the neural network can predict: too large a box and the neural network cannot predict accurately; too small a box and, although prediction is accurate, the prediction ability of the neural network cannot be evaluated. Accordingly, since the MNIST digital image signals are 28 × 28 pixels, the length and width of the rectangular box are chosen randomly between 7 and 14.
S312, setting the part of the image signal instance inside the rectangular occlusion box to zero, as the input part of the training data;
S313, setting the part of the image signal instance outside the rectangular occlusion box to zero, as the output part of the training data;
S314, forming one piece of training data comprising an input part and an output part;
S315, training the neural network to fit the input and output parts of the training data.
Steps S311-S315 are repeated to train the basic neural network until it attains good prediction ability. FIG. 7 shows the result of a trained basic neural network predicting the occluded part of an image signal instance; the rectangular box contains the predicted image signal output by the basic neural network for the occluded region.
S32, keeping the trained parameters of the basic neural network unchanged, and adding new neural network layers for controlling function expression to the basic neural network to form a controlled neural network.
S321, adding 3 new neural network layers before each layer of the basic neural network, having the same number of nodes as the succeeding network layer, as shown in fig. 8;
S322, setting the initial state of the newly added neural network layers to a state that does not affect the mapping relation established by the original basic neural network, i.e. in the untrained initial state the function of the controlled neural network is consistent with that of the original basic neural network;
Specifically, the computer device sets each newly added neural network layer parameter to w = [w0, …, wn], where n is the number of nodes of the subsequent network layer; the operation transforms the signal x = [x0, …, xn] originally fed into the following network layer into the result of bitwise multiplication, [x0w0, …, xnwn]; the initial values of the parameters w0, …, wn are all set to 1. The block diagram is shown in fig. 8.
S323, training the newly added neural network layer parameters on the set Si; as the training of the controlled neural network to predict the occluded part of image signal instances labeled with the digit i is continuously strengthened, its ability to predict the occluded part of image signal instances with other digital labels gradually weakens until accurate prediction is no longer possible.
S33, respectively training 10 controlled neural networks on 10 image signal instance sets with different digital labels, so that each controlled neural network can only accurately predict the shielded part of one specific digital label image signal;
Steps S321-S323 are repeated on S0, S1, …, S9 respectively, training 10 controlled neural networks.
S34, 10 controlled networks correspond to 10 prediction models, and 10 prediction models are obtained.
The 10 trained controlled neural networks are used to respectively predict the occluded part of a training image signal labeled with the digit 6; the prediction results are shown in fig. 9. The controlled neural network labeled 6 predicts the image signal labeled 6 most accurately, while the prediction results of the controlled neural networks of the other labels are not ideal.
S40, acquiring 1 handwritten digital image signal in the MNIST test data set as a signal to be identified;
S50, occluding part of the signal to be identified, using the 10 trained controlled neural networks to respectively predict the occluded part of the image to be identified, and storing the predicted image signals;
repeating the step S50 for 30 times, randomly blocking 30 rectangular areas of the test image signal, then respectively predicting the blocked part by using 10 trained controlled neural networks, and respectively recording and storing image signals predicted by different controlled neural networks.
S60, identifying the image signal to be identified as the label corresponding to the controlled neural network that predicts the occluded part most accurately.
The prediction results of the different controlled neural networks are compared for similarity with the whole original image signal, and the accuracy of the prediction results of the 10 neural networks is calculated by accumulation and averaging. Prediction accuracy is measured by a similarity criterion: the more similar the predicted image signal is to the original image signal, the higher the prediction accuracy. Similarity is measured in terms of signal distance; the smaller the distance, the more similar. The pixel data of a handwritten digital image signal forms a vector, and the cosine included angle between vectors is used as the distance between image signals. Suppose the vector corresponding to the image signal to be identified is X, and the data vectors corresponding to the 30 image signals predicted by the controlled neural network labeled with the digit i are Pi,1, Pi,2, …, Pi,30. The prediction accuracy is then expressed as (d(X, Pi,1) + d(X, Pi,2) + … + d(X, Pi,30))/30, where d(X, P) is the cosine included angle between the two signal vectors X = [x1, …, xn] and P = [p1, …, pn]:
d(X, P) = arccos( (x1p1 + x2p2 + … + xnpn) / ( sqrt(x1^2 + … + xn^2) · sqrt(p1^2 + … + pn^2) ) )
The cosine included angle between two identical image signals is 0; the cosine included angle reflects the similarity between image signals, and identical image signals are the most similar. Conversely, the larger the included angle, the more dissimilar the signals. In this embodiment, the smaller the prediction accuracy value, the more accurate the prediction. Finally, the test image signal is labeled with the digital label corresponding to the neural network whose prediction is the most accurate.
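The cosine-angle distance and the accumulated-average accuracy just described can be sketched directly (a minimal numpy illustration; function names are ours):

```python
import numpy as np

def cosine_angle(x, p):
    """Cosine included angle d(X, P) between two signal vectors:
    0 for identical signals, larger for less similar ones."""
    cos = np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # clip guards rounding error

def prediction_accuracy(x, predictions):
    """Accumulated-average distance between the signal to be identified
    and one controlled network's predictions; smaller is more accurate."""
    return sum(cosine_angle(x, p) for p in predictions) / len(predictions)
```

With 30 predicted signals Pi,1, …, Pi,30 per network, `prediction_accuracy` computes exactly (d(X, Pi,1) + … + d(X, Pi,30))/30, and the label of the network with the smallest value is assigned.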
Steps S40-S60 are repeated to classify all 10,000 test handwritten digital image signals in the MNIST data set, obtaining the recognition results.
To illustrate the robustness of the present invention, the digital image signal is still identified using the already trained predictive model, i.e. only the steps S40-S60 are repeated; the difference is that different degrees of noise interference are added to original 10000 test image signals, and the image signals subjected to the noise interference are used as signals to be identified.
The value of each pixel point of an original test image signal is an integer between 0 and 255 representing the brightness of that point: 0 is darkest and 255 is brightest. The superimposed noise interference signal is Q, and each point qi of the noise interference signal takes values following a discrete uniform distribution on 0, 1, 2, …, α, where α is a nonnegative integer less than or equal to 255. The points of the noise interference signal follow the same discrete uniform distribution and are independent and identically distributed discrete random variables. The value of α reflects the level of noise interference: α = 0 corresponds to no noise interference, and the overall value of the noise interference signal increases as α increases; therefore, the value of α is used to represent the noise interference level. Suppose the 784 pixel points of a test image signal take the values [x1, x2, …, x784] and the 784 noise interference signal points take the values [q1, q2, …, q784]; the 784 pixel points of the test image signal disturbed by noise can then be denoted as [T(x1+q1), T(x2+q2), …, T(x784+q784)], where
T(x) = x if x ≤ 255, and T(x) = 255 if x > 255.
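This perturbation can be sketched in a few lines (an illustrative numpy version, assuming the truncation T saturates pixel sums at 255 as described above):

```python
import numpy as np

def perturb(pixels, alpha, rng):
    """Superimpose i.i.d. uniform noise on {0, 1, ..., alpha} onto the
    pixel vector and truncate each sum at 255 (the map T in the text)."""
    q = rng.integers(0, alpha + 1, size=pixels.shape)   # noise at level alpha
    return np.minimum(pixels + q, 255)
```

Setting alpha = 0 returns the image unchanged (no noise interference), while alpha = 255 gives the highest interference level used in the robustness experiments.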
FIG. 10 shows one test image signal from the MNIST handwritten digit test set after being disturbed by noise at levels 25, 51, 76, 102, 127, 153, 178, 204, 229 and 255, respectively.
The accuracy of identifying the disturbed handwritten digital image signals using the method of the present invention is close to 80.33%, 79.82%, 75.44%, 74.06%, 72.92%, 73.73%, 74.09%, 73.51%, 72.24%, 70.67%, 67.09% with noise disturbance levels of 0,25,51,76,102,127,153,178,204,229,255, respectively.
The nearest neighbor algorithm is one of the most classical pattern recognition methods, and 1-nearest neighbor is used, namely, the test image signal is compared with the image signal examples of the first 500 MNIST training data set one by one, the image signal with the nearest distance is selected, and the test image signal is recognized as the type of the image signal example with the nearest distance. The cosine angle between the vectors of which the pixels of the image signals form vectors is also used as the distance between the image signals. In the case of noise interference levels of 0,25,51,76,102,127,153,178,204,229,255, the accuracy of identifying the disturbed handwritten digital image signals using the nearest neighbor method is 85.23%, 84.56%, 83.73%, 82.24%, 80.2%, 77.39%, 72.59%, 64.82%, 55.47%, 47.25%, 39.02%, respectively.
The conventional multi-layer perceptron is a classical machine learning method; its structure is shown in fig. 11. A Rectified Linear Unit layer is used after each convolution layer and fully-connected layer, except that the output layer is "fully-connected layer + Softmax", whose output is normalized to the one-hot encoding of the digital labels 0-9. The multi-layer perceptron is trained using the first 500 image signal instances of the MNIST training data set and the one-hot encodings of the corresponding digital labels as training data, which is the classical mode of traditional machine learning. With noise interference levels of 0, 25, 51, 76, 102, 127, 153, 178, 204, 229 and 255, the accuracy of recognizing the disturbed handwritten digital image signals using the trained multi-layer perceptron is approximately 91.56%, 91.11%, 88.57%, 81.86%, 70.01%, 59.08%, 48.05%, 37.37%, 28.83%, 21.44% and 17.13%, respectively.
FIG. 12 shows a robustness analysis of the method of the present invention, the nearest neighbor method and the traditional multi-layer perceptron method under the same training data, with test image signal disturbance levels of 0, 25, 51, 76, 102, 127, 153, 178, 204, 229 and 255. As is apparent from the figure, under noise interference the robustness of the method of the present invention is better than that of the nearest neighbor algorithm and the traditional multi-layer perceptron method. When the test image signal is not disturbed by noise, the traditional multi-layer perceptron can achieve high recognition accuracy, but its accuracy drops rapidly as the noise level increases; the robustness of the nearest neighbor algorithm is better than that of the traditional multi-layer perceptron; the method of the present invention is the most robust, still maintaining about 67% recognition accuracy even at the highest noise interference level of 255, which is about 28 percentage points higher than the nearest neighbor algorithm and about 50 percentage points higher than the traditional neural network.
Example 3: as shown in fig. 13, the present embodiment provides a signal identifying apparatus resistant to noise interference, the apparatus including:
the acquisition module 100 is used for acquiring a signal instance with a label and a signal to be identified, wherein each label corresponds to a specific physical characteristic, the acquisition module outputs the signal instance with the label to the training module, and the acquisition module outputs the signal to be identified to the classification module;
the training module 200 is used for calling a signal example with a label in the acquisition module, training an intelligent prediction model used in the prediction module, and outputting the trained intelligent prediction model parameters to the prediction module;
the prediction module 300 is used for calling the intelligent prediction models trained in the training module and predicting the shielded parts of signals for the recognition module to call; each intelligent prediction model corresponds to a different label and can accurately predict the shielded parts of signal instances with the same label, while when predicting the shielded parts of signal instances with other labels, the similarity between the obtained prediction result signal and the real signal is low;
the recognition module 400 is configured to receive a signal to be recognized in the classification module, call the prediction module to predict an occluded part of the signal to be recognized, and output a prediction result to the classification module;
and the classification module 500 is configured to call the signal to be identified in the acquisition module, transfer the signal to be identified to the identification module, call a prediction result in the identification module, and classify the signal to be identified according to the prediction accuracy of different prediction models in the identification module.
In one embodiment, the prediction module 300 is composed of controlled neural network sub-modules, where the basic neural network sub-module is the basic module from which each controlled neural network sub-module is built; the basic neural network sub-module can accurately predict the shielded parts of all signal instances. Each controlled neural network sub-module is built on the basic neural network sub-module and can accurately predict the shielded part of a signal instance with a specific label, while when predicting the shielded parts of signal instances with other labels, the similarity between the obtained prediction result signal and the real signal is low.
In one embodiment, the controlled neural network sub-module is built on the basic neural network sub-module: a new neural network layer is added in front of each layer of the basic neural network sub-module, and the newly added neural network layer has the same number of nodes as the subsequent network layer. The initial state of the newly added neural network layer is set so as not to influence the mapping relation established by the original basic neural network; that is, in the untrained initial state the function of the controlled neural network is consistent with the function of the original basic neural network. By training the newly added neural network layer parameters on a specific set, as the ability of the controlled neural network to predict the shielded part of signal instances with the specific label is continuously strengthened, its ability to predict the shielded parts of signal instances with other labels is gradually weakened.
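The "does not influence the mapping relation" initial state described above can be sketched as follows for a linear control layer: identity weights and zero biases make the inserted layer a no-op until it is trained. This is a minimal illustration assuming fully-connected layers; the function names are hypothetical.

```python
import numpy as np

def insert_control_layer(n_nodes):
    # The new layer has the same number of nodes as the subsequent base layer.
    # Identity weights and zero biases leave the base network's mapping
    # unchanged, so the untrained controlled network behaves exactly like
    # the original basic network.
    W = np.eye(n_nodes)
    b = np.zeros(n_nodes)
    return W, b

def control_layer_forward(x, W, b):
    # Linear control layer; in its initial state this returns x unchanged.
    return x @ W + b
```

Training would then update only `W` and `b` on the label-specific set, gradually specializing the controlled network.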
In one embodiment, a rectangular occlusion box is used to randomly occlude a portion of a signal instance in the prediction module 300, the identification module 400, and the classification module 500.
For specific limitations of the signal identification device, reference may be made to the above limitations of the signal identification method, which are not repeated here. Each module in the signal identification device can be implemented wholly or partially by software, by hardware, or by a combination thereof. The modules can be embedded in hardware form in, or be independent of, the general purpose computing processor and the intelligent training computing processor in the computer equipment, or can be stored in software form in the memory of the computer equipment, so that the processors can call and execute the operations corresponding to the modules.
Example 4: there is provided a computer device comprising a memory, a general purpose computing processor and an intelligent training computing processor, the memory having a computer program stored therein; the general purpose computing processor and the intelligent training computing processor, when executing the computer program, implement the following steps:
acquiring signal examples, wherein each signal with the same physical characteristics corresponds to one label, and N labels are obtained in total;
storing the signals with the same label together to form a data set, and attaching the data set with labels consistent with the signals contained in the data set, wherein N labels exist, and N sets exist;
blocking partial data in the signal instances and training N neural networks, wherein each neural network corresponds to a different label, so that each neural network can accurately predict the blocked part of signal instances with the same label, while when predicting the blocked parts of signal instances with other labels, the similarity between the obtained prediction result signal and the real signal is low; the input data and the output data of the trained neural networks both come from signals with actual physical meaning, and what the neural networks fit and describe is the objective relation between the internal parts of a signal instance;
acquiring a signal to be identified;
blocking part of the signals to be recognized, respectively predicting the blocked part of the signals to be recognized by using N trained neural networks, and storing the predicted signals;
and identifying the signal to be identified as a label corresponding to the neural network with the most accurate prediction of the shielded part.
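The identification steps above can be sketched as follows. `predictors` is a hypothetical mapping from labels to trained prediction models, and Euclidean distance on the occluded region stands in for whatever similarity measure an implementation chooses.

```python
import numpy as np

def identify(signal, mask, predictors):
    # `mask` is True where the signal is occluded; `predictors` maps each
    # label to a model that predicts the occluded part from the rest.
    occluded = np.where(mask, 0.0, signal)
    errors = {}
    for label, predict in predictors.items():
        predicted = predict(occluded)
        # Compare only the occluded region against the true signal.
        errors[label] = float(np.linalg.norm((predicted - signal)[mask]))
    # The label whose model predicts the occluded part most accurately wins.
    return min(errors, key=errors.get)
```

The signal to be identified is thus assigned the label of the neural network that reconstructs its occluded part with the smallest error.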
In one embodiment, the training of the N neural networks includes two steps, training a basic neural network and training controlled neural networks, and finally forms the required prediction models; the general purpose computing processor and the intelligent training computing processor further implement the following steps when executing the computer program:
training a common basic neural network on all acquired signal instances, so that the basic neural network can accurately predict the shielded parts of the signal instances;
keeping the trained parameters of the basic neural network unchanged, and adding new neural network layers for controlling function expression in the basic neural network to form a controlled neural network;
respectively training N controlled neural networks on N signal instance sets with different labels, so that each controlled neural network can only accurately predict the shielded part of signals with a specific label;
the N controlled neural networks serve as the N prediction models, so that N prediction models are obtained.
In one embodiment, for the adding of the new neural network layer for controlling function expression in the basic neural network, the general purpose computing processor and the intelligent training computing processor further implement the following steps when executing the computer program:
adding a new neural network layer in front of each layer of the basic neural network, wherein the number of nodes is the same as that of nodes of a subsequent network layer;
setting the initial state of the newly added neural network layer to be the state that does not influence the mapping relation established by the original basic neural network, namely, the function of the controlled neural network is consistent with the function of the original basic neural network in the untrained initial state;
training the newly added neural network layer parameters on the specific set; as the training of the controlled neural network to predict the shielded part of signal instances with the specific label is continuously strengthened, the ability of the controlled neural network to predict the shielded parts of signal instances with other labels is gradually weakened, so that accurate prediction cannot be achieved.
In one embodiment, for the training of the neural network to predict the shielded parts of signals, the general purpose computing processor and the intelligent training computing processor further implement the following steps when executing the computer program:
using a rectangular mask to frame a portion of the signal instance at random;
setting the parts of the signal instances in the rectangular shielding frames to be zero as input parts of training data;
setting the parts of the signal instances outside the rectangular shielding frames to be zero as the output part of the training data;
forming one piece of training data, which comprises the input part and the output part;
the training neural network fits the input and output portions of the training data.
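The construction of one occluded training pair, as listed in the steps above, can be sketched as follows for a two-dimensional signal instance. The box size (one quarter of each dimension) is an assumption for illustration.

```python
import numpy as np

def make_training_pair(signal_2d, rng=None):
    # Randomly place a rectangular shielding frame over the signal instance.
    if rng is None:
        rng = np.random.default_rng()
    h, w = signal_2d.shape
    bh, bw = h // 4, w // 4                      # hypothetical box size
    top = rng.integers(0, h - bh + 1)
    left = rng.integers(0, w - bw + 1)
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + bh, left:left + bw] = True

    x = signal_2d.copy()
    x[mask] = 0.0          # inside the frame zeroed -> input part
    y = signal_2d.copy()
    y[~mask] = 0.0         # outside the frame zeroed -> output part
    return x, y, mask
```

The neural network is then trained to map `x` to `y`, i.e. to predict the shielded part from the unshielded part.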
It should be clear that, in the embodiments of the present application, the process of the general purpose computing processor and the intelligent training computing processor executing the computer program is consistent with the execution of each step in the above method; for details, reference may be made to the description above.
Example 5: the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring signal examples, wherein each signal with the same physical characteristics corresponds to one label, and N labels are obtained in total;
storing the signals with the same label together to form a data set, and attaching the data set with labels consistent with the signals contained in the data set, wherein N labels exist, and N sets exist;
blocking partial data in the signal instances and training N neural networks, wherein each neural network corresponds to a different label, so that each neural network can accurately predict the blocked part of signal instances with the same label, while when predicting the blocked parts of signal instances with other labels, the similarity between the obtained prediction result signal and the real signal is low; the input data and the output data of the trained neural networks both come from signals with actual physical meaning, and what the neural networks fit and describe is the objective relation between the internal parts of a signal instance;
acquiring a signal to be identified;
blocking part of the signals to be recognized, respectively predicting the blocked part of the signals to be recognized by using N trained neural networks, and storing the predicted signals;
and identifying the signal to be identified as a label corresponding to the neural network with the most accurate prediction of the shielded part.
In one embodiment, the training of the N neural networks includes two steps, training a basic neural network and training controlled neural networks, and finally forms the required prediction models; the computer program stored on the computer-readable storage medium, when executed by the processor, further implements the following steps:
training a common basic neural network on all acquired signal instances, so that the basic neural network can accurately predict the shielded parts of the signal instances;
keeping the trained parameters of the basic neural network unchanged, and adding new neural network layers for controlling function expression in the basic neural network to form a controlled neural network;
respectively training N controlled neural networks on N signal instance sets with different labels, so that each controlled neural network can only accurately predict the shielded part of signals with a specific label;
the N controlled neural networks serve as the N prediction models, so that N prediction models are obtained.
In one embodiment, said adding a new neural network layer for controlling the expression of functions in the underlying neural network, the computer program stored on the computer readable storage medium further realizing the following steps when executed by the processor:
adding a new neural network layer in front of each layer of the basic neural network, wherein the number of nodes is the same as that of nodes of a subsequent network layer;
setting the initial state of the newly added neural network layer to be the state that does not influence the mapping relation established by the original basic neural network, namely, the function of the controlled neural network is consistent with the function of the original basic neural network in the untrained initial state;
training the newly added neural network layer parameters on the specific set; as the training of the controlled neural network to predict the shielded part of signal instances with the specific label is continuously strengthened, the ability of the controlled neural network to predict the shielded parts of signal instances with other labels is gradually weakened, so that accurate prediction cannot be achieved.
In one embodiment, the training neural network predicts an occluded portion of a signal, and the computer program stored on the computer readable storage medium, when executed by the processor, further implements the steps of:
using a rectangular mask to frame a portion of the signal instance at random;
setting the parts of the signal instances in the rectangular shielding frames to be zero as input parts of training data;
setting the parts of the signal instances outside the rectangular shielding frames to be zero as the output part of the training data;
forming one piece of training data, which comprises the input part and the output part;
the training neural network fits the input and output portions of the training data.
It should be clear that, in the embodiments of the present application, the execution of the computer program stored on the computer-readable storage medium by the processor is consistent with the execution of each step in the above method; for details, reference may be made to the description above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A method for interference-free signal identification, the method comprising:
acquiring signal examples, wherein each signal with the same physical characteristics corresponds to one label, and N labels are obtained in total;
storing signals with the same label together to form a data set, wherein N data sets are in total;
blocking partial data in the signal examples, training N neural networks, wherein each neural network corresponds to a label, so that the similarity between a prediction result signal and a real signal is higher when the blocked part of the signal example with the same label is predicted by each neural network, and the similarity between the prediction result signal and the real signal is lower when the blocked part of the signal example with other labels is predicted by each neural network;
acquiring a signal to be identified;
blocking part of the signal to be identified, and predicting the blocked part of the signal to be identified by using N neural networks respectively;
and identifying the signal to be identified as a label corresponding to the neural network with the most accurate prediction of the shielded part.
2. The method of claim 1, wherein the acquiring the signal instance and the acquiring the signal to be identified are acquiring a signal having a physical significance using a signal acquisition device.
3. The method of claim 1, wherein the similarity between the predicted signal and the true signal is obtained by calculating a distance between the predicted signal vector and the true signal vector or a distance between two signal features.
4. The method of claim 1, wherein the training of the N neural networks comprises:
training a common basic neural network on all the acquired signal examples, so that the basic neural network can accurately predict the shielded part of the signal example;
keeping the trained parameters of the basic neural network unchanged, and adding a new neural network layer for controlling function expression in the basic neural network to form a controlled neural network;
respectively training N controlled neural networks on N signal example sets with different labels, so that each controlled neural network can only accurately predict the shielded part of a specific label signal;
the N controlled networks correspond to the N prediction models to obtain N prediction models.
5. The method of claim 4, wherein adding a new neural network layer for controlling function expression in the underlying neural network comprises:
adding a new neural network layer in front of each layer of the basic neural network, wherein the number of nodes is the same as that of nodes of a subsequent network layer;
setting the initial state of the newly added neural network layer to be the state that does not influence the mapping relation established by the original basic neural network, namely, the function of the controlled neural network is consistent with the function of the original basic neural network in the untrained initial state;
on the specific set, newly added neural network layer parameters are trained, and along with continuously strengthening the training of the controlled neural network to predict the shielded part of the specific label signal instance, the capability of the controlled neural network to predict the shielded parts of other label signal instances is gradually weakened, so that accurate prediction cannot be realized.
6. The method of any one of claims 1-5, wherein training the neural network to predict the occluded portion of the signal comprises:
using a rectangular mask to frame a portion of the signal instance at random;
setting the parts of the signal instances in the rectangular shielding frames to be zero as input parts of training data;
setting the parts of the signal instances outside the rectangular shielding frames to be zero as the output part of the training data;
forming one piece of training data, which comprises the input part and the output part;
training the neural network, fitting the input and output portions of the training data.
7. A tamper-resistant signal identifying apparatus, the apparatus comprising:
the acquisition module is used for acquiring a signal example with a label and a signal to be identified, each label corresponds to a specific physical characteristic, the acquisition module outputs the signal example with the label to the training module, and the signal to be identified is output to the classification module;
the training module is used for calling a signal example with a label in the acquisition module, training an intelligent prediction model used in the prediction module and outputting the trained intelligent prediction model parameters to the prediction module;
the prediction module is used for calling the intelligent prediction models trained in the training module, predicting the shielded parts of the signals for the recognition module to call, wherein each intelligent prediction model corresponds to a different label, each intelligent prediction model can accurately predict the shielded parts of the signal examples with the same label, and meanwhile, when the shielded parts of the signal examples with other labels are predicted, the similarity between the obtained prediction result signals and the real signals is low;
the identification module is used for receiving the signal to be identified in the classification module, calling the prediction module to predict the shielded part of the signal to be identified and outputting the prediction result to the classification module;
and the classification module is used for calling the signals to be identified in the acquisition module, transmitting the signals to be identified to the identification module, calling the prediction result in the identification module, and classifying the signals to be identified according to the prediction accuracy of different prediction models in the identification module.
8. The apparatus of claim 7, wherein the prediction module is composed of a controlled neural network submodule, the basic neural network submodule is a basic module composing the controlled neural network submodule, and the basic neural network submodule can accurately predict the occluded part of all signal instances; the controlled neural network sub-module is built on the basic neural network sub-module, the shielded part of the signal instance with the specific label can be accurately predicted, and meanwhile, when the shielded part of the signal instance with other labels is predicted, the similarity between the obtained prediction result signal and the real signal is low.
9. The device of claim 8, wherein the controlled neural network sub-module is built on the basic neural network sub-module; a new neural network layer is added before each layer of the basic neural network sub-module, and the newly added neural network layer has the same number of nodes as the subsequent network layer; the initial state of the newly added neural network layer is set so as not to influence the mapping relation established by the original basic neural network, that is, in the untrained initial state the function of the controlled neural network is consistent with the function of the original basic neural network; by training the newly added neural network layer parameters on a specific set, as the ability of the controlled neural network to predict the shielded part of signal instances with the specific label is continuously strengthened, its ability to predict the shielded parts of signal instances with other labels is gradually weakened.
10. A computer device comprising a memory, a general purpose computing processor and an intelligent training computing processor, the memory storing a computer program that, when executed, performs the steps of the method of any one of claims 1-6.
11. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a general-purpose computing processor and a smart training computing processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010469377.7A 2020-05-28 2020-05-28 Anti-interference signal identification method and device, computer equipment and storage medium Active CN111652108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010469377.7A CN111652108B (en) 2020-05-28 2020-05-28 Anti-interference signal identification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010469377.7A CN111652108B (en) 2020-05-28 2020-05-28 Anti-interference signal identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111652108A true CN111652108A (en) 2020-09-11
CN111652108B CN111652108B (en) 2020-12-29

Family

ID=72346886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010469377.7A Active CN111652108B (en) 2020-05-28 2020-05-28 Anti-interference signal identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111652108B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115529618A (en) * 2022-08-02 2022-12-27 北京邮电大学 Uplink interference simplification identification and performance prediction method for wireless system

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1622122A (en) * 2003-11-28 2005-06-01 佳能株式会社 Method, device and storage medium for character recognition
CN103310222A (en) * 2012-03-15 2013-09-18 欧姆龙株式会社 Image processor, image processing method, control program, and recording medium
US20150127573A1 (en) * 2012-06-18 2015-05-07 A9.Com, Inc. Virtual postage based on image recognition
JP2015138458A (en) * 2014-01-23 2015-07-30 富士ゼロックス株式会社 Information processing system, information processing device and program
CN107491729A (en) * 2017-07-12 2017-12-19 天津大学 The Handwritten Digit Recognition method of convolutional neural networks based on cosine similarity activation
CN107610224A (en) * 2017-09-25 2018-01-19 重庆邮电大学 It is a kind of that algorithm is represented based on the Weakly supervised 3D automotive subjects class with clear and definite occlusion modeling
CN108764013A (en) * 2018-03-28 2018-11-06 中国科学院软件研究所 A kind of automatic Communication Signals Recognition based on end-to-end convolutional neural networks
CN110147788A (en) * 2019-05-27 2019-08-20 东北大学 A kind of metal plate and belt Product labelling character recognition method based on feature enhancing CRNN
CN110366734A (en) * 2017-02-23 2019-10-22 谷歌有限责任公司 Optimization neural network framework
CN110399827A (en) * 2019-07-23 2019-11-01 华北电力大学(保定) A kind of Handwritten Numeral Recognition Method based on convolutional neural networks
CN110623658A (en) * 2019-09-24 2019-12-31 京东方科技集团股份有限公司 Signal processing method, signal processing apparatus, medical device, and storage medium
CN111191639A (en) * 2020-03-12 2020-05-22 上海志听医疗科技有限公司 Vertigo type identification method, device, medium and electronic equipment based on eye shake
CN111476282A (en) * 2020-03-27 2020-07-31 东软集团股份有限公司 Data classification method and device, storage medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407991B (en) * 2016-09-14 2020-02-11 北京市商汤科技开发有限公司 Image attribute recognition method and system and related network training method and system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1622122A (en) * 2003-11-28 2005-06-01 佳能株式会社 Method, device and storage medium for character recognition
CN103310222A (en) * 2012-03-15 2013-09-18 欧姆龙株式会社 Image processor, image processing method, control program, and recording medium
US20150127573A1 (en) * 2012-06-18 2015-05-07 A9.Com, Inc. Virtual postage based on image recognition
JP2015138458A (en) * 2014-01-23 2015-07-30 富士ゼロックス株式会社 Information processing system, information processing device and program
CN110366734A (en) * 2017-02-23 2019-10-22 谷歌有限责任公司 Optimization neural network framework
CN107491729A (en) * 2017-07-12 2017-12-19 天津大学 The Handwritten Digit Recognition method of convolutional neural networks based on cosine similarity activation
CN107610224A (en) * 2017-09-25 2018-01-19 重庆邮电大学 It is a kind of that algorithm is represented based on the Weakly supervised 3D automotive subjects class with clear and definite occlusion modeling
CN108764013A (en) * 2018-03-28 2018-11-06 中国科学院软件研究所 A kind of automatic Communication Signals Recognition based on end-to-end convolutional neural networks
CN110147788A (en) * 2019-05-27 2019-08-20 东北大学 A kind of metal plate and belt Product labelling character recognition method based on feature enhancing CRNN
CN110399827A (en) * 2019-07-23 2019-11-01 华北电力大学(保定) A kind of Handwritten Numeral Recognition Method based on convolutional neural networks
CN110623658A (en) * 2019-09-24 2019-12-31 京东方科技集团股份有限公司 Signal processing method, signal processing apparatus, medical device, and storage medium
CN111191639A (en) * 2020-03-12 2020-05-22 上海志听医疗科技有限公司 Vertigo type identification method, device, medium and electronic equipment based on eye shake
CN111476282A (en) * 2020-03-27 2020-07-31 东软集团股份有限公司 Data classification method and device, storage medium and electronic equipment

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
HSIE., KR et al.: "A neural network model combining uncontrolled and controlled learning methods", Image Recognition and Automation *
HUI-HUANG ZHAO et al.: "Multiple Classifiers Fusion and CNN Feature Extraction for Handwritten Digits Recognition", Granular Computing *
M.C. FAIRHURST et al.: "Generalised approach to the recognition of structurally similar handwritten characters using multiple expert classifiers", IEE Proceedings - Vision, Image and Signal Processing *
张凯兵: "A new handwritten digit recognition method based on multi-feature combination and multi-neural-network classifier ensemble", Journal of Xihua University (Natural Science Edition) *
朱娟 et al.: "An ordered overlapping handwritten digit recognition method based on improved capsule networks", Laser Journal *
李士进 et al.: "Multi-view color face image detection based on multi-classifier combination", Journal of Chinese Computer Systems *
李文杰: "Research on segmentation, feature extraction and classification recognition of single cervical cell images based on multi-classifier fusion", China Master's Theses Full-text Database, Medicine & Health Sciences *
杨丽丽 et al.: "Application of a Sugeno fuzzy integral neural network classifier fusion method to handwritten digit recognition", Industrial Control Computer *
陈柏丞: "Handwritten digit recognition based on multi-neural-network ensemble", China Master's Theses Full-text Database, Information Science & Technology *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115529618A (en) * 2022-08-02 2022-12-27 北京邮电大学 Simplified uplink interference identification and performance prediction method for wireless systems

Also Published As

Publication number Publication date
CN111652108B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN110136103B (en) Medical image interpretation method, device, computer equipment and storage medium
Li et al. A deep learning method for change detection in synthetic aperture radar images
CN111950649B (en) Attention mechanism and capsule network-based low-illumination image classification method
Liu et al. AFNet: Adaptive fusion network for remote sensing image semantic segmentation
CN108986140B (en) Target scale self-adaptive tracking method based on correlation filtering and color detection
Liu et al. Blind image quality assessment by relative gradient statistics and adaboosting neural network
CN106845487B (en) End-to-end license plate identification method
Lei et al. A skin segmentation algorithm based on stacked autoencoders
CN111310775A (en) Data training method and device, terminal equipment and computer readable storage medium
CN110838119B (en) Human face image quality evaluation method, computer device and computer readable storage medium
JP2005352900A (en) Device and method for information processing, and device and method for pattern recognition
CN108573499A (en) Visual target tracking method based on scale adaptation and occlusion detection
CN114155365B (en) Model training method, image processing method and related device
CN110929617A (en) Face-changing composite video detection method and device, electronic equipment and storage medium
Mu et al. Salient object detection using a covariance-based CNN model in low-contrast images
CN113128360A (en) Driver driving behavior detection and identification method based on deep learning
CN110751057A (en) Finger vein verification method and device based on long-time and short-time memory cyclic neural network
JP2005032250A (en) Method for processing face detection, and device for detecting faces in image
CN114444565A (en) Image tampering detection method, terminal device and storage medium
CN113469092A (en) Character recognition model generation method and device, computer equipment and storage medium
CN111652108B (en) Anti-interference signal identification method and device, computer equipment and storage medium
CN116543261A (en) Model training method for image recognition, image recognition method device and medium
CN116681687B (en) Wire detection method and device based on computer vision and computer equipment
CN112488985A (en) Image quality determination method, device and equipment
CN114445916A (en) Living body detection method, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant