CN114818777B - Training method and device for active angle deception jamming recognition model - Google Patents

Training method and device for active angle deception jamming recognition model

Info

Publication number
CN114818777B
Authority
CN
China
Prior art keywords
layer
convolution
calculation result
training
interference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210273471.4A
Other languages
Chinese (zh)
Other versions
CN114818777A (en)
Inventor
刘天冬
苏琪雅
董胜波
李欣致
于沐尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Remote Sensing Equipment
Original Assignee
Beijing Institute of Remote Sensing Equipment
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Remote Sensing Equipment filed Critical Beijing Institute of Remote Sensing Equipment
Priority to CN202210273471.4A priority Critical patent/CN114818777B/en
Publication of CN114818777A publication Critical patent/CN114818777A/en
Application granted granted Critical
Publication of CN114818777B publication Critical patent/CN114818777B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/22Source localisation; Inverse modelling
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiment of the disclosure provides a training method and device for an active angle spoofing interference recognition model. The method comprises the following steps: generating echo data based on an echo generation model of the radio frequency detection system, labeling the echo data, and generating training samples; preprocessing the training samples to generate a time-frequency image training set; and training the active angle spoofing interference recognition model with the time-frequency image training set. The recognition model comprises a plurality of convolution layers and a full connection layer, and the convolution kernels of the plurality of convolution layers decrease in size layer by layer. In this way, the training data can be generated by joint digital simulation, so the method does not depend on a large-scale manually annotated data set and is easy to implement. Applied to interference countermeasures of a radio frequency detection system, the method learns rules from a global view of the data to distinguish interference from targets, improving the recognition rate of active angle spoofing interference.

Description

Training method and device for active angle deception jamming recognition model
Technical Field
The disclosure relates to the field of radar signal processing, and in particular to identifying active angle spoofing interference in radar electronic countermeasures by using deep-learning-based information processing methods from artificial intelligence technology.
Background
Active angle spoofing is one of the main interference problems faced by radio frequency detection systems. In this type of active interference, a decoy uses a jammer to intercept the radar pulse signal and, through sampling, modulation and forwarding, forms an active decoy interference signal that masks the position of the real target.
Because active angle spoofing interference takes many different patterns, its characteristic rules are difficult to describe analytically, so the anti-jamming measures of current radio frequency detection systems focus mainly on front-end radio frequency channel resource countermeasures. The interference and target features are highly similar, and the extractable features are not distinctive. Once the front-end radio frequency channel resource countermeasures fail, back-end signal processing countermeasure algorithms commonly rely on known features such as energy, frequency, time delay and polarization to distinguish and identify the real signal from the interference signal; however, as interference signals grow more complex, a handful of known features is no longer sufficient to separate them effectively, and in this case the real target becomes very difficult to identify and detect.
Artificial intelligence target detection technology, now widely applied, aims to solve target detection, identification and signal sorting in complex scenes that are hard to model, and fits well with the problem of detecting real targets under active angle spoofing interference. Such intelligent algorithms rely on large amounts of labeled training data, have strong fitting capability, and can adapt to fitting problems against various complex backgrounds; they are therefore well suited to the requirements of a radio frequency detection system resisting active angle spoofing interference, and some target-detection-oriented anti-active-jamming methods of this kind already exist.
However, deep learning applied to countering active angle spoofing interference currently has no dedicated public training data set to rely on. Moreover, because so many radio frequency detection system parameters affect the echo signal, no unified standard data set can be formed: echo data from different radio frequency detection system platforms differ greatly, the echo data set must be specially customized, and large-scale data acquisition and manual labeling are difficult to achieve.
Disclosure of Invention
The present disclosure provides a training method, apparatus, device and storage medium for an active angle spoofing interference recognition model.
According to a first aspect of the present disclosure, a training method of an active angle spoofing interference recognition model is provided. The method comprises the following steps: generating echo data based on an echo generating model of the radio frequency detection system, marking, and generating a training sample; preprocessing the training sample to generate a time-frequency image training set; training an active angle deception jamming recognition model by using the time-frequency image training set; the active angle deception jamming recognition model comprises a plurality of convolution layers and a full connection layer; the convolution kernels of the plurality of convolution layers decrease in sequence.
In the aspect and any possible implementation manner described above, there is further provided an implementation in which generating echo data based on the radio frequency detection system echo generation model and annotating the echo data includes:
echo data of different signal to noise ratios, different signal to interference ratios, different position targets and different position interferences are randomly generated based on an echo generation model of the radio frequency detection system; and marking the position information of the target, the position information of the interference and the relative position information of the target and the interference.
In the aspect and any possible implementation manner described above, there is further provided an implementation in which preprocessing the training sample includes:
performing time-frequency processing by pulse compression or time-domain accumulation, and generating time-frequency images from the training samples respectively.
In the aspect and any possible implementation manner described above, there is further provided an implementation in which the plurality of convolution layers includes:
a first convolution layer having a convolution kernel whose time-dimension size is greater than its frequency-dimension size;
a second convolution layer having a convolution kernel size less than the convolution kernel size of the first convolution layer;
a third convolution layer having a convolution kernel size less than the convolution kernel size of the second convolution layer;
a fourth convolution layer having a convolution kernel size that is less than the convolution kernel size of the third convolution layer;
and the full connection layer is used for carrying out weighting processing on the calculation result of the previous convolution layer and outputting the position information of the target and the interference.
In the aspect and any possible implementation manner described above, there is further provided an implementation manner, where one or more convolution layers are disposed between the fourth convolution layer and the full connection layer, and a convolution kernel size of the one or more convolution layers is smaller than a convolution kernel size of the fourth convolution layer.
According to a second aspect of the present disclosure, a training apparatus for an active angle spoofing interference recognition model is provided. The device comprises: the sample generation module is used for generating echo data based on the radio frequency detection system echo generation model and marking the echo data to generate a training sample; the preprocessing module is used for preprocessing the training samples to generate a time-frequency image training set; the training module is used for training the active angle deception jamming recognition model by utilizing the time-frequency image training set; the active angle deception jamming recognition model comprises a plurality of convolution layers and a full connection layer; the convolution kernels of the plurality of convolution layers decrease in sequence.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory and a processor, the memory having stored thereon a computer program, the processor implementing the method as described above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method according to the first and/or second aspects of the present disclosure.
It should be understood that what is described in this summary is not intended to limit the critical or essential features of the embodiments of the disclosure nor to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. For a better understanding of the present disclosure, and without limiting the disclosure thereto, the same or similar reference numerals denote the same or similar elements, wherein:
FIG. 1 illustrates a flow chart of a training method of an active angle spoofing interference recognition model in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of an active angle spoofing interference identification method 200 in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of a training apparatus of an active angle spoofing interference recognition model in accordance with an embodiment of the present disclosure;
fig. 4 shows a schematic block diagram of an electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments in this disclosure without inventive faculty, are intended to be within the scope of this disclosure.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 illustrates a flowchart of a training method 100 of an active angle spoofing interference recognition model in accordance with an embodiment of the present disclosure.
At block 102, echo data is generated based on the echo generation model of the radio frequency detection system and labeled to generate training samples;
in some embodiments, echo data of different signal to noise ratios, different signal to interference ratios, different position targets and different position interferences are randomly generated based on an echo generation model of the radio frequency detection system; and marking the position information of the target, the position information of the interference and the relative position information of the target and the interference.
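As an illustration of this step, the sketch below draws one labelled echo sample with randomized signal-to-noise ratio, signal-to-interference ratio, target position and interference position. The point-scatterer echo model, the parameter ranges and the function name `generate_training_sample` are illustrative assumptions only; they are not the echo generation model of the radio frequency detection system described here.

```python
import numpy as np

def generate_training_sample(rng, n_pulses=64, n_samples=256):
    """Draw one labelled echo sample with randomized SNR, SIR and positions.

    The point-scatterer echo model and the parameter ranges are illustrative
    assumptions, not the echo generation model of the embodiment.
    """
    snr_db = rng.uniform(0.0, 20.0)                 # signal-to-noise ratio
    sir_db = rng.uniform(-10.0, 10.0)               # signal-to-interference ratio
    target_pos = int(rng.integers(0, n_samples))    # target range cell
    jammer_pos = int(rng.integers(0, n_samples))    # decoy (interference) range cell

    # Unit-power complex noise background.
    echo = (rng.standard_normal((n_pulses, n_samples))
            + 1j * rng.standard_normal((n_pulses, n_samples))) / np.sqrt(2)
    target_amp = 10.0 ** (snr_db / 20.0)
    jammer_amp = target_amp / (10.0 ** (sir_db / 20.0))
    echo[:, target_pos] += target_amp               # real target return
    echo[:, jammer_pos] += jammer_amp               # forwarded decoy return (angle spoofing)

    label = {
        "target_pos": target_pos,
        "jammer_pos": jammer_pos,
        "relative_offset": jammer_pos - target_pos,
    }
    return echo, label
```

For example, calling `generate_training_sample(np.random.default_rng(0))` returns one complex echo matrix and the corresponding label dictionary covering the target position, the interference position and their relative position.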
In some embodiments, the radio frequency detection system echo generation model is supplemented according to 3-4 content.
At block 104, preprocessing the training samples to generate a time-frequency image training set;
in some embodiments, the time-frequency image is generated by time-frequency processing using pulse compression or time-domain accumulation.
There are two ways of time-frequency processing: the first forms the time-frequency image by multi-pulse coherent accumulation; the second forms it by single-pulse time-frequency analysis. The horizontal axis of the time-frequency image is the time axis (physically representing the distance of the cell) and the vertical axis is the frequency axis (physically representing the velocity of the cell); a single pixel corresponds to the time (distance) and frequency (velocity) of its cell, and its amplitude represents the energy of the signal.
The time-frequency image training set is a training set formed by the time-frequency image and the corresponding label.
Pulse compression and time-domain accumulation preprocessing make the characteristics of the target and the interference signals more distinct and prominent, which makes it easier for the deep learning network to extract features and enables deep-learning-based recognition of airborne active angle spoofing interference.
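A minimal sketch of the multi-pulse coherent accumulation route is given below: pulse compression along fast time followed by an FFT across pulses produces the time (distance) × frequency (velocity) image described above. The matched-filter form, the normalization and the helper name `to_time_frequency_image` are illustrative assumptions.

```python
import numpy as np

def to_time_frequency_image(echo, tx_pulse):
    """Pulse compression plus coherent accumulation across pulses.

    echo: complex array of shape (n_pulses, n_samples); tx_pulse: transmitted
    waveform used as the matched filter. Filter form and normalization are
    illustrative assumptions.
    """
    # Pulse compression: matched filtering along fast time (the range axis).
    matched_filter = np.conj(tx_pulse[::-1])
    compressed = np.array([np.convolve(p, matched_filter, mode="same") for p in echo])

    # Coherent accumulation: FFT across pulses (slow time) per range cell
    # gives the frequency (velocity) axis of the time-frequency image.
    spectrum = np.fft.fftshift(np.fft.fft(compressed, axis=0), axes=0)

    # Amplitude image: horizontal axis = time (distance), vertical = frequency (velocity).
    img = np.abs(spectrum)
    return img / (img.max() + 1e-12)
```

The single-pulse time-frequency analysis route mentioned above could be sketched analogously with a short-time Fourier transform applied to one pulse.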
In some embodiments, a time-frequency image training set and a time-frequency image testing set may be generated to train and test the active angle spoofing interference recognition model, respectively.
At block 106, the active angle spoofing interference recognition model is trained using the time-frequency image training set;
in some embodiments, the active angle spoofing interference recognition model is a convolutional neural network model, including a plurality of convolution layers and a full connection layer; the convolution kernels of the plurality of convolution layers decrease in size layer by layer.
In some embodiments, the plurality of convolutional layers comprises:
a first convolution layer having a convolution kernel whose time-dimension size is greater than its frequency-dimension size;
a second convolution layer having a convolution kernel size less than the convolution kernel size of the first convolution layer;
a third convolution layer having a convolution kernel size less than the convolution kernel size of the second convolution layer;
a fourth convolution layer having a convolution kernel size that is less than the convolution kernel size of the third convolution layer;
and the full connection layer is used for carrying out weighting processing on the calculation result of the previous convolution layer and outputting the position information of the target and the interference.
In some embodiments, one or more convolution layers are disposed between the fourth convolution layer and the full connection layer, the convolution kernel size of which is smaller than the convolution kernel size of the fourth convolution layer.
In some embodiments, the first convolution layer uses a large convolution kernel (size 7×21) in a long-strip form along the time dimension, so that the receptive field in the time dimension (physically, distance) is 3 times that in the frequency dimension (physically, velocity), increasing the network's ability to extract features from wide-range distance information; convolving the input time-frequency image yields the calculation result data of the first layer. Depending on the actual situation, to further enlarge the receptive field in the time (distance) dimension, the time dimension of the convolution kernel may also be set equal to the time dimension of the input time-frequency image.
The second convolution layer uses several relatively small convolution kernels (size 11×3) to convolve the result of the first layer, fusing edge-detection features of the input image with large-receptive-field information and yielding the calculation result data of the second layer.
The third convolution layer uses several smaller convolution kernels (size 7×7) to convolve the result of the second layer, extracting target shape information and yielding the calculation result of the third layer.
The fourth convolution layer uses depthwise separable convolution kernels (size 5×5) to convolve the result of the third layer, comprehensively exploiting inter-channel information and summarizing the rules of the real target to obtain the calculation result of the fourth layer.
The fifth convolution layer uses depthwise separable convolution kernels (size 3×3) to convolve the result of the fourth layer and obtain the calculation result of the fifth layer.
The sixth convolution layer uses depthwise separable convolution kernels (size 3×3) to convolve the result of the fifth layer and obtain the calculation result of the sixth layer.
The fifth and sixth convolutions abstract and generalize the feature maps produced by the preceding convolution layers, mapping the features into a higher-dimensional space and generating the high-dimensional features required for the seventh layer's prediction of probability distribution information.
The seventh layer is a full connection layer, which weights the result of the sixth layer and outputs the position information of the target and the interference.
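The following PyTorch sketch mirrors this seven-layer structure with the kernel sizes stated above (7×21, 11×3, 7×7, then depthwise separable 5×5, 3×3, 3×3, plus a full connection output). The channel widths, activations, input size and the four-value output head are illustrative assumptions, not values specified by the embodiment.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding="same", groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class AngleSpoofingNet(nn.Module):
    """Seven-layer sketch: kernel sizes follow the description above; channel
    widths, activations, input size and the 4-value output head are assumptions."""
    def __init__(self, in_hw=(64, 256), out_dim=4):
        super().__init__()
        h, w = in_hw  # (frequency/velocity, time/distance) image size
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(7, 21), padding="same"),   # layer 1: long strip along time
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=(11, 3), padding="same"),  # layer 2: edge / large-receptive-field fusion
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=(7, 7), padding="same"),   # layer 3: target shape information
            nn.ReLU(),
            DepthwiseSeparableConv(64, 64, (5, 5)),                  # layer 4: depthwise separable 5x5
            nn.ReLU(),
            DepthwiseSeparableConv(64, 64, (3, 3)),                  # layer 5: depthwise separable 3x3
            nn.ReLU(),
            DepthwiseSeparableConv(64, 64, (3, 3)),                  # layer 6: depthwise separable 3x3
            nn.ReLU(),
        )
        # Layer 7: full connection layer weighting the sixth-layer result and
        # outputting target / interference position information.
        self.head = nn.Linear(64 * h * w, out_dim)

    def forward(self, x):          # x: (batch, 1, frequency, time)
        return self.head(self.features(x).flatten(1))
```

For instance, `AngleSpoofingNet(in_hw=(64, 256))(torch.zeros(1, 1, 64, 256))` returns a tensor of shape `(1, 4)`, interpreted here, as an assumption, as target range, target velocity, interference range and interference velocity values.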
In some embodiments, the active angle spoofing interference identification model is trained using the time-frequency image training set, a loss function is designed, and training is stopped when the loss function of the model is less than a preset threshold.
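As a concrete illustration of this stopping rule, a minimal training-loop sketch is given below. The mean-squared-error loss on position labels, the Adam optimizer, the batch size and the learning rate are illustrative assumptions; only the threshold-based stopping criterion comes from the embodiment.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, images, labels, loss_threshold=1e-3, max_epochs=200, lr=1e-3):
    """Train until the average epoch loss falls below a preset threshold.

    The MSE loss on position labels, the Adam optimizer and the hyperparameters
    are illustrative assumptions; only the stopping rule comes from the embodiment.
    """
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()

    for _ in range(max_epochs):
        epoch_loss = 0.0
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item() * x.size(0)
        epoch_loss /= len(loader.dataset)
        if epoch_loss < loss_threshold:   # preset threshold reached: stop training
            break
    return model
```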
Fig. 2 illustrates a flow chart of an active angle spoofing interference identification method 200 in accordance with an embodiment of the present disclosure.
At block 202, echo data to be identified is acquired;
in some embodiments, the echo data to be identified is preprocessed: time-frequency processing by pulse compression or time-domain accumulation generates a time-frequency image.
At block 204, the echo data to be identified is input into a pre-trained active angle spoofing interference recognition model;
at block 206, the disturbance is identified based on the target and disturbance location information output by the active angle spoofing disturbance identification model.
According to the embodiment of the disclosure, the following technical effects are achieved:
the training data is generated by a joint digital simulation technique, so the method does not depend on a large-scale manually annotated data set and is easy to implement;
applied to interference countermeasures of the radio frequency detection system, the method learns rules from a global view of the data to distinguish interference from targets, improving the recognition rate of active angle spoofing interference.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present disclosure is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present disclosure. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all alternative embodiments, and that the acts and modules referred to are not necessarily required by the present disclosure.
The foregoing is a description of embodiments of the method, and the following further describes embodiments of the present disclosure through examples of apparatus.
Fig. 3 illustrates a block diagram of a training apparatus 300 of an active angle spoofing interference recognition model in accordance with an embodiment of the present disclosure. The device comprises:
the sample generation module 302 is configured to generate echo data based on an echo generation model of the radio frequency detection system and label the echo data to generate a training sample;
the preprocessing module 304 is configured to preprocess the training samples to generate a time-frequency image training set;
the training module 306 is configured to train the active angle spoofing interference recognition model by using the time-frequency image training set; the active angle deception jamming recognition model comprises a plurality of convolution layers and a full connection layer; the convolution kernels of the plurality of convolution layers decrease in sequence.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the described modules may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 4 shows a schematic block diagram of an electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
The device 400 comprises a computing unit 401 that may perform various suitable actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 402 or loaded from a storage unit 408 into a Random Access Memory (RAM) 403. The RAM 403 may also store various programs and data required for the operation of the device 400. The computing unit 401, the ROM 402 and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the various methods and processes described above, e.g., methods 100, 200. For example, in some embodiments, the methods 100, 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. One or more of the steps of the methods 100, 200 described above may be performed when a computer program is loaded into RAM 403 and executed by computing unit 401. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the methods 100, 200 by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server or a server of a distributed system.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (6)

1. A training method of an active angle spoofing interference recognition model, comprising:
generating echo data based on the radio frequency detection system echo generation model and marking, and generating a training sample, wherein generating echo data based on the radio frequency detection system echo generation model and marking comprises the following steps: echo data of different signal to noise ratios, different signal to interference ratios, different position targets and different position interferences are randomly generated based on an echo generation model of the radio frequency detection system; labeling the position information of the target, the position information of the interference and the relative position information of the target and the interference;
preprocessing the training sample to generate a time-frequency image training set;
training an active angle deception jamming recognition model by using the time-frequency image training set; wherein,
the active angle deception jamming recognition model comprises a plurality of convolution layers and a full connection layer; the convolution kernels of the plurality of convolution layers are sequentially reduced;
the plurality of convolution layers includes: a first convolution layer designed with a convolution kernel of size 7×21, making the receptive field in the time dimension 3 times the receptive field in the frequency dimension to increase the feature extraction capability of the neural network for wide-range distance information, the calculation result data of the first layer being obtained by performing convolution calculation on an input time-frequency domain image; a second convolution layer designed with a plurality of convolution kernels of size 11×3, which performs convolution calculation on the calculation result of the first layer to fuse edge-detection features of the input image with large-receptive-field information and obtain the calculation result data of the second layer; a third convolution layer designed with a plurality of convolution kernels of size 7×7, which performs convolution calculation on the calculation result of the second layer to extract target shape information and obtain the calculation result of the third layer; a fourth convolution layer designed with a depthwise separable convolution kernel of size 5×5, which performs convolution calculation on the calculation result of the third layer, comprehensively utilizing inter-channel information and summarizing the rules of the real target to obtain the calculation result of the fourth layer; a fifth convolution layer designed with a depthwise separable convolution kernel of size 3×3, which performs convolution calculation on the calculation result of the fourth layer to obtain the calculation result of the fifth layer; and a sixth convolution layer designed with a depthwise separable convolution kernel of size 3×3, which performs convolution calculation on the calculation result of the fifth layer to obtain the calculation result of the sixth layer, wherein the convolutions of the fifth and sixth layers abstract and generalize the feature layers generated by the preceding convolution network, mapping the features into a higher-dimensional space and generating the high-dimensional features required for the seventh layer's prediction of probability distribution information;
the seventh layer is a full connection layer, the result of the sixth layer is weighted, and the position information of the target and the interference is output.
2. The method of claim 1, wherein preprocessing the training samples comprises:
performing time-frequency processing by pulse compression or time-domain accumulation, and generating time-frequency images from the training samples respectively.
3. The method of claim 1, wherein one or more convolution layers having a convolution kernel size that is smaller than a convolution kernel size of the fourth layer of convolution layers are disposed between the fourth layer of convolution layers and the full connection layer.
4. An active angle spoofing interference recognition model training apparatus comprising:
the sample generation module is used for generating echo data based on the radio frequency detection system echo generation model and labeling, and generating a training sample, wherein the generating the echo data based on the radio frequency detection system echo generation model and labeling comprises the following steps: echo data of different signal to noise ratios, different signal to interference ratios, different position targets and different position interferences are randomly generated based on an echo generation model of the radio frequency detection system; labeling the position information of the target, the position information of the interference and the relative position information of the target and the interference;
the preprocessing module is used for preprocessing the training samples to generate a time-frequency image training set;
the training module is used for training the active angle deception jamming recognition model by utilizing the time-frequency image training set; wherein,
the active angle deception jamming recognition model comprises a plurality of convolution layers and a full connection layer; the convolution kernels of the plurality of convolution layers are sequentially reduced;
the plurality of convolution layers includes: a first convolution layer designed with a convolution kernel of size 7×21, making the receptive field in the time dimension 3 times the receptive field in the frequency dimension to increase the feature extraction capability of the neural network for wide-range distance information, the calculation result data of the first layer being obtained by performing convolution calculation on an input time-frequency domain image; a second convolution layer designed with a plurality of convolution kernels of size 11×3, which performs convolution calculation on the calculation result of the first layer to fuse edge-detection features of the input image with large-receptive-field information and obtain the calculation result data of the second layer; a third convolution layer designed with a plurality of convolution kernels of size 7×7, which performs convolution calculation on the calculation result of the second layer to extract target shape information and obtain the calculation result of the third layer; a fourth convolution layer designed with a depthwise separable convolution kernel of size 5×5, which performs convolution calculation on the calculation result of the third layer, comprehensively utilizing inter-channel information and summarizing the rules of the real target to obtain the calculation result of the fourth layer; a fifth convolution layer designed with a depthwise separable convolution kernel of size 3×3, which performs convolution calculation on the calculation result of the fourth layer to obtain the calculation result of the fifth layer; and a sixth convolution layer designed with a depthwise separable convolution kernel of size 3×3, which performs convolution calculation on the calculation result of the fifth layer to obtain the calculation result of the sixth layer, wherein the convolutions of the fifth and sixth layers abstract and generalize the feature layers generated by the preceding convolution network, mapping the features into a higher-dimensional space and generating the high-dimensional features required for the seventh layer's prediction of probability distribution information;
the seventh layer is a full connection layer, the result of the sixth layer is weighted, and the position information of the target and the interference is output.
5. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-3.
6. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-3.
CN202210273471.4A 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model Active CN114818777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210273471.4A CN114818777B (en) 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210273471.4A CN114818777B (en) 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model

Publications (2)

Publication Number Publication Date
CN114818777A CN114818777A (en) 2022-07-29
CN114818777B (en) 2023-07-21

Family

ID=82531277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210273471.4A Active CN114818777B (en) 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model

Country Status (1)

Country Link
CN (1) CN114818777B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105572643A (en) * 2015-12-22 2016-05-11 河海大学 Radar signal emission method for resisting radio frequency storage forwarding interference
CN111541511A (en) * 2020-04-20 2020-08-14 中国人民解放军海军工程大学 Communication interference signal identification method based on target detection in complex electromagnetic environment
CN112949820A (en) * 2021-01-27 2021-06-11 西安电子科技大学 Cognitive anti-interference target detection method based on generation of countermeasure network
CN113219417A (en) * 2020-10-21 2021-08-06 中国人民解放军空军预警学院 Airborne radar interference type identification method based on support vector machine
CN114201987A (en) * 2021-11-09 2022-03-18 北京理工大学 Active interference identification method based on self-adaptive identification network

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10643153B2 (en) * 2017-04-24 2020-05-05 Virginia Tech Intellectual Properties, Inc. Radio signal identification, identification system learning, and identifier deployment
CN110927706B (en) * 2019-12-10 2022-05-24 电子科技大学 Convolutional neural network-based radar interference detection and identification method
CN113298846B (en) * 2020-11-18 2024-02-09 西北工业大学 Interference intelligent detection method based on time-frequency semantic perception
CN112560596B (en) * 2020-12-01 2023-09-19 中国航天科工集团第二研究院 Radar interference category identification method and system
CN112731309B (en) * 2021-01-06 2022-09-02 哈尔滨工程大学 Active interference identification method based on bilinear efficient neural network
CN112859012B (en) * 2021-01-20 2023-12-01 北京理工大学 Radar spoofing interference identification method based on cascade convolution neural network
CN112949387B (en) * 2021-01-27 2024-02-09 西安电子科技大学 Intelligent anti-interference target detection method based on transfer learning
CN114019467A (en) * 2021-10-25 2022-02-08 哈尔滨工程大学 Radar signal identification and positioning method based on MobileNet model transfer learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105572643A (en) * 2015-12-22 2016-05-11 河海大学 Radar signal emission method for resisting radio frequency storage forwarding interference
CN111541511A (en) * 2020-04-20 2020-08-14 中国人民解放军海军工程大学 Communication interference signal identification method based on target detection in complex electromagnetic environment
CN113219417A (en) * 2020-10-21 2021-08-06 中国人民解放军空军预警学院 Airborne radar interference type identification method based on support vector machine
CN112949820A (en) * 2021-01-27 2021-06-11 西安电子科技大学 Cognitive anti-interference target detection method based on generation of countermeasure network
CN114201987A (en) * 2021-11-09 2022-03-18 北京理工大学 Active interference identification method based on self-adaptive identification network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-target detection in radar images with complex background based on deep learning; Zhou Long; Systems Engineering and Electronics; Vol. 41, No. 06; pp. 1258-1264 *

Also Published As

Publication number Publication date
CN114818777A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
US11436447B2 (en) Target detection
CN111541511B (en) Communication interference signal identification method based on target detection in complex electromagnetic environment
CN112990204B (en) Target detection method and device, electronic equipment and storage medium
CN112949767B (en) Sample image increment, image detection model training and image detection method
KR20220107120A (en) Method and apparatus of training anti-spoofing model, method and apparatus of performing anti-spoofing using anti-spoofing model, electronic device, storage medium, and computer program
CN112990203B (en) Target detection method and device, electronic equipment and storage medium
CN113378770B (en) Gesture recognition method, device, equipment and storage medium
CN113869449A (en) Model training method, image processing method, device, equipment and storage medium
CN113850838A (en) Ship voyage intention acquisition method and device, computer equipment and storage medium
CN113326773A (en) Recognition model training method, recognition method, device, equipment and storage medium
CN112528858A (en) Training method, device, equipment, medium and product of human body posture estimation model
CN114818777B (en) Training method and device for active angle deception jamming recognition model
CN116385789B (en) Image processing method, training device, electronic equipment and storage medium
CN116935368A (en) Deep learning model training method, text line detection method, device and equipment
CN114549961B (en) Target object detection method, device, equipment and storage medium
CN114724144B (en) Text recognition method, training device, training equipment and training medium for model
CN113792849B (en) Training method of character generation model, character generation method, device and equipment
CN114093006A (en) Training method, device and equipment of living human face detection model and storage medium
CN114638359A (en) Method and device for removing neural network backdoor and image recognition
CN113936158A (en) Label matching method and device
CN113361455A (en) Training method of face counterfeit identification model, related device and computer program product
CN116482680B (en) Body interference identification method, device, system and storage medium
CN117292120B (en) Light-weight visible light insulator target detection method and system
CN115147902B (en) Training method, training device and training computer program product for human face living body detection model
CN115496916B (en) Training method of image recognition model, image recognition method and related device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant