CN114818777A - Training method and device for active angle deception jamming recognition model - Google Patents

Training method and device for active angle deception jamming recognition model

Info

Publication number
CN114818777A
CN114818777A (Application CN202210273471.4A)
Authority
CN
China
Prior art keywords
training
convolution
layer
interference
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210273471.4A
Other languages
Chinese (zh)
Other versions
CN114818777B (en)
Inventor
刘天冬
苏琪雅
董胜波
李欣致
于沐尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Remote Sensing Equipment
Original Assignee
Beijing Institute of Remote Sensing Equipment
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Remote Sensing Equipment filed Critical Beijing Institute of Remote Sensing Equipment
Priority to CN202210273471.4A priority Critical patent/CN114818777B/en
Publication of CN114818777A publication Critical patent/CN114818777A/en
Application granted granted Critical
Publication of CN114818777B publication Critical patent/CN114818777B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/22 Source localisation; Inverse modelling
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

Embodiments of the disclosure provide a training method and device for an active angle deception jamming recognition model. The method comprises the following steps: generating echo data based on an echo generation model of a radio frequency detection system, labeling the echo data, and generating training samples; preprocessing the training samples to generate a time-frequency image training set; and training an active angle deception jamming recognition model with the time-frequency image training set. The active angle deception jamming recognition model comprises a plurality of convolutional layers and a fully connected layer, and the convolution kernels of the convolutional layers decrease in size in sequence. In this way, the training data are generated by a joint digital simulation technique, so the method does not depend on a large-scale manually labeled data set and is easy to implement. Applied to the interference countermeasures of a radio frequency detection system, the method learns patterns from a global view of the data, discriminates between interference and targets, and improves the recognition rate of active angle deception jamming.

Description

Training method and device for active angle deception jamming recognition model
Technical Field
The disclosure relates to the field of radar signal processing, and in particular to the identification of active angle deception jamming in radar electronic countermeasures by means of deep-learning information processing methods from artificial intelligence.
Background
Active angle deception jamming is one of the main interference problems faced by radio frequency detection systems. This type of active interference uses a decoy-borne jammer to intercept the radar pulse signal and, through sampling, modulation and retransmission, forms an active false-target interference signal so as to mask the position of the real target.
Because active angle deception jamming patterns are rich and their characteristic rules are difficult to describe analytically, the anti-jamming measures of current radio frequency detection systems are concentrated mainly on front-end radio frequency channel resource countermeasures. The similarity between the interference and the target characteristics is high. Once the front-end radio frequency channel resource countermeasure fails because no distinctive features can be extracted, the back-end signal processing countermeasure algorithms generally rely on known features such as energy, frequency, time delay and polarization to distinguish real signals from interference signals; as the complexity of the interference signals gradually increases, however, a handful of known features can no longer separate them effectively, and recognizing and detecting the real target becomes very difficult.
Artificial intelligence target detection technology, which is already widely applied, aims at target detection, recognition and signal sorting in complex scenes that are difficult to model, and matches well with real-target detection in active angle deception jamming scenes. Such intelligent algorithms rely on a large amount of labeled training data, have strong fitting capability, and can adapt to fitting problems under various complex backgrounds; they therefore suit the requirement of a radio frequency detection system to counter active angle deception jamming, and some anti-active-jamming methods oriented towards target detection already exist.
However, deep learning techniques applied to the scenario of countering active angle deception jamming currently lack a dedicated, public training data set for support. At the same time, because too many parameters of the radio frequency detection system influence the echo signal, a unified standard data set cannot be formed: echo data from different radio frequency detection platforms differ greatly, the echo data set must be specially customized, and large-scale data acquisition and manual labeling are difficult to achieve.
Disclosure of Invention
The disclosure provides a training method, a device, equipment and a storage medium for an active angle deception jamming recognition model.
According to a first aspect of the present disclosure, a training method for an active angle deception jamming recognition model is provided. The method comprises the following steps: generating echo data based on an echo generation model of a radio frequency detection system, labeling the echo data, and generating training samples; preprocessing the training samples to generate a time-frequency image training set; and training an active angle deception jamming recognition model with the time-frequency image training set. The active angle deception jamming recognition model comprises a plurality of convolutional layers and a fully connected layer, and the convolution kernels of the plurality of convolutional layers decrease in size in sequence.
In the above aspect and any possible implementation, an implementation is further provided in which generating and labeling echo data based on an echo generation model of a radio frequency detection system comprises:
randomly generating echo data with different signal-to-noise ratios, different signal-to-interference ratios, different target positions and different interference positions based on the echo generation model of the radio frequency detection system; and labeling the position information of the target, the position information of the interference, and the relative position information between the target and the interference.
In the above aspect and any possible implementation, an implementation is further provided in which preprocessing the training samples comprises:
performing time-frequency processing by pulse compression or time-domain accumulation, and generating a time-frequency image from each of the training samples.
In the above aspect and any possible implementation, an implementation is further provided in which the plurality of convolutional layers comprises:
a first convolutional layer, in whose convolution kernel size the time-dimension parameter is larger than the frequency-dimension parameter;
a second convolutional layer, whose convolution kernel size is smaller than that of the first convolutional layer;
a third convolutional layer, whose convolution kernel size is smaller than that of the second convolutional layer;
a fourth convolutional layer, whose convolution kernel size is smaller than that of the third convolutional layer;
and the fully connected layer, which weights the calculation result of the preceding convolutional layer and outputs the position information of the target and the interference.
In the above aspect and any possible implementation, an implementation is further provided in which one or more convolutional layers are arranged between the fourth convolutional layer and the fully connected layer, each with a convolution kernel size smaller than that of the fourth convolutional layer.
According to a second aspect of the present disclosure, a training apparatus for an active angle deception jamming recognition model is provided. The apparatus comprises: a sample generation module for generating echo data based on an echo generation model of a radio frequency detection system, labeling the echo data, and generating training samples; a preprocessing module for preprocessing the training samples to generate a time-frequency image training set; and a training module for training an active angle deception jamming recognition model with the time-frequency image training set. The active angle deception jamming recognition model comprises a plurality of convolutional layers and a fully connected layer, and the convolution kernels of the plurality of convolutional layers decrease in size in sequence.
According to a third aspect of the present disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as according to the first and/or second aspects of the present disclosure.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the present disclosure, and are not intended to limit the disclosure thereto, and the same or similar reference numerals will be used to indicate the same or similar elements, where:
FIG. 1 illustrates a flow chart of a method of training an active angle spoofing interference recognition model in accordance with an embodiment of the present disclosure;
fig. 2 illustrates a flow diagram of an active angle spoofing interference identification method 200 in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of a training apparatus for an active angle spoofing interference recognition model in accordance with an embodiment of the present disclosure;
FIG. 4 shows a schematic block diagram of an electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Fig. 1 shows a flow diagram of a method 100 for training an active angle spoofing interference recognition model according to an embodiment of the disclosure.
At block 102, echo data are generated and labeled based on an echo generation model of a radio frequency detection system, and training samples are generated;
in some embodiments, echo data of different signal-to-noise ratios, different signal-to-interference ratios, different position targets, and different position interferences are randomly generated based on an echo generation model of the radio frequency detection system; and marking the position information of the target, the position information of the interference and the relative position information of the target and the interference.
In some embodiments, the echo generation model of the radio frequency detection system is supplemented according to the content of 3-4.
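As an illustration of this step, the following is a minimal sketch of how labeled echo samples of this kind might be simulated. The single-scatterer pulse/Doppler model, the parameter ranges and the label format below are assumptions made only for illustration; they are not the echo generation model of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_echo(n_pulses=64, n_range=256):
    """Hypothetical training sample: a target echo plus an active false-target
    (angle-deception) echo embedded in complex Gaussian noise."""
    snr_db  = rng.uniform(0, 20)              # random signal-to-noise ratio
    sir_db  = rng.uniform(-10, 10)            # random signal-to-interference ratio
    tgt_rng = int(rng.integers(0, n_range))   # random target range cell
    jam_rng = int(rng.integers(0, n_range))   # random interference range cell
    tgt_dop = rng.uniform(-0.5, 0.5)          # normalized target Doppler
    jam_dop = rng.uniform(-0.5, 0.5)          # normalized interference Doppler

    noise = (rng.standard_normal((n_pulses, n_range))
             + 1j * rng.standard_normal((n_pulses, n_range))) / np.sqrt(2)
    echo = noise.copy()

    t = np.arange(n_pulses)
    tgt_amp = 10 ** (snr_db / 20)             # target amplitude from SNR
    jam_amp = tgt_amp / 10 ** (sir_db / 20)   # interference amplitude from SIR
    echo[:, tgt_rng] += tgt_amp * np.exp(2j * np.pi * tgt_dop * t)
    echo[:, jam_rng] += jam_amp * np.exp(2j * np.pi * jam_dop * t)

    # label: target position, interference position, and their relative position
    label = {"target": (tgt_rng, tgt_dop),
             "interference": (jam_rng, jam_dop),
             "relative": (jam_rng - tgt_rng, jam_dop - tgt_dop)}
    return echo, label
```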
At block 104, preprocessing the training samples to generate a time-frequency image training set;
in some embodiments, the time-frequency processing is performed using pulse compression or time-domain accumulation to generate a time-frequency image.
There are two ways to perform the time-frequency processing: first, forming a time-frequency image by multi-pulse coherent accumulation; second, forming a time-frequency image by single-pulse time-frequency analysis. The horizontal axis of the time-frequency image is the time axis (its physical meaning is the range of the pixel cell) and the vertical axis is the frequency axis (its physical meaning is the velocity of the pixel cell); a single pixel represents the time (range) and frequency (velocity) of the corresponding cell, and its amplitude represents the energy of the signal.
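As a rough illustration of the multi-pulse route described above, the sketch below forms such an image by optional matched-filter pulse compression along the fast-time (range) axis followed by coherent accumulation (an FFT across pulses). The helper name and the logarithmic amplitude scaling are assumptions for illustration.

```python
import numpy as np

def to_time_frequency_image(echo_cube, pulse_replica=None):
    """Hypothetical preprocessing: pulse compression (if a transmit replica is
    given) followed by multi-pulse coherent accumulation."""
    x = echo_cube  # complex array of shape (n_pulses, n_range_samples)
    if pulse_replica is not None:
        # matched filtering along the range (fast-time) axis
        x = np.apply_along_axis(
            lambda row: np.convolve(row, np.conj(pulse_replica[::-1]), mode="same"),
            axis=1, arr=x)
    # coherent accumulation: FFT across pulses gives the frequency (velocity) axis
    tf = np.fft.fftshift(np.fft.fft(x, axis=0), axes=0)
    # amplitude image: horizontal axis = time (range), vertical axis = frequency (velocity)
    return 20 * np.log10(np.abs(tf) + 1e-12)
```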
The time-frequency image training set is formed by time-frequency images and corresponding labels.
Through the data preprocessing of pulse compression and time-domain accumulation, the features of the target and of the interference signal become more prominent, which makes it easier for the deep learning network to extract them and thus to realize deep-learning-based identification of airborne active angle deception jamming.
In some embodiments, a time-frequency image training set and a time-frequency image testing set may be generated to train and test the active angle spoofing interference recognition model, respectively.
At block 106, training an active angle spoofing interference recognition model using the time-frequency image training set;
In some embodiments, the active angle deception jamming recognition model is a convolutional neural network model comprising a plurality of convolutional layers and a fully connected layer; the convolution kernels of the convolutional layers decrease in size in sequence.
In some embodiments, the plurality of convolutional layers comprises:
a first convolutional layer, in whose convolution kernel size the time-dimension parameter is larger than the frequency-dimension parameter;
a second convolutional layer, whose convolution kernel size is smaller than that of the first convolutional layer;
a third convolutional layer, whose convolution kernel size is smaller than that of the second convolutional layer;
a fourth convolutional layer, whose convolution kernel size is smaller than that of the third convolutional layer;
and the fully connected layer, which weights the calculation result of the preceding convolutional layer and outputs the position information of the target and the interference.
In some embodiments, one or more convolutional layers are arranged between the fourth convolutional layer and the fully connected layer, each with a convolution kernel size smaller than that of the fourth convolutional layer.
In some embodiments, the first convolutional layer uses a large-range convolution kernel (size 7 × 21) in a time-dimension strip form, so that the receptive field in the time dimension (whose physical meaning is range) is three times the receptive field in the frequency dimension (whose physical meaning is velocity). This increases the neural network's ability to extract features over a wide range extent, and the first-layer result is obtained by convolving the input time-frequency image. Depending on the actual situation, in order to further enlarge the time-dimension (range) feature receptive field, the time-dimension size of the convolution kernel can also be set equal to the time-dimension size of the input time-frequency image.
The second convolutional layer uses several relatively small convolution kernels (size 11 × 3) to convolve the first-layer result, fusing edge detection of the input-image features with the large-receptive-field information and yielding the second-layer result.
The third convolutional layer uses several smaller convolution kernels (size 7 × 7) to convolve the second-layer result, extracting the target shape information and yielding the third-layer result.
The fourth convolutional layer uses depthwise separable convolution kernels (size 5 × 5) to convolve the third-layer result, summarizing the regularities of the real target by comprehensively using information across channels and yielding the fourth-layer result.
The fifth convolutional layer uses depthwise separable convolution kernels (size 3 × 3) to convolve the fourth-layer result, yielding the fifth-layer result.
The sixth convolutional layer uses depthwise separable convolution kernels (size 3 × 3) to convolve the fifth-layer result, yielding the sixth-layer result.
The fifth and sixth convolutional layers abstract and generalize the feature maps generated by the preceding convolutional layers, mapping the features into a high-dimensional space and producing the high-dimensional features required for the seventh layer to predict the probability distribution information.
The seventh layer is a fully connected layer that weights the sixth-layer result and outputs the position information of the target and the interference.
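The following PyTorch sketch shows one possible reading of this seven-layer structure. The kernel sizes follow the description above (7 × 21, 11 × 3, 7 × 7, then depthwise separable 5 × 5, 3 × 3 and 3 × 3, followed by a fully connected layer); the channel widths, pooling, activations and the four-value output head (target range/velocity and interference range/velocity) are illustrative assumptions, and the input image is assumed to be laid out as (frequency, time).

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class AngleDeceptionNet(nn.Module):
    """Sketch of the seven-layer recognizer; kernel sizes follow the text,
    everything else is an assumption."""
    def __init__(self, n_outputs=4):
        super().__init__()
        self.features = nn.Sequential(
            # layer 1: strip kernel, time (range) receptive field 3x the frequency one
            nn.Conv2d(1, 16, kernel_size=(7, 21), padding=(3, 10)), nn.ReLU(),
            nn.MaxPool2d(2),
            # layer 2: edge detection fused with large-receptive-field information
            nn.Conv2d(16, 32, kernel_size=(11, 3), padding=(5, 1)), nn.ReLU(),
            nn.MaxPool2d(2),
            # layer 3: extraction of target shape information
            nn.Conv2d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
            # layers 4-6: depthwise separable kernels, cross-channel fusion
            DepthwiseSeparableConv(64, 64, 5), nn.ReLU(),
            DepthwiseSeparableConv(64, 64, 3), nn.ReLU(),
            DepthwiseSeparableConv(64, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # layer 7: fully connected, outputs target and interference position info
        self.head = nn.Linear(64 * 4 * 4, n_outputs)

    def forward(self, x):          # x: (batch, 1, freq_bins, time_bins)
        return self.head(self.features(x).flatten(1))
```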
In some embodiments, the time-frequency image training set is used to train the active angle deception jamming recognition model with a designed loss function, and training stops once the loss of the model falls below a preset threshold.
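A minimal training loop consistent with this stopping rule might look as follows; the smooth-L1 regression loss, the Adam optimizer, the batch size and the threshold value are assumptions, since the text only specifies that training stops once the loss falls below a preset threshold.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, images, targets, loss_threshold=1e-3, max_epochs=100, lr=1e-3):
    """Train until the epoch loss drops below the preset threshold (assumed setup)."""
    loader = DataLoader(TensorDataset(images, targets), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.SmoothL1Loss()   # assumed regression loss on position info

    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item() * x.size(0)
        epoch_loss /= len(loader.dataset)
        print(f"epoch {epoch}: loss = {epoch_loss:.6f}")
        if epoch_loss < loss_threshold:   # stop once the loss is below the threshold
            break
    return model
```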
Fig. 2 illustrates a flow diagram of an active angle spoofing interference identification method 200 in accordance with an embodiment of the disclosure.
At block 202, echo data to be identified is acquired;
In some embodiments, the echo data to be identified are preprocessed: time-frequency processing is performed by pulse compression or time-domain accumulation to generate a time-frequency image.
At block 204, the echo data to be identified are input into a pre-trained active angle deception jamming recognition model;
at block 206, the interference is identified based on the target and interference location information output by the active angle spoof interference identification model.
According to the embodiment of the disclosure, the following technical effects are achieved:
Training data are generated by a joint digital simulation technique, so the method does not depend on a large-scale manually labeled data set and is easy to implement;
applied to the interference countermeasures of a radio frequency detection system, the method learns patterns from a global view of the data, discriminates between interference and targets, and improves the recognition rate of active angle deception jamming.
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 3 shows a block diagram of a training apparatus 300 for an active angle spoofing interference recognition model according to an embodiment of the present disclosure. The device includes:
a sample generation module 302, configured to generate echo data based on an echo generation model of a radio frequency detection system, perform labeling, and generate a training sample;
a preprocessing module 304, configured to preprocess the training samples to generate a time-frequency image training set;
a training module 306, configured to train an active angle spoofing interference recognition model by using the time-frequency image training set; the active angle deception jamming identification model comprises a plurality of convolution layers and full-connection layers; the convolution kernels of the plurality of convolution layers decrease in sequence.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the modules described above may refer to the corresponding processes in the foregoing method embodiments and is not repeated here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 4 shows a schematic block diagram of an electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
The device 400 comprises a computing unit 401, which may perform various suitable actions and processes in accordance with a computer program stored in a read-only memory (ROM) 402 or a computer program loaded from a storage unit 408 into a random access memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, the ROM 402 and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 401 performs the various methods and processes described above, such as the methods 100 and 200. For example, in some embodiments, the methods 100 and 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the methods 100 and 200 described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the methods 100 and 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server or a server of a distributed system.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (8)

1. A training method of an active angle spoofing interference recognition model comprises the following steps:
generating echo data based on an echo generation model of a radio frequency detection system, labeling the echo data, and generating a training sample;
preprocessing the training samples to generate a time-frequency image training set;
training an active angle deception jamming recognition model by utilizing the time-frequency image training set; wherein,
the active angle deception jamming identification model comprises a plurality of convolution layers and full-connection layers; the convolution kernels of the plurality of convolution layers decrease in sequence.
2. The method of claim 1, wherein generating and labeling echo data based on a radio frequency detection system echo generation model comprises:
randomly generating echo data with different signal-to-noise ratios, different signal-to-interference ratios, different position targets and different position interferences based on an echo generation model of a radio frequency detection system; and marking the position information of the target, the position information of the interference and the relative position information of the target and the interference.
3. The method of claim 1, wherein preprocessing the training samples comprises:
and performing time-frequency processing by adopting pulse compression or time-domain accumulation, and respectively generating time-frequency images by using the training samples.
4. The method of claim 1, wherein the plurality of convolutional layers comprises:
a first convolution layer, wherein in the size of a convolution kernel function, a time dimension parameter is larger than a frequency dimension parameter;
a second convolution layer having a convolution kernel function size smaller than that of the first convolution layer;
a third convolutional layer having a convolutional kernel function size smaller than that of the second convolutional layer;
a fourth convolution layer having a convolution kernel function size smaller than that of the third convolution layer;
and the fully connected layer is used for weighting the calculation result of the previous convolutional layer and outputting the position information of the target and the interference.
5. The method of claim 4, wherein one or more convolutional layers are disposed between the fourth convolutional layer and the fully-connected layer, and have a convolutional kernel size smaller than a convolutional kernel function size of the fourth convolutional layer.
6. A training device for an active angle spoofing interference recognition model comprises:
the sample generation module is used for generating echo data based on an echo generation model of the radio frequency detection system, labeling the echo data and generating a training sample;
the preprocessing module is used for preprocessing the training samples to generate a time-frequency image training set;
the training module is used for training an active angle deception jamming recognition model by utilizing the time-frequency image training set; wherein,
the active angle deception jamming identification model comprises a plurality of convolution layers and full-connection layers; the convolution kernels of the plurality of convolution layers decrease in sequence.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
8. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202210273471.4A 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model Active CN114818777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210273471.4A CN114818777B (en) 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210273471.4A CN114818777B (en) 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model

Publications (2)

Publication Number Publication Date
CN114818777A true CN114818777A (en) 2022-07-29
CN114818777B CN114818777B (en) 2023-07-21

Family

ID=82531277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210273471.4A Active CN114818777B (en) 2022-03-18 2022-03-18 Training method and device for active angle deception jamming recognition model

Country Status (1)

Country Link
CN (1) CN114818777B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105572643A (en) * 2015-12-22 2016-05-11 河海大学 Radar signal emission method for resisting radio frequency storage forwarding interference
US20180308013A1 (en) * 2017-04-24 2018-10-25 Virginia Tech Intellectual Properties, Inc. Radio signal identification, identification system learning, and identifier deployment
CN110927706A (en) * 2019-12-10 2020-03-27 电子科技大学 Convolutional neural network-based radar interference detection and identification method
CN111541511A (en) * 2020-04-20 2020-08-14 中国人民解放军海军工程大学 Communication interference signal identification method based on target detection in complex electromagnetic environment
CN112560596A (en) * 2020-12-01 2021-03-26 中国航天科工集团第二研究院 Radar interference category identification method and system
CN112731309A (en) * 2021-01-06 2021-04-30 哈尔滨工程大学 Active interference identification method based on bilinear efficient neural network
CN112859012A (en) * 2021-01-20 2021-05-28 北京理工大学 Radar deception jamming identification method based on cascade convolution neural network
CN112949820A (en) * 2021-01-27 2021-06-11 西安电子科技大学 Cognitive anti-interference target detection method based on generation of countermeasure network
CN112949387A (en) * 2021-01-27 2021-06-11 西安电子科技大学 Intelligent anti-interference target detection method based on transfer learning
CN113219417A (en) * 2020-10-21 2021-08-06 中国人民解放军空军预警学院 Airborne radar interference type identification method based on support vector machine
CN113298846A (en) * 2020-11-18 2021-08-24 西北工业大学 Intelligent interference detection method based on time-frequency semantic perception
CN114019467A (en) * 2021-10-25 2022-02-08 哈尔滨工程大学 Radar signal identification and positioning method based on MobileNet model transfer learning
CN114201987A (en) * 2021-11-09 2022-03-18 北京理工大学 Active interference identification method based on self-adaptive identification network

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105572643A (en) * 2015-12-22 2016-05-11 河海大学 Radar signal emission method for resisting radio frequency storage forwarding interference
US20180308013A1 (en) * 2017-04-24 2018-10-25 Virginia Tech Intellectual Properties, Inc. Radio signal identification, identification system learning, and identifier deployment
CN110927706A (en) * 2019-12-10 2020-03-27 电子科技大学 Convolutional neural network-based radar interference detection and identification method
CN111541511A (en) * 2020-04-20 2020-08-14 中国人民解放军海军工程大学 Communication interference signal identification method based on target detection in complex electromagnetic environment
CN113219417A (en) * 2020-10-21 2021-08-06 中国人民解放军空军预警学院 Airborne radar interference type identification method based on support vector machine
CN113298846A (en) * 2020-11-18 2021-08-24 西北工业大学 Intelligent interference detection method based on time-frequency semantic perception
CN112560596A (en) * 2020-12-01 2021-03-26 中国航天科工集团第二研究院 Radar interference category identification method and system
CN112731309A (en) * 2021-01-06 2021-04-30 哈尔滨工程大学 Active interference identification method based on bilinear efficient neural network
CN112859012A (en) * 2021-01-20 2021-05-28 北京理工大学 Radar deception jamming identification method based on cascade convolution neural network
CN112949820A (en) * 2021-01-27 2021-06-11 西安电子科技大学 Cognitive anti-interference target detection method based on generation of countermeasure network
CN112949387A (en) * 2021-01-27 2021-06-11 西安电子科技大学 Intelligent anti-interference target detection method based on transfer learning
CN114019467A (en) * 2021-10-25 2022-02-08 哈尔滨工程大学 Radar signal identification and positioning method based on MobileNet model transfer learning
CN114201987A (en) * 2021-11-09 2022-03-18 北京理工大学 Active interference identification method based on self-adaptive identification network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YU JUNFEI: "Barrage Jamming Detection and Classification Based on Convolutional Neural Network for Synthetic Aperture Radar", pages 4583-4586 *
ZHOU LONG: "Multi-target detection in radar images with complex background based on deep learning", Systems Engineering and Electronics, vol. 41, no. 06, pages 1258-1264 *
RUAN HUAILIN: "Active deception jamming recognition based on stacked sparse autoencoder", pages 62-67 *

Also Published As

Publication number Publication date
CN114818777B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN112990204B (en) Target detection method and device, electronic equipment and storage medium
CN112966742A (en) Model training method, target detection method and device and electronic equipment
CN105117736B (en) Classification of Polarimetric SAR Image method based on sparse depth heap stack network
KR20220107120A (en) Method and apparatus of training anti-spoofing model, method and apparatus of performing anti-spoofing using anti-spoofing model, electronic device, storage medium, and computer program
CN113971751A (en) Training feature extraction model, and method and device for detecting similar images
CN112949767B (en) Sample image increment, image detection model training and image detection method
CN112989995B (en) Text detection method and device and electronic equipment
CN113869449A (en) Model training method, image processing method, device, equipment and storage medium
CN113850838A (en) Ship voyage intention acquisition method and device, computer equipment and storage medium
CN113326773A (en) Recognition model training method, recognition method, device, equipment and storage medium
US20230056784A1 (en) Method for Detecting Obstacle, Electronic Device, and Storage Medium
CN112580666A (en) Image feature extraction method, training method, device, electronic equipment and medium
CN113569740A (en) Video recognition model training method and device and video recognition method and device
CN115343704A (en) Gesture recognition method of FMCW millimeter wave radar based on multi-task learning
CN114169425B (en) Training target tracking model and target tracking method and device
CN116482680B (en) Body interference identification method, device, system and storage medium
CN113269280A (en) Text detection method and device, electronic equipment and computer readable storage medium
CN114818777B (en) Training method and device for active angle deception jamming recognition model
CN116935368A (en) Deep learning model training method, text line detection method, device and equipment
CN114724144B (en) Text recognition method, training device, training equipment and training medium for model
CN114549961B (en) Target object detection method, device, equipment and storage medium
CN113361455B (en) Training method of face counterfeit identification model, related device and computer program product
CN114638359A (en) Method and device for removing neural network backdoor and image recognition
CN114093006A (en) Training method, device and equipment of living human face detection model and storage medium
CN113989720A (en) Target detection method, training method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant