CN109614952B - Target signal detection and identification method based on waterfall plot - Google Patents
- Publication number
- CN109614952B (application CN201811612078.3A)
- Authority
- CN
- China
- Prior art keywords
- waterfall
- target
- target signal
- identification method
- signal detection
- Prior art date 2018-12-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a target signal detection and identification method based on a waterfall plot, which comprises the following steps: preprocessing original data to obtain a waterfall plot; performing semantic segmentation on the waterfall plot by using a U-net semantic segmentation method to obtain an initial target area; post-processing the initial target area through convex hull detection in the OpenCV image processing library, removing adhesions between segmented target areas, and obtaining an accurate target area; and calculating the time and frequency position of the target signal in the waterfall plot according to the accurate target area. The invention achieves a good target detection effect in a complex electromagnetic environment and has good generalization capability.
Description
Technical Field
The invention belongs to the technical field of signal detection, and particularly relates to a target signal detection and identification method based on a waterfall plot.
Background
Signal detection means that, when the receiving end receives an interfered signal, the existence of a signal is judged according to a certain criterion by using information such as the signal probability and the noise power. The receiving end estimates certain parameters of the transmitted signal (such as time and frequency) as accurately as possible from the received interfered signal sequence, detects the target signal within the time and frequency range, and provides the time and frequency information of the target signal. The waterfall graph is a two-dimensional image that maps the received signal onto the dimensions of time and frequency. Communication signal identification is an important research direction in cognitive radio and software-defined radio; improving the identification capability of communication signals through computer vision techniques is of great significance for enhancing communication technology reserves, improving the competitiveness of the communication industry, and strengthening national defense science and technology.
Existing signal detection and identification methods cannot achieve high-precision detection in a complex signal environment; their generalization capability is poor, and they cannot detect and identify complex signals.
Disclosure of Invention
In order to solve the above problems, the invention provides a target signal detection and identification method based on a waterfall plot, which can effectively detect and identify target signals in a complex signal environment and has good generalization capability.
In order to achieve this purpose, the invention adopts the following technical solution: a target signal detection and identification method based on a waterfall plot, comprising the following steps:
S100, preprocessing original data to obtain a waterfall graph;
S200, performing semantic segmentation on the waterfall graph by using a U-net semantic segmentation method to obtain an initial target area;
S300, post-processing the initial target area through convex hull detection in the OpenCV image processing library, removing adhesions between segmented target areas, and obtaining an accurate target area;
and S400, acquiring the time and frequency position of the target signal in the waterfall graph according to the accurate target area.
Further, in step S100, source data are synthesized by randomly resampling a given background and given signal types, simulating the two-dimensional time-frequency diagram obtained by a signal processing technique to serve as the waterfall graph.
Furthermore, because an obvious white line exists in a fixed area of the waterfall graph background, the white line in the generated waterfall graph background is removed by replacing the pixel values at the white line with pixel values from a one-dimensional transverse neighborhood, the sampling probability following a Gaussian distribution.
Furthermore, signals of various types in the waterfall diagram are repeatedly sampled with replacement and, after a slight size change, randomly composited onto the cropped background picture, ensuring that the different signals finally generated are disjoint.
Further, in step S200, the waterfall graph is segmented into the signal and the background by using a U-net semantic segmentation network, where the U-net semantic segmentation network includes a contraction path and an expansion path.
Further, the U-net semantic segmentation network includes 5 consecutive 3 × 3 convolution kernels, a 2 × 2 max-pooling kernel for downsampling, and a 1 × 1 convolution kernel.
Further, the method for segmenting the signal and the background of the waterfall graph by utilizing the U-net semantic segmentation network comprises the following steps:
in the expansion path, each step comprises up-sampling the waterfall graph;
then performing convolution operations using 5 consecutive 3 × 3 convolution kernels, where the 3 × 3 convolutions use a ReLU activation function;
after each convolution operation, aggregating local features with a 2 × 2 max-pooling layer;
and performing convolution operation on the last layer of the neural network by using a 1 x 1 convolution kernel, and mapping each feature vector to an output layer of the network.
Further, the U-net semantic segmentation network model is constructed and trained using the Keras deep learning framework.
Further, in step S300, the post-processing of the initial target region through convex hull detection in the OpenCV image processing library comprises the steps of:
S301, binarizing the segmentation mask of the initial target area;
S302, determining different target signal entities through connectivity among pixels, removing regions whose target signal area is too small, and determining adhered target regions according to a decision threshold;
S303, after the mask of an adhered target region is closed, performing convexity-defect detection on its convex hull, obtaining the coordinates of the 6 points at the two concave positions, and determining the 2 of the four vertices of the circumscribed rectangle that are closest to the convex hull;
S304, combining the 8 points into two rectangles, thereby completing the splitting of the adhered region and the separation of the adhered targets.
Further, the decision threshold is calculated from the following quantities: S_hull, the area of the convex hull; S_rect, the area of the circumscribed rectangle; H_avg, the average target height; and H_target, the target height.
The beneficial effects of the technical scheme are as follows:
after the communication signals are mapped to the image space, the target signal detection and identification are realized by utilizing artificial intelligence and a machine learning algorithm; the target signal can be well detected and identified in a complex electromagnetic environment, and the generalization capability is good;
the method extracts texture, shape and structure information of a target signal from a two-dimensional graph and an image of a waterfall graph by simulating the process of human processing visual information, describes and understands local and global features of the image, utilizes a deep convolution network to carry out automatic feature extraction on original data by utilizing the strong feature extraction capability of the image, and identifies in a feature space; the feature analysis and recognition of the two-dimensional data can be well completed, and the detection and recognition of the specific shape graph and the image are realized.
Drawings
FIG. 1 is a schematic flow chart of a target signal detection and identification method based on a waterfall chart according to the present invention;
FIG. 2 is a preprocessed waterfall diagram in an embodiment of the present invention;
FIG. 3 is a mask view before post-processing in an embodiment of the present invention;
FIG. 4 is a mask view after post-processing in an embodiment of the present invention;
FIG. 5 is a processing result diagram of the target signal in the waterfall diagram according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1, the present invention provides a target signal detection and identification method based on a waterfall graph, including the steps of:
S100, preprocessing original data to obtain a waterfall graph;
S200, performing semantic segmentation on the waterfall graph by using a U-net semantic segmentation method to obtain an initial target area;
S300, post-processing the initial target area through convex hull detection in the OpenCV image processing library, removing adhesions between segmented target areas, and obtaining an accurate target area;
and S400, acquiring the time and frequency position of the target signal in the waterfall graph according to the accurate target area.
As an optimization scheme of the above embodiment, in step S100, source data are synthesized by randomly resampling a given background and given signal types, simulating the two-dimensional time-frequency graph obtained by a signal processing technique to serve as the waterfall graph; here, 6 types of signals are used, as shown in FIG. 2.
Because an obvious white line exists in a fixed area of the waterfall graph background, the white line in the generated waterfall graph background is removed by replacing the pixel values at the white line with pixel values from a one-dimensional transverse neighborhood, the sampling probability following a Gaussian distribution.
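A minimal Python/NumPy sketch of this white-line suppression step is given below. The white-line column indices, neighborhood half-width and Gaussian width are illustrative assumptions; the patent only specifies that replacement pixels come from a one-dimensional horizontal neighborhood with Gaussian-distributed probability.

```python
import numpy as np

def remove_white_lines(waterfall, line_cols, half_width=10, sigma=3.0, seed=0):
    """Replace pixels in known white-line columns with pixels sampled from a
    one-dimensional horizontal neighborhood of the same row; the horizontal
    offset of the sampled pixel follows a (clipped) Gaussian distribution."""
    rng = np.random.default_rng(seed)
    cleaned = waterfall.copy()
    rows = np.arange(waterfall.shape[0])
    width = waterfall.shape[1]
    for col in line_cols:
        # Gaussian-distributed offsets; zero offsets are pushed off the line
        # itself, and all offsets are clipped to the neighborhood and image.
        offsets = rng.normal(0.0, sigma, size=rows.size).astype(int)
        offsets[offsets == 0] = 1
        offsets = np.clip(offsets, -half_width, half_width)
        src_cols = np.clip(col + offsets, 0, width - 1)
        cleaned[rows, col] = waterfall[rows, src_cols]
    return cleaned

# Usage sketch, assuming white lines at columns 128 and 384:
# clean_img = remove_white_lines(waterfall_img, line_cols=[128, 384])
```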
Signals of various types in the waterfall diagram are repeatedly sampled with replacement and, after a slight size change, randomly composited onto the cropped background picture, so that the different signals finally generated are disjoint. Specifically, 8-10 signals are sampled with replacement from the 6 signal pictures, slightly resized, and randomly composited onto the cropped background picture, ensuring that the generated signals do not intersect.
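The following Python sketch illustrates this synthesis step: 8-10 signals are drawn with replacement from the signal pictures, slightly rescaled, and pasted at random non-overlapping positions on the cropped background. The scale range, rejection-sampling placement and use of OpenCV for resizing are illustrative assumptions rather than details taken from the patent.

```python
import random
import cv2

def synthesize_waterfall(background, signal_imgs, n_min=8, n_max=10,
                         scale_range=(0.9, 1.1), max_tries=100, seed=0):
    """Paste n_min..n_max signals, sampled with replacement, onto the
    background so that no two pasted signals overlap.  Returns the composite
    image and the (x, y, w, h) boxes of the pasted signals."""
    random.seed(seed)
    canvas = background.copy()
    bg_h, bg_w = canvas.shape[:2]
    boxes = []
    for _ in range(random.randint(n_min, n_max)):
        sig = random.choice(signal_imgs)                    # sampling with replacement
        scale = random.uniform(*scale_range)                # slight size change
        sig = cv2.resize(sig, None, fx=scale, fy=scale)
        h, w = sig.shape[:2]
        for _ in range(max_tries):                          # rejection sampling of positions
            x = random.randint(0, bg_w - w)
            y = random.randint(0, bg_h - h)
            disjoint = all(x + w <= bx or bx + bw <= x or
                           y + h <= by or by + bh <= y
                           for bx, by, bw, bh in boxes)
            if disjoint:                                    # keep generated signals disjoint
                canvas[y:y + h, x:x + w] = sig
                boxes.append((x, y, w, h))
                break
    return canvas, boxes
```

The returned boxes can also serve as ground-truth labels for training the segmentation network.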
As an optimization scheme of the above embodiment, in step S200, the waterfall graph is segmented by using a U-net semantic segmentation network, where the U-net semantic segmentation network includes a contraction path and an expansion path.
The U-net semantic segmentation network comprises 5 consecutive 3 × 3 convolution kernels, a 2 × 2 max-pooling kernel for downsampling, and a 1 × 1 convolution kernel.
The method for segmenting the signal and the background of the waterfall graph by utilizing the U-net semantic segmentation network comprises the following steps:
in the expansion path, each step comprises up-sampling the waterfall graph;
then performing convolution operations using 5 consecutive 3 × 3 convolution kernels, where the 3 × 3 convolutions use a ReLU activation function;
after each convolution operation, aggregating local features with a 2 × 2 max-pooling layer;
performing convolution operation on the last layer of the neural network by using a 1 x 1 convolution kernel, and mapping each feature vector to an output layer of the network;
and (3) building and training a U-net semantic segmentation network model by using a keras deep learning framework.
As an optimization scheme of the foregoing embodiment, in step S300, the post-processing of the initial target region through convex hull detection in the OpenCV image processing library includes the steps of:
S301, binarizing the segmentation mask of the initial target area;
S302, determining different target signal entities through connectivity among pixels, removing regions whose target signal area is too small, and determining adhered target regions according to a decision threshold;
the decision threshold calculation formula is as follows:
wherein S isConvex hullIs the area of the convex hull, SOuter coverThe area of the rectangle is the external area,to average target height, HTargetIs the target height.
S303, after the mask of an adhered target region is closed, performing convexity-defect detection on its convex hull, obtaining the coordinates of the 6 points at the two concave positions (marked with stars in FIG. 4), and determining the 2 of the four vertices of the circumscribed rectangle that are closest to the convex hull;
S304, combining the 8 points into two rectangles, thereby completing the splitting of the adhered region and the separation of the adhered targets. The masks before post-processing and after separation are shown in FIG. 3 and FIG. 4.
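The OpenCV operations involved in steps S301-S303 can be sketched in Python as follows. The minimum-area threshold and closing kernel size are illustrative assumptions, the patent's decision-threshold formula is not reproduced, and the recombination of the 8 points into two rectangles (step S304) is only indicated by the convexity-defect points returned here.

```python
import cv2
import numpy as np

def postprocess_mask(mask, min_area=50):
    """Binarize the U-net mask, keep sufficiently large connected components,
    close each component, and collect the quantities used for detecting and
    splitting adhered targets (bounding box, convex hull area, convexity defects)."""
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)        # S301
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    regions = []
    for i in range(1, n):                                               # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] < min_area:                       # drop undersized regions (S302)
            continue
        comp = (labels == i).astype(np.uint8) * 255
        comp = cv2.morphologyEx(comp, cv2.MORPH_CLOSE, kernel)          # close the mask (S303)
        contours, _ = cv2.findContours(comp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cnt = max(contours, key=cv2.contourArea)
        hull_pts = cv2.convexHull(cnt)
        hull_idx = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull_idx) if len(cnt) > 3 else None
        x, y, w, h = cv2.boundingRect(cnt)
        regions.append({
            "bbox": (x, y, w, h),                    # circumscribed rectangle
            "hull_area": cv2.contourArea(hull_pts),  # convex hull area used by the decision threshold
            "rect_area": w * h,                      # circumscribed rectangle area
            "defects": defects,                      # concave points hinting at adhered targets
        })
    return regions
```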
The time and frequency position of the target signal in the waterfall graph is calculated according to the accurate target area. With a detection accuracy criterion of a time-frequency error within 3 pixels on the image scale, the accuracy rate is 98.86% and the recall rate is 98.75%; the processing result is shown in FIG. 5.
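Mapping an accurate target region back to time and frequency can be done with a simple linear interpolation over the waterfall axes, as in the sketch below; the convention that rows correspond to time and columns to frequency, and the example axis spans, are illustrative assumptions.

```python
def box_to_time_frequency(bbox, img_shape, time_span, freq_span):
    """Convert a target bounding box (x, y, w, h) in pixel coordinates into
    (start, end) intervals in time and frequency, assuming rows map linearly
    to time and columns map linearly to frequency."""
    img_h, img_w = img_shape[:2]
    x, y, w, h = bbox
    t0, t1 = time_span
    f0, f1 = freq_span
    t_start = t0 + (t1 - t0) * y / img_h
    t_end = t0 + (t1 - t0) * (y + h) / img_h
    f_start = f0 + (f1 - f0) * x / img_w
    f_end = f0 + (f1 - f0) * (x + w) / img_w
    return (t_start, t_end), (f_start, f_end)

# Example with assumed values: a 1024 x 1024 waterfall covering 10 s and 0-20 MHz.
print(box_to_time_frequency((100, 200, 50, 30), (1024, 1024), (0.0, 10.0), (0.0, 20e6)))
```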
The foregoing shows and describes the general principles and broad features of the present invention and its advantages. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.
Claims (9)
1. A target signal detection and identification method based on a waterfall plot is characterized by comprising the following steps:
S100, preprocessing original data to obtain a waterfall plot;
S200, performing semantic segmentation on the waterfall plot by using a U-net semantic segmentation method to obtain an initial target area;
S300, post-processing the initial target area through convex hull detection in the OpenCV image processing library, removing adhesions between segmented target areas, and obtaining an accurate target area;
wherein in step S300, the post-processing of the initial target area through convex hull detection in the OpenCV image processing library comprises the steps of:
S301, binarizing the segmentation mask of the initial target area;
S302, determining different target signal entities through connectivity among pixels, removing regions whose target signal area is too small, and determining adhered target regions according to a decision threshold;
S303, after the mask of an adhered target region is closed, performing convexity-defect detection on its convex hull, obtaining the coordinates of the 6 points at the two concave positions, and determining the 2 of the four vertices of the circumscribed rectangle that are closest to the convex hull;
S304, combining the 8 points into two rectangles, thereby completing the splitting of the adhered region and the separation of the adhered targets;
and S400, acquiring the time and frequency position of the target signal in the waterfall plot according to the accurate target area.
2. The target signal detection and identification method based on the waterfall plot as claimed in claim 1, wherein in step S100, source data are synthesized by randomly resampling a given background and given signal types, simulating the two-dimensional time-frequency diagram obtained by a signal processing technique to serve as the waterfall plot.
3. The target signal detection and identification method based on the waterfall plot as claimed in claim 2, wherein, because an obvious white line exists in a fixed area of the waterfall plot background, the white line in the generated waterfall plot background is removed by replacing the pixel values at the white line with pixel values from a one-dimensional transverse neighborhood, the sampling probability following a Gaussian distribution.
4. The target signal detection and identification method based on the waterfall plot as claimed in claim 3, wherein signals of various types in the waterfall plot are repeatedly sampled with replacement and, after a slight size change, randomly composited onto the cropped background picture, ensuring that the different signals finally generated are disjoint.
5. The target signal detection and identification method based on the waterfall graph as claimed in claim 1, wherein in step S200, the waterfall graph is segmented into the signal and the background by using a U-net semantic segmentation network, wherein the U-net semantic segmentation network comprises a contraction path and an expansion path.
6. The waterfall graph-based target signal detection and identification method as claimed in claim 5, wherein the U-net semantic segmentation network comprises 5 consecutive 3 × 3 convolution kernels, a 2 × 2 max-pooling kernel for downsampling, and a 1 × 1 convolution kernel.
7. The target signal detection and identification method based on the waterfall graph as claimed in claim 6, wherein the waterfall graph is segmented by using a U-net semantic segmentation network to segment the signal and the background, comprising the steps of:
in the expansion path, each step comprises up-sampling the waterfall graph;
then performing convolution operations using 5 consecutive 3 × 3 convolution kernels, where the 3 × 3 convolutions use a ReLU activation function;
after each convolution operation, aggregating local features with a 2 × 2 max-pooling layer;
and performing convolution operation on the last layer of the network by using a 1 x 1 convolution kernel, and mapping each feature vector to an output layer of the network.
8. The target signal detection and recognition method based on the waterfall plot as claimed in claim 7, wherein the Keras deep learning framework is used to build and train the U-net semantic segmentation network model.
9. The waterfall plot-based target signal detection and identification method as claimed in claim 1, wherein the decision threshold is calculated from the area of the convex hull, the area of the circumscribed rectangle, the average target height, and the target height.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811612078.3A CN109614952B (en) | 2018-12-27 | 2018-12-27 | Target signal detection and identification method based on waterfall plot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109614952A CN109614952A (en) | 2019-04-12 |
CN109614952B (en) | 2020-08-25
Family
ID=66012878
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811612078.3A Active CN109614952B (en) | 2018-12-27 | 2018-12-27 | Target signal detection and identification method based on waterfall plot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109614952B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110599467B (en) * | 2019-08-29 | 2022-09-27 | 上海联影智能医疗科技有限公司 | Method and device for detecting non-beam limiter area, computer equipment and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100259688A1 (en) * | 2007-11-14 | 2010-10-14 | Koninklijke Philips Electronics N.V. | method of determining a starting point of a semantic unit in an audiovisual signal |
US9078162B2 (en) * | 2013-03-15 | 2015-07-07 | DGS Global Systems, Inc. | Systems, methods, and devices for electronic spectrum management |
CN104887224B (en) * | 2015-05-29 | 2018-04-13 | 北京航空航天大学 | Feature extraction and automatic identifying method towards epileptic EEG Signal |
CN105738061B (en) * | 2016-02-19 | 2018-01-12 | 莆田学院 | A kind of image analysis method of vibration signal |
CN107122738B (en) * | 2017-04-26 | 2020-05-22 | 成都蓝色起源科技有限公司 | Radio signal identification method based on deep learning model and implementation system thereof |
CN108377158B (en) * | 2018-02-13 | 2020-06-16 | 桂林电子科技大学 | Multi-band division and aggregation method for realizing spread spectrum signal |
CN108616470A (en) * | 2018-03-26 | 2018-10-02 | 天津大学 | Modulation Signals Recognition method based on convolutional neural networks |
- 2018-12-27: CN application CN201811612078.3A filed; patent CN109614952B granted, status Active
Similar Documents
Publication | Title
---|---
CN112991447B | Visual positioning and static map construction method and system in dynamic environment
US10127675B2 | Edge-based local adaptive thresholding system and methods for foreground detection
CN110334762B | Feature matching method based on quad tree combined with ORB and SIFT
CN107767382A | The extraction method and system of static three-dimensional map contour of building line
CN111611643A | Family type vectorization data obtaining method and device, electronic equipment and storage medium
WO2019071976A1 | Panoramic image saliency detection method based on regional growth and eye movement model
CN107169972B | Non-cooperative target rapid contour tracking method
CN110717934B | A STRCF-Based Anti-Occlusion Target Tracking Method
CN111353371A | Shoreline extraction method based on spaceborne SAR images
CN101634706A | Method for automatically detecting bridge target in high-resolution SAR images
CN115393734A | SAR image ship contour extraction method based on the joint method of Faster R-CNN and CV model
CN108846844A | A kind of sea-surface target detection method based on sea horizon
CN112819832B | Fine-grained boundary extraction method for semantic segmentation of urban scenes based on laser point cloud
CN116051822A | Recognition method and device for concave obstacle, processor and electronic equipment
CN118115520A | Underwater sonar image target detection method, device, electronic equipment and storage medium
CN108717539A | A kind of small size Ship Detection
CN103077533B | A kind of based on frogeye visual characteristic setting movement order calibration method
CN109614952B | Target signal detection and identification method based on waterfall plot
CN108765440A | A kind of line guiding super-pixel tidal saltmarsh method of single polarization SAR image
CN107424172B | Moving target tracking method based on foreground discrimination and circular search method
CN113534146B | An automatic detection method and system for radar video image targets
CN112150509B | Block tracking method based on multi-layer depth features
CN115240198A | High-precision water level enhancement identification multi-mode measuring method for hydraulic engineering
CN114943711A | Building extraction method and system based on LiDAR point cloud and image
CN117152169A | Target segmentation method and device for depth image, terminal equipment and storage medium
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |
CP03 | Change of name, title or address | Address after: 610000 No. 270, floor 2, No. 8, Jinxiu street, Wuhou District, Chengdu, Sichuan; Patentee after: Chengdu shuzhilian Technology Co.,Ltd.; Address before: No.2, 4th floor, building 1, Jule road crossing, Section 1, West 1st ring road, Chengdu, Sichuan 610000; Patentee before: CHENGDU SHUZHILIAN TECHNOLOGY Co.,Ltd.