CN112859011A - Method for extracting waveform signals of single-wavelength airborne sounding radar - Google Patents

Method for extracting waveform signals of single-wavelength airborne sounding radar

Info

Publication number
CN112859011A
Authority
CN
China
Prior art keywords
waveform
output
elm
particle
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110034193.2A
Other languages
Chinese (zh)
Other versions
CN112859011B (en)
Inventor
杨必胜
纪雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110034193.2A priority Critical patent/CN112859011B/en
Publication of CN112859011A publication Critical patent/CN112859011A/en
Application granted granted Critical
Publication of CN112859011B publication Critical patent/CN112859011B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to group G01S13/00
    • G01S7/28 Details of pulse systems
    • G01S7/285 Receivers
    • G01S7/292 Extracting wanted echo-signals
    • G01S7/2923 Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods
    • G01S7/35 Details of non-pulse systems
    • G01S7/352 Receivers
    • G01S7/354 Extracting wanted echo-signals
    • G01S7/41 Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417 Involving the use of neural networks
    • G01S7/48 Details of systems according to group G01S17/00
    • G01S7/4802 Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/483 Details of pulse systems
    • G01S7/486 Receivers
    • G01S7/487 Extracting wanted echo signals, e.g. pulse detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a method for extracting waveform signals of a single-wavelength airborne depth-sounding radar. A waveform classification model based on a convolutional neural network is constructed to identify the full waveforms of the single-wavelength airborne depth-sounding radar as abnormal waveforms, supersaturated waveforms, land waveforms, sea surface waveforms or underwater waveforms. Water surface echoes and abnormal waveforms are removed, waveform segments are obtained through waveform segmentation, a waveform segment classification model SAPSO-ELM is constructed to identify the waveform segments, maximum peak detection is carried out on the waveform segments containing peaks, and, in combination with the waveform type, a Richardson-Lucy deconvolution algorithm is applied to extract signals from waveforms whose peaks are not correctly identified. Experiments show that the method offers high efficiency, accuracy and adaptability in waveform signal detection.

Description

Method for extracting waveform signals of single-wavelength airborne sounding radar
Technical Field
The invention belongs to the field of signal processing, and particularly relates to a method for extracting a single-wavelength airborne sounding radar waveform signal.
Background
Coastal management is a complex problem faced by decision makers and scientists in countries around the world. Monitoring the coast can be difficult because of the wide water areas that need to be covered, and obtaining accurate, high-density water depth and topography measurements in a coastal zone environment is a challenging task. Water depth data in deep and shallow waters are mainly derived from echo sounders (single- and multi-beam); however, in low-energy coastal areas with small depths of 0-5 m, ships cannot be used, so classical surveying techniques such as total stations (TS) or real-time kinematic global navigation satellite systems (RTK-GNSS) are usually employed, or unmanned surface vessels (USV) are deployed. Furthermore, satellite-derived bathymetry can provide medium-resolution, low-cost sounding data over extended areas, but its accuracy is, among other things, depth dependent, and it must always be supported by ground-truth data (usually obtained by echo sounding).
Airborne laser bathymetry (ALB) is an attractive shallow-water depth-sounding technology with a high capture rate and measuring-point density. In addition, lidar bathymetry is a tool for shortening measurement time and reducing measurement cost. Traditional lidar bathymetry systems employ near-infrared and green lasers, such as Optech SHOALS-1000/3000, CZMIL Nova, Titan; Leica Hawk-Eye II/III, Chiroptera II; and RIEGL VQ-880G. The near-infrared channel is used to determine the position of the water surface, while the green channel penetrates the water. By comparing the times of flight of the two lasers, the system can determine, for each emitted laser shot, the slant range travelled by the green laser through the water. The adoption of a dual-band high-power laser transmitter and a multi-channel high-sensitivity signal receiver results in high hardware cost, which to a great extent limits the large-scale adoption of airborne laser bathymetry. Single-wavelength lidar bathymetry systems operate at 532 nm and, being low-cost and lightweight, are an ideal choice for shallow-water sounding; examples include NASA EAARL, Fugro LADS Mk3, RIEGL VQ-820G, Fibertek CATS and NCALM Aquarius. The laser pulses emitted by such devices are typically of relatively low energy and penetrate only shallow water of 10 to 15 meters, but this compromise in maximum detectable depth is traded for a higher pulse emission frequency. Under the same flight conditions, the density of underwater laser spots acquired by a single-band airborne laser bathymetry system can reach several times, or even more than, that of a dual-band system.
In fact, a single-wavelength (green) bathymetric lidar system operates with a single laser wavelength and therefore cannot determine the in-water portion of the light path from a single independent shot. This mixing of water surface and water bottom returns presents challenges for waveform data processing. More importantly, some laser shots produce only surface returns, in shallow water some pulses may produce only bottom returns, while others produce several types of returns. The coordinate calculations corresponding to above-water and underwater waveforms are different, so waveform classification is an important guarantee for waveform decomposition. It is therefore very important to select an appropriate waveform classification and decomposition method to ensure accuracy. In summary, single-wavelength ALB waveform signal detection suffers from several problems: (1) the waveforms are mixed, making water-land separation difficult; (2) a single waveform signal detection model cannot be adapted to all waveforms; (3) traditional waveform signal detection models are time-consuming and inefficient; (4) conventional waveform signal detection models often cannot be adapted to all waveform signals.
Disclosure of Invention
In order to solve the above problems and obtain accurate point cloud coordinate results, this research fully considers the importance of the waveform type in signal detection and the limitations of current waveform signal detection, and provides a method for extracting single-wavelength airborne sounding radar waveform signals.
The technical scheme adopted by the invention is as follows. The first step: construct a convolutional neural network for waveform classification, delete unnecessary waveforms, and further process the valuable waveform signals. The second step: extract the wavelet scattering transform features of the waveform, segment the waveform with a sliding window, and take the waveform and the corresponding wavelet scattering matrix as the feature matrix. The third step: construct a waveform segment classification model to classify the waveform segments and obtain the target-class waveforms. The fourth step: peak extraction. The method comprises the following specific steps:
step one, constructing a convolutional neural network for waveform classification, deleting unnecessary waveforms, and further processing valuable waveform signals;
step two, extracting the waveform wavelet scattering transformation characteristics;
thirdly, waveform segmentation, namely segmenting the waveform by using a sliding window L, and taking the segmented waveform segments and the corresponding wavelet scattering transformation characteristics as a characteristic matrix for subsequent classification;
step four, constructing a waveform segment classification model, selecting different types of waveform segments to train the waveform segment classification model, and classifying the waveform segments by using the trained waveform segment classification model to obtain a target waveform;
and step five, extracting the peak value of the target waveform segment.
Further, the specific implementation manner of the step one is as follows;
step 1.1, based on statistical analysis, the full waveforms are divided into five categories: an abnormal waveform; a supersaturated waveform; a land waveform; a water surface waveform; and an underwater waveform;
step 1.2, constructing a waveform classification model based on a convolutional neural network; converting one-dimensional waveform data into two-dimensional data serving as input data, inputting low-level feature information through a convolution module, and then obtaining features of different scales through three pooling layers of different scales, wherein the convolution module comprises a convolution layer, batch standardization and a ReLU activation function; secondly, respectively carrying out two convolution layers and batch processing normalization on each feature to obtain high-level information with different scales; thirdly, performing convolution operation on the three uniformly-sampled high-dimensional characteristics with different convolution kernel sizes, wherein the resolution of the finally-output characteristics is the same; fourthly, aggregating the context information obtained at different moments by utilizing splicing and convolution operations, wherein the aggregation comprises two times of aggregation of the context information, namely splicing the features output in the third step with the features obtained by convolution of the initial input waveform, performing convolution layer on the spliced features, performing batch standardization and pooling operations, performing convolution and splicing on the current features and the features obtained in the second step after the operation is finished, and performing convolution layer, batch standardization and pooling operations on the obtained features; fifthly, flattening the final features for the convenience of classification operation, finally integrating the features by using 3 full connection layers and 2 exit layers and avoiding overfitting, and finally outputting the number of class labels of each input;
step 1.3, after the convolutional neural network model is constructed, selecting n waveforms for each of five types of waveforms to train;
and step 1.4, after the network model training is finished, inputting each input waveform into the trained convolutional neural network model, correspondingly outputting a class label, and finally removing the water surface waveform and the abnormal waveform.
Furthermore, the wavelet scattering transformation is used for extracting the characteristics of the one-dimensional waveform in the second step, and the specific implementation mode is as follows;
the waveform signal is treated as a one-dimensional signal f, and the corresponding 0-order scattering coefficient S_0 is given by formula (1):

S_0 f = f * φ_J    (1)

where f is the input signal, φ_J is a low-pass filter, and * denotes the convolution operation; the high-frequency information lost when the original signal is averaged by the low-pass filter is recovered through the wavelet transform |f * ψ_{j1}|, where ψ_j is a wavelet filter and the subscript 1 indicates that the calculated quantity corresponds to the first layer; similarly, to obtain the translation-invariant part of |f * ψ_{j1}|, low-pass filtering and averaging are performed at scale 2^J, giving the first-order scattering coefficient S_1, namely:

S_1 f = |f * ψ_{j1}| * φ_J    (2)

this operation ensures that the output is translation invariant at spatial scale 2^J, but the high-frequency characteristics of the signal are lost at the same time; to avoid losing the high-frequency detail information, wavelet transforms are adopted to restore the high-frequency information of the signal: the mother wavelet ψ is scaled over 1 ≤ 2^j ≤ 2^J, and iterating in turn gives the 2nd- and 3rd-order scattering transforms:

S_2 f = ||f * ψ_{λ1}| * ψ_{λ2}| * φ_J    (3)

S_3 f = |||f * ψ_{λ1}| * ψ_{λ2}| * ψ_{λ3}| * φ_J    (4)

λ represents the path and the number of subscripts represents the path length; the computed scattering transform matrices are combined together and down-sampled, and the length of the matrix columns is kept consistent with the waveform length.
Further, the specific implementation manner of the step four is as follows;
the waveform segment classification model SAPSO-ELM is constructed; the structure of the extreme learning machine ELM is similar to that of a single-hidden-layer neural network and comprises an input layer, a hidden layer and an output layer; for a standard extreme learning machine network model, assume N groups of arbitrary training samples (X, Y), where X = [x_1, x_2, …, x_N] are the inputs of the training samples, x_i = [x_{1i}, x_{2i}, …, x_{Di}]^T, i = 1, …, N, with D the feature matrix dimension, and Y = [y_1, y_2, …, y_N] are the output labels corresponding to the inputs, y_i = [y_{1i}, y_{2i}, …, y_{Mi}]^T, i = 1, …, N, with M the output matrix dimension; the output function of the network is defined as follows:

o_i = Σ_{p=1}^{L} β_p · g(ω_p · x_i + b_p),  i = 1, …, N    (5)

o_{si} = Σ_{p=1}^{L} β_{ps} · g(ω_p · x_i + b_p),  s = 1, …, M    (6)

where s = 1, …, M; ω_p, b_p ∈ (−1, 1); i = 1, …, N; β_{ps} is the connection weight between the s-th output layer neuron and the p-th hidden layer neuron; g(x) is the excitation function; ω is the input weight matrix, with ω_p = [ω_{p1}, ω_{p2}, …, ω_{pD}]; b_p is the threshold of the p-th hidden layer node; L is the number of hidden layer nodes, with L ≤ N;

the goal of the ELM algorithm is to minimize the difference between the model output and the corresponding target output, i.e.:

min Σ_{i=1}^{N} || o_i − y_i ||    (7)

where N is the number of samples; expressed in matrix form:

Hβ = Y    (8)

    | g(ω_1·x_1 + b_1)  …  g(ω_L·x_1 + b_L) |
H = |        ⋮          ⋱          ⋮        |  (N×L)    (9)
    | g(ω_1·x_N + b_1)  …  g(ω_L·x_N + b_L) |

β = [β_1^T, β_2^T, …, β_L^T]^T (L×M),  Y = [y_1^T, y_2^T, …, y_N^T]^T (N×M)    (10)

in order to increase the computation speed, the output weight of the ELM hidden layer is determined directly as

β = H* Y = (H^T H)^{−1} H^T Y    (11)

where H* = (H^T H)^{−1} H^T is the Moore-Penrose generalized inverse of the hidden layer output matrix H;
the ELM input weight matrix and the hidden layer threshold are given by a random algorithm, so the network structure is unstable, and a large number of hidden layer neurons is usually needed to achieve the desired prediction accuracy; therefore the ELM parameters, namely the input weight matrix ω and the hidden layer threshold b, are optimized with the SAPSO algorithm, which prevents the algorithm from falling into local optima during the search and improves its search accuracy.
Further, the specific implementation of optimizing the ELM parameters with the SAPSO algorithm is as follows;
the PSO is an intelligent optimization algorithm based on the random global optimization of population evolution, each particle is close to the best position found by itself and the best particle in the population through an iteration mode, so that the optimal solution is searched, and in each iteration, the particle updates the speed and the position according to the following formula:
v_{id}^{k+1} = ω · v_{id}^{k} + c_1 r_1 (p_{id} − x_{id}^{k}) + c_2 r_2 (g_{id} − x_{id}^{k})    (12)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}    (13)

ω = ω_max − (ω_max − ω_min) · k / K_max    (14)

in the formulas, the velocity v_{id} has value range [v_min, v_max]; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers in the interval [0, 1]; the position x_i has value range [x_min, x_max]; p_{id} is the best position searched by the particle so far; g_{id} is the best position searched by the whole particle swarm so far; ω is the inertia weight, with ω_max and ω_min respectively the maximum and minimum values of the weight; k is the current iteration number; K_max is the maximum number of iterations;

the simulated annealing algorithm SA is used to judge whether to update the particle position:

ΔE = f_{k+1}(x_i) − f_k(x_i)    (15)

P = 1,              if ΔE < 0
P = exp(−ΔE / T_k), if ΔE ≥ 0    (16)

T_{k+1} = α · T_k    (17)

in the formulas, rand is a random number in (0, 1), and the updated position replaces the original position when P > rand; f(x_i) is the fitness value of the i-th particle; T is the temperature at the current iteration; α is the cooling rate;
the ELM algorithm based on the SAPSO optimization comprises the following steps:
step1, establishing an extreme learning machine neural network topological structure based on the SAPSO, setting the number of neurons and the number of hidden nodes, and selecting an activation function sig;
step2, initializing the population: setting the initial temperature T_0, the termination temperature T and the cooling rate α, and starting the annealing; randomly generating M particles;
step3, performing simulation training of the ELM according to formulas (8)-(11), comparing the test output with the actual output, calculating the fitness value f(x_i) of each particle, and finding the current optimal position p_id of each particle and the global optimal position g_id;
Step4, judging whether the original particle position is replaced by the updated particle position according to the formulas (15) and (16), if yes, updating the system temperature according to the formula (17); otherwise, the temperature is unchanged;
step5, updating the position and velocity of the particles according to formulas (12)-(14), calculating the fitness f(x_i) of each particle, and finding the current optimal position p_id of each particle and the global optimal position g_id;
Step6, judging whether the system reaches the termination temperature; if so, stopping iteration; otherwise, step3 is switched to continue iteration;
step7, g_id at the end of the iteration is the optimal (ω, b); the optimal (ω, b) is introduced into the ELM network for training, the connection weight β is calculated according to equations (9)-(11), and the actual output matrix T is calculated according to equation (17).
Further, in the fifth step, the peak value of the target waveform segment is extracted by adopting maximum peak value detection, and whether the extracted peak value is correct or not is cross-verified.
The invention has the following advantages: it innovatively provides a method for extracting single-wavelength airborne sounding radar waveform signals, solves the problem of mixed single-wavelength airborne sounding radar waveforms and realizes rapid classification of the mixed waveforms; it provides a simplified theoretical waveform model that simplifies the description of the waveform; and it provides a waveform signal identification model that can be applied to different water depths and can rapidly identify waveform signals.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a waveform type display of the present invention;
FIG. 3 shows the result of waveform classification according to the present invention;
FIG. 4 shows the result of waveform segmentation according to the present invention;
FIG. 5 shows the waveform signal extraction results of the present invention: (a1) is the coordinate information obtained by the device's own solution software; (b1) and (c1) are the point cloud coordinates displayed after unnecessary waveforms are removed using the waveform classification result, where the coordinate information is still the calculation result of the acquisition software; (d1) and (e1) are the coordinates re-solved with LM-GGM and with the method of the invention, respectively.
Detailed Description
As shown in fig. 1, the method for extracting a waveform signal of a single-wavelength airborne sounding radar provided by the invention specifically includes the following steps:
the first step is as follows: constructing a convolutional neural network for waveform classification, deleting unnecessary waveforms, and further processing valuable waveform signals;
step 1: based on statistical analysis, the full waveform is divided into five classes: an abnormal waveform; a supersaturated waveform; a land waveform; a water surface waveform; and an underwater waveform.
Step 1.2: a model for waveform classification is constructed based on a convolutional neural network, and the one-dimensional waveform data are converted into two-dimensional data as input. First, the low-level feature information is extracted by a convolution module comprising a convolution layer, batch normalization and a ReLU (rectified linear unit) activation function, and the output is passed through three pooling layers of different scales, set to 2, 4 and 8 respectively, to collect a multi-scale representation with reduced feature dimensions. Second, two convolution layers and batch normalization are applied at each scale, with the output size set to 64 channels, to obtain high-level information at different scales. Third, the uniformly sampled high-dimensional features of the three scales obtained in the previous step are convolved with different convolution kernel sizes so that the finally output features all have the same resolution, (1, 18). Fourth, to fuse the information, the features (context information) obtained at different stages are aggregated with concatenation and convolution operations; the context information is aggregated twice: the features output in the third step are concatenated with the features obtained by convolving the initial input waveform, and the concatenated features pass through a convolution layer, batch normalization and pooling; after this, the current features are convolved and concatenated with the features obtained in the second step, and the resulting features again pass through a convolution layer, batch normalization and pooling. Fifth, the final features are flattened for the classification operation. Finally, 3 fully connected (FC) layers and 2 dropout layers are used to integrate the features and avoid overfitting; the output channels are set to 1024, 256 and 5 respectively, and the Dropout operation uses a retention probability of 0.7. The output of the last layer of neurons gives the class label of each input.
Step 1.3: after the convolutional neural network model is constructed, 20000 waveforms are selected for each of five types of waveforms to be trained;
step 1.4: after the network model is trained, each input waveform is fed into the trained network model and a category label is output correspondingly. The water surface waveforms and abnormal waveforms are then removed.
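For reference, the following is a minimal, hypothetical Python/PyTorch sketch of a multi-scale one-dimensional convolutional classifier in the spirit of step 1.2. The branch scales 2, 4 and 8, the common feature resolution of 18, the fully connected sizes 1024/256/5 and the dropout retention probability of 0.7 follow the description above; every class name, layer width not stated above and wiring detail is an illustrative assumption rather than the exact patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WaveformClassifier(nn.Module):
    """Illustrative multi-scale 1-D CNN for five-class waveform labelling (not the exact patent network)."""

    def __init__(self, n_classes=5):
        super().__init__()
        # low-level feature extraction: convolution + batch normalization + ReLU
        self.stem = nn.Sequential(nn.Conv1d(1, 32, 3, padding=1),
                                  nn.BatchNorm1d(32), nn.ReLU())
        # three branches pooled at scales 2, 4 and 8, each with two conv + BN stages (64 channels)
        self.branches = nn.ModuleList([
            nn.Sequential(nn.MaxPool1d(scale),
                          nn.Conv1d(32, 64, 3, padding=1), nn.BatchNorm1d(64), nn.ReLU(),
                          nn.Conv1d(64, 64, 3, padding=1), nn.BatchNorm1d(64), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(18))   # bring every branch to the same resolution
            for scale in (2, 4, 8)])
        # aggregate the multi-scale context with concatenation + 1x1 convolution
        self.fuse = nn.Sequential(nn.Conv1d(3 * 64 + 32, 64, 1),
                                  nn.BatchNorm1d(64), nn.ReLU())
        # flatten, then 3 fully connected layers with 2 dropout layers (1024, 256, n_classes)
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(64 * 18, 1024), nn.ReLU(), nn.Dropout(0.3),  # keep prob. 0.7
                                  nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.3),
                                  nn.Linear(256, n_classes))

    def forward(self, x):                    # x: (batch, 1, n_samples)
        low = self.stem(x)
        feats = [branch(low) for branch in self.branches]
        feats.append(F.adaptive_avg_pool1d(low, 18))   # skip path from the initial features
        return self.head(self.fuse(torch.cat(feats, dim=1)))

# Example usage: logits = WaveformClassifier()(torch.randn(8, 1, 400))
```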
The second step is that: wavelet scattering transform feature calculation
Step two: extracting the characteristic of wavelet scattering transformation, using the waveform signal as a one-dimensional signal and corresponding 0-order scattering coefficient S0Calculating formula (1):
S0=f*φJ (1)
wherein f is the input signal, phiJFor low pass filters, the convolution operation is denoted. The high frequency information lost by averaging the original signal through a low pass filter can be transformed by a wavelet transform | f |j1I to recover psijFor the wavelet filter, 1 indicates that the calculated correspondence is the first layer. Similarly, to get | f | + |)j1For the part which is invariant in translation, in the scale 2JLow-pass filtering averaging is carried out to obtain a first-order scattering coefficient S1Namely:
S1=|f*ψj1|*φJ (2)
this operation ensures that the output results are at spatial scale 2JWith translational invariance but at the same time losing the high frequency characteristics of the signal. In order to avoid the loss of high-frequency detail information, wavelet transformation is adopted in a subsequent network to recover the high-frequency information of the signal. The mother wavelet psi is within 1 or less than 2j≤2JScaling, successive iterations may result in a2 nd and 3 rd order scatter transform:
S2||f*ψλ1|*ψλ2|*φJ (3)
S3|||f*ψλ1|*ψλ2|*ψλ3|*φJ (4)
lambda represents the path, the number represents the path length, the calculated scatter transform matrices are combined together and down-sampled, and the length of the matrix column is consistent with the waveform length.
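As a concrete illustration of formulas (1) and (2), the sketch below computes zeroth- and first-order scattering coefficients of a one-dimensional waveform, with a Gaussian low-pass filter standing in for φ_J and Morlet-type wavelets standing in for ψ_j. The filter shapes, lengths and parameters are assumptions; the 2nd- and 3rd-order coefficients of formulas (3) and (4) would be obtained by iterating the modulus / wavelet / low-pass step on each |f * ψ_j|.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_lowpass(length, scale):
    """Discrete Gaussian low-pass filter phi_J with standard deviation `scale` (= 2**J)."""
    t = np.arange(length) - length // 2
    phi = np.exp(-0.5 * (t / scale) ** 2)
    return phi / phi.sum()

def morlet(length, scale, xi=5.0):
    """Approximately zero-mean complex Morlet wavelet psi_j at dilation `scale` (= 2**j)."""
    t = np.arange(length) - length // 2
    g = np.exp(-0.5 * (t / scale) ** 2)
    psi = g * np.exp(1j * xi * t / scale)
    psi = psi - g * (psi.sum() / g.sum())        # subtract the residual DC component
    return psi / np.abs(psi).sum()

def scattering_order1(f, J=6, filt_len=257):
    """Zeroth- and first-order scattering coefficients S0 f and S1 f of a 1-D waveform f."""
    phi = gaussian_lowpass(filt_len, 2 ** J)
    S0 = fftconvolve(f, phi, mode="same")                   # S0 f = f * phi_J          (1)
    S1 = []
    for j in range(1, J + 1):                               # wavelet scales 2^j <= 2^J
        U1 = np.abs(fftconvolve(f, morlet(filt_len, 2 ** j), mode="same"))   # |f * psi_j|
        S1.append(fftconvolve(U1, phi, mode="same"))        # S1 f = |f * psi_j| * phi_J (2)
    return S0, np.stack(S1)
```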
The third step: segmenting waveforms with sliding windows
Step 3.1: waveform segment segmentation, a sliding window of length L, is included according to the particular system under consideration in order to segment the signal into segments of the same length, called waveform segments. The waveform signal is simply considered as a combination of the ambient noise signal and the rising signal and the falling signal, and is numbered 1, 2, and 3, respectively. Nine different combinations can be formed by permutation and combination. Where the combination of 1&2 and 1&3 is ignored because it is not present in the waveform. If no valid gradient transition is found in the entire window, and the variation is relatively smooth, then the waveform segments are classified as class 1. If the first half of the window has insignificant change in effective slope and another part is rising, then the waveform segment is referred to as class 2. If half of the two windows are found to have a significant rising gradient, the waveform segment is defined as type 3. If the first half of the effective slope of the window is ascending and the other half of the effective slope is descending, i.e., the window as a whole resembles a hill, then the waveform segment is classified as category 4. In contrast, a valley-like waveform segment is classified as class 6. In contrast to the waveform segment corresponding to class 2, the upper half shows a downward trend, and the stable waveform segment of the lower half is divided into 5 classes. Symmetrical class 3, two halves are found to have an effective decreasing gradient, and the waveform segment is defined as type 7. Wherein the peak exists only in the type 4 waveform segment, and needs to be intensively researched and further analyzed. The difficulty of peak detection is then translated into waveform identification and peak detection. It is emphasized that the selection of the window is critical to the identification of the waveform segment. In order to avoid classification errors caused by waveform segmentation damaging the waveform where the peak is located, waveforms (1/3L and 2/3L) of a certain time scale are skipped to be input in addition to the input original waveform. In other words, one waveform is subjected to a plurality of waveform division and classification operations.
Step 3.2: and taking the waveform segments and the corresponding wavelet scattering matrix as a characteristic matrix.
The fourth step: constructing a waveform segment classification model, selecting different types of waveform segments to train the waveform segment classification model, and then classifying the waveform segments by using the trained waveform segment classification model;
step 4.1: a waveform segment classification model (SAPSO-ELM) is constructed. The structure of an Extreme Learning Machine (ELM) is similar to that of a single-hidden-layer neural network and mainly comprises an input layer, a hidden layer and an output layer. For a standard extreme learning machine network model, assume N groups of arbitrary training samples (X, Y), where X = [x_1, x_2, …, x_N] are the inputs of the training samples, x_i = [x_{1i}, x_{2i}, …, x_{Di}]^T, i = 1, …, N, with D the feature matrix dimension, and Y = [y_1, y_2, …, y_N] are the output labels (target outputs) corresponding to the inputs, y_i = [y_{1i}, y_{2i}, …, y_{Mi}]^T, i = 1, …, N, with M the output matrix dimension. The output function of the network is defined as follows:

o_i = Σ_{p=1}^{L} β_p · g(ω_p · x_i + b_p),  i = 1, …, N    (5)

o_{si} = Σ_{p=1}^{L} β_{ps} · g(ω_p · x_i + b_p),  s = 1, …, M    (6)

where s = 1, …, M; ω_p, b_p ∈ (−1, 1); i = 1, …, N; β_{ps} is the connection weight between the s-th output layer neuron and the p-th hidden layer neuron; g(x) is the excitation function; ω is the input weight matrix, with ω_p = [ω_{p1}, ω_{p2}, …, ω_{pD}]; b_p is the threshold (offset) of the p-th hidden layer node; L is the number of hidden layer nodes, with L ≤ N.

The goal of the ELM algorithm is to minimize the difference between the model output and the corresponding target output, i.e.:

min Σ_{i=1}^{N} || o_i − y_i ||    (7)

where N is the number of samples; expressed in matrix form:

Hβ = Y    (8)

    | g(ω_1·x_1 + b_1)  …  g(ω_L·x_1 + b_L) |
H = |        ⋮          ⋱          ⋮        |  (N×L)    (9)
    | g(ω_1·x_N + b_1)  …  g(ω_L·x_N + b_L) |

β = [β_1^T, β_2^T, …, β_L^T]^T (L×M),  Y = [y_1^T, y_2^T, …, y_N^T]^T (N×M)    (10)

To increase the computation speed, the output weight of the ELM hidden layer is determined directly as

β = H* Y = (H^T H)^{−1} H^T Y    (11)

where H* = (H^T H)^{−1} H^T is the Moore-Penrose generalized inverse of the hidden layer output matrix H.
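A compact numerical sketch of the basic ELM training described by formulas (8)-(11): random input weights ω and thresholds b, a sigmoid hidden layer, and output weights β obtained through the Moore-Penrose pseudo-inverse. The dimensions follow the text; the function names, the use of the sigmoid as the excitation g(x) and of numpy.linalg.pinv are illustrative assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden=100, seed=0):
    """Basic ELM: random (omega, b), sigmoid hidden layer, least-squares output weights (eqs. 8-11)."""
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    omega = rng.uniform(-1.0, 1.0, size=(n_hidden, D))   # input weight matrix, omega_p in (-1, 1)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)            # hidden-layer thresholds b_p
    H = 1.0 / (1.0 + np.exp(-(X @ omega.T + b)))         # N x L hidden-layer output matrix (eq. 9)
    beta = np.linalg.pinv(H) @ Y                         # beta = H* Y, Moore-Penrose solution (eq. 11)
    return omega, b, beta

def elm_predict(X, omega, b, beta):
    """Network output for new samples (eq. 8: H beta)."""
    H = 1.0 / (1.0 + np.exp(-(X @ omega.T + b)))
    return H @ beta
```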
The input weight matrix ω and the hidden layer threshold b of the ELM are given randomly by the algorithm, so the network structure is unstable, and more hidden layer neurons are usually needed to achieve the desired prediction accuracy. Particle Swarm Optimization (PSO) is a swarm-based intelligent optimization algorithm proposed by Kennedy and Eberhart in 1995; it uses real-valued solutions, has few parameters to adjust, and is a general-purpose global search algorithm. The Simulated Annealing algorithm (SA) solves an optimization problem by simulating a thermodynamic system: at the beginning of the search the SA temperature is high and the global search capability is strong, and as the iterations proceed the temperature decreases gradually and the search becomes finer; the Metropolis sampling criterion with its probabilistic jump characteristic is used during the search, which can effectively avoid becoming trapped in a local minimum. To address the problems that PSO easily falls into local extreme points and converges slowly in the late stage of evolution, the simulated annealing idea is introduced to construct the SAPSO algorithm. To address the unstable network structure caused by randomly generated ELM hidden layer parameters and the overfitting caused by a large number of hidden layer neurons, the SAPSO algorithm is used to optimize the ELM parameters (the input weight matrix ω and the hidden layer threshold b), which prevents the search from becoming trapped in local optima and improves the search accuracy.
The Particle Swarm Optimization (PSO) is an intelligent optimization algorithm based on the random global optimization of population evolution, and each particle is close to the best position found by itself and the best particle in the population through an iteration mode, so that the optimal solution is searched. In each iteration, the particle updates the velocity and position according to the following formula:
v_{id}^{k+1} = ω · v_{id}^{k} + c_1 r_1 (p_{id} − x_{id}^{k}) + c_2 r_2 (g_{id} − x_{id}^{k})    (12)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}    (13)

ω = ω_max − (ω_max − ω_min) · k / K_max    (14)

In the formulas, the velocity v_{id} has value range [v_min, v_max]; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers in the interval [0, 1]; the position x_i has value range [x_min, x_max]; p_{id} is the best position searched by the particle so far; g_{id} is the best position searched by the whole particle swarm so far; ω is the inertia weight, with ω_max and ω_min respectively the maximum and minimum values of the weight; k is the current iteration number; K_max is the maximum number of iterations.

The simulated annealing algorithm SA is used to judge whether to update the particle position:

ΔE = f_{k+1}(x_i) − f_k(x_i)    (15)

P = 1,              if ΔE < 0
P = exp(−ΔE / T_k), if ΔE ≥ 0    (16)

T_{k+1} = α · T_k    (17)

In the formulas, rand is a random number in (0, 1), and the updated position replaces the original position when P > rand; f(x_i) is the fitness value of the i-th particle; T is the temperature at the current iteration; α is the cooling rate.
The ELM algorithm based on the SAPSO optimization mainly comprises the following steps (a minimal code sketch follows this list):
step1, establishing an extreme learning machine neural network topological structure based on the SAPSO, setting the number of neurons and the number of hidden nodes, and selecting an activation function (sig).
Step2, initializing the population: setting the initial temperature T_0, the termination temperature T and the cooling rate α, and starting the annealing; M particles are generated randomly.
Step3, carrying out simulation training on the ELM according to formulas (8)-(11), comparing the test output with the actual output, calculating the fitness value f(x_i) of each particle, and finding the current optimal position p_id of each particle and the global optimal position g_id.
Step4, judging whether the original particle position is replaced by the updated particle position according to the formulas (15) and (16), if yes, updating the system temperature according to the formula (17); otherwise, the temperature is not changed.
Step5, updating the position and velocity of the particles according to formulas (12)-(14), calculating the fitness f(x_i) of each particle, and finding the current optimal position p_id of each particle and the global optimal position g_id.
Step6, judging whether the system reaches the termination temperature; if so, stopping iteration; otherwise step3 continues the iteration.
Step7, finding optimal (ω, b) at the end of iteration, and substituting the optimal (ω, b) into the ELM network for training, calculating the output weight β according to the equations (9) - (11), and calculating the actual output matrix T according to the equation (17).
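The sketch below, referred to in the list above, is a minimal SAPSO search loop: standard PSO velocity and position updates (formulas (12)-(14)) combined with a Metropolis acceptance test (formulas (15)-(16)) and geometric cooling (formula (17)). For simplicity the temperature is cooled every iteration rather than only on acceptance, and all hyper-parameter values are assumptions.

```python
import numpy as np

def sapso_optimize(fitness, dim, n_particles=30, k_max=100,
                   T0=100.0, alpha=0.95, v_max=0.5, seed=1):
    """Simulated-annealing PSO search (minimisation) over a real vector in [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = rng.uniform(-v_max, v_max, (n_particles, dim))
    f = np.array([fitness(p) for p in x])
    p_best, p_val = x.copy(), f.copy()           # personal best positions and values
    g_best = x[f.argmin()].copy()                # global best position
    T = T0
    w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
    for k in range(k_max):
        w = w_max - (w_max - w_min) * k / k_max                          # inertia weight, eq. (14)
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        v = np.clip(w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x),
                    -v_max, v_max)                                       # eq. (12)
        x_new = np.clip(x + v, -1.0, 1.0)                                # eq. (13)
        f_new = np.array([fitness(p) for p in x_new])
        dE = f_new - f                                                   # eq. (15)
        accept = (dE < 0) | (np.exp(-dE / T) > rng.random(n_particles))  # eq. (16)
        x[accept], f[accept] = x_new[accept], f_new[accept]
        improved = f < p_val
        p_best[improved], p_val[improved] = x[improved], f[improved]
        g_best = p_best[p_val.argmin()].copy()
        T *= alpha                                                       # eq. (17), simplified cooling
    return g_best
```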
The ELM input weight matrix and hidden layer threshold are given by a random algorithm, and more hidden layer neurons are usually needed to achieve the desired prediction accuracy. In the SAPSO-ELM model, the mean square error on the test set is taken as the fitness function. The SAPSO algorithm, which searches for the optimal ω_i and b_i, improves on the random assignment in the ELM, so the ELM needs only a small number of hidden layer neurons to achieve a good prediction effect. Moreover, the SAPSO-ELM also compensates for the tendency of the algorithm to become trapped at local extreme points, enhances the stability of the network, and improves the convergence speed of the subsequent evolutionary algorithm and the generalization of the network.
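Tying the two previous sketches together, a hypothetical SAPSO-ELM fitness function can decode a particle into (ω, b), solve the ELM output weights by least squares, and return the mean square error on the test set, as described above; the particle encoding and all names are assumptions.

```python
import numpy as np

def elm_fitness(particle, X_train, Y_train, X_test, Y_test, n_hidden=40):
    """SAPSO fitness: decode (omega, b) from the particle and return the test-set MSE."""
    D = X_train.shape[1]
    omega = particle[:n_hidden * D].reshape(n_hidden, D)       # input weight matrix
    b = particle[n_hidden * D:n_hidden * D + n_hidden]         # hidden-layer thresholds
    H = 1.0 / (1.0 + np.exp(-(X_train @ omega.T + b)))
    beta = np.linalg.pinv(H) @ Y_train                          # least-squares output weights
    H_test = 1.0 / (1.0 + np.exp(-(X_test @ omega.T + b)))
    return float(np.mean((H_test @ beta - Y_test) ** 2))

# Illustrative wiring with the SAPSO sketch above:
# best = sapso_optimize(lambda p: elm_fitness(p, Xtr, Ytr, Xte, Yte), dim=40 * Xtr.shape[1] + 40)
```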
Step 4.2: and selecting different waveform fragment feature matrixes to train the model, wherein each class comprises 20000 samples.
Step 4.3: and then sequentially inputting the waveform segment characteristic matrixes of the same waveform into a classifier for identification, and outputting a classification label.
The fifth step: waveform peak extraction
Step 5.1: in step 3.1 a waveform undergoes three segmentations, unprocessed, pre-culling 1/3L and 2/3L respectively. Therefore, the waveform after each waveform segment division performs the operation of step 4.3, wherein the output waveform segment with the category label of 4 is the target waveform segment for the next step of peak signal detection, and the waveform segments which are not identified as the label 4 are discarded;
step 5.2: once the waveform segment containing the peak value information is accurately identified, the peak value detection is relatively simple, the maximum peak value detection is carried out on the target waveform segment, repeated peak values inevitably exist in the extracted peak values, and therefore duplication checking is carried out, and the repeatedly detected peak values are eliminated.
Step 5.3: after the repeated peak is removed, the directional waveform segment is segmented by taking the peak as the center and taking L as the length window, and the characteristic matrix is formed by combining the wavelet scattering transformation characteristics to be executed again, and the steps 4.3, 5.1 and 5.2 have significance in cross-verifying whether the extracted peak value is correct or not.
Step 5.4: the peak number and waveform type are checked, and in the deep water waveform, especially in the ultra-shallow water waveform, the accurate detection of the submarine echo signal is very difficult due to the severe mixing of the water surface and the water bottom reflected signals. If the waveform type is identified as an underwater waveform and the number of extracted peaks is not equal to 2, Richardson Lucy Deconvolution (RLD) peak detection is performed, which makes the model suitable for signal detection in different scenes and water depths.
In the embodiment, the ALB data acquired near the Kwangsi island centipede continent for 1 month and 20 days in 2013 is selected to provide an embodiment of the invention;
(I) classifying the ALB waveform data;
the single-wavelength airborne sounding radar waveform is subjected to statistical analysis, and the waveform is roughly divided into five types including abnormal waveform, supersaturated waveform, land waveform, water surface waveform and underwater waveform, as shown in fig. 2. Two areas (a) and (b) are selected according to the first step, and classification results are obtained by classifying single-wavelength waveforms, as shown in fig. 3, wherein (a1) and (b1) are point cloud coordinates calculated by self-contained software, (a2) and (b2) are point cloud classification results, (a3) and (b3) are land and water point cloud classification results, (a4) and (b4) are image data acquired by an onboard camera for framing the areas, and (a5) and (b5) are point cloud classification results corresponding to the images.
(II) wavelet scattering transformation;
and (3) inputting the waveform signal into a wavelet scattering network, and performing wavelet scattering transformation feature extraction by using formulas (1) to (4).
(III) waveform segmentation;
the waveform is segmented based on a sliding window with length L, and repeated segmentation is carried out after 1/3L and 2/3L are skipped to obtain waveform segments, as shown in FIG. 4. Wherein, the above division result is obtained by directly dividing the waveform by a sliding window with L length; the middle division result is obtained by skipping 1/3L waveform length and then dividing the waveform by L-length sliding window; the following segmentation results are obtained by skipping the waveform length of 2/3L and then segmenting the waveform with a sliding window of L length.
(IV) The waveform segments are input into the waveform segment classification model for classification, and the maximum peak method is used for peak identification on the waveform segments identified as the 4th class. Since a complete waveform is input three times, repeated peak detections occur. After the repeated peaks are removed, the waveform segment is re-segmented with the peak as the center and L as the length, and then input into the peak detection model for peak detection; the significance of this step is to cross-verify whether the extracted peaks are correct. However, in deep water waveforms, and particularly in very shallow water waveforms, accurate detection of the seafloor echo signal is difficult because the surface and bottom reflected signals are severely mixed. If the number of peaks extracted from a waveform labelled as a sounding waveform is not equal to 2, RLD peak detection is performed. The signal detection results are shown in fig. 5. All the areas are located underwater, and some areas contain underwater reefs. The results computed by the device's own software, by the Gaussian method (LM-GGM) and by the method of this invention are displayed and compared. (a1) shows the coordinate information obtained by the device's own solving software; it is clear that the solutions of many sea surface echoes and empty echoes are mixed in, creating an unrealistic topographic map. (b1) and (c1) show the point cloud coordinates after unnecessary waveforms are removed, where the coordinate information is still the result computed by the acquisition software. (d1) and (e1) are the coordinates re-solved with LM-GGM and with the method of the invention, respectively.
Test and analysis: the model of the invention can extract waveform signals quickly and accurately. Waveform classification can effectively eliminate invalid waveforms; otherwise the solved coordinates would inevitably be mixed with a large amount of unreal terrain data because the waveforms are too complex. Waveform classification also effectively removes waveforms that do not need to be solved, which reduces the subsequent filtering burden to a certain extent. The invention achieves a better extraction result than LM-GGM with satisfactory operating efficiency: LM-GGM took 10 min 54 s to process the data, while the new model took 3 min 13 s.

Claims (6)

1. A method for extracting a single-wavelength airborne sounding radar waveform signal is characterized by comprising the following steps:
step one, constructing a convolutional neural network for waveform classification, deleting unnecessary waveforms, and further processing valuable waveform signals;
step two, extracting the waveform wavelet scattering transformation characteristics;
thirdly, waveform segmentation, namely segmenting the waveform by using a sliding window L, and taking the segmented waveform segments and the corresponding wavelet scattering transformation characteristics as a characteristic matrix for subsequent classification;
step four, constructing a waveform segment classification model, selecting different types of existing waveform segments to train the waveform segment classification model, and classifying the waveform segments to be detected by using the trained waveform segment classification model to obtain a target waveform;
and step five, extracting the peak value of the target waveform segment.
2. The method of claim 1, wherein the method comprises the following steps: the specific implementation manner of the step one is as follows;
step 1.1, based on statistical analysis, the full waveforms are divided into five categories: an abnormal waveform; a supersaturated waveform; a land waveform; a water surface waveform; and an underwater waveform;
step 1.2, constructing a waveform classification model based on a convolutional neural network; converting one-dimensional waveform data into two-dimensional data serving as input data, inputting low-level feature information through a convolution module, and then obtaining features of different scales through three pooling layers of different scales, wherein the convolution module comprises a convolution layer, batch standardization and a ReLU activation function; secondly, respectively carrying out two convolution layers and batch processing normalization on each feature to obtain high-level information with different scales; thirdly, performing convolution operation on the three uniformly-sampled high-dimensional characteristics with different convolution kernel sizes, wherein the resolution of the finally-output characteristics is the same; fourthly, aggregating the context information obtained at different moments by utilizing splicing and convolution operations, wherein the aggregation comprises two times of aggregation of the context information, namely splicing the features output in the third step with the features obtained by convolution of the initial input waveform, performing convolution layer on the spliced features, performing batch standardization and pooling operations, performing convolution and splicing on the current features and the features obtained in the second step after the operation is finished, and performing convolution layer, batch standardization and pooling operations on the obtained features; fifthly, flattening the final features for the convenience of classification operation, finally integrating the features by using 3 full connection layers and 2 exit layers and avoiding overfitting, and finally outputting the number of class labels of each input;
step 1.3, after the convolutional neural network model is constructed, selecting n waveforms for each of five types of waveforms to train;
and step 1.4, after the network model training is finished, inputting each input waveform into the trained convolutional neural network model, correspondingly outputting a class label, and finally removing the water surface waveform and the abnormal waveform.
3. The method of claim 1, wherein the method comprises the following steps: extracting the characteristics of the one-dimensional waveform by using wavelet scattering transformation, wherein the specific implementation mode is as follows;
the waveform signal is treated as a one-dimensional signal f, and the corresponding 0-order scattering coefficient S_0 is given by formula (1):

S_0 f = f * φ_J    (1)

where f is the input signal, φ_J is a low-pass filter, and * denotes the convolution operation; the high-frequency information lost when the original signal is averaged by the low-pass filter is recovered through the wavelet transform |f * ψ_{j1}|, where ψ_j is a wavelet filter and the subscript 1 indicates that the calculated quantity corresponds to the first layer; similarly, to obtain the translation-invariant part of |f * ψ_{j1}|, low-pass filtering and averaging are performed at scale 2^J, giving the first-order scattering coefficient S_1, namely:

S_1 f = |f * ψ_{j1}| * φ_J    (2)

this operation ensures that the output is translation invariant at spatial scale 2^J, but the high-frequency characteristics of the signal are lost at the same time; to avoid losing the high-frequency detail information, wavelet transforms are adopted to restore the high-frequency information of the signal: the mother wavelet ψ is scaled over 1 ≤ 2^j ≤ 2^J, and iterating in turn gives the 2nd- and 3rd-order scattering transforms:

S_2 f = ||f * ψ_{λ1}| * ψ_{λ2}| * φ_J    (3)

S_3 f = |||f * ψ_{λ1}| * ψ_{λ2}| * ψ_{λ3}| * φ_J    (4)

λ represents the path and the number of subscripts represents the path length; the computed scattering transform matrices are combined together and down-sampled, and the length of the matrix columns is kept consistent with the waveform length.
4. The method of claim 1, wherein the method comprises the following steps: the concrete implementation manner of the step four is as follows;
the waveform segment classification model SAPSO-ELM is constructed; the structure of the extreme learning machine ELM is similar to that of a single-hidden-layer neural network and comprises an input layer, a hidden layer and an output layer; for a standard extreme learning machine network model, assume N groups of arbitrary training samples (X, Y), where X = [x_1, x_2, …, x_N] are the inputs of the training samples, x_i = [x_{1i}, x_{2i}, …, x_{Di}]^T, i = 1, …, N, with D the feature matrix dimension, and Y = [y_1, y_2, …, y_N] are the output labels corresponding to the inputs, y_i = [y_{1i}, y_{2i}, …, y_{Mi}]^T, i = 1, …, N, with M the output matrix dimension; the output function of the network is defined as follows:

o_i = Σ_{p=1}^{L} β_p · g(ω_p · x_i + b_p),  i = 1, …, N    (5)

o_{si} = Σ_{p=1}^{L} β_{ps} · g(ω_p · x_i + b_p),  s = 1, …, M    (6)

where s = 1, …, M; ω_p, b_p ∈ (−1, 1); i = 1, …, N; β_{ps} is the connection weight between the s-th output layer neuron and the p-th hidden layer neuron; g(x) is the excitation function; ω is the input weight matrix, with ω_p = [ω_{p1}, ω_{p2}, …, ω_{pD}]; b_p is the threshold of the p-th hidden layer node; L is the number of hidden layer nodes, with L ≤ N;

the goal of the ELM algorithm is to minimize the difference between the model output and the corresponding target output, i.e.:

min Σ_{i=1}^{N} || o_i − y_i ||    (7)

where N is the number of samples; expressed in matrix form:

Hβ = Y    (8)

    | g(ω_1·x_1 + b_1)  …  g(ω_L·x_1 + b_L) |
H = |        ⋮          ⋱          ⋮        |  (N×L)    (9)
    | g(ω_1·x_N + b_1)  …  g(ω_L·x_N + b_L) |

β = [β_1^T, β_2^T, …, β_L^T]^T (L×M),  Y = [y_1^T, y_2^T, …, y_N^T]^T (N×M)    (10)

in order to increase the computation speed, the output weight of the ELM hidden layer is determined directly as

β = H* Y = (H^T H)^{−1} H^T Y    (11)

where H* = (H^T H)^{−1} H^T is the Moore-Penrose generalized inverse of the hidden layer output matrix H;
the ELM input weight matrix and the hidden layer threshold are given by a random algorithm, so the network structure is unstable, and a large number of hidden layer neurons is usually needed to achieve the desired prediction accuracy; therefore the ELM parameters, namely the input weight matrix ω and the hidden layer threshold b, are optimized with the SAPSO algorithm, which prevents the algorithm from falling into local optima during the search and improves its search accuracy.
5. The method of claim 4, wherein the method comprises the following steps: the specific implementation process of optimizing the ELM parameters by using the SAPSO algorithm is as follows;
the PSO is an intelligent optimization algorithm based on the random global optimization of population evolution, each particle is close to the best position found by itself and the best particle in the population through an iteration mode, so that the optimal solution is searched, and in each iteration, the particle updates the speed and the position according to the following formula:
v_{id}^{k+1} = ω · v_{id}^{k} + c_1 r_1 (p_{id} − x_{id}^{k}) + c_2 r_2 (g_{id} − x_{id}^{k})    (12)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}    (13)

ω = ω_max − (ω_max − ω_min) · k / K_max    (14)

in the formulas, the velocity v_{id} has value range [v_min, v_max]; c_1 and c_2 are learning factors; r_1 and r_2 are random numbers in the interval [0, 1]; the position x_i has value range [x_min, x_max]; p_{id} is the best position searched by the particle so far; g_{id} is the best position searched by the whole particle swarm so far; ω is the inertia weight, with ω_max and ω_min respectively the maximum and minimum values of the weight; k is the current iteration number; K_max is the maximum number of iterations;

the simulated annealing algorithm SA is used to judge whether to update the particle position:

ΔE = f_{k+1}(x_i) − f_k(x_i)    (15)

P = 1,              if ΔE < 0
P = exp(−ΔE / T_k), if ΔE ≥ 0    (16)

T_{k+1} = α · T_k    (17)

in the formulas, rand is a random number in (0, 1), and the updated position replaces the original position when P > rand; f(x_i) is the fitness value of the i-th particle; T is the temperature at the current iteration; α is the cooling rate;
the ELM algorithm based on the SAPSO optimization comprises the following steps:
step1, establishing an extreme learning machine neural network topological structure based on the SAPSO, setting the number of neurons and the number of hidden nodes, and selecting an activation function sig;
step2, initializing the population: setting the initial temperature T_0, the termination temperature T and the cooling rate α, and starting the annealing; randomly generating M particles;
step3, carrying out simulation training on the ELM according to formulas (8)-(11), comparing the test output with the actual output, calculating the fitness value f(x_i) of each particle, and finding the current optimal position p_id of each particle and the global optimal position g_id; Step4, judging according to formulas (15) and (16) whether the updated particle position replaces the original particle position, and if so, updating the system temperature according to formula (17); otherwise, the temperature is unchanged;
step5, updating the position and velocity of the particles according to formulas (12)-(14), calculating the fitness f(x_i) of each particle, and finding the current optimal position p_id of each particle and the global optimal position g_id;
Step6, judging whether the system reaches the termination temperature; if so, stopping iteration; otherwise, step3 is switched to continue iteration;
step7, g at the end of the iterationidI.e. optimal (ω, b), the optimal (ω, b) is introduced into the ELM network for training, the connection weight β is calculated according to equations (9) - (11), and the actual output matrix T is calculated according to equation (17).
6. The method of claim 1, wherein the method comprises the following steps: and step five, extracting the peak value of the target waveform segment by adopting maximum peak value detection, and performing cross validation on whether the extracted peak value is correct or not.
CN202110034193.2A 2021-01-12 2021-01-12 Method for extracting waveform signals of single-wavelength airborne sounding radar Active CN112859011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110034193.2A CN112859011B (en) 2021-01-12 2021-01-12 Method for extracting waveform signals of single-wavelength airborne sounding radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110034193.2A CN112859011B (en) 2021-01-12 2021-01-12 Method for extracting waveform signals of single-wavelength airborne sounding radar

Publications (2)

Publication Number Publication Date
CN112859011A true CN112859011A (en) 2021-05-28
CN112859011B CN112859011B (en) 2022-06-07

Family

ID=76002635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110034193.2A Active CN112859011B (en) 2021-01-12 2021-01-12 Method for extracting waveform signals of single-wavelength airborne sounding radar

Country Status (1)

Country Link
CN (1) CN112859011B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000275098A (en) * 1999-03-26 2000-10-06 Toshiba Corp Waveform signal analyzer
CN106527262A (en) * 2016-11-04 2017-03-22 合肥天讯亿达光电技术有限公司 Single wavelength laser radar monitoring system
WO2020232905A1 (en) * 2019-05-20 2020-11-26 平安科技(深圳)有限公司 Superobject information-based remote sensing image target extraction method, device, electronic apparatus, and medium
CN112001270A (en) * 2020-08-03 2020-11-27 南京理工大学 Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network
CN111880158A (en) * 2020-08-06 2020-11-03 中国人民解放军海军航空大学 Radar target detection method and system based on convolutional neural network sequence classification

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XUE JI et al.: "Island feature classification for single-wavelength airborne lidar bathymetry based on full-waveform parameters", Applied Optics *
宋沙磊: "Basic principles and key technologies of earth-observation multispectral lidar", China Excellent Master's and Doctoral Dissertations Full-text Database (Doctoral), Information Science and Technology series *
金鼎坚 et al.: "Large-scale application testing and evaluation of airborne lidar bathymetry systems: a case study of the Chinese coastal zone", Infrared and Laser Engineering *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114081462A (en) * 2021-11-19 2022-02-25 齐齐哈尔大学 Heart health monitoring system based on multi-dimensional physiological information
CN114236556A (en) * 2021-12-02 2022-03-25 桂林理工大学 Water depth measuring system of seamless integrated laser radar and unmanned ship
CN114924234A (en) * 2022-07-21 2022-08-19 中国人民解放军国防科技大学 Radar radiation source target signal detection method based on regional contrast
CN116609759A (en) * 2023-07-21 2023-08-18 自然资源部第一海洋研究所 Method and system for enhancing and identifying airborne laser sounding seabed weak echo
CN116609759B (en) * 2023-07-21 2023-10-31 自然资源部第一海洋研究所 Method and system for enhancing and identifying airborne laser sounding seabed weak echo
CN116761223A (en) * 2023-08-11 2023-09-15 深圳市掌锐电子有限公司 Method for realizing 4G radio frequency communication by using 5G baseband chip and vehicle-mounted radio frequency system
CN116761223B (en) * 2023-08-11 2023-11-10 深圳市掌锐电子有限公司 Method for realizing 4G radio frequency communication by using 5G baseband chip and vehicle-mounted radio frequency system
CN118071210A (en) * 2024-04-17 2024-05-24 成都理工大学 Ecological environment vulnerability assessment method combining CNN and PPM

Also Published As

Publication number Publication date
CN112859011B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN112859011B (en) Method for extracting waveform signals of single-wavelength airborne sounding radar
CN110533722B (en) Robot rapid repositioning method and system based on visual dictionary
CN110889324A (en) Thermal infrared image target identification method based on YOLO V3 terminal-oriented guidance
CN109146889B (en) Farmland boundary extraction method based on high-resolution remote sensing image
CN110866887A (en) Target situation fusion sensing method and system based on multiple sensors
Mallet et al. A marked point process for modeling lidar waveforms
CN110222767B (en) Three-dimensional point cloud classification method based on nested neural network and grid map
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN106845343B (en) Automatic detection method for optical remote sensing image offshore platform
CN112907520A (en) Single tree crown detection method based on end-to-end deep learning method
Shields et al. Towards adaptive benthic habitat mapping
CN114037836A (en) Method for applying artificial intelligence recognition technology to three-dimensional power transmission and transformation engineering measurement and calculation
CN113570005A (en) Long-distance ship type identification method based on airborne photon radar
CN116402851A (en) Infrared dim target tracking method under complex background
Park et al. Active-passive data fusion algorithms for seafloor imaging and classification from CZMIL data
CN109871907B (en) Radar target high-resolution range profile identification method based on SAE-HMM model
CN114120150A (en) Road target detection method based on unmanned aerial vehicle imaging technology
CN111832463A (en) Deep learning-based traffic sign detection method
CN116953702A (en) Rotary target detection method and device based on deduction paradigm
CN115861756A (en) Earth background small target identification method based on cascade combination network
CN114063063A (en) Geological disaster monitoring method based on synthetic aperture radar and point-like sensor
CN113379738A (en) Method and system for detecting and positioning epidemic trees based on images
Zhang et al. Exploiting Deep Matching and Underwater Terrain Images to Improve Underwater Localization Accuracy
Urmila et al. Processing of LiDAR for Traffic Scene Perception of Autonomous Vehicles
CN113971755B (en) All-weather sea surface target detection method based on improved YOLOV model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant