CN116338628A - Laser radar sounding method and device based on learning architecture and electronic equipment - Google Patents

Laser radar sounding method and device based on learning architecture and electronic equipment

Info

Publication number
CN116338628A
Authority
CN
China
Prior art keywords
echo signal
data
laser radar
signal position
waveform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310548929.7A
Other languages
Chinese (zh)
Other versions
CN116338628B (en)
Inventor
谌一夫
钱悦
乐源
吴临
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202310548929.7A priority Critical patent/CN116338628B/en
Publication of CN116338628A publication Critical patent/CN116338628A/en
Application granted granted Critical
Publication of CN116338628B publication Critical patent/CN116338628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Abstract

The invention relates to the field of laser radar detection, and provides a laser radar sounding method and device based on a learning architecture, and electronic equipment. The method comprises the following steps: acquiring first high-precision full-waveform laser radar data; determining a first water surface echo signal position and a first water bottom echo signal position; converting the first high-precision full-waveform laser radar data into first two-dimensional picture data; determining a trained network model; acquiring second high-precision full-waveform laser radar data, and determining corresponding second two-dimensional picture data from the second high-precision full-waveform laser radar data; inputting the second two-dimensional picture data into the network model, and outputting a second water surface echo signal position and a second water bottom echo signal position; and determining the corresponding water depth from the second water surface echo signal position and the second water bottom echo signal position. The technical scheme of the invention can accommodate complex laser echo signals, reduce time complexity, and improve the efficiency and accuracy of water depth measurement.

Description

Laser radar sounding method and device based on learning architecture and electronic equipment
Technical Field
The invention relates to the field of laser radar detection, in particular to a laser radar sounding method and device based on a learning architecture and electronic equipment.
Background
The ocean holds valuable natural resources and is an important support for future economic development. Ocean mapping and investigation are the basis for understanding the ocean and developing the ocean economy, and acquisition of submarine topography is one of the core fields of basic ocean mapping.
Marine surveying mainly includes shipborne measurement and remote sensing measurement. Traditional shipborne measurement has high cost and low efficiency. Water depth measurement by remote sensing mainly includes photogrammetry, hyperspectral methods and the like; photogrammetry places strict requirements on the water environment, while hyperspectral sounding has a shallow measurable depth range and large errors. Airborne laser radar sounding is an ocean sounding method that has developed vigorously in recent years: its coverage is large, its measurement time is short, and the topography of a water area can be acquired rapidly and accurately. The laser radar system of the airborne laser radar can emit high-power, narrow-pulse laser beams and record the distribution of echo signal energy over a time sequence, thereby obtaining complete full-waveform data. Airborne laser radar sounding is the process of resolving the full-waveform data of a water area to obtain the water surface and water bottom signals, and from them the water depth value. At present, the main steps of full-waveform laser radar data processing comprise four parts: effective waveform extraction, data filtering, model design and parameter optimization, and water depth calculation. In terms of model design and parameter optimization, full-waveform laser radar waveform analysis mainly relies on algorithms such as the numerical detection method, the Gaussian decomposition method and the deconvolution method. The numerical detection method is not suitable for complex laser echo signals, the Gaussian decomposition method has high time complexity and poor decomposition performance for shallow-water echoes, and the deconvolution method requires an extremely large amount of computation.
Disclosure of Invention
The problem addressed by the present invention is how to adapt to complex laser echo signals, reduce time complexity, and improve at least one of the efficiency and accuracy of water depth measurement.
In order to solve the problems, the invention provides a laser radar sounding method and device based on a learning architecture and electronic equipment.
In a first aspect, the present invention provides a laser radar sounding method based on a learning architecture, including:
acquiring first high-precision full-waveform laser radar data;
determining a corresponding first water surface echo signal position and a corresponding first water bottom echo signal position according to the first high-precision full-waveform laser radar data;
converting the first high-precision full-waveform laser radar data into first two-dimensional picture data;
performing network training by taking the first two-dimensional picture data, the first water surface echo signal position and the first water bottom echo signal position as training sets of a deep learning network, and determining a trained network model;
acquiring second high-precision full-waveform laser radar data, and determining corresponding second two-dimensional picture data according to the second high-precision full-waveform laser radar data;
inputting the second two-dimensional picture data into the network model, and outputting a second water surface echo signal position and a second water bottom echo signal position;
and determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position.
Optionally, the converting the first high-precision full-waveform laser radar data into first two-dimensional picture data includes:
carrying out Gramian angular difference field processing on the first high-precision full-waveform laser radar data based on a first formula, and determining the first two-dimensional picture data;
the first formula includes:
\[
\mathrm{GADF}=\begin{bmatrix}\sin(\phi_1-\phi_1)&\cdots&\sin(\phi_1-\phi_n)\\ \vdots&\ddots&\vdots\\ \sin(\phi_n-\phi_1)&\cdots&\sin(\phi_n-\phi_n)\end{bmatrix}
\]
wherein GADF represents the first two-dimensional picture data obtained after Gramian angular difference field processing, \(\phi_1\) represents the arccosine of the 1st normalized data point, arranged in time order, in the first high-precision full-waveform laser radar data, and \(\phi_n\) represents the arccosine of the nth normalized data point in the first high-precision full-waveform laser radar data.
Optionally, the deep learning network is a resnet_cbam network.
Optionally, the construction process of the Resnet_CBAM network includes:
adding a CBAM attention module in each original block and residual result in the Resnet-18 network to obtain a Resnet_CBAM network, wherein the CBAM attention module comprises a channel attention module and a space attention module;
the input of the Resnet_CBAM network is two-dimensional picture data, and the output is the position of a water surface echo signal and the position of a water bottom echo signal.
Optionally, a loss function in the resnet_cbam network adopts a mean square error function;
the formula of the mean square error function is:
\[
\mathrm{MSE}=\frac{1}{M}\sum_{j=1}^{M}\left[\left(\hat{s}_j-s_j\right)^{2}+\left(\hat{b}_j-b_j\right)^{2}\right]
\]
where MSE represents the mean square error function, \(\hat{s}_j\) represents the j-th predicted water surface signal position, \(s_j\) represents the true position of the j-th water surface signal, \(\hat{b}_j\) represents the j-th predicted water bottom signal position, \(b_j\) represents the true position of the j-th water bottom signal, and M represents the number of training batches.
Optionally, the acquiring the first high-precision full-waveform laser radar data includes:
and acquiring first laser radar sounding waveform data, performing effective waveform extraction and filtering on the first laser radar sounding waveform data, and determining the first high-precision full-waveform laser radar data.
Optionally, the determining the corresponding first water surface echo signal position and the first water bottom echo signal position according to the first high-precision full-waveform laser radar data includes:
and determining the first water surface echo signal position and the first water bottom echo signal position corresponding to the first high-precision full-waveform laser radar data by adopting a Gaussian decomposition method.
Optionally, the determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position includes:
determining a time difference between the second water surface echo signal position and the second water bottom echo signal position according to the second water surface echo signal position and the second water bottom echo signal position;
and determining the water depth according to the time difference.
In a second aspect, the present invention provides a laser radar sounding device based on learning architecture, including:
the data acquisition module is used for acquiring first high-precision full-waveform laser radar data;
the first data processing module is used for determining a corresponding first water surface echo signal position and a corresponding first water bottom echo signal position according to the first high-precision full-waveform laser radar data;
the second data processing module is used for converting the first high-precision full-waveform laser radar data into first two-dimensional picture data;
the network training module is used for carrying out network training by taking the first two-dimensional picture data, the first water surface echo signal position and the first water bottom echo signal position as training sets of a deep learning network, and determining a trained network model;
the network input determining module is used for acquiring second high-precision full-waveform laser radar data and determining corresponding second two-dimensional picture data according to the second high-precision full-waveform laser radar data;
the network output determining module is used for inputting the second two-dimensional picture data into the network model and outputting a second water surface echo signal position and a second water bottom echo signal position;
and the water depth determining module is used for determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position.
In a third aspect, the present invention provides an electronic device, the electronic device comprising a processor and a memory, the memory storing at least one instruction or at least one program, the at least one instruction or at least one program being loaded and executed by the processor to implement the learning architecture based lidar sounding method according to the first aspect.
The laser radar sounding method and device based on a learning architecture provided by the invention have the following beneficial effects: the first high-precision full-waveform laser radar data are converted into first two-dimensional picture data, which visualizes the time-series data while preserving the temporal correlation of the original data and avoiding information loss; the trained network model can adapt to complex laser echo signals, reduce time complexity, and improve the efficiency and accuracy of the output water surface and water bottom echo signal positions; and the corresponding water depth is determined from the second water surface echo signal position and the second water bottom echo signal position, improving the efficiency and accuracy of water depth measurement.
Drawings
Fig. 1 is a schematic flow chart of a laser radar sounding method based on a learning architecture according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of high-precision full-waveform lidar data;
FIG. 3 is a Gramian angular difference field data graph;
fig. 4 is a schematic structural diagram of a laser radar sounding device based on a learning architecture according to an embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
Fig. 1 is a schematic flow chart of a laser radar sounding method based on a learning architecture according to an embodiment of the present invention, where the method includes the following steps:
and step 101, acquiring first high-precision full-waveform laser radar data.
Specifically, first laser radar sounding waveform data are obtained through an airborne laser radar, and the first laser radar sounding waveform data are processed to obtain first high-precision full-waveform laser radar data.
In one possible implementation manner, the acquiring the first high-precision full-waveform laser radar data includes:
and acquiring first laser radar sounding waveform data, performing effective waveform extraction and filtering on the first laser radar sounding waveform data, and determining the first high-precision full-waveform laser radar data.
Specifically, after the first laser radar sounding waveform data is obtained through the airborne laser radar, effective waveform extraction is performed on the first laser radar sounding waveform data to obtain corresponding effective waveform data. The effective waveform data contains instrument system noise, environmental noise from the atmosphere and other environmental noise, and this noise must be removed first to ensure the accuracy and efficiency of the subsequent water depth determination. The data is smoothed with a Gaussian filter as shown in the following formula, where σ is the standard deviation of the Gaussian distribution and represents the degree of dispersion of the data:
\[
G(y)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{y^{2}}{2\sigma^{2}}}
\]
where y, G(y), π and e are symbols well known in the art, and their specific meanings are not repeated.
The effective waveform data is filtered to obtain the first high-precision full-waveform laser radar data.
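For illustration, a minimal sketch of this smoothing step in Python (assuming the waveform is a NumPy array of per-nanosecond echo intensities; the function name smooth_waveform and the default sigma are illustrative choices, not values fixed by the invention):

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_waveform(waveform: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    # Suppress instrument and environmental noise by 1D Gaussian filtering;
    # sigma is the standard deviation of the Gaussian kernel (larger = smoother).
    return gaussian_filter1d(waveform.astype(float), sigma=sigma)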
And 102, determining a corresponding first water surface echo signal position and a corresponding first water bottom echo signal position according to the first high-precision full-waveform laser radar data.
Specifically, the first high-precision full-waveform lidar data includes high-precision full-waveform lidar data of different depths. And determining a first water surface echo signal position and a first water bottom echo signal position corresponding to the first high-precision full-waveform laser radar data by adopting a Gaussian decomposition method.
Step 103, converting the first high-precision full-waveform laser radar data into first two-dimensional picture data.
Specifically, the first high-precision full-waveform laser radar data are one-dimensional time-series data. Converting the one-dimensional time-series data into two-dimensional picture data visualizes the time-series data while preserving the temporal correlation of the original data and avoiding information loss.
And 104, performing network training by taking the first two-dimensional image data, the first water surface echo signal position and the first water bottom echo signal position as training sets of a deep learning network, and determining a trained network model.
Specifically, the first two-dimensional picture data, the first water surface echo signal position and the first water bottom echo signal position are used as a training set of the network to carry out network training, and the trained network can output the corresponding water surface echo signal position and water bottom echo signal position according to the two-dimensional picture data.
After the trained network model is determined, data different from the training set is selected as a test set to test the network model.
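A minimal training-loop sketch under these assumptions (PyTorch; the model argument stands for the Resnet_CBAM network described below, and the batch size, learning rate and epoch count are illustrative defaults, not values disclosed by the patent):

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(model: nn.Module, images: torch.Tensor, positions: torch.Tensor,
                epochs: int = 50, lr: float = 1e-3) -> nn.Module:
    # images: (N, 1, H, W) GADF pictures; positions: (N, 2) surface/bottom positions
    loader = DataLoader(TensorDataset(images, positions), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # mean square error on the two regressed positions
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model

A held-out set of waveforms, converted in the same way, can then be passed through the trained model to evaluate it, as described above.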
And 105, acquiring second high-precision full-waveform laser radar data, and determining corresponding second two-dimensional picture data according to the second high-precision full-waveform laser radar data.
Specifically, the second high-precision full-waveform laser radar data are the target high-precision full-waveform laser radar data. The corresponding second two-dimensional picture data are determined from the second high-precision full-waveform laser radar data in the same way that the first high-precision full-waveform laser radar data are converted into the first two-dimensional picture data in step 103: second laser radar sounding waveform data are obtained with the airborne laser radar, effective waveform extraction and filtering are performed on the second laser radar sounding waveform data, and the second high-precision full-waveform laser radar data are determined. The second high-precision full-waveform laser radar data are one-dimensional time-series data; converting them into two-dimensional picture data visualizes the time-series data while preserving the temporal correlation of the original data and avoiding information loss.
And 106, inputting the second two-dimensional image data into the network model, and outputting a second water surface echo signal position and a second water bottom echo signal position.
And taking the second two-dimensional picture data as the input of the network, and outputting a second water surface echo signal position and a second water bottom echo signal position.
And step 107, determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position.
Specifically, the second water surface echo signal position is a target water surface echo signal position, and the second water bottom echo signal position is a target water bottom echo signal position.
In summary, in the embodiment of the invention, the first high-precision full-waveform laser radar data are converted into first two-dimensional picture data, realizing visualization of the time-series data while preserving the temporal correlation of the original data and avoiding information loss; the trained network model can adapt to complex laser echo signals, reduce time complexity, and improve the efficiency and accuracy of the output water surface and water bottom echo signal positions; and the corresponding water depth is determined from the second water surface echo signal position and the second water bottom echo signal position, thereby improving the efficiency and accuracy of water depth measurement.
In one possible implementation manner, the determining the corresponding first water surface echo signal position and the first water bottom echo signal position according to the first high-precision full waveform laser radar data includes:
and determining the first water surface echo signal position and the first water bottom echo signal position corresponding to the first high-precision full-waveform laser radar data by adopting a Gaussian decomposition method.
Specifically, because the Gaussian decomposition method is prior art, the specific process of determining the first water surface echo signal position and the first water bottom echo signal position corresponding to the first high-precision full-waveform laser radar data by the Gaussian decomposition method is not described here.
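For illustration only, a generic two-component Gaussian decomposition could look like the following sketch (a standard curve-fitting formulation, not the specific procedure of the invention; the initial guesses in p0 are assumptions):

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    # Sum of a water-surface component and a water-bottom component.
    return (a1 * np.exp(-(t - mu1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(t - mu2) ** 2 / (2 * s2 ** 2)))

def echo_positions(waveform: np.ndarray):
    # Return (surface_position, bottom_position) in sample (nanosecond) units.
    t = np.arange(len(waveform), dtype=float)
    p0 = [waveform.max(), float(waveform.argmax()), 3.0,             # surface guess
          waveform.max() / 4, float(waveform.argmax()) + 30.0, 5.0]  # bottom guess
    params, _ = curve_fit(two_gaussians, t, waveform, p0=p0, maxfev=10000)
    mu1, mu2 = params[1], params[4]
    return min(mu1, mu2), max(mu1, mu2)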
In one possible implementation, the converting the first high-precision full-waveform lidar data into first two-dimensional picture data includes:
carrying out Gramian angular difference field processing on the first high-precision full-waveform laser radar data based on a first formula, and determining the first two-dimensional picture data;
the first formula includes:
\[
\mathrm{GADF}=\begin{bmatrix}\sin(\phi_1-\phi_1)&\cdots&\sin(\phi_1-\phi_n)\\ \vdots&\ddots&\vdots\\ \sin(\phi_n-\phi_1)&\cdots&\sin(\phi_n-\phi_n)\end{bmatrix}
\]
wherein GADF represents the first two-dimensional picture data obtained after Gramian angular difference field processing, \(\phi_1\) represents the arccosine of the 1st normalized data point, arranged in time order, in the first high-precision full-waveform laser radar data, and \(\phi_n\) represents the arccosine of the nth normalized data point in the first high-precision full-waveform laser radar data. The corresponding second two-dimensional picture data are determined from the second high-precision full-waveform laser radar data according to the same method used to determine the first two-dimensional picture data.
Specifically, the first high-precision full-waveform laser radar data are one-dimensional time-series data, which are converted into the first two-dimensional picture data. A convolutional neural network is formed by alternately stacking convolutional layers, pooling layers and fully connected layers; it is often used for picture and video analysis tasks but does not handle raw time series well. The method therefore converts the one-dimensional time-series data into two-dimensional picture data through the Gramian Angular Difference Field (GADF) method, with the following specific steps:
Assume the time series X = {x_1, x_2, x_3, …, x_n}, where x_i represents the echo intensity at the i-th nanosecond and n represents the number of samples, so the series contains n real values. All values of X are scaled to the interval [0, 1] by the normalization shown in the following formula:
\[
\tilde{x}_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}
\]
where min(x) represents the minimum of {x_1, x_2, x_3, …, x_n}, max(x) represents the maximum of {x_1, x_2, x_3, …, x_n}, and \(\tilde{x}_i\) is the result of normalizing the time series.
The scaled time series is denoted \(\tilde{X}=\{\tilde{x}_1,\tilde{x}_2,\dots,\tilde{x}_n\}\), and each \(\tilde{x}_i\) is then mapped to polar coordinates:
\[
\phi_i=\arccos(\tilde{x}_i),\quad 0\le\tilde{x}_i\le 1,\qquad r_i=\frac{t_i}{N},\quad i=1,2,3,\dots,n
\]
where \(\phi_i\) represents the angle of \(\tilde{x}_i\) in the polar coordinate system, \(r_i\) represents the polar radius corresponding to timestamp i, and N serves to adjust the span of the polar coordinate system.
The angle range of the arccosine function corresponding to the normalized data is [0, π/2]. This polar-coordinate-based representation provides a new view of the time series: the time scale of the sequence maps to radial variation of the polar coordinate system over time, while amplitude variation maps to angular variation of the polar coordinate system.
The GADF is defined as follows:
\[
\mathrm{GADF}=\begin{bmatrix}\sin(\phi_1-\phi_1)&\cdots&\sin(\phi_1-\phi_n)\\ \vdots&\ddots&\vdots\\ \sin(\phi_n-\phi_1)&\cdots&\sin(\phi_n-\phi_n)\end{bmatrix}
\]
wherein \(\phi_1\) represents the arccosine of the 1st normalized data point in the first high-precision full-waveform laser radar data, that is, the normalized arccosine of the 1st data point in time order, and \(\phi_n\) represents the arccosine of the nth normalized data point in the first high-precision full-waveform laser radar data, that is, the normalized arccosine of the nth data point in time order.
As shown in fig. 2 and fig. 3, fig. 2 is a schematic diagram of high-precision full-waveform laser radar data and fig. 3 is the corresponding Gramian angular difference field data graph. The two more obvious peaks in fig. 2 correspond to the band-shaped areas at the corresponding positions in the Gramian angular difference field data graph of fig. 3; the larger the peak, the brighter the band-shaped area and the more obvious the feature.
According to the embodiment of the invention, the one-dimensional time-series data are converted into two-dimensional picture data using the Gramian angular difference field method, and the GADF data image highlights the positions of the water surface and water bottom echo signals, realizing visualization of the time-series data while the converted data preserve the temporal correlation of the original data and avoid information loss.
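A minimal NumPy sketch of this conversion, following the normalization, polar mapping and GADF definition above (the function name waveform_to_gadf is illustrative; sin(φ_i − φ_j) is expanded with the angle-difference identity rather than built element by element):

import numpy as np

def waveform_to_gadf(x: np.ndarray) -> np.ndarray:
    # Min-max normalization of the 1D waveform to [0, 1]
    x_tilde = (x - x.min()) / (x.max() - x.min())
    # Polar encoding: the angle is the arccosine of the normalized value
    phi = np.arccos(np.clip(x_tilde, 0.0, 1.0))
    # GADF[i, j] = sin(phi_i - phi_j) = sin(phi_i)cos(phi_j) - cos(phi_i)sin(phi_j)
    sin_phi, cos_phi = np.sin(phi), np.cos(phi)
    return np.outer(sin_phi, cos_phi) - np.outer(cos_phi, sin_phi)

The resulting n×n array can be rendered or stacked as a single-channel picture and fed to the network.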
In one possible implementation, the deep learning network is a resnet_cbam network.
In one possible implementation manner, the construction process of the Resnet_CBAM network includes:
adding a CBAM attention module in each original block and residual result in the Resnet-18 network to obtain a Resnet_CBAM network, wherein the CBAM attention module comprises a channel attention module and a space attention module;
the input of the Resnet_CBAM network is two-dimensional picture data, and the output is the position of a water surface echo signal and the position of a water bottom echo signal.
After the first two-dimensional picture data are determined, the Resnet_CBAM network is selected for network training. The Resnet-18 network model is used; the 18 designates 18 layers with weights, including convolutional layers and fully connected layers but excluding pooling and BN layers. The GADF picture does not require attention to global information of the whole picture; instead, the focus is on local features of the water surface echo and the water bottom echo. Resnet-18 can gradually capture the expected local structure and, compared with other network structures, can extract local features more efficiently and accurately, thereby improving the efficiency and accuracy of the second water surface echo signal position and the second water bottom echo signal position output by the network, and further improving the efficiency and accuracy of the subsequently determined water depth.
In order to make the network focus on the meaningful information and target position information in the picture, such as the waveform's water surface and water bottom echo signal areas, a CBAM attention module is introduced into each original block and residual result of the Resnet, so that features pass sequentially through a Channel Attention Module (CAM) and a Spatial Attention Module (SAM). The CAM passes the input feature map through two parallel branches, a maximum pooling (MaxPool) layer and an average pooling (AvgPool) layer, which reduce the spatial size of the feature map to 1×1; the channel number is then compressed and expanded back to the original channel number, and two activated results are obtained through a ReLU activation function. The two outputs are added element by element, passed through a sigmoid activation function, and the result is multiplied with the original feature map, restoring the original feature map size. The SAM takes the CAM output and obtains two feature maps through maximum pooling and average pooling, splices them, reduces them to a 1-channel feature map by convolution, passes the result through a sigmoid function, and finally multiplies it with the original feature map to recover the original channel size. Meanwhile, the fully connected (FC) layer of the Resnet is replaced with a new FC layer that matches the prediction target. The attention mechanism gives the neural network a focusing capability and is combined with the transfer-learning model. The two sub-modules of CBAM, the Channel Attention Module (CAM) and the Spatial Attention Module (SAM), perform channel attention and spatial attention separately, which reduces the number of parameters, improves computation speed, and allows the module to be embedded into the network in a plug-and-play manner.
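A compact PyTorch sketch of the CBAM module described here, with channel attention followed by spatial attention (the reduction ratio of 16 and the 7×7 spatial kernel are common defaults, not values specified by the patent):

import torch
from torch import nn
from torch.nn import functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both the max-pooled and the average-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return x * torch.sigmoid(avg + mx)   # reweight channels, keep the original size

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Concatenate channel-wise average and max maps, convolve down to 1 channel
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class CBAM(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.cam = ChannelAttention(channels)
        self.sam = SpatialAttention()

    def forward(self, x):
        return self.sam(self.cam(x))

In a Resnet-18 backbone, a CBAM(planes) instance could wrap the output of each basic block and its residual sum, and the final fully connected layer could be replaced by nn.Linear(512, 2) to regress the water surface and water bottom echo positions.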
In one possible implementation, the loss function in the resnet_cbam network employs a mean square error function;
the formula of the mean square error function is:
\[
\mathrm{MSE}=\frac{1}{M}\sum_{j=1}^{M}\left[\left(\hat{s}_j-s_j\right)^{2}+\left(\hat{b}_j-b_j\right)^{2}\right]
\]
where MSE represents the mean square error function, \(\hat{s}_j\) represents the j-th predicted water surface signal position, \(s_j\) represents the true position of the j-th water surface signal, \(\hat{b}_j\) represents the j-th predicted water bottom signal position, \(b_j\) represents the true position of the j-th water bottom signal, and M represents the number of training batches.
In order to make the predicted value match the true value more closely, the MSE (mean square error) is selected as the loss function in the Resnet_CBAM network architecture. In the present network, it represents the average of the sums of squared differences between the true water surface and water bottom echo signal positions and the water surface and water bottom echo signal positions predicted by the network; the smaller this average, the better the predicted values match the true values.
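A minimal sketch of this loss, assuming the network outputs a (M, 2) tensor holding the [surface, bottom] positions for each sample in the batch:

import torch

def position_mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred, target: shape (M, 2); sum the squared surface and bottom errors per
    # sample, then average over the batch.
    return ((pred - target) ** 2).sum(dim=1).mean()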
In a possible implementation manner, the determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position includes:
determining a time difference between the second water surface echo signal position and the second water bottom echo signal position according to the second water surface echo signal position and the second water bottom echo signal position;
and determining the water depth according to the time difference.
Specifically, the water depth is calculated using the following formula, where C represents the propagation speed of light in vacuum, T represents the time difference between the water surface echo and the water bottom echo, β represents the refraction angle of the laser after entering the water surface, \(n_{\mathrm{air}}\) represents the refractive index of the laser in air, \(n_{\mathrm{water}}\) represents the refractive index of the laser in the water body, and D represents the water depth:
\[
D=\frac{n_{\mathrm{air}}\,C\,T\cos\beta}{2\,n_{\mathrm{water}}}
\]
The time difference between the water surface echo signal and the water bottom echo signal is determined from the second water surface echo signal position and the second water bottom echo signal position, and the water depth is then determined through the above water depth formula.
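A minimal sketch of this final conversion, assuming the echo positions are expressed as nanosecond sample indices; the refraction angle and refractive indices shown are common nominal assumptions, not figures given by the patent:

import math

C = 299_792_458.0  # propagation speed of light in vacuum, m/s

def water_depth(surface_pos_ns: float, bottom_pos_ns: float,
                beta_deg: float = 15.0, n_air: float = 1.000293,
                n_water: float = 1.33) -> float:
    # Time difference between water surface and water bottom echoes, in seconds
    t_diff = (bottom_pos_ns - surface_pos_ns) * 1e-9
    beta = math.radians(beta_deg)  # refraction angle of the laser in the water
    return n_air * C * t_diff * math.cos(beta) / (2.0 * n_water)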
Fig. 4 is a schematic structural diagram of a laser radar sounding device based on a learning architecture according to an embodiment of the present invention, as shown in fig. 4, the device 300 includes:
a data acquisition module 301, configured to acquire first high-precision full-waveform laser radar data;
a first data processing module 302, configured to determine a corresponding first water surface echo signal position and a first water bottom echo signal position according to the first high-precision full-waveform laser radar data;
a second data processing module 303, configured to convert the first high-precision full-waveform laser radar data into first two-dimensional picture data;
the network training module 304 is configured to perform network training by using the first two-dimensional image data, the first water surface echo signal position, and the first water bottom echo signal position as a training set of a deep learning network, and determine a trained network model;
the network input determining module 305 is configured to obtain second high-precision full-waveform laser radar data, and determine corresponding second two-dimensional image data according to the second high-precision full-waveform laser radar data;
the network output determining module 306 is configured to input the second two-dimensional image data into the network model, and output a second water surface echo signal position and a second water bottom echo signal position;
the water depth determining module 307 is configured to determine a corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position.
In yet another embodiment of the present invention, an electronic device is provided, where the electronic device includes a processor and a memory, where at least one instruction or at least one program is stored in the memory, where the at least one instruction or at least one program is loaded and executed by the processor to implement a laser radar sounding method based on a learning architecture according to an embodiment of the present invention.
Although the present disclosure is disclosed above, the scope of the present disclosure is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the disclosure, and these changes and modifications will fall within the scope of the disclosure.

Claims (10)

1. The laser radar sounding method based on the learning architecture is characterized by comprising the following steps of:
acquiring first high-precision full-waveform laser radar data;
determining a corresponding first water surface echo signal position and a corresponding first water bottom echo signal position according to the first high-precision full-waveform laser radar data;
converting the first high-precision full-waveform laser radar data into first two-dimensional picture data;
performing network training by taking the first two-dimensional picture data, the first water surface echo signal position and the first water bottom echo signal position as training sets of a deep learning network, and determining a trained network model;
acquiring second high-precision full-waveform laser radar data, and determining corresponding second two-dimensional picture data according to the second high-precision full-waveform laser radar data;
inputting the second two-dimensional picture data into the network model, and outputting a second water surface echo signal position and a second water bottom echo signal position;
and determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position.
2. The learning architecture-based lidar sounding method of claim 1, wherein the converting the first high-precision full-waveform lidar data into first two-dimensional picture data comprises:
carrying out Gramian angular difference field processing on the first high-precision full-waveform laser radar data based on a first formula, and determining the first two-dimensional picture data;
the first formula includes:
\[
\mathrm{GADF}=\begin{bmatrix}\sin(\phi_1-\phi_1)&\cdots&\sin(\phi_1-\phi_n)\\ \vdots&\ddots&\vdots\\ \sin(\phi_n-\phi_1)&\cdots&\sin(\phi_n-\phi_n)\end{bmatrix}
\]
wherein GADF represents the first two-dimensional picture data obtained after Gramian angular difference field processing, \(\phi_1\) represents the arccosine of the 1st normalized data point, arranged in time order, in the first high-precision full-waveform laser radar data, and \(\phi_n\) represents the arccosine of the nth normalized data point in the first high-precision full-waveform laser radar data.
3. The learning architecture-based lidar sounding method of claim 1, wherein the deep learning network is a resnet_cbam network.
4. The learning architecture-based lidar sounding method of claim 3, wherein the constructing process of the resnet_cbam network comprises:
adding a CBAM attention module in each original block and residual result in the Resnet-18 network to obtain a Resnet_CBAM network, wherein the CBAM attention module comprises a channel attention module and a space attention module;
the input of the Resnet_CBAM network is two-dimensional picture data, and the output is the position of a water surface echo signal and the position of a water bottom echo signal.
5. The learning architecture-based lidar sounding method of claim 4, wherein the loss function in the resnet_cbam network employs a mean square error function;
the formula of the mean square error function is:
\[
\mathrm{MSE}=\frac{1}{M}\sum_{j=1}^{M}\left[\left(\hat{s}_j-s_j\right)^{2}+\left(\hat{b}_j-b_j\right)^{2}\right]
\]
where MSE represents the mean square error function, \(\hat{s}_j\) represents the j-th predicted water surface signal position, \(s_j\) represents the true position of the j-th water surface signal, \(\hat{b}_j\) represents the j-th predicted water bottom signal position, \(b_j\) represents the true position of the j-th water bottom signal, and M represents the number of training batches.
6. The learning architecture-based lidar sounding method of claim 1, wherein the acquiring the first high-precision full-waveform lidar data comprises:
and acquiring first laser radar sounding waveform data, performing effective waveform extraction and filtering on the first laser radar sounding waveform data, and determining the first high-precision full-waveform laser radar data.
7. The learning architecture-based lidar sounding method of claim 1, wherein the determining the corresponding first water surface echo signal location and first water bottom echo signal location from the first high-precision full-waveform lidar data comprises:
and determining the first water surface echo signal position and the first water bottom echo signal position corresponding to the first high-precision full-waveform laser radar data by adopting a Gaussian decomposition method.
8. The learning architecture-based lidar sounding method of claim 1, wherein the determining the corresponding water depth from the second water surface echo signal location and the second water bottom echo signal location comprises:
determining a time difference between the second water surface echo signal position and the second water bottom echo signal position according to the second water surface echo signal position and the second water bottom echo signal position;
and determining the water depth according to the time difference.
9. The laser radar sounding device based on the learning architecture is characterized by comprising:
the data acquisition module is used for acquiring first high-precision full-waveform laser radar data;
the first data processing module is used for determining a corresponding first water surface echo signal position and a corresponding first water bottom echo signal position according to the first high-precision full-waveform laser radar data;
the second data processing module is used for converting the first high-precision full-waveform laser radar data into first two-dimensional picture data;
the network training module is used for carrying out network training by taking the first two-dimensional picture data, the first water surface echo signal position and the first water bottom echo signal position as training sets of a deep learning network, and determining a trained network model;
the network input determining module is used for acquiring second high-precision full-waveform laser radar data and determining corresponding second two-dimensional picture data according to the second high-precision full-waveform laser radar data;
the network output determining module is used for inputting the second two-dimensional picture data into the network model and outputting a second water surface echo signal position and a second water bottom echo signal position;
and the water depth determining module is used for determining the corresponding water depth according to the second water surface echo signal position and the second water bottom echo signal position.
10. An electronic device comprising a processor and a memory, wherein the memory stores at least one instruction or at least one program, and the at least one instruction or the at least one program is loaded and executed by the processor to implement the learning architecture-based lidar sounding method of any of claims 1-8.
CN202310548929.7A 2023-05-16 2023-05-16 Laser radar sounding method and device based on learning architecture and electronic equipment Active CN116338628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310548929.7A CN116338628B (en) 2023-05-16 2023-05-16 Laser radar sounding method and device based on learning architecture and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310548929.7A CN116338628B (en) 2023-05-16 2023-05-16 Laser radar sounding method and device based on learning architecture and electronic equipment

Publications (2)

Publication Number Publication Date
CN116338628A true CN116338628A (en) 2023-06-27
CN116338628B CN116338628B (en) 2023-09-15

Family

ID=86880662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310548929.7A Active CN116338628B (en) 2023-05-16 2023-05-16 Laser radar sounding method and device based on learning architecture and electronic equipment

Country Status (1)

Country Link
CN (1) CN116338628B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117093918A (en) * 2023-10-18 2023-11-21 成都信息工程大学 Overlapping spike identification method based on Gellam angle, field and CBAM-Resnet34

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058254A (en) * 2019-04-22 2019-07-26 中国科学院上海光学精密机械研究所 Pulsed laser ranging echo moment calculation method, system and terminal
CN110929811A (en) * 2019-10-21 2020-03-27 北京理工大学 Deep learning method for acquiring high-resolution data of full-waveform laser radar
WO2020237693A1 (en) * 2019-05-31 2020-12-03 华南理工大学 Multi-source sensing method and system for water surface unmanned equipment
US20210003668A1 (en) * 2020-05-26 2021-01-07 China University Of Geosciences, Wuhan Methods and devices for correcting underwater photon displacement and for depth sounding with single-photon lidar
WO2021154037A1 (en) * 2020-01-31 2021-08-05 주식회사 유진로봇 3d lidar-based target object recognition method, device, and moving body using same
CN113534096A (en) * 2021-07-19 2021-10-22 东莞理工学院 LiDAR signal geometric feature extraction method and system based on spline function
CN113608195A (en) * 2021-10-08 2021-11-05 中国地质大学(武汉) Laser radar full waveform data decomposition sounding method and device and electronic equipment
CN115661627A (en) * 2022-10-24 2023-01-31 哈尔滨工程大学 Single-beam underwater target identification method based on GAF-D3Net

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058254A (en) * 2019-04-22 2019-07-26 中国科学院上海光学精密机械研究所 Pulsed laser ranging echo moment calculation method, system and terminal
WO2020237693A1 (en) * 2019-05-31 2020-12-03 华南理工大学 Multi-source sensing method and system for water surface unmanned equipment
CN110929811A (en) * 2019-10-21 2020-03-27 北京理工大学 Deep learning method for acquiring high-resolution data of full-waveform laser radar
WO2021154037A1 (en) * 2020-01-31 2021-08-05 주식회사 유진로봇 3d lidar-based target object recognition method, device, and moving body using same
US20210003668A1 (en) * 2020-05-26 2021-01-07 China University Of Geosciences, Wuhan Methods and devices for correcting underwater photon displacement and for depth sounding with single-photon lidar
CN113534096A (en) * 2021-07-19 2021-10-22 东莞理工学院 LiDAR signal geometric feature extraction method and system based on spline function
CN113608195A (en) * 2021-10-08 2021-11-05 中国地质大学(武汉) Laser radar full waveform data decomposition sounding method and device and electronic equipment
CN115661627A (en) * 2022-10-24 2023-01-31 哈尔滨工程大学 Single-beam underwater target identification method based on GAF-D3Net

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANTOINE COLLIN et al.: "Predicting Species Diversity of Benthic Communities within Turbid Nearshore Using Full-Waveform Bathymetric LiDAR and Machine Learners", SPRINGER LINK, pages 1-16 *
周丰年 et al.: "Construction of an airborne laser bathymetry bias model based on a neural network" (基于神经网络的机载激光测深偏差模型构建), Hydrographic Surveying and Charting (海洋测绘), vol. 42, no. 3, pages 5-8 *
李由之 et al.: "Joint bathymetry method using multi-channel waveform data from a dual-frequency lidar" (双频激光雷达多通道波形数据联合测深方法), Hydrographic Surveying and Charting (海洋测绘), vol. 40, no. 02, pages 47-51 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117093918A (en) * 2023-10-18 2023-11-21 成都信息工程大学 Overlapping spike identification method based on Gellam angle, field and CBAM-Resnet34

Also Published As

Publication number Publication date
CN116338628B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
Tuell et al. Overview of the coastal zone mapping and imaging lidar (CZMIL): a new multisensor airborne mapping system for the US Army Corps of Engineers
CN110111345B (en) Attention network-based 3D point cloud segmentation method
CN116338628B (en) Laser radar sounding method and device based on learning architecture and electronic equipment
Cao et al. An active-passive fusion strategy and accuracy evaluation for shallow water bathymetry based on ICESat-2 ATLAS laser point cloud and satellite remote sensing imagery
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
Wawrzyniak et al. MSIS image postioning in port areas with the aid of comparative navigation methods
CN115423968B (en) Power transmission channel optimization method based on point cloud data and live-action three-dimensional model
CN107515253A (en) The non-contact quantitative Diagnosis method of girder construction defect based on convolutional neural networks
CN113920255B (en) High-efficient mapping system based on point cloud data
CN115965862A (en) SAR ship target detection method based on mask network fusion image characteristics
CN111401180A (en) Neural network recognition model training method and device, server and storage medium
CN110569387B (en) Radar-image cross-modal retrieval method based on depth hash algorithm
CN117034077A (en) CSI positioning method, device and equipment based on migration learning and GASF
CN108399650B (en) Chlorophyll content prediction method
CN114066795A (en) DF-SAS high-low frequency sonar image fine registration fusion method
CN113702923A (en) Radar clutter simulation method and electronic equipment
CN114037836A (en) Method for applying artificial intelligence recognition technology to three-dimensional power transmission and transformation engineering measurement and calculation
CN117217103B (en) Satellite-borne SAR sea clutter generation method and system based on multi-scale attention mechanism
CN105606224A (en) Underwater heat source detection and location method based on water body hierarchical filtering
CN115983141B (en) Deep learning-based method, medium and system for inverting wave height of ocean waves
CN113534085B (en) Sea surface wind speed and effective wave height joint inversion method of interference imaging altimeter
CN112102394B (en) Remote sensing image ship size integrated extraction method based on deep learning
CN115616485A (en) Multi-vibration-source underwater effective positioning method based on Stockwell transformation time-frequency diagram
Chen et al. A new method for retrieve significant wave height using deep learning
CN116519807A (en) Identification model establishment method and identification method for picosecond ultrasonic measurement signal characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant