CN115685170A - Active sonar target echo detection method based on reinforcement learning - Google Patents


Info

Publication number
CN115685170A
CN115685170A · Application CN202310004992.4A
Authority
CN
China
Prior art keywords
value
signal
active sonar
echo
data
Prior art date
Legal status
Granted
Application number
CN202310004992.4A
Other languages
Chinese (zh)
Other versions
CN115685170B (en)
Inventor
张艳 (Zhang Yan)
王振 (Wang Zhen)
Current Assignee
Qingdao Guoshu Information Technology Co ltd
Original Assignee
Qingdao Guoshu Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Guoshu Information Technology Co ltd filed Critical Qingdao Guoshu Information Technology Co ltd
Priority to CN202310004992.4A priority Critical patent/CN115685170B/en
Publication of CN115685170A publication Critical patent/CN115685170A/en
Application granted granted Critical
Publication of CN115685170B publication Critical patent/CN115685170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 — Assessment of water resources

Abstract

The invention belongs to the technical field of underwater acoustic engineering and discloses an active sonar target echo detection method based on reinforcement learning. The method comprises the following steps: acquiring historical state data of active sonar signal echoes; quantizing and standardizing the historical state data and then performing gray-value processing, so that the features of the sonar signal are embedded in the generated gray-scale images; treating each gray-scale image as one group of historical state data and making a label for each group; establishing a reward-value system in which the reward value is calculated from the correlation of the signal power; inputting the historical state data into a reinforcement learning model for training; and using the trained reinforcement learning model to judge the state of incoming active sonar echo data and locate the corresponding target echo position. By processing the received active sonar signal with reinforcement learning, the invention detects whether a target echo is present and improves the accuracy and robustness of detection.

Description

Active sonar target echo detection method based on reinforcement learning
Technical Field
The invention belongs to the technical field of underwater acoustic engineering, and relates to an active sonar target echo detection method based on reinforcement learning.
Background
Active sonar is an underwater device that can emit sound waves and acquire information by processing echoes reflected from objects. Currently, active sonar is widely applied to the fields of marine surveying and mapping, marine fishery, military and the like.
Due to the complex and variable marine environment, the marine acoustic channel has time-varying and space-varying characteristics. When the echo reflected by an object reaches the receiving end through the ocean acoustic channel, it is severely distorted, which makes detection and processing of the echo signal difficult.
To achieve high resolution on a target, active sonar generally emits high-frequency sound waves.
However, high-frequency sound waves are attenuated far more severely in seawater than low-frequency ones, so the amplitude of the received active sonar signal is very small and easily drowned out by ocean noise, reverberation and the like.
In addition, receiving high-frequency acoustic signals requires a higher sampling rate, which correspondingly increases the difficulty of real-time processing.
The advent of intelligent algorithms has made the detection and processing of echo signals feasible. However, because the echo signal changes with the environment, its echo characteristics may become blurred.
Although fuzzy logic can cope with arbitrarily complicated changes in the object environment, its inference becomes very complicated and hard to debug as the number of input and output variables grows.
Disclosure of Invention
The invention aims to provide an active sonar target echo detection method based on reinforcement learning, which processes an active sonar receiving signal through the reinforcement learning method to detect whether a target echo exists or not so as to improve the accuracy and robustness of detection.
In order to achieve the purpose, the invention adopts the following technical scheme:
the active sonar target echo detection method based on reinforcement learning comprises the following steps:
step 1, acquiring historical state data of an active sonar signal echo;
firstly, quantizing and standardizing historical state data, then segmenting the preprocessed signals and processing gray values, wherein the features of the sonar signals are hidden in a generated gray map;
each generated gray scale map is a group of historical state data; making labels for each group of historical state data to indicate the positions of echoes of the historical state data, and taking each group of historical state data added with the labels as a group of training data;
step 2, establishing an incentive value system, and calculating an incentive value according to the correlation degree of the power of the signal;
step 3, inputting each group of historical state data into a reinforcement learning model for preliminary decision making;
inputting each group of historical state data and preliminary decision data into a pre-established convolution network model in the training process, extracting the characteristics of the active sonar signals by the convolution network model, and quantizing the characteristics of the signals to obtain state variation and reward values;
the state quantity change value is the difference value between the current state data and the next state data; updating a data table structure according to each group of historical state data and the corresponding preliminary decision data, state variation and reward value;
the data table structure is used for collecting the state of each step when the reinforcement learning model is used for learning, and judging the executed action through table look-up operation when the trained reinforcement learning model is used for doing the action;
updating an estimation network by minimizing the difference value between the value of a preset target and the output value, constructing a target function by using the difference value between the current value and the previous value, and updating the weight of the estimation network by using a gradient descent method;
the preset target is the minimum distance from the echo of the target, and the output is the actual distance from the target;
and 4, using the trained convolution network model and the data table structure to judge the state of the input active sonar signal echo data until a target echo position corresponding to the active sonar signal echo data is found.
The invention has the following advantages:
as described above, the present invention relates to an active sonar target echo detection method based on reinforcement learning. The detection method processes the active sonar receiving signals by using a reinforcement learning method, and detects whether the target echo exists, so that the detection accuracy and robustness are improved. The method of the invention uses a convolution to extract the characteristics, so that the target echo detection is more accurate, and simultaneously, the capability of the method is more generalized through the addition of a large amount of data sets, and the method can detect echo signals in various states. The introduction of reinforcement learning is equivalent to adding a brain to the judgment of signal echoes, wherein the action utility of Q is used for evaluating the quality of a certain action taken in a specific state, the system is guided to search according to strategies by using the Markov property and only using the next step information, the state value is updated at each step of search, the accurate characteristics are extracted, whether the target echoes are judged by the brain of a machine, the state prompt of the next step is given, and the judgment efficiency of the target echoes is improved.
Drawings
Fig. 1 is a flow chart of an active sonar target echo detection method based on reinforcement learning in the embodiment of the present invention.
Fig. 2 is a flow chart of signal processing according to an embodiment of the present invention.
FIG. 3 is a diagram of a network architecture of a reinforcement learning model according to an embodiment of the present invention.
Fig. 4 is a flowchart of determining existence of echo in active sonar target based on a reinforcement learning model according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the following figures and detailed description:
as shown in fig. 1, the active sonar target echo detection method based on reinforcement learning includes the following steps:
step 1, acquiring historical state data of an active sonar signal echo.
Firstly, quantization and standardization preprocessing operations are carried out on historical state data, then the preprocessed signals are divided, gray value processing is carried out on the divided signals, and the features of the sonar signals are hidden in a generated gray scale image.
Segmentation here means splitting the active sonar signal sequentially, with each new segment containing the last 1/3 of the previous segment, so that feature learning is continuous.
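As a rough illustration, the overlapping segmentation described above (each segment re-using the last third of the previous one) might be implemented as follows; `segment_signal` and the segment length are illustrative choices, not part of the patent:

```python
import numpy as np

def segment_signal(signal, seg_len):
    """Split a 1-D signal into segments where each new segment
    re-uses the last 1/3 of the previous one, so that feature
    learning stays continuous across segment boundaries."""
    hop = seg_len - seg_len // 3          # advance by ~2/3 of a segment
    return [signal[s:s + seg_len]
            for s in range(0, len(signal) - seg_len + 1, hop)]

sig = np.arange(12)
segs = segment_signal(sig, seg_len=6)     # hop = 4 samples
```

With `seg_len=6` the hop is 4 samples, so consecutive segments share their last/first 2 samples, i.e. one third of a segment.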
In each generated gray-scale image, the black-white difference of four consecutive pixel points varies, which corresponds to a returned sine wave.
Each generated gray scale map is a set of historical state data.
As shown in fig. 2, the process of converting the signal into a gray signal map using the mean and variance of the signal is as follows:
step 1.1 calculate the variance G of the active sonar signal by the following formula v
G v =∑ a i=1b j=1 (G(i,j)-g) 2 /(a·b)。
Wherein, ab respectively represents the number of rows and columns in the amplitude diagram of the active sonar signal, wherein i belongs to [1,a ]],j∈[1,b](ii) a G (i, j) represents the amplitude of the position (i, j) signal in the graph, and G represents the mean value of the active sonar signal amplitude graph data.
Step 1.2 Adjust the amplitude map of the active sonar signal by the following formula:
G(i,j) = [G(i,j) − min[G]] / max[G]
where min[G] denotes the minimum of the gray values G of the active sonar signal, and max[G] the maximum.
Step 1.3 Normalize the adjusted active sonar signal map using the gray-level mean and variance.
When G(i,j) ≥ g, the adjusted gray value G_n(i,j) is given by the first normalization formula; when G(i,j) < g, it is given by the second. (Both formulas appear only as images in the original and are not reproduced here.) G_n(i,j) denotes the adjusted gray value, which makes features more visible where they are blurred.
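A minimal sketch of the gray-map pipeline of steps 1.1-1.3. Since the final piecewise formulas of step 1.3 appear only as images in the original, the symmetric mean/variance normalization below is an assumed stand-in, not the patent's exact formula:

```python
import numpy as np

def to_gray_map(G):
    """Convert an active-sonar amplitude map (a x b) to a gray map.
    Steps 1.1-1.2 follow the formulas in the text; step 1.3's piecewise
    formulas are images in the original, so the mean/variance
    normalization here is only an assumed stand-in."""
    G = np.asarray(G, dtype=float)
    G = (G - G.min()) / G.max()                 # step 1.2 as written
    g = G.mean()                                # gray mean
    Gv = ((G - g) ** 2).sum() / G.size          # step 1.1 variance formula
    std = np.sqrt(Gv) + 1e-12
    # step 1.3 (assumed form): brighten pixels at or above the mean,
    # darken those below, scaled by the standard deviation
    Gn = np.where(G >= g, 0.5 + (G - g) / (2 * std),
                          0.5 - (g - G) / (2 * std))
    return np.clip(Gn, 0.0, 1.0)

out = to_gray_map([[0.0, 1.0], [2.0, 3.0]])
```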
Labeling each group of historical state data to show the position of the echo; and taking each group of history state data after the labels are added as a group of training data respectively for training a reinforcement learning model.
And 2, establishing a reward value system, and calculating a reward value according to the correlation degree of the power of the signal.
The present embodiment gives a specific reward calculation method for CW (continuous wave) echoes, as follows:
P_K = β / (d · Iᵀ Q_K⁻¹ I)
where P_K is the reward value for learning the current behavior: the higher the correlation between the active sonar echo signal and the transmitted signal, the higher the reward value P_K; the lower the correlation, the smaller the reward.
β denotes a reverberation suppression factor and I an identity matrix; Q_K is the covariance matrix of the echo signal segment, obtained by multiplying the desired echo signal by the noise matrix.
d denotes the sequence length after signal segmentation, d = N/K, where N is the rank of the signal covariance matrix estimate; K denotes the initial filter order, i.e. the number of signal segments, and K is a one-dimensional matrix.
The invention establishes a correlation measure based on the original characteristics of the transmitted signal: when the detected signal matches the power of the transmitted signal, a reward of 1 is given; when the correlation with the power spectrum of the original signal is small, a reward of 0 is given.
Small correlation means that the power-spectrum correlation with the original signal is below a preset correlation threshold.
To tell the model where its target is during training, a reward value must be provided; it is calculated as in the formula above, and a reward is given each time the target is approached.
This approach can guide the convolutional network model to find the target and learn the features in the vicinity of the target in detail.
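The CW reward above can be sketched as follows; the shape of I and the construction of Q_K are not fully specified in the text, so the all-ones vector used for I here is an assumption:

```python
import numpy as np

def cw_reward(beta, d, Q_K):
    """Reward P_K = beta / (d * I^T Q_K^-1 I) for one CW echo segment.
    Q_K is the covariance matrix of the segment; the text does not pin
    down the shape of I, so an all-ones vector is assumed here."""
    I = np.ones(Q_K.shape[0])
    return beta / (d * I @ np.linalg.inv(Q_K) @ I)

def binary_reward(correlation, threshold):
    """Reward 1 when the detected power matches the transmitted
    signal (correlation above a preset threshold), else 0."""
    return 1 if correlation >= threshold else 0

r = cw_reward(beta=6.0, d=2.0, Q_K=np.eye(3))   # 6 / (2 * 3) = 1.0
```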
Step 3: input each group of historical state data (i.e., a data-table set generated by continuously overwriting and updating the network's training judgments on the training set) into the reinforcement learning model for a preliminary decision.
The present embodiment uses a reinforcement learning model whose structure is shown in fig. 3. The state S of each step and the reward r of the next step are fed into the network G, and the next action a is determined through signal convolution processing. Meanwhile, to improve the learning ability of the system, learning starts from the front and the back of the signal simultaneously, and is judged finished when both directions find the target at the same time.
Thanks to this structure, a good learning effect can be obtained using a small number of data sets.
And in the training process, inputting each group of historical state data and preliminary decision data into a pre-established convolution network model, extracting the characteristics of the active sonar signals by the convolution network model, and quantizing the characteristics of the signals to obtain state variation and reward values.
The gray-value image data generated after standardization is input into the convolutional network model to extract its main features, and the next motion state (left, stop, or right) is analyzed from these features.
When the next motion state is left, the echo signal is judged to be on the left, and the search for the echo continues to the left.
When the next motion state is right, the echo signal is judged to be on the right, and the search continues to the right.
When the next motion state is stop, the current position is the echo position and the motion stops.
The state quantity change value refers to the difference value between the current state data and the next state data; and updating the data table structure according to each group of historical state data and the corresponding preliminary decision data, the state variable quantity and the reward value.
The data table structure collects the state of each step when the reinforcement learning model is used for learning, and judges the executed action through table look-up operation when the trained reinforcement learning model is used for doing the action.
The data table structure is denoted Qtable = (s, [a, r], s*), where s denotes the current state, s* the next state, a the action number, and r the reward value that the next action will receive.
After the next state s* and reward value r are obtained, the value of the current state is evaluated by:
Q = r + γ·max_a Q(s*, a; θ)
where θ denotes the input parameters, γ the attenuation (discount) factor, and max_a Q(s*, a; θ) the maximum future reward given the new state and action. Q is the evaluation of the current operation and is continuously updated by this step until it reaches the optimum value.
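The Qtable structure and the evaluation Q = r + γ·max_a Q(s*, a; θ) can be sketched with a plain dictionary; the three actions (left/stop/right) follow the description, while the discount value and state names are assumptions:

```python
import numpy as np
from collections import defaultdict

GAMMA = 0.9     # attenuation (discount) factor; the value is assumed

# Q maps a state to the value of each action
# (0: move left, 1: stop, 2: move right), mirroring
# Qtable = (s, [a, r], s*).
Q = defaultdict(lambda: np.zeros(3))

def evaluate(reward, next_state):
    """Q = r + gamma * max_a Q(s*, a): value of the current step,
    using only the next state's best action (Markov property)."""
    return reward + GAMMA * Q[next_state].max()

Q["s1"][:] = [0.0, 1.0, 2.0]
v = evaluate(reward=1.0, next_state="s1")       # 1 + 0.9 * 2 = 2.8
```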
And updating the valuation network by minimizing the difference value between the value of the preset target and the output value, constructing an objective function according to the difference value between the current value and the previous value, and updating the weight of the valuation network by using a gradient descent method.
Here, the predetermined target means a minimum distance from an echo of the target, and the output means an actual distance from the target.
The update process of the valuation network is as follows:
Q*(s,a) = Q(s,a) + α(r + γ·max_{a'} Q(s', a') − Q(s,a))
L(θ) = E[(TargetQ_θ − Q*(s,a))²]
where Q*(s,a) denotes the updated Q value after the action finishes; Q(s,a) denotes the Q value of the current action; α denotes the learning rate; s' and a' denote the next state and next action, respectively.
max_{a'} Q(s', a') denotes the maximum reward obtainable in that case, and TargetQ_θ denotes the output of the convolutional network model.
L(θ) denotes the loss function under parameter θ; the network weights are updated by back-propagating its gradient, the optimal result is obtained when L(θ) converges, and the action network is then the optimal control strategy.
And at the moment, the stored convolution network model is the trained convolution network model (sonar signal feature extraction network model).
Error judgment between the target signal and the original signal is carried out continuously through the update formula of the valuation network until the target is found and the loss-function value relative to the original network structure is minimal; after training, the position of the target echo signal can be found quickly.
The action network and the valuation network of the invention balance dynamic performance and robustness. The valuation network can estimate the running state of the current echo-detection system more accurately, has guiding significance for the control process, and can be used for online updating.
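A tabular sketch of the valuation update Q*(s,a) = Q(s,a) + α(r + γ·max_{a'} Q(s',a') − Q(s,a)), with the squared difference to the target standing in for L(θ); the α and γ values and state names are assumptions:

```python
import numpy as np
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9     # learning rate and discount; values assumed

Q = defaultdict(lambda: np.zeros(3))    # 3 actions: left / stop / right

def q_update(s, a, r, s_next):
    """One valuation-update step:
    Q*(s,a) = Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    The squared difference to the target plays the role of L(theta)."""
    target = r + GAMMA * Q[s_next].max()
    loss = (target - Q[s][a]) ** 2
    Q[s][a] += ALPHA * (target - Q[s][a])
    return loss

Q["s1"][2] = 1.0
loss = q_update("s0", 2, 1.0, "s1")     # target = 1.9, loss = 3.61
```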
And 4, using the trained convolutional network model and the data table structure Qtable to judge the state of the input active sonar signal echo data until a target echo position corresponding to the active sonar signal echo data is found.
As shown in fig. 4, the active sonar target echo existence judging method based on the reinforcement learning model includes the following steps:
and 4.1, quantizing and standardizing the input active sonar signals.
The raw input signal contains jump points, singular values and similar problems, which make finding features later more difficult; preprocessing it in this first step makes feature judgment and calculation more accurate.
And 4.2, sequentially segmenting the active sonar signals, wherein in the segmentation of the active sonar signals, the signals of the next segment comprise 1/3 part of the previous segment, so that the feature learning has continuity.
Step 4.3, obtaining the trained convolution network model and a Q table, namely a storage state data table, according to the step 3; the Q table stores the action judgment of the next step in the reinforcement learning, and the convolution network model stores the characteristic extraction and learning of the signal.
And 4.4, identifying the active sonar detection signals through a convolution network model to obtain signal characteristics, and deducing and judging the next step by using a Q table.
And 4.5, rapidly deducing whether the next step is to finish judgment or continue judgment by inquiring the structure of the Q table.
And 4.6, when the next step is finished by inquiring the structure of the Q table, the data of the section is a data section containing the echo signal, and the distance position of the sonar from the target is deduced according to the position of the echo.
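Steps 4.1-4.6 can be summarized as a segment-walking loop; `toy_policy` below is a placeholder standing in for the trained convolutional network plus Q-table look-up, not the patent's actual model:

```python
import numpy as np

LEFT, STOP, RIGHT = -1, 0, 1

def find_echo(segments, policy, start=0, max_steps=100):
    """Walk segment by segment until the policy says 'stop'; the
    stopped-on segment is the data section containing the echo."""
    i = start
    for _ in range(max_steps):
        action = policy(segments[i])
        if action == STOP:
            return i
        i = min(max(i + action, 0), len(segments) - 1)
    return i

# toy stand-in for "convolutional features + Q-table look-up":
# stop on a high-energy segment, otherwise keep moving right
def toy_policy(seg):
    return STOP if seg.max() > 0.9 else RIGHT

segs = [np.full(4, v) for v in (0.1, 0.2, 1.0, 0.3)]
idx = find_echo(segs, toy_policy)       # stops on segment 2
```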
It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. The active sonar target echo detection method based on reinforcement learning is characterized by comprising the following steps of:
step 1, acquiring historical state data of an echo of an active sonar signal;
firstly, quantifying and standardizing historical state data, then segmenting preprocessed signals and processing gray values, wherein the features of sonar signals are hidden in a generated gray map;
each generated gray scale map is a group of historical state data; labeling each group of historical state data to show the position of the echo; respectively taking each group of historical state data after the labels are added as a group of training data of the reinforcement learning model;
step 2, establishing an incentive value system, and calculating an incentive value according to the correlation degree of the power of the signal;
step 3, inputting each group of historical state data into a reinforcement learning model respectively for preliminary decision making;
inputting each group of historical state data and preliminary decision data into a pre-established convolution network model in the training process, extracting the characteristics of the active sonar signals by the convolution network model, and quantizing the characteristics of the signals to obtain state variation and reward values;
wherein, the state quantity change value is the difference value between the current state data and the next state data; updating a data table structure according to each group of historical state data and the corresponding preliminary decision data, state variation and reward value;
the data table structure is used for collecting the state of each step when the reinforcement learning model is used for learning, and judging the executed action through table look-up operation when the trained reinforcement learning model is used for doing the action;
updating an estimation network by minimizing the difference value between the value of a preset target and the output value, constructing a target function by using the difference value between the current value and the previous value, and updating the weight of the estimation network by using a gradient descent method;
the preset target is the minimum distance from the echo of the target, and the output is the actual distance from the target;
and 4, using the trained convolutional network model and the data table structure to judge the state of the input active sonar signal echo data until a target echo position corresponding to the active sonar signal echo data is found.
2. The reinforcement learning-based active sonar target echo detection method of claim 1,
in step 1, the process of performing gray value processing on the divided signals is as follows:
step 1.1 calculate the variance G_v of the active sonar signal by the following formula:
G_v = Σ_{i=1}^{a} Σ_{j=1}^{b} (G(i,j) − g)² / (a·b);
where a and b denote the number of rows and columns of the active sonar signal amplitude map, with i ∈ [1, a] and j ∈ [1, b]; G(i,j) denotes the signal amplitude at position (i,j) in the map, and g denotes the mean of the amplitude map data;
step 1.2, adjusting an amplitude map of the active sonar signal by the following formula;
G(i,j) = [G(i,j) − min[G]] / max[G];
where min[G] denotes the minimum of the gray values G of the active sonar signal, and max[G] the maximum;
step 1.3, normalize the adjusted active sonar signal map using the gray-level mean and variance;
when G(i,j) ≥ g, the adjusted gray value G_n(i,j) is given by the first normalization formula; when G(i,j) < g, by the second (both formulas appear only as images in the original).
3. The reinforcement learning-based active sonar target echo detection method of claim 1,
in step 2, the reward value is calculated as: P_K = β / (d · Iᵀ Q_K⁻¹ I);
where P_K is the reward value for learning the current behavior: the higher the correlation between the active sonar echo signal and the transmitted signal, the higher the reward value P_K; the lower the correlation, the smaller the reward;
β denotes a reverberation suppression factor and I an identity matrix; Q_K is the covariance matrix of the echo signal segment, obtained by multiplying the desired echo signal by the noise matrix;
d denotes the sequence length after signal segmentation, d = N/K; where N is the rank of the signal covariance matrix estimate; K denotes the initial filter order, i.e. the number of signal segments, and K is a one-dimensional matrix.
4. The reinforcement learning-based active sonar target echo detection method of claim 1,
in step 3, the data table structure is represented as Qtable = (s, [a, r], s*), where s denotes the current state, s* the next state, a the action number, and r the reward value that the next action will receive;
after the next state s* and reward value r are obtained, the value of the current state is evaluated by:
Q = r + γ·max_a Q(s*, a; θ)
where θ denotes the input parameters, γ the attenuation factor, and max_a Q(s*, a; θ) the maximum future reward given the new state and action; Q is the evaluation value of the current operation.
5. The reinforcement learning-based active sonar target echo detection method of claim 1,
in step 3, the generated active sonar signal gray-scale map is input into the convolutional network model to extract its main features, and the next motion state (left, stop, or right) is analyzed from these features;
when the next motion state is left, the echo signal is judged to be on the left, and the search for the echo continues to the left;
when the next motion state is right, the echo signal is judged to be on the right, and the search continues to the right;
when the next motion state is stop, the current position is the target echo position and the search stops.
6. The reinforcement learning-based active sonar target echo detection method of claim 4,
in step 3, the updating process of the estimation network is as follows:
Q*(s,a) = Q(s,a) + α(r + γ·max_{a'} Q(s', a') − Q(s,a));
L(θ) = E[(TargetQ_θ − Q*(s,a))²];
where Q*(s,a) denotes the updated Q value after the action finishes; Q(s,a) denotes the Q value of the current action; α denotes the learning rate; s' and a' denote the next state and next action, respectively; max_{a'} Q(s', a') denotes the maximum reward obtainable in that case, and TargetQ_θ denotes the output of the convolutional network model;
L(θ) denotes the loss function under parameter θ; the loss updates the network weights by back-propagating its gradient, the optimal result is obtained when L(θ) converges, and the action network is then the optimal control strategy;
and at the moment, the stored convolution network model is the trained convolution network model.
7. The reinforcement learning-based active sonar target echo detection method of claim 1,
the step 4 specifically comprises the following steps:
step 4.1, carrying out noise reduction processing on the input active sonar signals;
step 4.2, segmenting the active sonar signals;
step 4.3, obtaining a trained convolution network model and a Q table according to the step 3; the Q table stores action judgment of the next step in reinforcement learning, and the convolution network model stores feature extraction and learning of signals;
4.4, identifying the active sonar detection signal through a convolutional network model to obtain signal characteristics for the Q table to deduce and judge the next step;
step 4.5, rapidly deducing whether the next step is to finish judgment or continue judgment by inquiring the structure of the Q table;
and 4.6, when the next step is finished by inquiring the structure of the Q table, the data of the section is a data section containing the echo signal, and the distance position of the sonar from the target is deduced according to the position of the echo.
CN202310004992.4A 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning Active CN115685170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310004992.4A CN115685170B (en) 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning

Publications (2)

Publication Number Publication Date
CN115685170A true CN115685170A (en) 2023-02-03
CN115685170B CN115685170B (en) 2023-05-09

Family

ID=85057290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310004992.4A Active CN115685170B (en) 2023-01-04 2023-01-04 Active sonar target echo detection method based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN115685170B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5555532A (en) * 1984-05-23 1996-09-10 The United States Of America As Represented By The Secretary Of The Navy Method and apparatus for target imaging with sidelooking sonar
JP2009068989A (en) * 2007-09-13 2009-04-02 Nec Corp Active sonar device and reverberation removal method by the same
CN102768354A (en) * 2011-05-05 2012-11-07 中国科学院声学研究所 Method and system for obtaining echo data of underwater target
WO2020007981A1 (en) * 2018-07-05 2020-01-09 Thales Method and device for displaying high-dynamic sonar or radar data
CN111652149A (en) * 2020-06-04 2020-09-11 青岛理工大学 Deep convolutional neural network-based benthonic oil sonar detection image identification method
CN112526524A (en) * 2020-12-09 2021-03-19 青岛澎湃海洋探索技术有限公司 Underwater fishing net detection method based on forward-looking sonar image and AUV platform
CN112731410A (en) * 2020-12-25 2021-04-30 上海大学 Underwater target sonar detection method based on CNN
CN113466839A (en) * 2021-09-03 2021-10-01 北京星天科技有限公司 Side-scan sonar sea bottom line detection method and device
CN114444571A (en) * 2021-12-23 2022-05-06 中国船舶重工集团公司第七一五研究所 Sonar target individual identification method for autonomous learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO Hongjun; YANG Rijie: "Development of a Sonar Target Echo Simulator Based on CVI+FPGA" *
HAN Tingting et al.: "Application of the FCM-CV Level Set Algorithm to Sonar Image Segmentation of Small Bottom-Sitting Targets" *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant