CN113822201B - Deep learning method for underwater object shape recognition based on flow field velocity component time course - Google Patents

Info

Publication number
CN113822201B
CN113822201B (application CN202111121944.0A)
Authority
CN
China
Prior art keywords
flow field
course
velocity component
time
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111121944.0A
Other languages
Chinese (zh)
Other versions
CN113822201A (en)
Inventor
战庆亮
白春锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN202111121944.0A
Publication of CN113822201A
Application granted
Publication of CN113822201B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12: Classification; Matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08: Feature extraction

Abstract

The invention provides a deep learning method for recognizing the shape of an underwater object based on the time course of a flow field velocity component, which comprises the following steps: identifying the shape of the underwater target body based on flow field characteristics; extracting target features from the wake velocity component time-course signal at any point; processing the wake velocity component time-course signal with a deep learning method; and classifying the flow field velocity component time-course features with a convolutional neural network. Because the method uses the flow field velocity component as the data for shape recognition, it is fundamentally different from traditional methods based on acoustic signals, image signals and the like: the use of a completely new physical quantity overcomes the poor concealment of the active sonar approach among traditional acoustic methods and also avoids the heavy interference that affects image signals in water, making this a novel method with high concealment and convenient data acquisition.

Description

Deep learning method for underwater object shape recognition based on flow field velocity component time course
Technical Field
The invention relates to the technical field of shape recognition of underwater target objects, and in particular to a novel method for recognizing the shape of underwater target objects.
Background
With the continuous growth of China's comprehensive national strength, the economic development of ocean resources and military research on territorial waters have become particularly important. Automatic identification of underwater targets is a core problem in making underwater equipment and underwater weapon systems for ocean resource development intelligent, and it remains a widely acknowledged technical challenge both in China and abroad.
At present, target identification in fluids relies mainly on optical or acoustic signals. Because the penetration of optical signals in water is limited and high-frequency signals attenuate rapidly, long-distance optical signals are difficult to collect. Low-frequency acoustic signals, by contrast, propagate well and are an important means of underwater observation and measurement: sonar uses a sensitive receiving system to collect the radiation noise generated by water flow, fish, ships and other underwater targets as they move through the water, and thereby locates and identifies the targets.
Because the sources of underwater noise are very complex and strongly disturbed by reflections from the water surface and the seabed, targets are difficult to identify directly from the signals, and complex signal processing is required. Traditional target identification relies on manual analysis of characteristics such as the spectrum and fluctuation of the target signal; it demands considerable experience from the operator and is inefficient. With the application of machine learning methods, target identification technology has made significant breakthroughs. When traditional machine learning is applied to underwater target recognition, signal separation theory is combined with the learning method, and mathematical tools such as spectral analysis, wavelet transform, Hilbert-Huang transform and higher-order spectrum estimation are used for manual feature extraction. This feature engineering depends mainly on human knowledge and experience to find better training features and thereby improve the accuracy of the learning algorithm. The specific feature selection and learning process of traditional methods therefore weaken their generalization ability and limit the further development of traditional machine learning in the field of underwater recognition.
At present, deep learning methods aimed at flow field characteristics have been studied little, and target identification using flow field characteristics has not yet been attempted. When a fluid flows across a solid surface immersed in it, flow separation and related phenomena occur because of the interference of the object. Objects of different shapes disturb the flow field differently and form wakes with different characteristics; these wake characteristics can be regarded as a kind of fingerprint left by objects of different shapes in the fluid, so the shape of an object can in principle be identified from its wake field. However, as the Reynolds number of the flow changes, the nonlinear nature of the governing flow equations makes the characteristics and state of the wake field highly complex, so they are difficult to derive and describe with traditional mathematical methods, and feature extraction and identification are difficult to realize.
Disclosure of Invention
In view of the technical problems identified above, namely that traditional mathematical methods are difficult to use for derivation and description and that feature extraction and identification are hard to realize, the present invention provides a deep learning method for recognizing the shape of an underwater object based on the time course of a flow field velocity component. The method mainly comprises the following steps:
Step S1: acquiring the flow field velocity component time course for a known shape; measuring flow field velocity time-course samples of the wake field formed by a target body of known shape;
Step S2: extracting the target features from the wake velocity component time-course signal at any point;
Step S3: processing the wake velocity component time-course signal based on a deep learning method;
Step S4: classifying the flow field velocity component time-course features based on the convolutional neural network.
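As a reading aid only, the four steps above can be arranged as a small software pipeline. The sketch below is a minimal illustration in Python; the generic model interface (fit/predict) and the even-index split are assumptions made here and are not prescribed by the invention.

```python
# Minimal sketch of the S1-S4 pipeline (interface names are illustrative assumptions).
import numpy as np

def recognize_shapes(samples: np.ndarray, labels: np.ndarray, model) -> float:
    """samples: (n, T) velocity-component time courses; labels: (n,) shape indices."""
    # S1/S2: each sample is the velocity time course measured at a single wake point.
    train_x, test_x = samples[::2], samples[1::2]
    train_y, test_y = labels[::2], labels[1::2]

    # S3: process the one-dimensional time courses with a deep learning model.
    model.fit(train_x, train_y)

    # S4: classify unseen time courses with the convolutional network.
    predictions = model.predict(test_x)
    return float(np.mean(predictions == test_y))
```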
Further, identifying the shape of the underwater target body based on the flow field characteristics specifically further comprises: recognizing the shape of the underwater object according to the interference of the target shape with the flow field characteristics;
the flow field characteristics include: the downstream velocity of the flow field and the transverse velocity of the flow field.
Further, extracting the target features based on the wake velocity component time-course signal at any point means extracting the features of the target shape from the velocity time-course signal at any point in the wake field;
the wake velocity components comprise the downstream velocity time course and the transverse velocity time course;
the shape of the target is identified according to the characteristics of the velocity time-course signal.
further, the processing of the flow field time interval signal based on the deep learning method further comprises the following steps:
step S31: constructing a calculation model of an input signal based on a deep learning method;
step S32: and performing deep learning calculation based on the one-dimensional time course and the time sequence data.
Further, classifying the flow field velocity component time-course features based on the convolutional neural network further comprises the following steps:
Step S41: calculating the downstream velocity time course based on one-dimensional convolution;
Step S42: calculating the transverse velocity time course based on one-dimensional convolution;
Step S43: extracting the wake velocity time-course features based on one-dimensional convolution;
Step S44: classifying the wake velocity time-course features based on one-dimensional convolution.
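The steps S41 to S44 all rest on one-dimensional convolution over a velocity time course. The fragment below is a minimal, hedged illustration of that operation using PyTorch (which the invention does not mandate); the kernel size, channel count and signal length are arbitrary values chosen for the example.

```python
import torch
import torch.nn as nn

# One velocity-component time course, shape (batch=1, channels=1, T=1000); placeholder values.
time_course = torch.randn(1, 1, 1000)

conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=7, padding=3)  # 1-D convolution
pool = nn.AdaptiveAvgPool1d(1)                                              # global pooling over time

features = pool(torch.relu(conv(time_course))).squeeze(-1)  # (1, 16) feature vector
scores = nn.Linear(16, 6)(features)                         # scores for 6 candidate shapes
print(scores.shape)                                          # torch.Size([1, 6])
```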
Compared with the prior art, the invention has the following advantages:
(1) The method uses the flow field velocity component as the data for shape recognition, which is fundamentally different from traditional methods based on acoustic signals, image signals and the like. Because a completely new physical quantity is used for shape recognition, the method overcomes the poor concealment of the active sonar approach among traditional acoustic methods and compensates for the heavy interference that affects image signals in water; it is a novel method with high concealment and convenient data acquisition;
(2) The target variable used by the method is a velocity component of the flow field, and either the downstream velocity or the transverse velocity may be selected, so data acquisition is very convenient;
(3) The method performs shape recognition on time-course signals. Unlike traditional image recognition methods that operate on image data, the required amount of input data is small, the constructed deep learning network has few parameters, and the recognition computation is fast;
(4) The invention uses a convolution-based deep learning method to extract and classify features of the time-course data while preserving the temporal ordering of the samples, so the recognition accuracy is high;
For the above reasons, the method can be widely applied in fields such as underwater target shape recognition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a deep learning method for underwater object shape recognition based on a flow field velocity component time course according to an embodiment of the present invention;
FIG. 2 is a target profile set W included in an embodiment of the present invention;
FIG. 3 is a flow field velocity component sample acquisition model of a flow field target profile set in an embodiment of the present invention;
FIG. 4 is a schematic view of the arrangement of flow field measurement points in the embodiment of the present invention;
FIG. 5 is a schematic diagram of a deep learning model according to the present invention;
FIG. 6 is a sample representation of a typical flow field downstream velocity time course for target profile 1 in an embodiment of the present invention;
FIG. 7 is a sample representation of a typical flow field downstream velocity time course for target profile 2 in an embodiment of the present invention;
FIG. 8 is a sample representation of a typical flow field transverse velocity time course for target profile 1 in an embodiment of the present invention;
FIG. 9 is a sample representation of a typical flow field transverse velocity time course for target profile 2 in an embodiment of the present invention;
FIG. 10 shows the accuracy of the recognition results in the embodiment of the present invention.
Detailed Description
In order to make the above objects, features and advantages of the present invention more clearly understood, the present invention will be further described with reference to the accompanying drawings and examples. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and thus, the present invention is not limited to the specific embodiments disclosed below.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. Any specific values in all examples shown and discussed herein are to be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
As shown in fig. 1 to 10, the present invention includes a deep learning method for recognizing the shape of an underwater object based on a time course of a flow field velocity component, which is further described below with reference to specific embodiments:
as a preferred embodiment, in the present application, the deep learning method specifically includes the steps of:
Step S1: acquiring the flow field velocity component time course for a known shape; measuring flow field velocity time-course samples of the wake field formed by a target body of known shape;
Step S2: extracting the target features from the wake velocity component time-course signal at any point;
Step S3: processing the flow field time-course signal based on a deep learning method;
Step S4: classifying the flow field velocity component time-course features based on the convolutional neural network.
Specifically, in step S1 the flow field velocity component time course of a known shape is obtained; that is, flow field velocity time-course samples of the wake field formed by a target body of known shape are measured.
Step S11: first, determine the application range of the underwater target identification, select the set of target object shapes according to that range, and denote this shape set as W;
As a preferred embodiment, W here contains 6 target shapes to be identified, namely a cylinder, a hexagonal prism, a square column, an oblique square column, a triangular prism and an oblique triangular prism, as shown in FIG. 2. It can be understood that in other embodiments the specific target shapes to be identified are determined by the actual situation and are not fixed;
step S12: and adopting a wind tunnel test method, a water tunnel test method or a numerical simulation method to simulate the flow field.
In this embodiment, the flow field velocity component time courses are obtained by numerical simulation; the method is equally applicable to wind tunnel tests, water tunnel tests and field measurements. The numerical simulation model used in this embodiment is shown in FIG. 3;
step S13: and selecting a target shape 1 in the set W, and acquiring a downstream speed time course or a transverse speed time course in a wake flow region of the target shape 1 by adopting a flow field speed sensor.
In this embodiment, each sensor can measure both the downstream velocity and the transverse velocity of the flow field, and both velocity component time courses are stored simultaneously;
Step S14: moving the sensor position and repeating step S13 to acquire velocity time courses at more positions, thereby obtaining flow field sample set 1 for target shape 1. Preferably, in this embodiment, as shown in FIG. 4, 3600 sensors are arranged for each single profile in the set W.
Step S15: repeating steps S13-S14 to obtain a sample set i for each target shape i in the set W, giving flow field component time-course signal sets 1 to N for all shapes in W; the label of each set is designated 1 to N, and the whole is defined as the signal set N.
In this embodiment there are 6 shapes in W and 3600 flow field signals per shape, so the total signal set N contains 21600 sample time courses (for one flow field velocity component); FIGS. 6 to 9 show typical downstream velocity and transverse velocity time-course curves of the target profiles; the label of each sample is defined in the range 0-5, which completes the sample determination of step S1;
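A compact way to organize the sample set N described above is sketched below. The loader load_probe_time_courses is a hypothetical placeholder for the actual simulation or test post-processing, and the record length T is an assumption; only the counts (6 shapes, 3600 probes, 21600 samples, labels 0-5) and the 50/50 split of the embodiment are taken from the text.

```python
import numpy as np

N_SHAPES, N_PROBES, T = 6, 3600, 1000   # 6 shapes x 3600 probes; T is an assumed record length

def load_probe_time_courses(shape_id: int) -> np.ndarray:
    """Hypothetical loader: one velocity component at every probe in the wake of `shape_id`,
    returned as a (N_PROBES, T) array. Replaced by noise here as a stand-in."""
    return np.random.randn(N_PROBES, T)

samples = np.concatenate([load_probe_time_courses(i) for i in range(N_SHAPES)])  # (21600, T)
labels = np.repeat(np.arange(N_SHAPES), N_PROBES)                                # labels 0..5

# Random 50/50 split into training and test sets, as in steps S22 and S41 of the embodiment.
rng = np.random.default_rng(0)
order = rng.permutation(len(samples))
half = len(samples) // 2
train_x, train_y = samples[order[:half]], labels[order[:half]]
test_x, test_y = samples[order[half:]], labels[order[half:]]
```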
Further, a deep learning model is trained on the signal set N obtained in step S1, and the features of the time-course signals are extracted and classified;
step S21: firstly, constructing an FCN deep learning network based on full convolution calculation;
the model structure in this embodiment is shown in fig. 5; the model input layer is a one-dimensional flow field component time-course signal; performing convolution operation on the time-course signal of the input layer to obtain a convolution layer 1 of the model; carrying out convolution calculation again on the output data of the convolution layer 1 to obtain a convolution layer 2 of the model; performing convolution calculation again on the output data of the convolution layer 2 to obtain a convolution layer 3 of the model; performing convolution calculation again on the output data of the convolution layer 3 to obtain a convolution layer 3 of the model; performing global pooling calculation on the output data of the convolutional layer 4 to obtain a pooling layer 1 of the model; performing full-connection layer calculation on the output data of the pooling layer to obtain an output layer of the model;
further, the step S21 is specifically implemented in the following manner, and includes the following steps:
step S211: the model input layer is a one-dimensional flow field component time-course signal;
Step S212: carrying out a convolution operation on the time-course signal of the input layer to obtain convolutional layer 1 of the model;
step S213: performing convolution calculation again on the output data in the step S212 to obtain a convolution layer 2 of the model;
step S214: performing convolution calculation again on the output data in the step S213 to obtain a convolution layer 3 of the model;
Step S215: performing convolution calculation again on the output data of step S214 to obtain convolutional layer 4 of the model;
step S216: performing global pooling calculation on the output data in the step S215 to obtain a pooling layer 1 of the model;
step S217: and performing full-connection layer calculation on the output data in the step S216 to obtain an output layer of the model.
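The layer sequence of steps S211 to S217 (input, four one-dimensional convolutional layers, global pooling, fully connected output) could be written, for example, as the PyTorch module below. The channel counts, kernel sizes, batch normalization and ReLU activations are assumptions of this sketch; the patent text and FIG. 5 fix only the overall structure.

```python
import torch
import torch.nn as nn

class FCNTimeCourseClassifier(nn.Module):
    """Sketch of the fully convolutional model of steps S211-S217 (hyperparameters assumed)."""

    def __init__(self, n_classes: int = 6):
        super().__init__()

        def conv_block(c_in, c_out, k):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=k, padding=k // 2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
            )

        self.conv1 = conv_block(1, 64, 9)     # S212: convolutional layer 1
        self.conv2 = conv_block(64, 128, 5)   # S213: convolutional layer 2
        self.conv3 = conv_block(128, 128, 3)  # S214: convolutional layer 3
        self.conv4 = conv_block(128, 128, 3)  # S215: convolutional layer 4
        self.gap = nn.AdaptiveAvgPool1d(1)    # S216: global pooling layer
        self.fc = nn.Linear(128, n_classes)   # S217: fully connected output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, T) one-dimensional velocity-component time courses
        x = self.conv4(self.conv3(self.conv2(self.conv1(x))))
        x = self.gap(x).squeeze(-1)           # (batch, 128)
        return self.fc(x)                     # (batch, n_classes) class scores


model = FCNTimeCourseClassifier()
print(model(torch.randn(4, 1, 1000)).shape)   # torch.Size([4, 6])
```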
Step S22: transmitting the flow field velocity component signal set N serving as an input sample to an FCN deep learning neural network;
as a preferred implementation manner, in this embodiment, 50% of the 21600 samples in step S15 are randomly selected as a training set, and used as input layer variables of the model;
Step S23: defining the loss function L of the model:
L = Σ|O - G|    (1)
wherein O is the target shape label predicted by the model and G is the true shape label;
In this embodiment, the initial labels of the model are all set to label 0, and the true labels are 0 to 5, corresponding respectively to the 6 shapes in the set W;
Step S24: performing iterative training; the loss function of step S23 is reduced to obtain the network model parameters for shape recognition within the application range;
The output error is eliminated by back-propagation iterations so that the model converges; in this embodiment, after 50 iterations the loss function of step S23 is small enough to meet the accuracy requirement, and the model training is complete;
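A training loop matching steps S22 to S24 might look like the sketch below. Equation (1) defines the loss as the sum of absolute differences between predicted and true shape labels; because that quantity is not directly differentiable with respect to the network outputs, the sketch substitutes the usual cross-entropy loss as a surrogate, and the Adam optimizer and learning rate are likewise assumptions. The 50 iterations follow the embodiment; a full-batch update is used only for brevity.

```python
import torch
import torch.nn as nn

# Assumes FCNTimeCourseClassifier and the train_x / train_y arrays from the earlier sketches.
def train_model(model, train_x, train_y, epochs: int = 50, lr: float = 1e-3):
    x = torch.as_tensor(train_x, dtype=torch.float32).unsqueeze(1)  # (n, 1, T)
    y = torch.as_tensor(train_y, dtype=torch.long)

    criterion = nn.CrossEntropyLoss()                 # surrogate for L = sum|O - G| in Eq. (1)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):                       # 50 iterations in the embodiment
        optimizer.zero_grad()
        loss = criterion(model(x), y)                 # forward pass over the training samples
        loss.backward()                               # back-propagate the output error
        optimizer.step()
        if (epoch + 1) % 10 == 0:
            print(f"epoch {epoch + 1:02d}  loss {loss.item():.4f}")
    return model
```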
Further, in step S3, flow field characteristic samples of the object whose shape is to be recognized are collected;
step S31: adopting a wind tunnel test method, a water tunnel test method or a numerical simulation method to simulate the flow field;
in this embodiment, the selection of the set W is the same as the step S11, and the acquisition mode of the flow field velocity component time interval is the same as the step S12;
step S32: acquiring a downstream speed time course or a transverse speed time course in a wake flow area of a target shape to be identified by adopting a flow field speed sensor;
in the present embodiment, the measured variable of the sensor is synchronized with the position in steps S13 and S14;
Further, in step S4, feature calculation is performed on the samples to be recognized using the deep learning model obtained in step S24;
Step S41: taking the sample time courses of step S32 as the input data of the model in step S24 and calculating with the model parameters of step S24;
In this embodiment, the remaining 50% of the samples from step S22 are used as the time courses of the samples to be recognized; since their positions differ from those of the training samples in step S22, their time courses are different, so they can represent new samples (i.e. samples not seen during deep learning) for testing the accuracy of the model;
Step S42: judging the degree of similarity to the labels in the set N from the result vector output by the calculation, finding the closest label m, and outputting the recognition result as the shape of sample m in the set W, which completes the recognition of the target shape;
The calculation result of step S41 is compared with the true label of the sample to be recognized; the model outputs, for each sample, the probability that it belongs to each shape in the set W. In this embodiment there are 6 shapes, so the average (chance) probability is 16.7%, while the actual calculation shows that the probability assigned to the correct target exceeds 95%, as shown in FIG. 10. Although the flow fields of the six different shapes are complex, the method can identify the shape of the object from the flow field velocity component time course at any single point position, and the accuracy is high.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A deep learning method for underwater object shape recognition based on a flow field velocity component time course is characterized by comprising the following steps:
s1: acquiring a flow field velocity component time course with a known shape; measuring a flow field speed time-course sample of a wake flow field formed by target characteristics with known appearance;
s2: extracting the target characteristics based on any point wake flow velocity component time-course signals;
s3: processing the wake flow velocity component time-course signal based on a deep learning method;
s4: and classifying the flow field velocity component time-course characteristics based on the convolutional neural network.
2. The deep learning method for underwater object shape recognition based on flow field velocity component time course according to claim 1,
the identifying the shape of the underwater target body based on the flow field characteristics specifically further comprises: identifying the shape of the underwater object according to the interference of the target shape on the flow field characteristics;
the flow field features include: the downstream velocity of the flow field and the transverse velocity of the flow field.
3. The deep learning method for underwater object shape recognition based on flow field velocity component time course according to claim 1,
the target features are extracted based on the wake velocity component time-course signal at any point, namely the feature extraction of the target shape is carried out according to the velocity time-course signal at any point in the wake flow field;
the wake velocity components comprise the downstream velocity time course and the transverse velocity time course;
and the shape of the target is identified according to the characteristics of the velocity time-course signal.
4. The deep learning method for underwater object shape recognition based on flow field velocity component time course according to claim 1, characterized in that the processing of flow field time course signals based on the deep learning method further comprises the following steps:
s31: constructing a calculation model of an input signal based on a deep learning method;
s32: and performing deep learning calculation based on the one-dimensional time course and the time sequence data.
5. The deep learning method for underwater object shape recognition based on flow field velocity component time course according to claim 1, characterized in that the classifying the flow field velocity component time course features based on the convolutional neural network further comprises the following steps:
s41: calculating a forward flow speed time course based on one-dimensional convolution;
s42: calculating a transverse velocity time course based on one-dimensional convolution;
s43: extracting wake flow speed time-course characteristics based on one-dimensional convolution;
s44: and classifying the wake velocity time-course characteristics based on one-dimensional convolution.
CN202111121944.0A 2021-09-24 2021-09-24 Deep learning method for underwater object shape recognition based on flow field velocity component time course Active CN113822201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111121944.0A CN113822201B (en) 2021-09-24 2021-09-24 Deep learning method for underwater object shape recognition based on flow field velocity component time course

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111121944.0A CN113822201B (en) 2021-09-24 2021-09-24 Deep learning method for underwater object shape recognition based on flow field velocity component time course

Publications (2)

Publication Number Publication Date
CN113822201A CN113822201A (en) 2021-12-21
CN113822201B true CN113822201B (en) 2023-01-06

Family

ID=78915374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111121944.0A Active CN113822201B (en) 2021-09-24 2021-09-24 Deep learning method for underwater object shape recognition based on flow field velocity component time course

Country Status (1)

Country Link
CN (1) CN113822201B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114896886A (en) * 2022-05-18 2022-08-12 北京百度网讯科技有限公司 Flow field identification method, flow field identification device, electronic apparatus, flow field identification medium, and program product
CN115455838B (en) * 2022-09-26 2023-09-01 大连海事大学 High-spatial-resolution flow field reconstruction method for time-course data
CN115546498B (en) * 2022-09-28 2023-10-17 大连海事大学 Flow field time-varying data compression storage method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780543A (en) * 2017-01-13 2017-05-31 深圳市唯特视科技有限公司 A kind of double framework estimating depths and movement technique based on convolutional neural networks
CN108256567A (en) * 2018-01-12 2018-07-06 环球大数据科技有限公司 A kind of target identification method and system based on deep learning
CN109188020A (en) * 2018-09-10 2019-01-11 中国长江三峡集团有限公司 A kind of water surface flow-speed measurement method based on the identification of wake flow lines
CN111027626A (en) * 2019-12-11 2020-04-17 西安电子科技大学 Flow field identification method based on deformable convolution network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200302151A1 (en) * 2019-03-19 2020-09-24 Pixart Imaging Inc. Event detection device
CN111626290B (en) * 2019-12-31 2024-02-20 中国航天科工集团八五一一研究所 Infrared ship target detection and identification method under complex sea surface environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780543A (en) * 2017-01-13 2017-05-31 深圳市唯特视科技有限公司 A kind of double framework estimating depths and movement technique based on convolutional neural networks
CN108256567A (en) * 2018-01-12 2018-07-06 环球大数据科技有限公司 A kind of target identification method and system based on deep learning
CN109188020A (en) * 2018-09-10 2019-01-11 中国长江三峡集团有限公司 A kind of water surface flow-speed measurement method based on the identification of wake flow lines
CN111027626A (en) * 2019-12-11 2020-04-17 西安电子科技大学 Flow field identification method based on deformable convolution network

Also Published As

Publication number Publication date
CN113822201A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN113822201B (en) Deep learning method for underwater object shape recognition based on flow field velocity component time course
CN111520615B (en) Pipe network leakage identification and positioning method based on line spectrum pair and cubic interpolation search
CN113009447B (en) Road underground cavity detection and early warning method based on deep learning and ground penetrating radar
CN111664365A (en) Oil and gas pipeline leakage detection method based on improved VMD and 1DCNN
CN108957403B (en) Gaussian fitting envelope time delay estimation method and system based on generalized cross correlation
CN113901927B (en) Underwater object shape recognition method based on flow field pressure time course
CN108171119B (en) SAR image change detection method based on residual error network
CN114564982A (en) Automatic identification method for radar signal modulation type
CN112036239A (en) Radar signal working mode identification method and system based on deep learning network
CN105844217A (en) Multi-target tracking method based on measure-driven target birth intensity PHD (MDTBI-PHD)
CN104038792A (en) Video content analysis method and device for IPTV (Internet Protocol Television) supervision
CN103852525A (en) Acoustic emission signal identification method based on AR-HMM
Shen et al. SSCT-Net: A semisupervised circular teacher network for defect detection with limited labeled multiview MFL samples
CN113608193A (en) Radar multi-target distance and speed estimation method based on UNet
CN113626929A (en) Multi-stage multi-topology ship traffic complexity measuring method and system
CN115705393A (en) Radar radiation source grading identification method based on continuous learning
CN115598714B (en) Time-space coupling neural network-based ground penetrating radar electromagnetic wave impedance inversion method
CN113658217B (en) Self-adaptive target tracking method, device and storage medium
CN115375925A (en) Underwater sonar image matching algorithm based on phase information and deep learning
CN114330450A (en) Method and system for detecting and identifying underwater vehicle by fusing multiple physical fields
Gao et al. Acoustic emission-based small leak detection of propulsion system pipeline of sounding rocket
CN113888380A (en) Method, device, equipment and medium for predicting man overboard trajectory
CN113468804A (en) Underground pipeline identification method based on matrix bundle and deep neural network
CN111126694A (en) Time series data prediction method, system, medium and device
CN114199992B (en) Method and system for detecting corrosion of tank wall of oil storage tank

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20211221

Assignee: Yuntong Transportation Technology (Dalian) Co.,Ltd.

Assignor: Dalian Maritime University

Contract record no.: X2023210000299

Denomination of invention: Deep learning method for underwater object shape recognition based on flow velocity component time history

Granted publication date: 20230106

License type: Common License

Record date: 20231212

EE01 Entry into force of recordation of patent licensing contract