CN112925932A - High-definition underwater laser image processing system - Google Patents


Info

Publication number
CN112925932A
CN112925932A
Authority
CN
China
Prior art keywords: layer, data, image processing, module, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110025073.6A
Other languages
Chinese (zh)
Inventor
伦宇学
唐任仲
赵张耀
王文海
张志猛
张泽银
刘兴高
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110025073.6A (Critical)
Publication of CN112925932A

Classifications

    • G06F16/50 Information retrieval; database structures and file system structures therefor, of still image data
    • G06N3/045 Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/048 Neural networks; architecture; activation functions
    • G06N3/08 Neural networks; learning methods

Abstract

The invention discloses a high-definition underwater laser image processing system comprising an underwater laser radar, an upper computer and a database connected by a bus. The laser radar scans the water area under inspection and stores the captured images in the database. The upper computer comprises an acquisition module, an image processing module, a display module and a training module. The acquisition module is connected to the bus to acquire and transmit data, and is bidirectionally connected to both the image processing module and the training module to supply them with data; the image processing module is bidirectionally connected to the training module, and its output is connected to the input of the display module, which displays the generated clear image result. By adversarially training a deep generation network against a discrimination network, the system improves the imaging quality, speed and robustness of underwater laser radar images.

Description

High-definition underwater laser image processing system
Technical Field
The invention relates to the field of underwater laser image processing, in particular to a high-definition underwater laser image processing system.
Background
Laser underwater target detection is a new, advanced detection technology that integrates laser, communication, signal processing and recognition, operations research and GPS technologies, and it has broad prospects. Many developed countries have established underwater photoelectric detection research systems, which are widely applied in both military and civil fields. However, several difficult problems in laser underwater target detection remain to be solved urgently. One of them is target identification: existing methods cannot effectively clarify the underwater laser image. Image extraction is a basic problem in the field of image processing, and different methods are adopted for different processing objects. Because the large quantities of suspended particles in natural water strongly backscatter laser light, the imaging quality remains poor even when range gating or synchronous scanning is used to suppress backscattering, and speckle noise caused mainly by backscattering remains in the image. This speckle noise causes strong variations in image gray scale, degrades the visibility of the image and destroys a great deal of its detail information. To achieve effective segmentation, methods such as local statistical filtering, homomorphic filtering and wavelet soft thresholding have been adopted, but these methods have been found to have significant problems of their own and can cause the loss of necessary image details.
Disclosure of Invention
In order to solve the problems of poor robustness, poor quality and loss of important information in traditional underwater laser image processing methods, the invention provides a high-definition underwater laser image processing system that can greatly improve the quality and robustness of underwater laser image processing.
The purpose of the invention is realized by the following technical scheme: a high-definition underwater laser image processing system comprises an underwater laser radar, an upper computer and a database connected by a bus. The upper computer comprises an acquisition module, an image processing module, a display module and a training module. The database stores underwater laser radar image data, clear on-water laser radar image data and processed image data. The underwater laser image data collected by the underwater laser radar are processed by the upper computer, the resulting clear laser image data are shown on the display module, and the result is saved in the database. The operation process is as follows:
1.1, data acquisition: the system acquires underwater laser image data x_i(w, h, 3) through the underwater laser radar, where w and h denote the abscissa and ordinate of each image data tensor and 3 indicates that each image is a three-dimensional tensor formed by the three color channels R, G, B; the data are transmitted to the database through the bus.
1.2, network training: the upper computer acquires from the database, through the acquisition module, m clear laser radar images P = {p_1, p_2, p_3, …, p_m} and m underwater laser images X = {x_1, x_2, x_3, …, x_m} collected by the underwater laser radar, trains them in the training module, and stores the parameters of the trained image generation network G in the image processing module.
1.3, image processing and generation: the upper computer acquires new underwater laser picture data from the underwater laser radar through the acquisition module and passes the acquired data to the image processing module, which processes the data through the generation network G to generate a new clear picture.
1.4, image display and storage: the processed image is shown on the display module and the data are saved in the database.
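The operation flow 1.1 to 1.4 above can be sketched as a simple data pipeline. The Python sketch below is illustrative only: the class and function names (Database, acquire, process, display_and_store) and the identity generator are hypothetical and are not specified by the invention.

```python
from dataclasses import dataclass, field

@dataclass
class Database:
    raw: list = field(default_factory=list)        # underwater lidar images
    clear: list = field(default_factory=list)      # clear on-water images
    processed: list = field(default_factory=list)  # generated clear images

def acquire(lidar_frame, db):
    # step 1.1: the acquisition module forwards a (w, h, 3) frame to the database
    db.raw.append(lidar_frame)
    return lidar_frame

def process(frame, G):
    # step 1.3: the image processing module runs the trained generation network G
    return G(frame)

def display_and_store(image, db):
    # step 1.4: show the result and persist it in the database
    db.processed.append(image)
    return image

db = Database()
frame = [[(0, 0, 0)]]              # stand-in for a (w, h, 3) tensor
G_identity = lambda f: f           # placeholder for the trained generator
display_and_store(process(acquire(frame, db), G_identity), db)
```

The bus, upper computer and display hardware of fig. 1 are abstracted away here; only the module boundaries of steps 1.1 to 1.4 are shown.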
The training module is characterized in that an image generation network G and a discrimination network D are established, and the parameter θ_g of the image generation network G and the parameter θ_d of the discrimination network D are trained. The training process is as follows:
2.1, initialize the parameters θ_g and θ_d of the image generation network G and the discrimination network D.
2.2, acquire the underwater laser radar image data X = {x_1, x_2, x_3, …, x_m} and the on-water laser radar image data P = {p_1, p_2, p_3, …, p_m} from the database, where m denotes the number of acquired images.
2.3, according to the formula

  p̂_i = G(x_i; θ_g), i = 1, …, m

obtain m generated images P̂ = {p̂_1, p̂_2, …, p̂_m} through the generation network G.
2.4, input the generated data P̂ together with the on-water laser radar image data P into the discrimination network D and, according to the objective function

  J_D(θ_d) = (1/m) Σ_{i=1..m} [ log D(p_i; θ_d) + log(1 − D(p̂_i; θ_d)) ],

perform multiple optimization training steps

  θ_d ← θ_d + γ ∇_{θ_d} J_D(θ_d),

where ∇_{θ_d} J_D(θ_d) denotes the gradient of the function J_D with respect to the parameter θ_d, and γ = 0.05 is the learning rate.
2.5, fix the parameter θ_d of the discrimination network D; then, according to the objective function

  J_G(θ_g) = (1/m) Σ_{i=1..m} log(1 − D(G(x_i; θ_g); θ_d)),

carry out optimization training

  θ_g ← θ_g − γ ∇_{θ_g} J_G(θ_g),

where ∇_{θ_g} J_G(θ_g) denotes the gradient of the function J_G with respect to the parameter θ_g, and γ = 0.05 is the learning rate.
2.6, repeat the above steps multiple times to obtain the best parameters θ_g* of the generation network G, and store θ_g* in the image processing module.
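The alternating updates in steps 2.3 to 2.6 follow the standard adversarial training scheme: ascend the discriminator objective in θ_d, then descend the generator objective in θ_g. The sketch below illustrates this on a deliberately tiny one-parameter generator and discriminator with finite-difference gradients; the function forms, data and dimensions are invented for illustration, and only the update rules and the learning rate γ = 0.05 come from the text.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def grad(f, theta, eps=1e-5):
    # central finite-difference gradient of a scalar objective
    return (f(theta + eps) - f(theta - eps)) / (2.0 * eps)

rng = np.random.default_rng(0)
m = 64
p = rng.normal(2.0, 0.3, m)      # toy stand-in for clear images P
x = rng.normal(0.0, 1.0, m)      # toy stand-in for degraded inputs X

theta_g, theta_d, gamma = 0.1, 0.1, 0.05   # gamma = 0.05 as in the text

G = lambda x, tg: tg * x + 2.0             # toy "generator" (hypothetical form)
D = lambda z, td: sigmoid(td * (z - 2.0))  # toy "discriminator" (hypothetical form)

def J_D(td):  # step 2.4 objective: maximized with respect to theta_d
    return np.mean(np.log(D(p, td) + 1e-9)
                   + np.log(1.0 - D(G(x, theta_g), td) + 1e-9))

def J_G(tg):  # step 2.5 objective: minimized with respect to theta_g
    return np.mean(np.log(1.0 - D(G(x, tg), theta_d) + 1e-9))

for _ in range(200):                                # step 2.6: alternate many times
    theta_d = theta_d + gamma * grad(J_D, theta_d)  # gradient ascent on J_D
    theta_g = theta_g - gamma * grad(J_G, theta_g)  # gradient descent on J_G
```

In the actual system the gradients would be computed by backpropagation through the convolutional networks rather than by finite differences; the alternation and sign of the two updates are the point of the sketch.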
The image generation network G is characterized by comprising an input layer, a convolution layer, a pooling layer, a deconvolution layer and an output layer. The operation process is as follows:
3.1, input layer: x_i(w, h, 3); the input data size is w × h and the number of channels is c = 3.
3.2, convolution layer: according to the formula

  h^(l) = ReLU(K^(l) ∗ h^(l−1) + b^(l)),

perform the convolution calculation, where h^(l) denotes the output of layer l (l is the network layer index), u and v denote the abscissa and ordinate of the data tensor, c is the number of channels and the first convolution has c^(1) = 32 channels; K denotes a convolution kernel whose tensor size is i × j × c; s is the convolution stride; b is the bias; ReLU() is the activation function of the convolutional layer, ReLU(x) = max{0, x}.
3.3, pooling layer: according to the formula

  r^(l) = maxpool_{2×2}(h^(l)),

perform the calculation, where r^(l) is the pooling-layer output and maxpool_{2×2}() is the 2 × 2 maximum pooling calculation.
3.4, deconvolution layer: according to the formula

  r^(L+n) = depool_{2×2}(h^(L+n−1)),

perform the calculation, where h^(L+n) is the output of the n-th deconvolution layer, L denotes the total number of convolution layers, r^(L+n−1) is the input of the n-th deconvolution, and depool_{2×2}() is the 2 × 2 inverse pooling calculation with step 1, that is, one pixel is inserted between every two adjacent pixels of each channel of the layer input. Then, according to the formula

  h^(L+n) = ReLU(K̃^(L+n) ∗ r^(L+n) + b^(L+n)),

perform the calculation, where n is the index of the deconvolution layer, K̃^(L+n) is the deconvolution kernel of layer L + n, and the number of channels satisfies c^(L+n) = c^(L−n); n runs from 1 to N, where N is the total number of deconvolution layers.
3.5, output layer: the output h^(L+N) has tensor size (w, h, 3).
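Step 3.3's 2 × 2 maximum pooling and step 3.4's inverse pooling ("insert one pixel between every two adjacent pixels of each channel") can be illustrated as follows. This NumPy sketch is an assumption-laden reading of the text: the inserted pixels are set to zero, which the patent does not specify.

```python
import numpy as np

def max_pool_2x2(x):
    # non-overlapping 2x2 maximum pooling over an (H, W, C) tensor (H, W even)
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

def inverse_pool_2x2(x):
    # "inverse pooling" as described in 3.4: insert one pixel between every
    # two adjacent pixels of each channel (inserted values assumed zero here)
    H, W, C = x.shape
    out = np.zeros((2 * H - 1, 2 * W - 1, C), dtype=x.dtype)
    out[::2, ::2, :] = x
    return out

img = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
pooled = max_pool_2x2(img)           # shape (2, 2, 3)
restored = inverse_pool_2x2(pooled)  # shape (3, 3, 3)
```

Note that under this reading an H × W map shrinks to H/2 × W/2 after pooling and grows to (2H − 1) × (2W − 1) after inverse pooling; the subsequent deconvolution kernels would be responsible for filling in the inserted pixels.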
The discrimination network D is characterized by comprising an input layer, a convolution layer, a pooling layer, a fully connected layer and an output layer.
4.1, input layer: the input data size is w × h and the number of channels is c = 3.
4.2, convolution layer: according to the formula

  h^(l) = ReLU(K^(l) ∗ h^(l−1) + b^(l)),

perform the convolution calculation, where h^(l) denotes the output of layer l (l is the convolution network layer index), u and v denote the abscissa and ordinate of the data tensor, c is the number of channels and the first convolution has c^(1) = 32 channels; K denotes a convolution kernel whose tensor size is i × j × c; s is the convolution kernel moving step; b is the bias; ReLU() is the activation function of the convolutional layer.
4.3, pooling layer: according to the formula

  r^(l) = maxpool_{2×2}(h^(l)),

perform the calculation, where r^(l) is the pooling-layer output and maxpool_{2×2}() is the 2 × 2 maximum pooling calculation.
4.4, fully connected layer: two layers in total; according to the formula

  h^(l+1) = ReLU(W^(l+1)T h^(l) + b^(l+1)),

carry out the calculation, where L is the total number of convolution layers and h^(L+1) is the output vector of layer L + 1 of the network; the whole network has 2 fully connected layers, the first with 5000 elements and the second with 200 elements; T denotes the transpose; W^(l+1) is the two-dimensional parameter matrix of the fully connected layer, of shape q × 100; b^(l+1) is a one-dimensional vector representing the bias of layer l + 1, whose number of elements equals the number of elements of the layer output.
4.5, output layer: according to the formula

  y = σ(W^T h^(L+2) + b),

perform the calculation, where σ() is the sigmoid activation function, h^(L+2) is the fully connected output vector of layer L + 2, with 200 elements; W is the parameter matrix of the output layer, of shape 200 × 2; b is the bias vector of the output function, with 2 elements.
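The fully connected head in 4.4 and 4.5 (flattened convolution features, then a 5000-unit layer, a 200-unit layer and 2 sigmoid outputs) can be sketched as below. The layer widths 5000, 200 and 2 come from the text; the flattened input length q, the ReLU activation on the hidden layers, and the random placeholder weights are assumptions.

```python
import numpy as np

def dense(h, W, b, act):
    # one fully connected layer: act(W^T h + b)
    return act(W.T @ h + b)

relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
q = 300                          # assumed flattened feature length (not in the text)
flat = rng.normal(size=q)        # stand-in for the flattened pooling output

W1, b1 = rng.normal(size=(q, 5000)) * 0.01, np.zeros(5000)
W2, b2 = rng.normal(size=(5000, 200)) * 0.01, np.zeros(200)
W, b = rng.normal(size=(200, 2)) * 0.01, np.zeros(2)

h1 = dense(flat, W1, b1, relu)   # first fully connected layer, 5000 units
h2 = dense(h1, W2, b2, relu)     # second fully connected layer, 200 units
y = dense(h2, W, b, sigmoid)     # output layer: 2 sigmoid units in (0, 1)
```

The two sigmoid outputs match the 200 × 2 output matrix of 4.5; in a typical adversarial setup they would score the input as clear versus generated.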
The technical conception and beneficial effects of the invention are mainly the following. Compared with traditional image processing techniques, processing underwater laser radar images with an adversarial generation network yields images with higher definition and better robustness; it reduces the dependence on the amount of data and achieves a good training effect even when training samples are relatively scarce, thereby generating clearer image data. By repeatedly judging clear and unclear images, the discrimination network improves its recognition ability, which in turn improves the image processing ability of the image generation network, so the definition of the images generated by the trained network improves continuously.
Drawings
FIG. 1 is a functional block diagram of a system in accordance with the present invention;
FIG. 2 is a network architecture diagram of a training module according to the present invention;
FIG. 3 is a flow chart of the image generation network G proposed by the present invention;
FIG. 4 is a flow chart of the image discrimination network D according to the present invention.
Detailed Description
The high-definition underwater laser image processing system of the invention is described in further detail below with reference to fig. 1 and to specific embodiments; the embodiments of the present invention are not limited to these examples.
As shown in fig. 1, the high-definition underwater laser image processing system provided by the invention comprises an underwater laser radar 1 and a database 2 connected with an upper computer 3 through a bus. The upper computer comprises an acquisition module 4, an image processing module 5, a display module 6 and a training module 7. The database stores underwater laser radar image data, clear on-water laser radar image data and processed image data. The underwater laser image data collected by the underwater laser radar are processed by the upper computer, the resulting clear laser image data are shown on the display module, and the result is saved in the database. The operation process is as follows:
1.1, data acquisition: the system acquires underwater laser image data x_i(w, h, 3) through the underwater laser radar 1, where w and h denote the abscissa and ordinate of each image data tensor and 3 indicates that each image is a three-dimensional tensor formed by the three color channels R, G, B; the data are transmitted to the database 2 through the bus.
1.2, network training: the upper computer 3 acquires from the database 2, through the acquisition module 4, m clear laser radar images P = {p_1, p_2, p_3, …, p_m} and m underwater laser images X = {x_1, x_2, x_3, …, x_m} collected by the underwater laser radar 1, trains them in the training module 7, and stores the parameters of the trained image generation network G in the image processing module 5.
1.3, image processing and generation: the upper computer 3 acquires new underwater laser picture data from the underwater laser radar through the acquisition module and passes the acquired data to the image processing module 5, which processes the data through the generation network G to generate a new clear laser radar picture.
1.4, image display and storage: the processed image is shown on the display module 6 and the data are saved in the database 2.
Further, the training module 7, as shown in fig. 2, is characterized in that an image generation network G 10 and a discrimination network D 11 are established, and the parameter θ_g of the image generation network G and the parameter θ_d of the discrimination network D are trained. The training process is as follows:
2.1, initialize the parameters θ_g and θ_d of the image generation network G and the discrimination network D.
2.2, acquire the underwater laser radar image data 9, X = {x_1, x_2, x_3, …, x_m}, and the clear laser radar image data 8, P = {p_1, p_2, p_3, …, p_m}, from the database, where m denotes the number of acquired images.
2.3, according to the formula

  p̂_i = G(x_i; θ_g), i = 1, …, m

obtain m generated images P̂ = {p̂_1, p̂_2, …, p̂_m} through the generation network G.
2.4, input the generated data P̂ together with the clear laser radar image data P into the discrimination network D and, according to the objective function

  J_D(θ_d) = (1/m) Σ_{i=1..m} [ log D(p_i; θ_d) + log(1 − D(p̂_i; θ_d)) ],

perform multiple optimization training steps

  θ_d ← θ_d + γ ∇_{θ_d} J_D(θ_d),

where ∇_{θ_d} J_D(θ_d) denotes the gradient of the function J_D with respect to the parameter θ_d, and γ = 0.05 is the learning rate.
2.5, fix the parameter θ_d of the discrimination network D; then, according to the objective function

  J_G(θ_g) = (1/m) Σ_{i=1..m} log(1 − D(G(x_i; θ_g); θ_d)),

carry out optimization training

  θ_g ← θ_g − γ ∇_{θ_g} J_G(θ_g),

where ∇_{θ_g} J_G(θ_g) denotes the gradient of the function J_G with respect to the parameter θ_g, and γ = 0.05 is the learning rate.
2.6, repeat the above steps multiple times to obtain the best parameters θ_g* of the generation network G, and store θ_g* in the image processing module.
As shown in fig. 3, the image generation network G is characterized by comprising an input layer 14, a convolutional layer 15, a pooling layer 16, a deconvolution layer 17 and an output layer 18. The operation process is as follows:
3.1, input layer 14: x_i(w, h, 3); the input data size is w × h and the number of channels is c = 3.
3.2, convolution layer 15: according to the formula

  h^(l) = ReLU(K^(l) ∗ h^(l−1) + b^(l)),

perform the convolution calculation, where h^(l) denotes the output of layer l (l is the network layer index), u and v denote the abscissa and ordinate of the data tensor, c is the number of channels and the first convolution has c^(1) = 32 channels; K denotes a convolution kernel whose tensor size is i × j × c; s is the convolution stride; b is the bias; ReLU() is the activation function of the convolutional layer, ReLU(x) = max{0, x}.
3.3, pooling layer 16: according to the formula

  r^(l) = maxpool_{2×2}(h^(l)),

perform the calculation, where r^(l) is the pooling-layer output and maxpool_{2×2}() is the 2 × 2 maximum pooling calculation.
3.4, deconvolution layer 17: according to the formula

  r^(L+n) = depool_{2×2}(h^(L+n−1)),

perform the calculation, where h^(L+n) is the output of the n-th deconvolution layer, L denotes the total number of convolution layers, r^(L+n−1) is the input of the n-th deconvolution, and depool_{2×2}() is the 2 × 2 inverse pooling calculation with step 1, that is, one pixel is inserted between every two adjacent pixels of each channel of the layer input. Then, according to the formula

  h^(L+n) = ReLU(K̃^(L+n) ∗ r^(L+n) + b^(L+n)),

perform the calculation, where n is the index of the deconvolution layer, K̃^(L+n) is the deconvolution kernel of layer L + n, and the number of channels satisfies c^(L+n) = c^(L−n); n runs from 1 to N, where N is the total number of deconvolution layers.
3.5, output layer 18: the output h^(L+N) has tensor size (w, h, 3).
As shown in fig. 4, the discrimination network D comprises an input layer 19, a convolutional layer 20, a pooling layer 21, a fully connected layer 22 and an output layer 23.
4.1, input layer 19: the input data size is w × h and the number of channels is c = 3.
4.2, convolution layer 20: according to the formula

  h^(l) = ReLU(K^(l) ∗ h^(l−1) + b^(l)),

perform the convolution calculation, where h^(l) denotes the output of layer l (l is the convolution network layer index), u and v denote the abscissa and ordinate of the data tensor, c is the number of channels and the first convolution has c^(1) = 32 channels; K denotes a convolution kernel whose tensor size is i × j × c; s is the convolution stride; b is the bias; ReLU() is the activation function of the convolutional layer.
4.3, pooling layer 21: according to the formula

  r^(l) = maxpool_{2×2}(h^(l)),

perform the calculation, where r^(l) is the pooling-layer output and maxpool_{2×2}() is the 2 × 2 maximum pooling calculation.
4.4, fully connected layer 22: two layers in total; according to the formula

  h^(l+1) = ReLU(W^(l+1)T h^(l) + b^(l+1)),

carry out the calculation, where L is the total number of convolution layers and h^(L+1) is the output vector of layer L + 1 of the network; the whole network has 2 fully connected layers, the first with 5000 elements and the second with 200 elements; T denotes the transpose; W^(l+1) is the two-dimensional parameter matrix of the fully connected layer, of shape q × 100; b^(l+1) is a one-dimensional vector representing the bias of layer l + 1, whose number of elements equals the number of elements of the layer output.
4.5, output layer 23: according to the formula

  y = σ(W^T h^(L+2) + b),

perform the calculation, where σ() is the sigmoid activation function, h^(L+2) is the fully connected output vector of layer L + 2, with 200 elements; W is the parameter matrix of the output layer, of shape 200 × 2; b is the bias vector of the output function, with 2 elements.

Claims (4)

1. A high-definition underwater laser image processing system, comprising an underwater laser radar, a database and an upper computer connected in sequence through a bus, wherein the upper computer comprises an acquisition module, an image processing module, a display module and a training module connected in sequence; the acquisition module is connected to the bus to acquire and transmit data, and is bidirectionally connected to both the image processing module and the training module to supply them with data; the image processing module is bidirectionally connected to the training module, and its output is connected to the input of the display module. The database stores underwater laser radar image data, clear on-water laser radar image data and processed image data. The underwater laser image data collected by the underwater laser radar are processed by the upper computer, the resulting clear laser image data are shown on the display module, and the result is saved in the database. The operation process is as follows:
(1.1) data acquisition: the system acquires underwater laser image data x_i(w, h, 3) through the underwater laser radar, where w and h denote the abscissa and ordinate of each image data tensor and 3 indicates that each image is a three-dimensional tensor formed by the three color channels R, G, B; the data are transmitted to the database through the bus.
(1.2) network training: the upper computer acquires from the database, through the acquisition module, m clear laser radar images P = {p_1, p_2, p_3, …, p_m} and m underwater laser images X = {x_1, x_2, x_3, …, x_m} collected by the underwater laser radar, trains them in the training module, and stores the parameters of the trained image generation network G in the image processing module.
(1.3) image processing and generation: the upper computer acquires new underwater laser picture data from the underwater laser radar through the acquisition module and passes the acquired data to the image processing module, which processes the data through the generation network G to generate a new clear picture.
(1.4) image display and storage: the processed image is shown on the display module and the data are saved in the database.
2. The high-definition underwater laser image processing system as claimed in claim 1, wherein the training module establishes an image generation network G and a discrimination network D and trains the parameter θ_g of the image generation network G and the parameter θ_d of the discrimination network D. The training process is as follows:
(2.1) initialize the parameters θ_g and θ_d of the image generation network G and the discrimination network D.
(2.2) acquire the underwater laser radar image data X = {x_1, x_2, x_3, …, x_m} and the on-water laser radar image data P = {p_1, p_2, p_3, …, p_m} from the database, where m denotes the number of acquired images.
(2.3) according to the formula

  p̂_i = G(x_i; θ_g), i = 1, …, m

m generated images P̂ = {p̂_1, p̂_2, …, p̂_m} are obtained through the generation network G.
(2.4) the generated data P̂ and the on-water laser radar image data P are input into the discrimination network D and, according to the objective function

  J_D(θ_d) = (1/m) Σ_{i=1..m} [ log D(p_i; θ_d) + log(1 − D(p̂_i; θ_d)) ],

multiple optimization training steps are performed:

  θ_d ← θ_d + γ ∇_{θ_d} J_D(θ_d),

where ∇_{θ_d} J_D(θ_d) denotes the gradient of the function J_D with respect to the parameter θ_d, and γ = 0.05 is the learning rate.
(2.5) the parameter θ_d of the discrimination network D is fixed; then, according to the objective function

  J_G(θ_g) = (1/m) Σ_{i=1..m} log(1 − D(G(x_i; θ_g); θ_d)),

optimization training is carried out:

  θ_g ← θ_g − γ ∇_{θ_g} J_G(θ_g),

where ∇_{θ_g} J_G(θ_g) denotes the gradient of the function J_G with respect to the parameter θ_g, and γ = 0.05 is the learning rate.
(2.6) the above steps are repeated multiple times to obtain the best parameters θ_g* of the generation network G, and θ_g* is stored in the image processing module.
3. The high-definition underwater laser image processing system as claimed in claim 2, wherein the image generation network G comprises an input layer, a convolution layer, a pooling layer, a deconvolution layer and an output layer. The operation process is as follows:
(3.1) input layer: x_i(w, h, 3); the input data size is w × h and the number of channels is c = 3.
(3.2) convolution layer: according to the formula

  h^(l) = ReLU(K^(l) ∗ h^(l−1) + b^(l)),

the convolution calculation is performed, where h^(l) denotes the output of layer l (l is the network layer index), u and v denote the abscissa and ordinate of the data tensor, c is the number of channels and the first convolution has c^(1) = 32 channels; K denotes a convolution kernel whose tensor size is i × j × c; s is the convolution stride; b is the bias; ReLU() is the activation function of the convolutional layer, ReLU(x) = max{0, x}.
(3.3) pooling layer: according to the formula

  r^(l) = maxpool_{2×2}(h^(l)),

the calculation is performed, where r^(l) is the pooling-layer output and maxpool_{2×2}() is the 2 × 2 maximum pooling calculation.
(3.4) deconvolution layer: according to the formula

  r^(L+n) = depool_{2×2}(h^(L+n−1)),

the calculation is performed, where h^(L+n) is the output of the n-th deconvolution layer, L denotes the total number of convolution layers, r^(L+n−1) is the input of the n-th deconvolution, and depool_{2×2}() is the 2 × 2 inverse pooling calculation with step 1, that is, one pixel is inserted between every two adjacent pixels of each channel of the layer input. Then, according to the formula

  h^(L+n) = ReLU(K̃^(L+n) ∗ r^(L+n) + b^(L+n)),

the calculation is performed, where n is the index of the deconvolution layer, K̃^(L+n) is the deconvolution kernel of layer L + n, and the number of channels satisfies c^(L+n) = c^(L−n); n runs from 1 to N, where N is the total number of deconvolution layers.
(3.5) output layer: the output h^(L+N) has tensor size (w, h, 3).
4. A high definition underwater laser image processing system as claimed in claim 2 in which the discrimination network D includes an input layer, a convolutional layer, a pooling layer, a fully connected layer and an output layer.
(4.1) input layer: the input data size is w × h and the number of channels is c = 3.
(4.2) convolution layer: according to the formula

  h^(l) = ReLU(K^(l) ∗ h^(l−1) + b^(l)),

the convolution calculation is performed, where h^(l) denotes the output of layer l (l is the convolution network layer index), u and v denote the abscissa and ordinate of the data tensor, c is the number of channels and the first convolution has c^(1) = 32 channels; K denotes a convolution kernel whose tensor size is i × j × c; s is the convolution kernel moving step; b is the bias; ReLU() is the activation function of the convolutional layer.
(4.3) Pooling layer: according to the formula

$$r^{(l)} = \mathrm{max}_{2\times2}\mathrm{pooling}\left(a^{(l)}\right)$$

the pooling is calculated, where $r^{(l)}$ is the pooling-layer output and $\mathrm{max}_{2\times2}\mathrm{pooling}()$ is the 2×2 max-pooling operation.
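The 2×2 max pooling used by both networks can be sketched in NumPy with a reshape trick; assuming, as is conventional, stride 2 and even input dimensions (the claim does not state the stride explicitly):

```python
import numpy as np

def max_2x2_pooling(a: np.ndarray) -> np.ndarray:
    """2x2 max pooling, stride 2, applied per channel.
    Input (H, W, C) with even H and W -> output (H/2, W/2, C)."""
    H, W, C = a.shape
    # Split each spatial axis into (blocks, 2) and take the max per block.
    return a.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

a = np.arange(16, dtype=float).reshape(4, 4, 1)
r = max_2x2_pooling(a)
print(r[..., 0])  # [[ 5.  7.]
                  #  [13. 15.]]
```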
(4.4) Fully connected layer: two layers in total; according to the formula

$$h^{(l+1)} = \mathrm{ReLU}\left(\left(W^{(l+1)}\right)^{T} h^{(l)} + b^{(l+1)}\right)$$

the calculation is performed, where L is the total number of convolution layers and $h^{(L+1)}$ is the output vector of layer L+1 of the network; the whole network has 2 fully connected layers, the first with 5000 elements and the second with 200; T denotes the transpose. $W^{(l+1)}$ is the two-dimensional parameter matrix of the fully connected layer, of shape q × 100; $b^{(l+1)}$ is a one-dimensional vector of the biases of layer l+1, whose number of elements equals that of the layer output.
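The fully connected step can be sketched as a single matrix-vector product with ReLU; the 5000-to-200 sizes mirror the element counts stated in the claim, while the random initialization and zero biases are purely illustrative assumptions.

```python
import numpy as np

def fully_connected(h: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fully connected layer per the claimed formula:
    h_next = ReLU( W^T h + b ),  W: (in_dim, out_dim), h: (in_dim,)."""
    return np.maximum(W.T @ h + b, 0.0)

rng = np.random.default_rng(0)
h_in = rng.standard_normal(5000)                 # first FC layer: 5000 elements
W = rng.standard_normal((5000, 200)) * 0.01      # hypothetical weights
h1 = fully_connected(h_in, W, np.zeros(200))     # second FC layer: 200 elements
print(h1.shape)  # (200,)
```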
(4.5) Output layer: according to the formula

$$y = \sigma\left(W^{T} h^{(L+2)} + b\right)$$

the output is calculated, where σ() is the sigmoid activation function, $h^{(L+2)}$ is the fully connected output vector of layer L+2, with 200 elements; W is the parameter matrix of the output layer, of shape 200 × 2; b is the bias vector of the output function, with 2 elements.
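The sigmoid output step above can be sketched as follows; the zero-valued weights and biases are placeholders just to show the shapes (200-element input, 2-element output) stated in the claim.

```python
import numpy as np

def output_layer(h: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Discriminator output per (4.5): y = sigmoid(W^T h + b).
    h: (200,) fully connected output, W: (200, 2), b: (2,)."""
    z = W.T @ h + b
    return 1.0 / (1.0 + np.exp(-z))  # elementwise sigmoid

h = np.zeros(200)
y = output_layer(h, np.zeros((200, 2)), np.zeros(2))
print(y)  # [0.5 0.5] -- sigmoid of zero logits
```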
CN202110025073.6A 2021-01-08 2021-01-08 High-definition underwater laser image processing system Pending CN112925932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110025073.6A CN112925932A (en) 2021-01-08 2021-01-08 High-definition underwater laser image processing system

Publications (1)

Publication Number Publication Date
CN112925932A true CN112925932A (en) 2021-06-08

Family

ID=76163658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110025073.6A Pending CN112925932A (en) 2021-01-08 2021-01-08 High-definition underwater laser image processing system

Country Status (1)

Country Link
CN (1) CN112925932A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127702A * 2016-06-17 2016-11-16 Lanzhou University of Technology Image dehazing algorithm based on deep learning
CN108596156A * 2018-05-14 2018-09-28 Zhejiang University Intelligent SAR radar airborne target recognition system
CN110136063A * 2019-05-13 2019-08-16 Nanjing University of Information Science and Technology Single-image super-resolution reconstruction method based on a conditional generative adversarial network
CN110276389A * 2019-06-14 2019-09-24 China University of Mining and Technology Edge-correction-based image reconstruction method for mobile mine inspection
CN110619352A * 2019-08-22 2019-12-27 Hangzhou Dianzi University Typical infrared target classification method based on deep convolutional neural network
CN111260655A * 2019-12-31 2020-06-09 Shenzhen Intellifusion Technologies Co., Ltd. Image generation method and device based on deep neural network model
CN111563841A * 2019-11-13 2020-08-21 Nanjing University of Information Science and Technology High-resolution image generation method based on generative adversarial network
CN111784581A * 2020-07-03 2020-10-16 Suzhou Xingzhao Defense Research Institute Co., Ltd. SAR image super-resolution reconstruction method based on self-normalizing generative adversarial network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115186814A * 2022-07-25 2022-10-14 Nanjing Hurys Intelligent Technology Co., Ltd. Training method and apparatus for a generative adversarial network, electronic device and storage medium
CN115186814B * 2022-07-25 2024-02-13 Nanjing Hurys Intelligent Technology Co., Ltd. Training method and apparatus for a generative adversarial network, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN111639692B (en) Shadow detection method based on attention mechanism
CN110728200B (en) Real-time pedestrian detection method and system based on deep learning
CN108444447B (en) Real-time autonomous detection method for fishing net in underwater obstacle avoidance system
CN108510458B (en) Side-scan sonar image synthesis method based on deep learning method and non-parametric sampling
CN113642634A (en) Shadow detection method based on mixed attention
CN111507275B (en) Video data time sequence information extraction method and device based on deep learning
CN111027497B (en) Weak and small target rapid detection method based on high-resolution optical remote sensing image
CN112215074A (en) Real-time target identification and detection tracking system and method based on unmanned aerial vehicle vision
CN111310622A (en) Fish swarm target identification method for intelligent operation of underwater robot
CN111242061B (en) Synthetic aperture radar ship target detection method based on attention mechanism
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN113538457B (en) Video semantic segmentation method utilizing multi-frequency dynamic hole convolution
CN111861880A (en) Image super-fusion method based on regional information enhancement and block self-attention
CN111062381B (en) License plate position detection method based on deep learning
CN113888547A (en) Non-supervision domain self-adaptive remote sensing road semantic segmentation method based on GAN network
CN112651423A (en) Intelligent vision system
CN114724120A (en) Vehicle target detection method and system based on radar vision semantic segmentation adaptive fusion
CN114005090A (en) Suspected smoke proposed area and deep learning-based smoke detection method
CN112733914A (en) Underwater target visual identification and classification method based on support vector machine
CN115393635A (en) Infrared small target detection method based on super-pixel segmentation and data enhancement
CN112925932A (en) High-definition underwater laser image processing system
Gu et al. A classification method for polsar images using SLIC superpixel segmentation and deep convolution neural network
CN113989718A (en) Human body target detection method facing radar signal heat map
CN115527105A (en) Underwater target detection method based on multi-scale feature learning
CN115797684A (en) Infrared small target detection method and system based on context information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210608