CN113222976A - Space-time image texture direction detection method and system based on DCNN and transfer learning - Google Patents


Info

Publication number
CN113222976A
Authority
CN
China
Prior art keywords
space-time image
texture direction
image
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110602923.4A
Other languages
Chinese (zh)
Other versions
CN113222976B (en)
Inventor
张振 (Zhang Zhen)
李华宝 (Li Huabao)
陈林 (Chen Lin)
刘志杰 (Liu Zhijie)
莫岱辉 (Mo Daihui)
蒋芸 (Jiang Yun)
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University (HHU)
Priority to CN202110602923.4A
Publication of CN113222976A
Application granted
Publication of CN113222976B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/40 — Image analysis; analysis of texture
    • G06N 3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06N 3/084 — Learning methods; backpropagation, e.g. using gradient descent

Abstract

The invention discloses a spatio-temporal image texture direction detection method and system based on a DCNN (deep convolutional neural network) and transfer learning, belonging to the technical field of spatio-temporal image texture direction identification. The method comprises: collecting a spatio-temporal image; and inputting the collected image into a trained spatio-temporal image texture direction prediction model based on a deep convolutional neural network and transfer learning to obtain a prediction of the texture direction. The method eliminates image noise and improves detection accuracy under turbulent conditions with strong flow velocity pulsation.

Description

Space-time image texture direction detection method and system based on DCNN and transfer learning
Technical Field
The invention belongs to the technical field of spatio-temporal image texture direction identification, and particularly relates to a spatio-temporal image texture direction detection method and system based on a DCNN (deep convolutional neural network) and transfer learning.
Background
River surface imaging velocimetry is a new technology for non-contact quantitative measurement of the surface flow velocity field of large-scale water bodies. It is of great significance for presenting river surface flow phenomena, revealing prototype-scale flow laws and realizing on-line monitoring of flood discharge, and it underpins scientific research in river dynamics, river hydrology and related fields.
As a typical application of river surface imaging velocimetry, space-time image velocimetry (STIV) is a one-dimensional time-averaged flow velocity measurement method that takes a velocity measurement line as the analysis region and estimates the flow velocity by detecting the main texture direction of the synthesized spatio-temporal image. The method offers high spatial resolution and strong real-time performance, and has particular application potential in real-time monitoring of river surface flow velocity and discharge. The core of spatio-temporal image velocimetry is the complex nonlinear prediction task of accurately detecting the main texture direction in the spatio-temporal image. Traditional texture direction detection methods cannot eliminate image noise and, under turbulent conditions with strong flow velocity pulsation, are prone to false detection because the signal-to-noise ratio is too low.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a spatio-temporal image texture direction detection method and system based on DCNN and transfer learning, which eliminate image noise and improve detection accuracy under turbulent conditions with strong flow velocity pulsation.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, a spatio-temporal image texture direction detection method is provided, including: collecting a spatio-temporal image; and inputting the acquired space-time image into a trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning, and acquiring a prediction result of the texture direction of the space-time image.
Furthermore, the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning comprises two convolutional base layers, first to fourth convolutional layer groups, a fully connected layer and a regression layer; the first convolutional layer group includes 9 convolutional layers, the second 12, the third 69 and the fourth 9, and in the first to fourth groups every 3 convolutional layers constitute one residual block.
Further, the training method of the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning comprises: taking an artificially synthesized spatio-temporal image data set as the source data set and a real spatio-temporal image data set as the target data set; inputting the spatio-temporal images of the source data set into the constructed prediction model for pre-training, and obtaining first model parameters that satisfy a set condition; and, based on transfer learning, using the first model parameters as initial parameters of a newly constructed prediction model and inputting the spatio-temporal images of the target data set into that model for fine-tuning training, thereby obtaining the trained spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning.
Further, constructing the artificially synthesized spatio-temporal image data set used as the source data set comprises: first generating a grey-level image Img based on the background of a real spatio-temporal image, and then superimposing a two-dimensional sine function on Img:
Img=Img+I×sin(w(ax+by)) (1)
where a = sin(α), b = cos(α), w controls the texture spacing, α is the angular direction of the texture and I is a coefficient adjusting the texture contrast; texture images of multiple types are obtained by setting different α and I, yielding a preliminary synthesized data set. This preliminary data set is then enhanced by rotating the synthesized texture images to obtain multiple different angles and by expanding their number through Fourier transform and Canny edge detection, which yields the artificially synthesized spatio-temporal image data set; this set is taken as the source data set and normalized.
Further, taking the real spatio-temporal image data set as the target data set comprises: first performing a Fourier transform on each real spatio-temporal image to obtain its spectrogram; then filtering the spectrogram and applying an inverse Fourier transform to the filtered spectrogram to obtain a denoised spatio-temporal image, from which the ground-truth texture direction is obtained; and finally labelling the images to form the real spatio-temporal image data set, which is taken as the target data set and normalized.
Further, the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning performs feature extraction on the input spatio-temporal image, carries out regression prediction training on the extracted features, and dynamically evaluates its RMSE index during training:

RMSE = sqrt( (1/m) Σᵢ₌₁ᵐ (yᵢ − ŷᵢ)² )  (2)

where m denotes the number of image samples involved in training, yᵢ the actual label value of the i-th sample, and ŷᵢ the predicted value obtained after the i-th sample is input into the model.
Further, the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning optimizes its loss function using a stochastic gradient descent algorithm and error back-propagation.
In a second aspect, there is provided a spatiotemporal image texture direction detection system, including: the data acquisition module is used for acquiring a spatiotemporal image; and the space-time image texture direction prediction module is used for inputting the acquired space-time images into a trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning so as to obtain the prediction result of the space-time image texture direction.
Compared with the prior art, the invention has the following beneficial effects: a spatio-temporal image texture direction prediction model based on a deep convolutional neural network and transfer learning is constructed; an artificially synthesized spatio-temporal image data set is introduced as the source data set to pre-train the model; noise-disturbed spatio-temporal images are filtered and denoised to obtain ground-truth texture directions, so that the model extracts spatio-temporal texture features better; and, based on transfer learning, the model is fine-tuned on a real spatio-temporal image data set as the target data set. This eliminates image noise and improves detection accuracy under turbulent conditions with strong flow velocity pulsation.
Drawings
Fig. 1 is a schematic main flow chart of a spatio-temporal image texture direction detection method based on DCNN and transfer learning according to an embodiment of the present invention;
fig. 2 is a comparison of a sample source data set and data after enhancement in an embodiment of the present invention;
FIG. 3 is a comparison diagram of the process of denoising spatio-temporal images according to an embodiment of the present invention;
FIG. 4 is a comparison graph of spatiotemporal images disturbed by noise and spatiotemporal images after de-noising processing in an embodiment of the present invention;
FIG. 5 is a structural architecture diagram of a DCNN model used in an embodiment of the present invention;
fig. 6 is a schematic flowchart illustrating experimental steps of a spatio-temporal image texture direction detection method based on DCNN and transfer learning according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The first embodiment is as follows:
as shown in fig. 1 to 6, a spatio-temporal image texture direction detection method based on DCNN and transfer learning includes: collecting a spatio-temporal image; and inputting the acquired space-time image into a trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning, and acquiring a prediction result of the texture direction of the space-time image.
As shown in fig. 5, the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning includes two convolutional base layers, four convolutional layer groups (first to fourth), a fully connected layer and a regression layer. Specifically, it comprises 2 convolutional base layers; 9 convolutional layers form the first group, 12 the second, 69 the third and 9 the fourth, and every 3 convolutional layers in the first to fourth groups constitute one residual block. In fig. 5, Pool denotes a max-pooling layer, avgpool an average-pooling layer, fc the fully connected layer and regression the regression layer. Because the deep convolutional neural network used here is deeper, the model has stronger nonlinear capability and can better handle the complex nonlinear prediction problem of detecting the texture direction of a spatio-temporal image. At the same time, through transfer learning the model already possesses the ability to extract effective texture features at the start of training on the real spatio-temporal image data set, which markedly improves the texture direction detection accuracy of the final model while avoiding both the heavy cost of training a deep network end-to-end from scratch and the shortage of real training data.
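The per-group counts above (9, 12, 69 and 9 convolutions, three per residual block) correspond to the 3/4/23/3 bottleneck blocks of a ResNet-101-style backbone, for 2 + 9 + 12 + 69 + 9 = 101 convolutional layers in total. As an illustration only — not the patent's actual implementation — a single bottleneck residual block with its identity shortcut can be sketched in NumPy (channel sizes and the 1×1/3×3/1×1 kernel pattern are assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv1x1(x, w):
    """Pointwise convolution. x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    return np.tensordot(w, x, axes=([1], [0]))

def conv3x3(x, w):
    """3x3 convolution with 'same' zero padding. w: (C_out, C_in, 3, 3)."""
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            out += np.tensordot(w[:, :, i, j], xp[:, i:i + h, j:j + wd], axes=([1], [0]))
    return out

def bottleneck(x, w1, w2, w3):
    """Residual block of 3 convolutions: y = relu(F(x) + x)."""
    out = relu(conv1x1(x, w1))     # 1x1, reduce channels
    out = relu(conv3x3(out, w2))   # 3x3, spatial mixing
    out = conv1x1(out, w3)         # 1x1, restore channels
    return relu(out + x)           # identity shortcut
```

With all weights zero the block reduces to relu(x): the shortcut passes the input straight through, which is the property that lets very deep stacks of such blocks train stably.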
The training method of the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and the transfer learning comprises the following steps:
taking an artificially synthesized spatiotemporal image data set as a source data set, comprising:
a1, firstly, generating a gray level image Img based on the background of the real space-time image, and then superposing a two-dimensional sine function on the Img:
Img=Img+I×sin(w(ax+by)) (1)
where a = sin(α), b = cos(α), w controls the texture spacing, α is the angular direction of the texture and I is a coefficient adjusting the texture contrast; texture images of multiple types are obtained by setting different α and I, yielding the preliminary artificially synthesized spatio-temporal image data set;
a2, shown in fig. 2, enhancing a preliminary artificially synthesized spatiotemporal image dataset, comprising: performing a rotation operation on the synthesized texture image to obtain a plurality of images with different angles (as shown in fig. 2 (a)); expanding the number of the data sets through Fourier transform (as shown in (b) of FIG. 2) and Canny edge detection (as shown in (c) of FIG. 2), thereby obtaining an artificially synthesized spatiotemporal image data set, and taking the artificially synthesized spatiotemporal image data set as a source data set and carrying out normalization processing;
taking a real spatiotemporal image dataset as a target dataset, comprising:
a3, as shown in fig. 3, the target data set is a real spatio-temporal image generated by a bridgeflower hydrological station, and the spatio-temporal image generated by a real river may be subjected to various interferences, such as flare, shadow, rainfall, etc., and the embodiment performs denoising processing on the spatio-temporal image, which specifically includes: firstly, performing Fourier transform on a real spatiotemporal image (such as (a) in FIG. 3) to obtain a spectrogram (such as (b) in FIG. 3), then performing filtering processing on the spectrogram, performing inverse Fourier transform on the filtered spectrogram (such as (c) in FIG. 3) to obtain a noise-reduced spatiotemporal image, thereby obtaining a texture direction true value of the spatiotemporal image, and finally performing annotation to obtain a real spatiotemporal image data set, and performing normalization processing by using the real spatiotemporal image data set as a target data set;
a4, inputting a spatio-temporal image in a source data set into a constructed spatio-temporal image texture direction prediction model based on a deep convolutional neural network and transfer learning for pre-training, and acquiring a first model parameter meeting a set condition (the set condition means that the error of the model reaches a set convergence error or the iteration number during training reaches an upper limit);
and A5, based on transfer learning, taking the first model parameters as initial parameters of a newly constructed space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning, inputting the space-time image in the target data set into the newly constructed space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning for fine tuning training, and obtaining the trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning.
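Steps A4–A5 reduce to: pre-train on the synthetic source data, reuse the resulting parameters as initialization, then fine-tune on the real target data at a smaller learning rate. A toy NumPy version with a one-parameter linear regressor (all data, learning rates and step counts are hypothetical, chosen only to show the mechanism):

```python
import numpy as np

def sgd_fit(x, y, w0, lr, steps):
    """Minimize the mean squared error of y ~ w*x by gradient descent from w0."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)   # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x_src, x_tgt = rng.random(200), rng.random(50)
y_src = 3.0 * x_src            # synthetic source task
y_tgt = 3.2 * x_tgt            # real target task, slightly different

w_pre = sgd_fit(x_src, y_src, w0=0.0, lr=0.5, steps=200)    # A4: pre-training
w_ft = sgd_fit(x_tgt, y_tgt, w0=w_pre, lr=0.05, steps=50)   # A5: fine-tuning at a smaller lr
```

Starting the fine-tuning from the pre-trained parameter moves the model toward the target task without discarding what was learned from the source.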
The spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning performs feature extraction on the input spatio-temporal image, carries out regression prediction training on the extracted features, and dynamically evaluates its RMSE index during training:

RMSE = sqrt( (1/m) Σᵢ₌₁ᵐ (yᵢ − ŷᵢ)² )  (2)

where m denotes the number of image samples involved in training, yᵢ the actual label value of the i-th sample, and ŷᵢ the predicted value obtained after the i-th sample is input into the model.
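The RMSE of formula (2) as a function — a direct transcription, with nothing assumed beyond the standard definition:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error over m samples: sqrt(mean((y_i - y_hat_i)^2))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```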
The spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning optimizes its loss function using a stochastic gradient descent algorithm and error back-propagation.
The specific flow of this embodiment is as follows:
s1, data set construction: the data set comprises a source data set, namely an artificially synthesized texture data set, a target data set and a real spatio-temporal image data set;
s11, the step of artificially synthesizing the texture image comprises the following steps: firstly, generating a gray level image Img close to the background of a space-time image, and then superposing a two-dimensional sine function on the Img, as shown in a formula (1);
s12 source data set enhancement: rotating the synthesized texture map to obtain a data set of a plurality of angles, and expanding the number of source data sets by carrying out Fourier transform and Canny edge detection on the data set;
s13, constructing a target data set: the target data set generates a real space-time image for the climbing flower hydrological station, the space-time image generated by the real river can be subjected to various interferences, such as flare, shadow, rainfall and the like, and the embodiment performs denoising processing on the space-time image, and specifically comprises the following steps: firstly, performing Fourier transform on the spatio-temporal image to obtain a spectrogram, then performing filtering processing on the spectrogram, filtering a frequency spectrum corresponding to noise, reserving effective frequency spectrum information, performing Fourier inverse transform on the filtered spectrogram to obtain a spatio-temporal image with clear texture, thereby obtaining a texture direction true value of the spatio-temporal image, and finally performing annotation for training, as shown in FIG. 4, wherein (a) is the spatio-temporal image subjected to noise interference, and (b) is the spatio-temporal image subjected to denoising processing;
s14, normalizing the source data set and the target data set: and normalization processing is carried out, so that the picture size and the pixel size of the data set are kept consistent, other irrelevant features are eliminated in the training process of the deep convolutional neural network model, only the key features are trained, and the effect of improving the model training efficiency is achieved.
S2, establishing the spatio-temporal image texture direction prediction model based on a deep convolutional neural network (DCNN) and transfer learning, and setting the corresponding model parameters, including the number of network layers and the activation function; the fully connected layer of the DCNN is modified and a regression layer is added; the model is a residual network consisting mainly of convolutional layers, pooling layers and residual structures.
S3, model training: this comprises source data set training (pre-training) and target data set training (transfer learning). The texture images of the source data set are input into the prediction model for pre-training and optimization, and the optimal model — the one whose error reaches the set convergence error or whose iteration count reaches the upper limit — is saved. Because spatio-temporal image data sets of real rivers are scarce, the pre-trained model undergoes transfer learning: when training on the target data set, the saved optimal first model parameters are used as the initialization parameters of the new training model, and a smaller learning rate is set for fine-tuning;
S31, the spatio-temporal image data set is input into the prediction model, which extracts features from the spatio-temporal images and trains the regression prediction task on them; during training, the RMSE (formula (2)) on the training and test sets is dynamically evaluated;
S32, the RMSE between predicted and true values is computed from S31; the training then optimizes the loss function using a stochastic gradient descent algorithm and error back-propagation to obtain the optimized network parameters. Stochastic gradient descent estimates the gradient of the whole loss function from a small number of random samples, enabling faster learning; the error back-propagation algorithm rapidly computes the gradient of each layer's parameters layer by layer, completing the parameter updates that minimize the loss function.
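Steps S31–S32 — stochastic gradient descent with error back-propagation on a regression loss — can be illustrated with a hand-derived two-layer network; the layer sizes, learning rate and toy linear target are hypothetical, not the patent's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((64, 2))
y = X @ np.array([2.0, -1.0]) + 0.5                  # toy regression target

W1 = rng.standard_normal((2, 8)) * 0.5; b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.5;      b2 = 0.0

def predict(X):
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2

rmse0 = np.sqrt(np.mean((predict(X) - y) ** 2))      # error before training

lr = 0.1
for _ in range(500):
    idx = rng.integers(0, 64, size=16)               # random mini-batch (the "stochastic" part)
    xb, yb = X[idx], y[idx]
    h = np.maximum(0.0, xb @ W1 + b1)                # forward pass: ReLU hidden layer
    err = h @ W2 + b2 - yb                           # output-layer error
    gW2 = h.T @ err / len(idx); gb2 = err.mean()     # back-propagate to layer 2
    dh = np.outer(err, W2) * (h > 0)                 # ...through the ReLU
    gW1 = xb.T @ dh / len(idx); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2                   # gradient-descent parameter updates
    W1 -= lr * gW1; b1 -= lr * gb1

rmse_final = np.sqrt(np.mean((predict(X) - y) ** 2))
```

The loop computes each layer's gradient from the layer above it — exactly the layer-by-layer propagation of error the text describes.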
S4, model prediction: the spatio-temporal image to be predicted is preprocessed and input into the trained spatio-temporal image texture direction prediction model based on DCNN and transfer learning, and the model outputs the predicted angle of the spatio-temporal image.
In this embodiment, a spatio-temporal image regression model is built with a deep convolutional neural network (DCNN); texture images are artificially synthesized; noise-disturbed spatio-temporal images are filtered and denoised to obtain ground-truth texture directions; and transfer learning is applied: after pre-training on diverse texture image data the model extracts spatio-temporal texture features better, and it is then fine-tuned on real spatio-temporal images. The advantages are as follows: through training on a large data set that includes spatio-temporal images generated under various noise interference conditions, the model learns rich spatio-temporal texture features and can accurately extract the key texture features of a spatio-temporal image; it imposes no special requirements on the input images, adapts well, and improves the accuracy of texture direction detection under turbulent conditions.
Example two:
based on the first embodiment, which is a spatio-temporal image texture direction detection method based on DCNN and transfer learning, the present embodiment provides a spatio-temporal image texture direction detection system based on DCNN and transfer learning, including: the data acquisition module is used for acquiring a spatiotemporal image; and the space-time image texture direction prediction module is used for inputting the acquired space-time images into a trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning so as to obtain the prediction result of the space-time image texture direction.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (8)

1. A method for detecting the texture direction of a space-time image is characterized by comprising the following steps:
collecting a spatio-temporal image;
and inputting the acquired space-time image into a trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning, and acquiring a prediction result of the texture direction of the space-time image.
2. The spatio-temporal image texture direction detection method according to claim 1, wherein the spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning comprises two convolutional base layers, first to fourth convolutional layer groups, a fully connected layer and a regression layer; wherein the first convolutional layer group includes 9 convolutional layers, the second 12, the third 69 and the fourth 9, and in the first to fourth convolutional layer groups every 3 convolutional layers constitute one residual block.
3. The spatiotemporal image texture direction detection method according to claim 1, wherein the training method of the spatiotemporal image texture direction prediction model based on the deep convolutional neural network and the transfer learning comprises the following steps:
taking a space-time image data set which is artificially synthesized as a source data set, and taking a real space-time image data set as a target data set;
inputting the spatio-temporal images of the source data set into the constructed spatio-temporal image texture direction prediction model based on the deep convolutional neural network and transfer learning for pre-training, and obtaining first model parameters that satisfy a set condition;
based on the transfer learning, the first model parameters are used as initial parameters of a newly constructed space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning, and the space-time image in the target data set is input into the newly constructed space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning for fine tuning training to obtain the well-trained space-time image texture direction prediction model based on the deep convolutional neural network and the transfer learning.
4. The spatiotemporal image texture direction detection method according to claim 3, wherein taking the artificially synthesized spatiotemporal image data set as the source data set comprises:
first generating a grayscale image Img based on the background of a real spatiotemporal image, and then superimposing a two-dimensional sine function on Img:
Img = Img + I × sin(w(ax + by)) (1)
where a = sin(α), b = cos(α), w controls the spacing of the texture stripes, α is the angular direction of the texture, and I is a coefficient adjusting the texture contrast; multiple types of texture images, and thus a preliminary artificially synthesized spatiotemporal image data set, are obtained by setting different values of α and I;
enhancing the preliminary artificially synthesized spatiotemporal image data set, comprising: rotating the synthesized texture images to a plurality of different angles, and expanding their number through Fourier transform and Canny edge detection, so as to obtain the artificially synthesized spatiotemporal image data set, which is taken as the source data set and normalized.
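Outside the claim language, formula (1) can be sketched with NumPy as follows; the image size, the Gaussian-noise model for the gray background, and the parameter values are illustrative assumptions:

```python
import numpy as np

def synth_texture(h=256, w_px=256, alpha_deg=30.0, w=0.3, I=40.0,
                  bg=128.0, noise_std=10.0, seed=0):
    """Overlay a 2-D sine texture on a noisy gray background, per formula (1).

    a = sin(alpha), b = cos(alpha); alpha sets the texture direction,
    w the stripe spacing, I the texture contrast.
    """
    rng = np.random.default_rng(seed)
    img = bg + rng.normal(0.0, noise_std, size=(h, w_px))  # background Img
    alpha = np.deg2rad(alpha_deg)
    a, b = np.sin(alpha), np.cos(alpha)
    y, x = np.mgrid[0:h, 0:w_px]
    img = img + I * np.sin(w * (a * x + b * y))            # Img + I*sin(w(ax+by))
    return np.clip(img, 0, 255).astype(np.uint8)

# Varying alpha_deg and I yields the multiple texture classes of the source set
sample = synth_texture(alpha_deg=45.0, I=60.0)
```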
5. The spatiotemporal image texture direction detection method according to claim 3, wherein taking the real spatiotemporal image data set as the target data set comprises: first performing a Fourier transform on a real spatiotemporal image to obtain a spectrogram; then filtering the spectrogram and applying an inverse Fourier transform to the filtered spectrogram to obtain a denoised spatiotemporal image, from which the true texture direction of the spatiotemporal image is obtained; and finally labeling the images to obtain the real spatiotemporal image data set, which is taken as the target data set and normalized.
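Outside the claim language, the transform-filter-inverse-transform pipeline of claim 5 can be sketched with NumPy; the circular low-pass mask and its radius are illustrative assumptions, since the claim does not specify the filter design:

```python
import numpy as np

def fft_lowpass_denoise(img, radius_frac=0.15):
    """Fourier transform -> filter the spectrum -> inverse Fourier transform."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))  # centred spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    r = radius_frac * min(h, w)
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2     # keep low frequencies
    out = np.fft.ifft2(np.fft.ifftshift(f * mask)).real  # denoised image
    return out

noisy = np.random.default_rng(1).normal(128.0, 20.0, size=(64, 64))
clean = fft_lowpass_denoise(noisy)
```

Suppressing the high-frequency part of the spectrum removes pixel noise while preserving the coarse stripe pattern from which the texture direction truth value is read.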
6. The spatiotemporal image texture direction detection method according to claim 3, wherein the spatiotemporal image texture direction prediction model based on the deep convolutional neural network and transfer learning performs feature extraction on the input spatiotemporal images, is trained on a regression prediction task using the extracted features, and has its RMSE index dynamically evaluated during training:
RMSE = √( (1/m) · Σᵢ₌₁ᵐ (yᵢ − ŷᵢ)² ) (2)
where m denotes the number of image samples participating in the training, yᵢ denotes the true label value of the i-th sample, and ŷᵢ denotes the predicted value obtained after the i-th sample is input into the model.
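Outside the claim language, the RMSE index reduces to a one-liner over the m training samples:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: sqrt(mean over i of (y_i - y_hat_i)^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# e.g. true texture directions vs. model predictions, in degrees
err = rmse([10.0, 20.0], [13.0, 24.0])  # sqrt((9 + 16) / 2)
```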
7. The spatiotemporal image texture direction detection method according to claim 3, wherein the spatiotemporal image texture direction prediction model based on the deep convolutional neural network and transfer learning optimizes its loss function using a stochastic gradient descent algorithm and the error back-propagation method.
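Outside the claim language, one SGD/back-propagation loop on a tiny two-layer network illustrates the optimization of claim 7; the network size, tanh activation, and single training point are illustrative assumptions, not the patented architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, size=(4, 1))   # input -> hidden weights
w2 = rng.normal(0.0, 0.5, size=4)        # hidden -> output weights
x, y, lr = np.array([0.5]), 0.8, 0.1

losses = []
for _ in range(200):
    h = np.tanh(W1 @ x)                  # forward pass
    out = float(w2 @ h)
    losses.append((out - y) ** 2)        # squared-error loss
    g_out = 2.0 * (out - y)              # backward pass (chain rule)
    g_w2 = g_out * h
    g_h = g_out * w2
    g_W1 = (g_h * (1.0 - h ** 2))[:, None] @ x[None, :]
    w2 -= lr * g_w2                      # stochastic gradient descent step
    W1 -= lr * g_W1
```

Each iteration propagates the loss gradient backward through the layers and takes one descent step, so the recorded loss decreases toward zero.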
8. A spatiotemporal image texture direction detection system, characterized by comprising:
a data acquisition module for acquiring spatiotemporal images; and
a spatiotemporal image texture direction prediction module for inputting the acquired spatiotemporal images into the trained spatiotemporal image texture direction prediction model based on the deep convolutional neural network and transfer learning, so as to obtain the prediction result of the spatiotemporal image texture direction.
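Outside the claim language, the two-module decomposition of claim 8 maps naturally onto two small classes; the class names, the in-memory frame stub, and the callable-model interface are illustrative assumptions:

```python
from typing import Callable, List

class DataAcquisitionModule:
    """Acquires spatiotemporal images (stubbed here with an in-memory list)."""
    def __init__(self, frames: List[list]):
        self._frames = frames

    def acquire(self) -> List[list]:
        return list(self._frames)

class TextureDirectionPredictionModule:
    """Feeds acquired images to a trained prediction model."""
    def __init__(self, model: Callable[[list], float]):
        self._model = model  # e.g. the trained DCNN, wrapped as a callable

    def predict(self, images: List[list]) -> List[float]:
        return [self._model(img) for img in images]

# Wiring: a dummy "model" returning a fixed direction stands in for the DCNN
acq = DataAcquisitionModule(frames=[[0, 1], [1, 0]])
pred = TextureDirectionPredictionModule(model=lambda img: 45.0)
directions = pred.predict(acq.acquire())
```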
CN202110602923.4A 2021-05-31 2021-05-31 Space-time image texture direction detection method and system based on DCNN and transfer learning Active CN113222976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110602923.4A CN113222976B (en) 2021-05-31 2021-05-31 Space-time image texture direction detection method and system based on DCNN and transfer learning

Publications (2)

Publication Number Publication Date
CN113222976A true CN113222976A (en) 2021-08-06
CN113222976B CN113222976B (en) 2022-08-05

Family

ID=77081781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110602923.4A Active CN113222976B (en) 2021-05-31 2021-05-31 Space-time image texture direction detection method and system based on DCNN and transfer learning

Country Status (1)

Country Link
CN (1) CN113222976B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355248A (en) * 2016-08-26 2017-01-25 深圳先进技术研究院 Deep convolution neural network training method and device
CN111444958A (en) * 2020-03-25 2020-07-24 北京百度网讯科技有限公司 Model migration training method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant