CN114140667A - Small sample rapid style migration method based on deep convolutional neural network - Google Patents

Small sample rapid style migration method based on deep convolutional neural network

Info

Publication number
CN114140667A
Authority
CN
China
Prior art keywords
style, picture, content, neural network, loss function
Prior art date
Legal status
Pending
Application number
CN202111481391.XA
Other languages
Chinese (zh)
Inventor
王龙业
肖舒
曾晓莉
王圳鹏
苏赋
张凯信
王思颖
张高远
肖越
Current Assignee
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority: CN202111481391.XA
Publication: CN114140667A
Legal status: Pending

Classifications

    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 17/16 — Complex mathematical operations; matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural networks; learning methods

Abstract

The invention discloses a small sample rapid style migration method based on a deep convolutional neural network. First, image preprocessing is performed with a normalization operation. A VGG-19 network is then loaded and modified for the style migration task to construct a feature extraction network, where the feature extraction network represents the content of a picture and a Gram matrix represents the style of a picture. Using transfer learning, the weights trained by VGG-19 on the ImageNet data set are loaded into the corresponding layers of the constructed network. During training, a content loss function and a style loss function define the similarity between the target picture and the content picture and between the target picture and the style picture; only the input content picture and style picture are trained, and the weights of the neural network are not trained. The method greatly reduces the training difficulty of the neural network, and because training needs only one content picture and one style picture, training is faster. Experimental results show a clear improvement in effect over traditional methods.

Description

Small sample rapid style migration method based on deep convolutional neural network
Technical Field
The invention belongs to the technical field of image style migration, and particularly relates to a small sample rapid style migration method based on a deep convolutional neural network.
Background
Image style migration is an important branch of the image transformation task in the field of computer vision. With the rapid development of deep learning, convolutional neural networks have shown significant advantages in computer vision: computing power and algorithms are continuously updated and iterated, and data sets are easy to acquire, so image style migration has become a current research hotspot that attracts wide attention. Existing image style migration methods are based either on convolutional neural networks or on generative adversarial networks. The invention provides a small sample rapid style migration method based on a deep convolutional neural network.
Image style migration can create artistic images of high perceptual quality by using neural networks to separate and recombine the style and content of arbitrary images. In 2015, Gatys et al. proposed a style migration method based on a convolutional neural network using a simplified VGG-16 network. This method does not use the last fully connected layers of VGG-16 for object class prediction; it uses the network only to extract image features, taking the extracted high-level features as an approximate representation of the image style. In 2016, Justin Johnson et al. of Stanford University proposed a real-time style transfer and super-resolution reconstruction method based on perceptual loss, which remedied the slow training speed of conventional image transformation; their framework comprises a forward generation network and a pre-trained feature extraction network. Once training is finished, the method needs only a single forward pass to transfer a fixed style. Addressing the tendency of the method of Gatys et al. to distort photographs, Fujun Luan et al. built an image style migration model with better fidelity in 2017 by adjusting the objective function.
At present, fast style migration methods based on deep learning are relatively little researched. The Chinese patent application with publication No. CN113160033A, a clothing style matching and migration method and system, removes deformed background by semantic segmentation to restore the background of transferred images, and uses the style of one garment to render a new clothing style defined by the shape and outline of another, improving clothing design efficiency and the sensory experience of users. This scheme has the following defects: first, the database is huge, weight information for matching different users and garments must be calculated, and the algorithm is relatively complex; second, the user's degree of freedom is low, clothing styles cannot be designed independently, and the computing-power requirement is high. The Chinese patent application with publication No. CN113065417A, a scene text recognition method based on generative adversarial style migration, relates to the field of scene text recognition and can effectively handle scene recognition with little real data. In that patent, picture data augmentation is realized by using style migration to transfer data from one style to another. However, this scheme still has a drawback: pure data augmentation cannot directly solve the training problem of the scene text recognition network.
Chinese patent CN112950460A discloses a technique for image style migration, providing an image style migration scheme based on a CycleGAN optimization algorithm built on GANs, in which an Inception-v3 network is added to the original CycleGAN algorithm, mainly to improve the accuracy of image style conversion. However, this scheme still has drawbacks: first, transfer learning can only be carried out after pre-training on the data set, so the training time is long; second, image style migration must be performed on the monet2photo and summer2winter_yosemite data sets simultaneously, and the amount of training data is huge.
In summary, image style migration methods based on deep learning mainly have the following three disadvantages: first, for a specific style migration task the amount of training data is huge, and how to reduce the number of training samples is worth exploring; second, the training process is time-consuming, and how to accelerate it under limited computing resources is a problem in urgent need of a solution; third, how to guarantee the quality of the picture after style migration deserves further study.
Disclosure of Invention
In order to solve the technical defects, the invention provides a small sample rapid style migration method based on a deep convolutional neural network.
The technical scheme of the invention is as follows: a small sample rapid style migration method based on a deep convolutional neural network comprises the following steps:
step 1, inputting content pictures and style pictures with any sizes, and performing cutting and normalization operation on the input;
step 2, modifying the loaded VGG-19 basic network and embedding a gram matrix style migration module to construct a new style migration convolutional neural network model;
step 3, extracting the content and style of the given picture by using the new network model;
step 4, loading the weight of the VGG-19 network on the ImageNet data set to a corresponding network layer in the constructed network by using a migration learning method;
step 5, defining the content similarity and the style similarity of the target picture with a content loss function and a style loss function respectively, and training only the input content picture and style picture while keeping the weights of the neural network unchanged.
Further, the method according to claim 1, wherein in step 1, the specific process is as follows:
step 1.1, inputting content pictures and style pictures with any sizes, and cutting the loaded pictures;
step 1.2, normalizing the cropped picture so that the variance of the input data equals 1; the specific formulas are as follows:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$$

$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}}$$

where $x_i$ is the $i$-th input value, $N$ is the number of input values, and $\mu$ is their mean.
Further, in the method according to claim 2, the picture is cropped to a size of 304 × 400 × 3 pixels by center cropping.
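As an illustrative sketch only (the patent discloses no source code), the preprocessing of step 1 could look as follows in PyTorch. The crop size follows claim 3; the initial resize and the per-image standardization are assumptions consistent with the stated unit-variance requirement.

```python
import torch
from PIL import Image
from torchvision import transforms

def load_and_preprocess(path: str) -> torch.Tensor:
    """Center-crop a picture to 304 x 400 and standardize it to zero mean, unit variance."""
    image = Image.open(path).convert("RGB")
    to_tensor = transforms.Compose([
        transforms.Resize(400),               # assumption: bring the short side up to the crop size
        transforms.CenterCrop((304, 400)),    # 304 x 400 x 3 pixels, as in claim 3
        transforms.ToTensor(),
    ])
    x = to_tensor(image)
    x = (x - x.mean()) / x.std()              # step 1.2: unit variance after normalization
    return x.unsqueeze(0)                     # add a batch dimension
```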
The method according to claim 1, wherein in step 2, the specific operations are:
and 2.1, loading the VGG-19 network as a basic network structure, selecting certain key layer definition content pictures in the VGG-19 network, extracting local features by using a lower layer, and extracting global features by using a higher layer.
And 2.2, extracting the first layer, the third layer, the fifth layer, the ninth layer and the thirteenth layer of the VGG-19 network to construct a new target convolutional neural network model.
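A minimal sketch of this construction under one plausible reading: counting only convolutional layers, the first, third, fifth, ninth and thirteenth convolutions of VGG-19 are conv1_1, conv2_1, conv3_1, conv4_1 and conv5_1, which sit at indices 0, 5, 10, 19 and 28 of torchvision's vgg19().features. Those indices are an assumption based on that reading.

```python
import torch
import torch.nn as nn
from torchvision import models

class StyleTransferFeatures(nn.Module):
    """Collect VGG-19 features at the 1st, 3rd, 5th, 9th and 13th conv layers."""

    # Indices of those convolutions inside vgg19().features -- an assumption
    # based on the standard torchvision layer layout.
    LAYER_IDS = {0, 5, 10, 19, 28}

    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=None)      # weights are loaded later (step 4)
        self.features = vgg.features[:29]     # layers after conv5_1 are not needed
        for p in self.features.parameters():
            p.requires_grad_(False)           # network weights are never trained

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        outputs = []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.LAYER_IDS:
                outputs.append(x)
        return outputs
```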
Further, the method according to claim 1, wherein in step 3, the specific process is as follows:
and 3.1, defining a new picture style by adopting a gram matrix for the input style picture. The gram matrix is a matrix formed by pairwise inner products of any k vectors in an n-dimensional Euclidean space, and is specifically expressed as follows:
Figure BDA0003395359540000041
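The Gram matrix of a feature map, i.e. the pairwise inner products of its channel vectors, can be computed as in the following sketch (unnormalized, since the patent does not specify a normalization):

```python
import torch

def gram_matrix(feature: torch.Tensor) -> torch.Tensor:
    """Pairwise inner products of the channel vectors of one feature map."""
    b, c, h, w = feature.size()
    f = feature.view(b * c, h * w)   # one row vector per channel
    return f @ f.t()                 # (b*c) x (b*c) matrix of inner products
```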
further, the method according to claim 1, wherein in step 4, the specific process is as follows:
step 4.1, loading the weight obtained by training the VGG-19 network on the ImageNet data set into the VGG-19 network by using a transfer learning method;
step 4.2, extracting the weight information of the first, third, fifth, ninth and thirteenth layers of the VGG-19 network and transferring it to the target neural network model.
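Step 4 might be sketched as follows, reusing the StyleTransferFeatures module above; using torchvision's packaged ImageNet weights is an assumption, since the patent only states that ImageNet-trained VGG-19 weights are transferred to the corresponding layers.

```python
from torchvision import models

def load_imagenet_weights(model: StyleTransferFeatures) -> None:
    """Copy ImageNet-trained VGG-19 weights into the truncated feature network."""
    pretrained = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    state = pretrained.features[:29].state_dict()   # conv1_1 .. conv5_1 and pools
    model.features.load_state_dict(state)           # keys match layer for layer
```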
further, the method according to claim 1, wherein in step 5, the specific process is as follows:
step 5.1, inputting a content picture and a style picture to a target neural network model for training;
step 5.2, copying the content picture as the target picture and enabling gradient computation for the target picture (its derivation flag is set to True);
step 5.3, defining an Adam optimizer whose optimized parameter is the target picture, the convolutional neural network model parameters not being optimized; the learning rate is 0.003 and the momentum parameters are β = (0.5, 0.999).
Step 5.4, updating the target picture by defining a content loss function and a style loss function; the specific update formulas are as follows:
the formula for the content loss function update parameter is as follows:
$$\theta_c \leftarrow \theta_c - \alpha \frac{\partial L_c}{\partial \theta_c}$$

where $\theta_c$ represents the convolutional-neural-network-related parameter, $L_c$ represents the content loss function, and $\alpha$ represents the learning rate;
the style loss function update parameter formula is as follows:
$$\theta_s \leftarrow \theta_s - \alpha \frac{\partial L_s}{\partial \theta_s}$$

where $\theta_s$ represents the convolutional-neural-network-related parameter, $L_s$ represents the style loss function, and $\alpha$ represents the learning rate.
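The following sketch shows how this unconventional optimization — updating the target picture rather than the network weights — could be set up in PyTorch; content_image and total_loss are hypothetical placeholders for the preprocessed input and the combined loss of step 6.3.

```python
import torch

# Step 5.2: copy the content picture and make it the optimization variable.
target = content_image.clone().requires_grad_(True)

# Step 5.3: Adam optimizes only the target picture, never the network weights.
optimizer = torch.optim.Adam([target], lr=0.003, betas=(0.5, 0.999))

# Step 5.4: one update of the target picture from the combined loss.
optimizer.zero_grad()
loss = total_loss(target)   # hypothetical helper returning L_c + beta * L_s
loss.backward()
optimizer.step()
```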
further, the method according to claim 6, wherein the content loss function and the style loss function are implemented by the following specific processes:
step 6.1, calculating the similarity between the features of the input content picture and those of the target picture with the content loss function; the specific formula is:

$$L_{content}(\vec{p}, \vec{x}, l) = \frac{1}{2}\sum_{i,j}\left(F_{ij}^{l} - P_{ij}^{l}\right)^2$$

where $\vec{p}$ is the original content picture, $\vec{x}$ is the target picture, and $P_{ij}^{l}$ and $F_{ij}^{l}$ are the features extracted by the $i$-th convolution kernel at position $j$ in layer $l$.
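A direct transcription of this formula on one layer's feature maps, as a sketch:

```python
import torch

def content_loss(target_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
    """L_content = 1/2 * sum_ij (F_ij - P_ij)^2 on the chosen layer's features."""
    return 0.5 * torch.sum((target_feat - content_feat) ** 2)
```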
Step 6.2, utilizing a style loss function: the method is used for defining the similarity between the input style picture style and the target picture style, and comprises the following specific calculation formula:
Figure BDA0003395359540000056
in the formula
Figure BDA0003395359540000057
Indicating the characteristics of the l-th layer.
Matching the style of the original image is achieved by minimizing the mean square distance per item between the original image Gram matrix and the image to be generated.
Figure BDA0003395359540000058
In the formula
Figure BDA0003395359540000059
Is a gram matrix of the original style picture,
Figure BDA00033953595400000510
to generate a gram matrix for the picture.
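The per-layer style term can be transcribed as follows, reusing the gram_matrix sketch above:

```python
import torch

def style_loss_layer(target_feat: torch.Tensor, style_gram: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between Gram matrices on one layer (the E_l above)."""
    b, c, h, w = target_feat.size()
    n_l, m_l = c, h * w                # number of feature maps / size of each map
    g = gram_matrix(target_feat)       # Gram matrix of the generated picture
    return torch.sum((g - style_gram) ** 2) / (4 * n_l**2 * m_l**2)
```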
Step 6.3, the combined loss function is the sum of the content loss function and the style loss function; the specific formula is:

Loss = α · content_loss + β · style_loss

There are two hyper-parameters, α and β. For this task, extensive experiments yield α = 1 and β = 100, so the combined loss function becomes:

Loss = content_loss + 100 · style_loss
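Combining the two terms with the experimentally chosen weights is then a one-liner; c_loss and s_loss are placeholders for the values computed in steps 6.1 and 6.2:

```python
# Step 6.3: combined objective with the experimentally chosen weights.
alpha, beta = 1.0, 100.0                 # from the patent's experiments
loss = alpha * c_loss + beta * s_loss    # c_loss, s_loss from steps 6.1 and 6.2
```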
has the advantages that:
the invention provides a small sample rapid style migration method based on a deep convolutional neural network, which is used for solving the problems of more training samples, long training time and the like in an image style migration task. The data set only needs one content picture and one style picture, and normalization processing is carried out on the two input pictures, so that sample data can be trained in the neural network more easily.
According to the invention, the VGG-19 is introduced as a basic network structure, corresponding improvement is carried out on an image grid migration task on the basis of the network, and weight value information trained on an ImageNet data set by the VGG-19 network is loaded into a corresponding layer of the corresponding network, so that the training difficulty and the training time are greatly reduced.
The optimization mode of the invention is different from the optimization mode of the traditional method. The invention fixes the weight information of the network in the training process, and the weight is not updated; the target picture itself is optimized.
Drawings
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a schematic diagram of a convolutional neural network structure according to the present invention;
Detailed Description
In order to make the technical solutions and technical advantages of the present invention clearer, the following will clearly and completely describe the technical solutions in the implementation process of the present invention with reference to the embodiments.
Embodiment:
as shown in fig. 1, a method for fast style migration of a small sample based on a deep convolutional neural network includes the following steps:
step 1, inputting content pictures and style pictures with any sizes, and performing cutting and normalization operation on the input;
Content pictures and style pictures of any size are input, and the loaded pictures are cropped to a size of 304 × 400 × 3 pixels by center cropping. The pictures are then normalized so that, after the normalized output, the variance of the input data equals 1; the specific formulas are as follows:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$$

$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}}$$

where $x_i$ is the $i$-th input value, $N$ is the number of input values, and $\mu$ is their mean.
Step 2, loading the modified VGG-19 as a basic network, constructing a convolutional neural network model of a style migration task to extract the content of a picture, and extracting the style of the picture by using a gram matrix, as shown in FIG. 2;
and 2.1, loading the VGG-19 network as a basic network structure, selecting certain key layer definition content pictures in the VGG-19 network, extracting local features by using a lower layer, and extracting global features by using a higher layer.
And 2.2, extracting the first layer, the third layer, the fifth layer, the ninth layer and the thirteenth layer of the VGG-19 network to construct a new target convolutional neural network model.
For the input style picture, a Gram matrix defines the style of the picture. The Gram matrix is the matrix formed by the pairwise inner products of any k vectors in an n-dimensional Euclidean space, specifically:

$$G(v_1,\dots,v_k) = \begin{pmatrix} \langle v_1, v_1 \rangle & \cdots & \langle v_1, v_k \rangle \\ \vdots & \ddots & \vdots \\ \langle v_k, v_1 \rangle & \cdots & \langle v_k, v_k \rangle \end{pmatrix}$$
step 3, loading the weight of the VGG-19 network on the ImageNet data set to a corresponding layer in the constructed network by using a migration learning method;
loading the weight obtained by training the VGG-19 network on the ImageNet data set into the VGG-19 network by using a transfer learning method;
the weight information of the first, third, fifth, ninth and thirteenth layers of the VGG-19 network is taken out and transferred to the corresponding layers of the target neural network model;
and 4, defining the similarity of the target picture and the content picture and the similarity of the target picture and the style picture by adopting a content loss function and a style loss function, and only training the input content picture and the input style picture without training the weight of the neural network.
The method is small sample training: only one content picture and one style picture are input to the target neural network model. The content picture is copied as the target picture; the target picture is the quantity that changes during training and is represented as a tensor, so its gradient flag can be set to True. An Adam optimizer is defined; whereas in traditional neural network training the optimizer updates the weight information of the network, here the optimized parameter is the target picture and the parameters of the convolutional neural network model are not optimized. The learning rate is 0.003 and β = (0.5, 0.999).
updating the target picture by the content loss function and the style loss function, wherein the specific updating formula is as follows:
the formula for the content loss function update parameter is as follows:
$$\theta_c \leftarrow \theta_c - \alpha \frac{\partial L_c}{\partial \theta_c}$$

where $\theta_c$ represents the convolutional-neural-network-related parameter, $L_c$ represents the content loss function, and $\alpha$ represents the learning rate;
the style loss function update parameter formula is as follows:
$$\theta_s \leftarrow \theta_s - \alpha \frac{\partial L_s}{\partial \theta_s}$$

where $\theta_s$ represents the convolutional-neural-network-related parameter, $L_s$ represents the style loss function, and $\alpha$ represents the learning rate.
Content loss function: used to determine the similarity between the input content picture and the target picture; the specific formula is:

$$L_{content}(\vec{p}, \vec{x}, l) = \frac{1}{2}\sum_{i,j}\left(F_{ij}^{l} - P_{ij}^{l}\right)^2$$

where $\vec{p}$ is the original content picture, $\vec{x}$ is the target picture, and $P_{ij}^{l}$ and $F_{ij}^{l}$ are the features extracted by the $i$-th convolution kernel at position $j$ in layer $l$.
Style loss function: used to define the similarity between the input style picture and the target picture; the specific formula is:

$$G_{ij}^{l} = \sum_{k} F_{ik}^{l} F_{jk}^{l}$$

where $F^{l}$ denotes the features of layer $l$. Matching the style of the original image is achieved by minimizing the mean squared distance between the Gram matrix of the original image and that of the image to be generated:

$$E_l = \frac{1}{4 N_l^{2} M_l^{2}} \sum_{i,j}\left(G_{ij}^{l} - A_{ij}^{l}\right)^2$$

where $A^{l}$ is the Gram matrix of the original style picture, $G^{l}$ is the Gram matrix of the generated picture, $N_l$ is the number of feature maps in layer $l$, and $M_l$ is the size of each feature map.
The combined loss function is the sum of the content loss function and the style loss function; the specific formula is:

Loss = α · content_loss + β · style_loss

There are two hyper-parameters, α and β. For this task, extensive experiments yield α = 1 and β = 100, so the combined loss function becomes:

Loss = content_loss + 100 · style_loss.
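Putting the embodiment together, a minimal end-to-end loop might read as below. It reuses the earlier sketches (StyleTransferFeatures, load_imagenet_weights, gram_matrix, content_loss, style_loss_layer); the iteration count and the choice of which layer supplies the content loss are assumptions, as the patent does not specify them.

```python
import torch

def run_style_transfer(content_img: torch.Tensor,
                       style_img: torch.Tensor,
                       steps: int = 500) -> torch.Tensor:
    net = StyleTransferFeatures()
    load_imagenet_weights(net)       # step 3: transfer the ImageNet-trained weights
    net.eval()

    # Features of the two fixed inputs are computed once and detached.
    content_feats = [f.detach() for f in net(content_img)]
    style_grams = [gram_matrix(f).detach() for f in net(style_img)]

    # Only the target picture is optimized; the network weights stay fixed.
    target = content_img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([target], lr=0.003, betas=(0.5, 0.999))

    for _ in range(steps):
        optimizer.zero_grad()
        feats = net(target)
        # Assumption: the content loss uses one mid-level layer's features.
        c_loss = content_loss(feats[2], content_feats[2])
        s_loss = sum(style_loss_layer(f, g) for f, g in zip(feats, style_grams))
        loss = c_loss + 100.0 * s_loss   # alpha = 1, beta = 100
        loss.backward()
        optimizer.step()

    return target.detach()
```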

Claims (8)

1. a small sample rapid style migration method based on a deep convolutional neural network is characterized by comprising the following steps:
step 1, inputting content pictures and style pictures with any sizes, and performing cutting and normalization operation on the input;
step 2, modifying the loaded VGG-19 basic network and embedding a gram matrix style migration module to construct a new style migration convolutional neural network model;
step 3, extracting the content and style of the given picture by using the new network model;
step 4, loading the weight of the VGG-19 network on the ImageNet data set to a corresponding layer in the constructed network by using a migration learning method;
step 5, defining, during training, the similarity between the target picture and the content picture and between the target picture and the style picture with a content loss function and a style loss function, and training only the input content picture and the input style picture without training the weights of the neural network.
2. The method for fast style migration of small samples based on deep convolutional neural network as claimed in claim 1, wherein in step 1, its specific operations are:
step 1.1, inputting content pictures and style pictures with any sizes, and cutting the loaded pictures;
step 1.2, normalizing the cropped picture so that the variance of the input data equals 1; the specific formulas are as follows:

$$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$$

$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}}$$

where $x_i$ is the $i$-th input value, $N$ is the number of input values, and $\mu$ is their mean.
3. The method for small sample rapid style migration based on a deep convolutional neural network according to claim 2, wherein the picture is cropped to a size of 304 × 400 × 3 pixels by center cropping.
4. The method for fast style migration of small samples based on deep convolutional neural network as claimed in claim 1, wherein in step 2, its specific operations are:
step 2.1, loading the VGG-19 network as the basic network structure and selecting certain key layers of the VGG-19 network to define the content of a picture, where the lower layers extract local features and the higher layers extract global features;
step 2.2, extracting the first, third, fifth, ninth and thirteenth layers of the VGG-19 network to construct the new target convolutional neural network model.
5. The method for fast style migration of small samples based on deep convolutional neural network as claimed in claim 1, wherein in step 3, its specific operations are:
step 3.1, defining the style of a picture by a Gram matrix for the input style picture, wherein the Gram matrix is the matrix formed by the pairwise inner products of any k vectors in an n-dimensional Euclidean space, specifically:

$$G(v_1,\dots,v_k) = \begin{pmatrix} \langle v_1, v_1 \rangle & \cdots & \langle v_1, v_k \rangle \\ \vdots & \ddots & \vdots \\ \langle v_k, v_1 \rangle & \cdots & \langle v_k, v_k \rangle \end{pmatrix}$$
6. the method for fast migrating the style of the small sample based on the deep convolutional neural network as claimed in claim 1, wherein in step 4, the specific process is as follows:
step 4.1, loading the weight obtained by training the VGG-19 network on the ImageNet data set into the VGG-19 network by using a transfer learning method;
and 4.2, extracting weight information of a first layer, a third layer, a fifth layer, a ninth layer and a thirteenth layer in the VGG-19 network, and transferring the weight information to a target neural network model.
7. The method for fast migrating the style of the small sample based on the deep convolutional neural network as claimed in claim 1, wherein in step 5, the specific process is as follows:
step 5.1, inputting a content picture and a style picture to a target neural network model for training;
step 5.2, copying the content picture as the target picture and enabling gradient computation for the target picture (its derivation flag is set to True);
step 5.3, defining an Adam optimizer whose optimized parameter is the target picture, the parameters of the convolutional neural network model not being optimized, wherein the learning rate is 0.003 and β = (0.5, 0.999);
and 5.4, updating the target picture by defining a content loss function and a style loss function, wherein a specific updating formula is as follows:
the formula for the content loss function update parameter is as follows:
$$\theta_c \leftarrow \theta_c - \alpha \frac{\partial L_c}{\partial \theta_c}$$

where $\theta_c$ represents the convolutional-neural-network-related parameter, $L_c$ represents the content loss function, and $\alpha$ represents the learning rate;
the style loss function update parameter formula is as follows:
$$\theta_s \leftarrow \theta_s - \alpha \frac{\partial L_s}{\partial \theta_s}$$

where $\theta_s$ represents the convolutional-neural-network-related parameter, $L_s$ represents the style loss function, and $\alpha$ represents the learning rate.
8. The method for fast style migration of small samples based on the deep convolutional neural network as claimed in claim 6, wherein the specific process of the content loss function and the style loss function is as follows:
step 6.1, content loss function: used to determine the similarity between the input content picture and the target picture; the specific formula is:

$$L_{content}(\vec{p}, \vec{x}, l) = \frac{1}{2}\sum_{i,j}\left(F_{ij}^{l} - P_{ij}^{l}\right)^2$$

where $\vec{p}$ is the original content picture, $\vec{x}$ is the target picture, and $P_{ij}^{l}$ and $F_{ij}^{l}$ are the features extracted by the $i$-th convolution kernel at position $j$ in layer $l$;
step 6.2, style loss function: used to define the similarity between the input style picture and the target picture; the specific formula is:

$$G_{ij}^{l} = \sum_{k} F_{ik}^{l} F_{jk}^{l}$$

where $F^{l}$ denotes the features of layer $l$; the style of the original image is matched by minimizing the mean squared distance between the Gram matrix of the original image and that of the image to be generated; the specific formula is:

$$E_l = \frac{1}{4 N_l^{2} M_l^{2}} \sum_{i,j}\left(G_{ij}^{l} - A_{ij}^{l}\right)^2$$

where $A^{l}$ is the Gram matrix of the original style picture, $G^{l}$ is the Gram matrix of the generated picture, $N_l$ is the number of feature maps in layer $l$, and $M_l$ is the size of each feature map;
step 6.3, the combined loss function is the sum of the content loss function and the style loss function; the specific formula is:

Loss = α · content_loss + β · style_loss

where extensive experiments for this task yield α = 1 and β = 100, so the combined loss function becomes:

Loss = content_loss + 100 · style_loss.
Application CN202111481391.XA, priority date 2021-12-06, filing date 2021-12-06; published as CN114140667A (pending): Small sample rapid style migration method based on deep convolutional neural network.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111481391.XA CN114140667A (en) 2021-12-06 2021-12-06 Small sample rapid style migration method based on deep convolutional neural network


Publications (1)

Publication Number Publication Date
CN114140667A true CN114140667A (en) 2022-03-04

Family

ID=80384203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111481391.XA Pending CN114140667A (en) 2021-12-06 2021-12-06 Small sample rapid style migration method based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN114140667A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116070146A (en) * 2023-01-10 2023-05-05 西南石油大学 Pore structure analysis method integrating migration learning
CN116070146B (en) * 2023-01-10 2023-09-26 西南石油大学 Pore structure analysis method integrating migration learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination