CN112069769A - Intelligent character effect migration method for special effect characters

Info

Publication number: CN112069769A
Application number: CN201910440039.8A
Authority: CN (China)
Prior art keywords: special effect, network, mask, picture, sub
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN112069769B (en)
Inventors: 刘家瑛 (Liu Jiaying), 胡煜章 (Hu Yuzhang), 汪文靖 (Wang Wenjing), 杨帅 (Yang Shuai), 郭宗明 (Guo Zongming)
Current Assignee: Peking University
Original Assignee: Peking University
Application filed by Peking University; priority date and filing date 2019-05-24
Publication of CN112069769A: 2020-12-11
Application granted; publication of CN112069769B: 2022-07-26


Abstract

The invention provides an intelligent word effect migration method for special effect words, which comprises the following steps: training a mask extraction sub-network on a training data set to extract decoration element masks, and training a basic special effect migration sub-network to migrate basic text effects; inputting a special effect word D_y with decorative elements and its paired glyph picture C_y into the trained mask extraction sub-network to obtain the decorative element mask M_y; inputting D_y, its paired glyph picture C_y and the target glyph picture C_x into the trained basic special effect migration sub-network to obtain the result S_x of basic effect migration with the decorative elements removed; performing element recombination with M_y, C_y and C_x to fuse the decorative elements into S_x, obtaining the migrated special effect word D_x with decorative elements corresponding to the target glyph. The method migrates the decorative elements of a text effect together with the effect itself, without loss or distortion of the decorative elements.

Description

Intelligent character effect migration method for special effect characters
Technical Field
The invention belongs to the field of character special effect style migration, and particularly relates to an intelligent character effect migration method for special effect characters.
Background
Special effect word style migration aims to migrate the style of a given special effect word onto a given font, enabling batch generation of special effect words. In recent years, special effect word style migration has received increasing academic attention.
Special effect word style migration methods fall into three categories. Traditional block-fusion methods implement style migration by finding texture blocks on the given special effect word that match the current glyph block. To better preserve the legibility of glyphs, this type of method typically computes priors on the glyph structure and considers glyph similarity when searching for matching blocks. Methods based on global statistics generally extract features with a deep neural network pre-trained on a classification task, compute global statistics of the features, and make the migration result match the special effect word to be migrated on those statistics, either by iterative optimization or by training a feed-forward network. Methods based on deep neural networks first collect a paired training set, then build a deep network architecture, train the network on the training set, and finally apply the trained network to the special effect word to be migrated to realize word effect migration.
However, existing special effect word style migration methods cannot handle decorative elements on text effects, which causes the loss and distortion of those elements.
Disclosure of Invention
Aiming at the above technical problem, the invention provides an intelligent word effect migration method for special effect words, which migrates the decorative elements of a text effect together with the effect itself, without loss or distortion of the decorative elements.
The technical scheme adopted by the invention is as follows:
an intelligent word effect migration method for special effect words comprises the following steps:
training a mask extraction sub-network on a training data set to extract decoration element masks, and training a basic special effect migration sub-network to migrate basic text effects;
inputting a special effect word D_y with decorative elements and its paired glyph picture C_y into the trained mask extraction sub-network to obtain the decorative element mask M_y;
inputting D_y, its paired glyph picture C_y and the target glyph picture C_x into the trained basic special effect migration sub-network to obtain the result S_x of basic effect migration with the decorative elements removed;
performing element recombination with M_y, C_y and C_x to fuse the decorative elements into S_x, obtaining the migrated special effect word D_x with decorative elements corresponding to the target glyph.
Further, the training data set comprises a synthesized decorated special effect word picture D with its paired glyph picture C, and a collected decorated special effect word picture D_w, with manually annotated glyphs, with its paired glyph picture C_w; the picture D is obtained by randomly adding decoration element pictures to collected or synthesized special effect word pictures, the synthesized ones being obtained by randomly compositing collected texture pictures onto glyph pictures containing no special effect.
Further, the mask extraction sub-network adopts a U-net network structure, and a discriminator network netSegD is built for judging the picture domain of its input; netSegD comprises four convolution modules, each comprising a convolution layer and a linear rectification function.
Further, the training method for the mask extraction sub-network comprises the following steps:
merging D and C in the channel dimension and inputting them into the mask extraction sub-network to obtain the decoration element mask M̂, while extracting the feature P of the penultimate layer;
merging D_w and C_w in the channel dimension and inputting them into the mask extraction sub-network to obtain the decoration element mask M̂_w, while extracting the feature P_w of the penultimate layer;
setting a loss function according to M̂, P and P_w to obtain the trained mask extraction sub-network.
Further, the basic special effect migration sub-network adopts a multi-scale framework structure and is trained sequentially under three resolutions of 64 × 64, 128 × 128 and 256 × 256.
Further, the training method of the basic special effect migration sub-network comprises the following steps:
inputting D_y, C_y and C_x, merged in the channel dimension, into the basic special effect migration sub-network to obtain the migration result S_x;
setting a loss function according to S_x to obtain the trained basic special effect migration sub-network.
Further, the element recombination method comprises the following steps:
optimizing M_y using DenseCRF;
scaling M_y to a certain resolution, clustering with DBSCAN, and obtaining the different decorative elements and their masks by finding connected shapes;
for each decorative element belonging to region E in C_y, searching for the suitable position E' in C_x, copying the decorative element from D_y to S_x there, and adjusting its size to obtain D_x.
Further, for each decorative element belonging to region E in C_y, the suitable position E' in C_x is obtained by maximizing, over candidate regions E', a matching score computed from M_guide(E), M̂_guide(E') and M_Exi(E'), where M_guide(·), M̂_guide(·) and M_Exi(·) respectively denote the sum of the values of M_guide, M̂_guide and M_Exi over the given region. M_guide is obtained as follows: compute the horizontal prior M_Hor and the vertical prior M_Ver on C_y, normalize them to the [0, 1] interval, and blur them with a Gaussian kernel; compute the distribution prior M_Dis on C_y; merge the blurred M_Hor and M_Ver with M_Dis in the channel dimension to obtain M_guide. M̂_guide is computed on C_x in the same way as M_guide.
An intelligent word effect migration system for special effect words comprises:
a mask extraction sub-network module, which takes as input a special effect word with decorative elements and its paired glyph picture and obtains the decoration element mask from them;
a basic special effect migration sub-network module, which takes as input the special effect word with decorative elements, its paired glyph picture and the target glyph picture, migrates the basic effect of the special effect word onto the target glyph, removes the decorative elements, and outputs the result;
an element recombination module, which uses the mask obtained by the mask extraction sub-network module to fuse the decorative elements onto the output of the basic special effect migration sub-network module, obtaining the migrated special effect word with decorative elements corresponding to the target glyph.
The method first obtains the decoration element mask through the mask extraction sub-network, separating the text effect from the decorative elements; it then performs basic text effect migration through the basic special effect migration sub-network; finally, it recombines the decorative elements with the migrated special effect word according to the distribution of the decorative elements, yielding the effect migration result. The method migrates the decorative elements of a text effect together with the effect itself, without loss or distortion of the decorative elements.
Drawings
Fig. 1 is a block diagram of a mask extraction sub-network used in the present invention.
Fig. 2 is a block diagram of a basic effect migration sub-network used in the present invention.
FIG. 3 is a block diagram of a smart word effect migration framework used in the present invention.
Detailed Description
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below.
The embodiment discloses an intelligent word effect migration method for special effect words, which is specifically described as follows:
step 1: a set of character-effect picture pair data sets is collected, and a group of picture pairs consists of a special-effect character picture and a corresponding character-shape picture without a special effect. The special effect words can be generated in batches through a Photoshop special effect word synthesis action. The data set contains 60 special effect character styles, each style contains 19 fonts of English capital and lower case letters, and the total number of the special effect character picture pairs is 988 fonts. The resolution of the special effect word picture and the font picture is 320 multiplied by 320. 85% of the picture pairs are divided as a training set, and 15% of the picture pairs are divided as a test set. 300 cartoon or geometric texture pictures are collected, and synthesized special effect words are obtained through random combination to be used as supplement of training. Thereafter, 4000 sheets of vector graphics were collected as decorative elements. In the network training process, the decorative element pictures are randomly added to the collected or synthesized special effect word pictures to synthesize special effect words with decorations, and the special effect words are used as a synthesized special effect word domain. 1000 special effect character pictures are collected from channels such as a network and the like, and characters are manually marked to serve as a real special effect character domain.
Step 2: construct the mask extraction sub-network.
The network structure is shown in Fig. 1. The mask extraction sub-network adopts a U-net network structure. In the synthetic special effect word domain, the synthesized decorated special effect word D and its paired glyph picture C are first merged in the channel dimension and input into the network; the mask extraction sub-network maps this input to the decoration element mask M̂, and the feature P of the penultimate layer is extracted at the same time. In the real special effect word domain, the decorated special effect word D_w and its paired glyph picture C_w are merged in the channel dimension and input into the network; the mask extraction sub-network maps this input to the decoration element mask M̂_w, and the feature P_w of the penultimate layer is extracted at the same time. Meanwhile, a discriminator network netSegD is built. netSegD consists of four convolution modules, each containing a convolution layer and a linear rectification function. netSegD outputs a judgment of the picture domain from which its input comes.
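For concreteness, a minimal PyTorch sketch of netSegD as described, four convolution modules of a convolution layer plus a linear rectification function; the channel widths, strides and the 1 × 1 scoring head are assumptions, since the patent does not specify them:

```python
import torch
import torch.nn as nn

class NetSegD(nn.Module):
    """Four conv + ReLU modules over penultimate-layer features, plus a small
    scoring head (the head and all channel widths are assumptions)."""
    def __init__(self, in_channels=512):
        super().__init__()
        widths = [in_channels, 256, 128, 64, 32]
        layers = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(widths[-1], 1, kernel_size=1)

    def forward(self, feats):
        logit = self.head(self.body(feats)).mean(dim=(1, 2, 3))
        return torch.sigmoid(logit)  # probability that the features come from the synthetic domain
```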
Step 3: train the mask extraction sub-network.
The total loss function L_mask consists of two parts:
L_mask = λ_seg·L_seg + λ_adv·L_adv,
where the parameter λ_seg is set to 1 and λ_adv is set to 0.01. L_seg is the mask extraction loss:
L_seg = λ_L1·||M̂ − M||_1 + λ_Per·Σ ||VGG(M̂) − VGG(M)||_1,
where λ_L1 is set to 1, λ_Per is set to 1, M is the labeled decoration element mask of the special effect word picture, and VGG(·) denotes the features extracted at the five layers ReLU1, ReLU2, ReLU3, ReLU4 and ReLU5 of a pre-trained VGG classification network, the perceptual term being summed over these five layers.
L_adv is the mask extraction adversarial loss:
L_adv = −log(netSegD(P_w)),
where netSegD(P_w) is the output of the discriminator netSegD given the input P_w. netSegD is trained to judge which picture domain the input features come from, with the loss function:
L_netSegD = −log(netSegD(P)) − log(1 − netSegD(P_w)).
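A sketch of these losses under the reconstruction above; `vgg_feats` is a hypothetical helper returning the ReLU1 to ReLU5 feature maps of a pre-trained VGG, and the L1 form of the perceptual distance is an assumption:

```python
import torch
import torch.nn.functional as F

def mask_extraction_losses(m_hat, m_gt, p_syn, p_real, net_seg_d, vgg_feats,
                           lam_seg=1.0, lam_adv=0.01, lam_l1=1.0, lam_per=1.0):
    """L_mask = lam_seg * L_seg + lam_adv * L_adv, with the weights given above."""
    # L_seg: L1 term plus a perceptual term over the five VGG ReLU stages.
    l_per = sum(F.l1_loss(a, b) for a, b in zip(vgg_feats(m_hat), vgg_feats(m_gt)))
    l_seg = lam_l1 * F.l1_loss(m_hat, m_gt) + lam_per * l_per

    eps = 1e-8
    # L_adv: make netSegD judge real-domain features P_w as synthetic-domain.
    l_adv = -torch.log(net_seg_d(p_real) + eps).mean()
    l_mask = lam_seg * l_seg + lam_adv * l_adv

    # netSegD's own loss: tell synthetic-domain features P from real-domain P_w.
    l_netsegd = (-torch.log(net_seg_d(p_syn) + eps)
                 - torch.log(1.0 - net_seg_d(p_real) + eps)).mean()
    return l_mask, l_netsegd
```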
and 4, step 4: and building a basic special effect migration network.
The network structure is as shown in fig. 2, a multi-scale frame structure is adopted, and the network is difficult to directly train on high resolution, so that a training strategy with gradually improved resolution is adopted, the training difficulty is gradually increased from easy to difficult, and specifically, the network is trained sequentially under three resolutions of 64 × 64, 128 × 128 and 256 × 256. Inputting the special effect word D with synthesized decoration elementsyWhich is paired with a font picture CyAnd target font picture Cx. The three input pictures are merged in a channel domain, and a migration result S is obtained after the three input pictures pass through a basic special effect migration networkx
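The coarse-to-fine schedule might look like the following sketch, where `make_loader` and `train_step` are hypothetical helpers and the epoch counts are illustrative assumptions:

```python
def train_progressively(net, make_loader, train_step, epochs=(20, 20, 20)):
    """Easy-to-hard schedule: train the same network at 64x64, then 128x128,
    then 256x256, reusing the weights across stages."""
    for res, n_epochs in zip((64, 128, 256), epochs):
        loader = make_loader(resolution=res)  # yields D_y, C_y, C_x, T_x at res x res
        for _ in range(n_epochs):
            for d_y, c_y, c_x, t_x in loader:
                train_step(net, d_y, c_y, c_x, t_x)
```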
Step 5: train the basic special effect migration network. The total loss function L_transfer consists of two parts, using WGAN-GP:
L_transfer = λ_L1·||S_x − T_x||_1 + λ_adv·L_adv,
where T_x denotes the real target special effect word in the data set, the parameter λ_L1 is set to 1 and λ_adv is set to 0.01. L_adv is the adversarial loss:
L_adv = E_{S_x}[D(S_x, D_y, C_y, C_x)] − E_{T_x}[D(T_x, D_y, C_y, C_x)] + λ_gp·E_{X̂}[(||∇_{X̂} D(X̂, D_y, C_y, C_x)||_2 − 1)²],
where E_{X∼Y}[·] denotes the mean over X drawn from the distribution Y, D represents the discriminator network, and D(a, b, c, d) represents the discrimination result for a given the inputs a, b, c, d. The discriminator network adopts the PatchGAN structure. X̂ is sampled uniformly on straight lines between T_x and S_x.
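A minimal sketch of the critic side of this loss; the gradient-penalty weight `lam_gp = 10` is the common WGAN-GP default rather than a value stated in the patent:

```python
import torch

def wgan_gp_critic_loss(critic, s_x, t_x, d_y, c_y, c_x, lam_gp=10.0):
    """E[D(S_x,...)] - E[D(T_x,...)] + lam_gp * gradient penalty, with the
    conditioning inputs D_y, C_y, C_x concatenated channel-wise."""
    def score(img):
        # D(a, b, c, d): discriminate `img` conditioned on D_y, C_y, C_x.
        return critic(torch.cat([img, d_y, c_y, c_x], dim=1)).mean()

    # X-hat: uniform samples on straight lines between real targets and results.
    alpha = torch.rand(s_x.size(0), 1, 1, 1, device=s_x.device)
    x_hat = (alpha * t_x + (1 - alpha) * s_x.detach()).requires_grad_(True)
    d_hat = critic(torch.cat([x_hat, d_y, c_y, c_x], dim=1)).sum()
    grads = torch.autograd.grad(d_hat, x_hat, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return score(s_x.detach()) - score(t_x) + lam_gp * gp
```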
Step 6: build the intelligent word effect migration system framework. The trained mask extraction sub-network and the basic special effect migration sub-network are combined as in Fig. 3. Within the system framework, the given special effect word D_y to be migrated, carrying decorative elements, and its paired glyph picture C_y are first input into the trained mask extraction sub-network to obtain the decoration element mask M_y. Then D_y, its paired glyph picture C_y and the target glyph picture C_x are input into the basic special effect migration sub-network to obtain the result S_x of basic effect migration with the decorative elements removed. Element recombination is then performed to obtain the final style-migrated special effect word D_x with decorative elements.
The element recombination part first computes the horizontal prior M_Hor from C_y. Define x_{y,min} as the pixel position of the leftmost foreground glyph pixel on each row of C_y, and x_{y,max} as the rightmost. An initial estimate of the horizontal variation of the glyph is generated from these positions with falloff width K_w = 0.06(x_{y,max} − x_{y,min}); M_Hor is thereafter generated from this estimate by recursion over the rows, centered on x_{y,center}, the pixel position of the center of the foreground glyph on each row. M_Hor is then normalized to the [0, 1] interval and blurred with a Gaussian kernel to obtain the final M_Hor.
Similarly, the vertical prior M_Ver is computed from C_y. Define y_{x,min} as the pixel position of the topmost foreground glyph pixel in each column of C_y, and y_{x,max} as the bottommost; an estimate of the vertical variation of the glyph is then generated from these positions with falloff width K'_w = 0.06(y_{x,max} − y_{x,min}). M_Ver is thereafter generated from this estimate by recursion over the columns, centered on y_{x,center}, the pixel position of the center of the foreground glyph in each column. M_Ver is then normalized to the [0, 1] interval and blurred with a Gaussian kernel to obtain the final M_Ver.
The distribution prior M_Dis is computed on C_y as
M_Dis = (1 − Dis(x, y))^5,
where Dis(x, y) denotes the glyph distribution prior calculated by Yang et al. (cf. Shuai Yang, Jiaying Liu, Zhouhui Lian, and Zongming Guo, "Awesome Typography: Statistics-Based Text Effects Transfer", Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, Jul. 2017).
M_Hor, M_Ver and M_Dis are merged in the channel dimension to obtain the total prior M_guide. Similarly, the total prior M̂_guide is computed on C_x. A superposition record matrix M_Exi is maintained: M_Exi(x, y) = 0 where a decorative element already occupies the position, and M_Exi(x, y) = 1 otherwise.
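A rough NumPy/SciPy illustration of the horizontal prior and the channel-wise guidance stack; since the exact per-row recursion of the original is not reproduced here, the center-distance falloff below is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def horizontal_prior(glyph_fg, sigma=5.0):
    """M_Hor sketch from a boolean foreground map: per row, a falloff around the
    glyph span with width K_w = 0.06 * (x_max - x_min), normalized to [0, 1]
    and Gaussian-blurred."""
    h, w = glyph_fg.shape
    m = np.zeros((h, w))
    xs = np.arange(w)
    for y in range(h):
        cols = np.flatnonzero(glyph_fg[y])
        if cols.size == 0:
            continue
        x_min, x_max = cols[0], cols[-1]
        k_w = max(0.06 * (x_max - x_min), 1e-6)
        x_center = 0.5 * (x_min + x_max)
        m[y] = np.abs(xs - x_center) / k_w  # grows away from the row's glyph center
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)  # normalize to [0, 1]
    return gaussian_filter(m, sigma)

def guide_map(m_hor, m_ver, dis):
    """Stack the priors channel-wise; M_Dis = (1 - Dis)^5 with Dis from Yang et al."""
    return np.stack([m_hor, m_ver, (1.0 - dis) ** 5], axis=0)
```

The vertical prior follows the same pattern over columns, e.g. `horizontal_prior(glyph_fg.T, sigma).T`.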
Element recombination first optimizes M_y using DenseCRF. M_y is then scaled down to one fifth of its resolution and clustered with DBSCAN, and the different decorative elements and their masks are obtained by finding connected shapes. For each decorative element, note the region E it occupies on C_y, and search for the suitable position E' on C_x by maximizing a matching score computed from M_guide(E), M̂_guide(E') and M_Exi(E'), where M_guide(·), M̂_guide(·) and M_Exi(·) respectively denote the sum of the values of M_guide, M̂_guide and M_Exi over the given region.
Each decorative element is copied from D_y to S_x at its matched position. If the overlap between the decorative element and the glyph shrinks after copying, the element is slightly shrunk and moved closer to the glyph; otherwise it is slightly enlarged and moved away from the glyph. This yields the final style-migrated special effect word D_x with decorative elements.
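A rough sketch of this stage with scikit-learn's DBSCAN, omitting the DenseCRF refinement and treating the guidance maps as single-channel for brevity; the matching score below (a free-space term minus the guidance-mass mismatch) is an assumption consistent with the description, not the patent's exact formula:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_decor_elements(decor_mask, eps=2.0, min_samples=4):
    """Group the decoration mask's foreground pixels into connected shapes,
    one cluster per decorative element."""
    pts = np.argwhere(decor_mask > 0.5)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return [pts[labels == k] for k in set(labels) if k >= 0]

def best_position(box, m_guide_src, m_guide_tgt, m_exi, stride=4):
    """Slide the element's box over the target picture and return the top-left
    of the region E' whose guidance mass best matches the source region E
    while preferring unoccupied pixels (M_Exi: 1 = free, 0 = occupied)."""
    y0, x0, y1, x1 = box
    h, w = y1 - y0, x1 - x0
    src_mass = m_guide_src[y0:y1, x0:x1].sum()
    best, best_score = (y0, x0), -np.inf
    H, W = m_guide_tgt.shape
    for ty in range(0, H - h + 1, stride):
        for tx in range(0, W - w + 1, stride):
            tgt_mass = m_guide_tgt[ty:ty + h, tx:tx + w].sum()
            free = m_exi[ty:ty + h, tx:tx + w].sum()
            score = free - abs(tgt_mass - src_mass)  # scoring combination: an assumption
            if score > best_score:
                best, best_score = (ty, tx), score
    return best
```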
Step 7: input the special effect word D_y to be migrated with decorative elements, its paired glyph picture C_y and the target glyph picture C_x into the system framework to obtain the final style-migrated special effect word D_x with decorative elements.
The above embodiment is intended only to illustrate the technical solution of the invention, not to limit it; a person skilled in the art may modify the technical solution or substitute equivalents without departing from the spirit and scope of the invention, and the scope of protection of the invention shall be determined by the claims.

Claims (10)

1. An intelligent word effect migration method for special effect words, comprising the following steps:
training a mask extraction sub-network on a training data set to extract decoration element masks, and training a basic special effect migration sub-network to migrate basic text effects;
inputting a special effect word D_y with decorative elements and its paired glyph picture C_y into the trained mask extraction sub-network to obtain the decorative element mask M_y;
inputting D_y, its paired glyph picture C_y and the target glyph picture C_x into the trained basic special effect migration sub-network to obtain the result S_x of basic effect migration with the decorative elements removed;
performing element recombination with M_y, C_y and C_x to fuse the decorative elements into S_x, obtaining the migrated special effect word D_x with decorative elements corresponding to the target glyph.
2. The method of claim 1, wherein the mask extraction sub-network is a U-net network structure, and a discriminator network netSegD is constructed for determining a picture domain in which the input picture is located, the netSegD comprising four convolution modules, each convolution module comprising a convolution layer and a linear rectification function.
3. The method of claim 2, wherein the training data set comprises a synthesized decorated special effect word picture D with its paired glyph picture C, and a collected decorated special effect word picture D_w, with manually annotated glyphs, with its paired glyph picture C_w; wherein the picture D is obtained by randomly adding decoration element pictures to collected or synthesized special effect word pictures, the synthesized ones being obtained by randomly compositing collected texture pictures onto glyph pictures containing no special effect.
4. The method of claim 3, wherein the training method of the mask extraction sub-network comprises the following steps:
merging D and C in the channel dimension and inputting them into the mask extraction sub-network to obtain the decoration element mask M̂, while extracting the feature P of the penultimate layer;
merging D_w and C_w in the channel dimension and inputting them into the mask extraction sub-network to obtain the decoration element mask M̂_w, while extracting the feature P_w of the penultimate layer;
setting a loss function according to M̂, P and P_w to obtain the trained mask extraction sub-network.
5. The method of claim 4, wherein the loss function is set according to M̂, P and P_w as follows:
the loss function comprises a mask extraction loss L_seg and a mask extraction adversarial loss L_adv, which compose the total loss L_mask:
L_mask = λ_seg·L_seg + λ_adv·L_adv,
wherein λ_seg is set to 1 and λ_adv is set to 0.01;
L_seg = λ_L1·||M̂ − M||_1 + λ_Per·Σ ||VGG(M̂) − VGG(M)||_1,
wherein λ_L1 is set to 1, λ_Per is set to 1, M is the labeled decoration element mask of the special effect word picture, and VGG(·) denotes the features extracted at the five layers ReLU1, ReLU2, ReLU3, ReLU4 and ReLU5 of a pre-trained VGG classification network, the perceptual term being summed over these five layers;
L_adv = −log(netSegD(P_w)),
wherein netSegD(P_w) is the output of the discriminator netSegD given the input P_w; netSegD is trained to judge which picture domain the input features come from, with the loss function:
L_netSegD = −log(netSegD(P)) − log(1 − netSegD(P_w)).
6. The method of claim 1, wherein the basic special effect migration sub-network adopts a multi-scale framework structure and is trained with a strategy of gradually increasing resolution, sequentially at the three resolutions 64 × 64, 128 × 128 and 256 × 256.
7. The method of claim 1 or 2, wherein the training of the basic special effect migration sub-network comprises the following steps:
inputting D_y, C_y and C_x, merged in the channel dimension, into the basic special effect migration sub-network to obtain the migration result S_x;
setting a loss function according to S_x to obtain the trained basic special effect migration sub-network.
8. The method of claim 1, wherein the element recombination comprises the following steps:
optimizing M_y using DenseCRF;
scaling M_y to a certain resolution, clustering with DBSCAN, and obtaining the different decorative elements and their masks by finding connected shapes;
for each decorative element belonging to region E in C_y, searching for the suitable position E' in C_x, copying the decorative element from D_y to S_x there, and adjusting its size to obtain D_x.
9. The method of claim 8, wherein for each decorative element belonging to region E in C_y, the suitable position E' in C_x is obtained by maximizing, over candidate regions E', a matching score computed from M_guide(E), M̂_guide(E') and M_Exi(E'), wherein M_guide(·), M̂_guide(·) and M_Exi(·) respectively denote the sum of the values of M_guide, M̂_guide and M_Exi over the given region;
M_guide is obtained as follows: computing the horizontal prior M_Hor and the vertical prior M_Ver on C_y, normalizing them to the [0, 1] interval and blurring them with a Gaussian kernel; computing the distribution prior M_Dis on C_y; and merging the blurred M_Hor and M_Ver with M_Dis in the channel dimension to obtain M_guide;
M̂_guide is computed on C_x in the same way as M_guide.
10. An intelligent word effect migration system for special effect words, comprising:
a mask extraction sub-network module, which takes as input a special effect word with decorative elements and its paired glyph picture and obtains the decoration element mask from them;
a basic special effect migration sub-network module, which takes as input the special effect word with decorative elements, its paired glyph picture and the target glyph picture, migrates the basic effect of the special effect word onto the target glyph, removes the decorative elements, and outputs the result;
an element recombination module, which uses the mask obtained by the mask extraction sub-network module to fuse the decorative elements onto the output of the basic special effect migration sub-network module, obtaining the migrated special effect word with decorative elements corresponding to the target glyph.
CN201910440039.8A (priority date 2019-05-24, filing date 2019-05-24): Intelligent word effect migration method and system for special effect words. Active; granted as CN112069769B.

Priority Applications (1)

Application Number: CN201910440039.8A; Priority Date: 2019-05-24; Filing Date: 2019-05-24; Title: Intelligent word effect migration method and system for special effect words

Publications (2)

Publication Number: CN112069769A; Publication Date: 2020-12-11
Publication Number: CN112069769B; Publication Date: 2022-07-26

Family

ID=73658095

Family Applications (1): CN201910440039.8A (priority date 2019-05-24, filing date 2019-05-24), Active, granted as CN112069769B

Country Status (1): CN

Patent Citations (2)

* Cited by examiner, † Cited by third party

CN107577651A * (priority 2017-08-25, published 2018-01-12): Chinese character style migration system based on adversarial networks
CN109146989A * (priority 2018-07-10, published 2019-01-04): Method for generating bird-and-flower decorative character images by building a neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

LI, YH et al.: "Adaptive Batch Normalization for practical domain adaptation", Pattern Recognition, 28 December 2018 *
蔡湧达 et al.: "基于神经风格迁移的字体特效渲染技术" ("Font special effect rendering technology based on neural style transfer"), 电脑知识与技术 (Computer Knowledge and Technology), 28 February 2019 *

Also Published As

Publication Number: CN112069769B; Publication Date: 2022-07-26

Similar Documents

Publication Title
CN104732506B Portrait photo color style transfer method based on face semantic analysis
CN108846358B Target tracking method based on twin-network feature fusion
CN110929602B Ground-based cloud image recognition method based on a convolutional neural network
CN107844795B Convolutional neural network feature extraction method based on principal component analysis
CN109902748A Image semantic segmentation method based on a multi-layer information fusion fully convolutional neural network
CN108898145A Image salient object detection method combined with deep learning
CN108345850A Scene text detection method based on superpixel region classification with stroke feature transform and deep learning
CN109285162A Image semantic segmentation method based on a local-area conditional random field model
CN109064522A Chinese character style generation method based on conditional generative adversarial networks
CN112967218B Multi-scale image restoration system based on wireframe and edge structure
CN105550712B Aurora image classification method based on an optimized convolutional autoencoder network
CN109359527A Neural-network-based hair region extraction method and system
CN112950477A High-resolution salient object detection method based on dual-path processing
CN109086777A Saliency map refinement method based on global pixel features
CN114742714A Chinese character image restoration algorithm based on skeleton extraction and adversarial learning
CN111881716A Pedestrian re-identification method based on a multi-view generative adversarial network
CN107506792A Semi-supervised salient object detection method
CN110826534B Face keypoint detection method and system based on local principal component analysis
CN110751271B Image provenance feature characterization method based on a deep neural network
CN110472591B Occluded pedestrian re-identification method based on deep feature reconstruction
CN111079549B Cartoon face recognition method using gated fusion of discriminative features
CN111428795A Improved non-convex robust principal component analysis method
CN107832753A Face feature extraction method based on four-value weights and multiple classifications
CN114387610A Arbitrary-shape scene text detection method based on an enhanced feature pyramid network
Guo et al. Decoupling semantic and edge representations for building footprint extraction from remote sensing images

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant