CN109359550B - Manchu document seal extraction and removal method based on deep learning technology - Google Patents


Info

Publication number
CN109359550B
CN109359550B CN201811100870.0A
Authority
CN
China
Prior art keywords
seal
network
layer
manchu
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811100870.0A
Other languages
Chinese (zh)
Other versions
CN109359550A (en
Inventor
贺建军
卢海涛
郑蕊蕊
刘文鹏
周建云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Minzu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Minzu University filed Critical Dalian Minzu University
Priority to CN201811100870.0A priority Critical patent/CN109359550B/en
Publication of CN109359550A publication Critical patent/CN109359550A/en
Application granted granted Critical
Publication of CN109359550B publication Critical patent/CN109359550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/413 Classification of content, e.g. text, photographs or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A Manchu document seal extraction and removal method based on deep learning technology belongs to the field of minority-language document image detection and recognition. The key technical points are: preprocessing the Manchu document image; training and saving a generative adversarial network for seal extraction; and training and saving a generative adversarial network for seal removal. The method can extract the seal region from a Manchu document to the maximum extent, removing the document characters beneath the seal from the extracted image; it can also remove the seal, so that the Manchu text beneath it is recovered to the maximum extent.

Description

Manchu document seal extraction and removal method based on deep learning technology
Technical Field
The invention belongs to the field of minority document image detection and identification, and particularly relates to a Manchu document seal extraction and removal method based on a deep learning technology.
Background
Most Manchu archives are single, solitary or rare copies, and long-term, high-frequency use inevitably damages the originals to some extent. To preserve these precious Manchu archives permanently, digitization of Manchu ancient books is the trend: the archives can be stored as images, which makes research on and utilization of Manchu document images urgent. Because few people can read Manchu, research on it has largely stalled, so studying Manchu documents by combining computers and deep learning helps advance the development and utilization of Manchu archives. On the one hand, a seal in a Manchu document can reflect important information such as the author of the document or the institution it belongs to, and seals in Manchu ancient books are also an important basis for appraising an archive's value and for analyzing and researching its content. Extracting seal-related information from Manchu document images is therefore necessary for the research, analysis and utilization of Manchu documents. On the other hand, a seal stamped over the Manchu text hinders recognition of the document content, and the seal acts as noise in text-line segmentation and character segmentation research, so removing the seal from a Manchu document while retaining the Manchu characters beneath it is also significant.
In the prior art, images are mostly processed manually one at a time, for example with software such as Photoshop (PS), which is time-consuming, complex and gives poor results.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a Manchu document seal extraction and removal method based on deep learning technology, which can extract the seal region from a Manchu document to the maximum extent, removing the document characters beneath the seal from the extracted image; it can also remove the seal, so that the Manchu text beneath it is recovered to the maximum extent.
The technical scheme is as follows:
a method for extracting and removing a seal of a Manchu document based on a deep learning technology comprises the following steps:
s1, preprocessing the Manchu document image;
s2, training and saving the generative adversarial network for seal extraction;
s3, training and saving the generative adversarial network for seal removal.
Further, the preprocessing in step S1 includes the following specific steps:
s1.1, carrying out normalization operation on the image;
s1.2, preparing the data for the seal-extraction generative adversarial network and for the seal-removal generative adversarial network.
Further, the specific steps in step S2 and/or step S3 are as follows:
s2.1, constructing a generator G1 with a U-net structure;
s2.2, in generator G1, subjecting the input image to several rounds of convolution, LeakyReLU operation and BN layer operation to obtain several network layers;
s2.3, in step S2.2, subjecting the last network layer to an UpSampling2D operation, convolution, a Dropout layer operation, a BN layer operation and concatenation with the penultimate network layer to obtain a u1 layer;
s2.4, obtaining the network output image from the u1 layer through several UpSampling2D operations, convolutions, Dropout layer operations, BN layer operations and layer-concatenation operations;
s2.5, constructing a discriminator D1 with a two-class convolutional neural network, and sending the network output image generated by the generator G1 and the real picture into the discriminator D1;
s2.6, defining an objective loss function:
$G_1^* = \arg\min_{G_1}\max_{D_1}\ \mathbb{E}_{x,y}[\log D_1(x,y)] + \mathbb{E}_{x,z}[\log(1 - D_1(x, G_1(x,z)))] + \lambda\,\mathbb{E}_{x,y,z}[\lVert y - G_1(x,z)\rVert_1]$
wherein: x is the image matrix to be processed, y is its supervision image matrix, z is a random matrix conforming to a Gaussian distribution with the same size as x and y, each pixel of the z matrix takes values in [0, 255], D1 is the discriminator, G1 is the generator, and λ is an adjustable hyper-parameter with value range [0, 1].
And S2.7, storing the trained confrontation generation network model.
Further, the method also comprises the following steps:
S4, verifying and testing the generative adversarial network models.
Further, a verification set composed of the seal-extraction network's data is used to verify the seal-extraction generative adversarial network model; the network parameters are fine-tuned to obtain the final seal-extraction model, and the new model is saved for testing new samples. Likewise, a verification set composed of the seal-removal network's data is used to verify the seal-removal generative adversarial network model; the network parameters are fine-tuned to obtain the final seal-removal model, and the new model is saved for testing new samples.
The invention has the beneficial effects that:
the method for extracting and removing the seal of the Manchu document based on the deep learning technology can furthest extract the seal area on the Manchu document so as to remove characters under the seal; the seal can also be removed, so that the Manchu text part under the seal can be recovered to the maximum extent.
Drawings
FIG. 1 is a schematic diagram of a network structure for extracting a stamp region with a U-Net structure according to the present invention;
FIG. 2 is a schematic diagram of a network structure for removing a stamp region having a U-Net structure according to the present invention;
fig. 3 is a flowchart of embodiment 2 of the present invention.
Detailed Description
The method for extracting and removing the seal of a Manchu document based on deep learning technology will be further explained with reference to FIGS. 1-3.
Example 1
The Manchu document seal extraction and removal method based on a deep generative adversarial network comprises four parts: Manchu document image preprocessing; training and saving the generative adversarial network for extracting the seal region; training and saving the generative adversarial network for removing the seal; and testing on unseen samples.
Step 1: manchu document image preprocessing
1.1 normalization
Preprocess the pictures collected by scanning and unify their size to 2048 × 2992. The normalized size is adjustable; an appropriate size should be chosen according to the sizes of all acquired images to avoid serious picture distortion.
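The normalization step can be sketched as follows. This is an illustrative example only: the nearest-neighbor sampling and the 2048-wide by 2992-high orientation are assumptions, since the text only states the unified size 2048 × 2992.

```python
import numpy as np

def normalize_size(img, out_h=2992, out_w=2048):
    """Resize an H x W x C image to a fixed size with nearest-neighbor
    sampling, so every scanned page shares one resolution."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows[:, None], cols]

# A toy scanned page at half the target resolution.
page = np.random.randint(0, 256, (1496, 1024, 3), dtype=np.uint8)
normalized = normalize_size(page)
print(normalized.shape)
```

In practice an interpolating resize (e.g. bilinear) from an image library would give smoother results; the point here is only that all inputs are mapped to one fixed size before training.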
1.2 data preparation
Data 1: data prepared for training the generative adversarial network that extracts the seal region. Each input group consists of a sealed image and its supervision image, i.e. the seal-region-only image; there are 100 groups in total.
Data 2: data prepared for training the generative adversarial network that recovers the document without the seal. Each input group consists of a sealed image and its supervision image, i.e. the document image without the seal; there are 100 groups in total.
Step 2: training and saving the generative adversarial network for extracting the seal region
2.1 Constructing the seal-region-extraction generative adversarial network Net1
The generative adversarial network consists of two sub-networks: the generator G1 and the discriminator D1.
Generator G1: using an encoder-decoder network with a U-net structure, the generator shown in FIG. 1 is constructed, where d0 is the input image. d0 undergoes convolution (64 kernels of 4 × 4, stride 2) and a LeakyReLU operation to obtain d1; d1 undergoes convolution (128 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d2; d2 undergoes convolution (256 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d3; d3 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d4; d4 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d5; d5 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d6; d6 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d7. d7 then undergoes UpSampling2D (size = 2), convolution (512 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d6 to obtain u1; u1 undergoes UpSampling2D (size = 2), convolution (512 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d5 to obtain u2; u2 undergoes UpSampling2D (size = 2), convolution (512 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d4 to obtain u3; u3 undergoes UpSampling2D (size = 2), convolution (256 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d3 to obtain u4; u4 undergoes UpSampling2D (size = 2), convolution (128 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d2 to obtain u5; u5 undergoes UpSampling2D (size = 2), convolution (128 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d1 to obtain u6; finally, u6 undergoes UpSampling2D (size = 2) and convolution (3 kernels of 4 × 4, stride 1) to obtain u7, the network output picture.
Discriminator D1: a two-class convolutional neural network. The picture generated by the generator is sent into discriminator D1 together with the real picture. D1 has the structure: convolutional layer (64 kernels of 4 × 4, stride 2) → LeakyReLU → convolutional layer (128 kernels of 4 × 4, stride 2) → LeakyReLU → BN layer (momentum = 0.8) → convolutional layer (256 kernels of 4 × 4, stride 2) → LeakyReLU → BN layer (momentum = 0.8) → convolutional layer (512 kernels of 4 × 4, stride 2) → LeakyReLU → BN layer (momentum = 0.8) → convolutional layer (1 kernel of 4 × 4, stride 1).
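As a sanity check on the layer chain above, the spatial sizes of d1-d7 and u1-u7 can be traced arithmetically. The 256 × 256 input side below is an assumed example (the text normalizes whole pages to 2048 × 2992 but does not state the network's input size), and "same"-style padding of 1 for the 4 × 4 stride-2 convolutions is also an assumption.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Output side length of a padded strided convolution."""
    return (size + 2 * pad - kernel) // stride + 1

size = 256  # assumed input side, not specified in the text
encoder = []
for _ in range(7):  # d1 .. d7: each 4x4 stride-2 convolution halves the side
    size = conv_out(size)
    encoder.append(size)

# u1 .. u7: each UpSampling2D(size=2) doubles the side back, and the
# upsampled u_i matches the encoder layer it is concatenated with.
decoder = [2 * s for s in reversed(encoder)]
print(encoder, decoder)
```

The trace shows why the skip concatenations are well-defined: u1 (4 × 4) matches d6, u2 matches d5, and so on, until u7 restores the input resolution.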
2.2 training the constructed network
Defining an objective loss function:
$G_1^* = \arg\min_{G_1}\max_{D_1}\ \mathbb{E}_{x,y}[\log D_1(x,y)] + \mathbb{E}_{x,z}[\log(1 - D_1(x, G_1(x,z)))] + \lambda\,\mathbb{E}_{x,y,z}[\lVert y - G_1(x,z)\rVert_1]$
Using generator G1 and discriminator D1 constructed in 2.1, train iteratively with the above equation as the objective function, using the Adam stochastic-gradient-descent optimizer. The number of iterations (epochs) is set to 10000.
The above equation consists of adversarial terms and a reconstruction term. Here x is the image matrix to be processed, y is its supervision image matrix, and z is a random matrix drawn from a Gaussian distribution with the same size as x and y; each pixel of the z matrix takes values in [0, 255]. x is the collected Manchu document with the seal; in the seal-extraction task, y is the supervision image containing only the seal and no document characters, while in the seal-removal task, y is the supervision image containing only document characters and no seal. G1(x, z) is the image generated by G1 from inputs x and z; D1 is the discriminator, D1(x, y) is the discriminator loss for the real pair (x, y), and D1(x, G1(x, z)) is the discriminator loss for x paired with the generated image G1(x, z). The last term is the reconstruction loss, i.e. the L1 distance between the supervision image y and the generated image; λ is an adjustable hyper-parameter with value range [0, 1], and E denotes expectation with respect to its subscript.
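The loss terms just described can be written down directly. The sketch below is a minimal numpy illustration under stated assumptions: the λ value, the array sizes and the discriminator outputs are made-up toy numbers, and in a real training loop these losses would drive the Adam updates of G1 and D1 rather than being evaluated once.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    """Adversarial terms: log D1(x, y) + log(1 - D1(x, G1(x, z)))."""
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, y, g_out, lam=0.5, eps=1e-8):
    """Generator objective: adversarial term plus lambda times the L1
    distance between the supervision image y and the generated image."""
    adv = np.mean(np.log(1.0 - d_fake + eps))  # generator minimizes this
    l1 = np.mean(np.abs(y - g_out))            # reconstruction term
    return adv + lam * l1

y = np.zeros((4, 4))            # toy supervision image
g_out = np.full((4, 4), 0.25)   # toy generated image
d_fake = np.full((4, 4), 0.5)   # toy discriminator outputs on fakes
print(generator_loss(d_fake, y, g_out, lam=0.5))
```

The eps guard keeps the logarithms finite when the discriminator saturates at 0 or 1, a standard numerical precaution not spelled out in the text.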
2.3 saving the trained confrontation generation network model of the extracted seal area
Set a threshold and save the currently trained network model whenever the loss (the current loss on the training data) falls below it. Several trained models are saved in the end, so that models with a better effect on the test set are retained.
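The threshold-based checkpointing described above can be sketched as follows; the model names and the list that stands in for writing weight files are illustrative assumptions.

```python
def maybe_save(loss, threshold, epoch, saved):
    """Record a checkpoint whenever the current training loss is below the
    threshold; several candidate models accumulate for later validation."""
    if loss < threshold:
        saved.append(("model_epoch_%d" % epoch, loss))  # stand-in for saving weights
    return saved

saved = []
for epoch, loss in enumerate([0.9, 0.4, 0.6, 0.2, 0.35]):  # toy loss curve
    maybe_save(loss, threshold=0.5, epoch=epoch, saved=saved)
print([name for name, _ in saved])
```

Keeping every sub-threshold checkpoint, rather than only the last one, is what lets the validation step later pick the model that generalizes best.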
Step 3: training and saving the generative adversarial network for seal removal
The network structure is consistent with that of step 2, but the network is completely independent of it: independent inputs, independent training, and all parameters set independently.
3.1 Constructing the seal-removal generative adversarial network Net2
The network structure is the same as in 2.1; only the output target image differs.
Generator G2: d0 is the input image. d0 undergoes convolution (64 kernels of 4 × 4, stride 2) and a LeakyReLU operation to obtain d1; d1 undergoes convolution (128 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d2; d2 undergoes convolution (256 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d3; d3 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d4; d4 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d5; d5 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d6; d6 undergoes convolution (512 kernels of 4 × 4, stride 2), LeakyReLU and a BN layer to obtain d7. d7 then undergoes UpSampling2D (size = 2), convolution (512 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d6 to obtain u1; u1 undergoes UpSampling2D (size = 2), convolution (512 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d5 to obtain u2; u2 undergoes UpSampling2D (size = 2), convolution (512 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d4 to obtain u3; u3 undergoes UpSampling2D (size = 2), convolution (256 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d3 to obtain u4; u4 undergoes UpSampling2D (size = 2), convolution (128 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d2 to obtain u5; u5 undergoes UpSampling2D (size = 2), convolution (128 kernels of 4 × 4, stride 1), a Dropout layer, a BN layer and concatenation with d1 to obtain u6; finally, u6 undergoes UpSampling2D (size = 2) and convolution (3 kernels of 4 × 4, stride 1) to obtain u7, the network output picture.
Discriminator D2: a two-class convolutional neural network. The picture generated by the generator is sent into discriminator D2 together with the real picture. D2 has the structure: convolutional layer (64 kernels of 4 × 4, stride 2) → LeakyReLU → convolutional layer (128 kernels of 4 × 4, stride 2) → LeakyReLU → BN layer (momentum = 0.8) → convolutional layer (256 kernels of 4 × 4, stride 2) → LeakyReLU → BN layer (momentum = 0.8) → convolutional layer (512 kernels of 4 × 4, stride 2) → LeakyReLU → BN layer (momentum = 0.8) → convolutional layer (1 kernel of 4 × 4, stride 1).
3.2 training the constructed network
Using generator G2 and discriminator D2 constructed in 3.1, train iteratively with the loss equation in 2.2 as the objective function, using the Adam optimizer. The number of iterations (epochs) is set to 10000.
3.3 Saving the trained generative adversarial network model for seal removal
Set a threshold and save the currently trained network Model2 whenever the loss falls below it.
Step 4: verification and testing
Verify the network model saved in step 2.3 with the verification set from Data 1, fine-tune the network parameters to obtain the final Model1 for extracting the Manchu seal region, and save Model1 for testing new samples.
Verify the network model saved in 3.3 with the verification set from Data 2, fine-tune the network parameters to obtain the final Model2 for seal removal, and save Model2 for testing new samples.
Example 2
1. Data preparation and preprocessing
(1) Manchu document images can be collected from Manchu ancient books by scanning, photographing and similar means.
(2) Preprocess the images and normalize their size.
(3) Prepare the data for the seal-region-extraction network: pairs of a sealed document image and a target image containing only the seal region.
(4) Prepare the data for the seal-removal network: pairs of a sealed document image and a target image containing only the document.
2. Building the generative adversarial networks
The generative adversarial network Net1 for seal extraction is composed of generator G1 (shown in FIG. 1) and discriminator D1; the generative adversarial network Net2 for seal removal is composed of generator G2 (shown in FIG. 2) and discriminator D2.
The generative adversarial networks can be built on an open-source platform such as TensorFlow or Keras.
3. Setting parameters
epochs: the total number of training iterations.
batch_size = 1: the number of samples used in each iteration.
Choice of optimizer: Adam (adaptive moment estimation). In probability theory, if a random variable X obeys some distribution, the first moment of X is E(X), the sample mean, and the second moment of X is E(X^2), the mean of the squared samples. The Adam algorithm dynamically adjusts the learning rate of each parameter from the first- and second-moment estimates of that parameter's loss gradient. Adam is also a gradient-descent-based method, but the learning step of each parameter at each iteration stays within a bounded range, so a large gradient cannot cause a large step and the parameter values remain stable.
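The moment-based update described above can be sketched in a few lines. The hyper-parameter values below are the commonly used defaults, not values taken from this text, and the quadratic toy objective is purely illustrative.

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates, bias
    correction, then a step whose magnitude stays bounded near lr."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad         # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2    # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# Minimizing f(x) = x^2: the normalized step keeps the update size near
# lr even though the initial gradient 2x is large, as the text describes.
x, state = 5.0, (0.0, 0.0, 0)
for _ in range(2000):
    x, state = adam_step(x, 2 * x, state, lr=0.05)
print(abs(x))
```

This bounded-step behavior is exactly why the text notes that a large gradient cannot cause a large learning step.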
4. Start training and saving the model with the back-propagation optimizer. In Net1, discriminator D1 is trained first; D1 discriminates whether an input picture is real or produced by the generator. The generator is then trained to produce the extracted-seal picture, after which the generator and discriminator learn adversarially to achieve a good result. Finally, the trained generator model is saved.
5. Verification and testing
Verify with pictures that did not appear in the training set, and adjust the model parameters according to the achieved accuracy to improve it continuously.
The above description is only a preferred embodiment of the present invention, but the scope of the invention is not limited thereto; any equivalent substitution or modification that a person skilled in the art can readily conceive within the technical scope disclosed herein falls within the scope of the present invention.

Claims (4)

1. A Manchu document seal extraction and removal method based on deep learning technology is characterized by comprising the following steps:
s1, preprocessing the Manchu document image;
s2, training and saving the generative adversarial network for seal extraction:
s2.1, constructing a generator G1 with a U-net structure;
s2.2, in generator G1, subjecting the input image to several rounds of convolution, LeakyReLU operation and BN layer operation to obtain several network layers;
s2.3, in step S2.2, subjecting the last network layer to an UpSampling2D operation, convolution, a Dropout layer operation, a BN layer operation and concatenation with the penultimate network layer to obtain a u1 layer;
s2.4, obtaining the network output image from the u1 layer through several UpSampling2D operations, convolutions, Dropout layer operations, BN layer operations and layer-concatenation operations;
s2.5, constructing a discriminator D1 as a two-class convolutional neural network, and sending the network output image generated by generator G1 together with the real picture into discriminator D1;
s2.6, defining an objective loss function:
$G_1^* = \arg\min_{G_1}\max_{D_1}\ \mathbb{E}_{x,y}[\log D_1(x,y)] + \mathbb{E}_{x,z}[\log(1 - D_1(x, G_1(x,z)))] + \lambda\,\mathbb{E}_{x,y,z}[\lVert y - G_1(x,z)\rVert_1]$
wherein: x is the image matrix to be processed, y is its supervision image matrix, z is a random matrix conforming to a Gaussian distribution with the same size as x and y, each pixel of the z matrix takes values in [0, 255], D1 is the discriminator, G1 is the generator, and λ is an adjustable hyper-parameter with value range [0, 1];
s2.7, storing the trained generative adversarial network model;
s3, training and saving the generative adversarial network for seal removal:
s3.1, constructing a generator G1 with a U-net structure;
s3.2, in generator G1, subjecting the input image to several rounds of convolution, LeakyReLU operation and BN layer operation to obtain several network layers;
s3.3, in step S3.2, subjecting the last network layer to an UpSampling2D operation, convolution, a Dropout layer operation, a BN layer operation and concatenation with the penultimate network layer to obtain a u1 layer;
s3.4, obtaining the network output image from the u1 layer through several UpSampling2D operations, convolutions, Dropout layer operations, BN layer operations and layer-concatenation operations;
s3.5, constructing a discriminator D1 as a two-class convolutional neural network, and sending the network output image generated by generator G1 together with the real picture into discriminator D1;
s3.6, defining an objective loss function:
$G_1^* = \arg\min_{G_1}\max_{D_1}\ \mathbb{E}_{x,y}[\log D_1(x,y)] + \mathbb{E}_{x,z}[\log(1 - D_1(x, G_1(x,z)))] + \lambda\,\mathbb{E}_{x,y,z}[\lVert y - G_1(x,z)\rVert_1]$
wherein: x is the image matrix to be processed, y is its supervision image matrix, z is a random matrix conforming to a Gaussian distribution with the same size as x and y, each pixel of the z matrix takes values in [0, 255], D1 is the discriminator, G1 is the generator, and λ is an adjustable hyper-parameter with value range [0, 1];
s3.7, storing the trained generative adversarial network model.
2. The method for extracting and removing a seal of a Manchu document based on deep learning technology according to claim 1, wherein the preprocessing in step S1 comprises the following steps:
s1.1, carrying out normalization operation on the image;
s1.2, preparing the data for the seal-extraction generative adversarial network and for the seal-removal generative adversarial network.
3. The method for extracting and removing a seal of a Manchu document based on deep learning technology according to claim 1, further comprising the steps of:
S4, verifying and testing the generative adversarial network models.
4. The method for extracting and removing a seal of a Manchu document based on deep learning technology according to claim 2, wherein a verification set composed of the seal-extraction network's data is used to verify the seal-extraction generative adversarial network model; the network parameters are fine-tuned to obtain the final seal-extraction model, and the new model is saved for testing new samples. Likewise, a verification set composed of the seal-removal network's data is used to verify the seal-removal generative adversarial network model; the network parameters are fine-tuned to obtain the final seal-removal model, and the new model is saved for testing new samples.
CN201811100870.0A 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology Active CN109359550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811100870.0A CN109359550B (en) 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811100870.0A CN109359550B (en) 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology

Publications (2)

Publication Number Publication Date
CN109359550A CN109359550A (en) 2019-02-19
CN109359550B true CN109359550B (en) 2021-06-22

Family

ID=65351009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811100870.0A Active CN109359550B (en) 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology

Country Status (1)

Country Link
CN (1) CN109359550B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516577B (en) * 2019-08-20 2022-07-12 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110516201B (en) * 2019-08-20 2023-03-28 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112183538B (en) * 2020-11-30 2021-03-02 华南师范大学 Manchu recognition method and system
CN112801911B (en) * 2021-02-08 2024-03-26 苏州长嘴鱼软件有限公司 Method and device for removing text noise in natural image and storage medium
CN113065407B (en) * 2021-03-09 2022-07-12 国网河北省电力有限公司 Financial bill seal erasing method based on attention mechanism and generation countermeasure network
CN112950458B (en) * 2021-03-19 2022-06-21 润联软件系统(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787510A (en) * 2016-02-26 2016-07-20 华东理工大学 System and method for realizing subway scene classification based on deep learning
CN107220506A (en) * 2017-06-05 2017-09-29 东华大学 Breast cancer risk assessment analysis system based on depth convolutional neural networks
CN108470196A (en) * 2018-02-01 2018-08-31 华南理工大学 A method of handwritten numeral is generated based on depth convolution confrontation network model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9589210B1 (en) * 2015-08-26 2017-03-07 Digitalglobe, Inc. Broad area geospatial object detection using autogenerated deep learning models

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787510A (en) * 2016-02-26 2016-07-20 华东理工大学 System and method for realizing subway scene classification based on deep learning
CN107220506A (en) * 2017-06-05 2017-09-29 东华大学 Breast cancer risk assessment analysis system based on depth convolutional neural networks
CN108470196A (en) * 2018-02-01 2018-08-31 华南理工大学 A method of handwritten numeral is generated based on depth convolution confrontation network model

Also Published As

Publication number Publication date
CN109359550A (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN109359550B (en) Manchu document seal extraction and removal method based on deep learning technology
CN108985317B (en) Image classification method based on separable convolution and attention mechanism
CN106096535B (en) Face verification method based on bilinear joint CNN
CN108520503A (en) A method of based on self-encoding encoder and generating confrontation network restoration face Incomplete image
CN109948692B (en) Computer-generated picture detection method based on multi-color space convolutional neural network and random forest
WO2022247005A1 (en) Method and apparatus for identifying target object in image, electronic device and storage medium
CN110414350A (en) The face false-proof detection method of two-way convolutional neural networks based on attention model
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN106503661B (en) Face gender identification method based on fireworks deepness belief network
CN111260568B (en) Peak binarization background noise removing method based on multi-discriminator countermeasure network
CN110633655A (en) Attention-attack face recognition attack algorithm
CN111476727B (en) Video motion enhancement method for face-changing video detection
CN111126169A (en) Face recognition method and system based on orthogonalization graph regular nonnegative matrix decomposition
CN110826534A (en) Face key point detection method and system based on local principal component analysis
CN114882278A (en) Tire pattern classification method and device based on attention mechanism and transfer learning
CN114155572A (en) Facial expression recognition method and system
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
CN109522865A (en) A kind of characteristic weighing fusion face identification method based on deep neural network
CN111652238B (en) Multi-model integration method and system
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
Khudher LSB steganography strengthen footprint biometric template
CN110334710A (en) Legal documents recognition methods, device, computer equipment and storage medium
CN113205044B (en) Deep fake video detection method based on characterization contrast prediction learning
CN111832498B (en) Cartoon face recognition method based on convolutional neural network
CN112733670B (en) Fingerprint feature extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant