CN109359550A - Manchu document seal extraction and removal method based on deep learning technology - Google Patents

Manchu document seal extraction and removal method based on deep learning technology Download PDF

Info

Publication number
CN109359550A
CN109359550A CN201811100870.0A CN201811100870A CN109359550A
Authority
CN
China
Prior art keywords
seal
network
language
adversarial
Manchu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811100870.0A
Other languages
Chinese (zh)
Other versions
CN109359550B (en)
Inventor
贺建军
卢海涛
郑蕊蕊
刘文鹏
周建云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Minzu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Minzu University filed Critical Dalian Minzu University
Priority to CN201811100870.0A priority Critical patent/CN109359550B/en
Publication of CN109359550A publication Critical patent/CN109359550A/en
Application granted granted Critical
Publication of CN109359550B publication Critical patent/CN109359550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40Document-oriented image-based pattern recognition
    • G06V30/41Analysis of document content
    • G06V30/413Classification of content, e.g. text, photographs or tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A Manchu document seal extraction and removal method based on deep learning technology, belonging to the field of ethnic-minority document image detection and recognition. The key technical steps are as follows: preprocess the Manchu document image; train and save a generative adversarial network (GAN) for seal extraction; train and save a GAN for seal removal. Beneficial effects: the Manchu document seal extraction and removal method based on deep learning technology can extract the seal region on a Manchu document to the greatest extent, removing the text beneath the seal; it can also remove the seal, restoring to the greatest extent the Manchu text beneath it.

Description

Manchu document seal extraction and removal method based on deep learning technology
Technical field
The invention belongs to the field of ethnic-minority document image detection and recognition, and more particularly relates to a Manchu document seal extraction and removal method based on deep learning technology.
Background art
Most Manchu archives exist only as single copies, sole surviving manuscripts, or rare items, and prolonged, high-frequency use inevitably damages the originals. To preserve these precious Manchu archives, digitization of Manchu ancient books and archives is the trend: the archives can be preserved as images, so research on and utilization of Manchu archive images is urgent. In Manchu studies, the shortage of researchers who understand Manchu has largely stalled research on the language; continuing the study of Manchu documents with methods that combine computing and deep learning therefore helps advance the exploitation and utilization of Manchu archives. On one hand, the seal on a Manchu document reflects important information such as the author or ownership of the document, and seals in Manchu ancient books are also important evidence for appraising an archive's value and for analyzing its content. Extracting seal-related information from Manchu document images is thus necessary for the study, analysis, and utilization of Manchu documents. On the other hand, seals are often stamped over the Manchu text, which hinders recognition of the full document content; for research fields such as text-line segmentation and character segmentation, the seal is noise. Removing the seal from a Manchu document while retaining the Manchu characters beneath it is therefore highly significant.
In the prior art, images are mostly processed manually, one at a time, for example by removing the seal with software such as Photoshop; this is not only time-consuming and cumbersome but also ineffective.
Summary of the invention
To solve the above problems in the prior art, the present invention proposes a Manchu document seal extraction and removal method based on deep learning technology. The method can extract the seal region on a Manchu document to the greatest extent, removing the text beneath the seal; it can also remove the seal, restoring to the greatest extent the Manchu text beneath it.
The technical solution is as follows:
A Manchu document seal extraction and removal method based on deep learning technology, with the following steps:
S1, preprocessing the Manchu document image;
S2, training and saving a generative adversarial network for seal extraction;
S3, training and saving a generative adversarial network for seal removal.
Further, the specific preprocessing steps in step S1 are as follows:
S1.1, normalizing the images;
S1.2, preparing the data for the generative adversarial network that extracts the seal and the generative adversarial network that removes the seal.
Further, the specific steps in step S2 and/or step S3 are as follows:
S2.1, constructing a generator G1 with a U-net structure;
S2.2, in generator G1, the input image passes through several rounds of convolution, leakyReLU operations, and BN (batch normalization) layer operations to obtain several network layers;
S2.3, the last network layer of step S2.2 passes through an UpSampling2D upsampling operation, convolution, a Dropout layer operation, and a BN layer operation, and is concatenated with the penultimate network layer to obtain layer u1;
S2.4, layer u1 passes through several rounds of UpSampling2D upsampling, convolution, Dropout layer operations, BN layer operations, and concatenations with earlier network layers to obtain the network output image;
S2.5, constructing a discriminator D1, a two-class convolutional neural network; the network output image generated by generator G1 and the real image are fed together into discriminator D1;
S2.6, defining the target loss function:
G1* = arg min_{G1} max_{D1} E_{x,y}[log D1(x, y)] + E_{x,z}[log(1 - D1(x, G1(x, z)))] + λ E_{x,y,z}[||y - G1(x, z)||_1]
where x is the image matrix to be processed, y is its supervision image matrix, and z is a random matrix of the same size as x and y whose entries follow a Gaussian distribution, each pixel value of the z matrix lying in the range [0, 255]; D1 is the discriminator, G1 is the generator, and λ is an adjustable hyperparameter with value range [0, 1].
S2.7, saving the trained generative adversarial network model.
Further, the method further comprises the following step:
S4, verifying and testing the generative adversarial network models.
Further, the data of the seal-extraction generative adversarial network form a validation set used to verify the seal-extraction GAN model, and the network parameters are fine-tuned to obtain the final seal-extraction GAN model, the new model being saved for testing on new samples; the data of the seal-removal generative adversarial network form a validation set used to verify the seal-removal GAN model, and the network parameters are fine-tuned to obtain the final seal-removal GAN model, the new model being saved for testing on new samples.
The beneficial effects of the present invention are:
The Manchu document seal extraction and removal method based on deep learning technology can extract the seal region on a Manchu document to the greatest extent, removing the text beneath the seal; it can also remove the seal, restoring to the greatest extent the Manchu text beneath it.
Description of the drawings
Fig. 1 is a schematic diagram of the generator network with U-Net structure for extracting the seal region in the present invention;
Fig. 2 is a schematic diagram of the generator network with U-Net structure for removing the seal in the present invention;
Fig. 3 is a flow chart of embodiment 2 of the present invention.
Specific embodiment
The Manchu document seal extraction and removal method based on deep learning technology is further explained below with reference to Figs. 1-3.
Embodiment 1
The method for Manchu document seal extraction and removal based on generative adversarial networks consists of four parts: preprocessing of the Manchu document image; training and saving the GAN that extracts the seal region; training and saving the GAN that removes the seal; and testing the results on samples not used in training.
Step 1: Manchu document image preprocessing
1.1 Normalization
The scanned and collected pictures are preprocessed: picture sizes are uniformly normalized to 2048*2992. The normalized size is adjustable and should be chosen appropriately according to the sizes of all acquired images, to avoid serious picture distortion.
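The normalization in 1.1 can be sketched as follows. This is a minimal NumPy nearest-neighbor resize standing in for whatever image library was actually used; the 2048*2992 target comes from the text (read here as width 2048, height 2992, which is an assumption), and the helper name is illustrative:

```python
import numpy as np

def normalize_size(img, target_h=2992, target_w=2048):
    """Resize an H x W (x C) image array to a fixed size by nearest-neighbor
    sampling, so every scanned page enters the network with the same shape."""
    h, w = img.shape[:2]
    rows = np.arange(target_h) * h // target_h   # source row for each target row
    cols = np.arange(target_w) * w // target_w   # source column for each target column
    return img[rows][:, cols]

# a hypothetical scanned page, slightly larger than the target size
page = np.random.randint(0, 256, size=(3000, 2100, 3), dtype=np.uint8)
fixed = normalize_size(page)   # shape (2992, 2048, 3)
```

As the text notes, the target size should be chosen from the actual scan sizes so that pages are not distorted too badly.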
1.2 Data preparation
Data1: data prepared for training the GAN that extracts the seal region. A stamped (seal-bearing) image and its supervision image, i.e., the seal-region-only image, form one input group; there are 100 groups in total.
Data2: data prepared for training the GAN that removes the seal from the document. A stamped image and its supervision image, i.e., the seal-free document image, form one input group; there are 100 groups in total.
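The Data1/Data2 preparation amounts to building aligned (input, supervision) pairs. A minimal sketch with hypothetical file names (the names and helper are illustrative, not from the patent):

```python
def make_pairs(stamped, targets):
    """Pair each stamped document image with its supervision image.
    Data1: target = seal-only image; Data2: target = seal-free document."""
    if len(stamped) != len(targets):
        raise ValueError("every stamped image needs exactly one supervision image")
    return list(zip(stamped, targets))

stamped = [f"doc_{i:03d}.png" for i in range(100)]
seals = [f"seal_{i:03d}.png" for i in range(100)]
data1 = make_pairs(stamped, seals)   # 100 groups, as in the text
```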
Step 2: training and saving the GAN that extracts the seal region
2.1 Constructing Net1, the GAN that extracts the seal region
A generative adversarial network consists of two sub-networks, the generator G1 and the discriminator D1.
Generator G1: an encoder-decoder network with U-net structure, constructed as shown in Fig. 1. Here d0 is the input image. d0 passes through convolution (64 4*4 kernels, stride 2) and leakyReLU to obtain d1; d1 passes through convolution (128 4*4 kernels, stride 2), leakyReLU, and a BN layer to obtain d2; d2 passes through convolution (256 4*4 kernels, stride 2), leakyReLU, and BN to obtain d3; d3 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d4; d4 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d5; d5 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d6; d6 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d7. d7 passes through UpSampling2D (size=2), convolution (512*1*1), a Dropout layer, BN, and concatenation with d6 to obtain u1; u1 passes through UpSampling2D (size=2), convolution (512*1*1), Dropout, BN, and concatenation with d5 to obtain u2; u2 passes through UpSampling2D (size=2), convolution (512*1*1), Dropout, BN, and concatenation with d4 to obtain u3; u3 passes through UpSampling2D (size=2), convolution (256*1*1), Dropout, BN, and concatenation with d3 to obtain u4; u4 passes through UpSampling2D (size=2), convolution (128*1*1), Dropout, BN, and concatenation with d2 to obtain u5; u5 passes through UpSampling2D (size=2), convolution (128*1*1), Dropout, BN, and concatenation with d1 to obtain u6; u6 passes through UpSampling2D (size=2) and convolution (3*4*4, stride 1) to obtain u7, the network output picture.
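Each stride-2 convolution in the encoder halves the spatial size, and each UpSampling2D (size=2) in the decoder doubles it back, with the skip connections concatenating each dk onto the matching uk. The following sketch traces only the spatial dimensions (an illustrative 256*256 input is used; the stated 2992-pixel dimension is not an exact multiple of 2^7 = 128, so in practice padding or cropping would be needed):

```python
def unet_spatial_trace(h, w, depth=7):
    """Trace spatial sizes through the G1 encoder (seven stride-2 convolutions)
    and decoder (seven UpSampling2D size=2 steps), ignoring channel counts."""
    enc = [(h, w)]                 # d0 (input)
    for _ in range(depth):         # d1 .. d7
        h, w = h // 2, w // 2
        enc.append((h, w))
    dec = []
    for _ in range(depth):         # u1 .. u7
        h, w = h * 2, w * 2
        dec.append((h, w))
    return enc, dec

enc, dec = unet_spatial_trace(256, 256)
# enc[7] is the d7 bottleneck; dec[-1] is u7, the output resolution
```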
Discriminator D1: a two-class convolutional neural network. The picture generated by the generator and the real picture are fed together into discriminator D1. The structure of D1 is: convolutional layer (64*4*4, stride 2) → LeakyReLU → convolutional layer (128*4*4, stride 2) → LeakyReLU → BN layer (momentum 0.8) → convolutional layer (256*4*4, stride 2) → LeakyReLU → BN layer (momentum 0.8) → convolutional layer (512*4*4, stride 2) → LeakyReLU → BN layer (momentum 0.8) → convolutional layer (1*4*4, stride 1).
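Because the final convolution has a single output channel and stride 1, D1 outputs a map of real/fake scores rather than one scalar, in the style of a pix2pix PatchGAN (an interpretation, not stated explicitly in the text). A sketch of the output-map size, assuming 'same' padding so each stride-s layer maps n to ceil(n/s):

```python
def d1_output_size(n, strides=(2, 2, 2, 2, 1)):
    """Spatial size of D1's score map after its four stride-2 convolutions
    and final stride-1 convolution, assuming 'same' padding."""
    for s in strides:
        n = -(-n // s)   # ceiling division
    return n

# a 256-pixel input yields a 16 x 16 patch score map
```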
2.2 Training the constructed network
Define the target loss function:
G1* = arg min_{G1} max_{D1} E_{x,y}[log D1(x, y)] + E_{x,z}[log(1 - D1(x, G1(x, z)))] + λ E_{x,y,z}[||y - G1(x, z)||_1]
Using the generator G1 and discriminator D1 constructed in 2.1, iteratively train with the Adam stochastic gradient-descent optimizer, taking the above formula as the objective function. Set the number of iterations epoch = 10000.
The formula above has two parts: the first is the adversarial (generation) loss, and the second is the reconstruction loss. Here x is the image matrix to be processed, y is its supervision image matrix, and z is a random matrix of the same size as x and y whose entries follow a Gaussian distribution, each pixel value of z lying in the range [0, 255]. x is an acquired Manchu document bearing a seal; in the seal-extraction task, y is the supervision image containing only the seal and no document text, while in the seal-removal task, y is the supervision image containing only the document text and no seal. G1 is the generator, and G1(x, z) is the image generated after feeding x and z into G1; D1 is the discriminator, D1(x, y) is the loss from feeding x and y into discriminator D1, and D1(x, G1(x, z)) is the loss from feeding x and the generated G1(x, z) into discriminator D1. The last term is the reconstruction loss, namely the L1 distance between the supervision image y and the generated image; λ is an adjustable hyperparameter with value range [0, 1], and E denotes the expectation over its subscripts.
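The two-part objective can be evaluated directly. Below is a NumPy sketch under the usual GAN reading of the formula, with scalar discriminator outputs for brevity (the function name and toy inputs are illustrative):

```python
import numpy as np

def gan_objective(d_real, d_fake, y, g_out, lam=0.5):
    """Adversarial term log D1(x, y) + log(1 - D1(x, G1(x, z))) plus the
    reconstruction term lam * ||y - G1(x, z)||_1 (mean over pixels)."""
    eps = 1e-12   # numerical safety for the logarithms
    adv = np.log(d_real + eps) + np.log(1.0 - d_fake + eps)
    rec = lam * np.mean(np.abs(y - g_out))
    return adv + rec

y = np.zeros((4, 4))        # toy supervision image
g_out = np.ones((4, 4))     # toy generated image, L1 distance 1 per pixel
loss = gan_objective(d_real=0.9, d_fake=0.1, y=y, g_out=g_out, lam=0.5)
```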
2.3 Saving the trained GAN that extracts the seal region
Set a threshold and save the currently trained network model whenever the loss falls below the threshold. Multiple trained models may end up being saved; because the current loss is the loss on the training data, the saved models are compared on the test set to decide which performs better.
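The threshold rule in 2.3 reduces to keeping every checkpoint whose training loss dips below the threshold, then comparing those checkpoints on the test set. A minimal sketch (names illustrative):

```python
def checkpoint_on_threshold(losses, threshold):
    """Return the iteration indices at which the model would be saved:
    every step whose training loss falls below the fixed threshold."""
    return [i for i, loss in enumerate(losses) if loss < threshold]

saved = checkpoint_on_threshold([1.2, 0.8, 0.95, 0.4, 0.55], threshold=0.9)
# several checkpoints survive; the best one is then chosen on the test set
```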
Step 3: training and saving the GAN that removes the seal
3.1 The network structure is the same as in step 2, but this network is completely independent of the second network: independent input, independent training, and all parameters set independently.
3.2 Constructing Net2, the GAN that removes the seal
The constructed network structure is identical to that in step 2; only the target output images differ.
Generator G2: here d0 is the input image. d0 passes through convolution (64 4*4 kernels, stride 2) and leakyReLU to obtain d1; d1 passes through convolution (128 4*4 kernels, stride 2), leakyReLU, and a BN layer to obtain d2; d2 passes through convolution (256 4*4 kernels, stride 2), leakyReLU, and BN to obtain d3; d3 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d4; d4 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d5; d5 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d6; d6 passes through convolution (512 4*4 kernels, stride 2), leakyReLU, and BN to obtain d7. d7 passes through UpSampling2D (size=2), convolution (512*1*1), a Dropout layer, BN, and concatenation with d6 to obtain u1; u1 passes through UpSampling2D (size=2), convolution (512*1*1), Dropout, BN, and concatenation with d5 to obtain u2; u2 passes through UpSampling2D (size=2), convolution (512*1*1), Dropout, BN, and concatenation with d4 to obtain u3; u3 passes through UpSampling2D (size=2), convolution (256*1*1), Dropout, BN, and concatenation with d3 to obtain u4; u4 passes through UpSampling2D (size=2), convolution (128*1*1), Dropout, BN, and concatenation with d2 to obtain u5; u5 passes through UpSampling2D (size=2), convolution (128*1*1), Dropout, BN, and concatenation with d1 to obtain u6; u6 passes through UpSampling2D (size=2) and convolution (3*4*4, stride 1) to obtain u7, the network output picture.
Discriminator D2: a two-class convolutional neural network. The picture generated by the generator and the real picture are fed together into discriminator D2. The structure of D2 is: convolutional layer (64*4*4, stride 2) → LeakyReLU → convolutional layer (128*4*4, stride 2) → LeakyReLU → BN layer (momentum 0.8) → convolutional layer (256*4*4, stride 2) → LeakyReLU → BN layer (momentum 0.8) → convolutional layer (512*4*4, stride 2) → LeakyReLU → BN layer (momentum 0.8) → convolutional layer (1*4*4, stride 1).
3.3 Training the constructed network
Using the generator G2 and discriminator D2 constructed above, iteratively train with the Adam stochastic gradient-descent optimizer, taking the loss formula in 2.2 as the objective function. Set the number of iterations epoch = 10000.
3.4 Saving the trained GAN that removes the seal
Set a threshold and save the currently trained network model Model2 when the loss falls below the threshold.
Step 4: verification and testing
Verify the network model saved in 2.3 with the validation set in Data1, and fine-tune the network parameters to obtain Model1, the final network model for extracting the Manchu seal region; the saved Model1 can then be tested on new samples.
Verify the seal-removal network model saved in step 3 with the validation set in Data2, and fine-tune the network parameters to obtain Model2, the final network model for removing the seal; the saved Model2 can then be tested on new samples.
Embodiment 2
1. Data preparation and preprocessing
(1) Collection of Manchu document images: Manchu document images can be obtained from Manchu ancient books and documents by scanning, photographing, and similar means.
(2) Image preprocessing: normalize the image size.
(3) Data preparation for the seal-region extraction network: pairs of a stamped document image and a target image containing only the seal region.
(4) Data preparation for the seal-removal network: pairs of a stamped document image and a seal-free target document image.
2. Constructing the generative adversarial networks
For seal extraction, construct GAN Net1, composed of generator G1 (shown in Fig. 1) and discriminator D1. For seal removal, construct GAN Net2, composed of generator G2 (shown in Fig. 2) and discriminator D2.
The GANs can be built with open-source platforms such as TensorFlow and Keras.
3. Parameter settings
Epochs: the total number of iteration rounds.
Batch_size = 1: the number of samples used in each iteration round.
Choice of optimizer: Adam (adaptive moment estimation). In probability theory, a moment means the following: if a random variable X obeys some distribution, the first moment of X is E(X), i.e., the sample mean, and the second moment of X is E(X^2), i.e., the mean of the squared samples. The Adam algorithm dynamically adjusts a per-parameter learning rate using first- and second-moment estimates of the gradient of the loss function with respect to each parameter. Adam is also based on gradient descent, but the learning step of each iteration has a bounded range, so a very large gradient does not cause a very large step and the parameter values are more stable.
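The moment estimates and bounded step described above can be written out directly. A single-step NumPy sketch of the standard Adam update (default coefficients assumed; in practice the Keras Adam optimizer would be used):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: first/second moment estimates, bias correction,
    then a per-parameter step whose size is bounded near lr."""
    m = b1 * m + (1 - b1) * grad        # first-moment estimate of the gradient
    v = b2 * v + (1 - b2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, np.array([100.0]), m, v, t=1)
# even a huge gradient moves theta by only about lr, illustrating the bounded step
```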
4. Training and saving the model. Start training with the back-propagation optimizer. In Net1, the discriminator D1 is trained first; D1 judges whether an input picture is a real picture or one produced by the generator. The generator is then trained in its ability to extract the seal picture, and the generator and discriminator learn adversarially until the desired effect is reached. Finally, the trained generator model is saved.
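The alternating schedule in step 4 (train D1 on real versus generated pictures, then train G1 against it) can be sketched as a loop over two update callables; the closures below stand in for the real Keras update calls and are purely illustrative:

```python
def train_gan(steps, train_discriminator, train_generator):
    """Alternate discriminator and generator updates, logging both losses."""
    history = []
    for step in range(steps):
        d_loss = train_discriminator()   # D1 learns real vs. generated
        g_loss = train_generator()       # G1 learns to fool D1
        history.append((step, d_loss, g_loss))
    return history

d_losses = iter([0.7, 0.6, 0.5])
g_losses = iter([1.2, 1.0, 0.9])
log = train_gan(3, lambda: next(d_losses), lambda: next(g_losses))
```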
5. verification and testing
Verify with pictures that did not appear in the training set, and adjust the model parameters according to the measured accuracy, continuously improving the accuracy.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any person skilled in the art who, within the technical scope disclosed by the present invention, makes equivalent substitutions or changes according to the technical solution of the present invention and its inventive concept shall be covered by the protection scope of the present invention.

Claims (5)

1. A Manchu document seal extraction and removal method based on deep learning technology, characterized in that the steps are as follows:
S1, preprocessing the Manchu document image;
S2, training and saving a generative adversarial network for seal extraction;
S3, training and saving a generative adversarial network for seal removal.
2. The Manchu document seal extraction and removal method based on deep learning technology according to claim 1, characterized in that the specific preprocessing steps in step S1 are as follows:
S1.1, normalizing the images;
S1.2, preparing the data for the generative adversarial network that extracts the seal and the generative adversarial network that removes the seal.
3. The Manchu document seal extraction and removal method based on deep learning technology according to claim 1, characterized in that the specific steps in step S2 and/or step S3 are as follows:
S2.1, constructing a generator G1 with a U-net structure;
S2.2, in generator G1, the input image passes through several rounds of convolution, leakyReLU operations, and BN layer operations to obtain several network layers;
S2.3, the last network layer of step S2.2 passes through an UpSampling2D upsampling operation, convolution, a Dropout layer operation, and a BN layer operation, and is concatenated with the penultimate network layer to obtain layer u1;
S2.4, layer u1 passes through several rounds of UpSampling2D upsampling, convolution, Dropout layer operations, BN layer operations, and concatenations with earlier network layers to obtain the network output image;
S2.5, constructing a discriminator D1, a two-class convolutional neural network; the network output image generated by generator G1 and the real image are fed together into discriminator D1;
S2.6, defining the target loss function:
G1* = arg min_{G1} max_{D1} E_{x,y}[log D1(x, y)] + E_{x,z}[log(1 - D1(x, G1(x, z)))] + λ E_{x,y,z}[||y - G1(x, z)||_1]
where x is the image matrix to be processed, y is its supervision image matrix, and z is a random matrix of the same size as x and y whose entries follow a Gaussian distribution, each pixel value of the z matrix lying in the range [0, 255]; D1 is the discriminator, G1 is the generator, and λ is an adjustable hyperparameter with value range [0, 1];
S2.7, saving the trained generative adversarial network model.
4. The Manchu document seal extraction and removal method based on deep learning technology according to claim 1, characterized by further comprising the following step:
S4, verifying and testing the generative adversarial network models.
5. The Manchu document seal extraction and removal method based on deep learning technology according to claim 2, characterized in that the data of the seal-extraction generative adversarial network form a validation set used to verify the seal-extraction GAN model, and the network parameters are fine-tuned to obtain the final seal-extraction GAN model, the new model being saved for testing on new samples; the data of the seal-removal generative adversarial network form a validation set used to verify the seal-removal GAN model, and the network parameters are fine-tuned to obtain the final seal-removal GAN model, the new model being saved for testing on new samples.
CN201811100870.0A 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology Active CN109359550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811100870.0A CN109359550B (en) 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811100870.0A CN109359550B (en) 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology

Publications (2)

Publication Number Publication Date
CN109359550A true CN109359550A (en) 2019-02-19
CN109359550B CN109359550B (en) 2021-06-22

Family

ID=65351009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811100870.0A Active CN109359550B (en) 2018-09-20 2018-09-20 Manchu document seal extraction and removal method based on deep learning technology

Country Status (1)

Country Link
CN (1) CN109359550B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516201A (en) * 2019-08-20 2019-11-29 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN110516577A (en) * 2019-08-20 2019-11-29 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN112183538A (en) * 2020-11-30 2021-01-05 华南师范大学 Manchu recognition method and system
CN112801911A (en) * 2021-02-08 2021-05-14 苏州长嘴鱼软件有限公司 Method and device for removing Chinese character noise in natural image and storage medium
CN112950458A (en) * 2021-03-19 2021-06-11 润联软件系统(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment
CN113065407A (en) * 2021-03-09 2021-07-02 国网河北省电力有限公司 Financial bill seal erasing method based on attention mechanism and generation countermeasure network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787510A (en) * 2016-02-26 2016-07-20 华东理工大学 System and method for realizing subway scene classification based on deep learning
US20170061249A1 (en) * 2015-08-26 2017-03-02 Digitalglobe, Inc. Broad area geospatial object detection using autogenerated deep learning models
CN107220506A (en) * 2017-06-05 2017-09-29 东华大学 Breast cancer risk assessment analysis system based on deep convolutional neural network
CN108470196A (en) * 2018-02-01 2018-08-31 华南理工大学 A method of handwritten numeral is generated based on depth convolution confrontation network model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170061249A1 (en) * 2015-08-26 2017-03-02 Digitalglobe, Inc. Broad area geospatial object detection using autogenerated deep learning models
CN105787510A (en) * 2016-02-26 2016-07-20 华东理工大学 System and method for realizing subway scene classification based on deep learning
CN107220506A (en) * 2017-06-05 2017-09-29 东华大学 Breast cancer risk assessment analysis system based on deep convolutional neural network
CN108470196A (en) * 2018-02-01 2018-08-31 华南理工大学 A method of handwritten numeral is generated based on depth convolution confrontation network model

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516201A (en) * 2019-08-20 2019-11-29 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN110516577A (en) * 2019-08-20 2019-11-29 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and storage medium
CN110516577B (en) * 2019-08-20 2022-07-12 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110516201B (en) * 2019-08-20 2023-03-28 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112183538A (en) * 2020-11-30 2021-01-05 华南师范大学 Manchu recognition method and system
CN112183538B (en) * 2020-11-30 2021-03-02 华南师范大学 Manchu recognition method and system
CN112801911A (en) * 2021-02-08 2021-05-14 苏州长嘴鱼软件有限公司 Method and device for removing Chinese character noise in natural image and storage medium
CN112801911B (en) * 2021-02-08 2024-03-26 苏州长嘴鱼软件有限公司 Method and device for removing text noise in natural image and storage medium
CN113065407A (en) * 2021-03-09 2021-07-02 国网河北省电力有限公司 Financial bill seal erasing method based on attention mechanism and generation countermeasure network
CN113065407B (en) * 2021-03-09 2022-07-12 国网河北省电力有限公司 Financial bill seal erasing method based on attention mechanism and generation countermeasure network
CN112950458A (en) * 2021-03-19 2021-06-11 润联软件系统(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment

Also Published As

Publication number Publication date
CN109359550B (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN109359550A (en) Language of the Manchus document seal Abstraction and minimizing technology based on depth learning technology
CN110516085B (en) Image text mutual retrieval method based on bidirectional attention
CN107122375B (en) Image subject identification method based on image features
CN109472024A (en) A kind of file classification method based on bidirectional circulating attention neural network
CN112149316A (en) Aero-engine residual life prediction method based on improved CNN model
CN106485259B (en) A kind of image classification method based on high constraint high dispersive principal component analysis network
CN111898447B (en) Xin Jihe modal decomposition-based wind turbine generator fault feature extraction method
CN110175560A (en) A kind of radar signal intra-pulse modulation recognition methods
CN114067368B (en) Power grid harmful bird species classification and identification method based on deep convolution characteristics
CN111339935B (en) Optical remote sensing picture classification method based on interpretable CNN image classification model
CN109886021A (en) A kind of malicious code detecting method based on API overall situation term vector and layered circulation neural network
CN109492625A (en) A kind of human face identification work-attendance checking method based on width study
Golovko et al. A new technique for restricted Boltzmann machine learning
CN109389171A (en) Medical image classification method based on more granularity convolution noise reduction autocoder technologies
CN113076878B (en) Constitution identification method based on attention mechanism convolution network structure
CN109685071A (en) Brain electricity classification method based on the study of common space pattern feature width
CN110020637A (en) A kind of analog circuit intermittent fault diagnostic method based on more granularities cascade forest
CN103077408A (en) Method for converting seabed sonar image into acoustic substrate classification based on wavelet neutral network
CN108197079A (en) A kind of improved algorithm to missing values interpolation
CN103617417B (en) Automatic plant identification method and system
CN110569727B (en) Transfer learning method combining intra-class distance and inter-class distance for motor imagery classification
CN110726813B (en) Electronic nose prediction method based on double-layer integrated neural network
CN115017939A (en) Intelligent diagnosis method and device for faults of aircraft fuel pump and storage medium
CN114495239A (en) Forged image detection method and system based on frequency domain information and generation countermeasure network
CN110378373B (en) Tea variety classification method for fuzzy non-relevant linear discriminant analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant