CN110472632A - Character segmentation method, device and computer storage medium based on character feature - Google Patents
Character segmentation method, device and computer storage medium based on character features
- Publication number
- CN110472632A CN110472632A CN201910702665.XA CN201910702665A CN110472632A CN 110472632 A CN110472632 A CN 110472632A CN 201910702665 A CN201910702665 A CN 201910702665A CN 110472632 A CN110472632 A CN 110472632A
- Authority
- CN
- China
- Prior art keywords
- character
- feature
- image
- segmentation
- carried out
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Character Input (AREA)
- Character Discrimination (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a character segmentation method based on character features, applied in the technical field of image processing. The method includes: obtaining an image to be processed; performing binarization on the image to be processed to obtain a binary image; extracting features from the binary image using a base feature extraction network; from the extracted features, extracting features of the character shape to obtain a first feature, and extracting features of the character count to obtain a second feature; fusing the first feature and the second feature using a semantic segmentation network to generate a semantic segmentation map; and determining the segmentation positions of characters according to the semantic segmentation map. The invention also discloses a character segmentation device based on character features and a computer storage medium.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a character segmentation method, device and computer storage medium based on character features.
Background technique
Character segmentation is the basis and premise of extracting textual information from images, so characters must be segmented reasonably and correctly.
Early character segmentation methods include positioning based on vertical projection and segmentation based on connected domains (Vertical projection, Connected domain). Neither method was designed for touching (adhered) characters, so for real-scene images with blurred strokes, missing strokes, or stroke adhesion they cannot reliably separate individual characters. Moreover, because Chinese characters themselves have top-bottom and left-right structural forms, many characters are cut into multiple parts. Later, the water-droplet method and clustering-based segmentation (Water droplet, Clustering) do consider the morphological features of characters, but they optimize only local character features and split adhered strokes simplistically, in the manner of pseudo-gravity or clustering. They improve matters to some extent, but their results on complex stroke adhesion remain unsatisfactory.
Therefore, the prior art lacks an effective character segmentation method.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the present invention is to provide a character segmentation method and device based on character features, which add character information such as character width and character count as a basis for segmentation, thereby overcoming the shortcomings of conventional methods. Meanwhile, character information is extracted with a convolutional neural network, and a fully convolutional network performs semantic segmentation, which effectively solves the segmentation of characters with missing strokes and stroke adhesion.
To achieve the above and other related objects, the present invention provides a character segmentation method based on character features, the method comprising:
obtaining an image to be processed;
performing binarization on the image to be processed to obtain a binary image;
extracting features from the binary image using a base feature extraction network;
from the extracted features, extracting features of the character shape to obtain a first feature, and extracting features of the character count to obtain a second feature;
fusing the first feature and the second feature using a semantic segmentation network to generate a semantic segmentation map;
determining the segmentation positions of characters according to the semantic segmentation map.
In one implementation, the step of performing binarization on the image to be processed to obtain a binary image comprises:
generating a gray-level histogram from the image to be processed;
obtaining the foreground peak and the background peak in the gray-level histogram;
obtaining the gray value at the valley between the foreground peak and the background peak;
using the obtained gray value as the binarization threshold.
In one implementation, the step of extracting features from the binary image using the base feature extraction network comprises:
performing feature extraction on the binary image using a convolutional neural network (CNN).
In one implementation, the step of fusing the first feature and the second feature using the semantic segmentation network to generate the semantic segmentation map comprises:
receiving the first feature and the second feature, and restoring the data size through deconvolution and up-sampling operations until the size of the image to be processed is reached;
performing Softmax classification on the restored image, and using the classified image as the semantic segmentation map.
In one implementation, the step of performing Softmax classification on the restored image and using the classified image as the semantic segmentation map comprises:
classifying each pixel in the restored image;
obtaining, for each pixel, the probability that it belongs to the character class;
segmenting according to the obtained probabilities.
In one implementation, the step of training the convolutional neural network (CNN) comprises:
constructing a training dataset, wherein the dataset uses the 3755 Chinese characters specified in the GB2312 level-1 national standard to produce 30000 pictures with stroke adhesion; the picture size is 512*512, the character size in each picture lies between [70px, 80px], and the number of characters lies between [2, 5];
randomly adding white noise and interference textures to the dataset to obtain augmented images;
training the convolutional neural network (CNN) on the augmented images.
The invention also discloses a character segmentation device based on character features, the device comprising a processor and a memory connected to the processor through a communication bus, wherein:
the memory is configured to store a character segmentation program based on character features; and
the processor is configured to execute the character segmentation program based on character features, so as to implement the steps of any of the character segmentation methods based on character features described above.
A computer storage medium is also disclosed, the computer storage medium storing one or more programs executable by one or more processors, so that the one or more processors perform the steps of any of the character segmentation methods based on character features described above.
As described above, the character segmentation method, device and computer storage medium based on character features provided by embodiments of the present invention add character information such as character width and character count as a basis for segmentation, overcoming the shortcomings of conventional methods. Meanwhile, character information is extracted with a convolutional neural network, and a fully convolutional network performs semantic segmentation, which effectively solves the segmentation of characters with missing strokes and stroke adhesion.
Detailed description of the invention
Fig. 1 is a flow diagram of a character segmentation method based on character features according to an embodiment of the present invention.
Fig. 2 is an application schematic diagram of a character segmentation method based on character features according to an embodiment of the present invention.
Fig. 3 is an application schematic diagram of a character segmentation method based on character features according to an embodiment of the present invention.
Fig. 4 is an application schematic diagram of a character segmentation method based on character features according to an embodiment of the present invention.
Specific embodiment
The embodiments of the present invention are illustrated below by way of specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention may also be implemented or applied through other specific embodiments, and the details in this specification may be modified or changed from different viewpoints and applications without departing from the spirit of the invention.
Please refer to Figs. 1-4. It should be noted that the drawings provided in this embodiment only schematically illustrate the basic concept of the invention: they show only the components related to the invention, rather than the actual number, shape and size of components in an implementation. In actual implementation, the form, quantity and proportion of each component may vary arbitrarily, and the component layout may be considerably more complex.
As shown in Fig. 1, an embodiment of the present invention provides a character segmentation method based on character features, the method comprising:
S101: obtain an image to be processed.
It should be noted that the image to be processed is an image containing text characters that needs to be segmented.
S102: perform binarization on the image to be processed to obtain a binary image.
The gray-level histogram of the image is bimodal: the two peaks correspond to the target (foreground) and the background, respectively. The gray value at the valley between the two peaks is chosen as the binarization threshold T.
Here, f(x, y) is the gray value of the grayscale image and g(x, y) is the binarized image: g(x, y) = 1 if f(x, y) ≥ T, and g(x, y) = 0 otherwise.
In practice, the binary image can also be generated with other threshold-selection methods, such as the P-parameter method, the maximum-entropy threshold method, or the maximum between-class variance (Otsu) method.
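As an illustration, the bimodal valley-threshold selection of S102 can be sketched in a few lines. This is a hypothetical minimal implementation (peak finding and tie handling are simplified), not the patented code:

```python
def bimodal_threshold(hist):
    """Pick a binarization threshold at the valley between the two
    highest histogram peaks (foreground and background), as in S102.
    `hist` is a list of 256 gray-level pixel counts."""
    # Find local maxima, then keep the two with the largest counts.
    peaks = [g for g in range(1, 255)
             if hist[g] >= hist[g - 1] and hist[g] >= hist[g + 1]]
    peaks.sort(key=lambda g: hist[g], reverse=True)
    p1, p2 = sorted(peaks[:2])
    # The threshold T is the gray level with the lowest count between them.
    return min(range(p1, p2 + 1), key=lambda g: hist[g])

def binarize(img, t):
    """g(x, y) = 1 if f(x, y) >= t else 0."""
    return [[1 if v >= t else 0 for v in row] for row in img]
```

A usage note: on a histogram with foreground and background peaks at gray levels 50 and 200, the function returns the first gray level of the flat valley between them.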
S103: extract features from the binary image using the base feature extraction network.
Specifically, the base feature extraction network is a convolutional neural network (CNN). A trained CNN is required, and is then used to perform feature extraction.
In training the CNN, a qualifying training set must first be prepared. The dataset uses the 3755 Chinese characters specified in the GB2312 level-1 national standard to produce 30000 pictures with stroke adhesion. The picture size used in the experiments is 512*512; the character size in each picture lies between [70px, 80px], and the number of characters lies between [2, 5]. When constructing the character pictures, white noise and interference textures are also randomly added for data augmentation. The generated sample images are semantically labeled, manually or semi-manually, to produce the character information labels, i.e. character width and character count. The key parameter configuration table for dataset construction is shown in Fig. 2, and sample pictures from the dataset are shown in Fig. 3.
In a particular embodiment of the present invention, multiple convolution and pooling operations are applied to the preprocessed image; the basic structure of each convolution unit comprises 3 convolutional layers, 3 activation layers and 1 pooling layer. After the repeated convolution and pooling, a flatten layer converts the multi-dimensional data into one-dimensional data to facilitate feature fusion, a Dropout layer reduces the amount of data propagated forward, and a fully connected layer produces the final output.
Alternatively, after the repeated convolution and pooling, a flatten layer converts the multi-dimensional data into one-dimensional data to facilitate feature fusion, a fusion layer concatenates the character shape features and the character count features, a Dropout layer reduces the amount of data propagated forward, and a fully connected layer produces the final output.
For the l-th layer of the neural network, the output of its neurons is denoted y^l. For the i-th neuron in layer l+1, w_i^(l+1) denotes its weight vector and b_i^(l+1) its bias. The usual two-dimensional convolutional network computation is:
z_i^(l+1) = w_i^(l+1) · y^l + b_i^(l+1),  y_i^(l+1) = f(z_i^(l+1)),
where z_i^(l+1) is the computed (pre-activation) value of the i-th neuron in layer l+1, and y_i^(l+1) is the corresponding output of that neuron after the activation function f(·).
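The per-neuron computation z = w·y + b at the heart of each convolutional layer can be illustrated with a minimal valid (unpadded) 2-D convolution; the ReLU activation shown is a common choice of f(·), which the patent does not name:

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN
    frameworks): each output value is w . y with bias b = 0, matching
    z = w . y + b in the text. A minimal sketch, not the patented net."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            out[r][c] = sum(img[r + i][c + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

def relu(x):
    """One common activation f(.); the patent does not specify which."""
    return [[max(0.0, v) for v in row] for row in x]
```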
The computation of a neural network using the Dropout mechanism is as follows:
r_i^l = Bernoulli(p),
ỹ^l = r^l * y^l,
z_i^(l+1) = w_i^(l+1) · ỹ^l + b_i^(l+1),  y_i^(l+1) = f(z_i^(l+1)),
where Bernoulli(p) randomly generates, for the i-th neuron in layer l, a 0/1 value that equals 1 with probability p. The generated vector r^l masks the original output of layer l; the masked output is denoted ỹ^l.
After training the network with the Dropout mechanism, when the network is used in the prediction stage, the prediction for the i-th neuron in layer l+1 uses the weights scaled by p: z_i^(l+1) = (p · w_i^(l+1)) · y^l + b_i^(l+1).
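The training-time masking and prediction-time weight scaling described above can be sketched as follows (standard Dropout mechanics, assuming p is the keep probability):

```python
import random

def dropout_train(y, p, seed=0):
    """Training time: r_i ~ Bernoulli(p) keeps each activation with
    probability p; the masked output is r * y."""
    rng = random.Random(seed)
    r = [1 if rng.random() < p else 0 for _ in y]
    return [ri * yi for ri, yi in zip(r, y)]

def dropout_predict(w, y, b, p):
    """Prediction time: no masking, but weights are scaled by p so the
    expected activation matches training: z = (p*w) . y + b."""
    return sum(p * wi * yi for wi, yi in zip(w, y)) + b
```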
S104: from the extracted features, extract features of the character shape to obtain a first feature, and extract features of the character count to obtain a second feature.
In a particular embodiment of the present invention, the image to be semantically segmented is preprocessed. 1) Character shape feature extraction sub-network: the preprocessed result is passed through 1 convolutional layer and 1 pooling layer to obtain feature map Fa1_1; Fa1_1 is passed through 3 convolutional layers and 1 pooling layer to obtain Fa1_2; Fa1_2 is passed through 3 convolutional layers and 1 pooling layer to obtain Fa1_3; Fa1_3 is passed through 1 convolutional layer and 1 flatten layer to obtain Fa1_4; and Fa1_4 is passed through 4 dense layers and 3 dropout layers to obtain the output of the character shape feature extraction network. The segmentation result is shown in Fig. 4.
2) Character count feature extraction sub-network: the preprocessed result is passed through 1 convolutional layer and 1 pooling layer to obtain feature map Fa2_1; Fa2_1 is passed through 3 convolutional layers and 1 pooling layer to obtain Fa2_2; Fa2_2 is passed through 3 convolutional layers and 1 pooling layer to obtain Fa2_3; Fa2_3 is passed through 1 convolutional layer and 1 flatten layer to obtain Fa2_4. Fa2_4 and Fa1_4 are combined by a fusion layer, and the fusion result is passed through 4 dense layers and 3 dropout layers to obtain the output of the character count feature extraction network.
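Assuming, hypothetically, 'same'-padded convolutions, 2x2 pooling, and a single channel (the patent specifies none of these), the feature-map sizes through the Fa*_1..Fa*_4 stages of either sub-network can be tracked as:

```python
def subnet_shapes(size):
    """Track the feature-map side length through the four stages of one
    feature extraction sub-network, under the stated assumptions:
    'same'-padded convs leave the size unchanged, pooling halves it."""
    shapes = {}
    s = size
    s //= 2                 # 1 conv + 1 pool  -> Fa_1
    shapes["Fa_1"] = s
    s //= 2                 # 3 convs + 1 pool -> Fa_2
    shapes["Fa_2"] = s
    s //= 2                 # 3 convs + 1 pool -> Fa_3
    shapes["Fa_3"] = s
    shapes["Fa_4"] = s * s  # 1 conv + flatten -> 1-D length
    return shapes
```

For the 512*512 training pictures this gives sides 256, 128 and 64, and a flattened length of 4096 per channel.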
In a particular embodiment of the present invention, the base feature extraction network and the character information feature extraction network serve as the basic networks. The outputs of the two networks are passed through 1 fusion layer; the fusion result then undergoes 3 deconvolution-and-up-sampling operations, each operating unit comprising 3 deconvolution layers and 1 up-sampling layer; finally, 4 deconvolution layers produce the output of the network.
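One plausible form of the up-sampling layer is nearest-neighbour 2x up-sampling; the patent does not specify the interpolation, so this is an assumption:

```python
def upsample2x(fm):
    """Nearest-neighbour 2x up-sampling of a 2-D feature map: each value
    is repeated twice horizontally and each row twice vertically."""
    out = []
    for row in fm:
        wide = [v for v in row for _ in range(2)]  # duplicate columns
        out.append(wide)
        out.append(list(wide))                     # duplicate the row
    return out
```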
The neural network described above is trained with the Adam (Adaptive Moment Estimation) optimizer. The loss weight of the outputs of the character information extraction part is set to 0.5, and the loss weight of the output of the semantic segmentation part is set to 1.0. Furthermore, the outputs of the character information extraction part use the "categorical_crossentropy" loss, while the output of the semantic segmentation part uses the "binary_crossentropy" loss.
The initial learning rate learning_rate in the Adam parameters is set to 0.0001 (1e-4); the exponential decay rate beta_1 of the first-moment estimate is set to 0.9, and the exponential decay rate beta_2 of the second-moment estimate is set to 0.999. Furthermore, to prevent division by zero in the computation, epsilon is set to 1e-08, while decay is set to 0.0.
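Collected as a Keras-style configuration, the hyper-parameters above read as follows; the output names `char_info_output` and `semantic_seg_output` are hypothetical labels introduced here, not taken from the patent:

```python
# Optimizer settings exactly as given in the text.
adam_config = {
    "learning_rate": 1e-4,
    "beta_1": 0.9,
    "beta_2": 0.999,
    "epsilon": 1e-08,
    "decay": 0.0,
}

# (loss function, loss weight) per output head, per the text.
loss_config = {
    "char_info_output": ("categorical_crossentropy", 0.5),
    "semantic_seg_output": ("binary_crossentropy", 1.0),
}
```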
S105: fuse the first feature and the second feature using the semantic segmentation network to generate the semantic segmentation map.
Multiple deconvolution and up-sampling operations then restore the data to the size of the input image. Each pixel of the resulting data is classified with the softmax function; the value of each pixel indicates the probability that the pixel belongs to the character class, yielding the semantic segmentation map.
Softmax function: σ(z)_j = exp(z_j) / Σ_{k=1}^{K} exp(z_k). It maps the K-dimensional real vector z of the image data to a K-dimensional real vector σ(z) whose components lie in (0, 1).
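The per-pixel softmax of S105 can be implemented directly from the formula; the max-shift shown is a standard numerical-stability convention, not part of the formula in the text:

```python
import math

def softmax(z):
    """sigma(z)_j = exp(z_j) / sum_k exp(z_k): maps K real scores to K
    probabilities in (0, 1) summing to 1, as used for the per-pixel
    classification in S105."""
    m = max(z)                          # shift for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]
```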
S106: determine the segmentation positions of characters according to the semantic segmentation map.
The semantic segmentation map can be converted to grayscale and binarized; opening and closing operations are applied to the result, the minimal closed regions are found, and the largest rectangle within each region is located to obtain its coordinates. Together with the character width and character count obtained in S104 and S105, this determines the segmentation positions of the characters.
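A simplified stand-in for the region/rectangle search of S106 is the bounding box of the foreground pixels of the binary segmentation map:

```python
def bounding_box(mask):
    """Coordinates (top, left, bottom, right) of the smallest rectangle
    enclosing the foreground (value 1) pixels of a binary map -- a
    simplified sketch of the rectangle search in S106, without the
    morphological opening/closing steps."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None  # no foreground at all
    return rows[0], cols[0], rows[-1], cols[-1]
```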
Convolutional neural network: the convolutional neural network (CNN) was inspired by research in visual neuroscience. Its structure mainly comprises convolutional layers and pooling layers. The earliest convolutional neural network model is the LeNet-5 model proposed by LeCun Y in 1998. In that model, the original image is converted into several feature maps by convolutional and sub-sampling layers; through the convolution operation, the convolution kernels map low-level local features onto higher-level global features. Since then, methods improving on the basic CNN structure have repeatedly achieved good results in the ImageNet competition.
Projection-based positioning: the image is scanned horizontally and vertically and the number of black pixels in each direction is counted; the rows and columns containing no black pixels are the segmentation boundaries.
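The vertical-projection rule just described (columns with no black pixels are cut positions) is short enough to state directly; its failure on touching characters is precisely what motivates the patented method:

```python
def vertical_projection_cuts(mask):
    """Columns of a binary image (1 = black/foreground) that contain no
    foreground pixels; runs of such blank columns are the cut positions
    used by projection-based segmentation."""
    return [c for c in range(len(mask[0]))
            if not any(row[c] for row in mask)]
```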
Clustering-based segmentation: according to features such as gray level, color, texture and shape, the image is divided into several non-overlapping regions such that these features are similar within a region and clearly different between regions. A clustering algorithm groups the pixels, thereby realizing the segmentation.
A fully convolutional network (FCN, Fully Convolutional Networks) is a neural network structure proposed by Long, Jonathan et al. in 2015, mainly for image segmentation tasks. It converts the fully connected layers of a traditional convolutional neural network (CNN, Convolutional Neural Networks) into convolutional layers and classifies the image at pixel level in an end-to-end manner, thereby solving the semantic-level image segmentation (semantic segmentation) problem.
In the field of engineering applications, character recognition has always faced the bottleneck that recognition accuracy is hard to improve. The root cause is that characters in images are difficult to segment correctly. Some intractable cases are: Chinese characters with left-right structure are often split into separate parts; a Chinese character with missing strokes may be cut into multiple parts; and in the common case of stroke adhesion, multiple characters may be merged into a single character region. Therefore, adding character information such as character width and character count as a basis for segmentation overcomes the shortcomings of conventional methods. Meanwhile, character information is extracted with a convolutional neural network, and a fully convolutional network performs semantic segmentation, which effectively solves the segmentation of characters with missing strokes and stroke adhesion.
The invention also discloses a character segmentation device based on character features, the device comprising a processor and a memory connected to the processor through a communication bus, wherein:
the memory is configured to store a character segmentation program based on character features; and
the processor is configured to execute the character segmentation program based on character features, so as to implement the steps of any of the character segmentation methods based on character features described above.
A computer storage medium is also disclosed, the computer storage medium storing one or more programs executable by one or more processors, so that the one or more processors perform the steps of any of the character segmentation methods based on character features described above.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the invention shall be covered by the claims of the present invention.
Claims (8)
1. A character segmentation method based on character features, characterized in that the method comprises:
obtaining an image to be processed;
performing binarization on the image to be processed to obtain a binary image;
extracting features from the binary image using a base feature extraction network;
from the extracted features, extracting features of the character shape to obtain a first feature, and extracting features of the character count to obtain a second feature;
fusing the first feature and the second feature using a semantic segmentation network to generate a semantic segmentation map;
determining the segmentation positions of characters according to the semantic segmentation map.
2. The character segmentation method based on character features according to claim 1, characterized in that the step of performing binarization on the image to be processed to obtain a binary image comprises:
generating a gray-level histogram from the image to be processed;
obtaining the foreground peak and the background peak in the gray-level histogram;
obtaining the gray value at the valley between the foreground peak and the background peak;
using the obtained gray value as the binarization threshold.
3. The character segmentation method based on character features according to claim 1 or 2, characterized in that the step of extracting features from the binary image using the base feature extraction network comprises:
performing feature extraction on the binary image using a convolutional neural network (CNN).
4. The character segmentation method based on character features according to claim 3, characterized in that the step of fusing the first feature and the second feature using the semantic segmentation network to generate the semantic segmentation map comprises:
receiving the first feature and the second feature, and restoring the data size through deconvolution and up-sampling operations until the size of the image to be processed is reached;
performing Softmax classification on the restored image, and using the classified image as the semantic segmentation map.
5. The character segmentation method based on character features according to claim 4, characterized in that the step of performing Softmax classification on the restored image and using the classified image as the semantic segmentation map comprises:
classifying each pixel in the restored image;
obtaining, for each pixel, the probability that it belongs to the character class;
segmenting according to the obtained probabilities.
6. The character segmentation method based on character features according to claim 5, characterized in that the step of training the convolutional neural network (CNN) comprises:
constructing a training dataset, wherein the dataset uses the 3755 Chinese characters specified in the GB2312 level-1 national standard to produce 30000 pictures with stroke adhesion; the picture size is 512*512, the character size in each picture lies between [70px, 80px], and the number of characters lies between [2, 5];
randomly adding white noise and interference textures to the dataset to obtain augmented images;
training the convolutional neural network (CNN) on the augmented images.
7. A character segmentation device based on character features, characterized in that the device comprises a processor and a memory connected to the processor through a communication bus, wherein:
the memory is configured to store a character segmentation program based on character features; and
the processor is configured to execute the character segmentation program based on character features, so as to implement the steps of the character segmentation method based on character features according to any one of claims 1 to 6.
8. A computer storage medium, characterized in that the computer storage medium stores one or more programs executable by one or more processors, so that the one or more processors perform the steps of the character segmentation method based on character features according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702665.XA CN110472632B (en) | 2019-07-31 | 2019-07-31 | Character segmentation method and device based on character features and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702665.XA CN110472632B (en) | 2019-07-31 | 2019-07-31 | Character segmentation method and device based on character features and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110472632A true CN110472632A (en) | 2019-11-19 |
CN110472632B CN110472632B (en) | 2022-09-30 |
Family
ID=68509348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910702665.XA Active CN110472632B (en) | 2019-07-31 | 2019-07-31 | Character segmentation method and device based on character features and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110472632B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626283A (en) * | 2020-05-20 | 2020-09-04 | 北京字节跳动网络技术有限公司 | Character extraction method and device and electronic equipment |
CN111723815A (en) * | 2020-06-23 | 2020-09-29 | 中国工商银行股份有限公司 | Model training method, image processing method, device, computer system, and medium |
CN112270370A (en) * | 2020-11-06 | 2021-01-26 | 北京环境特性研究所 | Vehicle apparent damage assessment method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106650721A (en) * | 2016-12-28 | 2017-05-10 | 吴晓军 | Industrial character identification method based on convolution neural network |
US20180020118A1 (en) * | 2013-12-19 | 2018-01-18 | Canon Kabushiki Kaisha | Image processing apparatus, method, and storage medium |
- 2019-07-31: Application CN201910702665.XA filed; granted as patent CN110472632B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180020118A1 (en) * | 2013-12-19 | 2018-01-18 | Canon Kabushiki Kaisha | Image processing apparatus, method, and storage medium |
CN106650721A (en) * | 2016-12-28 | 2017-05-10 | 吴晓军 | Industrial character identification method based on convolution neural network |
Non-Patent Citations (1)
Title |
---|
BAI PEIRUI et al.: "A general CAPTCHA recognition method based on image segmentation", Journal of Shandong University of Science and Technology (Natural Science Edition) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626283A (en) * | 2020-05-20 | 2020-09-04 | 北京字节跳动网络技术有限公司 | Character extraction method and device and electronic equipment |
CN111626283B (en) * | 2020-05-20 | 2022-12-13 | 北京字节跳动网络技术有限公司 | Character extraction method and device and electronic equipment |
CN111723815A (en) * | 2020-06-23 | 2020-09-29 | 中国工商银行股份有限公司 | Model training method, image processing method, device, computer system, and medium |
CN112270370A (en) * | 2020-11-06 | 2021-01-26 | 北京环境特性研究所 | Vehicle apparent damage assessment method |
CN112270370B (en) * | 2020-11-06 | 2023-06-02 | 北京环境特性研究所 | Vehicle apparent damage assessment method |
Also Published As
Publication number | Publication date |
---|---|
CN110472632B (en) | 2022-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Fang et al. | A Method for Improving CNN-Based Image Recognition Using DCGAN. | |
CN109859190B (en) | Target area detection method based on deep learning | |
CN109034210A (en) | Object detection method based on hyper-feature fusion and multi-scale pyramid network
CN110472632A (en) | Character segmentation method, device and computer storage medium based on character feature | |
CN108509839A (en) | An efficient gesture detection and recognition method based on region-based convolutional neural networks
CN107330444A (en) | An automatic image text annotation method based on generative adversarial networks
CN106778852A (en) | An image content recognition method that corrects misjudgments
CN109583379A (en) | A pedestrian re-identification method based on a selective-erasure pedestrian alignment network
CN108108751A (en) | A scene recognition method based on convolutional multi-features and deep random forests
CN110163286A (en) | Hybrid pooling-based domain adaptive image classification method | |
CN109508675A (en) | A pedestrian detection method for complex scenes
CN109712127A (en) | A power transmission line fault detection method for machine-patrol video streams
Baojun et al. | Multi-scale object detection by top-down and bottom-up feature pyramid network | |
CN113642621A (en) | Zero-shot image classification method based on generative adversarial networks
CN113841161A (en) | Extensible architecture for automatically generating content distribution images | |
Mo et al. | Background noise filtering and distribution dividing for crowd counting | |
Hu et al. | Deep learning for distinguishing computer generated images and natural images: A survey | |
Xu et al. | Occlusion problem-oriented adversarial faster-RCNN scheme | |
Weng et al. | Data augmentation computing model based on generative adversarial network | |
Wang et al. | Facial expression recognition based on CNN | |
Jiang et al. | Improve object detection by data enhancement based on generative adversarial nets | |
CN109377498A (en) | An interactive image matting method based on recurrent neural networks
Ling et al. | A facial expression recognition system for smart learning based on YOLO and vision transformer | |
CN110084247A (en) | A multi-scale saliency detection method and device based on fuzzy features
Huang et al. | Menfish classification based on Inception_V3 convolutional neural network |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | CB03 | Change of inventor or designer information | Inventors after: Liu Jin; Gao Zhenyu; Li Yunhui. Inventors before: Liu Jin; Gao Zhenyu |
 | GR01 | Patent grant | |