CN109685100A - Character recognition method, server, and computer-readable storage medium - Google Patents
Character recognition method, server, and computer-readable storage medium
- Publication number
- CN109685100A CN109685100A CN201811341729.XA CN201811341729A CN109685100A CN 109685100 A CN109685100 A CN 109685100A CN 201811341729 A CN201811341729 A CN 201811341729A CN 109685100 A CN109685100 A CN 109685100A
- Authority
- CN
- China
- Prior art keywords
- character
- picture
- layers
- character recognition
- recognition model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/248—Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
Abstract
The present invention relates to artificial intelligence and discloses a character recognition method, comprising: acquiring character data, and synthesizing each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data; applying random perturbation processing to the synthesized character pictures to obtain character pictures of different types; inputting the character pictures of different types into a deep learning network for training to generate a character recognition model; and inputting a character picture to be recognized into the character recognition model and outputting a recognition result for that picture. The present invention also provides a server and a computer-readable storage medium. The character recognition method, server, and computer-readable storage medium provided by the invention implement OCR based on a deep learning algorithm, which can broaden the character recognition range and improve character recognition accuracy.
Description
Technical field
The present invention relates to the field of character recognition, and more particularly to a character recognition method, a server, and a computer-readable storage medium.
Background art
In OCR (Optical Character Recognition) services, certain fields are usually recognized under a particular scenario according to the demands of the business side. This generally requires the business side to provide real image data for that scenario, and the data must be labeled manually; the labeled pictures are then used to train a deep learning detection and recognition model. When the content of these fields belongs to a small finite set (for example, the gender field of an identity card, or the vehicle type and character-of-use fields of a driving license), the recognition accuracy is usually relatively high. However, when the content of a field belongs to a very large finite set, or can even be regarded as an infinite set (for example, the name field of an identity card or a driving license), recognition is easily limited by the amount of labeled data, and the accuracy suffers accordingly.
Summary of the invention
In view of this, the present invention proposes a character recognition method that can broaden the character recognition range and improve character recognition accuracy.
First, to achieve the above object, the present invention proposes a server. The server includes a memory and a processor; the memory stores a character recognition system that can run on the processor, and the character recognition system, when executed by the processor, implements the following steps:
acquiring character data, and synthesizing each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data;
applying random perturbation processing to the synthesized character pictures to obtain character pictures of different types;
inputting the character pictures of different types into a deep learning network for training to generate a character recognition model; and
inputting a character picture to be recognized into the character recognition model, and outputting a recognition result for the character picture to be recognized.
Optionally, the deep learning network is a CRNN model. The CRNN model includes one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers, where the VGG16 layer extracts the spatial features of a character picture, the two LSTM layers extract the temporal features of the character picture, and the two FC layers classify the extracted spatial and temporal features.
Optionally, the character recognition system, when executed by the processor, further implements the following steps:
testing the recognition accuracy of the character recognition model on characters; and
if the recognition accuracy is below a preset threshold, adjusting the character recognition model.
Optionally, the step of adjusting the character recognition model includes:
freezing the parameters of the VGG16 layer;
adjusting the parameters of the two LSTM layers and the two FC layers; and
training the adjusted character recognition model with real character image data.
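The adjustment steps above (freeze the convolutional backbone, update only the recurrent and fully connected layers) can be sketched as follows. This is a minimal illustrative sketch assuming a plain parameter dictionary and a toy SGD update; the layer names and update rule are assumptions for illustration, not the patent's actual implementation.

```python
# Hypothetical fine-tuning step: parameters whose names start with a
# frozen prefix (the "vgg16." backbone) are left untouched, while the
# LSTM and FC parameters receive a gradient update.

def fine_tune_step(params, grads, frozen_prefixes=("vgg16.",), lr=0.1):
    """Apply one SGD step, skipping any parameter whose name starts
    with a frozen prefix."""
    updated = {}
    for name, value in params.items():
        if name.startswith(frozen_prefixes):
            updated[name] = value                     # frozen: keep as-is
        else:
            updated[name] = value - lr * grads[name]  # trainable: SGD update
    return updated

params = {"vgg16.conv1.w": 1.0, "lstm1.w": 2.0, "lstm2.w": 3.0,
          "fc1.w": 4.0, "fc2.w": 5.0}
grads = {name: 1.0 for name in params}

new_params = fine_tune_step(params, grads)
print(new_params["vgg16.conv1.w"])  # 1.0 (frozen, unchanged)
```

In a real deep learning framework the same effect is typically achieved by disabling gradient computation for the backbone's parameters before training on the real character image data.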
In addition, to achieve the above object, the present invention also provides a character recognition method applied to a server. The method includes:
acquiring character data, and synthesizing each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data;
applying random perturbation processing to the synthesized character pictures to obtain character pictures of different types;
inputting the character pictures of different types into a deep learning network for training to generate a character recognition model; and
inputting a character picture to be recognized into the character recognition model, and outputting a recognition result for the character picture to be recognized.
Optionally, the deep learning network is a CRNN model. The CRNN model includes one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers, where the VGG16 layer extracts the spatial features of a character picture, the two LSTM layers extract the temporal features of the character picture, and the two FC layers classify the extracted spatial and temporal features.
Optionally, after the step of inputting the character pictures of different types into the deep learning network for training to generate the character recognition model, the method further includes:
testing the recognition accuracy of the character recognition model on characters; and
if the recognition accuracy is below a preset threshold, adjusting the character recognition model.
Optionally, the step of adjusting the character recognition model includes:
freezing the parameters of the VGG16 layer;
adjusting the parameters of the two LSTM layers and the two FC layers; and
training the adjusted character recognition model with real character image data.
Optionally, the random perturbation processing includes at least one of: Gaussian blur processing, Gaussian noise processing, small-angle rotation of the picture, contrast change processing of the picture, and color change processing of the picture.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium. The computer-readable storage medium stores a character recognition system, and the character recognition system can be executed by at least one processor, causing the at least one processor to execute the steps of any of the character recognition methods described above.
Compared with the prior art, the character recognition method, server, and computer-readable storage medium proposed by the present invention acquire character data and synthesize each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data; apply random perturbation processing to the synthesized character pictures to obtain character pictures of different types; input the character pictures of different types into a deep learning network for training to generate a character recognition model; and input a character picture to be recognized into the character recognition model, outputting a recognition result for the character picture to be recognized. In this way, diversified training sample data can be generated as needed, which solves the prior-art problem that the uneven distribution of real training samples leads to a narrow character recognition range and low accuracy, thereby broadening the character recognition range and improving character recognition accuracy.
Brief description of the drawings
Fig. 1 is a schematic diagram of an optional hardware architecture of a server of the present invention;
Fig. 2 is a program module diagram of a first embodiment of the character recognition system of the present invention;
Fig. 3 is a program module diagram of a second embodiment of the character recognition system of the present invention;
Fig. 4 is a flowchart of a first embodiment of the character recognition method of the present invention;
Fig. 5 is a flowchart of a second embodiment of the character recognition method of the present invention.
Reference numerals:
Server | 2 |
Network | 3 |
Memory | 11 |
Processor | 12 |
Network interface | 13 |
Character recognition system | 100 |
Acquisition module | 101 |
Processing module | 102 |
Generation module | 103 |
Output module | 104 |
Test module | 105 |
Adjustment module | 106 |
The realization of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that descriptions such as "first" and "second" in the present invention are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments can be combined with each other, but only on the basis that the combination can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination shall be deemed not to exist and does not fall within the protection scope claimed by the present invention.
As shown in Fig. 1, which is a schematic diagram of an optional hardware architecture of the application server 2 of the present invention.
In this embodiment, the application server 2 may include, but is not limited to, a memory 11, a processor 12, and a network interface 13, which can communicate with each other through a system bus. It should be pointed out that Fig. 1 shows only the application server 2 with components 11-13; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
The application server 2 may be a computing device such as a rack server, a blade server, a tower server, or a cabinet server; it may be an independent server or a server cluster composed of multiple servers.
The memory 11 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 11 may be an internal storage unit of the application server 2, such as a hard disk or memory of the application server 2. In other embodiments, the memory 11 may also be an external storage device of the application server 2, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the application server 2. Of course, the memory 11 may also include both the internal storage unit of the application server 2 and its external storage device. In this embodiment, the memory 11 is generally used to store the operating system and various application software installed on the application server 2, such as the program code of the character recognition system 100. In addition, the memory 11 may also be used to temporarily store various data that has been output or is to be output.
The processor 12 may, in some embodiments, be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 12 is generally used to control the overall operation of the application server 2. In this embodiment, the processor 12 is used to run the program code or process the data stored in the memory 11, for example to run the character recognition system 100.
The network interface 13 may include a wireless network interface or a wired network interface, and is generally used to establish a communication connection between the application server 2 and other electronic devices.
So far, the hardware structure and functions of the relevant devices of the present invention have been described in detail. In the following, various embodiments of the present invention will be presented based on the above introduction.
First, the present invention proposes a character recognition system 100.
As shown in Fig. 2, which is a program module diagram of a first embodiment of the character recognition system 100 of the present invention.
In this embodiment, the character recognition system 100 includes a series of computer program instructions stored in the memory 11. When the computer program instructions are executed by the processor 12, the character recognition operations of the various embodiments of the present invention can be realized. In some embodiments, based on the specific operations realized by each part of the computer program instructions, the character recognition system 100 can be divided into one or more modules. For example, in Fig. 2, the character recognition system 100 is divided into an acquisition module 101, a processing module 102, a generation module 103, and an output module 104. Among them:
The acquisition module 101 is used to acquire character data and to synthesize each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data.
Specifically, the character data can be English letters, symbols, numbers, Chinese characters, and the like; in this embodiment, the character data includes at least one character. The character data can be crawled from the network and then stored in a preset file, and when the user needs the character data, it is obtained directly from the preset file. The character data may also be character data provided by the business side and stored in a preset file, from which it can likewise be obtained directly when needed. Preferably, the preset file is a file in TXT format. Those skilled in the art can obtain the character data in any manner, and details are not repeated here.
The preset background picture is a picture determined by the user according to actual needs. In this embodiment, the preset background picture is preferably a picture crawled from the network with the keyword "paper", and there is at least one such picture. Of course, the picture may also be a picture obtained by the user photographing various kinds of paper with a camera. It can be understood that in other embodiments of the present invention, the background picture may also be a picture of another pattern, for example a picture of a license plate number, a picture of an identity card, and the like.
For example, when the acquired character data contains 5 items and there are 4 preset background pictures, then when performing image synthesis, preferably, each item of character data can be synthesized with each background picture. In this way, each item of character data can be synthesized into 4 character pictures, and the 5 items of character data can be synthesized into 20 character pictures. Of course, when performing image synthesis, it is not necessary for every item of character data to be synthesized with every background picture; this embodiment imposes no limitation. In this embodiment, synthesizing the character data with multiple background pictures increases the diversity of the character pictures.
In this embodiment, the image synthesis can be realized using any existing image composition technique. For example, when performing image synthesis, the width and height of the pixel area occupied by the character data can first be determined according to the length of the character data, the style of the character data, and the font size of the character data. After determining the width and height of the pixel area occupied by the character data, a corresponding pixel region is chosen from the pixels of the background picture, so that the pixels corresponding to the character data can be inserted into that pixel region, replacing the pixels originally located there. It can be understood that in other embodiments, instead of pixel replacement, pixel superposition may be used directly: each pixel corresponding to the character data is superposed on the corresponding pixel in the pixel region, and the superposed pixel values are used as the pixel values of the pixels in that region.
The processing module 102 is used to apply random perturbation processing to the synthesized character pictures to obtain character pictures of different types.
Specifically, the random perturbation processing includes Gaussian blur processing, Gaussian noise processing, small-angle rotation of the picture, contrast processing of the picture, color change processing of the picture, and the like. Applying Gaussian blur to a picture refers to applying Gaussian filtering with a certain mean and variance to the picture. Applying Gaussian noise to a picture refers to adding Gaussian noise on the three color channels of the picture; unlike Gaussian blur, this is a direct superposition on the pixel values, whereas Gaussian blur filters the picture. Small-angle rotation of a picture refers to determining the center point to be rotated from the field frame, taking the center of the captured picture as the rotation center (this can be adjusted according to actual business needs), and then rotating by an angle around that center point. Contrast processing of a picture refers to randomly changing the S (saturation) and V (value, i.e., lightness) of the picture in HSV color space. Color change processing of a picture refers to randomly changing the H (hue) of the picture in HSV color space.
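Two of the perturbation types listed above, additive Gaussian noise and a random contrast change, can be sketched as follows on a small grayscale grid. Rotation and blur are omitted for brevity; the parameter ranges and function names are illustrative assumptions, not the patent's configuration.

```python
# Hypothetical sketch of random perturbation on a grayscale image
# represented as a 2D list of pixel values in [0, 255].
import random

def add_gaussian_noise(image, mean=0.0, sigma=10.0, rng=random):
    """Add per-pixel Gaussian noise, clipping to the valid 0-255 range."""
    return [[max(0, min(255, int(round(p + rng.gauss(mean, sigma)))))
             for p in row] for row in image]

def random_contrast(image, low=0.8, high=1.2, rng=random):
    """Scale all pixel values by one random factor (a crude contrast change)."""
    factor = rng.uniform(low, high)
    return [[max(0, min(255, int(round(p * factor)))) for p in row]
            for row in image]

random.seed(0)
img = [[100, 150], [200, 250]]
noisy = add_gaussian_noise(img)
scaled = random_contrast(img)
# Perturbed outputs keep the image shape and stay within the pixel range.
assert all(0 <= p <= 255 for row in noisy for p in row)
assert all(0 <= p <= 255 for row in scaled for p in row)
```

A production pipeline would typically apply these perturbations in HSV space with an image library, as the description suggests; the sketch only shows the noise-then-clip logic.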
In this embodiment, applying at least one of the perturbation processing methods described above to the synthesized images can yield character pictures of different types, for example rotated character pictures, noisy character pictures, tilted character pictures, and the like. Applying perturbation processing to the synthesized images increases the diversity of the character pictures, making the training sample data richer, so that the character recognition model obtained by training on these samples can have higher recognition accuracy.
The generation module 103 is used to input the character pictures of different types into a deep learning network for training to generate a character recognition model.
Specifically, before the character pictures of different types are input into the deep learning network, the character pictures need to be preprocessed: each character picture is converted into a required feature vector, and the feature vector is then input into the deep learning network for training.
In this embodiment, the deep learning network is preferably a CRNN model. The CRNN model is a combined model of a convolutional neural network and a recurrent neural network, trainable end to end, with the following advantages: 1) the input data can be of arbitrary length (the picture width is arbitrary, and the word length is arbitrary); 2) the training set does not require per-character calibration; 3) it can be used both with a dictionary and without one (lexicon-free); 4) it performs well, and the model is small (few parameters).
In a specific embodiment, the CRNN model includes one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers. The VGG16 layer, consisting of 13 convolutional layers and 3 fully connected layers, extracts the spatial features of a character picture. The two LSTM layers extract the temporal features of the character picture, obtaining the contextual relations of the text to be recognized during training. The two FC layers classify the extracted spatial and temporal features. Compared with the existing CRNN model, the CRNN model in this embodiment adds one fully connected FC layer to accelerate the convergence of training.
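The data flow through this architecture (convolutional backbone, then two LSTM layers, then two FC layers) can be sketched by tracking tensor shapes only. The downsampling factor, feature dimension, and hidden size below are assumptions for illustration, not the patent's exact configuration.

```python
# Illustrative shape-flow sketch of the CRNN pipeline described above.

def crnn_output_shape(height, width, num_classes=37,
                      conv_downsample=4, lstm_hidden=256):
    # 1) VGG16-style backbone: collapses the height dimension and
    #    downsamples the width, yielding a feature sequence of length T.
    seq_len = width // conv_downsample
    feat_dim = 512                 # channels out of the backbone (assumed)
    # 2) Two bidirectional LSTM layers: each time step outputs the
    #    concatenation of both directions.
    lstm_out = 2 * lstm_hidden
    # 3) Two FC layers: a hidden projection, then per-step class scores.
    fc1_out = lstm_hidden
    return (seq_len, num_classes), feat_dim, lstm_out, fc1_out

shape, feat, lstm, fc1 = crnn_output_shape(height=32, width=128)
print(shape)   # (32, 37): 32 time steps, each scored over 37 classes
```

Because the sequence length depends only on the picture width, inputs of arbitrary width are supported, which matches advantage 1) above.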
The output module 104 is used to input a character picture to be recognized into the character recognition model and output a recognition result for the character picture to be recognized.
In this embodiment, when the user needs to recognize characters, after collecting a character picture of the characters to be recognized, the user only needs to input the character picture into the character recognition model, and the character recognition model recognizes the characters corresponding to the character picture. In this embodiment, the character recognition model can be stored in a local character recognition terminal or in a server, selected according to the actual needs of the user; this embodiment imposes no limitation.
Through the above program modules 101-104, the character recognition system 100 proposed by the present invention acquires character data and synthesizes each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data; applies random perturbation processing to the synthesized character pictures to obtain character pictures of different types; inputs the character pictures of different types into a deep learning network for training to generate a character recognition model; and inputs a character picture to be recognized into the character recognition model, outputting a recognition result for the character picture to be recognized. In this way, diversified training sample data can be generated as needed, which solves the prior-art problem that the uneven distribution of real training samples leads to a narrow character recognition range and low accuracy, thereby broadening the character recognition range and improving character recognition accuracy.
As shown in Fig. 3, which is a program module diagram of a second embodiment of the character recognition system 100 of the present invention. In this embodiment, the character recognition system 100 includes a series of computer program instructions stored in the memory 11; when the computer program instructions are executed by the processor 12, the character recognition operations of the various embodiments of the present invention can be realized. In some embodiments, based on the specific operations realized by each part of the computer program instructions, the character recognition system 100 can be divided into one or more modules. For example, in Fig. 3, the character recognition system 100 is divided into an acquisition module 101, a processing module 102, a generation module 103, an output module 104, a test module 105, and an adjustment module 106. The program modules 101-104 are identical to those of the first embodiment of the character recognition system 100 of the present invention, with the test module 105 and the adjustment module 106 added on that basis. Among them:
The acquisition module 101 is used to acquire character data and to synthesize each item of the acquired character data with a preset background picture to obtain a character picture corresponding to each item of character data.
Specifically, the character data can be English letters, symbols, numbers, Chinese characters, and the like; in this embodiment, the character data includes at least one character. The character data can be crawled from the network and then stored in a preset file, and when the user needs the character data, it is obtained directly from the preset file. The character data may also be character data provided by the business side and stored in a preset file, from which it can likewise be obtained directly when needed. Preferably, the preset file is a file in TXT format. Those skilled in the art can obtain the character data in any manner, and details are not repeated here.
The preset background picture is a picture determined by the user according to actual needs. In this embodiment, the preset background picture is preferably a picture crawled from the network with the keyword "paper", and there is at least one such picture. Of course, the picture may also be a picture obtained by the user photographing various kinds of paper with a camera. It can be understood that in other embodiments of the present invention, the picture may also be a picture of another pattern, for example a picture of a license plate number, a picture of an identity card, etc.
For example, when the obtained character data includes 5 pieces of character data and there are 4 preset background pictures, then when performing image synthesis, preferably, each piece of character data may be synthesized with each background picture; in this way, each piece of character data can be synthesized into 4 character pictures, and the 5 pieces of character data can be synthesized into 20 character pictures. Of course, when performing image synthesis, it is not necessary for every piece of character data to be synthesized with every background picture; this is not limited in this embodiment. In this embodiment, synthesizing the character data with multiple background pictures increases the diversity of the character pictures.
In this embodiment, the image synthesis may be implemented using any existing image composition technique. For example, when performing image synthesis, the length and width of the pixel space occupied by the character data may first be determined according to the length of the character data, the style of the character data, and the font size of the character data; after the length and width of the occupied pixel space are determined, a corresponding pixel region is selected from the pixels of the background picture so that the pixels corresponding to the character data can be inserted into that pixel region, replacing the pixels originally located in the region. It can be understood that in other embodiments, instead of pixel substitution, pixel superposition may be directly adopted, that is, each pixel corresponding to the character data is respectively superposed with the corresponding pixel in the pixel region, and the superposed pixel value is taken as the pixel value of each pixel in the pixel region.
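As an illustrative sketch (not taken from the patent itself), the pixel-substitution and pixel-superposition variants described above can be expressed in a few lines, treating pictures as plain 2-D grayscale arrays; all names and values here are hypothetical:

```python
def synthesize(char_pixels, background, x, y, mode="replace"):
    """Insert a character's pixels into a chosen region of a background picture.

    Both arguments are 2-D lists of grayscale values (0-255).
    mode="replace": the character pixels replace the pixels originally
                    located in the region (pixel substitution).
    mode="add":     the character pixels are superposed on the region's
                    pixels, clipped to 255 (pixel superposition).
    """
    img = [row[:] for row in background]  # copy so the background is reusable
    for i, row in enumerate(char_pixels):
        for j, value in enumerate(row):
            if mode == "replace":
                img[y + i][x + j] = value
            else:
                img[y + i][x + j] = min(255, img[y + i][x + j] + value)
    return img

# 5 pieces of character data combined with 4 backgrounds -> 20 character pictures
glyphs = [[[0, 255], [255, 0]] for _ in range(5)]
backgrounds = [[[200] * 4 for _ in range(3)] for _ in range(4)]
pictures = [synthesize(g, b, 1, 0) for g in glyphs for b in backgrounds]
```

Whether substitution or superposition is preferable depends on whether the character pixels should fully cover the background or blend with its texture.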
The processing module 102 is used to perform random perturbation processing on the synthesized character pictures to obtain character pictures of different types.
Specifically, the random perturbation processing includes Gaussian blur processing, Gaussian noise processing, small-angle rotation processing of the picture, and contrast processing and color change processing of the picture, etc. Here, performing Gaussian blur processing on a picture refers to applying Gaussian filtering with a certain mean and variance to the picture. Performing Gaussian noise processing on a picture refers to adding Gaussian noise on the three color channels of the picture; unlike Gaussian blur, this is a direct superposition on the pixel values, whereas Gaussian blur filters the picture. Small-angle rotation processing of a picture refers to determining, from the text-field frame, the center point about which to rotate — the center of the picture may be directly taken as the center of rotation, and this can be adjusted according to actual business needs — and then rotating the picture about that center point by an angle. Contrast processing of a picture refers to randomly changing the S (Saturation) and V (Value, i.e. lightness) components of the picture in the HSV color space; color change processing of a picture refers to randomly changing the H (Hue) component of the picture in the HSV color space.
In this embodiment, by applying at least one of the perturbation processing methods described above to the synthesized images, character pictures of different types can be obtained, for example, rotated character pictures, noisy character pictures, tilted character pictures, and so on. Performing perturbation processing on the synthesized images increases the diversity of the character pictures and makes the training sample data richer, so that the character recognition model obtained by training on the sample data can have a higher recognition accuracy rate.
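As a rough sketch of two of the perturbations described above — Gaussian-noise superposition on the color channels and a random hue change in HSV space — using only the Python standard library; the parameter values are illustrative assumptions, not values from the patent:

```python
import colorsys
import random

def add_gaussian_noise(img, sigma=10.0):
    """Gaussian-noise perturbation: superpose zero-mean Gaussian noise
    directly on each channel's value, clipping to the [0, 255] range."""
    return [[tuple(min(255, max(0, int(c + random.gauss(0, sigma)))) for c in px)
             for px in row] for row in img]

def jitter_hue(img, max_shift=0.05):
    """Color-change perturbation: randomly shift the H (hue) component of
    every pixel in HSV space, leaving S and V untouched."""
    shift = random.uniform(-max_shift, max_shift)
    out = []
    for row in img:
        new_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            r2, g2, b2 = colorsys.hsv_to_rgb((h + shift) % 1.0, s, v)
            new_row.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
        out.append(new_row)
    return out
```

In practice an image library (e.g. OpenCV or Pillow) would supply the Gaussian filtering and rotation steps; the point here is only the distinction between superposing values (noise) and remapping them in a color space (hue).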
The generation module 103 is used to input the character pictures of different types into a deep learning network for training so as to generate a character recognition model.
Specifically, before the character pictures of different types are input into the deep learning network, the character pictures first need to be preprocessed and converted into the required feature vectors, and the feature vectors are then input into the deep learning network for training.
In this embodiment, the deep learning network is preferably a CRNN model, which is a joint model of a convolutional neural network and a recurrent neural network. The CRNN model is an end-to-end trainable model and has the following advantages: 1) the input data can be of arbitrary length (the picture width is arbitrary, and the word length is arbitrary); 2) the training set does not require character-level calibration; 3) it can be used with both lexicon-based and lexicon-free sample libraries; 4) its performance is good, and the model is small (few parameters).
In a specific embodiment, the CRNN model includes one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers. The VGG16 layer is composed of 13 convolutional layers and 3 fully connected layers and is used to extract the spatial features of the character pictures; the two LSTM layers are used to extract the temporal features of the character pictures so as to obtain the contextual relations of the text to be recognized during training; and the two FC layers are used to classify the extracted spatial features and temporal features. Compared with the existing CRNN model, the CRNN model in this embodiment adds one fully connected FC layer to accelerate the convergence speed of training.
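The layer arrangement described above can be illustrated by tracing tensor shapes through the pipeline. This is a sketch under stated assumptions (input height 32, 512 backbone channels, 256 LSTM units per direction, 37 output classes) rather than the patent's exact configuration:

```python
def crnn_shapes(width, height=32, num_classes=37, lstm_hidden=256):
    """Trace tensor shapes through the CRNN sketched in the text: a
    VGG16-style convolutional backbone, two stacked bidirectional LSTM
    layers, and two fully connected layers. The concrete numbers are
    illustrative assumptions only.
    """
    # VGG16's five 2x2 poolings divide each spatial dimension by 32, so a
    # height-32 input collapses to a 1-pixel-high, 512-channel feature map.
    feat_h, feat_w = height // 32, width // 32
    assert feat_h == 1, "input height must be 32 in this sketch"
    time_steps = feat_w            # one time step per feature-map column
    # The LSTMs keep the time axis and re-encode each column's features.
    seq_shape = (time_steps, 2 * lstm_hidden)
    # The FC layers map every time step to per-character class scores.
    return seq_shape, (time_steps, num_classes)
```

The key design point is that the arbitrary picture width becomes the time axis of the recurrent layers, which is what allows input data of arbitrary length.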
The output module 104 is used to input a character picture to be recognized into the character recognition model and output the recognition result of the character picture to be recognized.
In this embodiment, when a user needs to recognize characters, it is only necessary to collect a character picture of the characters to be recognized and input the character picture into the character recognition model, and the character recognition model can recognize the characters corresponding to the character picture. In this embodiment, the character recognition model may be stored in a local character recognition terminal or in a server, as selected according to the actual needs of the user.
The test module 105 is used to test the recognition accuracy rate of the character recognition model on characters.
Specifically, after the character recognition model is generated, the recognition accuracy rate of the character recognition model on real character picture data needs to be tested.
In one embodiment, the user inputs character pictures of a number of real characters into the character recognition model, which outputs the recognition results corresponding to the real characters; the character recognition accuracy rate is then calculated from the output recognition results. It can be understood that, in order to obtain an accurate calculation of the character recognition rate, the amount of real character data input into the character recognition model should be as large as possible.
When calculating the character recognition accuracy rate, the recognition results output by the character recognition model may be compared with the prestored character data to determine whether each character is recognized correctly. If a character is recognized correctly, a counter is incremented by 1; when the recognition of all characters is completed, the counter value is divided by the number of characters input into the character recognition model to obtain the recognition accuracy rate of the character recognition model on real character picture data.
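The counting procedure just described amounts to a short accuracy computation; a minimal sketch (function name is hypothetical):

```python
def recognition_accuracy(recognized, prestored):
    """Compare each recognition result with the prestored character data,
    add 1 to a counter for every correct recognition, then divide the
    counter by the number of characters input into the model."""
    if not prestored:
        raise ValueError("no characters to evaluate")
    correct = sum(1 for got, want in zip(recognized, prestored) if got == want)
    return correct / len(prestored)
```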
The adjustment module 106 is used to adjust the character recognition model if the recognition accuracy rate is lower than a preset threshold.
Specifically, after the recognition accuracy rate of the character recognition model on characters is obtained, the recognition accuracy rate is compared with the preset threshold; if the recognition accuracy rate is lower than the preset threshold, the character recognition model is adjusted. In this embodiment, the preset threshold is a preset minimum value of the character recognition accuracy rate; for example, the preset threshold is 90%. The preset threshold may be set according to the actual needs of the user and may be further modified after being set.
It should be noted that, when the character recognition model is adjusted in this embodiment, the character recognition model is only fine-tuned, without making overly large adjustments.
Specifically, the step of adjusting the character recognition model includes:
Step A: freezing the parameters of the VGG16 layer.
In this embodiment, when the character recognition model is adjusted, the parameters of the VGG16 layer are not changed, that is, the parameters of the VGG16 layer are frozen, so as to prevent them from being adjusted under the stimulation of the training sample data while the character recognition model is being adjusted.
Step B: adjusting the parameters of the two LSTM layers and the two FC layers.
In this embodiment, when the character recognition model is adjusted, the parameters of the two LSTM layers and the two FC layers are adjusted. Specifically, the parameters of the two LSTM layers and the two FC layers are released (unfrozen), and the learning rate is set to decay every few epochs until it decays to a boundary value.
Step C: training the adjusted character recognition model using real character picture data.
In this embodiment, while the parameters of the two LSTM layers and the two FC layers are being adjusted, real character pictures are input into the character recognition model under adjustment for further training, so as to obtain the adjusted character recognition model. After the adjusted character recognition model is obtained, the test module 105 is then used to test the recognition accuracy rate of the model; if the test result meets the requirement, the training of the character recognition model is completed. If the test result obtained by the test module 105 still does not meet the requirement, steps A to C are repeated until the recognition accuracy rate of the resulting character recognition model meets the requirement.
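Steps A-C can be sketched as a generic fine-tuning loop with a stepped learning-rate decay. The layer names and the training/evaluation hooks below are assumptions for illustration only, not the patent's implementation:

```python
def decayed_lr(initial_lr, decay, epochs_per_step, floor, epoch):
    """Learning rate decayed every few epochs (step B) until it reaches a
    boundary (floor) value."""
    return max(initial_lr * decay ** (epoch // epochs_per_step), floor)

def fine_tune(model, train, evaluate, threshold, max_rounds=5):
    """Steps A-C: freeze the VGG16 backbone, release the LSTM and FC layer
    parameters, retrain on real character pictures, and repeat until the
    tested accuracy meets the threshold (or max_rounds is exhausted).

    `model` is any object exposing freeze/unfreeze by layer name; `train`
    and `evaluate` are supplied by the caller.
    """
    for _ in range(max_rounds):
        model.freeze("vgg16")                           # step A
        model.unfreeze("lstm1", "lstm2", "fc1", "fc2")  # step B
        train(model)                                    # step C
        if evaluate(model) >= threshold:
            return True
    return False
```

Keeping the convolutional backbone frozen is what makes this a fine-tune rather than a retrain: only the sequence and classification layers chase the new real data.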
Through the above program modules 101-106, the character recognition system 100 proposed by the present invention obtains character data and performs image synthesis on each obtained character data with a preset background picture to obtain a character picture corresponding to each character data; performs random perturbation processing on the synthesized character pictures to obtain character pictures of different types; inputs the character pictures of different types into a deep learning network for training so as to generate a character recognition model; inputs a character picture to be recognized into the character recognition model and outputs the recognition result of the character picture to be recognized; tests the recognition accuracy rate of the character recognition model on characters; and, if the recognition accuracy rate is lower than a preset threshold, adjusts the character recognition model. In this way, by fine-tuning the character recognition model when it does not reach the preset recognition accuracy rate, the accuracy rate of character recognition is improved.
In addition, the present invention further proposes a character recognition method.
Referring to Fig. 4, which is a flowchart of the first embodiment of the character recognition method of the present invention. In this embodiment, the execution order of the steps in the flowchart shown in Fig. 4 may change according to different requirements, and some steps may be omitted.
Step S500: obtaining character data, and performing image synthesis on each obtained character data with a preset background picture to obtain a character picture corresponding to each character data.
Specifically, the character data may be English letters, symbols, numbers, Chinese characters, and so on. In this embodiment, the character data includes at least one character. The character data may be crawled from the network and then stored in a preset file, from which it is directly obtained when a user needs to use it; the character data may also be character data provided by a business party and stored in a preset file, from which it can likewise be directly obtained when needed. Preferably, the preset file is a file in TXT format. Those skilled in the art may obtain the character data in any manner, which is not described in detail here.
The preset background picture is a picture determined by the user according to actual needs. In this embodiment, the preset background picture is preferably a picture crawled from the network with the keyword "paper", and there is at least one such picture. Of course, the picture may also be a picture obtained by the user photographing various kinds of paper with a camera. It can be understood that in other embodiments of the present invention, the picture may also be a picture of another pattern, for example a picture of a license plate number, a picture of an identity card, etc.
For example, when the obtained character data includes 5 pieces of character data and there are 4 preset background pictures, then when performing image synthesis, preferably, each piece of character data may be synthesized with each background picture; in this way, each piece of character data can be synthesized into 4 character pictures, and the 5 pieces of character data can be synthesized into 20 character pictures. Of course, when performing image synthesis, it is not necessary for every piece of character data to be synthesized with every background picture; this is not limited in this embodiment. In this embodiment, synthesizing the character data with multiple background pictures increases the diversity of the character pictures.
In this embodiment, the image synthesis may be implemented using any existing image composition technique. For example, when performing image synthesis, the length and width of the pixel space occupied by the character data may first be determined according to the length of the character data, the style of the character data, and the font size of the character data; after the length and width of the occupied pixel space are determined, a corresponding pixel region is selected from the pixels of the background picture so that the pixels corresponding to the character data can be inserted into that pixel region, replacing the pixels originally located in the region. It can be understood that in other embodiments, instead of pixel substitution, pixel superposition may be directly adopted, that is, each pixel corresponding to the character data is respectively superposed with the corresponding pixel in the pixel region, and the superposed pixel value is taken as the pixel value of each pixel in the pixel region.
Step S502: performing random perturbation processing on the synthesized character pictures to obtain character pictures of different types.
Specifically, the random perturbation processing includes Gaussian blur processing, Gaussian noise processing, small-angle rotation processing of the picture, and contrast processing and color change processing of the picture, etc. Here, performing Gaussian blur processing on a picture refers to applying Gaussian filtering with a certain mean and variance to the picture. Performing Gaussian noise processing on a picture refers to adding Gaussian noise on the three color channels of the picture; unlike Gaussian blur, this is a direct superposition on the pixel values, whereas Gaussian blur filters the picture. Small-angle rotation processing of a picture refers to determining, from the text-field frame, the center point about which to rotate — the center of the picture may be directly taken as the center of rotation, and this can be adjusted according to actual business needs — and then rotating the picture about that center point by an angle. Contrast processing of a picture refers to randomly changing the S (Saturation) and V (Value, i.e. lightness) components of the picture in the HSV color space; color change processing of a picture refers to randomly changing the H (Hue) component of the picture in the HSV color space.
In this embodiment, by applying at least one of the perturbation processing methods described above to the synthesized images, character pictures of different types can be obtained, for example, rotated character pictures, noisy character pictures, tilted character pictures, and so on. Performing perturbation processing on the synthesized images increases the diversity of the character pictures and makes the training sample data richer, so that the character recognition model obtained by training on the sample data can have a higher recognition accuracy rate.
Step S504: inputting the character pictures of different types into a deep learning network for training so as to generate a character recognition model.
Specifically, before the character pictures of different types are input into the deep learning network, the character pictures first need to be preprocessed and converted into the required feature vectors, and the feature vectors are then input into the deep learning network for training.
In this embodiment, the deep learning network is preferably a CRNN model, which is a joint model of a convolutional neural network and a recurrent neural network. The CRNN model is an end-to-end trainable model and has the following advantages: 1) the input data can be of arbitrary length (the picture width is arbitrary, and the word length is arbitrary); 2) the training set does not require character-level calibration; 3) it can be used with both lexicon-based and lexicon-free sample libraries; 4) its performance is good, and the model is small (few parameters).
In a specific embodiment, the CRNN model includes one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers. The VGG16 layer is composed of 13 convolutional layers and 3 fully connected layers and is used to extract the spatial features of the character pictures; the two LSTM layers are used to extract the temporal features of the character pictures so as to obtain the contextual relations of the text to be recognized during training; and the two FC layers are used to classify the extracted spatial features and temporal features. Compared with the existing CRNN model, the CRNN model in this embodiment adds one fully connected FC layer to accelerate the convergence speed of training.
Step S506: inputting a character picture to be recognized into the character recognition model, and outputting the recognition result of the character picture to be recognized.
In this embodiment, when a user needs to recognize characters, it is only necessary to collect a character picture of the characters to be recognized and input the character picture into the character recognition model, and the character recognition model can recognize the characters corresponding to the character picture. In this embodiment, the character recognition model may be stored in a local character recognition terminal or in a server, as selected according to the actual needs of the user; this is not limited in this embodiment.
Through the above steps S500-S506, the character recognition method proposed by the present invention obtains character data and performs image synthesis on each obtained character data with a preset background picture to obtain a character picture corresponding to each character data; performs random perturbation processing on the synthesized character pictures to obtain character pictures of different types; inputs the character pictures of different types into a deep learning network for training so as to generate a character recognition model; and inputs a character picture to be recognized into the character recognition model and outputs the recognition result of the character picture to be recognized. In this way, diversified training sample data can be generated as needed, thereby solving the prior-art problem that the uneven distribution of real training sample data results in a small character recognition range and a low accuracy rate, enlarging the character recognition range and improving the character recognition accuracy rate.
Referring to Fig. 5, which is a flowchart of the second embodiment of the character recognition method of the present invention. In this embodiment, the execution order of the steps in the flowchart shown in Fig. 5 may change according to different requirements, and some steps may be omitted.
Step S600: obtaining character data, and performing image synthesis on each obtained character data with a preset background picture to obtain a character picture corresponding to each character data.
Step S602: performing random perturbation processing on the synthesized character pictures to obtain character pictures of different types.
Step S604: inputting the character pictures of different types into a deep learning network for training so as to generate a character recognition model.
Step S606: inputting a character picture to be recognized into the character recognition model, and outputting the recognition result of the character picture to be recognized.
The above steps S600-S606 are similar to steps S500-S506 and are not repeated in this embodiment.
Step S608: testing the recognition accuracy rate of the character recognition model on characters.
Specifically, after the character recognition model is generated, the recognition accuracy rate of the character recognition model on real character picture data needs to be tested.
In one embodiment, the user inputs character pictures of a number of real characters into the character recognition model, which outputs the recognition results corresponding to the real characters; the character recognition accuracy rate is then calculated from the output recognition results. It can be understood that, in order to obtain an accurate calculation of the character recognition rate, the amount of real character data input into the character recognition model should be as large as possible.
When calculating the character recognition accuracy rate, the recognition results output by the character recognition model may be compared with the prestored character data to determine whether each character is recognized correctly. If a character is recognized correctly, a counter is incremented by 1; when the recognition of all characters is completed, the counter value is divided by the number of characters input into the character recognition model to obtain the recognition accuracy rate of the character recognition model on real character picture data.
Step S610: if the recognition accuracy rate is lower than a preset threshold, adjusting the character recognition model.
Specifically, after the recognition accuracy rate of the character recognition model on characters is obtained, the recognition accuracy rate is compared with the preset threshold; if the recognition accuracy rate is lower than the preset threshold, the character recognition model is adjusted. In this embodiment, the preset threshold is a preset minimum value of the character recognition accuracy rate; for example, the preset threshold is 90%. The preset threshold may be set according to the actual needs of the user and may be further modified after being set.
It should be noted that, when the character recognition model is adjusted in this embodiment, the character recognition model is only fine-tuned, without making overly large adjustments.
Specifically, the step of adjusting the character recognition model includes:
Step A: freezing the parameters of the VGG16 layer.
In this embodiment, when the character recognition model is adjusted, the parameters of the VGG16 layer are not changed, that is, the parameters of the VGG16 layer are frozen, so as to prevent them from being adjusted under the stimulation of the training sample data while the character recognition model is being adjusted.
Step B: adjusting the parameters of the two LSTM layers and the two FC layers.
In this embodiment, when the character recognition model is adjusted, the parameters of the two LSTM layers and the two FC layers are adjusted. Specifically, the parameters of the two LSTM layers and the two FC layers are released (unfrozen), and the learning rate is set to decay every few epochs until it decays to a boundary value.
Step C: training the adjusted character recognition model using real character picture data.
In this embodiment, while the parameters of the two LSTM layers and the two FC layers are being adjusted, real character pictures are input into the character recognition model under adjustment for further training, so as to obtain the adjusted character recognition model. After the adjusted character recognition model is obtained, the test module 105 is then used to test the recognition accuracy rate of the model; if the test result meets the requirement, the training of the character recognition model is completed. If the test result obtained by the test module 105 still does not meet the requirement, steps A to C are repeated until the recognition accuracy rate of the resulting character recognition model meets the requirement.
Through the above steps S600-S610, the character recognition method proposed by the present invention obtains character data and performs image synthesis on each obtained character data with a preset background picture to obtain a character picture corresponding to each character data; performs random perturbation processing on the synthesized character pictures to obtain character pictures of different types; inputs the character pictures of different types into a deep learning network for training so as to generate a character recognition model; inputs a character picture to be recognized into the character recognition model and outputs the recognition result of the character picture to be recognized; tests the recognition accuracy rate of the character recognition model on characters; and, if the recognition accuracy rate is lower than a preset threshold, adjusts the character recognition model. In this way, by fine-tuning the character recognition model when it does not reach the preset recognition accuracy rate, the accuracy rate of character recognition is improved.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits or drawbacks of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for causing a server (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (10)
1. A character recognition method applied to a server, characterized in that the method comprises:
obtaining character data, and performing image synthesis on each obtained character data with a preset background picture to obtain a character picture corresponding to each character data;
performing random perturbation processing on the synthesized character pictures to obtain character pictures of different types;
inputting the character pictures of different types into a deep learning network for training so as to generate a character recognition model; and
inputting a character picture to be recognized into the character recognition model, and outputting a recognition result of the character picture to be recognized.
2. The character recognition method according to claim 1, characterized in that the deep learning network is a CRNN model, the CRNN model comprising one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers, wherein the VGG16 layer is used to extract spatial features of the character pictures, the two LSTM layers are used to extract temporal features of the character pictures, and the two FC layers are used to classify the extracted spatial features and temporal features.
3. The character recognition method according to claim 2, characterized in that, after the step of inputting the character pictures of different types into the deep learning network for training so as to generate the character recognition model, the method further comprises:
testing a recognition accuracy rate of the character recognition model on characters; and
if the recognition accuracy rate is lower than a preset threshold, adjusting the character recognition model.
4. The character recognition method according to claim 3, characterized in that the step of adjusting the character recognition model comprises:
freezing the parameters of the VGG16 layer;
adjusting the parameters of the two LSTM layers and the two FC layers; and
training the adjusted character recognition model using real character image data.
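Claims 3 and 4 describe a test-then-adjust loop: measure accuracy, and if it falls below the preset threshold, freeze the VGG16 parameters and leave only the LSTM and FC parameters trainable before retraining on real character images. A stdlib-only sketch of that control flow follows; the layer names and the 0.95 threshold are illustrative assumptions, not values from the patent.

```python
PRESET_THRESHOLD = 0.95  # illustrative; the patent only says "preset threshold"

def adjust_for_finetuning(model):
    # Freeze the convolutional backbone; keep sequence and classifier
    # layers trainable, as claim 4 recites.
    for name in model:
        model[name]["trainable"] = not name.startswith("vgg16")
    return model

def maybe_adjust(model, accuracy, threshold=PRESET_THRESHOLD):
    # Only adjust (and later retrain on real data) when accuracy falls short.
    return adjust_for_finetuning(model) if accuracy < threshold else model

model = {name: {"trainable": True}
         for name in ["vgg16", "lstm_1", "lstm_2", "fc_1", "fc_2"]}
model = maybe_adjust(model, accuracy=0.80)
print({name: p["trainable"] for name, p in model.items()})
# vgg16 frozen; both LSTM and both FC layers remain trainable
```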
5. The character recognition method according to any one of claims 1 to 4, characterized in that the random perturbation processing comprises at least one of: Gaussian blur processing, Gaussian noise processing, slight rotation of the picture, contrast change processing of the picture, and color change processing of the picture.
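The five perturbation types of claim 5 can be sampled as below. The parameter ranges (blur radius, noise sigma, the ±5° reading of "slight" rotation, contrast and hue factors) are illustrative assumptions; actually applying them to pixels would require an imaging library such as Pillow, which is outside this sketch.

```python
import random

def sample_perturbation(rng):
    # One maker per perturbation type named in claim 5; only the chosen
    # type's parameters are drawn from the generator.
    makers = {
        "gaussian_blur":   lambda: {"radius": rng.uniform(0.5, 2.0)},
        "gaussian_noise":  lambda: {"sigma": rng.uniform(2.0, 10.0)},
        "small_rotation":  lambda: {"degrees": rng.uniform(-5.0, 5.0)},
        "contrast_change": lambda: {"factor": rng.uniform(0.7, 1.3)},
        "color_change":    lambda: {"hue_shift": rng.uniform(-0.1, 0.1)},
    }
    kind = rng.choice(sorted(makers))
    return kind, makers[kind]()

rng = random.Random(42)
for _ in range(3):
    print(sample_perturbation(rng))
```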
6. A server, characterized in that the server comprises a memory and a processor, the memory storing a character recognition system executable on the processor, wherein the character recognition system, when executed by the processor, implements the following steps:
obtaining character data, and performing image synthesis on each piece of obtained character data with a preset background picture to obtain a character picture corresponding to each piece of character data;
performing random perturbation processing on the synthesized character pictures to obtain character pictures of different types;
inputting the character pictures of different types into a deep learning network for training to generate a character recognition model; and
inputting a character picture to be recognized into the character recognition model, and outputting a recognition result of the character picture to be recognized.
7. The server according to claim 6, characterized in that the deep learning network is a CRNN model, the CRNN model comprising one VGG16 layer, two long short-term memory (LSTM) layers, and two fully connected (FC) layers, wherein the VGG16 layer is used to extract spatial features of a character picture, the two LSTM layers are used to extract temporal features of the character picture, and the two FC layers are used to classify the extracted spatial features and temporal features.
8. The server according to claim 7, characterized in that the character recognition system, when executed by the processor, further implements the following steps:
testing the recognition accuracy of the character recognition model on characters; and
if the recognition accuracy is lower than a preset threshold, adjusting the character recognition model.
9. The server according to claim 8, characterized in that the step of adjusting the character recognition model comprises:
freezing the parameters of the VGG16 layer;
adjusting the parameters of the two LSTM layers and the two FC layers; and
training the adjusted character recognition model using real character image data.
10. A computer-readable storage medium, the computer-readable storage medium storing a character recognition system, wherein the character recognition system is executable by at least one processor to cause the at least one processor to perform the steps of the character recognition method according to any one of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811341729.XA CN109685100B (en) | 2018-11-12 | 2018-11-12 | Character recognition method, server and computer readable storage medium |
PCT/CN2019/088638 WO2020098250A1 (en) | 2018-11-12 | 2019-05-27 | Character recognition method, server, and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811341729.XA CN109685100B (en) | 2018-11-12 | 2018-11-12 | Character recognition method, server and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109685100A true CN109685100A (en) | 2019-04-26 |
CN109685100B CN109685100B (en) | 2024-05-10 |
Family
ID=66185317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811341729.XA Active CN109685100B (en) | 2018-11-12 | 2018-11-12 | Character recognition method, server and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109685100B (en) |
WO (1) | WO2020098250A1 (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110135413A (en) * | 2019-05-08 | 2019-08-16 | 深圳前海达闼云端智能科技有限公司 | Method for generating character recognition image, electronic equipment and readable storage medium |
CN110222693A (en) * | 2019-06-03 | 2019-09-10 | 第四范式(北京)技术有限公司 | The method and apparatus for constructing character recognition model and identifying character |
CN110348436A (en) * | 2019-06-19 | 2019-10-18 | 平安普惠企业管理有限公司 | Text information in image is carried out to know method for distinguishing and relevant device |
CN110363290A (en) * | 2019-07-19 | 2019-10-22 | 广东工业大学 | A kind of image-recognizing method based on hybrid production style, device and equipment |
CN110458184A (en) * | 2019-06-26 | 2019-11-15 | 平安科技(深圳)有限公司 | Optical character identification householder method, device, computer equipment and storage medium |
CN110765442A (en) * | 2019-09-30 | 2020-02-07 | 奇安信科技集团股份有限公司 | Method and device for identifying verification code in verification picture and electronic equipment |
WO2020098250A1 (en) * | 2018-11-12 | 2020-05-22 | 平安科技(深圳)有限公司 | Character recognition method, server, and computer readable storage medium |
CN111414844A (en) * | 2020-03-17 | 2020-07-14 | 北京航天自动控制研究所 | Container number identification method based on convolution cyclic neural network |
CN112052852A (en) * | 2020-09-09 | 2020-12-08 | 国家气象信息中心 | Character recognition method of handwritten meteorological archive data based on deep learning |
WO2020258491A1 (en) * | 2019-06-28 | 2020-12-30 | 平安科技(深圳)有限公司 | Universal character recognition method, apparatus, computer device, and storage medium |
CN112215221A (en) * | 2020-09-22 | 2021-01-12 | 国交空间信息技术(北京)有限公司 | Automatic vehicle frame number identification method |
CN112287932A (en) * | 2019-07-23 | 2021-01-29 | 上海高德威智能交通系统有限公司 | Method, device and equipment for determining image quality and storage medium |
US10990876B1 (en) | 2019-10-08 | 2021-04-27 | UiPath, Inc. | Detecting user interface elements in robotic process automation using convolutional neural networks |
CN113012265A (en) * | 2021-04-22 | 2021-06-22 | 中国平安人寿保险股份有限公司 | Needle printing character image generation method and device, computer equipment and medium |
CN113239854A (en) * | 2021-05-27 | 2021-08-10 | 北京环境特性研究所 | Ship identity recognition method and system based on deep learning |
US11157783B2 (en) | 2019-12-02 | 2021-10-26 | UiPath, Inc. | Training optical character detection and recognition models for robotic process automation |
US11605210B2 (en) | 2020-01-21 | 2023-03-14 | Mobile Drive Netherlands B.V. | Method for optical character recognition in document subject to shadows, and device employing method |
CN117034212A (en) * | 2020-03-10 | 2023-11-10 | 百度在线网络技术(北京)有限公司 | Method, apparatus, electronic device and computer storage medium for processing image data |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287934A (en) * | 2020-08-12 | 2021-01-29 | 北京京东尚科信息技术有限公司 | Method and device for recognizing characters and obtaining character image feature extraction model |
CN112287936A (en) * | 2020-09-24 | 2021-01-29 | 深圳市智影医疗科技有限公司 | Optical character recognition test method and device, readable storage medium and terminal equipment |
CN114627459A (en) * | 2020-12-14 | 2022-06-14 | 菜鸟智能物流控股有限公司 | OCR recognition method, recognition device and recognition system |
CN112613572B (en) * | 2020-12-30 | 2024-01-23 | 北京奇艺世纪科技有限公司 | Sample data obtaining method and device, electronic equipment and storage medium |
CN112906693B (en) * | 2021-03-05 | 2022-06-24 | 杭州费尔斯通科技有限公司 | Method for identifying subscript character and subscript character |
CN113971806B (en) * | 2021-10-26 | 2023-05-05 | 北京百度网讯科技有限公司 | Model training and character recognition method, device, equipment and storage medium |
CN114495106A (en) * | 2022-04-18 | 2022-05-13 | 电子科技大学 | MOCR (metal-oxide-semiconductor resistor) deep learning method applied to DFB (distributed feedback) laser chip |
CN114758339B (en) * | 2022-06-15 | 2022-09-20 | 深圳思谋信息科技有限公司 | Method and device for acquiring character recognition model, computer equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001358925A (en) * | 2000-06-09 | 2001-12-26 | Minolta Co Ltd | Unit and method for image processing and recording medium |
WO2013121648A1 (en) * | 2012-02-17 | 2013-08-22 | オムロン株式会社 | Character-recognition method and character-recognition device and program using said method |
CN107273896A (en) * | 2017-06-15 | 2017-10-20 | 浙江南自智能科技股份有限公司 | A kind of car plate detection recognition methods based on image recognition |
WO2018090013A1 (en) * | 2016-11-14 | 2018-05-17 | Kodak Alaris Inc. | System and method of character recognition using fully convolutional neural networks with attention |
CN108446621A (en) * | 2018-03-14 | 2018-08-24 | 平安科技(深圳)有限公司 | Bank slip recognition method, server and computer readable storage medium |
CN108564103A (en) * | 2018-01-09 | 2018-09-21 | 众安信息技术服务有限公司 | Data processing method and device |
CN108573496A (en) * | 2018-03-29 | 2018-09-25 | 淮阴工学院 | Multi-object tracking method based on LSTM networks and depth enhancing study |
CN108596180A (en) * | 2018-04-09 | 2018-09-28 | 深圳市腾讯网络信息技术有限公司 | Parameter identification, the training method of parameter identification model and device in image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4661921B2 (en) * | 2008-08-26 | 2011-03-30 | 富士ゼロックス株式会社 | Document processing apparatus and program |
CN107392221B (en) * | 2017-06-05 | 2020-09-22 | 天方创新(北京)信息技术有限公司 | Training method of classification model, and method and device for classifying OCR (optical character recognition) results |
CN109685100B (en) * | 2018-11-12 | 2024-05-10 | 平安科技(深圳)有限公司 | Character recognition method, server and computer readable storage medium |
2018
- 2018-11-12 CN CN201811341729.XA patent/CN109685100B/en active Active
2019
- 2019-05-27 WO PCT/CN2019/088638 patent/WO2020098250A1/en active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001358925A (en) * | 2000-06-09 | 2001-12-26 | Minolta Co Ltd | Unit and method for image processing and recording medium |
WO2013121648A1 (en) * | 2012-02-17 | 2013-08-22 | オムロン株式会社 | Character-recognition method and character-recognition device and program using said method |
WO2018090013A1 (en) * | 2016-11-14 | 2018-05-17 | Kodak Alaris Inc. | System and method of character recognition using fully convolutional neural networks with attention |
CN107273896A (en) * | 2017-06-15 | 2017-10-20 | 浙江南自智能科技股份有限公司 | A kind of car plate detection recognition methods based on image recognition |
CN108564103A (en) * | 2018-01-09 | 2018-09-21 | 众安信息技术服务有限公司 | Data processing method and device |
CN108446621A (en) * | 2018-03-14 | 2018-08-24 | 平安科技(深圳)有限公司 | Bank slip recognition method, server and computer readable storage medium |
CN108573496A (en) * | 2018-03-29 | 2018-09-25 | 淮阴工学院 | Multi-object tracking method based on LSTM networks and depth enhancing study |
CN108596180A (en) * | 2018-04-09 | 2018-09-28 | 深圳市腾讯网络信息技术有限公司 | Parameter identification, the training method of parameter identification model and device in image |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020098250A1 (en) * | 2018-11-12 | 2020-05-22 | 平安科技(深圳)有限公司 | Character recognition method, server, and computer readable storage medium |
CN110135413A (en) * | 2019-05-08 | 2019-08-16 | 深圳前海达闼云端智能科技有限公司 | Method for generating character recognition image, electronic equipment and readable storage medium |
CN110222693A (en) * | 2019-06-03 | 2019-09-10 | 第四范式(北京)技术有限公司 | The method and apparatus for constructing character recognition model and identifying character |
CN110222693B (en) * | 2019-06-03 | 2022-03-08 | 第四范式(北京)技术有限公司 | Method and device for constructing character recognition model and recognizing characters |
CN110348436A (en) * | 2019-06-19 | 2019-10-18 | 平安普惠企业管理有限公司 | Text information in image is carried out to know method for distinguishing and relevant device |
CN110458184A (en) * | 2019-06-26 | 2019-11-15 | 平安科技(深圳)有限公司 | Optical character identification householder method, device, computer equipment and storage medium |
CN110458184B (en) * | 2019-06-26 | 2023-06-30 | 平安科技(深圳)有限公司 | Optical character recognition assistance method, device, computer equipment and storage medium |
WO2020258491A1 (en) * | 2019-06-28 | 2020-12-30 | 平安科技(深圳)有限公司 | Universal character recognition method, apparatus, computer device, and storage medium |
CN110363290A (en) * | 2019-07-19 | 2019-10-22 | 广东工业大学 | A kind of image-recognizing method based on hybrid production style, device and equipment |
CN112287932B (en) * | 2019-07-23 | 2024-05-10 | 上海高德威智能交通系统有限公司 | Method, device, equipment and storage medium for determining image quality |
CN112287932A (en) * | 2019-07-23 | 2021-01-29 | 上海高德威智能交通系统有限公司 | Method, device and equipment for determining image quality and storage medium |
CN110765442A (en) * | 2019-09-30 | 2020-02-07 | 奇安信科技集团股份有限公司 | Method and device for identifying verification code in verification picture and electronic equipment |
US10990876B1 (en) | 2019-10-08 | 2021-04-27 | UiPath, Inc. | Detecting user interface elements in robotic process automation using convolutional neural networks |
US11599775B2 (en) | 2019-10-08 | 2023-03-07 | UiPath, Inc. | Detecting user interface elements in robotic process automation using convolutional neural networks |
US11157783B2 (en) | 2019-12-02 | 2021-10-26 | UiPath, Inc. | Training optical character detection and recognition models for robotic process automation |
US11810382B2 (en) | 2019-12-02 | 2023-11-07 | UiPath, Inc. | Training optical character detection and recognition models for robotic process automation |
US11605210B2 (en) | 2020-01-21 | 2023-03-14 | Mobile Drive Netherlands B.V. | Method for optical character recognition in document subject to shadows, and device employing method |
CN117034212A (en) * | 2020-03-10 | 2023-11-10 | 百度在线网络技术(北京)有限公司 | Method, apparatus, electronic device and computer storage medium for processing image data |
CN111414844B (en) * | 2020-03-17 | 2023-08-29 | 北京航天自动控制研究所 | Container number identification method based on convolutional neural network |
CN111414844A (en) * | 2020-03-17 | 2020-07-14 | 北京航天自动控制研究所 | Container number identification method based on convolution cyclic neural network |
CN112052852A (en) * | 2020-09-09 | 2020-12-08 | 国家气象信息中心 | Character recognition method of handwritten meteorological archive data based on deep learning |
CN112052852B (en) * | 2020-09-09 | 2023-12-29 | 国家气象信息中心 | Character recognition method of handwriting meteorological archive data based on deep learning |
CN112215221A (en) * | 2020-09-22 | 2021-01-12 | 国交空间信息技术(北京)有限公司 | Automatic vehicle frame number identification method |
CN113012265A (en) * | 2021-04-22 | 2021-06-22 | 中国平安人寿保险股份有限公司 | Needle printing character image generation method and device, computer equipment and medium |
CN113012265B (en) * | 2021-04-22 | 2024-04-30 | 中国平安人寿保险股份有限公司 | Method, apparatus, computer device and medium for generating needle-type printed character image |
CN113239854B (en) * | 2021-05-27 | 2023-12-19 | 北京环境特性研究所 | Ship identity recognition method and system based on deep learning |
CN113239854A (en) * | 2021-05-27 | 2021-08-10 | 北京环境特性研究所 | Ship identity recognition method and system based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
WO2020098250A1 (en) | 2020-05-22 |
CN109685100B (en) | 2024-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109685100A (en) | Character identifying method, server and computer readable storage medium | |
CN108596277B (en) | Vehicle identity recognition method and device and storage medium | |
CN108229591B (en) | Neural network adaptive training method and apparatus, device, program, and storage medium | |
CN105359162B (en) | For the pattern mask of the selection and processing related with face in image | |
CN108229379A (en) | Image-recognizing method, device, computer equipment and storage medium | |
CN111428581A (en) | Face shielding detection method and system | |
CN108830235A (en) | Method and apparatus for generating information | |
US10614347B2 (en) | Identifying parameter image adjustments using image variation and sequential processing | |
CN108269254A (en) | Image quality measure method and apparatus | |
CN109871845A (en) | Certificate image extracting method and terminal device | |
CN107316029B (en) | A kind of living body verification method and equipment | |
CN110570435B (en) | Method and device for carrying out damage segmentation on vehicle damage image | |
CN109034069A (en) | Method and apparatus for generating information | |
CN108494778A (en) | Identity identifying method and device | |
CN107172354A (en) | Method for processing video frequency, device, electronic equipment and storage medium | |
CN108509994A (en) | character image clustering method and device | |
CN111666800A (en) | Pedestrian re-recognition model training method and pedestrian re-recognition method | |
CN111739027A (en) | Image processing method, device and equipment and readable storage medium | |
CN109377494A (en) | A kind of semantic segmentation method and apparatus for image | |
CN109242018A (en) | Image authentication method, device, computer equipment and storage medium | |
CN111067522A (en) | Brain addiction structural map assessment method and device | |
CN110874574A (en) | Pedestrian re-identification method and device, computer equipment and readable storage medium | |
CN110599514B (en) | Image segmentation method and device, electronic equipment and storage medium | |
CN109308704A (en) | Background elimination method, device, computer equipment and storage medium | |
CN115984930A (en) | Micro expression recognition method and device and micro expression recognition model training method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||