Method for improving handwriting OCR performance by utilizing synthesized online text image
Technical Field
The invention relates to the technical field of image processing, in particular to a method for improving handwriting OCR performance by utilizing a synthesized online text image.
Background
Generative adversarial networks (GANs) have become a popular research direction in the field of deep learning. A GAN actually contains two networks: a generator network (Generator) and a discriminator network (Discriminator). The two networks may be any kind of neural network, from convolutional neural networks and recurrent neural networks to autoencoders. In this configuration, the two networks participate in a competitive game and attempt to outdo each other while driving each other to complete their own tasks. After thousands of iterations, if everything goes well, the generator network can generate realistic fake images, and the discriminator network can reliably judge whether an image is real or fake. The core of a GAN is actually the generator; the discriminator exists mainly to introduce adversarial training, through which the generator network learns to generate high-quality pictures. During training, the generator aims to produce pictures that are as lifelike as possible so that the discriminator cannot tell whether a picture is real or generated, while the discriminator aims to distinguish real pictures from fake ones as accurately as possible; the generator thus hopes to maximize the discriminator's error rate, the discriminator hopes to minimize it, and the two confront each other and progress through competition.
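As a concrete illustration (not part of the claimed method), the adversarial game described above can be sketched as a single training step in PyTorch; the generator G, discriminator D, optimizers and noise dimension below are hypothetical placeholders, not the networks of this invention:

import torch
import torch.nn.functional as F

# One adversarial training step; G, D, opt_G, opt_D are assumed to exist.
def gan_step(G, D, opt_G, opt_D, real_images, z_dim=128):
    batch = real_images.size(0)
    z = torch.randn(batch, z_dim)

    # Discriminator update: real images should score 1, generated fakes 0.
    fake_images = G(z).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(real_images), torch.ones(batch, 1))
              + F.binary_cross_entropy_with_logits(D(fake_images), torch.zeros(batch, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator update: try to make the discriminator score fakes as real.
    g_loss = F.binary_cross_entropy_with_logits(D(G(z)), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()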
In the field of handwriting OCR, an ideal training set for a deep-learning-based handwriting OCR engine should cover various writing styles, background, light and shadow variations, all vocabularies that may possibly occur, and so on. However, acquiring such a training set is time-consuming and labor-intensive, and in some cases this severely limits the accuracy of handwriting OCR recognition. The importance of synthesizing handwritten images is thus apparent. Owing to the recent development of adversarial generative networks, many scholars have proposed methods for generating handwriting-style text lines from plain text or printed-style text lines. However, the handwriting styles generated by such methods are still not rich enough and lack the variability of real handwritten characters. Another idea for handwritten text line generation is to convert online handwriting data into offline images. Online data can be conveniently acquired by devices such as mobile phones and writing tablets; the data volume is large and the styles are varied, and if the online data can be converted into vivid offline handwritten images, it can greatly assist the training of handwriting OCR.
Disclosure of Invention
The invention aims to provide a method for improving handwriting OCR performance by utilizing a synthesized online text image, so as to ensure the recognition accuracy of handwriting OCR (optical character recognition) by converting online handwriting data into vivid offline handwritten images for assisting OCR training.
To achieve the above object, the present invention provides a method for improving the performance of handwritten OCR by using a synthesized online text image, comprising the steps of:
step S1, selecting and dividing a data set: adopting an IAM data set, wherein the IAM data set comprises an IAM handwriting data set and an IAM online handwriting data set; a real image I_gt is processed to obtain a real handwritten image I_sty, which is stored in the IAM handwriting data set;
step S2, constructing a generator of the style GAN network, wherein the generator comprises a content encoder, a content decoder and a style encoder;
step S3, training the generator of the GAN network: performing a skeletonization operation on the real handwritten image I_sty to obtain a skeleton diagram I_ske, then inputting the skeleton diagram I_ske into the content encoder to extract content features, generating a feature map and outputting it to the content decoder; inputting the real handwritten image I_sty into the style encoder to extract style features, and outputting a 512-dimensional style vector s obtained through global pooling; the content decoder receives the feature map from the content encoder and the components obtained by affine transformation of the style vector s from the style encoder, changes the distribution at the feature-map level using the AdaIN operation, blends the style information into the feature map, and outputs a composite image I_syn;
step S4, synthesizing text images from the online data set through the trained generator: converting the IAM online handwriting data into skeleton diagrams, selecting style images from the test set of the IAM handwriting data set, inputting both into the generator together, and generating offline handwritten images.
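For illustration only, converting an online stroke sequence into the kind of skeleton image used in steps S3 and S4 could look like the following sketch; the stroke format, canvas size and library choices (Pillow, scikit-image) are assumptions rather than part of the invention:

import numpy as np
from PIL import Image, ImageDraw
from skimage.morphology import skeletonize

# Render online strokes to a binary image, then thin it to a skeleton.
# Strokes are assumed to be lists of (x, y) points already scaled to the canvas;
# height = 96 and width = 3 * height follow the normalization in the embodiment below.
def strokes_to_skeleton(strokes, height=96, width=288, pen_width=3):
    canvas = Image.new("L", (width, height), color=0)
    draw = ImageDraw.Draw(canvas)
    for stroke in strokes:
        if len(stroke) > 1:
            draw.line(stroke, fill=255, width=pen_width)
    binary = np.array(canvas) > 127            # boolean ink mask
    skeleton = skeletonize(binary)             # 1-pixel-wide skeleton
    return skeleton.astype(np.uint8) * 255     # back to an 8-bit image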
Further, in step S2, the content encoder refers to the network structure of VGG-19 and is composed of a plurality of convolutional layers and pooling layers that down-sample the input; the content encoder includes five convolutional layer modules and three fully-connected layers.
Further, in step S3, AdaIN changes the data distribution at the feature-map level, and the effect of style migration can be achieved by controlling and changing the affine parameters in the AdaIN layer; the AdaIN layer changes the distribution of the feature map inside the network, so the task of style migration is handed over to AdaIN while other tasks are realized by the network structure, wherein the AdaIN operation is as follows:
\mathrm{AdaIN}(X) = V_\sigma \cdot \frac{X - \mu(X)}{\sigma(X)} + V_\mu
wherein X represents the feature map obtained after encoding the content picture, V_σ and V_μ are obtained from the style vector s through an affine transformation, and μ and σ denote the mean and standard deviation.
Further, in step S2, the content decoder has a structure similar to that of the decoder part of U-Net and symmetrical to that of the content encoder, and generates a composite handwritten text picture of the same size as the input by using multiple convolutional layers and bilinear upsampling layers; the content decoder includes five convolutional layer modules.
Further, in step S3, for a skeleton diagram I_ske with height H and width W, the content encoder converts it into a feature map; the input of the style encoder is the real handwritten image I_sty, from which a 512-dimensional style vector s is output after global pooling; the content decoder generates an image I_syn with height H and width W using multiple convolutional layers and bilinear upsampling layers; the style vector s output by the style encoder adjusts the feature maps of the middle layers of the content decoder by means of AdaIN so as to blend the style into the final output I_syn of the content decoder.
Further, in step S3, three loss functions are adopted, including a content loss function, a perceptual loss function and an adversarial loss function, so that the synthetic picture generated by the trained generator is as realistic as possible and carries the style of the corresponding real picture;
the content loss function computes the pixel-level difference between the synthesized text image and the real image to optimize the parameters, where the synthesized image generated by the generator is I_syn and the real handwritten image is I_gt:
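The formula itself is not reproduced in this text; assuming a standard pixel-wise L1 loss (the published formulation may use a different norm), it plausibly takes the form:

L_{content} = \lVert I_{syn} - I_{gt} \rVert_1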
The perceptual loss function alleviates the problem that the content loss function causes the generated image to be excessively smooth. Using a VGG-19 network pre-trained on ImageNet, the feature space after the ReLU activation function following the first convolutional layer of each of the five convolution modules is used:
calculating the feature difference between the composite image I_syn and the real image I_gt as follows:
MSE represents the mean square error function, and the α values are 1/32, 1/16, 1/8, 1/4 and 1, respectively;
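The omitted formula can plausibly be reconstructed from the description above; assuming φ_i denotes the feature map taken after the i-th selected ReLU of VGG-19, a form consistent with the text is:

L_{per} = \sum_{i=1}^{5} \alpha_i \, \mathrm{MSE}\big( \phi_i(I_{syn}), \phi_i(I_{gt}) \big)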
the adversarial loss function adopts the PatchGAN method, in which the discriminator network of the original GAN is changed into a fully convolutional network; the PatchGAN discriminator discriminates over small regions of the input image's receptive field and outputs a result for each, making the model focus more on image details; the stacked convolutional layers finally output an N × N matrix, each element of which actually represents a larger receptive field in the original image and corresponds to a patch of the original image, specifically as follows:
D represents the discriminator network, which outputs the probability of each compared patch being real or fake.
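The formula is again not reproduced here; assuming the standard GAN objective averaged over the PatchGAN outputs, it plausibly takes the form:

L_{adv} = \mathbb{E}\big[ \log D(I_{gt}) \big] + \mathbb{E}\big[ \log\big(1 - D(I_{syn})\big) \big]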
Compared with the prior art, the invention has the following beneficial effects: a GAN framework is provided which adopts an encoder-decoder structure and uses a style encoder to extract style features from a real handwritten image as the conditional input of the decoder; the method is well suited to maintaining high-resolution, high-detail images and can improve the fidelity of local areas of the generated images; training an OCR model with the generated synthetic images as real handwritten text images greatly improves accuracy; after the generator network is trained, the obtained generator model can convert online handwritten data into vivid offline handwritten images for assisting OCR training, thereby further improving the recognition accuracy of the OCR model; the handwritten images generated by the framework can effectively improve OCR recognition accuracy and provide a feasible alternative to collecting and constructing a large-scale handwriting data set.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a stylistic GAN network of the present invention;
FIG. 3 is a diagram of the training process of the GAN network of the present invention;
fig. 4 is a structural diagram of a generator in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Referring to FIGS. 1-4, the present invention provides a method for improving handwriting OCR performance using synthesized online text images, comprising the steps of:
step S1, selecting and dividing a data set, wherein an IAM data set is adopted, comprising an IAM handwriting data set and an IAM online handwriting data set; in the IAM handwriting data set, there are 6,161 offline handwritten text lines in the training set, with the height normalized to H = 96 and the width set to W = 3H, 1,840 text lines in the verification set and 1,861 lines in the test set, and there are 13,049 text lines in the IAM online handwriting data set; the real image I_gt is processed to obtain a real handwritten image I_sty, which is stored in the IAM handwriting data set;
step S2, constructing a generator of the style GAN network, wherein the generator comprises three parts, namely a content encoder, a content decoder and a style encoder; the content encoder comprises five convolutional layer modules and three fully-connected layers, wherein the first convolutional layer module conv1 contains two convolutional layers with convolution kernels of (3 × 64), each convolutional layer is followed by a ReLU activation function, and a pooling layer is arranged between two adjacent convolution modules for down-sampling; the second convolutional layer module conv2 contains two convolutional layers (3 × 128), each followed by a ReLU activation function; the third convolution module contains four convolutional layers (3 × 256), each followed by a ReLU activation function, and the activation function of the last convolutional layer is followed by a pooling layer for down-sampling; the fourth convolution module contains four convolutional layers (3 × 512), each followed by a ReLU activation function, with the last activation function followed by a pooling layer for down-sampling; the fifth convolution module contains four convolutional layers with convolution kernels of (3 × 512), and the last convolutional layer is followed by a pooling layer, which is connected in turn to a fully-connected layer FC4096, a ReLU activation function, a fully-connected layer FC4096, a ReLU activation function, a fully-connected layer FC1000 and a softmax classifier that outputs the result; the skeletonized skeleton map is used as input, and the content encoder converts the image space into a feature space to extract the content features of the handwritten text picture and form a feature map, which is then input into the content decoder;
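As a non-limiting sketch, a VGG-19-style content encoder of this shape could be written in PyTorch as follows; the 3 × 3 kernel size and padding are assumptions (consistent with the 3 × 3 convolutions mentioned for the decoder in step S3 below), and the fully-connected classifier head described above is omitted because, inside the generator, the encoder output feeds the decoder rather than a classifier:

import torch.nn as nn

# A block of n_convs 3x3 convolutions, each followed by ReLU.
def conv_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return layers

class ContentEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            *conv_block(1, 64, 2), nn.MaxPool2d(2),     # conv1 + pooling
            *conv_block(64, 128, 2), nn.MaxPool2d(2),   # conv2 + pooling
            *conv_block(128, 256, 4), nn.MaxPool2d(2),  # conv3 + pooling
            *conv_block(256, 512, 4), nn.MaxPool2d(2),  # conv4 + pooling
            *conv_block(512, 512, 4),                   # conv5
        )

    def forward(self, skeleton):        # skeleton: (B, 1, H, W) skeleton image
        return self.features(skeleton)  # content feature map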
for the style encoder, its structure is identical to that of the content encoder, except that it receives a real handwritten image I_sty and outputs a 512-dimensional style vector s obtained through global pooling; the style vector s adjusts the feature maps of the middle layers of the content decoder by means of the AdaIN operation, so that the style features extracted by the style encoder can be combined with the content features; AdaIN changes the data distribution at the feature-map level, and the effect of style migration can be realized by controlling and changing the affine parameters in the AdaIN layer. Because the AdaIN layer changes the distribution of feature maps within the network, the task of style migration can be handed over to AdaIN while other tasks are implemented by the network structure, where AdaIN operates as follows:
\mathrm{AdaIN}(X) = V_\sigma \cdot \frac{X - \mu(X)}{\sigma(X)} + V_\mu
wherein X represents the encoded feature map of the content picture, V_σ and V_μ are obtained by affine transformation of the style vector s, the parameters of the affine transformation being different for different layers of the content decoder, and μ and σ denote the mean and standard deviation;
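A minimal sketch of this AdaIN operation in PyTorch, assuming channel-wise statistics and a learned linear layer as the affine transformation from s to (V_σ, V_μ):

import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, style_dim, num_channels):
        super().__init__()
        # Affine transformation of the style vector s -> (V_sigma, V_mu).
        self.affine = nn.Linear(style_dim, 2 * num_channels)

    def forward(self, x, s):                     # x: (B, C, H, W), s: (B, style_dim)
        v_sigma, v_mu = self.affine(s).chunk(2, dim=1)
        mu = x.mean(dim=(2, 3), keepdim=True)    # channel-wise mean of the content
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5
        x_norm = (x - mu) / sigma                # normalize away content statistics
        return v_sigma[..., None, None] * x_norm + v_mu[..., None, None]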
for the content decoder, its structure is similar to the decoder part of U-Net and symmetrical to the content encoder, and it generates a composite handwritten text picture of the same size as the input using multiple convolutional layers and bilinear upsampling layers; it includes five convolutional layer modules: the first convolutional layer module conv1 contains four convolutional layers with convolution kernels of (3 × 512); the second convolutional layer module conv2 contains four convolutional layers (3 × 512); the third convolution module contains four convolutional layers (3 × 256); the fourth convolution module has two convolutional layers with convolution kernels of (3 × 128); and the fifth convolution module has two convolutional layers of (3 × 64). Each convolutional layer is followed by a ReLU activation function, and an upsampling layer between two adjacent convolution modules enlarges the image, where the upsampling uses the bilinear method; the decoder outputs a feature map at each scale, receives the components obtained by affine transformation of the style vector s from the style encoder, changes the distribution of the feature map at the feature-map level using the AdaIN operation, and blends the style features from the style encoder into the content feature map;
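Continuing the sketch, a decoder of this shape might combine the conv_block and AdaIN helpers from the sketches above with bilinear upsampling; the exact placement of AdaIN relative to each convolution module is an assumption:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentDecoder(nn.Module):
    def __init__(self, style_dim=512):
        super().__init__()
        # (in_channels, out_channels, number of convolutions) per module.
        chans = [(512, 512, 4), (512, 512, 4), (512, 256, 4),
                 (256, 128, 2), (128, 64, 2)]
        self.blocks = nn.ModuleList(
            [nn.Sequential(*conv_block(i, o, n)) for i, o, n in chans])
        self.adains = nn.ModuleList([AdaIN(style_dim, o) for _, o, _ in chans])
        self.to_image = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, feat, s):
        for i, (block, adain) in enumerate(zip(self.blocks, self.adains)):
            feat = adain(block(feat), s)   # blend the style into the features
            if i < len(self.blocks) - 1:   # bilinear upsampling between modules
                feat = F.interpolate(feat, scale_factor=2, mode="bilinear",
                                     align_corners=False)
        return torch.sigmoid(self.to_image(feat))  # synthetic image I_syn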
step S3, training the generator of the GAN network, whereby the generator converts a skeleton diagram into a handwritten image that has the style of the real image and the same, lifelike content. To train the generator, a skeletonization operation is performed on the real handwritten image I_sty to obtain its skeleton diagram I_ske; the skeleton diagram I_ske is then input into the content encoder for content feature extraction, the real handwritten image I_sty is input into the style encoder to extract its style features, and the output of the content encoder is fed to the content decoder. For a skeleton diagram I_ske with height H and width W, the input of the style encoder is the real handwritten image I_sty, from which a 512-dimensional style vector s is output after global pooling; the content decoder generates an image I_syn of height H and width W using multiple layers of 3 × 3 convolution and bilinear upsampling layers, and the style vector s output by the style encoder adjusts the feature maps of the middle layers of the content decoder by means of AdaIN so as to blend the style into the final output I_syn of the content decoder;
wherein the skeleton diagram I_ske and the real handwritten image I_sty form paired training data I_ske-I_sty; a verification set and a test set are extracted from the IAM handwriting data set, and such sample pairs are used to check the quality of the synthetic pictures of the generator network model so as to train the generator.
Three loss functions are adopted in the training experiments of the invention, including a content loss function, a perceptual loss function and an adversarial loss function; the content loss function computes the pixel-level difference between the synthesized text image and the real image to optimize the parameters, where the synthesized image generated by the generator is I_syn and the real image is I_gt:
The perceptual loss function alleviates the problem that the content loss function causes the generated image to be excessively smooth. Using a VGG-19 network pre-trained on ImageNet, the feature space after the ReLU activation function following the first convolutional layer of each of the five convolution modules is used:
calculating the feature difference between the composite image I_syn and the real image I_gt as follows:
MSE represents the mean square error function, and α takes the values 1/32, 1/16, 1/8, 1/4 and 1, respectively.
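A sketch of such a perceptual loss in PyTorch follows; the torchvision layer indices for the ReLU after the first convolution of each VGG-19 module (1, 6, 11, 20, 29) and the grayscale-to-RGB handling are assumptions of this illustration:

import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen ImageNet-pretrained VGG-19 feature extractor (newer torchvision
# versions use the weights= argument instead of pretrained=True).
_vgg = vgg19(pretrained=True).features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

RELU_IDS = (1, 6, 11, 20, 29)           # ReLU after conv1_1 ... conv5_1
ALPHAS = (1/32, 1/16, 1/8, 1/4, 1.0)

def perceptual_loss(i_syn, i_gt):
    # VGG-19 expects 3-channel input; grayscale images are repeated.
    x, y = i_syn.repeat(1, 3, 1, 1), i_gt.repeat(1, 3, 1, 1)
    loss = 0.0
    for idx, layer in enumerate(_vgg):
        x, y = layer(x), layer(y)
        if idx in RELU_IDS:
            loss = loss + ALPHAS[RELU_IDS.index(idx)] * F.mse_loss(x, y)
        if idx == RELU_IDS[-1]:         # no need to go deeper
            break
    return loss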
The adversarial loss function adopts the PatchGAN method: the original GAN discriminator network is replaced by a fully convolutional network. An ordinary GAN discriminator only outputs a single true-or-false value, whereas the PatchGAN discriminator discriminates over small regions of the input image's receptive field and outputs a result for each, so that training makes the model pay more attention to image details; the stacked convolutional layers finally output an N × N matrix, each element of which actually represents a relatively large receptive field in the original image and corresponds to a patch of the original image. The method therefore has good advantages for maintaining high-resolution, high-detail images and can improve the fidelity of local areas of the generated image, specifically as follows:
D represents the discriminator network, which outputs the probability of each patch of its input being real or fake;
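A sketch of such a PatchGAN-style discriminator in PyTorch; the layer widths, 4 × 4 strided convolutions and instance normalization are conventional PatchGAN choices assumed here, not taken from the text:

import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        def down(i, o, norm=True):
            layers = [nn.Conv2d(i, o, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(o))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *down(in_ch, base, norm=False),
            *down(base, base * 2),
            *down(base * 2, base * 4),
            nn.Conv2d(base * 4, 1, 4, padding=1),  # one real/fake logit per patch
        )

    def forward(self, image):
        return self.net(image)  # (B, 1, N, N) matrix of patch logits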
step S4, synthesizing text images from the online data set through the trained generator: the IAM online handwriting data is converted into skeleton diagrams, style images are selected from the test set of the IAM handwriting data set, both are input into the generator together, and very vivid offline handwritten images are generated. Experiments show that training an OCR model with the generated synthetic images as real handwritten text images greatly improves accuracy compared with images synthesized from the online data by other algorithms; moreover, training the OCR model on the generated images together with the real images further improves its recognition accuracy.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.