WO2019232861A1 - Handwriting model training method, text recognition method, apparatus, device, and medium - Google Patents


Info

Publication number: WO2019232861A1
Application number: PCT/CN2018/094271
Authority: WIPO (PCT)
Prior art keywords: chinese, text, term, recognition model, chinese text
Other languages: English (en), French (fr)
Inventors: 孙强 (Sun Qiang), 周罡 (Zhou Gang)
Original Assignee: 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 平安科技(深圳)有限公司
Publication of WO2019232861A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40: Document-oriented image-based pattern recognition
    • G06V30/41: Analysis of document content
    • G06V30/413: Classification of content, e.g. text, photographs or tables
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Definitions

  • the present application relates to the field of Chinese text recognition, and in particular to a handwriting model training method, a text recognition method, an apparatus, a computer device, and a medium.
  • the embodiments of the present application provide a handwriting model training method, apparatus, computer device, and medium to solve the problem of the low accuracy of current handwritten Chinese text recognition.
  • a handwriting model training method includes:
  • Obtain standard Chinese text training samples, input them into a bidirectional long short-term memory (BiLSTM) neural network, and train based on a connectionist temporal classification (CTC) algorithm to obtain the total error factor of the BiLSTM network; based on that total error factor, update the network parameters of the BiLSTM network using a particle swarm optimization algorithm to obtain a standard Chinese text recognition model;
  • Obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, and train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model; based on that total error factor, update the network parameters of the standard Chinese text recognition model using the particle swarm optimization algorithm to obtain an adjusted Chinese handwritten text recognition model;
  • Obtain Chinese text samples to be tested, recognize them using the adjusted Chinese handwritten text recognition model, obtain the error texts whose recognition results do not match the true results, and use all the error texts as error text training samples;
  • Input the error text training samples into the adjusted Chinese handwritten text recognition model and train based on the CTC algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model; based on that total error factor, update the network parameters of the adjusted model using the particle swarm optimization algorithm to obtain the target Chinese handwritten text recognition model.
  • a handwriting model training device includes:
  • a standard Chinese text recognition model acquisition module, configured to obtain standard Chinese text training samples, input them into a bidirectional long short-term memory (BiLSTM) neural network, train based on a connectionist temporal classification (CTC) algorithm to obtain the total error factor of the BiLSTM network, and, based on that total error factor, update the network parameters of the BiLSTM network using a particle swarm optimization algorithm to obtain a standard Chinese text recognition model;
  • an adjusted Chinese handwritten text recognition model acquisition module, configured to obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model, and, based on that total error factor, update the network parameters of the standard Chinese text recognition model using the particle swarm optimization algorithm to obtain an adjusted Chinese handwritten text recognition model;
  • an error text training sample acquisition module, configured to obtain Chinese text samples to be tested, recognize them using the adjusted Chinese handwritten text recognition model, obtain the error texts whose recognition results do not match the true results, and use all the error texts as error text training samples;
  • a target Chinese handwritten text recognition model acquisition module, configured to input the error text training samples into the adjusted Chinese handwritten text recognition model, train based on the CTC algorithm to obtain the total error factor of the adjusted model, and, based on that total error factor, update the network parameters of the adjusted model using the particle swarm optimization algorithm to obtain the target Chinese handwritten text recognition model.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • the processor executes the computer-readable instructions, the following steps are implemented:
  • Obtain standard Chinese text training samples, input them into a bidirectional long short-term memory (BiLSTM) neural network, and train based on a connectionist temporal classification (CTC) algorithm to obtain the total error factor of the BiLSTM network; based on that total error factor, update the network parameters of the BiLSTM network using a particle swarm optimization algorithm to obtain a standard Chinese text recognition model;
  • Obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, and train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model; based on that total error factor, update the network parameters of the standard Chinese text recognition model using the particle swarm optimization algorithm to obtain an adjusted Chinese handwritten text recognition model;
  • Obtain Chinese text samples to be tested, recognize them using the adjusted Chinese handwritten text recognition model, obtain the error texts whose recognition results do not match the true results, and use all the error texts as error text training samples;
  • Input the error text training samples into the adjusted Chinese handwritten text recognition model and train based on the CTC algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model; based on that total error factor, update the network parameters of the adjusted model using the particle swarm optimization algorithm to obtain the target Chinese handwritten text recognition model.
  • One or more non-volatile readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
  • Obtain standard Chinese text training samples, input them into a bidirectional long short-term memory (BiLSTM) neural network, and train based on a connectionist temporal classification (CTC) algorithm to obtain the total error factor of the BiLSTM network; based on that total error factor, update the network parameters of the BiLSTM network using a particle swarm optimization algorithm to obtain a standard Chinese text recognition model;
  • Obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, and train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model; based on that total error factor, update the network parameters of the standard Chinese text recognition model using the particle swarm optimization algorithm to obtain an adjusted Chinese handwritten text recognition model;
  • Obtain Chinese text samples to be tested, recognize them using the adjusted Chinese handwritten text recognition model, obtain the error texts whose recognition results do not match the true results, and use all the error texts as error text training samples;
  • Input the error text training samples into the adjusted Chinese handwritten text recognition model and train based on the CTC algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model; based on that total error factor, update the network parameters of the adjusted model using the particle swarm optimization algorithm to obtain the target Chinese handwritten text recognition model.
  • the embodiments of the present application further provide a text recognition method, apparatus, computer device, and medium to solve the problem of the low accuracy of current handwritten text recognition.
  • a text recognition method includes:
  • a text recognition device includes:
  • an output value acquisition module, configured to obtain the Chinese text to be recognized, recognize it using a target Chinese handwritten text recognition model, and obtain the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model;
  • where the target Chinese handwritten text recognition model is obtained by the above handwriting model training method;
  • a recognition result acquisition module, configured to select the maximum output value among the output values corresponding to the Chinese text to be recognized and obtain the recognition result of the Chinese text to be recognized according to the maximum output value.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor.
  • the processor executes the computer-readable instructions, the following steps are implemented:
  • One or more non-volatile readable storage media storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the following steps:
  • FIG. 1 is an application environment diagram of a handwriting model training method according to an embodiment of the present application
  • FIG. 2 is a flowchart of a handwriting model training method according to an embodiment of the present application
  • FIG. 3 is a specific flowchart of step S10 in FIG. 2;
  • FIG. 4 is another specific flowchart of step S10 in FIG. 2;
  • FIG. 5 is a specific flowchart of step S30 in FIG. 2;
  • FIG. 6 is a schematic diagram of a handwriting model training device according to an embodiment of the present application.
  • FIG. 7 is a flowchart of a text recognition method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a text recognition device according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a computer device in an embodiment of the present application.
  • FIG. 1 illustrates an application environment of a handwriting model training method provided by an embodiment of the present application.
  • the application environment of the handwriting model training method includes a server and a client, where the server and the client are connected through a network, and the client is a device that can interact with the user, including but not limited to a computer and a smartphone.
  • the server can be implemented with an independent server or a server cluster consisting of multiple servers.
  • the handwriting model training method provided in the embodiment of the present application is applied to a server.
  • FIG. 2 shows a flowchart of a handwriting model training method according to an embodiment of the present application.
  • the handwriting model training method includes the following steps:
  • S10: Obtain standard Chinese text training samples, input them into the bidirectional long short-term memory (BiLSTM) neural network, and train based on the connectionist temporal classification (CTC) algorithm to obtain the total error factor of the BiLSTM network; based on that total error factor, update the network parameters of the BiLSTM network using the particle swarm optimization algorithm to obtain the standard Chinese text recognition model.
  • the standard Chinese text training samples refer to training samples obtained from standard text, i.e. text rendered in mainstream Chinese typefaces such as Kaiti (regular script), Songti (Song typeface), or Lishu (clerical script); Kaiti or Songti is generally selected.
  • Bidirectional Long Short-Term Memory (BiLSTM) is a recurrent neural network for temporal data that trains on sequence data in two directions: sequence forward and sequence reverse.
  • the BiLSTM neural network can correlate not only the preceding data but also the following data; therefore, it can learn sequence-related deep features of the data according to the sequence context.
  • the connectionist temporal classification (CTC) algorithm is a fully end-to-end training algorithm originally used for acoustic model training; it does not require the training samples to be aligned in advance.
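As an illustration of how CTC scores an unaligned label sequence, the following minimal NumPy sketch (not from the patent; the tiny probabilities and vocabulary are invented for the example) computes the CTC forward probability: the label sequence is extended with blanks, and the forward variable alpha sums over all frame-level paths that collapse to the label.

```python
import numpy as np

def ctc_forward(probs, labels, blank=0):
    """CTC forward algorithm: probability that the per-frame output
    distributions `probs` (T x V) emit `labels` after collapsing
    repeats and removing blanks. Minimal illustrative sketch."""
    # Extend the label sequence with blanks: [b, l1, b, l2, ..., b]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), probs.shape[0]

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]   # path may start with a blank
    alpha[0, 1] = probs[0, ext[1]]   # or with the first label
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # The skip transition is allowed unless the current symbol
            # is blank or equals the symbol two positions back.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    # Valid endings: final blank or final label.
    return alpha[T - 1, S - 1] + alpha[T - 1, S - 2]

# Two frames, vocabulary {0: blank, 1: 'a'}; the paths "aa", "a-",
# and "-a" all collapse to the label ['a'].
probs = np.array([[0.4, 0.6],
                  [0.3, 0.7]])
p = ctc_forward(probs, [1])  # 0.6*0.7 + 0.6*0.3 + 0.4*0.7 = 0.88
```

Because every frame-to-label alignment is summed over, no manual alignment of the training samples is needed, which is why the patent calls the training end-to-end.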
  • Particle Swarm Optimization (PSO) is a population-based global optimization algorithm: candidate solutions ("particles") move through the search space, each attracted toward its own best-known position and the best position found by the whole swarm.
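The parameter-update step can be sketched with a generic PSO loop. This is a hedged, minimal NumPy sketch minimizing a toy error function, not the patent's actual update rule; the inertia and acceleration coefficients are common textbook defaults, and all names are illustrative.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle's velocity
    is pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # particle positions
    v = np.zeros((n_particles, dim))            # particle velocities
    pbest = x.copy()                            # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()        # global best
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:
            g = pbest[pbest_val.argmin()].copy()
            g_val = pbest_val.min()
    return g, g_val

# Toy "total error factor": squared distance to a known optimum.
target = np.array([1.0, -2.0])
best, best_val = pso_minimize(
    lambda p: float(np.sum((p - target) ** 2)), dim=2)
```

In the patent's setting, the position vector would be the network parameters and `f` the model's total error factor; the global random search in the early iterations is what lets PSO locate the neighborhood of the optimal solution before converging.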
  • a training sample of standard Chinese text is obtained.
  • the fonts used in the standard Chinese text training samples are the same (multiple fonts are not mixed).
  • in this embodiment, the standard Chinese text training samples used for model training are all in a single typeface, which is used as the running example.
  • the Chinese fonts in the standard text here refer to the mainstream fonts among current Chinese typefaces, such as Songti, the default font in the input methods of computer devices, and the mainstream typeface commonly used in copybook practice; less commonly used Chinese fonts, such as cursive script and YouYuan, are not included in the scope of the Chinese fonts that make up standard text.
  • the standard Chinese text training samples are input into the BiLSTM neural network, which is trained based on the CTC algorithm to obtain the total error factor of the BiLSTM network; that total error factor is then used to update the network parameters of the BiLSTM network, yielding the standard Chinese text recognition model.
  • the standard Chinese text recognition model learns the deep features of the standard Chinese text training samples during training, enabling the model to accurately recognize standard text. During training of the standard Chinese text recognition model, no manual labeling or data alignment of the training samples is required, and end-to-end training can be performed directly. It should be noted that, regardless of whether the standard training samples use Kaiti, Songti, Lishu, or another mainstream Chinese typeface, the standard texts composed of these different typefaces differ little in terms of glyph recognition, so the trained standard Chinese text recognition model can accurately recognize standard text in typefaces such as Kaiti, Songti, or Lishu and obtain relatively accurate recognition results.
  • S20: Obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, and train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model; based on that total error factor, update the network parameters of the standard Chinese text recognition model using the particle swarm optimization algorithm to obtain the adjusted Chinese handwritten text recognition model.
  • the non-standard Chinese text training sample refers to a training sample obtained based on handwritten Chinese text.
  • the handwritten Chinese text may specifically be text obtained by handwriting in imitation of mainstream typefaces such as Kaiti, Songti, or Lishu. Understandably, the difference between the non-standard and standard Chinese text training samples is that the non-standard samples are obtained from handwritten Chinese text; since the text is handwritten, it naturally contains a variety of different glyph forms.
  • the server obtains the non-standard Chinese text training samples, which carry the characteristics of handwritten Chinese text, inputs them into the standard Chinese text recognition model, trains based on the CTC algorithm, and uses the particle swarm optimization algorithm to update the network parameters of the standard Chinese text recognition model, obtaining the adjusted Chinese handwritten text recognition model.
  • the standard Chinese text recognition model has the ability to recognize standard Chinese text, but does not have high recognition accuracy when recognizing handwritten Chinese text.
  • this embodiment therefore trains with non-standard Chinese text training samples, so that the model can adjust its network parameters on the basis of its existing ability to recognize standard text, yielding the adjusted Chinese handwritten text recognition model.
  • the adjusted Chinese handwritten text recognition model learns the deep features of handwritten Chinese text on top of the original standard text recognition, so that it combines the deep features of both standard text and handwritten Chinese text, can effectively recognize both, and obtains high-accuracy recognition results.
  • when the BiLSTM neural network performs text recognition, the judgment is based on the pixel distribution and sequence of the text.
  • even though handwritten Chinese text differs from the corresponding standard text, this difference is much smaller than the difference from non-corresponding standard text.
  • for example, there is a difference in pixel distribution between "hello" in handwritten Chinese text and "hello" in standard text, but this difference is significantly smaller than the difference between the handwritten "hello" and the standard text "goodbye". It can be considered that, even if there is a certain difference between handwritten Chinese text and the corresponding standard text, this difference is much smaller than the difference from non-corresponding standard text.
  • the adjusted Chinese handwritten text recognition model is trained on a BiLSTM neural network; it combines the deep features of standard text and of handwritten Chinese text and can effectively recognize handwritten Chinese text based on these deep features.
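The "handwritten text is closer to its own standard form than to other characters" argument can be made concrete with pixel feature matrices. The following NumPy sketch uses tiny invented 3x3 matrices (not real glyphs) and a Frobenius distance as an illustrative stand-in for the model's notion of pixel-distribution difference:

```python
import numpy as np

# Toy 3x3 "pixel value feature matrices" (illustrative, not real glyphs):
std_hello   = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]], dtype=float)
std_goodbye = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
# A handwritten "hello": like std_hello with one smudged pixel.
hand_hello  = np.array([[1, 0, 1], [0, 1, 1], [1, 0, 1]], dtype=float)

def pixel_distance(a, b):
    """Frobenius distance between two pixel value feature matrices."""
    return float(np.linalg.norm(a - b))

d_same  = pixel_distance(hand_hello, std_hello)    # small difference
d_other = pixel_distance(hand_hello, std_goodbye)  # much larger
```

Here `d_same < d_other`, mirroring the patent's point: the handwritten variant deviates from its own standard glyph far less than from a non-corresponding one, which is what lets a model fine-tuned from standard text still separate the classes.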
  • the order of step S10 and step S20 in this embodiment is not interchangeable; step S10 needs to be executed before step S20.
  • training the BiLSTM neural network with the standard Chinese text training samples first gives the resulting standard Chinese text recognition model a better recognition ability, producing accurate recognition results for standard text.
  • the fine-tuning of step S20 is then performed, so that the adjusted Chinese handwritten text recognition model obtained by training can effectively recognize handwritten Chinese text based on the learned deep features of handwritten text, producing more accurate recognition results for handwritten Chinese text.
  • if step S20 is performed first, or only step S20 is performed, then, because handwritten Chinese text comes in many forms, the features learned by training directly on handwritten text cannot reflect the general characteristics of handwritten Chinese text; the model learns "badly" from the beginning, making accurate recognition of handwritten Chinese text difficult. Although each person's handwriting differs, most handwriting is similar to standard text (handwritten Chinese text imitates standard text). Therefore, training first on standard text better matches the objective situation and is more effective than training directly on handwritten text: starting from a "good" model, corresponding adjustments can be made to obtain an adjusted Chinese handwritten text recognition model with a high recognition rate for handwritten Chinese text.
  • S30: Obtain Chinese text samples to be tested, use the adjusted Chinese handwritten text recognition model to recognize them, obtain the error texts whose recognition results do not match the true results, and use all the error texts as error text training samples.
  • the Chinese text samples to be tested refer to samples, built from both standard text and handwritten Chinese text, that are obtained for testing.
  • the standard text used in this step is the same as the standard text used for training in step S10 (because each character in a given typeface such as Kaiti or Songti is uniquely determined); the handwritten Chinese text used here may differ from the handwritten Chinese text used for training in step S20 (Chinese text handwritten by different people is not identical, and each character of handwritten Chinese text can correspond to multiple glyph forms). To distinguish these samples from the non-standard Chinese text training samples used in step S20, and to avoid overfitting during model training, this step generally uses handwritten Chinese text different from that of S20.
  • the trained adjusted Chinese handwritten text recognition model is used to identify the Chinese text sample to be tested.
  • Standard training text and handwritten Chinese text can be input to the adjusted Chinese handwritten text recognition model in a mixed manner during training.
  • when the adjusted Chinese handwritten text recognition model recognizes the Chinese text samples to be tested, corresponding recognition results are obtained, and all error texts whose recognition results do not match the label value (the true result) are used as the error text training samples.
  • the error text training samples reflect the recognition-accuracy problems that still exist in the adjusted Chinese handwritten text recognition model, so that the model can be further updated and optimized based on them.
  • because the network parameters were first updated with the standard Chinese text training samples and then with the non-standard Chinese text training samples, the resulting adjusted Chinese handwritten text recognition model will over-learn the characteristics of the non-standard Chinese text training samples (which contain handwritten Chinese text).
  • therefore, step S30 uses the Chinese text samples to be tested for recognition by the adjusted Chinese handwritten text recognition model, which can largely eliminate the over-learning of the non-standard training samples used during training: recognizing the test samples exposes the errors caused by over-learning, those errors are concretely reflected in the error texts, and the network parameters of the Chinese handwritten text recognition model can then be further updated and optimized based on the error texts.
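The error-text collection of step S30 amounts to filtering the test set for mismatches between the model's recognition result and the true label. A minimal sketch, with hypothetical sample IDs and predictions invented for illustration:

```python
# Hypothetical (sample id, model prediction, true label) triples for
# the Chinese text samples to be tested; values are illustrative only.
test_results = [
    ("sample_001", "你好", "你好"),  # correct
    ("sample_002", "再现", "再见"),  # mismatch -> error text
    ("sample_003", "平安", "平安"),  # correct
    ("sample_004", "手与", "手写"),  # mismatch -> error text
]

# Keep only the samples whose recognition result does not match the
# true result; these become the error text training samples of S30.
error_texts = [(sid, pred, truth)
               for sid, pred, truth in test_results
               if pred != truth]

error_ids = [sid for sid, _, _ in error_texts]
```

The retained `error_texts` are exactly the samples fed back into the model in step S40, so the subsequent update concentrates on the cases the adjusted model still gets wrong.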
  • S40: Input the error text training samples into the adjusted Chinese handwritten text recognition model and train based on the CTC algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model; based on that total error factor, update the network parameters of the adjusted model using the particle swarm optimization algorithm to obtain the target Chinese handwritten text recognition model.
  • the error text training samples are input into the adjusted Chinese handwritten text recognition model, and training is performed based on the CTC algorithm.
  • the error text training samples reflect that, during the training of the adjusted Chinese handwritten text recognition model, over-learning the characteristics of the non-standard Chinese text training samples leads to inaccurate recognition when the adjusted model recognizes handwritten Chinese text outside those training samples.
  • moreover, training the model first on the standard Chinese text training samples and then on the non-standard Chinese text training samples can excessively weaken the previously learned features of standard text, affecting the recognition "framework" the model initially established for standard text.
  • using the error text training samples addresses both the over-learning and the over-weakening problems: according to the recognition-accuracy problems reflected by the error texts, the adverse effects produced during the original training process can be largely eliminated. Specifically, the total error factor of the adjusted Chinese handwritten text recognition model is obtained, and based on it the particle swarm optimization algorithm is used, with the error text training samples, to update the network parameters of the adjusted model and obtain the target Chinese handwritten text recognition model.
  • the target Chinese handwritten text recognition model refers to the model finally trained and used to recognize Chinese handwritten text.
  • the training uses a BiLSTM neural network, which can exploit the sequence characteristics of Chinese text to learn its deep features and improve the recognition rate of the target Chinese handwritten text recognition model.
  • the training algorithm is the CTC algorithm. Training with this algorithm requires no manual labeling or data alignment of the training samples, which reduces model complexity and enables direct training on unaligned, variable-length sequences.
  • using the particle swarm optimization algorithm can significantly improve the efficiency of model training, effectively update the network parameters, and improve the recognition accuracy of the target Chinese handwritten text recognition model.
  • in this embodiment, the standard Chinese text training samples are used to train the standard Chinese text recognition model, and the non-standard Chinese text training samples are then used to update it into the adjusted Chinese handwritten text recognition model; through this training and updating, the adjusted model learns the deep features of handwritten Chinese text and can recognize it better.
  • the adjusted Chinese handwritten text recognition model is then used to recognize the Chinese text samples to be tested; the error texts whose recognition results do not match the true results are collected, and all the error texts are input as error text training samples into the adjusted model, which is updated based on the CTC algorithm to obtain the target Chinese handwritten text recognition model.
  • the use of error text training samples can largely eliminate the adverse effects of over-learning and over-weakening during the original training process and can further improve the recognition accuracy.
  • the network parameter update of each model uses the particle swarm optimization algorithm. This algorithm performs global random optimization: in the initial stage of training it finds the neighborhood of the optimal solution, and then it converges within that neighborhood to the optimal solution, i.e. the minimum of the error function, effectively updating the network parameters of the BiLSTM neural network.
  • each model is trained using a BiLSTM neural network, which can exploit the sequence characteristics of Chinese text to learn its deep features and recognize different handwritten Chinese texts.
  • the algorithm used to train each model is the CTC algorithm. Training with this algorithm requires no manual labeling or data alignment of the training samples, which reduces model complexity and enables direct training on unaligned, variable-length sequences.
  • in an embodiment, step S10 of obtaining standard Chinese text training samples specifically includes the following steps:
  • S101: Obtain the pixel value feature matrix of each Chinese text in the Chinese text training samples to be processed, normalize each pixel value in the pixel value feature matrix of each Chinese text, and obtain the normalized pixel value feature matrix of each Chinese text, where the normalization formula is y = (x - MinValue) / (MaxValue - MinValue), MaxValue is the maximum pixel value in the pixel value feature matrix, MinValue is the minimum pixel value in the pixel value feature matrix, x is a pixel value before normalization, and y is the corresponding pixel value after normalization.
• the Chinese text training samples to be processed refer to the training samples that are initially acquired and have not yet been processed.
  • a mature, open-source convolutional neural network may be used to extract the features of the Chinese text training samples to be processed, and obtain the pixel value feature matrix of each Chinese text in the Chinese text training samples to be processed.
  • the pixel value feature matrix of each Chinese text represents the features of the corresponding text.
  • the pixel values represent the features of the text. Since the text is represented two-dimensionally by the image, the pixel values can be represented by a matrix, that is, the pixel value feature matrix.
  • the computer device can recognize the form of the pixel value characteristic matrix and read the value in the pixel value characteristic matrix.
• after the server obtains the pixel value feature matrix of each Chinese text, it uses the normalization formula to normalize each pixel value in the feature matrix and obtain the normalized pixel value feature matrix of each Chinese text.
• the normalization processing can compress the pixel value feature matrices of all Chinese texts into the same range, which speeds up calculations related to the pixel value feature matrix and helps improve the training efficiency of the standard Chinese text recognition model.
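As an illustrative sketch (not part of the claimed embodiment), the min-max normalization described above may be written as follows; the function name and the use of NumPy are assumptions for illustration:

```python
import numpy as np

def normalize_pixel_matrix(matrix):
    """Apply y = (x - MinValue) / (MaxValue - MinValue) to every pixel,
    compressing the pixel value feature matrix into the range [0, 1]."""
    m = np.asarray(matrix, dtype=np.float64)
    min_value = m.min()
    max_value = m.max()
    if max_value == min_value:  # guard against a constant image
        return np.zeros_like(m)
    return (m - min_value) / (max_value - min_value)
```

Compressing every text's feature matrix into the same range, as the text notes, keeps later matrix computations on a common scale.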
  • the pixel values in the normalized pixel value feature matrix of each Chinese text are divided into two types of pixel values.
  • the two types of pixel values refer to that the pixel values include only the pixel value A or the pixel value B.
  • a pixel value greater than or equal to 0.5 in the normalized pixel feature matrix can be taken as 1 and a pixel value less than 0.5 can be taken as 0, and a corresponding binary pixel value feature matrix for each Chinese text can be established.
• each element of the binarized pixel value feature matrix of each Chinese text is either 0 or 1.
• the Chinese texts corresponding to the binarized pixel value feature matrices are used as the standard Chinese text training samples, and the standard Chinese text training samples are divided into preset batches. For example, an image containing text has a portion of text pixels and a portion of blank pixels, and the pixel values on the text are generally darker. The "1" entries in the binarized pixel value feature matrix represent the text pixels, and the "0" entries represent the blank pixels in the image. Understandably, establishing the binarized pixel value feature matrix further simplifies the feature representation of the text: a matrix of only 0s and 1s suffices to represent and distinguish each text, which improves the speed at which the computer processes the feature matrix and further improves the training efficiency of the standard Chinese text recognition model.
• Steps S101-S102 normalize the Chinese text training samples to be processed and divide the values into two types to obtain the binarized pixel value feature matrix of each Chinese text, and the texts corresponding to the binarized pixel value feature matrices are used as the standard Chinese text training samples, which can significantly shorten the time for training the standard Chinese text recognition model.
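The two-class division of step S102 may be sketched as follows, using the 0.5 threshold described above (the function name is hypothetical):

```python
import numpy as np

def binarize_pixel_matrix(normalized):
    """Map a normalized pixel value feature matrix to a 0/1 matrix:
    values >= 0.5 become 1 (text pixels), values < 0.5 become 0 (blank)."""
    m = np.asarray(normalized, dtype=np.float64)
    return (m >= 0.5).astype(np.int64)
```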
• in an embodiment, in step S10, the standard Chinese text training samples are input into the bidirectional long short-term memory neural network, and training is performed based on the continuous-time classification algorithm to obtain the total error factor of the bidirectional long short-term memory neural network.
• based on the total error factor of the bidirectional long short-term memory neural network, the particle swarm algorithm is used to update the network parameters of the network to obtain the standard Chinese text recognition model, which specifically includes the following steps:
• the standard Chinese text training samples are input into the bidirectional long short-term memory neural network in the sequence forward direction, and training is performed based on the continuous-time classification algorithm to obtain the forward propagation output and the backward propagation output of the samples in the sequence forward direction in the network.
• the standard Chinese text training samples are also input into the bidirectional long short-term memory neural network in the sequence reverse direction, and training is performed based on the continuous-time classification algorithm to obtain the forward propagation output and the backward propagation output of the samples in the sequence reverse direction.
• the forward propagation output in the bidirectional long short-term memory neural network is expressed as α(t, u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α(t−1, i), where t represents the number of sequence steps, u represents the label index of the output corresponding to step t, and y^t_{l'_u} represents the probability that the output at step t is l'_u. The backward propagation output is expressed as β(t, u) = Σ_{i=u}^{g(u)} β(t+1, i) · y^{t+1}_{l'_i}, where y^{t+1}_{l'_i} represents the probability that the output at step t+1 is l'_i.
• the standard Chinese text training samples are input into the bidirectional long short-term memory neural network in the sequence forward and sequence reverse directions respectively, and are trained based on the continuous-time classification (CTC) algorithm.
• the CTC algorithm is essentially an algorithm for calculating a loss function. It measures the error between the result of passing the input sequence data through the neural network and the real result (the objective fact, also called the label value). Therefore, the forward propagation output and the backward propagation output of the standard Chinese text training samples in the sequence forward direction and in the sequence reverse direction in the bidirectional long short-term memory neural network can be obtained respectively.
• according to the forward propagation output and the backward propagation output in the sequence forward direction, and the forward propagation output and the backward propagation output in the sequence reverse direction, the corresponding error functions are constructed.
  • the mapping transformation may be a process of removing overlapping words and removing spaces as in the above example.
• p(l|x) represents the probability that, given an input sequence x (such as a sample in the standard Chinese text training samples), the output is the sequence l.
• the probability that the output is the sequence l can be expressed as the sum of the probabilities of all output paths π that are mapped to l, expressed by the formula p(l|x) = Σ_{π∈F⁻¹(l)} p(π|x). Understandably, as the length of the sequence l increases, the number of corresponding paths increases exponentially, so an iterative approach can be adopted: from the perspective of forward propagation and backward propagation at steps t−1, t and t+1, the path probability corresponding to the sequence l is calculated to improve the efficiency of the calculation. Specifically, before performing the calculation, some preprocessing is needed on the sequence l: a space (blank) is added at the beginning and end of the sequence l, and a space is added between every two labels, giving a padded sequence l' of length U' = 2U + 1.
• the forward variable α(t, u) represents the sum of the probabilities of the set of paths whose output at step t is l'_u, where u/2 represents the index of the corresponding label in l, so it needs to be rounded down.
• p(l|x) can be represented by the forward variable, that is: p(l|x) = α(T, U') + α(T, U'−1), where α(T, U') can be understood as the sum of the probabilities of all paths of length T that are mapped by F to the sequence l and whose output label at time T is l'_{U'}, and α(T, U'−1) covers the case where the label at time T is l'_{U'−1}. That is, the last symbol of the path either is or is not a space.
• f(u) here actually lists the possible positions at the previous moment; the specific condition formula is: f(u) = u−1 if l'_u is a space or l'_{u−2} = l'_u, and f(u) = u−2 otherwise. Similar to the process of forward propagation, a backward variable β(t, u) can be defined, which represents the sum of the probabilities of the paths π' that, starting from time t+1, complete the forward variable α(t, u) so that the result is finally mapped by F to the sequence l. There are corresponding initialization conditions for backward propagation: β(T, U') = 1, β(T, U'−1) = 1, and β(T, u) = 0 for u < U'−1. Therefore, the backward variable can also be obtained in a recursive manner, expressed by the formula β(t, u) = Σ_{i=u}^{g(u)} β(t+1, i) · y^{t+1}_{l'_i}, where g(u) represents the possible path positions at time t+1, expressed as g(u) = u+1 if l'_u is a space or l'_{u+2} = l'_u, and g(u) = u+2 otherwise. The process of forward propagation and the process of backward propagation can then be described according to the forward and backward variables, and the corresponding forward propagation output and backward propagation output obtained from these recursions.
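The forward-variable recursion described above may be sketched as follows. This is a generic implementation of the standard CTC forward recursion, not the patent's own code; the blank index, function name, and toy dimensions are assumptions:

```python
import numpy as np

BLANK = 0  # assumed index of the space (blank) symbol in the softmax output

def ctc_forward(y, labels):
    """Compute p(l|x) = alpha(T, U') + alpha(T, U'-1) via the recursion
    alpha(t, u) = y[t][l'[u]] * sum_{i=f(u)}^{u} alpha(t-1, i).

    y: (T, K) matrix of per-step softmax outputs; labels: target indices."""
    T = y.shape[0]
    lp = [BLANK]
    for s in labels:            # insert blanks: |l'| = 2|l| + 1
        lp += [s, BLANK]
    U = len(lp)
    alpha = np.zeros((T, U))
    alpha[0][0] = y[0][lp[0]]   # initialization: start with blank or l_1
    if U > 1:
        alpha[0][1] = y[0][lp[1]]
    for t in range(1, T):
        for u in range(U):
            s = alpha[t - 1][u]
            if u >= 1:
                s += alpha[t - 1][u - 1]
            # f(u) = u-2 only when l'_u is not blank and differs from l'_{u-2}
            if u >= 2 and lp[u] != BLANK and lp[u] != lp[u - 2]:
                s += alpha[t - 1][u - 2]
            alpha[t][u] = y[t][lp[u]] * s
    return alpha[T - 1][U - 1] + (alpha[T - 1][U - 2] if U > 1 else 0.0)
```

For a tiny example (T = 3, one label), the result agrees with summing the probabilities of every path that collapses to the label under the F mapping.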
• according to the forward propagation output and the backward propagation output in the sequence forward direction in the bidirectional long short-term memory neural network, the forward error factor of the network is obtained.
• according to the forward propagation output and the backward propagation output in the sequence reverse direction in the bidirectional long short-term memory neural network, the reverse error factor of the network is obtained. The forward error factor and the reverse error factor are then added together to obtain the total error factor of the bidirectional long short-term memory neural network, and an error function is constructed based on the total error factor.
• the error function can be expressed as L(S) = −Σ_{(x,z)∈S} ln p(z|x), where S represents the standard Chinese text training sample set.
• the term p(z|x) in this formula can be calculated from the forward propagation output and the backward propagation output, and −ln p(z|x) is an error factor that can measure the error.
• to determine the error factor, the forward error factor of the bidirectional long short-term memory neural network (hereinafter the forward error factor) and the reverse error factor of the network (hereinafter the reverse error factor) are first obtained; the forward error factor is added to the reverse error factor to obtain the total error factor, and the negative logarithm of the probability is then used to construct an error function based on the total error factor. The error function is represented using the forward propagation output and the backward propagation output of the sequence forward direction and of the sequence reverse direction as described above, which will not be repeated here. After the error function is obtained from the total error factor, the network parameters can be updated according to the error function to obtain the standard Chinese text recognition model.
• the particle swarm algorithm is used to update the network parameters. Specifically, the partial derivative (that is, the gradient) of the loss function with respect to the network output before the softmax layer is obtained, the gradient is multiplied by the learning rate, and the network parameters are updated by subtracting the product of the gradient and the learning rate from the original network parameters.
• the particle swarm algorithm includes the particle velocity update formula (Formula 1) and the particle position update formula (Formula 2), as follows:
• V_{i+1} = w·V_i + c1·rand()·(pbest_i − X_i) + c2·rand()·(gbest − X_i) ------- (Formula 1); X_{i+1} = X_i + V_{i+1} ------- (Formula 2)
• c1·rand() controls the step size of the particle toward its own historical optimal position (pbest_i);
• c2·rand() controls the step size of the particle toward the optimal position found by all particles (gbest);
• w is the inertia weight: when the value of w is large, the particle swarm exhibits a strong global optimization ability; when the value of w is small, the particle swarm exhibits a strong local optimization ability, which is well suited to network training.
• in the initial stage of training, w is generally set large to ensure a sufficiently strong global optimization ability; in the convergence phase of training, w is generally set small to ensure that the swarm can converge to the optimal solution.
  • the first term on the right side of the formula represents the original velocity term; the second term on the right side of the formula represents the "cognitive" part, which is mainly based on the historical optimal position of the particle and considering the effect on the position of the new particle. The process of self-thinking; the third term on the right side of the formula is the "social" part, which mainly considers the impact on the position of new particles based on the optimal position of all particles.
  • the whole formula (1) reflects a process of information sharing. If there is no first part, the update of the particle velocity depends only on the optimal position experienced by the particle and all particles, and the particle has strong convergence.
• the first term on the right side of the formula ensures that the particle swarm has a certain global optimization ability and the ability to escape local extrema; conversely, if this term is small, the particle swarm converges quickly.
  • the second term on the right side of the formula and the third term on the right side guarantee the local convergence of the particle swarm.
  • the particle swarm optimization algorithm is a global random optimization algorithm. Using this calculation formula, the convergence field of the optimal solution can be found in the initial stage of training, and then the convergence is performed in the convergence field of the optimal solution to obtain the optimal solution (i.e. Find the minimum of the error function).
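The two update formulas quoted above (Formula 1 for velocity, Formula 2 for position) may be sketched as follows on a stand-in objective function; all parameter values, names, and the fixed seed are illustrative assumptions rather than the embodiment's settings:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Global random optimization of f over R^dim by particle swarm."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    X = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # each particle's best position
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position of all particles
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Formula 1: V_{i+1} = w*V_i + c1*rand()*(pbest_i - X_i)
                #                             + c2*rand()*(gbest - X_i)
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]            # Formula 2: X_{i+1} = X_i + V_{i+1}
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

In the training described above, f would be the CTC error function over the network parameters; here a simple quadratic bowl stands in for it.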
  • the process of using the particle swarm algorithm to update the network parameters of the two-way long-term and short-term memory neural network specifically includes the following steps:
  • the particle swarm algorithm can be used to quickly and accurately obtain the gradient, and to effectively update the network parameters.
• Steps S111-S113 construct an error function according to the forward propagation output and the backward propagation output of the standard Chinese text training samples in the sequence forward and sequence reverse directions in the bidirectional long short-term memory neural network, and use the particle swarm algorithm according to the error function to perform error back propagation and update the network parameters, achieving the purpose of obtaining the standard Chinese text recognition model.
• the model learns the deep features of the standard Chinese text training samples and can accurately recognize standard Chinese text.
• in an embodiment, in step S30, the Chinese text samples to be tested are recognized by the adjusted Chinese handwritten text recognition model, error texts whose recognition results do not match the real results are obtained, and all the error texts are used as the error text training samples, which specifically includes the following steps:
  • S31 Input the Chinese text sample to be tested into the adjusted Chinese handwritten text recognition model, and obtain the output value of each text in the Chinese text sample to be tested in the adjusted Chinese handwritten text recognition model.
• the adjusted Chinese handwritten text recognition model is used to recognize the Chinese text samples to be tested, and the Chinese text samples to be tested include several Chinese texts.
• each text includes characters, and the output value of each text mentioned in this embodiment specifically refers to the output values corresponding to each character in each text.
• the Chinese character library contains more than 3,000 commonly used Chinese characters (including the space and various Chinese punctuation marks).
• for each character in the Chinese character library, a probability value of its similarity to the characters in the input Chinese text sample to be tested is produced, which can be achieved through the softmax function.
• these probability values are the output values of each text of the Chinese text samples to be tested in the adjusted Chinese handwritten text recognition model. There are many output values: each output value corresponds to the probability of similarity between the character at the corresponding output position and a character in the Chinese character library. The recognition result of each text can be determined according to these probability values.
  • S32 Select the maximum output value among the output values corresponding to each text, and obtain the recognition result of each text according to the maximum output value.
• the maximum output value among all output values corresponding to each text is selected, and the recognition result of the text can be obtained according to the maximum output value.
• the output value directly reflects the similarity between the characters in the input Chinese text sample and each character in the Chinese character library; the maximum output value indicates that a character in the text sample to be tested is closest to a certain character in the Chinese character library, so the actual output can be determined according to the character corresponding to the maximum output value. At this stage the actual output may still contain reduplicated characters and spaces rather than the final recognition result.
• therefore, the actual output needs to be further processed: the reduplicated characters are removed, leaving only one of each, and the spaces are removed, which yields the recognition result. For example, the recognition result in this embodiment is "hello".
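The de-duplication and space-removal post-processing described above may be sketched as follows; the character set, function name, and the convention that the space symbol sits at index 0 are hypothetical:

```python
def ctc_greedy_decode(outputs, charset, blank=' '):
    """Take the character with the maximum output value at every step,
    merge reduplicated characters, then remove the spaces (blanks).

    outputs: per-step lists of output values, one value per charset entry."""
    best = [charset[max(range(len(row)), key=row.__getitem__)]
            for row in outputs]
    decoded, prev = [], None
    for ch in best:
        if ch != prev and ch != blank:  # keep one copy, drop blanks
            decoded.append(ch)
        prev = ch
    return ''.join(decoded)
```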
  • the correctness of the actual output word is determined by the maximum output value, and the de-superposition and space removal processing are performed to effectively obtain the recognition result of each text.
  • the obtained recognition result is compared with an actual result (objective fact), and an error text in which the recognition result does not match the actual result is used as an error text training sample.
• the recognition result is only the result recognized for the Chinese text samples to be tested by the adjusted Chinese handwritten text recognition model. It may differ from the real result, reflecting that the model still has shortcomings in recognition accuracy, and these shortcomings can be optimized through the error text training samples to achieve more accurate recognition results.
• Steps S31-S33 obtain the output value of each text in the Chinese text samples to be tested in the adjusted Chinese handwritten text recognition model, and select from the output values the maximum output value, which reflects the similarity between characters; the recognition result is then obtained from the maximum output value, and the error text training samples are obtained according to the recognition result, which provides an important technical premise for subsequently using the error text training samples to further optimize the recognition accuracy.
• in an embodiment, before step S10, that is, before the step of obtaining the standard Chinese text training samples, the handwriting model training method further includes the following step: initializing the bidirectional long short-term memory neural network.
• initializing the bidirectional long short-term memory neural network means initializing the network parameters of the network and assigning initial values to them. If the initialized weights lie in a relatively flat region of the error surface, the convergence of the model training may be abnormally slow.
  • the network parameters can be initialized to be uniformly distributed in a relatively small interval with a zero mean, such as in an interval such as [-0.30, + 0.30].
• Reasonably initializing the bidirectional long short-term memory neural network makes the network more flexible in the initial stage and allows it to be adjusted effectively during training, so that the minimum value of the error function can be found quickly and effectively, which is beneficial to the update and adjustment of the bidirectional long short-term memory neural network and gives the model trained on this network an accurate recognition effect when performing Chinese handwriting recognition.
  • the network parameters of the bidirectional long-term and short-term memory neural network are initialized and uniformly distributed in a relatively small interval with a zero mean, such as [-0.30, +0.30], using This initialization method can quickly and efficiently find the minimum value of the error function, which is conducive to the update and adjustment of the bidirectional long-term and short-term memory neural network.
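The zero-mean uniform initialization described above may be sketched as follows; the function name is hypothetical, and the default interval follows the [-0.30, +0.30] example in the text:

```python
import random

def init_weights(shape, low=-0.30, high=0.30, seed=None):
    """Return a shape[0] x shape[1] weight matrix whose entries are drawn
    uniformly from a small zero-mean interval such as [-0.30, +0.30]."""
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(shape[1])]
            for _ in range(shape[0])]
```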
• the forward propagation output and the backward propagation output are obtained in the sequence forward and sequence reverse directions respectively in the bidirectional long short-term memory neural network; the forward error factor and the reverse error factor are obtained from these outputs, the total error factor is obtained from the forward error factor and the reverse error factor, and an error function is constructed; the network parameters are then updated by back propagation based on the error function to obtain the standard Chinese text recognition model.
• non-standard Chinese text is then used to update the standard Chinese text recognition model, so that the adjusted Chinese handwritten text recognition model obtained after the update can, on the premise of already recognizing standard Chinese handwritten text, learn the deep features of non-standard Chinese text through training and updating, enabling the adjusted Chinese handwritten text recognition model to better recognize non-standard Chinese handwritten text.
• the maximum output value, which reflects the degree of similarity between characters, is selected from the output values, and the recognition result is obtained from the maximum output value.
• error texts are obtained from the recognition results, and all error texts are input as the error text training samples into the adjusted Chinese handwritten text recognition model; training and updating are performed based on the continuous-time classification algorithm to obtain the target Chinese handwritten text recognition model.
  • error text training samples can largely eliminate the adverse effects caused by over-learning and over-weakening during the original training process, and can further optimize the recognition accuracy.
• each model is trained using a bidirectional long short-term memory neural network.
• this neural network can combine the sequence characteristics of the characters, starting from the perspectives of the sequence forward and sequence reverse directions, to learn the deep features of the characters and realize the function of recognizing different Chinese handwriting.
• each model uses the particle swarm algorithm when updating the network parameters.
• this algorithm can perform global random optimization: in the initial stage of training it finds the convergence region of the optimal solution, then converges within that region to obtain the optimal solution, finds the minimum value of the error function, and updates the network parameters.
  • the particle swarm algorithm can significantly improve the efficiency of model training, and effectively update network parameters, and improve the recognition accuracy of the obtained model.
  • FIG. 6 shows a principle block diagram of a handwriting model training device corresponding to the handwriting model training method in the embodiment.
  • the handwriting model training device includes a standard Chinese text recognition model acquisition module 10, an adjusted Chinese handwriting text recognition model acquisition module 20, an error text training sample acquisition module 30, and a target Chinese handwriting text recognition model acquisition module 40.
• the implementation functions of the standard Chinese text recognition model acquisition module 10, the adjusted Chinese handwritten text recognition model acquisition module 20, the error text training sample acquisition module 30, and the target Chinese handwritten text recognition model acquisition module 40 correspond one by one to the steps of the handwriting model training method in the embodiment. To avoid redundant description, this embodiment does not detail them one by one.
• the standard Chinese text recognition model acquisition module 10 is used to obtain the standard Chinese text training samples, input them into the bidirectional long short-term memory neural network, and train based on the continuous-time classification algorithm to obtain the total error factor of the bidirectional long short-term memory neural network;
• based on the total error factor of the bidirectional long short-term memory neural network, the particle swarm algorithm is used to update the network parameters of the bidirectional long short-term memory neural network to obtain the standard Chinese text recognition model.
• the adjusted Chinese handwritten text recognition model acquisition module 20 is used to obtain the non-standard Chinese text training samples, input them into the standard Chinese text recognition model, and train based on the continuous-time classification algorithm to obtain the total error factor of the standard Chinese text recognition model; based on this total error factor, the particle swarm algorithm is used to update the network parameters of the standard Chinese text recognition model to obtain the adjusted Chinese handwritten text recognition model.
• the error text training sample acquisition module 30 is used to obtain the Chinese text samples to be tested, recognize them with the adjusted Chinese handwritten text recognition model, obtain the error texts whose recognition results do not match the real results, and use all the error texts as the error text training samples.
• the target Chinese handwritten text recognition model acquisition module 40 is used to input the error text training samples into the adjusted Chinese handwritten text recognition model and train based on the continuous-time classification algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model;
• based on this total error factor, the particle swarm algorithm is used to update the network parameters of the adjusted Chinese handwritten text recognition model to obtain the target Chinese handwritten text recognition model.
  • the normalized Chinese text recognition model acquisition module 10 includes a normalized pixel value feature matrix acquisition unit 101, a normalized Chinese text training sample acquisition unit 102, a propagation output acquisition unit 111, an error function construction unit 112, and a normalized Chinese text recognition model acquisition. Unit 113.
• the normalized pixel value feature matrix obtaining unit 101 is configured to obtain the pixel value feature matrix of each Chinese text in the Chinese text training samples to be processed, and normalize each pixel value in the pixel value feature matrix of each Chinese text to obtain the normalized pixel value feature matrix of each Chinese text, where the normalization formula is y = (x − MinValue) / (MaxValue − MinValue), in which MaxValue is the maximum pixel value in the pixel value feature matrix, MinValue is the minimum pixel value in the pixel value feature matrix, x is the pixel value before normalization, and y is the pixel value after normalization.
  • a normalized Chinese text training sample acquisition unit 102 is configured to divide the pixel values in the normalized pixel value feature matrix of each Chinese text into two types of pixel values, and establish a binarized pixel of each Chinese text based on the two types of pixel values Value feature matrix, using the Chinese text combination corresponding to the binarized pixel value feature matrix of each Chinese text as a standard Chinese text training sample.
• the propagation output obtaining unit 111 is configured to input the standard Chinese text training samples into the bidirectional long short-term memory neural network in the sequence forward direction and perform training based on the continuous-time classification algorithm to obtain the forward propagation output and the backward propagation output of the samples in the sequence forward direction, and to input the standard Chinese text training samples into the network in the sequence reverse direction and perform training based on the continuous-time classification algorithm to obtain the forward propagation output and the backward propagation output of the samples in the sequence reverse direction.
• the forward propagation output is expressed as α(t, u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α(t−1, i), where t represents the number of sequence steps, u represents the label index of the output corresponding to step t, and y^t_{l'_u} represents the probability that the output at step t is l'_u.
• the backward propagation output is expressed as β(t, u) = Σ_{i=u}^{g(u)} β(t+1, i) · y^{t+1}_{l'_i}, where y^{t+1}_{l'_i} represents the probability that the output at step t+1 is l'_i.
• the error function constructing unit 112 is configured to obtain the forward error factor of the bidirectional long short-term memory neural network according to the forward propagation output and the backward propagation output of the standard Chinese text training samples in the sequence forward direction, and to obtain the reverse error factor of the network according to the forward propagation output and the backward propagation output of the samples in the sequence reverse direction;
• the forward error factor and the reverse error factor are added to obtain the total error factor of the bidirectional long short-term memory neural network, and an error function is constructed based on the total error factor.
  • the standard Chinese text recognition model acquisition unit 113 is configured to update the network parameters of the two-way long-term and short-term memory neural network by using a particle swarm algorithm according to an error function to obtain a standard Chinese text recognition model.
  • the error text training sample acquisition module 30 includes a model output value acquisition unit 31, a model recognition result acquisition unit 32, and an error text training sample acquisition unit 33.
  • The model output value acquiring unit 31 is configured to input a Chinese text sample to be tested into the adjusted Chinese handwritten text recognition model and obtain, for each text in the sample, its output values in the adjusted model.
  • The model recognition result obtaining unit 32 is configured to select the maximum output value among the output values corresponding to each text and obtain the recognition result of each text according to that maximum output value.
  • The error text training sample acquisition unit 33 is configured to collect the error texts whose recognition results do not match the ground truth and use all the error texts as the error-text training samples.
  • The handwriting model training device further includes an initialization module 50 for initializing the bidirectional long short-term memory neural network.
  • FIG. 7 shows a flowchart of the text recognition method in this embodiment.
  • The text recognition method can be applied to computer equipment deployed by institutions such as banks, investment firms, and insurers to recognize handwritten Chinese text for artificial intelligence purposes. As shown in FIG. 7, the text recognition method includes the following steps:
  • S50: Obtain the Chinese text to be recognized, recognize it with the target Chinese handwritten text recognition model, and obtain the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model.
  • The target Chinese handwritten text recognition model is trained using the handwriting model training method described above.
  • The Chinese text to be recognized refers to the Chinese text on which recognition is to be performed.
  • The Chinese text to be recognized is obtained and input into the target Chinese handwritten text recognition model for recognition, and for each output position of the model a probability value is obtained measuring the similarity between the corresponding character and each character in the Chinese character library.
  • These probability values are the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model.
  • The recognition result of the Chinese text to be recognized can be determined from these output values.
  • S60: Select the maximum output value among the output values corresponding to the Chinese text to be recognized, and obtain the recognition result of the Chinese text to be recognized according to that maximum output value.
  • The maximum among all the output values corresponding to the Chinese text to be recognized is selected, and the corresponding actual output is determined from it.
  • For example, the actual output may be "你 _ 们 _ 们 _ 好 _".
  • The actual output is then processed further: duplicated adjacent characters are collapsed to a single character, and the blanks are removed, yielding the recognition result of the Chinese text to be recognized.
  • The maximum output value determines the correctness of the characters in the actual output, and the de-duplication and blank-removal processing then effectively yields the recognition result of each text, improving recognition accuracy.
  • The target Chinese handwritten text recognition model recognizes the Chinese text to be recognized, and the recognition result is obtained from the maximum output value together with the de-duplication and blank-removal processing.
  • the Chinese text to be recognized is input into the target Chinese handwritten text recognition model for recognition, and the recognition result is obtained in combination with a preset Chinese semantic thesaurus.
  • the target Chinese handwritten text recognition model is used to recognize Chinese handwritten text, accurate recognition results can be obtained.
  • FIG. 8 shows a principle block diagram of a text recognition device that corresponds one-to-one to the text recognition method in the embodiment.
  • the text recognition device includes an output value acquisition module 60 and a recognition result acquisition module 70.
  • The functions implemented by the output value acquisition module 60 and the recognition result acquisition module 70 correspond one-to-one to the steps of the text recognition method in the embodiment; to avoid redundancy, they are not detailed here.
  • The text recognition device includes an output value acquisition module 60 for obtaining the Chinese text to be recognized, recognizing it with the target Chinese handwritten text recognition model, and obtaining the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model; the target Chinese handwritten text recognition model is obtained using the handwriting model training method.
  • the recognition result acquisition module 70 is configured to select a maximum output value among output values corresponding to the Chinese text to be recognized, and obtain a recognition result of the Chinese text to be recognized according to the maximum output value.
  • This embodiment provides one or more non-volatile readable storage media storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors implement the handwriting model training method in the embodiment; to avoid repetition, details are not repeated here.
  • Alternatively, when executed by the one or more processors, the instructions implement the functions of each module/unit of the handwriting model training device in the embodiment; to avoid repetition, details are not repeated here.
  • Alternatively, when executed by the one or more processors, the instructions implement the functions of each step in the text recognition method in the embodiment; to avoid repetition, details are not repeated here.
  • Alternatively, when executed by the one or more processors, the instructions implement the functions of each module/unit in the text recognition device in the embodiment; to avoid repetition, details are not repeated here.
  • FIG. 9 is a schematic diagram of a computer device according to an embodiment of the present application.
  • The computer device 80 of this embodiment includes a processor 81, a memory 82, and computer-readable instructions 83 stored in the memory 82 and executable on the processor 81.
  • When the computer-readable instructions 83 are executed by the processor 81, the handwriting model training method in the embodiment is implemented; to avoid repetition, details are not described here.
  • Alternatively, when executed by the processor 81, the instructions 83 implement the functions of each module/unit in the handwriting model training device in the embodiment; to avoid repetition, details are not described here.
  • Alternatively, when executed by the processor 81, the instructions 83 implement the functions of each step in the text recognition method in the embodiment; to avoid repetition, details are not described here.
  • Alternatively, when executed by the processor 81, the instructions 83 implement the functions of each module/unit in the text recognition device in the embodiment; to avoid repetition, details are not described here.
  • The computer device 80 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server.
  • The computer device may include, but is not limited to, the processor 81 and the memory 82.
  • FIG. 9 is only an example of the computer device 80 and does not limit it; the device may include more or fewer components than shown, combine certain components, or have different components.
  • For example, the computer device may also include input/output devices, network access devices, and buses.
  • the so-called processor 81 may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • The memory 82 may be an internal storage unit of the computer device 80, such as a hard disk or memory of the computer device 80.
  • The memory 82 may also be an external storage device of the computer device 80, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device 80.
  • The memory 82 may also include both an internal storage unit of the computer device 80 and an external storage device.
  • The memory 82 is used to store the computer-readable instructions 83 and other programs and data required by the computer device.
  • The memory 82 may also be used to temporarily store data that has been output or will be output.


Abstract

A handwriting model training method, a text recognition method, and corresponding devices, equipment, and media. The handwriting model training method includes: obtaining standard Chinese text training samples, inputting them into a bidirectional long short-term memory (BiLSTM) neural network, training based on the connectionist temporal classification (CTC) algorithm to obtain a total error factor, and updating the network parameters with a particle swarm optimization algorithm according to the total error factor to obtain a standard Chinese text recognition model; obtaining non-standard Chinese text training samples and training with them to obtain an adjusted Chinese handwritten text recognition model; obtaining Chinese text samples to be tested and using them to derive error-text training samples; and updating the network parameters of the Chinese handwritten text recognition model with the error-text training samples to obtain a target Chinese handwritten text recognition model. With this handwriting model training method, a target Chinese handwritten text recognition model with a high recognition rate for handwritten text can be obtained.

Description

Handwriting model training method, text recognition method, device, equipment, and medium
This application is based on, and claims priority from, Chinese patent application No. 201810564059.1, filed on June 4, 2018 and entitled "Handwriting model training method, text recognition method, device, equipment, and medium".
Technical Field
This application relates to the field of Chinese text recognition, and in particular to a handwriting model training method, a text recognition method, and corresponding devices, equipment, and media.
Background
When traditional text recognition methods are used to recognize relatively scrawled, non-standard text (handwritten Chinese text), their accuracy is low and the recognition effect is unsatisfactory. Traditional text recognition methods can largely only recognize standard text; when recognizing the wide variety of handwritten text found in real life, their accuracy is low.
Summary
Embodiments of this application provide a handwriting model training method, device, equipment, and medium to solve the problem of low accuracy in current handwritten Chinese text recognition.
A handwriting model training method includes:
obtaining standard Chinese text training samples, inputting the standard Chinese text training samples into a bidirectional long short-term memory neural network, training based on the connectionist temporal classification algorithm to obtain the total error factor of the bidirectional long short-term memory neural network, and updating the network parameters of the bidirectional long short-term memory neural network with the particle swarm optimization algorithm according to that total error factor, obtaining a standard Chinese text recognition model;
obtaining non-standard Chinese text training samples, inputting the non-standard Chinese text training samples into the standard Chinese text recognition model, training based on the connectionist temporal classification algorithm to obtain the total error factor of the standard Chinese text recognition model, and updating the network parameters of the standard Chinese text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining an adjusted Chinese handwritten text recognition model;
obtaining Chinese text samples to be tested, recognizing the Chinese text samples to be tested with the adjusted Chinese handwritten text recognition model, collecting the error texts whose recognition results do not match the real results, and taking all the error texts as error-text training samples;
inputting the error-text training samples into the adjusted Chinese handwritten text recognition model, training based on the connectionist temporal classification algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model, and updating the network parameters of the adjusted Chinese handwritten text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining a target Chinese handwritten text recognition model.
A handwriting model training device includes:
a standard Chinese text recognition model acquisition module, configured to obtain standard Chinese text training samples, input them into a bidirectional long short-term memory neural network, train based on the connectionist temporal classification algorithm to obtain the total error factor of the bidirectional long short-term memory neural network, and update the network parameters of the bidirectional long short-term memory neural network with the particle swarm optimization algorithm according to that total error factor, obtaining a standard Chinese text recognition model;
an adjusted Chinese handwritten text recognition model acquisition module, configured to obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, train based on the connectionist temporal classification algorithm to obtain the total error factor of the standard Chinese text recognition model, and update the network parameters of the standard Chinese text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining an adjusted Chinese handwritten text recognition model;
an error-text training sample acquisition module, configured to obtain Chinese text samples to be tested, recognize them with the adjusted Chinese handwritten text recognition model, collect the error texts whose recognition results do not match the real results, and take all the error texts as error-text training samples;
a target Chinese handwritten text recognition model acquisition module, configured to input the error-text training samples into the adjusted Chinese handwritten text recognition model, train based on the connectionist temporal classification algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model, and update the network parameters of the adjusted Chinese handwritten text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining a target Chinese handwritten text recognition model.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; the processor implements the following steps when executing the computer-readable instructions:
obtaining standard Chinese text training samples, inputting them into a bidirectional long short-term memory neural network, training based on the connectionist temporal classification algorithm to obtain the total error factor of the bidirectional long short-term memory neural network, and updating the network parameters of the bidirectional long short-term memory neural network with the particle swarm optimization algorithm according to that total error factor, obtaining a standard Chinese text recognition model;
obtaining non-standard Chinese text training samples, inputting them into the standard Chinese text recognition model, training based on the connectionist temporal classification algorithm to obtain the total error factor of the standard Chinese text recognition model, and updating the network parameters of the standard Chinese text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining an adjusted Chinese handwritten text recognition model;
obtaining Chinese text samples to be tested, recognizing them with the adjusted Chinese handwritten text recognition model, collecting the error texts whose recognition results do not match the real results, and taking all the error texts as error-text training samples;
inputting the error-text training samples into the adjusted Chinese handwritten text recognition model, training based on the connectionist temporal classification algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model, and updating the network parameters of the adjusted Chinese handwritten text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining a target Chinese handwritten text recognition model.
One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
obtaining standard Chinese text training samples, inputting them into a bidirectional long short-term memory neural network, training based on the connectionist temporal classification algorithm to obtain the total error factor of the bidirectional long short-term memory neural network, and updating the network parameters of the bidirectional long short-term memory neural network with the particle swarm optimization algorithm according to that total error factor, obtaining a standard Chinese text recognition model;
obtaining non-standard Chinese text training samples, inputting them into the standard Chinese text recognition model, training based on the connectionist temporal classification algorithm to obtain the total error factor of the standard Chinese text recognition model, and updating the network parameters of the standard Chinese text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining an adjusted Chinese handwritten text recognition model;
obtaining Chinese text samples to be tested, recognizing them with the adjusted Chinese handwritten text recognition model, collecting the error texts whose recognition results do not match the real results, and taking all the error texts as error-text training samples;
inputting the error-text training samples into the adjusted Chinese handwritten text recognition model, training based on the connectionist temporal classification algorithm to obtain the total error factor of the adjusted Chinese handwritten text recognition model, and updating the network parameters of the adjusted Chinese handwritten text recognition model with the particle swarm optimization algorithm according to that total error factor, obtaining a target Chinese handwritten text recognition model.
Embodiments of this application further provide a text recognition method, device, equipment, and medium to solve the problem of low accuracy in current handwritten text recognition.
A text recognition method includes:
obtaining Chinese text to be recognized, recognizing it with a target Chinese handwritten text recognition model, and obtaining the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model, where the target Chinese handwritten text recognition model is obtained by the handwriting model training method described above;
selecting the maximum output value among the output values corresponding to the Chinese text to be recognized, and obtaining the recognition result of the Chinese text to be recognized according to the maximum output value.
A text recognition device includes:
an output value acquisition module, configured to obtain Chinese text to be recognized, recognize it with a target Chinese handwritten text recognition model, and obtain the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model, where the target Chinese handwritten text recognition model is obtained by the handwriting model training method;
a recognition result acquisition module, configured to select the maximum output value among the output values corresponding to the Chinese text to be recognized and obtain the recognition result of the Chinese text to be recognized according to the maximum output value.
A computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; the processor implements the following steps when executing the computer-readable instructions:
obtaining Chinese text to be recognized, recognizing it with a target Chinese handwritten text recognition model, and obtaining the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model, where the target Chinese handwritten text recognition model is obtained by the handwriting model training method described above;
selecting the maximum output value among the output values corresponding to the Chinese text to be recognized, and obtaining the recognition result of the Chinese text to be recognized according to the maximum output value.
One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
obtaining Chinese text to be recognized, recognizing it with a target Chinese handwritten text recognition model, and obtaining the output values of the Chinese text to be recognized in the target Chinese handwritten text recognition model, where the target Chinese handwritten text recognition model is obtained by the handwriting model training method described above;
selecting the maximum output value among the output values corresponding to the Chinese text to be recognized, and obtaining the recognition result of the Chinese text to be recognized according to the maximum output value.
Details of one or more embodiments of this application are set forth in the drawings and the description below; other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is an application environment diagram of the handwriting model training method in an embodiment of this application;
FIG. 2 is a flowchart of the handwriting model training method in an embodiment of this application;
FIG. 3 is a specific flowchart of step S10 in FIG. 2;
FIG. 4 is another specific flowchart of step S10 in FIG. 2;
FIG. 5 is a specific flowchart of step S30 in FIG. 2;
FIG. 6 is a schematic diagram of the handwriting model training device in an embodiment of this application;
FIG. 7 is a flowchart of the text recognition method in an embodiment of this application;
FIG. 8 is a schematic diagram of the text recognition device in an embodiment of this application;
FIG. 9 is a schematic diagram of the computer device in an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
FIG. 1 shows the application environment of the handwriting model training method provided by an embodiment of this application. The application environment includes a server and a client connected through a network. The client is a device capable of human-computer interaction with a user, including but not limited to computers, smartphones, and tablets; the server can be implemented as an independent server or as a server cluster composed of multiple servers. The handwriting model training method provided by the embodiments of this application is applied on the server.
As shown in FIG. 2, which is a flowchart of the handwriting model training method in an embodiment of this application, the handwriting model training method includes the following steps:
S10: Obtain standard Chinese text training samples, input them into a bidirectional long short-term memory neural network, train based on the connectionist temporal classification algorithm to obtain the total error factor of the network, and update the network's parameters with the particle swarm optimization algorithm according to the total error factor, obtaining a standard Chinese text recognition model.
Here, standard Chinese text training samples are training samples obtained from standard text, i.e., text composed in an orderly way in Chinese fonts such as kaiti (regular script), songti (Song), or lishu (clerical script); the font is generally kaiti or songti. A bidirectional long short-term memory (BiLSTM) network is a recurrent neural network that trains on sequence data in both the forward and the reverse direction of the sequence; it can relate not only preceding data but also following data, and can therefore learn sequence-related deep features of the data from the context on both sides. The connectionist temporal classification (CTC) algorithm is a fully end-to-end training algorithm (originally for acoustic models) that requires no pre-alignment of the training samples; an input sequence and an output sequence suffice for training. Particle swarm optimization (PSO) is a global stochastic optimization algorithm: in the initial stage of training it finds the convergence region of the optimal solution, then converges within that region to obtain the optimal solution, i.e., the minimum of the error function, thereby updating the network parameters effectively.
In this embodiment, standard Chinese text training samples are obtained. The font in the samples is uniform (fonts are not mixed); for instance, all samples for model training use songti, which is taken as the example in this embodiment. Understandably, the Chinese fonts composing standard text here are the current mainstream Chinese fonts, such as songti, the default font of computer input methods, and kaiti, a mainstream font commonly used for copying practice; fonts rarely used in daily life, such as cursive script or youyuan, are not included in the range of fonts composing standard text. After the standard Chinese text training samples are obtained, they are input into the BiLSTM network and trained based on the CTC algorithm; the total error factor of the network is obtained, and the network parameters are updated with the PSO algorithm according to the total error factor, yielding the standard Chinese text recognition model. During training this model learns the deep features of the standard samples, so it can recognize standard text accurately and possesses the ability to recognize standard text; moreover, training it requires no manual labeling or data alignment of the samples and can proceed end to end directly. It should be noted that whatever Chinese font the samples use, whether kaiti, songti, lishu, or another, the standard texts composed in these fonts differ little at the level of glyph recognition, so the trained standard Chinese text recognition model can accurately recognize standard text in kaiti, songti, lishu, and similar fonts, obtaining fairly accurate results.
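As a rough illustration of why reading a sequence in both directions helps, the toy sketch below (an assumption-laden simplification chosen for this document, not the BiLSTM equations themselves) pairs each position with a summary of everything before it and everything after it, which is the information a bidirectional pass makes available at every step:

```python
def bidirectional_context(seq):
    """For each position, pair a left-to-right prefix with a
    right-to-left suffix, so every step sees both past and future."""
    n = len(seq)
    forward = [seq[: i + 1] for i in range(n)]   # preceding data
    backward = [seq[i:] for i in range(n)]       # following data
    return list(zip(forward, backward))

ctx = bidirectional_context("你们好")
# position 1 ("们") sees prefix "你们" and suffix "们好"
```

A real BiLSTM replaces the raw prefix/suffix strings with learned hidden-state summaries, but the directionality of the information flow is the same.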
S20: Obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model, and update the model's network parameters with the PSO algorithm according to that total error factor, obtaining an adjusted Chinese handwritten text recognition model.
Here, non-standard Chinese text training samples are training samples obtained from handwritten Chinese text, which may specifically be text written by hand in imitation of mainstream fonts such as kaiti, songti, or lishu. Understandably, the difference between the non-standard and the standard Chinese text training samples is that the non-standard samples are obtained from handwritten Chinese text; being handwritten, they naturally include a wide variety of different letterforms.
In this embodiment, the server obtains non-standard Chinese text training samples, which carry the features of handwritten Chinese text, inputs them into the standard Chinese text recognition model, trains and adjusts based on the CTC algorithm, and updates the network parameters of the standard Chinese text recognition model with the PSO algorithm, obtaining the adjusted Chinese handwritten text recognition model. During training, the total error factor of the standard Chinese text recognition model is obtained and the network update is carried out according to it. Understandably, the standard Chinese text recognition model can recognize standard Chinese text, but its accuracy on handwritten Chinese text is not high. This embodiment therefore trains with non-standard Chinese text training samples, letting the model adjust its network parameters on top of its existing ability to recognize standard text, yielding the adjusted Chinese handwritten text recognition model. This adjusted model learns the deep features of handwritten Chinese text on top of its original recognition of standard text; combining the deep features of standard text and handwritten Chinese text, it can effectively recognize both and produce recognition results of high accuracy.
When performing text recognition, the BiLSTM network judges according to the pixel distribution and sequence of the text. Real-life handwritten Chinese text differs from standard text, but this difference is much smaller than the difference from non-corresponding standard text. For example, handwritten "你好" and standard "你好" differ in pixel distribution, but this difference is clearly much smaller than that between handwritten "你好" and standard "再见". In other words, even though a handwritten Chinese text differs somewhat from its corresponding standard text, this difference is far smaller than its difference from non-corresponding standard text; the recognition result can therefore be determined by the principle of greatest similarity (i.e., smallest difference). The adjusted Chinese handwritten text recognition model is trained from the BiLSTM network; combining the deep features of standard text and handwritten Chinese text, it can effectively recognize handwritten Chinese text according to those deep features.
It should be noted that the order of steps S10 and S20 in this embodiment cannot be swapped: step S10 must be executed before step S20. Training the BiLSTM network first with the standard Chinese training samples gives the resulting standard Chinese text recognition model good recognition ability and accurate results on standard text. Fine-tuning in step S20 on top of that good recognition ability lets the resulting adjusted Chinese handwritten text recognition model effectively recognize handwritten Chinese text according to the learned deep features of handwritten text, with fairly accurate results. If step S20 were executed first, or alone, then because handwriting comes in all kinds of forms, the features learned directly from handwritten Chinese text would not reflect handwritten text well; the model would learn "badly" from the start, and no amount of later adjustment could yield accurate recognition of handwritten Chinese text. Although everyone's handwriting differs, the great majority of it resembles standard text (handwriting imitates standard text). Training first on standard text therefore better matches reality and works better than training directly on handwritten text; the corresponding adjustments can then be made under a "good" model, obtaining an adjusted Chinese handwritten text recognition model with a high recognition rate for handwritten Chinese text.
S30: Obtain Chinese text samples to be tested, recognize them with the adjusted Chinese handwritten text recognition model, collect the error texts whose recognition results do not match the ground truth, and take all the error texts as error-text training samples.
Here, the Chinese text samples to be tested are test samples obtained from standard text and handwritten Chinese text. The standard text used in this step is the same as that used for training in step S10 (because each character in fonts such as kaiti and songti is uniquely determined); the handwritten Chinese text used may differ from that used for training in step S20 (Chinese text handwritten by different people is not identical, and each handwritten character may take many letterforms; to keep these samples distinct from the non-standard training samples of step S20 and avoid overfitting during model training, this step generally uses handwritten Chinese text different from step S20).
In this embodiment, the trained adjusted Chinese handwritten text recognition model is used to recognize the Chinese text samples to be tested. The standard text and handwritten Chinese text may be input to the adjusted model in mixed fashion. When the adjusted model recognizes the samples to be tested, the corresponding recognition results are obtained, and all error texts whose recognition results do not match the label values (ground truth) are taken as error-text training samples. These samples reflect that the adjusted model still suffers from insufficient recognition accuracy, so that the model can subsequently be further updated and optimized according to them.
Since the recognition accuracy of the adjusted model is in fact jointly influenced by the standard and non-standard training samples, updating the network parameters first with the standard samples and then with the non-standard samples causes the adjusted model to over-learn the features of the non-standard samples: it attains very high accuracy on those non-standard samples (including their handwritten Chinese text) but, having over-learned their features, loses accuracy on handwritten Chinese text other than those samples. Step S30 therefore has the adjusted model recognize the Chinese text samples to be tested, which largely eliminates the over-learning of the non-standard training samples used during training. That is, recognizing the test samples with the adjusted model uncovers the error caused by over-learning; this error is reflected concretely in the error texts, so the network parameters of the adjusted model can be further updated and optimized according to them.
S40: Input the error-text training samples into the adjusted Chinese handwritten text recognition model, train based on the CTC algorithm to obtain the total error factor of the adjusted model, and update the adjusted model's network parameters with the PSO algorithm according to that total error factor, obtaining the target Chinese handwritten text recognition model.
In this embodiment, the error-text training samples are input into the adjusted Chinese handwritten text recognition model and training proceeds based on the CTC algorithm. The error-text training samples reflect the inaccuracy that arises, when training the adjusted model, from over-learning the features of the non-standard training samples, in recognizing handwritten Chinese text outside those samples. Moreover, because the standard samples are used first and the non-standard samples afterwards, the originally learned features of standard text are over-weakened, which affects the "framework" the model initially built for recognizing standard text. The error-text training samples resolve both over-learning and over-weakening well: according to the accuracy problems they reflect, they largely eliminate the adverse effects of the over-learning and over-weakening produced in the original training. Specifically, the total error factor of the adjusted model is obtained; according to it, the PSO algorithm is used during training on the error-text samples to update the adjusted model's network parameters, obtaining the target Chinese handwritten text recognition model, i.e., the finally trained model usable for recognizing Chinese handwritten text. Training uses the BiLSTM network, which exploits the sequence character of Chinese text to learn its deep features and raise the recognition rate of the target model. The training algorithm is CTC, which requires no manual labeling or data alignment of the training samples, reduces model complexity, and trains directly on unaligned, variable-length sequences. For updating the network parameters, the PSO algorithm markedly improves training efficiency, updates the parameters effectively, and raises the recognition accuracy of the target model.
In steps S10-S40, the standard Chinese text training samples are used to train and obtain the standard Chinese text recognition model, which is then adjusted and updated with non-standard Chinese text so that the resulting adjusted Chinese handwritten text recognition model, while retaining the ability to recognize standard text, learns the deep features of handwritten Chinese text through training updates and can recognize handwritten Chinese text well. The adjusted model then recognizes the Chinese text samples to be tested; the error texts whose recognition results do not match the ground truth are collected, and all of them are input as error-text training samples into the adjusted model for training updates based on the CTC algorithm, obtaining the target Chinese handwritten text recognition model. Using the error-text training samples largely eliminates the adverse effects of the over-learning and over-weakening produced in the original training and further optimizes recognition accuracy. Each model's network parameters are updated with the PSO algorithm, which performs global stochastic optimization: in the initial training stage it finds the convergence region of the optimal solution, then converges within it to obtain the optimal solution and the minimum of the error function, updating the BiLSTM network parameters effectively. Each model is trained on the BiLSTM network, which exploits the sequence character of Chinese text to learn its deep features and recognize different handwritten Chinese texts. The training algorithm for each model is CTC, which needs no manual labeling or data alignment, reduces model complexity, and trains directly on unaligned, variable-length sequences.
In an embodiment, as shown in FIG. 3, obtaining the standard Chinese text training samples in step S10 specifically includes the following steps:
S101: Obtain the pixel-value feature matrix of each Chinese text in the Chinese text training samples to be processed, and normalize each pixel value in the pixel-value feature matrix of each Chinese text, obtaining the normalized pixel-value feature matrix of each Chinese text, where the normalization formula is
y = (x - MinValue) / (MaxValue - MinValue)
in which MaxValue is the maximum pixel value in the pixel-value feature matrix, MinValue is the minimum pixel value in the pixel-value feature matrix, x is the pixel value before normalization, and y is the pixel value after normalization.
Here, the Chinese text training samples to be processed are the initially obtained, unprocessed training samples.
In this embodiment, a mature, open-source convolutional neural network can be used to extract the features of the Chinese text training samples to be processed and obtain the pixel-value feature matrix of each Chinese text in them. The pixel-value feature matrix of each Chinese text represents the features of the corresponding text; pixel values stand for the text's features here, and since text is represented two-dimensionally as an image, the pixel values can be expressed as a matrix, i.e., the pixel-value feature matrix. The computer device can recognize the form of the pixel-value feature matrix and read the values in it. After the server obtains each Chinese text's pixel-value feature matrix, it normalizes each pixel value in the matrix with the normalization formula, obtaining each text's normalized pixel-value features. In this embodiment, normalization compresses the pixel-value feature matrices of all texts into the same range, which speeds up computations involving these matrices and helps improve the efficiency of training the standard Chinese text recognition model.
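The min-max normalization above can be sketched as follows (a minimal example over a flat list of pixel values; the matrix case applies the same formula element-wise, and the function name is a choice made for this illustration):

```python
def min_max_normalize(pixels):
    """Apply y = (x - MinValue) / (MaxValue - MinValue) to each pixel."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # constant input: avoid division by zero
        return [0.0] * len(pixels)
    return [(x - lo) / (hi - lo) for x in pixels]

print(min_max_normalize([0, 64, 128, 255]))  # all values land in [0, 1]
```

After this step every feature matrix lives in the same [0, 1] range, which is what makes the subsequent thresholding at 0.5 meaningful.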
S102: Divide the pixel values in each Chinese text's normalized pixel-value feature matrix into two classes of pixel values, build each Chinese text's binarized pixel-value feature matrix based on the two classes, and take the combination of Chinese texts corresponding to the binarized pixel-value feature matrices as the standard Chinese text training samples.
In this embodiment, the pixel values in each Chinese text's normalized pixel-value feature matrix are divided into two classes, meaning the values contain only value A or value B. Specifically, normalized pixel values greater than or equal to 0.5 can be set to 1 and values below 0.5 to 0, building each Chinese text's binarized pixel-value feature matrix, whose entries contain only 0 or 1. After the binarized matrices are built, the combination of Chinese texts corresponding to them is taken as the standard Chinese text training samples, and the standard Chinese character training samples are divided into preset batches. For example, an image containing text has text-pixel parts and blank-pixel parts; pixels on the text are generally darker in color, so a "1" in the binarized matrix represents a text-pixel part and a "0" a blank-pixel part. Understandably, building binarized pixel-value feature matrices further simplifies the feature representation of the texts: matrices of 0s and 1s alone suffice to represent and distinguish the texts, speeding up the computer's processing of the texts' feature matrices and further improving the efficiency of training the standard Chinese text recognition model.
Steps S101-S102 normalize the Chinese text training samples to be processed and divide the values into two classes, obtaining each Chinese text's binarized pixel-value feature matrix and taking the texts corresponding to those matrices as the standard Chinese text training samples, which markedly shortens the time needed to train the standard Chinese text recognition model.
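The two-class split of S102 (values >= 0.5 become 1, values < 0.5 become 0) can be sketched as below; the threshold of 0.5 is the one given in the text, while the function name is chosen for the example:

```python
def binarize(norm_matrix, threshold=0.5):
    """Map each normalized pixel to 1 (text pixel) or 0 (blank pixel)."""
    return [[1 if v >= threshold else 0 for v in row] for row in norm_matrix]

print(binarize([[0.9, 0.2], [0.5, 0.49]]))  # [[1, 0], [1, 0]]
```

The resulting 0/1 matrix is the binarized pixel-value feature matrix that the training batches are built from.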
In an embodiment, as shown in FIG. 4, in step S10, inputting the standard Chinese text training samples into the BiLSTM network, training based on the CTC algorithm to obtain the network's total error factor, and updating the network parameters with the PSO algorithm according to the total error factor to obtain the standard Chinese text recognition model specifically includes the following steps:
S111: Input the standard Chinese text training samples into the BiLSTM network in forward sequence order, train based on the CTC algorithm, and obtain the forward-propagation output and backward-propagation output of the samples in the network in forward sequence order; input the samples into the network in reverse sequence order, train based on the CTC algorithm, and obtain the forward-propagation output and backward-propagation output in reverse sequence order. The forward-propagation output is expressed as
α(t,u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α(t-1, i)
where t is the sequence step, u is the label index of the output corresponding to t, y^t_{l'_u} is the probability that the output of the output sequence at step t is l'_u, and f(u) selects the positions reachable at the previous step. The backward-propagation output is expressed as
β(t,u) = Σ_{i=u}^{g(u)} β(t+1, i) · y^{t+1}_{l'_i}
where t is the sequence step, u is the label index of the output corresponding to t, y^{t+1}_{l'_i} is the probability that the output of the output sequence at step t+1 is l'_i, and g(u) selects the positions reachable at the next step.
In this embodiment, the standard Chinese text training samples are input into the BiLSTM network in forward sequence order and in reverse sequence order respectively, and trained based on the connectionist temporal classification (CTC) algorithm. CTC is essentially an algorithm for computing a loss function: it measures how much error remains between the input sequence data, after passing through the neural network, and the ground truth (objective fact, also called the label values). Accordingly, the forward-propagation and backward-propagation outputs of the samples in the BiLSTM network can be obtained for both the forward and the reverse sequence direction, and the corresponding error function described and constructed from the forward-direction forward/backward-propagation outputs and the reverse-direction forward/backward-propagation outputs.
The forward sequence direction is taken as an example below. First, several basic definitions in CTC are briefly introduced for a better understanding of the CTC procedure.
y^t_k denotes the probability that the output of the output sequence at step t is k. For example, when the output sequence is (a-ab-), y^3_a denotes the probability that the letter output at step 3 is a. p(π|x) denotes the probability of output path π given input x; assuming the label probabilities output at each sequence step are mutually independent, p(π|x) is expressed as
p(π|x) = Π_{t=1}^{T} y^t_{π_t}
which can be understood as the product, over sequence steps, of the probabilities of the labels along path π. F denotes a many-to-one mapping from an output path π to a label sequence l, e.g., F(a-ab-) = F(-aa-abb) = aab (where - denotes a blank); in this embodiment the mapping removes repeated characters and blanks as in this example. p(l|x) denotes the probability that the output is sequence l given input sequence x (e.g., a sample from the standard Chinese text training samples); it is therefore the sum of the probabilities of all paths π whose image under the mapping is l:
p(l|x) = Σ_{π: F(π)=l} p(π|x)
Understandably, as the length of sequence l grows, the number of corresponding paths grows exponentially, so an iterative approach can be taken: the path probability of sequence l is computed from the relations between step t and steps t-1 and t+1, from the perspective of forward propagation and backward propagation, improving computational efficiency. Before the computation, sequence l is preprocessed: blanks are added at the beginning and end of l and between every pair of characters. If the original length of l is U, the preprocessed sequence l' has length 2U+1. For a sequence l, the forward variable α(t,u) is defined as the sum of the probabilities of all paths whose output length is t and which map to l under F:
α(t,u) = Σ_{π∈V(t,u)} Π_{i=1}^{t} y^i_{π_i}
where V(t,u) = {π ∈ A'^t : F(π) = l_{1:u/2}, π_t = l'_u} is the set of all paths of length t that map under F to the first u/2 characters of l and whose output at sequence step t is l'_u; u/2 here is an index and is rounded down. Every correct path must begin with a blank or with l_1 (the first character of l), giving the initialization constraints:
α(1,1) = y^1_b (b denotes blank), α(1,2) = y^1_{l_1}, and α(1,u) = 0 for u > 2.
Then p(l|x) can be expressed by the forward variables: p(l|x) = α(T,U') + α(T,U'-1), where α(T,U') can be understood as covering all paths of length T that map to l under F and whose label output at step T is l'_{U'}, and α(T,U'-1) those ending at l'_{U'-1}, i.e., whether or not the last symbol of the path is a blank. The forward variables can then be computed recursively in time:
α(t,u) = y^t_{l'_u} Σ_{i=f(u)}^{u} α(t-1, i)
where f(u) in effect enumerates all possible paths at the previous moment, with the condition:
f(u) = u-1, if l'_u = b or l'_{u-2} = l'_u; f(u) = u-2, otherwise.
Analogously to the forward-propagation process, a backward variable β(t,u) can be defined as the sum, over the paths π' appended from time t+1 onward to the prefixes counted by the forward variable α(t,u), of the probabilities that make the whole path map to sequence l under F:
β(t,u) = Σ_{π'∈W(t,u)} Π_{i=t+1}^{T} y^i_{π'_i}
where W(t,u) is the corresponding set of suffix paths. Backward propagation also has corresponding initialization conditions:
β(T,U') = β(T,U'-1) = 1, and β(T,u) = 0 for u < U'-1.
The backward variables can likewise be obtained recursively:
β(t,u) = Σ_{i=u}^{g(u)} β(t+1, i) y^{t+1}_{l'_i}
where g(u) is the possible path-selection function at time t+1, expressed as:
g(u) = u+1, if l'_u = b or l'_{u+2} = l'_u; g(u) = u+2, otherwise.
The forward-propagation and backward-propagation processes can then be described by the forward and backward variables, and the corresponding forward-propagation and backward-propagation outputs obtained (the recursive expression of the forward variable is the forward-propagation output, and the recursive expression of the backward variable is the backward-propagation output). Understandably, the procedure for obtaining the forward- and backward-propagation outputs in the reverse sequence direction is similar to that for the forward direction described above, differing only in the direction of the input sequence; to avoid repetition, it is not described again here.
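The forward recursion above can be exercised numerically. The sketch below (a compact illustration; variable names are chosen for the example) builds the blank-extended sequence l', applies the initialization constraints and the α recursion, and returns p(l|x) = α(T,U') + α(T,U'-1):

```python
def ctc_forward(y, label, blank=0):
    """y[t][k]: probability of symbol k at step t; label: target indices.
    Returns p(label | x) via the CTC forward variables."""
    ext = [blank]                      # l': blanks around and between labels
    for c in label:
        ext += [c, blank]
    T, S = len(y), len(ext)
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = y[0][blank]          # initialization constraints
    if S > 1:
        alpha[0][1] = y[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]
            if s > 0:
                a += alpha[t - 1][s - 1]
            # a blank may be skipped only between two different labels
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]
            alpha[t][s] = a * y[t][ext[s]]
    return alpha[T - 1][S - 1] + alpha[T - 1][S - 2]
```

With three steps of uniform probabilities over {blank, a, b}, the paths of length 3 collapsing to "a" are exactly {aaa, aa-, -aa, a--, --a, -a-}, so p = 6/27, which the recursion reproduces.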
S112: Obtain the forward error factor of the BiLSTM network from the forward- and backward-propagation outputs of the standard Chinese text training samples fed through the network in forward sequence order, and the reverse error factor from the forward- and backward-propagation outputs in reverse sequence order; add the forward error factor and the reverse error factor to obtain the total error factor of the BiLSTM network, and construct the error function from the total error factor.
In an embodiment, suppose first that only the forward- and backward-propagation outputs in the forward sequence direction exist; the error function for the forward direction is expressed with the negative logarithm of the probability. Specifically, letting l = z, the error function can be written as
L(S) = -Σ_{(x,z)∈S} ln p(z|x)
where S denotes the standard Chinese text training samples. The p(z|x) in this expression can be computed from the forward- and backward-propagation outputs, and ln p(z|x) is an error factor that can measure the error. First define a set X(t,u) of all correct paths that are at position u at time t: X(t,u) = {π ∈ A'^T : F(π) = z, π_t = z'_u}. Then the product of the forward and backward variables at any time represents the sum of the probabilities of all such paths:
α(t,u)β(t,u) = Σ_{π∈X(t,u)} Π_{t=1}^{T} y^t_{π_t}
This expression is the probability sum of all correct paths whose position at time t is exactly u; in the general case, for any time t, summing the correct paths over all positions gives the total probability:
p(z|x) = Σ_{u=1}^{|z'|} α(t,u)β(t,u)
and from the definition of the error function we obtain
L(S) = -Σ_{(x,z)∈S} ln Σ_{u=1}^{|z'|} α(t,u)β(t,u)
The above assumes that only the forward-sequence forward- and backward-propagation outputs exist when constructing the error function. When the reverse-sequence forward- and backward-propagation outputs are also included, the forward error factor of the BiLSTM network (hereafter the forward error factor) and the reverse error factor of the BiLSTM network (hereafter the reverse error factor) are first obtained from the error factor ln p(z|x); the forward and reverse error factors are added to obtain the total error factor, and the error function is then constructed from the total error factor using the negative logarithm of the probability, expressed, as in the computation above, in terms of the forward- and backward-propagation outputs of the forward sequence direction and of the reverse sequence direction; this is not repeated here. Once the error function is obtained from the total error factor, the network parameters can be updated according to it to obtain the standard Chinese text recognition model.
S113: According to the error function, update the network parameters of the BiLSTM network with the particle swarm optimization algorithm, obtaining the standard Chinese text recognition model.
In an embodiment, the network parameters are updated with the PSO algorithm according to the obtained error function. Specifically, the partial derivative (i.e., the gradient) of the loss function with respect to the network outputs before the softmax layer is computed; the gradient is multiplied by the learning rate, and the product is subtracted from the original network parameters to update them. The PSO algorithm comprises the particle velocity update formula (Formula 1) and the particle position update formula (Formula 2):
V_{i+1} = w × V_i + c1 × rand() × (pbest_i - X_i) + c2 × rand() × (gbest - X_i) ------- (Formula 1)
X_{i+1} = X_i + V_{i+1} ------- (Formula 2)
where the sample dimension of the standard Chinese text training samples (i.e., the matrix dimension of the corresponding binarized pixel-value feature matrices) is n; X_i = (x_{i1}, x_{i2}, ..., x_{in}) is the position of the i-th particle and X_{i+1} the position of the (i+1)-th particle; V_i = (v_{i1}, v_{i2}, ..., v_{in}) is the velocity of the i-th particle and V_{i+1} the velocity of the (i+1)-th particle; pbest_i = (pbest_{i1}, pbest_{i2}, ..., pbest_{in}) is the local extremum corresponding to the i-th particle; gbest = (gbest_1, gbest_2, ..., gbest_n) is the optimal extremum (also called the global extremum); w is the inertia weight; c1 is the first learning factor and c2 the second learning factor, both generally set to the constant 2; rand() is an arbitrary random value in [0,1].
Understandably, c1 × rand() controls the particle's step toward the best position it has itself experienced, and c2 × rand() controls the step toward the best position experienced by all particles. When w is large, the swarm exhibits strong global search ability; when w is small, strong local search ability, a property very well suited to network training. Usually w is set relatively large in the initial stage of training to guarantee sufficient global search ability, and relatively small in the convergence stage to guarantee convergence to the optimal solution.
In Formula (1), the first term on the right is the original velocity term. The second term on the right is the "cognitive" part: it considers the influence of the particle's own historical best position on the new position, a process of self-reflection. The third term on the right is the "social" part: it considers the influence of the best position of all particles on the new position. Formula (1) as a whole reflects a process of information sharing. Without the first term, the velocity update would depend only on the best positions experienced by the particle and by all particles, giving the swarm strong convergence. The first term guarantees that the swarm retains a degree of global search ability and can escape extrema; conversely, if that part is small, the swarm converges rapidly. The second and third terms guarantee the swarm's local convergence. The PSO algorithm is a global stochastic optimization algorithm: with these formulas, the convergence region of the optimal solution can be found in the initial stage of training, and convergence within that region then yields the optimal solution (i.e., the minimum of the error function).
The process of updating the BiLSTM network parameters with the PSO algorithm specifically includes the following steps:
(1) Initialize the particle positions X and particle velocities V, and set the position maximum X_max and minimum X_min, the velocity maximum V_max and minimum V_min, the inertia weight w, the first learning factor c1, the second learning factor c2, the maximum number of training iterations α, and the stop-iteration threshold ε.
(2) For each particle pbest: compute the particle's fitness with the error function (i.e., search for a better solution); if the particle finds a better solution, update pbest; otherwise pbest remains unchanged.
(3) Compare the fitness of the particle with the smallest fitness among the local extrema pbest against the fitness of the global extremum gbest, and select the particle with the smallest fitness to update the value of gbest.
(4) Update the swarm's particle positions X and velocities V according to Formulas (1) and (2).
Check whether the velocities exceed [V_min, V_max]; if a velocity is out of range, set it to the velocity minimum and/or maximum accordingly.
Check whether the positions exceed [X_min, X_max]; if a position is out of range, set it to the position minimum and/or maximum, and at the same time update the inertia weight w; the update formula for w is
w = w_max - (w_max - w_min) × β / α
where β is the current training iteration (w_max and w_min being the upper and lower bounds of the inertia weight).
(5) Check whether the maximum number of training iterations α has been reached or the error has fallen below the stop-iteration threshold ε; if so, terminate; if not, return to (2) and continue until the requirements are met.
The PSO algorithm can obtain the gradient quickly and accurately and update the network parameters effectively.
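One iteration of the velocity and position updates (Formulas 1 and 2) together with the clipping of step (4) can be sketched as below; the function name, bounds, and the injected `rand` callable are choices made here so the step is reproducible, not details fixed by the patent:

```python
import random

def pso_step(X, V, pbest, gbest, w=0.9, c1=2.0, c2=2.0,
             v_bounds=(-1.0, 1.0), x_bounds=(-5.0, 5.0), rand=random.random):
    """Update each particle's velocity (Formula 1) and position (Formula 2),
    clipping both to their allowed ranges."""
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    newX, newV = [], []
    for x, v, pb in zip(X, V, pbest):
        nv = [clip(w * vj + c1 * rand() * (pbj - xj) + c2 * rand() * (gbj - xj),
                   *v_bounds)
              for xj, vj, pbj, gbj in zip(x, v, pb, gbest)]
        nx = [clip(xj + vj, *x_bounds) for xj, vj in zip(x, nv)]
        newV.append(nv)
        newX.append(nx)
    return newX, newV
```

With `rand` fixed to 0.5, a single 1-D particle at 0 with velocity 0.1, pbest 1.0, gbest 2.0, and w = 0.5 gets a raw velocity of 0.05 + 1.0 + 2.0 = 3.05, which clipping reduces to the velocity bound 1.0; the position then moves to 1.0.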
Steps S111-S113 construct the error function from the forward- and backward-propagation outputs obtained in the BiLSTM network in forward sequence order and in reverse sequence order from the standard Chinese text training samples, and, according to the error function, back-propagate the error with the PSO algorithm and update the network parameters, achieving the purpose of obtaining the standard Chinese text recognition model. The model learns the deep features of the standard Chinese text training samples and can accurately recognize standard text.
In an embodiment, as shown in FIG. 5, in step S30, recognizing the Chinese text samples to be tested with the adjusted Chinese handwritten text recognition model, collecting the error texts whose recognition results do not match the ground truth, and taking all error texts as error-text training samples specifically includes the following steps:
S31: Input the Chinese text samples to be tested into the adjusted Chinese handwritten text recognition model, and obtain the output values of each text of the samples in the adjusted model.
In this embodiment, the adjusted Chinese handwritten text recognition model recognizes the Chinese text samples to be tested, which contain several Chinese texts. A text consists of characters, and the output values of each text mentioned in this embodiment are specifically the output values for each character form of each character. The Chinese character library contains roughly three thousand common characters (including the space and the various Chinese punctuation marks); the output layer of the adjusted model should provide, for each character of the library, a probability value of its similarity to a character of the input test sample, which can be implemented with the softmax function. Understandably, if one text sample of the test set is, say, an 8×8-pixel image bearing the three characters "你们好", then at recognition time the image is cut vertically into 8 columns, giving 8 three-dimensional vectors that serve as the 8 inputs of the adjusted model. The number of outputs of the adjusted model equals the number of inputs, but the sample really has only 3 outputs, not 8, so the actual output will contain duplicated characters, e.g., "你 你 们 们 好 _ _ _", "你 _ 们 _ 们 _ 好 _", or "_ 你 你 _ 们 _ 好 _". For each of the 8 outputs, the corresponding Chinese character has a probability value of similarity against every character of the library; these probability values are the output values of each text of the test samples in the adjusted model. There are many such output values, each being the probability of similarity between the character at that output and a character of the library, and the recognition result of each text can be determined from them.
S32: Select the maximum output value among the output values corresponding to each text, and obtain the recognition result of each text according to the maximum output value.
In this embodiment, the maximum among all output values corresponding to each text is selected, and the recognition result of the text is obtained from it. Understandably, the output values directly reflect how similar the characters of the input test sample are to each character of the library, and the maximum output value indicates the library character closest to the sample character, so the actual output can be determined from the characters corresponding to the maximum output values, e.g., "你 你 们 们 好 _ _ _", "你 _ 们 _ 们 _ 好 _", or "_ 你 你 _ 们 _ 好 _" rather than outputs such as "妳 妳 扪 扪 好 _ _ _" or "_ 你 妳 _ 扪 _ 好 _". According to the definition of the connectionist temporal classification algorithm, the actual output must still be processed further: the duplicated adjacent characters in the actual output are removed, keeping only one, and the blanks are removed, yielding the recognition result, which in this embodiment is "你们好". Determining the correctness of the actually output characters by the maximum output value and then de-duplicating and removing blanks effectively obtains the recognition result of each text.
S33: According to the recognition results, collect the error texts whose recognition results do not match the ground truth, and take all error texts as error-text training samples.
In this embodiment, the obtained recognition results are compared with the ground truth (objective fact), and the error texts whose recognition results do not match are taken as error-text training samples. Understandably, a recognition result is only what the adjusted model recognized from the test samples and may differ from the truth; this reflects remaining deficiencies in the model's recognition accuracy, and these deficiencies can be optimized away with the error-text training samples to reach a more accurate recognition effect.
Steps S31-S33 select, from the output values of each text of the test samples in the adjusted model, the maximum output value reflecting the degree of similarity between texts (in fact, between characters); the recognition result is then obtained from the maximum output value, and the error-text training samples derived from the recognition results, providing an important technical premise for subsequently using the error-text training samples to further optimize recognition accuracy.
In an embodiment, before step S10, i.e., before the step of obtaining the standard Chinese text training samples, the handwriting model training method further includes the following step: initializing the bidirectional long short-term memory neural network.
In an embodiment, initializing the BiLSTM network means initializing its network parameters, giving them initial values. If the initialized weights lie in a relatively flat region of the error surface, convergence of BiLSTM model training may be abnormally slow. The network parameters can be initialized uniformly in a relatively small zero-mean interval, such as [-0.30, +0.30]. Reasonable initialization gives the network flexible adjustment ability in the early stage, allows effective adjustment during training, and can quickly and effectively find the minimum of the error function, benefiting the update and adjustment of the BiLSTM network, so that models trained on it recognize Chinese handwritten characters accurately.
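The initialization described here, network parameters drawn uniformly from a small zero-mean interval such as [-0.30, +0.30], can be sketched as follows (the function name and fixed seed are choices made for the example):

```python
import random

def init_params(n, low=-0.30, high=0.30, seed=0):
    """Uniformly initialize n network parameters in [low, high]."""
    rng = random.Random(seed)
    return [rng.uniform(low, high) for _ in range(n)]

weights = init_params(8)  # 8 parameters, each in [-0.30, 0.30]
```

Seeding the generator makes experiments repeatable; a real BiLSTM would apply the same draw to every weight matrix and bias vector.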
In the handwriting model training method provided by this embodiment, the BiLSTM network parameters are initialized uniformly in a relatively small zero-mean interval such as [-0.30, +0.30]; this initialization finds the minimum of the error function quickly and effectively, benefiting the update and adjustment of the BiLSTM network. The Chinese text training samples to be processed are normalized and divided into two classes of values to obtain the binarized pixel-value feature matrices, and the texts corresponding to those matrices serve as the standard Chinese text training samples, markedly shortening the time needed to train the standard Chinese text recognition model. From the forward- and backward-propagation outputs obtained in the BiLSTM network in forward sequence order and in reverse sequence order, the forward error factor and the reverse error factor are obtained; the total error factor is formed from them and the error function constructed; back-propagating through the error function then updates the network parameters and yields the standard Chinese text recognition model, which learns the deep features of the standard samples and accurately recognizes standard text. The standard Chinese text recognition model is then adjusted and updated with non-standard Chinese text, so that the resulting adjusted Chinese handwritten text recognition model, while retaining the ability to recognize standard Chinese handwritten text, learns the deep features of non-standard Chinese text through training updates and recognizes non-standard Chinese handwritten text well. Next, from the output values of each text of the test samples in the adjusted model, the maximum output value reflecting the similarity between texts is selected; the recognition result is obtained from it; the error-text training samples are derived from the recognition results; and all error texts are input as error-text training samples into the adjusted model for training updates based on the CTC algorithm, obtaining the target Chinese handwritten text recognition model. Using the error-text training samples largely eliminates the adverse effects of the over-learning and over-weakening produced in the original training and further optimizes recognition accuracy. In addition, in the handwriting model training method provided by this embodiment, each model is trained on the BiLSTM network, which exploits the sequence character of characters, learning their deep features from both the forward and the reverse direction of the sequence and recognizing different Chinese handwritten characters. Each model's network parameters are updated with the PSO algorithm, which performs global stochastic optimization: in the initial training stage it finds the convergence region of the optimal solution, then converges within it to obtain the optimal solution, the minimum of the error function, and the parameter update. The PSO algorithm markedly improves the efficiency of model training, updates the network parameters effectively, and raises the recognition accuracy of the obtained models.
It should be understood that the size of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
FIG. 6 shows a schematic block diagram of a handwriting model training device corresponding one-to-one to the handwriting model training method in the embodiment. As shown in FIG. 6, the handwriting model training device includes a standard Chinese text recognition model acquisition module 10, an adjusted Chinese handwritten text recognition model acquisition module 20, an error-text training sample acquisition module 30, and a target Chinese handwritten text recognition model acquisition module 40. The functions implemented by these modules correspond one-to-one to the corresponding steps of the handwriting model training method in the embodiment; to avoid redundancy, they are not detailed one by one in this embodiment.
The standard Chinese text recognition model acquisition module 10 is configured to obtain standard Chinese text training samples, input them into the BiLSTM network, train based on the CTC algorithm to obtain the total error factor of the BiLSTM network, and update the BiLSTM network's parameters with the PSO algorithm according to that total error factor, obtaining the standard Chinese text recognition model.
The adjusted Chinese handwritten text recognition model acquisition module 20 is configured to obtain non-standard Chinese text training samples, input them into the standard Chinese text recognition model, train based on the CTC algorithm to obtain the total error factor of the standard Chinese text recognition model, and update that model's network parameters with the PSO algorithm according to the total error factor, obtaining the adjusted Chinese handwritten text recognition model.
The error-text training sample acquisition module 30 is configured to obtain Chinese text samples to be tested, recognize them with the adjusted Chinese handwritten text recognition model, collect the error texts whose recognition results do not match the ground truth, and take all error texts as error-text training samples.
The target Chinese handwritten text recognition model acquisition module 40 is configured to input the error-text training samples into the adjusted Chinese handwritten text recognition model, train based on the CTC algorithm to obtain the total error factor of the adjusted model, and update the adjusted model's network parameters with the PSO algorithm according to that total error factor, obtaining the target Chinese handwritten text recognition model.
Preferably, the standard Chinese text recognition model acquisition module 10 includes a normalized pixel-value feature matrix acquisition unit 101, a standard Chinese text training sample acquisition unit 102, a propagation output acquisition unit 111, an error function constructing unit 112, and a standard Chinese text recognition model acquisition unit 113.
The normalized pixel-value feature matrix acquisition unit 101 is configured to obtain the pixel-value feature matrix of each Chinese text in the Chinese text training samples to be processed and normalize each pixel value in it, obtaining the normalized pixel-value feature matrix of each Chinese text, where the normalization formula is
y = (x - MinValue) / (MaxValue - MinValue)
in which MaxValue is the maximum pixel value in the pixel-value feature matrix, MinValue is the minimum pixel value in the pixel-value feature matrix, x is the pixel value before normalization, and y is the pixel value after normalization.
The standard Chinese text training sample acquisition unit 102 is configured to divide the pixel values in each Chinese text's normalized pixel-value feature matrix into two classes of pixel values, build each Chinese text's binarized pixel-value feature matrix based on the two classes, and take the combination of Chinese texts corresponding to the binarized matrices as the standard Chinese text training samples.
The propagation output acquisition unit 111 is configured to input the standard Chinese text training samples into the BiLSTM network in forward sequence order, train based on the CTC algorithm, and obtain the forward-propagation and backward-propagation outputs of the samples in the network in forward sequence order; and to input the samples into the network in reverse sequence order, train based on the CTC algorithm, and obtain the forward-propagation and backward-propagation outputs in reverse sequence order. The forward-propagation output is expressed as
α(t,u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α(t-1, i)
where t is the sequence step, u is the label index of the output corresponding to t, and y^t_{l'_u} is the probability that the output of the output sequence at step t is l'_u. The backward-propagation output is expressed as
β(t,u) = Σ_{i=u}^{g(u)} β(t+1, i) · y^{t+1}_{l'_i}
where t is the sequence step, u is the label index of the output corresponding to t, and y^{t+1}_{l'_i} is the probability that the output of the output sequence at step t+1 is l'_i.
The error function constructing unit 112 is configured to obtain the forward error factor of the BiLSTM network from the forward- and backward-propagation outputs of the standard Chinese text training samples in forward sequence order in the BiLSTM network, obtain the reverse error factor of the BiLSTM network from the forward- and backward-propagation outputs in reverse sequence order, add the forward error factor and the reverse error factor to obtain the network's total error factor, and construct the error function from the total error factor.
The standard Chinese text recognition model acquisition unit 113 is configured to update the BiLSTM network's parameters with the PSO algorithm according to the error function, obtaining the standard Chinese text recognition model.
Preferably, the error-text training sample acquisition module 30 includes a model output value acquisition unit 31, a model recognition result acquisition unit 32, and an error-text training sample acquisition unit 33.
The model output value acquisition unit 31 is configured to input the Chinese text samples to be tested into the adjusted Chinese handwritten text recognition model and obtain the output values of each text of the samples in the adjusted model.
The model recognition result acquisition unit 32 is configured to select the maximum output value among the output values corresponding to each text and obtain the recognition result of each text according to the maximum output value.
The error-text training sample acquisition unit 33 is configured to collect, according to the recognition results, the error texts whose recognition results do not match the ground truth and take all error texts as the error-text training samples.
Preferably, the handwriting model training device further includes an initialization module 50 configured to initialize the bidirectional long short-term memory neural network.
Fig. 7 shows a flowchart of the text recognition method in this embodiment. The method can be applied on computer equipment deployed by banks, investment firms, insurers and similar institutions to recognize handwritten Chinese text, serving an artificial-intelligence purpose. As shown in Fig. 7, the method comprises the following steps:
S50: obtain the Chinese text to be recognized, recognize it with the target Chinese handwritten-text recognition model, and obtain its output values in that model; the target Chinese handwritten-text recognition model is obtained with the handwriting model training method described above.
Here, the Chinese text to be recognized is the Chinese text on which recognition is to be performed.
In this embodiment, the text to be recognized is fed into the target Chinese handwritten-text recognition model, which produces, for each output node, a probability measuring how similar the corresponding Chinese character is to each character in the Chinese character library. These probabilities are the text's output values in the model, and the recognition result of the text can be determined from them.
S60: select the largest of the output values corresponding to the text to be recognized, and derive its recognition result from that maximum output value.
In this embodiment, the largest of all output values corresponding to the text is selected and the corresponding actual output determined from it, for example "你 _ 们 _ 们 _ 好 _". That actual output is then post-processed: repeated characters are removed so that only one is kept, and the blanks are removed, which yields the recognition result of the text. Determining the correctness of each character of the actual output by the maximum output value, and then de-duplicating and stripping blanks, obtains each text's recognition result effectively and improves recognition accuracy.
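The de-duplication and blank-removal just described can be sketched as follows. The blank symbol "_" and the frame-level sequence are illustrative, and this routine collapses consecutive repeats before stripping blanks.

```python
def decode(raw: list[str], blank: str = "_") -> str:
    """Post-process a raw frame-level output: collapse consecutive
    duplicate symbols, then drop the blank symbol."""
    out = []
    prev = None
    for sym in raw:
        if sym != prev:          # keep only the first of a repeated run
            out.append(sym)
        prev = sym
    return "".join(s for s in out if s != blank)

result = decode(["你", "_", "们", "们", "_", "好", "_"])
```

Here the repeated "们" collapses to one character and the blanks are stripped, leaving the recognition result "你们好".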
Through steps S50-S60, the target Chinese handwritten-text recognition model recognizes the text to be recognized, and the recognition result is obtained from the maximum output values together with the de-duplication and blank-removal processing. The target model itself has high recognition precision, and combining it with a Chinese semantic lexicon further improves the accuracy of Chinese handwriting recognition.
In the text recognition method provided by this embodiment of the application, the Chinese text to be recognized is fed into the target Chinese handwritten-text recognition model for recognition, and the recognition result is obtained in combination with a preset Chinese semantic lexicon. Precise recognition results are obtained when this target model is used to recognize handwritten Chinese text.
Fig. 8 shows a schematic block diagram of a text recognition device corresponding one-to-one to the text recognition method of the embodiment. As shown in Fig. 8, the device comprises an output value acquisition module 60 and a recognition result acquisition module 70. The functions realized by these modules correspond one-to-one to the steps of the text recognition method of the embodiment and, to avoid repetition, are not detailed one by one here.
The output value acquisition module 60 is configured to obtain the Chinese text to be recognized, recognize it with the target Chinese handwritten-text recognition model and obtain its output values in that model; the target Chinese handwritten-text recognition model is obtained with the handwriting model training method.
The recognition result acquisition module 70 is configured to select the largest of the output values corresponding to the text to be recognized and to derive its recognition result from that maximum output value.
This embodiment provides one or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the handwriting model training method of the embodiment, or the functions of the modules/units of the handwriting model training device, or the steps of the text recognition method, or the functions of the modules/units of the text recognition device; to avoid repetition, these are not described again one by one here.
Fig. 9 is a schematic diagram of the computer equipment provided by an embodiment of this application. As shown in Fig. 9, the computer equipment 80 of this embodiment comprises a processor 81, a memory 82 and computer-readable instructions 83 stored in the memory 82 and runnable on the processor 81. When executed by the processor 81, the computer-readable instructions 83 implement the handwriting model training method of the embodiment, or the functions of the modules/units of the handwriting model training device, or the steps of the text recognition method, or the functions of the modules/units of the text recognition device; to avoid repetition, these are not described again one by one here.
The computer equipment 80 may be a desktop computer, a notebook, a palmtop computer, a cloud server or another computing device. It may include, but is not limited to, the processor 81 and the memory 82. Those skilled in the art will understand that Fig. 9 is merely an example of the computer equipment 80 and does not limit it; the equipment may include more or fewer components than shown, combine certain components, or use different components, for example input/output devices, network access devices, buses and the like.
The processor 81 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 82 may be an internal storage unit of the computer equipment 80, such as its hard disk or internal memory. It may also be an external storage device of the computer equipment 80, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card or flash card fitted to the equipment. Further, the memory 82 may comprise both an internal storage unit and an external storage device. The memory 82 stores the computer-readable instructions 83 and the other programs and data the computer equipment needs, and may also temporarily store data that has been or will be output.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only an example; in practical application the functions may be assigned to different functional units or modules as needed, that is, the internal structure of the device may be divided into different functional units or modules to accomplish all or part of the functions described above.
The embodiments above only illustrate the technical solution of this application and do not limit it. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, without departing in essence from the spirit and scope of the technical solutions of the embodiments of this application; all such modifications and replacements shall fall within the protection scope of this application.

Claims (20)

  1. A handwriting model training method, comprising:
    obtaining standard Chinese text training samples, feeding the standard Chinese text training samples into a bidirectional long short-term memory (BiLSTM) neural network, training with the connectionist temporal classification (CTC) algorithm to obtain the BiLSTM network's total error factor, and updating the BiLSTM network's parameters with the particle swarm optimization (PSO) algorithm according to that total error factor, to obtain a standard Chinese text recognition model;
    obtaining non-standard Chinese text training samples, feeding the non-standard Chinese text training samples into the standard Chinese text recognition model, training with the CTC algorithm to obtain the standard Chinese text recognition model's total error factor, and updating that model's network parameters with the PSO algorithm according to its total error factor, to obtain an adjusted Chinese handwritten-text recognition model;
    obtaining to-be-tested Chinese text samples, recognizing the to-be-tested Chinese text samples with the adjusted Chinese handwritten-text recognition model, collecting the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as error-text training samples;
    feeding the error-text training samples into the adjusted Chinese handwritten-text recognition model, training with the CTC algorithm to obtain the adjusted model's total error factor, and updating the adjusted model's network parameters with the PSO algorithm according to that total error factor, to obtain a target Chinese handwritten-text recognition model.
  2. The handwriting model training method according to claim 1, wherein the obtaining standard Chinese text training samples comprises:
    obtaining the pixel-value feature matrix of each Chinese text in to-be-processed Chinese text training samples and normalizing every pixel value in each text's pixel-value feature matrix, to obtain each text's normalized pixel-value feature matrix, the normalization formula being
    y = (x - MinValue) / (MaxValue - MinValue)
    where MaxValue is the largest pixel value in the pixel-value feature matrix, MinValue is the smallest pixel value in the matrix, x is a pixel value before normalization and y is the pixel value after normalization;
    splitting the pixel values of each text's normalized pixel-value feature matrix into two classes, building each text's binarized pixel-value feature matrix from the two classes, and taking the Chinese texts corresponding to the binarized pixel-value feature matrices, combined, as the standard Chinese text training samples.
  3. The handwriting model training method according to claim 1, wherein the feeding the standard Chinese text training samples into the BiLSTM neural network, training with the CTC algorithm to obtain the network's total error factor, and updating the network's parameters with the PSO algorithm according to that total error factor to obtain the standard Chinese text recognition model comprises:
    feeding the standard Chinese text training samples into the BiLSTM network in forward sequence order, training with the CTC algorithm, and obtaining the forward-propagation and backward-propagation outputs of the forward-fed samples in the network; feeding the samples into the network in reverse sequence order, training with the CTC algorithm, and obtaining the forward-propagation and backward-propagation outputs of the reverse-fed samples; the forward-propagation output being expressed as
    α_t(u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α_{t-1}(i)
    where t is the sequence step, u is the output label index corresponding to t, y^t_{l'_u} is the probability that the output of the output sequence at step t is l'_u, and f(u) = u-1 if l'_u = blank or l'_{u-2} = l'_u, otherwise f(u) = u-2; and the backward-propagation output being expressed as
    β_t(u) = Σ_{i=u}^{g(u)} β_{t+1}(i) · y^{t+1}_{l'_i}
    where t is the sequence step, u is the output label index corresponding to t, y^{t+1}_{l'_i} is the probability that the output of the output sequence at step t+1 is l'_i, and g(u) = u+1 if l'_u = blank or l'_{u+2} = l'_u, otherwise g(u) = u+2;
    deriving the BiLSTM network's forward error factor from the forward- and backward-propagation outputs of the forward-fed samples, deriving the network's reverse error factor from the forward- and backward-propagation outputs of the reverse-fed samples, adding the forward and reverse error factors to obtain the network's total error factor, and constructing an error function from the total error factor;
    updating the BiLSTM network's parameters with the PSO algorithm according to the error function, to obtain the standard Chinese text recognition model.
  4. The handwriting model training method according to claim 1, wherein the recognizing the to-be-tested Chinese text samples with the adjusted Chinese handwritten-text recognition model, collecting the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as error-text training samples comprises:
    feeding the to-be-tested Chinese text samples into the adjusted Chinese handwritten-text recognition model, and obtaining, for each text in the samples, its output values in the adjusted model;
    selecting the largest of each text's output values, and deriving each text's recognition result from that maximum output value;
    collecting, from the recognition results, the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as the error-text training samples.
  5. The handwriting model training method according to claim 1, further comprising, before the step of obtaining standard Chinese text training samples:
    initializing the bidirectional long short-term memory neural network.
  6. A text recognition method, comprising:
    obtaining a Chinese text to be recognized, recognizing it with a target Chinese handwritten-text recognition model, and obtaining its output values in the target model, the target Chinese handwritten-text recognition model being obtained with the handwriting model training method of any one of claims 1-5;
    selecting the largest of the output values corresponding to the text to be recognized, and deriving its recognition result from that maximum output value.
  7. A handwriting model training device, comprising:
    a standard Chinese text recognition model acquisition module configured to obtain standard Chinese text training samples, feed them into a bidirectional long short-term memory (BiLSTM) neural network, train with the CTC algorithm to obtain the BiLSTM network's total error factor, and update the network's parameters with the PSO algorithm according to that total error factor, to obtain a standard Chinese text recognition model;
    an adjusted Chinese handwritten-text recognition model acquisition module configured to obtain non-standard Chinese text training samples, feed them into the standard Chinese text recognition model, train with the CTC algorithm to obtain that model's total error factor, and update its network parameters with the PSO algorithm according to that total error factor, to obtain an adjusted Chinese handwritten-text recognition model;
    an error-text training sample acquisition module configured to obtain to-be-tested Chinese text samples, recognize them with the adjusted Chinese handwritten-text recognition model, collect the error texts whose recognition result disagrees with the ground truth, and take all the error texts as error-text training samples;
    a target Chinese handwritten-text recognition model acquisition module configured to feed the error-text training samples into the adjusted Chinese handwritten-text recognition model, train with the CTC algorithm to obtain the adjusted model's total error factor, and update its network parameters with the PSO algorithm according to that total error factor, to obtain a target Chinese handwritten-text recognition model.
  8. A text recognition device, comprising:
    an output value acquisition module configured to obtain a Chinese text to be recognized, recognize it with a target Chinese handwritten-text recognition model, and obtain its output values in the target model, the target Chinese handwritten-text recognition model being obtained with the handwriting model training method of any one of claims 1-5;
    a recognition result acquisition module configured to select the largest of the output values corresponding to the text to be recognized and to derive its recognition result from that maximum output value.
  9. Computer equipment comprising a memory, a processor and computer-readable instructions stored in the memory and runnable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    obtaining standard Chinese text training samples, feeding the standard Chinese text training samples into a bidirectional long short-term memory (BiLSTM) neural network, training with the connectionist temporal classification (CTC) algorithm to obtain the BiLSTM network's total error factor, and updating the network's parameters with the particle swarm optimization (PSO) algorithm according to that total error factor, to obtain a standard Chinese text recognition model;
    obtaining non-standard Chinese text training samples, feeding them into the standard Chinese text recognition model, training with the CTC algorithm to obtain that model's total error factor, and updating its network parameters with the PSO algorithm according to that total error factor, to obtain an adjusted Chinese handwritten-text recognition model;
    obtaining to-be-tested Chinese text samples, recognizing them with the adjusted Chinese handwritten-text recognition model, collecting the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as error-text training samples;
    feeding the error-text training samples into the adjusted Chinese handwritten-text recognition model, training with the CTC algorithm to obtain the adjusted model's total error factor, and updating its network parameters with the PSO algorithm according to that total error factor, to obtain a target Chinese handwritten-text recognition model.
  10. The computer equipment according to claim 9, wherein the obtaining standard Chinese text training samples comprises:
    obtaining the pixel-value feature matrix of each Chinese text in to-be-processed Chinese text training samples and normalizing every pixel value in each text's pixel-value feature matrix, to obtain each text's normalized pixel-value feature matrix, the normalization formula being
    y = (x - MinValue) / (MaxValue - MinValue)
    where MaxValue is the largest pixel value in the pixel-value feature matrix, MinValue is the smallest pixel value in the matrix, x is a pixel value before normalization and y is the pixel value after normalization;
    splitting the pixel values of each text's normalized pixel-value feature matrix into two classes, building each text's binarized pixel-value feature matrix from the two classes, and taking the Chinese texts corresponding to the binarized pixel-value feature matrices, combined, as the standard Chinese text training samples.
  11. The computer equipment according to claim 9, wherein the feeding the standard Chinese text training samples into the BiLSTM neural network, training with the CTC algorithm to obtain the network's total error factor, and updating the network's parameters with the PSO algorithm according to that total error factor to obtain the standard Chinese text recognition model comprises:
    feeding the standard Chinese text training samples into the BiLSTM network in forward sequence order, training with the CTC algorithm, and obtaining the forward-propagation and backward-propagation outputs of the forward-fed samples in the network; feeding the samples into the network in reverse sequence order, training with the CTC algorithm, and obtaining the forward-propagation and backward-propagation outputs of the reverse-fed samples; the forward-propagation output being expressed as
    α_t(u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α_{t-1}(i)
    where t is the sequence step, u is the output label index corresponding to t, y^t_{l'_u} is the probability that the output of the output sequence at step t is l'_u, and f(u) = u-1 if l'_u = blank or l'_{u-2} = l'_u, otherwise f(u) = u-2; and the backward-propagation output being expressed as
    β_t(u) = Σ_{i=u}^{g(u)} β_{t+1}(i) · y^{t+1}_{l'_i}
    where t is the sequence step, u is the output label index corresponding to t, y^{t+1}_{l'_i} is the probability that the output of the output sequence at step t+1 is l'_i, and g(u) = u+1 if l'_u = blank or l'_{u+2} = l'_u, otherwise g(u) = u+2;
    deriving the BiLSTM network's forward error factor from the forward- and backward-propagation outputs of the forward-fed samples, deriving the network's reverse error factor from the forward- and backward-propagation outputs of the reverse-fed samples, adding the forward and reverse error factors to obtain the network's total error factor, and constructing an error function from the total error factor;
    updating the BiLSTM network's parameters with the PSO algorithm according to the error function, to obtain the standard Chinese text recognition model.
  12. The computer equipment according to claim 9, wherein the recognizing the to-be-tested Chinese text samples with the adjusted Chinese handwritten-text recognition model, collecting the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as error-text training samples comprises:
    feeding the to-be-tested Chinese text samples into the adjusted Chinese handwritten-text recognition model, and obtaining, for each text in the samples, its output values in the adjusted model;
    selecting the largest of each text's output values, and deriving each text's recognition result from that maximum output value;
    collecting, from the recognition results, the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as the error-text training samples.
  13. The computer equipment according to claim 9, wherein the processor, when executing the computer-readable instructions, further implements, before the step of obtaining standard Chinese text training samples:
    initializing the bidirectional long short-term memory neural network.
  14. Computer equipment comprising a memory, a processor and computer-readable instructions stored in the memory and runnable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    obtaining a Chinese text to be recognized, recognizing it with a target Chinese handwritten-text recognition model, and obtaining its output values in the target model, the target Chinese handwritten-text recognition model being obtained with the handwriting model training method of any one of claims 1-5;
    selecting the largest of the output values corresponding to the text to be recognized, and deriving its recognition result from that maximum output value.
  15. One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    obtaining standard Chinese text training samples, feeding the standard Chinese text training samples into a bidirectional long short-term memory (BiLSTM) neural network, training with the connectionist temporal classification (CTC) algorithm to obtain the BiLSTM network's total error factor, and updating the network's parameters with the particle swarm optimization (PSO) algorithm according to that total error factor, to obtain a standard Chinese text recognition model;
    obtaining non-standard Chinese text training samples, feeding them into the standard Chinese text recognition model, training with the CTC algorithm to obtain that model's total error factor, and updating its network parameters with the PSO algorithm according to that total error factor, to obtain an adjusted Chinese handwritten-text recognition model;
    obtaining to-be-tested Chinese text samples, recognizing them with the adjusted Chinese handwritten-text recognition model, collecting the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as error-text training samples;
    feeding the error-text training samples into the adjusted Chinese handwritten-text recognition model, training with the CTC algorithm to obtain the adjusted model's total error factor, and updating its network parameters with the PSO algorithm according to that total error factor, to obtain a target Chinese handwritten-text recognition model.
  16. The non-volatile readable storage media according to claim 15, wherein the obtaining standard Chinese text training samples comprises:
    obtaining the pixel-value feature matrix of each Chinese text in to-be-processed Chinese text training samples and normalizing every pixel value in each text's pixel-value feature matrix, to obtain each text's normalized pixel-value feature matrix, the normalization formula being
    y = (x - MinValue) / (MaxValue - MinValue)
    where MaxValue is the largest pixel value in the pixel-value feature matrix, MinValue is the smallest pixel value in the matrix, x is a pixel value before normalization and y is the pixel value after normalization;
    splitting the pixel values of each text's normalized pixel-value feature matrix into two classes, building each text's binarized pixel-value feature matrix from the two classes, and taking the Chinese texts corresponding to the binarized pixel-value feature matrices, combined, as the standard Chinese text training samples.
  17. The non-volatile readable storage media according to claim 15, wherein the feeding the standard Chinese text training samples into the BiLSTM neural network, training with the CTC algorithm to obtain the network's total error factor, and updating the network's parameters with the PSO algorithm according to that total error factor to obtain the standard Chinese text recognition model comprises:
    feeding the standard Chinese text training samples into the BiLSTM network in forward sequence order, training with the CTC algorithm, and obtaining the forward-propagation and backward-propagation outputs of the forward-fed samples in the network; feeding the samples into the network in reverse sequence order, training with the CTC algorithm, and obtaining the forward-propagation and backward-propagation outputs of the reverse-fed samples; the forward-propagation output being expressed as
    α_t(u) = y^t_{l'_u} · Σ_{i=f(u)}^{u} α_{t-1}(i)
    where t is the sequence step, u is the output label index corresponding to t, y^t_{l'_u} is the probability that the output of the output sequence at step t is l'_u, and f(u) = u-1 if l'_u = blank or l'_{u-2} = l'_u, otherwise f(u) = u-2; and the backward-propagation output being expressed as
    β_t(u) = Σ_{i=u}^{g(u)} β_{t+1}(i) · y^{t+1}_{l'_i}
    where t is the sequence step, u is the output label index corresponding to t, y^{t+1}_{l'_i} is the probability that the output of the output sequence at step t+1 is l'_i, and g(u) = u+1 if l'_u = blank or l'_{u+2} = l'_u, otherwise g(u) = u+2;
    deriving the BiLSTM network's forward error factor from the forward- and backward-propagation outputs of the forward-fed samples, deriving the network's reverse error factor from the forward- and backward-propagation outputs of the reverse-fed samples, adding the forward and reverse error factors to obtain the network's total error factor, and constructing an error function from the total error factor;
    updating the BiLSTM network's parameters with the PSO algorithm according to the error function, to obtain the standard Chinese text recognition model.
  18. The non-volatile readable storage media according to claim 15, wherein the recognizing the to-be-tested Chinese text samples with the adjusted Chinese handwritten-text recognition model, collecting the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as error-text training samples comprises:
    feeding the to-be-tested Chinese text samples into the adjusted Chinese handwritten-text recognition model, and obtaining, for each text in the samples, its output values in the adjusted model;
    selecting the largest of each text's output values, and deriving each text's recognition result from that maximum output value;
    collecting, from the recognition results, the error texts whose recognition result disagrees with the ground truth, and taking all the error texts as the error-text training samples.
  19. The non-volatile readable storage media according to claim 15, wherein the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform, before the step of obtaining standard Chinese text training samples:
    initializing the bidirectional long short-term memory neural network.
  20. One or more non-volatile readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    obtaining a Chinese text to be recognized, recognizing it with a target Chinese handwritten-text recognition model, and obtaining its output values in the target model, the target Chinese handwritten-text recognition model being obtained with the handwriting model training method of any one of claims 1-5;
    selecting the largest of the output values corresponding to the text to be recognized, and deriving its recognition result from that maximum output value.
PCT/CN2018/094271 2018-06-04 2018-07-03 Handwriting model training method, text recognition method, apparatus, device and medium WO2019232861A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810564059.1A CN109002461B (zh) 2018-06-04 2018-06-04 Handwriting model training method, text recognition method, apparatus, device and medium
CN201810564059.1 2018-06-04

Publications (1)

Publication Number Publication Date
WO2019232861A1 true WO2019232861A1 (zh) 2019-12-12

Family

ID=64573349

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094271 WO2019232861A1 (zh) 2018-06-04 2018-07-03 Handwriting model training method, text recognition method, apparatus, device and medium

Country Status (2)

Country Link
CN (1) CN109002461B (zh)
WO (1) WO2019232861A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111192692A * 2020-01-02 2020-05-22 上海联影智能医疗科技有限公司 Entity relationship determination method and apparatus, electronic device, and storage medium
CN113642659A * 2021-08-19 2021-11-12 上海商汤科技开发有限公司 Training sample set generation method and apparatus, electronic device, and storage medium

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN111477212B * 2019-01-04 2023-10-24 阿里巴巴集团控股有限公司 Content recognition, model training and data processing method, system and device
CN110084189A * 2019-04-25 2019-08-02 楚雄医药高等专科学校 Answer-sheet processing system and method based on a wireless network
CN110210480B * 2019-06-05 2021-08-10 北京旷视科技有限公司 Character recognition method and apparatus, electronic device and computer-readable storage medium
CN112232195B * 2020-10-15 2024-02-20 北京临近空间飞行器系统工程研究所 Handwritten Chinese character recognition method, apparatus and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1147652A * 1995-06-30 1997-04-16 财团法人工业技术研究院 Method for building the database of a character recognition system
CN101256624A * 2007-02-28 2008-09-03 微软公司 Method and system for building an HMM topology suited to recognizing handwritten East Asian characters
CN101290659A * 2008-05-29 2008-10-22 宁波新然电子信息科技发展有限公司 Handwriting recognition method based on combined classifiers
CN101930545A * 2009-06-24 2010-12-29 夏普株式会社 Handwriting recognition method and device
CN102722713A * 2012-02-22 2012-10-10 苏州大学 Handwritten digit recognition method and system based on Lie group structured data
US20150269431A1 (en) * 2012-11-19 2015-09-24 Imds America Inc. Method and system for the spotting of arbitrary words in handwritten documents

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN103942574B 2014-02-25 2017-01-11 浙江大学 Method for selecting kernel parameters of an SVM classifier for 3D handwriting recognition, and use thereof
CN104850837B 2015-05-18 2017-12-05 西南交通大学 Handwritten character recognition method


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN111192692A * 2020-01-02 2020-05-22 上海联影智能医疗科技有限公司 Entity relationship determination method and apparatus, electronic device, and storage medium
CN111192692B * 2020-01-02 2023-12-08 上海联影智能医疗科技有限公司 Entity relationship determination method and apparatus, electronic device, and storage medium
CN113642659A * 2021-08-19 2021-11-12 上海商汤科技开发有限公司 Training sample set generation method and apparatus, electronic device, and storage medium
CN113642659B * 2021-08-19 2023-06-20 上海商汤科技开发有限公司 Training sample set generation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN109002461B (zh) 2023-04-18
CN109002461A (zh) 2018-12-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921990

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11/03/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18921990

Country of ref document: EP

Kind code of ref document: A1