CN113408373A - Handwriting recognition method, system, client and server - Google Patents

Handwriting recognition method, system, client and server Download PDF

Info

Publication number
CN113408373A
Authority
CN
China
Prior art keywords
client
server
character
recognized
handwriting recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110615845.1A
Other languages
Chinese (zh)
Other versions
CN113408373B (en)
Inventor
李闯
肖骞宇
陈欣
段金越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Financial Certification Authority Co ltd
Original Assignee
China Financial Certification Authority Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Financial Certification Authority Co ltd filed Critical China Financial Certification Authority Co ltd
Priority to CN202110615845.1A priority Critical patent/CN113408373B/en
Publication of CN113408373A publication Critical patent/CN113408373A/en
Application granted granted Critical
Publication of CN113408373B publication Critical patent/CN113408373B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Character Discrimination (AREA)

Abstract

The invention relates to a handwriting recognition method, system, client and server. The method comprises: the client sends a handwriting recognition request to the server; the server receives the handwriting recognition request sent by the client; the server sends a feature extraction model and the feature values of the target characters to the client; the client receives the feature extraction model and the feature values of the target characters; the client receives the handwritten characters to be recognized, loads the feature extraction model to obtain the feature values of the handwritten characters to be recognized, and matches these feature values against the feature values of the target characters to recognize the handwritten characters. The technical scheme of the invention avoids the data-leakage problem that arises when client handwriting data is transmitted to the server.

Description

Handwriting recognition method, system, client and server
Technical Field
The present invention relates generally to the field of artificial intelligence application technology. More particularly, the invention relates to a handwriting recognition method, a handwriting recognition system, a client and a server.
Background
Deep neural networks, a machine learning technique, have seen continuous development and application in recent years. Deep learning models can be designed to accomplish a wide variety of tasks, including character recognition, speech recognition, natural language processing, and computer vision. Handwriting recognition based on deep neural networks achieves excellent recognition rates and accuracy. However, as the number of network layers and the number of recognizable characters grow, the parameter count of the neural network model and the corresponding storage it requires increase rapidly. Because of this large storage consumption, practitioners typically deploy the handwriting recognition service at the server side: the client sends handwriting data to the server, the server recognizes it with a trained neural network model, and the recognition result is returned to the client. This client-server mode has a drawback: once the client's handwriting data is uploaded, it may leak through the server, so this deployment mode carries a risk of leaking client handwriting data.
Disclosure of Invention
To address at least the above problems, the present invention provides a handwriting recognition method, a system, a client and a server, in which the server hands the feature extraction and feature matching functions that touch the client's handwriting data over to the client, so that the server never comes into contact with any of the user's handwriting data, thereby avoiding the risk of data leakage caused by transmitting client handwriting data to the server.
In a first aspect, the present invention provides a handwriting recognition method for a client, comprising: sending a handwriting recognition request to a server; receiving a feature extraction model and a feature value of a target character returned by a server side; receiving the handwritten characters to be recognized, loading the feature extraction model to obtain the feature values of the handwritten characters to be recognized, and matching the feature values of the handwritten characters to be recognized with the feature values of the target characters to recognize the handwritten characters to be recognized.
In one embodiment, the matching comprises: calculating the similarity between the feature value of the handwritten character to be recognized and the feature value of the target character.
In one embodiment, the similarity includes one or more of Euclidean distance similarity, cosine similarity, adjusted cosine similarity, and Pearson correlation coefficient.
In one embodiment, the method further comprises: calculating the similarity between the feature value of the handwritten character to be recognized and the feature values of two or more target characters, selecting the maximum similarity, and comparing the maximum similarity with a threshold to recognize the handwritten character to be recognized.
In one embodiment, receiving the handwritten characters to be recognized and loading the feature extraction model comprises: loading the feature extraction model once each time the user completes a stroke.
In a second aspect, the present invention further provides a handwriting recognition method for a server, including: receiving a handwriting recognition request sent by a client; and sending the feature extraction model and the feature value of the target character to a client according to the handwriting recognition request.
In one embodiment, the feature value extraction model selects positive samples and negative samples in a model training phase, comprising: selecting instances of the same character as positive samples and different characters as negative samples; and/or using the content obtained by removing one or two strokes from a character as a negative sample, and the content obtained by adding one or two strokes to a character as a negative sample.
In a third aspect, the present invention further provides a client, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the handwriting recognition method for a client in the foregoing first aspect and embodiments when executing the computer program.
In a fourth aspect, the present invention further provides a server, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the handwriting recognition method for the server side in the foregoing second aspect and embodiments when executing the computer program.
In a fifth aspect, the present invention further provides a handwriting recognition system, which includes the foregoing client and the foregoing server, where the server is in communication connection with the client.
The invention moves the feature extraction and feature matching functions that touch the client's handwriting data from the server to the client, so the server never comes into contact with any of the user's handwriting data. On the one hand, the server's path to acquiring user data is cut off, fundamentally avoiding the risk of data leakage caused by transmitting client handwriting data to the server and effectively improving the security of the user's private data. On the other hand, the client recognizes by loading the feature extraction model and matching feature values, so recognition accuracy is still guaranteed; moreover, since the model and the target feature values come from the server, this processing does not significantly increase the client's burden.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a flow diagram illustrating a handwriting recognition method for a client in accordance with an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating a handwriting recognition method for a server-side according to an embodiment of the invention;
FIG. 3 is a schematic flow chart illustrating the operation of a handwriting recognition system according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating the operation of handwriting recognition at a client according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a structure of a client for handwriting recognition according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram illustrating a server side for handwriting recognition according to an embodiment of the present invention.
Detailed Description
Embodiments will now be described with reference to the accompanying drawings. It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, this application sets forth numerous specific details in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the embodiments described herein. Moreover, this description is not to be taken as limiting the scope of the embodiments described herein.
Deep neural networks, a machine learning technique, have been continuously developed and applied in recent years owing to their high robustness and fault tolerance, distributed storage and learning capability, and ability to approximate complex nonlinear relationships. Deep learning models can be designed to perform a wide variety of tasks, including text recognition, speech recognition, natural language processing (including handwriting recognition), and computer vision, with excellent recognition rates and accuracy. However, as the number of network layers and the number of recognizable characters grow, the parameter count of the neural network model and the corresponding storage it requires increase rapidly. Application scenarios that require storing and loading such large amounts of parameter data are not suitable for deploying a handwriting recognition service inside a local web browser.
Take a single-character picture recognition model as an example. If the model supports recognizing 20,000 Chinese characters, the last neural network layer needs 20,000 output nodes to fully represent the probability of the input picture over each character class. If the number of input nodes of this last layer is N, then N × 20000 × sizeof(float) bytes of storage are needed. For Chinese character recognition, N typically lies in the interval 128-1024, i.e. 9.7-78.1 MB of storage will be used for the last layer alone. The foregoing covers only single-character pictures; when multi-character pictures or more complicated forms must be recognized, the required storage is far larger. To cope with this large storage consumption, the handwriting recognition service is usually deployed at the server side: the client sends the user's handwriting data to the server, and the server processes it with a neural-network-based handwriting recognition model, obtains the recognition result, and returns it to the client. The problem with this client-server mode is that the client uploads the user's handwriting data to the server, so the server obtains information containing the user's privacy (e.g. the user's signature handwriting). This creates a risk that the server leaks user data and compromises the security of the user's private information.
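For illustration, the storage figure above can be reproduced with a few lines of Python; the function below is a back-of-the-envelope sketch (weight matrix only, biases ignored) and is not part of the patent.

```python
def last_layer_megabytes(n_inputs: int, n_classes: int = 20_000,
                         bytes_per_float: int = 4) -> float:
    # Weight matrix of the final layer only, as in the rough figure above.
    return n_inputs * n_classes * bytes_per_float / (1024 ** 2)

for n in (128, 1024):
    print(f"N={n}: {last_layer_megabytes(n):.1f} MiB")
# N=128 -> ~9.8 MiB, N=1024 -> ~78.1 MiB, consistent with the 9.7-78.1 MB range cited above.
```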
Based on this, embodiments of the present invention provide a handwriting recognition method for a client, a handwriting recognition method for a server, a client, a server, and a corresponding handwriting recognition system. Part of the functionality originally deployed in the server's neural-network-based handwriting recognition model (which may comprise feature extraction and feature value comparison) is handed over to the client for processing, so the server no longer receives the user's handwriting data, the risk of leaking that data is avoided, and the security of user data is effectively guaranteed.
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a handwriting recognition method for a client according to an embodiment of the present invention. As shown in fig. 1, first, at step S101, the client sends a handwriting recognition request to the server. Then, at step S102, the client receives the feature extraction model and the feature values of the target characters returned by the server. Next, at step S103, the handwritten characters to be recognized are received. At step S104, the feature extraction model is loaded to obtain the feature values of the handwritten characters to be recognized. Finally, at step S105, the feature values of the handwritten characters to be recognized are matched against the feature values of the target characters to recognize the handwritten characters. In an implementation scenario, the feature value matching of step S105 may be performed by calculating the similarity between the feature value of the handwritten character to be recognized and the feature value of the target character, or by calculating the feature distance between the two feature values. The feature extraction model received by the client retains only the feature-extraction portion obtained when the neural-network-based handwriting recognition model is pruned.
In an embodiment, matching the feature value of the handwritten character to be recognized with the feature value of the target character may be done by calculating a similarity. In one implementation scenario, the similarity may be one or more of cosine similarity, adjusted cosine similarity, Pearson correlation coefficient, Jaccard similarity coefficient, Euclidean distance, and the like. After the similarity of the two feature values is calculated, if it is greater than a similarity threshold the match is considered successful and recognition of the handwritten character to be recognized is complete. In an implementation scenario, taking cosine similarity as an example, a similarity function of the form

SIM(x1, x2) = (x1 · x2) / max(‖x1‖ · ‖x2‖, ε)

may be adopted, where ε = 1e-8 guards against division by zero and x1 and x2 are the two feature values, which may be vectors. It is to be understood that the foregoing choice of similarity function is merely exemplary and not limiting, and may be adjusted according to actual needs.
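As an illustration only, a minimal Python sketch of such a cosine similarity, assuming the ε term merely guards the denominator (the exact formula is rendered as an image in the original and is not reproduced here):

```python
import numpy as np

def sim(x1: np.ndarray, x2: np.ndarray, eps: float = 1e-8) -> float:
    """Cosine similarity between two feature vectors; eps guards the denominator."""
    denom = max(float(np.linalg.norm(x1) * np.linalg.norm(x2)), eps)
    return float(np.dot(x1, x2) / denom)

# Example: a value close to 1 means the two feature values match well.
print(sim(np.array([0.2, 0.9, 0.1]), np.array([0.21, 0.88, 0.12])))
```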
The above describes matching by calculating the similarity of feature values; the handling of multiple similarity values is explained next. In an application scenario, if the client receives multiple feature values for the target characters, the similarity between the feature value of the handwritten character to be recognized and each of the target feature values is calculated, the maximum similarity is selected and compared with a threshold, and when the maximum similarity exceeds the threshold, the corresponding handwritten character is recognized. The character "still" is taken as an example. The client receives feature values of the character "still" in one or more typefaces such as Song, regular script and clerical script, i.e. one feature value per typeface, e.g. F_Song, F_Regular and F_Clerical. The client extracts the feature value of the character written by the user, calculates its similarity with each of these typeface feature values, sorts the similarities, selects the largest one, and compares it with the threshold; if it exceeds the threshold, recognition of the character is judged to have passed.
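A sketch of this maximum-similarity selection over several typeface feature values; the threshold value 0.8 and the typeface keys are illustrative assumptions, not values from the patent:

```python
import numpy as np

def cosine(x1, x2, eps=1e-8):
    return float(np.dot(x1, x2) / max(float(np.linalg.norm(x1) * np.linalg.norm(x2)), eps))

def match_against_fonts(f_current, target_features, threshold=0.8):
    """Take the maximum similarity over all target-typeface feature values and
    compare it with the threshold, as in the "still"-character example above."""
    best_font = max(target_features, key=lambda k: cosine(f_current, target_features[k]))
    best_sim = cosine(f_current, target_features[best_font])
    return best_sim > threshold, best_font, best_sim

# Hypothetical usage with three typeface feature values for one target character.
targets = {"song": np.array([1.0, 0.0]),
           "regular": np.array([0.9, 0.1]),
           "clerical": np.array([0.8, 0.3])}
print(match_against_fonts(np.array([0.95, 0.05]), targets))
```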
In one embodiment, there are several ways for the client to load the feature extraction model for feature extraction. Taking single-character picture recognition as an example, each time the user performs a writing operation the client loads the feature extraction model once, obtains the feature value of the handwritten character to be recognized at that moment, and checks the similarity between that feature value and the feature value of the target character; if the similarity is above the threshold, recognition is judged to have passed, i.e. one character has been recognized. In other words, while the user is writing one character, the feature extraction model is loaded multiple times: it is loaded once for each stroke that is input (a stroke here is not a standard stroke but, following the user's writing habit, whatever is written from pen-down to pen-up). To reduce how often the feature extraction model is invoked, in one implementation scenario the client may instead determine that the user has finished a character through manual confirmation or delayed confirmation; at that point the feature extraction model is loaded, the feature value of the handwritten character to be recognized at that moment is obtained, its similarity with the feature value of the target character is checked, and if the similarity is above the threshold, recognition of the character is judged to have passed.
The handwriting recognition method for the client terminal of the present invention is exemplarily described above with reference to fig. 1. The following describes an exemplary overall function of the server side in handwriting recognition according to the present invention with reference to fig. 2.
Fig. 2 is a flowchart illustrating a handwriting recognition method for the server side according to an embodiment of the present invention. As shown in fig. 2, first, at step S201, the server receives a handwriting recognition request sent by the client. The handwriting recognition request may be a command and may also contain information on the target characters to be recognized. If it contains target character information, the server parses the request and retrieves the feature values of the target characters from its database. Next, at step S202, the server sends the feature extraction model and the feature values of the target characters to the client according to the recognition request. In one scenario, the target character feature values sent to the client may cover the target characters named in the recognition request, or may include other pre-stored character feature values. The character "still" is again taken as an example: the server may send the feature value of "still" in the Song typeface, or the feature values of "still" in multiple typefaces such as Song, regular script and clerical script.
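An illustrative sketch of this server-side handling (steps S201 and S202); the request and response field names and the in-memory feature store are assumptions, since the patent does not specify transport or storage formats:

```python
from typing import Any

FEATURE_DB: dict[str, list[float]] = {}   # character -> pre-computed standard feature value
MODEL_BYTES: bytes = b""                  # serialized M_web model, loaded at startup

def handle_recognition_request(request: dict[str, Any]) -> dict[str, Any]:
    """S201/S202: parse the request, look up the target characters' standard
    feature values, and return them with the feature extraction model."""
    targets = request.get("target_chars")            # e.g. the string to be signed
    if targets:                                      # request names specific characters
        features = {c: FEATURE_DB[c] for c in targets if c in FEATURE_DB}
    else:                                            # bare "start recognition" command
        features = dict(FEATURE_DB)                  # send all stored feature values
    return {"model": MODEL_BYTES, "target_features": features}
```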
In order for the feature extraction model to recognize characters of different handwriting styles more accurately and to distinguish similar characters, in one embodiment the positive and negative samples selected for the feature value extraction model in the model training stage may include: selecting, among instances of the same character, those with lower mutual similarity as positive samples, and selecting, among different characters, those with higher similarity as negative samples. Taking training samples for the character "日" (ri) as an example, different handwritten instances of "日" can be selected as positive samples, while similar-looking characters such as "目" (aim), "曰" (say), "旧" (old) and "田" (field) are selected as negative samples; the selected positive and negative samples are then used to train the feature extraction model, yielding a more accurate model.
Further, in order to more accurately distinguish a character that is not yet finished, a character that has been finished, and a character whose next stroke has already begun, two further groups of negative samples are added during model training. Specifically, the content obtained by removing one or two strokes from a character is used as a negative sample, and the content obtained by adding one or two strokes to a character is likewise paired with that character as a negative sample. These two groups of negative samples increase the feature distance between a finished character and such partial or over-written content, which effectively improves the accuracy of the feature extraction model. Again taking training samples for the character "日" as an example: "口" (kou), an "E"-like shape and "二" (two), obtained by removing one or two strokes from "日", are selected as negative samples, and "田" (field), "旧" (old), "旦" (dan) and "亘", obtained by adding one or two strokes to "日", are selected as negative samples for training the feature extraction model, so that it can accurately extract the feature value of the character.
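A sketch of this stroke-based negative-sample construction, assuming each character is available as an ordered list of strokes; the stroke representation and the helper name are illustrative, not from the patent:

```python
import random

def stroke_negatives(strokes: list, other_strokes_pool: list) -> list[list]:
    """For one character given as an ordered list of strokes, build the two extra
    groups of negatives: the character with one or two strokes removed, and the
    character with one or two strokes added (added strokes drawn at random from
    strokes of other characters in the data set, as described above)."""
    negatives = []
    for k in (1, 2):
        if len(strokes) > k:
            negatives.append(strokes[:-k])                                    # strokes removed
        if len(other_strokes_pool) >= k:
            negatives.append(strokes + random.sample(other_strokes_pool, k))  # strokes added
    return negatives
```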
The method flows for completing handwriting recognition at the client and at the server have been described above with reference to figs. 1-2. As the detailed description of the functions of the client and server shows, when the client uses this handwriting recognition method it only needs to load a feature extraction model with a very small amount of computation and parameters, together with the feature vectors corresponding to the target characters, and to cooperate with the client's handwriting acquisition component to recognize the user's handwritten characters. The server no longer obtains the user's handwriting data, and the whole recognition process can be completed at the client. To better explain the principle and process of handwriting recognition at the client, the invention also provides a method of performing handwriting recognition with the handwriting recognition system, which is exemplarily described below with reference to fig. 3.
Fig. 3 is a flowchart illustrating an operation principle of a handwriting recognition method according to an embodiment of the present invention.
As shown in fig. 3, in one embodiment, first, at step S301, the client sends a handwriting recognition request to the server. In one implementation scenario, the handwriting recognition request may include information on the target characters to be recognized. The target characters may comprise one or more characters, or the request may simply be a control command that starts recognition. Taking the recognition of a user's signature as an example, the client sends the server a handwriting recognition request that may include the character string S to be signed and recognized, where S may comprise S1, S2, S3, ..., Si, ..., Sn.
Next, after receiving the handwriting recognition request sent by the client, at step S401 the server sends the feature extraction model and the feature values of the target characters to the client according to the request. In an implementation scenario, if the recognition request contains the target characters to be recognized, the server may send the feature values of those target characters together with the feature extraction model to the client; if the recognition request is only a control command to start recognition, the server may send the feature values of all characters in its store to the client. The feature values of a target character may cover several cases: taking the target character "safe" as an example, its feature values may include those extracted from one or more typefaces such as Song, regular script and clerical script. In one application scenario, the client needs to obtain from the server the computation model (i.e. the feature extraction model, the Mweb model) and the feature values corresponding to the Chinese characters to be signed. After the server receives the Chinese character string S of length n to be signed, it sends the feature codes F1, F2, F3, ..., Fi, ..., Fn of the standard Chinese characters (e.g. in the Song typeface) corresponding to each character S1, S2, S3, ..., Si, ..., Sn, together with the Mweb model, to the client. Taking the recognition of a user's signature as an example, the i-th Chinese character Si to be recognized is set as the current target character, and the client receives the standard Chinese character feature value Fi returned by the server for the subsequent feature matching.
Then the client receives the feature extraction model and the feature values of the target characters, and at step S302 the client receives the handwritten characters to be recognized. In one implementation scenario, the client may obtain the handwritten characters to be recognized through a connected handwriting input device such as a handwriting pad, a drawing tablet or a personal digital assistant (PDA). The handwritten content acquired by the client through the handwriting input device may be the content produced by one writing operation of the user, where a writing operation may be: 1) writing one stroke on the client's canvas (whatever is written from pen-down to pen-up is regarded as one stroke); or 2) clicking the client's clear button to clear the strokes on the current canvas.
Then, at step S303, the client loads the feature extraction model to obtain the feature value of the handwritten character to be recognized. Specifically, after the characters written by the user are received, the trajectory data written on the current canvas can be converted into a picture Icurrent; the client inputs this picture of the handwritten character to be recognized into the feature extraction model Mweb and loads the model to output the corresponding feature value Fcurrent, i.e. Fcurrent = Mweb(Icurrent). Further, as the client receives the user's writing, each stroke written by the user triggers character recognition once, i.e. loads the Mweb model once. The feature extraction model here is the model trained at the server side (described in detail below).
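An illustrative sketch of step S303, assuming the canvas track arrives as point lists normalized to [0, 1] and that the deserialized Mweb model is a plain callable; on a real web client this would run in a browser-side inference runtime, but the sketch below uses plain Python as a stand-in:

```python
import numpy as np

def extract_current_feature(m_web, strokes: list[list[tuple[float, float]]],
                            size: int = 64) -> np.ndarray:
    """S303: rasterize the canvas track drawn so far into I_current and run the
    feature extraction model, F_current = M_web(I_current)."""
    image = np.zeros((size, size), dtype=np.float32)
    for stroke in strokes:
        for x, y in stroke:                               # points assumed normalized to [0, 1]
            image[int(y * (size - 1)), int(x * (size - 1))] = 1.0
    return m_web(image[None, ..., None])                  # add batch and channel dimensions
```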
Next, at step S304, the client calculates the similarity between the feature value of the handwritten character to be recognized and the feature value of the target character. Specifically, the client may compute the similarity s between the obtained feature value Fcurrent and the standard feature value Fi of the Chinese character Si currently being written; the similarity may be the cosine similarity, i.e. s = SIM(Fi, Fcurrent).
Finally, at step S305, the client matches the feature value of the handwritten character to be recognized against the feature value of the target character to recognize it. Specifically, the client may judge whether the similarity obtained above is greater than a similarity threshold and, if so, judge that recognition of the character has passed. For example, with T as the similarity threshold, if the similarity s exceeds T the writing is considered to have passed, the current writing track is recorded as the Chinese character Si, and processing continues with the check of step S306; if s does not exceed T, the content currently written by the user does not match the target character Si, the possible reasons being: 1) the user has not finished writing the current character; or 2) the user's writing is too sloppy and recognition fails. Further, the client may also judge whether a calculated feature distance is smaller than a threshold and, if so, judge that recognition has passed. In addition, the target characters to be recognized may comprise several characters; after one handwriting recognition is completed, at step S306 the client checks whether any characters remain unrecognized and, if so, repeats steps S302 to S305 for the next received handwritten character to be recognized.
The handwriting recognition methods for the client and the server according to the present invention are explained in detail above with reference to figs. 1-3; the principle of the method is further described below taking Chinese character signature recognition as an example. Before describing that principle, the feature extraction model trained at the server side is described first. A feature extraction model Mweb (hereinafter the Mweb model) with few parameters and little computation is provided to the client; it can be designed using CNNs as the basic unit for extracting character feature values. The final model architecture can be based on MobileNetV2 and ResNet, giving good feature extraction and generalization capability while keeping the parameter count small.
When training the feature extraction model, the output of the Mweb model is connected to a traditional linear character classification model to construct an auxiliary model, and this auxiliary model is used to generate the feature values of standard typefaces. The trained auxiliary model extracts features of the standard typefaces (e.g. Song) to serve as the characters' standard features (the feature values of the target characters), which are stored at the server side. The Mweb model is then trained against these character standard features, which accelerates its convergence. During training of the Mweb model, instances of the same character with lower mutual similarity are selected as positive samples so that the feature distance between them becomes smaller, and different characters with higher similarity are selected as negative samples so that the feature distance between them becomes larger; with this positive/negative sample selection strategy, the model can recognize characters of different handwriting styles and distinguish similar characters more accurately.
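One possible realization of this auxiliary model, sketched with PyTorch modules as an assumption; the patent describes MobileNetV2/ResNet-inspired CNN blocks but does not fix the architecture, feature dimension or class count:

```python
import torch
import torch.nn as nn

class AuxiliaryModel(nn.Module):
    """M_web feature extractor followed by a traditional linear character classifier.
    After training, the classifier head is discarded; m_web applied to standard-typeface
    images yields the characters' standard feature values stored at the server."""
    def __init__(self, m_web: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.m_web = m_web
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, images: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        features = self.m_web(images)          # feature values F
        logits = self.classifier(features)     # per-character class scores
        return features, logits
```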
To further increase the accuracy of the model, beyond distinguishing different handwriting styles it can also be made to distinguish whether a character has been completely written. In this embodiment this is done by adding the following two groups of negative samples during training. One group consists of the content of a character with one or two strokes removed, so that the feature distance to the complete character becomes larger; the other group consists of the character with one or two strokes added, again enlarging the feature distance. The added strokes come from the data set and are formed by randomly drawing strokes from other characters. The feature extraction model trained with these added negative samples can more accurately distinguish characters that are not yet finished, characters that are finished, and characters whose next character's strokes have already begun.
Further, the present invention can use the positive and negative samples specified above and apply a pairwise loss function to train Mweb, where: λ is 0 for positive samples and 1 for negative samples; F is the character standard feature of the target character; F̂ = Mweb(I) is the feature value output by the feature extraction model; and I is the picture data fed into the model.
Further, in order to reduce the volume of the aforementioned feature extraction model, when the Mweb model is stored its floating-point weight data is approximately represented with two bytes each using a quantization technique so as to compress the model, and the reduced model is integrated at the client.
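A minimal sketch of the two-byte representation, assuming IEEE half precision (float16) is the two-byte format meant; the patent only states that floating-point weights are approximately represented with two bytes:

```python
import numpy as np

def quantize_weights(weights: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Store each float32 weight tensor as float16, roughly halving the model volume."""
    return {name: w.astype(np.float16) for name, w in weights.items()}

def dequantize_weights(weights: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Cast back to float32 on the client before inference."""
    return {name: w.astype(np.float32) for name, w in weights.items()}
```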
The above describes the feature extraction model trained at the server side. The following explains, with reference to fig. 4 and taking recognition at the client as an example, the principle of the method in which the client and server cooperate to implement handwriting recognition. The method mainly comprises a recognition preparation stage and a recognition stage.
In the recognition preparation stage, the client needs to obtain from the server the computation model (i.e. the feature extraction model, the Mweb model) and the feature values corresponding to the Chinese characters to be signed. In the recognition stage, the client receives the user's handwritten characters, and each stroke the user writes triggers the character recognition flow once. As shown in fig. 4, take n handwritten characters to be recognized, for example n = 3. With the count i of handwritten characters to be recognized initialized to 0, the client starts handwriting recognition as follows. Step S501: set the i-th character to be recognized as the current target character; the client receives the feature value corresponding to this target character returned by the server for subsequent calculation. Step S502: the client receives the characters written by the user, i.e. the handwritten characters to be recognized. Step S503: compute the feature value of the i-th handwritten character to be recognized with the aforementioned Mweb model. Step S504: compute the similarity between this feature value and the feature value corresponding to the target character. Step S505: compare the computed similarity with the similarity threshold to judge whether recognition passes; if it passes, jump to step S506, otherwise jump back to step S502 and wait for further input from the user. Step S506: judge whether all characters have been recognized, i.e. whether i is the index of the last character; if so, the flow ends, otherwise go to step S507: increase i by 1. Each time recognition of one character is completed the count i increases by 1, until all three characters are judged to have been recognized. It is to be understood that the foregoing loop is merely exemplary and not limiting, and may be adjusted according to actual needs.
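An illustrative sketch of this per-character loop, reusing the cosine() and extract_current_feature() helpers sketched earlier; the function names, the blocking wait_for_strokes() callback and the threshold are assumptions for illustration:

```python
def recognize_signature(m_web, target_features, wait_for_strokes, threshold=0.8):
    """Loop of fig. 4: target_features[i] is the standard feature F_i of character S_i;
    wait_for_strokes() blocks until the user finishes a stroke and returns all strokes
    currently on the canvas."""
    i = 0
    while i < len(target_features):              # S501: current target character S_i
        strokes = wait_for_strokes()             # S502: receive the user's handwriting
        f_current = extract_current_feature(m_web, strokes)   # S503: feature value
        s = cosine(f_current, target_features[i])             # S504: similarity
        if s > threshold:                        # S505: compare with the threshold
            i += 1                               # S506/S507: move on to the next character
        # otherwise wait for more strokes of the same character (recognition not passed)
```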
The above is a detailed description of the handwriting recognition method of the present invention. With this method, the client only needs to load the feature extraction model, which has a small amount of computation and few parameters, together with the feature vectors corresponding to the target characters, and to cooperate with the client's handwriting acquisition module to recognize the handwritten characters. Every function in the handwriting recognition process that involves the client's handwriting data is completed at the client, so the server cannot obtain the client's handwriting data; leakage of that data is effectively prevented and the security of the user's private information is improved.
As another aspect of the present invention, an embodiment of the present invention further provides a client for handwriting recognition as shown in fig. 5, which includes a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface complete mutual communication through the communication bus, and the processor executes the steps of the method implemented by the foregoing client. As for the method implemented by the client, since the detailed description has been already made in the foregoing, it is not repeated herein.
As another aspect of the present invention, an embodiment of the present invention further provides a server for handwriting recognition, as shown in fig. 6, including a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface complete mutual communication through the communication bus, and the processor executes the steps of the method implemented by the server. Since the method implemented by the server has been described in detail in the foregoing, it is not repeated herein.
As a further aspect of the present invention, an embodiment of the present invention further provides a handwriting recognition system, including a client as shown in fig. 5 and a server as shown in fig. 6.
In addition, the aforementioned memories of the server and the client in the present invention may include a readable storage medium in which an application program for executing the above method is stored. A readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, the computer-readable storage medium may be any suitable magnetic or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM), hybrid memory cube (HMC) memory, and the like, or any other medium that can be used to store the desired information and that can be accessed by an application, a module, or both. Any such computer storage media may be part of, or accessible or connectable to, a device. Any applications or modules described herein may be implemented using computer-readable/executable instructions that may be stored or otherwise maintained by such computer-readable media.
It should be understood that the terms "first" or "second," etc. in the claims, description, and drawings of the present disclosure are used for distinguishing between different objects and not for describing a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the invention disclosed. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in this disclosure and in the claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
It should also be appreciated that any module, unit, component, server side, computer, terminal, or device executing instructions exemplified herein may include or otherwise have access to a computer-readable medium, such as a storage medium, computer storage medium, or data storage device (removable and/or non-removable) such as a magnetic disk, optical disk, or magnetic tape. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules or other data.
Although the embodiments of the present invention are described above, the descriptions are only examples for facilitating understanding of the present invention, and are not intended to limit the scope and application scenarios of the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A handwriting recognition method for a client, comprising:
sending a handwriting recognition request to a server;
receiving a feature extraction model and a feature value of a target character returned by a server side;
receiving the handwritten characters to be recognized, loading the feature extraction model to obtain the feature values of the handwritten characters to be recognized, and matching the feature values of the handwritten characters to be recognized with the feature values of the target characters to recognize the handwritten characters to be recognized.
2. The method of claim 1, wherein the matching comprises:
and calculating the similarity between the characteristic value of the handwritten character to be recognized and the characteristic value of the target character.
3. The method of claim 1, wherein the similarity comprises one or more of Euclidean distance similarity, cosine similarity, adjusted cosine similarity, and Pearson correlation coefficient.
4. The method of claim 2 or 3, further comprising:
and calculating the similarity between the characteristic value of the handwritten character to be recognized and the characteristic values of more than two target characters, selecting the maximum similarity, and comparing the maximum similarity with a threshold value to recognize the handwritten character to be recognized.
5. The method of claim 1, wherein the receiving the handwritten word to be recognized and loading the feature extraction model comprises:
the feature extraction model is loaded once each stroke is completed by the user.
6. A handwriting recognition method for a server side is characterized by comprising the following steps:
receiving a handwriting recognition request sent by a client;
and sending the feature extraction model and the feature value of the target character to a client according to the handwriting recognition request.
7. The method of claim 6, wherein the feature value extraction model selects positive and negative samples in a model training phase, comprising:
selecting the same characters as positive samples and selecting different characters as negative samples; and/or
using the content obtained by removing one or two strokes from a character as a negative sample, and using the content obtained by adding one or two strokes to a character as a negative sample.
8. A client, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the handwriting recognition method for a client according to any of claims 1 to 5 when executing the computer program.
9. A server side, comprising: memory, processor and computer program stored on the memory and executable on the processor, the processor implementing the handwriting recognition method for server-side according to claim 6 or 7 when executing the computer program.
10. A handwriting recognition system comprising a client according to claim 8 and a server according to claim 9, said server and client being communicatively connected.
CN202110615845.1A 2021-06-02 2021-06-02 Handwriting recognition method, handwriting recognition system, client and server Active CN113408373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110615845.1A CN113408373B (en) 2021-06-02 2021-06-02 Handwriting recognition method, handwriting recognition system, client and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110615845.1A CN113408373B (en) 2021-06-02 2021-06-02 Handwriting recognition method, handwriting recognition system, client and server

Publications (2)

Publication Number Publication Date
CN113408373A true CN113408373A (en) 2021-09-17
CN113408373B CN113408373B (en) 2024-06-07

Family

ID=77675992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110615845.1A Active CN113408373B (en) 2021-06-02 2021-06-02 Handwriting recognition method, handwriting recognition system, client and server

Country Status (1)

Country Link
CN (1) CN113408373B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1255192A2 (en) * 2001-05-04 2002-11-06 Microsoft Corporation Web enabled recognition architecture
CN102122360A (en) * 2011-03-01 2011-07-13 华南理工大学 Cloud computing-based mobile terminal handwriting identification method
CN103473491A (en) * 2013-09-01 2013-12-25 西安电子科技大学 Writing process based mobile terminal user identification system and method
CN108229428A (en) * 2018-01-30 2018-06-29 上海思愚智能科技有限公司 A kind of character recognition method, device, server and medium
JP2018147312A (en) * 2017-03-07 2018-09-20 公立大学法人会津大学 User authentication system in handwritten characters
CN108959664A (en) * 2018-09-26 2018-12-07 江苏曲速教育科技有限公司 Distributed file system based on picture processor
CN109086654A (en) * 2018-06-04 2018-12-25 平安科技(深圳)有限公司 Handwriting model training method, text recognition method, device, equipment and medium
CN109299663A (en) * 2018-08-27 2019-02-01 刘梅英 Hand-written script recognition methods, system and terminal device
CN110033052A (en) * 2019-04-19 2019-07-19 济南浪潮高新科技投资发展有限公司 A kind of the self-training method and self-training platform of AI identification hand-written script
CN111832547A (en) * 2020-06-24 2020-10-27 平安普惠企业管理有限公司 Dynamic deployment method and device of character recognition model and computer equipment
CN112036323A (en) * 2020-09-01 2020-12-04 中国银行股份有限公司 Signature handwriting identification method, client and server

Also Published As

Publication number Publication date
CN113408373B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
CN110598206A (en) Text semantic recognition method and device, computer equipment and storage medium
US11010664B2 (en) Augmenting neural networks with hierarchical external memory
CN110569500A (en) Text semantic recognition method and device, computer equipment and storage medium
CN110276406B (en) Expression classification method, apparatus, computer device and storage medium
CN112819023A (en) Sample set acquisition method and device, computer equipment and storage medium
CN112380837B (en) Similar sentence matching method, device, equipment and medium based on translation model
CN113836992B (en) Label identification method, label identification model training method, device and equipment
CN110598603A (en) Face recognition model acquisition method, device, equipment and medium
CN108960574A (en) Quality determination method, device, server and the storage medium of question and answer
CN110163121A (en) Image processing method, device, computer equipment and storage medium
CN111159358A (en) Multi-intention recognition training and using method and device
CN112347284A (en) Combined trademark image retrieval method
JP2022161564A (en) System for training machine learning model recognizing character of text image
CN114282013A (en) Data processing method, device and storage medium
CN112633423A (en) Training method of text recognition model, text recognition method, device and equipment
CN114781380A (en) Chinese named entity recognition method, equipment and medium fusing multi-granularity information
US11810388B1 (en) Person re-identification method and apparatus based on deep learning network, device, and medium
CN114239805A (en) Cross-modal retrieval neural network, training method and device, electronic equipment and medium
CN115525740A (en) Method and device for generating dialogue response sentence, electronic equipment and storage medium
JP2003256839A (en) Method for selecting characteristics of pattern, method for classifying pattern, method for judging pattern, and its program and its device
CN113408373B (en) Handwriting recognition method, handwriting recognition system, client and server
CN113204971B (en) Scene self-adaptive Attention multi-intention recognition method based on deep learning
CN115859112A (en) Model training method, recognition method, device, processing equipment and storage medium
CN114638229A (en) Entity identification method, device, medium and equipment of record data
CN114298182A (en) Resource recall method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant