CN113469092B - Character recognition model generation method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN113469092B CN113469092B CN202110787681.0A CN202110787681A CN113469092B CN 113469092 B CN113469092 B CN 113469092B CN 202110787681 A CN202110787681 A CN 202110787681A CN 113469092 B CN113469092 B CN 113469092B
- Authority
- CN
- China
- Prior art keywords
- data set
- character data
- recognized
- similarity
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/64—Analysis of geometric attributes of convexity or concavity
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application relates to a character recognition model generation method, apparatus, computer device and storage medium, the method comprising: acquiring similarities between a plurality of recognized character data sets and a character data set to be recognized, and taking a recognized character data set whose similarity to the character data set to be recognized matches as a target character data set; obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model, the pre-training model being a model that has been trained in advance to recognize the target character data set; generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized. By exploiting the similarity between character data sets, learning is transferred during model training, thereby improving the generation efficiency of the character recognition model.
Description
Technical Field
The present application relates to the field of computer recognition technologies, and in particular, to a method and apparatus for generating a character recognition model, a computer device, and a storage medium.
Background
With the development of industry, more and more production scenarios recognize character information on production equipment, production products and the like through character recognition models.
However, a character recognition model must be trained from scratch for each production scene; the training period is long and a large amount of training data is required, so the efficiency of character recognition model generation is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a character recognition model generation method, apparatus, computer device, and storage medium.
A character recognition model generation method, comprising:
acquiring similarities between a plurality of recognized character data sets and a character data set to be recognized, and taking a recognized character data set whose similarity to the character data set to be recognized matches as a target character data set;
obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model being a model that has been trained in advance to recognize the target character data set;
generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
In one embodiment, the recognized character data set and the character data set to be recognized both carry picture parameter information and text outline information;
the obtaining the similarity between the plurality of recognized character data sets and the character data set to be recognized comprises the following steps:
obtaining the similarity of the picture parameters between the recognized character data set and the character data set to be recognized according to the picture parameter information carried by the recognized character data set and the picture parameter information carried by the character data set to be recognized;
obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized;
and carrying out weighting processing on the picture parameter similarity and the text outline similarity, and determining the similarity between the recognized character data set and the character data set to be recognized according to the result of the weighting processing.
In one embodiment, the obtaining the similarity of the picture parameters between the recognized character data set and the character data set to be recognized according to the picture parameter information carried by the recognized character data set and the picture parameter information carried by the character data set to be recognized includes:
acquiring the color channel mean value and the width-height mean value of all images in the recognized character data set and the character data set to be recognized;
determining, from the color channel mean values and the width-height mean values, the cosine distance between the color channel mean values of the recognized character data set and the character data set to be recognized and the cosine distance between their width-height mean values;
and taking the sum of the cosine distance of the color channel mean values and the cosine distance of the width-height mean values as the picture parameter similarity.
In one embodiment, the obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized includes:
identifying outline feature information corresponding to the text information in the recognized character data set and the character data set to be recognized;
determining, according to the outline feature information, the convex hull information of the text information in the recognized character data set and the character data set to be recognized;
determining, according to the convex hull information, the convex hull area and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized;
and obtaining the text outline similarity of the recognized character data set and the character data set to be recognized according to the convex hull area and the convex hull overlapping area of the text information.
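The convex-hull steps above can be sketched in code. This is a schematic, pure-Python reading of the claim, assuming each text region's convex hull is already available as counter-clockwise vertices (in practice they might come from a routine such as OpenCV's cv2.convexHull, which is not assumed here); the function names and the final overlap-over-union ratio are illustrative choices, since the claim does not fix the exact combination rule.

```python
def polygon_area(poly):
    # shoelace formula for the area of a simple polygon given as (x, y) tuples
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def clip_convex(subject, clipper):
    # Sutherland-Hodgman clipping: intersection of two convex polygons,
    # both given as counter-clockwise vertex lists
    def inside(p, a, b):
        # p is on or to the left of the directed edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def line_intersect(p, q, a, b):
        den = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / den
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        cur, out = out, []
        for j in range(len(cur)):
            p, q = cur[j], cur[(j + 1) % len(cur)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(line_intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(line_intersect(p, q, a, b))
        if not out:
            return []
    return out

def outline_similarity(hull_p, hull_q):
    # one plausible combination: overlap area over the union of the hull areas
    inter = clip_convex(hull_p, hull_q)
    overlap = polygon_area(inter) if len(inter) >= 3 else 0.0
    union = polygon_area(hull_p) + polygon_area(hull_q) - overlap
    return overlap / union if union else 0.0
```

For two unit squares offset by 0.5 in each axis, the overlap is 0.25 and the similarity is 0.25/1.75; identical hulls give a similarity of 1.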
In one embodiment, taking the recognized character data set whose similarity to the character data set to be recognized matches as the target character data set includes:
screening, from the plurality of recognized character data sets, at least one recognized character data set whose similarity is greater than or equal to a preset similarity threshold;
and taking the recognized character data set with the highest similarity among those obtained after screening as the target character data set.
In one embodiment, constructing a target training model from the pre-training model includes:
obtaining model parameters of the pre-training model;
and applying the model parameters to a pre-constructed neural network model to obtain the target training model.
In one embodiment, before training the target training model according to the target training data set, the method further comprises:
performing gamma conversion and histogram equalization on the images in the target training data set;
unifying the sizes of the images subjected to gamma conversion and histogram equalization;
and inputting the size-unified images into the target training model for training.
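The three pre-processing steps above can be sketched as follows. This is an illustrative pure-Python version operating on a grayscale image given as a list of rows; a real pipeline would more likely use OpenCV (e.g. cv2.LUT, cv2.equalizeHist, cv2.resize), which is not assumed here, and the gamma value and target size are arbitrary examples.

```python
def gamma_convert(img, gamma=0.5):
    # gamma conversion: out = 255 * (in / 255) ** gamma, applied per pixel
    return [[round(255 * (p / 255) ** gamma) for p in row] for row in img]

def equalize_hist(img):
    # classic histogram equalization via the cumulative distribution function
    flat = [p for row in img for p in row]
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * 255) for c in cdf]
    return [[lut[p] for p in row] for row in img]

def resize_nearest(img, out_h, out_w):
    # nearest-neighbour resize, used here to unify image sizes before training
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]
```

A training loader would apply the three functions in the claimed order: gamma conversion, then equalization, then size unification.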
A character recognition model generation apparatus, the apparatus comprising:
the similarity acquisition module is used for acquiring similarities between a plurality of recognized character data sets and the character data set to be recognized, and taking the recognized character data set whose similarity to the character data set to be recognized matches as the target character data set;
the model construction module is used for acquiring a pre-training model corresponding to the target character data set and constructing a target training model according to the pre-training model; the pre-training model being a model that has been trained in advance to recognize the target character data set;
the model generation module is used for generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described above.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring similarities between a plurality of recognized character data sets and a character data set to be recognized, and taking a recognized character data set whose similarity to the character data set to be recognized matches as a target character data set;
obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model being a model that has been trained in advance to recognize the target character data set;
generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring similarities between a plurality of recognized character data sets and a character data set to be recognized, and taking a recognized character data set whose similarity to the character data set to be recognized matches as a target character data set;
obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model being a model that has been trained in advance to recognize the target character data set;
generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
In the above character recognition model generation method, apparatus, computer device and storage medium, the method comprises: acquiring similarities between a plurality of recognized character data sets and a character data set to be recognized, and taking a recognized character data set whose similarity to the character data set to be recognized matches as a target character data set; obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model, the pre-training model being a model that has been trained in advance to recognize the target character data set; generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized. By virtue of the similarity between the recognized character data set and the character data set to be recognized, the pre-training model of the recognized character data set is used as the training model for the character data set to be recognized, so that learning is transferred during model training according to the similarity between character data sets, which improves the generation efficiency of the character recognition model.
Drawings
FIG. 1 is an application environment diagram of a character recognition model generation method in one embodiment;
FIG. 2 is a flow diagram of a method for generating a character recognition model in one embodiment;
FIG. 3 is a flowchart illustrating steps for obtaining similarities between a plurality of recognized character data sets and character data sets to be recognized in one embodiment;
FIG. 4 is a flowchart illustrating steps for obtaining similarity of picture parameters between a recognized character dataset and a character dataset to be recognized according to an embodiment;
FIG. 5 is a flowchart illustrating steps for obtaining the text outline similarity between a recognized character data set and a character data set to be recognized in one embodiment;
FIG. 6 is a block diagram showing the construction of a character recognition model generating apparatus in one embodiment;
fig. 7 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The character recognition model generation method provided by the application can be applied to the application environment shown in FIG. 1, in which the terminal 11 communicates with the server 12 via a network. The server 12 acquires a character data set to be recognized from the terminal 11 through the network and acquires similarities between a plurality of recognized character data sets and the character data set to be recognized; the server 12 takes a recognized character data set whose similarity to the character data set to be recognized matches as the target character data set; the server 12 acquires a pre-training model corresponding to the target character data set and constructs a target training model from it, the pre-training model being a model that has been trained in advance to recognize the target character data set; the server 12 generates a target training data set from the recognized character data set and the character data set to be recognized, and trains the target training model on the target training data set to obtain a character recognition model corresponding to the character data set to be recognized. The terminal 11 can upload character data to be recognized directly to the server 12, so that the server runs the trained model to recognize the character data and returns the recognition result to the terminal 11; alternatively, the server 12 may transfer the trained model to the terminal 11 so that the terminal 11 can recognize the character data directly.
The terminal 11 may be, but not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices, and the server 12 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a method for generating a character recognition model is provided, which is described by taking the server 12 in fig. 1 as an example, and includes the following steps:
step 21, obtaining the similarity between the plurality of recognized character data sets and the character data set to be recognized, and taking the recognized character data set matched with the similarity between the character data sets to be recognized as the target character data set.
Wherein a recognized character data set refers to a character data set that has previously been used to train a model, so that the pre-trained model can recognize it; a character data set to be recognized refers to a character data set that has not yet been recognized or applied to model training. A character data set may consist of a plurality of pictures containing characters; the file format, size and the like of the pictures are not limited.
The similarity refers to the overall similarity between a recognized character data set and the character data set to be recognized as judged under preset conditions; it can be judged in multiple ways, or determined by weighting and combining several of them. For example, the number, size, format and color of the pictures in a character data set can serve as bases for judging similarity, as can the shape, outline and color of the characters in the pictures.
The similarity matching means that the similarity between the recognized character data set and the character data set to be recognized falls within a preset similarity interval, that is, the similarity between the recognized character data set and the character data set to be recognized reaches a certain preset condition, and then the recognized character data set and the character data set to be recognized can be regarded as similarity matching.
The target character data set refers to a recognized character data set whose similarity to the character data set to be recognized meets the matching condition; there may be one or more of them.
Specifically, after obtaining the character data set to be recognized, the server analyzes and processes it to obtain a numerical value usable for similarity judgment; the server then acquires at least one recognized character data set previously used for model training and obtains the corresponding numerical value with the same analysis and processing; the numerical value of the character data set to be recognized is compared with that of each recognized character data set to obtain a plurality of similarities; at least one similarity reaching a preset standard is selected, and the recognized character data set corresponding to that similarity is taken as the target character data set.
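The screening described above (and detailed in the threshold-then-maximum embodiment) can be sketched as follows; the function name, the threshold value and the dict representation are illustrative assumptions, not taken from the patent.

```python
def select_target_dataset(similarities, threshold=0.8):
    """Pick the target character data set from candidate recognized data sets.

    similarities: dict mapping data-set name -> similarity in [0, 1].
    Keeps candidates at or above the preset threshold, then returns the most
    similar one; returns None if no candidate is similar enough.
    """
    candidates = {name: s for name, s in similarities.items() if s >= threshold}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# three recognized data sets compared against the set to be recognized
sims = {"datasetA": 0.92, "datasetB": 0.85, "datasetC": 0.40}
target = select_target_dataset(sims)
```

With the example values, datasetA and datasetB pass the threshold and datasetA, being the most similar, becomes the target character data set.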
Step 22, obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model being a model that has been trained in advance to recognize the target character data set.
The pre-training model is a model trained in advance on the target character data set (a recognized character data set); its weight parameters were adjusted during that training, so it can recognize the character data in the target character data set.
Specifically, the server acquires the pre-training model corresponding to the target character data set and constructs the target training model from the neural network structure, weight parameters and the like of the pre-training model; the weight parameters and the neural network structure can be adjusted adaptively during construction.
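A minimal, framework-agnostic sketch of this construction step: the target model inherits the pre-trained structure and weight parameters, with selected parameters optionally re-initialized. The dict representation and function name are illustrative only; a real system would do this inside a deep-learning framework (for example by loading a saved state dict), which is not assumed here.

```python
def build_target_model(pretrained, reinit=None):
    """Construct the target training model from a pre-training model.

    pretrained: dict mapping layer name -> weight values.
    reinit: optional dict of layers whose weights are adaptively replaced
            (e.g. a freshly initialized output layer for the new character set).
    """
    target = dict(pretrained)   # inherit structure and weights
    if reinit:
        target.update(reinit)   # adaptively adjust selected layers
    return target

# hypothetical two-layer model: keep conv weights, reset the classifier head
base = {"conv1": [0.1, 0.2], "fc": [0.5]}
model = build_target_model(base, reinit={"fc": [0.0]})
```

The copy leaves the original pre-training model untouched, so it remains available for other target character data sets.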
Step 23, generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
Wherein the target training data set is generated from the recognized character data set and the character data set to be recognized; for example, character data may be drawn at random from the recognized character data set and combined with all of the character data set to be recognized at a ratio of 2:8 to form a new character data set as the target training data set. The specific way of drawing character data, the amount drawn, the combination ratio and the like are not limited.
Specifically, the server generates a new character data set as the target training data set from the recognized character data set and the character data set to be recognized, and trains the target training model on it; training of the target training model is deemed complete once its loss value, or its recognition accuracy on the character data set to be recognized, reaches a preset standard, yielding the character recognition model corresponding to the character data set to be recognized.
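The 2:8 combination mentioned above can be sketched as follows; the function name, the fixed seed and the exact sampling rule are illustrative, since the patent explicitly leaves the drawing method and ratio open.

```python
import random

def build_target_training_set(recognized, to_recognize, seed=0):
    """Combine randomly drawn recognized character data with all of the
    character data to be recognized at a 2:8 ratio (recognized : to-recognize),
    i.e. recognized samples amount to a quarter of the to-recognize count."""
    k = min(len(recognized), round(len(to_recognize) * 2 / 8))
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    return rng.sample(recognized, k) + list(to_recognize)
```

For instance, with 80 samples to be recognized, 20 recognized samples are drawn, giving a 100-sample target training data set.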
For example, to solve a character recognition problem in scene A, a company previously collected a large number of pictures containing character data in scene A to form character data set A, and used it to train character recognition model A'. Now a character recognition problem in scene B needs to be solved, and a large number of pictures containing character data in scene B are collected to form character data set B. If scene A is very similar to scene B, character recognition model A' can be applied directly to recognizing character data set B; that is, character data set A can serve as one of the recognized character data sets and character data set B as the character data set to be recognized, and once the similarity of character data set A to character data set B is determined to match, character data set A is confirmed as the target character data set. A new character recognition model B' can be constructed from character recognition model A', or character recognition model A' can be used directly as model B' for training; the target training data set can be obtained from character data sets A and B together, or from character data set B alone, i.e. character recognition model A' can also be trained directly with character data set B.
In the above character recognition model generation method, apparatus, computer device and storage medium, the method comprises: acquiring similarities between a plurality of recognized character data sets and a character data set to be recognized, and taking a recognized character data set whose similarity to the character data set to be recognized matches as a target character data set; obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model, the pre-training model being a model that has been trained in advance to recognize the target character data set; generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized. By virtue of the similarity between the recognized character data set and the character data set to be recognized, the pre-training model of the recognized character data set is used as the training model for the character data set to be recognized, so that learning is transferred during model training according to the similarity between character data sets, which improves the generation efficiency of the character recognition model.
In one embodiment, the recognized character data set and the character data set to be recognized each carry picture parameter information and text outline information.
As shown in fig. 3, step 21 of acquiring similarities between the plurality of recognized character data sets and the character data set to be recognized includes:
step 31, obtaining the similarity of the picture parameters between the recognized character data set and the character data set to be recognized according to the picture parameter information carried by the recognized character data set and the picture parameter information carried by the character data set to be recognized;
step 32, obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized;
and step 33, carrying out weighting processing on the picture parameter similarity and the text outline similarity, and determining the similarity between the recognized character data set and the character data set to be recognized according to the result of the weighting processing.
The picture parameter information may include the number, size, file format, pixels, color, etc. of pictures in the character dataset; the text outline information includes text outlines, geometric convex hulls, and the like.
The picture parameter similarity refers to the degree of similarity of picture parameter information between the recognized character data set and the character data set to be recognized. For example, if the 5 pictures in the recognized character data set are all of size 10×10 and the 6 pictures in the character data set to be recognized are also all of size 10×10, then, considering only size as the judging condition, the picture parameter similarity of the two character data sets can be 100%.
The text outline similarity refers to the degree of similarity of text outline information between the recognized character data set and the character data set to be recognized. For example, if the character regions in the 5 pictures of the recognized character data set all have outline shape a, and the character regions in the 6 pictures of the character data set to be recognized also all have outline shape a, then, considering only the outline shape as the judging condition, the text outline similarity of the two character data sets can be 100%.
Specifically, if the picture parameter similarity is c and the text outline similarity is d, the weight of the picture parameter similarity may be set to 0.6 and that of the text outline similarity to 0.4; the weighted similarity is then 0.6c + 0.4d.
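The weighting above is a plain convex combination; a one-line sketch (the 0.6/0.4 weights are the example values from the text, not fixed by the patent):

```python
def overall_similarity(pic_sim, outline_sim, w_pic=0.6, w_outline=0.4):
    # weighted combination from the text: 0.6 * c + 0.4 * d
    return w_pic * pic_sim + w_outline * outline_sim

# e.g. picture parameter similarity c = 0.9, text outline similarity d = 0.5
combined = overall_similarity(0.9, 0.5)
```

With c = 0.9 and d = 0.5 this gives 0.6 × 0.9 + 0.4 × 0.5 = 0.74.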
In the embodiment, by respectively calculating the similarity of the picture parameters and the similarity of the text outline, the effect of accurately calculating the similarity between the recognized character data set and the character data set to be recognized can be achieved, and the generation efficiency of the character recognition model is improved.
In one embodiment, as shown in fig. 4, step 31, obtaining the similarity of the picture parameters between the recognized character data set and the character data set to be recognized according to the picture parameter information carried by the recognized character data set and the picture parameter information carried by the character data set to be recognized includes:
step 41, acquiring the color channel mean value and the width-height mean value of all images in the recognized character data set and the character data set to be recognized;
step 42, determining cosine distances between the recognized character data set and the color channel mean value of the character data set to be recognized and cosine distances between the recognized character data set and the wide-high mean value of the character data set to be recognized according to the color channel mean value and the wide-high mean value;
step 43, taking the sum of the cosine distance of the color channel mean and the cosine distance of the wide-high mean as the similarity of the picture parameters.
Specifically, the server acquires RGB average values of image pixels in the two character data sets as the color channel mean values; the RGB mean value comprises the average value, in each color channel, of the color values of all pixel points in the images. The server also acquires the average width and height of the images in the two character data sets as the width-height mean values.
The picture parameter similarity is obtained by the following modes:
Ai=cos(rgb[P],rgb[Qi])+cos(hw[P],hw[Qi]);
wherein P is a character data set to be identified, qi is an identified character data set, and Ai is the sum of the cosine distance of the mean value of the color channel and the cosine distance of the mean value of the width and the height; rgb [ P ] is the color channel mean of the character data set to be recognized, and rgb [ Qi ] is the color channel mean of the recognized character data set; hw [ P ] is the wide-to-high average of the character data set to be recognized, hw [ Qi ] is the wide-to-high average of the recognized character data set, and cos is the cosine distance calculation symbol.
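A minimal sketch of the formula Ai, assuming `cos` denotes the cosine similarity of the mean vectors (the function and argument names are illustrative, not from the source):

```python
import numpy as np

def cos_sim(u, v):
    """Cosine similarity of two vectors — the cos(·,·) in the Ai formula."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def picture_parameter_similarity(rgb_p, hw_p, rgb_qi, hw_qi):
    """Ai = cos(rgb[P], rgb[Qi]) + cos(hw[P], hw[Qi]).

    rgb_* are per-dataset color channel means, hw_* are width-height means."""
    return cos_sim(rgb_p, rgb_qi) + cos_sim(hw_p, hw_qi)
```

Identical mean vectors give the maximum score of 2.0; orthogonal ones give 0.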
In this embodiment, the image parameter similarity between the recognized character data set and the character data set to be recognized is obtained by calculating the color channel mean value and the wide-high mean value respectively, so that the effect of accurately calculating the similarity between the recognized character data set and the character data set to be recognized can be achieved, and the generation efficiency of the character recognition model is improved.
In one embodiment, as shown in fig. 5, step 32, obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized includes:
Step 51, identifying outline characteristic information corresponding to the text information in the identified character data set and the character data set to be identified;
step 52, determining convex hull information of text information in the recognized character data set and the character data set to be recognized according to the outline characteristic information;
step 53, determining the convex hull area of the text information and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized according to the convex hull information;
and step 54, obtaining the similarity of the text outline of the recognized character data set and the character data set to be recognized according to the convex hull area of the text information and the convex hull overlapping area of the text information.
Specifically, a contour matching algorithm can be adopted to search text information in the interested area as contour feature information; the contour matching algorithm can divide the images in the recognized character data set and the character data set to be recognized into a plurality of channels so as to search text information edges and convert the text information edges into contour features, and contour matching is performed; the contour matching method may include convex hulls, and is not limited in detail.
For example, the outline feature information includes the detected text outline; when the convex hull is adopted for calculation, translation can be firstly carried out on each text outline in the recognized character data set and the character data set to be recognized respectively, so that the left upper corner of the circumscribed rectangle in each text outline is coincident with the origin of the preset coordinate system. And drawing the translated text outlines on the same plane, and finding out the minimum geometric convex hull of the text outlines by using a convex hull detection algorithm in an opencv library, wherein the minimum geometric convex hull is marked as convexhull [ P ] (the minimum geometric convex hull of the text outlines in the character data set P to be identified). Using the same method, a geometric convex hull of all text in the recognized character dataset is obtained, denoted as convexhull [ Qi ] (the smallest geometric convex hull of text contours in the recognized character dataset Qi).
Text outline similarity is defined as the ratio of the intersection of the two text geometric convex hulls:
Bi=S[inter]/(S[convexhull[P]]+S[convexhull[Qi]]–S[inter])。
wherein Bi is the text outline similarity of the recognized character data set and the character data set to be recognized; S[convexhull[P]] and S[convexhull[Qi]] are the areas of the text convex hulls of the character data set to be recognized and the recognized character data set respectively, these areas being contained in the convex hull information; S[inter] is the area of the overlapping region between the two text convex hulls, namely the convex hull overlapping area.
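The ratio Bi is the intersection-over-union of the two hulls. A plain-Python sketch follows; the monotone-chain hull and Sutherland–Hodgman clipping used here are standard stand-ins for the opencv convex hull detection mentioned earlier, and all names are illustrative:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    n = len(poly)
    return abs(sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
                   for i in range(n))) / 2.0

def clip_polygon(subject, clip):
    """Sutherland–Hodgman clipping of one convex CCW polygon by another."""
    def inside(p, a, b):  # left of directed edge a->b (interior of a CCW polygon)
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(s, e, a, b):  # only called when s,e straddle line a->b
        dx1, dy1, dx2, dy2 = e[0]-s[0], e[1]-s[1], b[0]-a[0], b[1]-a[1]
        t = ((a[0]-s[0])*dy2 - (a[1]-s[1])*dx2) / (dx1*dy2 - dy1*dx2)
        return (s[0]+t*dx1, s[1]+t*dy1)
    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        inlist, output = output, []
        for j in range(len(inlist)):
            s, e = inlist[j-1], inlist[j]
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
        if not output:
            return []
    return output

def outline_similarity(points_p, points_q):
    """Bi = S[inter] / (S[P] + S[Qi] - S[inter]) over the text convex hulls."""
    hull_p, hull_q = convex_hull(points_p), convex_hull(points_q)
    s_p, s_q = polygon_area(hull_p), polygon_area(hull_q)
    inter = clip_polygon(hull_p, hull_q)
    s_inter = polygon_area(inter) if len(inter) >= 3 else 0.0
    return s_inter / (s_p + s_q - s_inter)
```

Two identical point sets give Bi = 1; two 2×2 squares overlapping in a 1×2 strip give Bi = 2/(4+4−2) = 1/3.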
In this embodiment, the text outline similarity between the recognized character data set and the character data set to be recognized is obtained by calculating the convex hull areas and the convex hull overlapping area, so that the effect of accurately calculating the similarity between the recognized character data set and the character data set to be recognized can be achieved, and the generation efficiency of the character recognition model is improved.
In one embodiment, the identified character data set matching the similarity between character data sets to be identified is taken as a target character data set, comprising: screening at least one identified character data set with the similarity greater than or equal to a preset similarity threshold value from the similarity of the plurality of identified character data sets; and identifying the identified character data set with the highest similarity among the similarities of the at least one identified character data set obtained after screening as the target character data set.
Specifically, a preset similarity threshold may be set as the criterion for judging whether the similarity matches; for example, if the preset similarity threshold is set to 85, any recognized character data set whose calculated similarity is greater than or equal to 85 may be used as the target character data set. The preset similarity threshold may also be set as a range; for example, with a threshold of 85±5, any recognized character data set whose similarity falls within 80-90, or exceeds 90, may be regarded as a match and taken as the target character data set.
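The screening-then-selection step can be sketched as follows; the 0-100 similarity scale and the default threshold of 85 follow the text's example, while the function and key names are illustrative:

```python
def select_target_dataset(similarities, threshold=85):
    """Screen recognized character data sets whose similarity is at or above the
    threshold, then take the highest-scoring one as the target character data set.

    `similarities` maps a dataset identifier to its similarity score;
    returns None when no dataset passes the screening."""
    candidates = {name: s for name, s in similarities.items() if s >= threshold}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

With scores {Q1: 90, Q2: 80, Q3: 95} and threshold 85, Q2 is screened out and Q3 is selected.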
In this embodiment, the preset similarity threshold is set as the recognition basis of the target character data set, so that the generation efficiency of the character recognition model is improved.
In one embodiment, building a target training model from the pre-training model includes: obtaining model parameters of a pre-training model; and applying the model parameters to a pre-constructed neural network model to obtain a target training model.
Specifically, the server acquires model parameters of the pre-training model, and sets the model parameters in a pre-constructed neural network model as a target training model.
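A framework-agnostic sketch of this parameter transfer: matching pre-trained weights are copied into a freshly constructed network wherever names and shapes line up (in PyTorch this would be a partial `load_state_dict`). The dict-of-lists representation and all names here are illustrative simplifications, not the patent's actual implementation:

```python
def transfer_parameters(pretrained: dict, target: dict) -> dict:
    """Return the target model's parameter dict with matching pre-trained
    weights applied; parameters without a name/shape match are left as-is."""
    updated = dict(target)
    for name, weights in pretrained.items():
        if name in updated and len(updated[name]) == len(weights):
            updated[name] = list(weights)  # copy pre-trained weights over
    return updated
```

The target model then starts fine-tuning from the pre-trained weights rather than from a random initialization, which is the source of the efficiency gain described below.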
In this embodiment, the model parameters are applied to the pre-built neural network model, so that the pre-built neural network model to which the model parameters are applied can train on the basis of having a certain recognition capability, and the generation efficiency of the character recognition model is improved.
In one embodiment, before training the target training model according to the target training data set, the method further comprises: performing gamma conversion processing and histogram equalization processing on the images in the target training data set; carrying out image size unification on the image subjected to the gamma conversion treatment and the histogram equalization treatment; and inputting the images subjected to the uniform size processing into a target training model for training.
Specifically, the gamma conversion processing is to enhance the gray value of the darker region of the image in the target training data set through nonlinear conversion, and reduce the gray value of the region with overlarge gray value in the image; the whole detail expression of the image is enhanced through gamma conversion processing. The histogram equalization process can increase the global contrast of the image, especially when the contrast of the useful data of the image in the target training dataset is close; after histogram equalization, the brightness can be better distributed on the histogram. The size unification processing can unify images with different sizes in the target training data set into images with the same size, and improves the processing efficiency of the images.
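The three preprocessing steps can be sketched with numpy; gamma values, the nearest-neighbour resize (standing in for whatever interpolation the implementation actually uses), and all function names are illustrative assumptions:

```python
import numpy as np

def gamma_transform(img, gamma=0.7):
    """Non-linear gamma correction on a uint8 image; gamma < 1 lifts the
    gray values of darker regions, gamma > 1 suppresses over-bright ones."""
    return np.round(255.0 * (img / 255.0) ** gamma).astype(np.uint8)

def equalize_histogram(img):
    """Global histogram equalization for a single-channel uint8 image:
    remap gray levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize, unifying images to one size."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]
```

A training batch would then be built by applying the three functions in order before feeding the images to the target training model.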
In the embodiment, the generation efficiency of the character recognition model is improved by performing image processing on the images in the target training data set.
It should be understood that, although the steps in the flowcharts of figs. 2-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are not necessarily executed in sequence, and may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a character recognition model generating apparatus, including a similarity obtaining module 61, a model constructing module 62, and a model generating module 63, wherein:
a similarity obtaining module 61, configured to obtain similarities between the plurality of identified character data sets and the character data set to be identified, and take the identified character data set that matches the similarities between the character data sets to be identified as a target character data set;
The model construction module 62 is configured to acquire a pre-training model corresponding to the target character data set, and construct a target training model according to the pre-training model; the pre-training model is a model which is used for identifying a target character data set after being pre-trained;
a model generating module 63, configured to generate a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
In one embodiment, the recognized character data set and the character data set to be recognized both carry picture parameter information and text outline information;
the similarity obtaining module 61 is further configured to obtain a similarity of picture parameters between the identified character dataset and the character dataset to be identified according to the picture parameter information carried by the identified character dataset and the picture parameter information carried by the character dataset to be identified; obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized; and carrying out weighting processing on the picture parameter similarity and the text outline similarity, and determining the similarity between the recognized character data set and the character data set to be recognized according to the result of the weighting processing.
In one embodiment, the similarity obtaining module 61 is further configured to obtain color channel average values and wide-high average values of all images in the recognized character data set and the character data set to be recognized; according to the color channel mean value and the wide-high mean value, determining cosine distances between the recognized character data set and the color channel mean value of the character data set to be recognized and cosine distances between the recognized character data set and the wide-high mean value of the character data set to be recognized; and taking the sum of the cosine distance of the mean value of the color channel and the cosine distance of the mean value of the width and height as the similarity of the picture parameters.
In one embodiment, the similarity obtaining module 61 is further configured to identify contour feature information corresponding to text information in the identified character dataset and the character dataset to be identified; determining convex hull information of the text information in the recognized character data set and the character data set to be recognized according to the outline characteristic information; according to the convex hull information, determining the convex hull area of the text information and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized; and obtaining the text outline similarity of the recognized character data set and the character data set to be recognized according to the convex hull area of the text information and the convex hull overlapping area of the text information.
In one embodiment, the similarity obtaining module 61 is further configured to screen at least one identified character data set with a similarity greater than or equal to a preset similarity threshold from the similarities of the plurality of identified character data sets; and identifying the identified character data set with the highest similarity among the similarities of the at least one identified character data set obtained after screening as the target character data set.
In one embodiment, the model construction module 62 is further configured to obtain model parameters of the pre-trained model; and applying the model parameters to a pre-constructed neural network model to obtain a target training model.
In one embodiment, the model generating module 63 is further configured to perform a gamma transformation process and a histogram equalization process on the image in the target training dataset; carrying out image size unification on the image subjected to the gamma conversion treatment and the histogram equalization treatment; and inputting the images subjected to the uniform size processing into a target training model for training.
For specific limitations on the character recognition model generating means, reference may be made to the above limitations on the character recognition model generating method, and no further description is given here. The respective modules in the above character recognition model generating apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing character recognition model generation data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a character recognition model generation method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring the similarities between a plurality of recognized character data sets and the character data set to be recognized, and taking the recognized character data set that matches the similarity between the character data sets as a target character data set;
obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model is a model which is used for identifying a target character data set after being pre-trained;
generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
In one embodiment, the recognized character data set and the character data set to be recognized both carry picture parameter information and text outline information;
the processor when executing the computer program also implements the steps of: obtaining the similarity of the picture parameters between the recognized character data set and the character data set to be recognized according to the picture parameter information carried by the recognized character data set and the picture parameter information carried by the character data set to be recognized; obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized; and carrying out weighting processing on the picture parameter similarity and the text outline similarity, and determining the similarity between the recognized character data set and the character data set to be recognized according to the result of the weighting processing.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring color channel mean values and wide-high mean values of all images in the recognized character data set and the character data set to be recognized; according to the color channel mean value and the wide-high mean value, determining cosine distances between the recognized character data set and the color channel mean value of the character data set to be recognized and cosine distances between the recognized character data set and the wide-high mean value of the character data set to be recognized; and taking the sum of the cosine distance of the mean value of the color channel and the cosine distance of the mean value of the width and height as the similarity of the picture parameters.
In one embodiment, the processor when executing the computer program further performs the steps of: identifying outline characteristic information corresponding to the text information in the identified character data set and the character data set to be identified; determining convex hull information of the text information in the recognized character data set and the character data set to be recognized according to the outline characteristic information; according to the convex hull information, determining the convex hull area of the text information and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized; and obtaining the text outline similarity of the recognized character data set and the character data set to be recognized according to the convex hull area of the text information and the convex hull overlapping area of the text information.
In one embodiment, the processor when executing the computer program further performs the steps of: screening at least one identified character data set with the similarity greater than or equal to a preset similarity threshold value from the similarity of the plurality of identified character data sets; and identifying the identified character data set with the highest similarity among the similarities of the at least one identified character data set obtained after screening as the target character data set.
In one embodiment, the processor when executing the computer program further performs the steps of: obtaining model parameters of a pre-training model; and applying the model parameters to a pre-constructed neural network model to obtain a target training model.
In one embodiment, the processor when executing the computer program further performs the steps of: performing gamma conversion processing and histogram equalization processing on the images in the target training data set; carrying out image size unification on the image subjected to the gamma conversion treatment and the histogram equalization treatment; and inputting the images subjected to the uniform size processing into a target training model for training.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring the similarities between a plurality of recognized character data sets and the character data set to be recognized, and taking the recognized character data set that matches the similarity between the character data sets as a target character data set;
obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model is a model which is used for identifying a target character data set after being pre-trained;
generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
In one embodiment, the recognized character data set and the character data set to be recognized both carry picture parameter information and text outline information;
the computer program when executed by the processor also performs the steps of: obtaining the similarity of the picture parameters between the recognized character data set and the character data set to be recognized according to the picture parameter information carried by the recognized character data set and the picture parameter information carried by the character data set to be recognized; obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the text outline information carried by the recognized character data set and the text outline information carried by the character data set to be recognized; and carrying out weighting processing on the picture parameter similarity and the text outline similarity, and determining the similarity between the recognized character data set and the character data set to be recognized according to the result of the weighting processing.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring color channel mean values and wide-high mean values of all images in the recognized character data set and the character data set to be recognized; according to the color channel mean value and the wide-high mean value, determining cosine distances between the recognized character data set and the color channel mean value of the character data set to be recognized and cosine distances between the recognized character data set and the wide-high mean value of the character data set to be recognized; and taking the sum of the cosine distance of the mean value of the color channel and the cosine distance of the mean value of the width and height as the similarity of the picture parameters.
In one embodiment, the computer program when executed by the processor further performs the steps of: identifying outline characteristic information corresponding to the text information in the identified character data set and the character data set to be identified; determining convex hull information of the text information in the recognized character data set and the character data set to be recognized according to the outline characteristic information; according to the convex hull information, determining the convex hull area of the text information and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized; and obtaining the text outline similarity of the recognized character data set and the character data set to be recognized according to the convex hull area of the text information and the convex hull overlapping area of the text information.
In one embodiment, the computer program when executed by the processor further performs the steps of: screening at least one identified character data set with the similarity greater than or equal to a preset similarity threshold value from the similarity of the plurality of identified character data sets; and identifying the identified character data set with the highest similarity among the similarities of the at least one identified character data set obtained after screening as the target character data set.
In one embodiment, the computer program when executed by the processor further performs the steps of: obtaining model parameters of a pre-training model; and applying the model parameters to a pre-constructed neural network model to obtain a target training model.
In one embodiment, the computer program when executed by the processor further performs the steps of: performing gamma conversion processing and histogram equalization processing on the images in the target training data set; carrying out image size unification on the image subjected to the gamma conversion treatment and the histogram equalization treatment; and inputting the images subjected to the uniform size processing into a target training model for training.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration, and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), and the like.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
Claims (10)
1. A character recognition model generation method, characterized by comprising:
acquiring color channel mean values and wide-high mean values of all images in the recognized character data set and the character data set to be recognized; the recognized character data set and the character data set to be recognized both carry picture parameter information and text contour information;
according to the color channel mean value and the wide-high mean value, determining cosine distances between the recognized character data set and the color channel mean value of the character data set to be recognized and cosine distances between the recognized character data set and the wide-high mean value of the character data set to be recognized;
Taking the sum of the cosine distance of the color channel mean value and the cosine distance of the wide-high mean value as the similarity of picture parameters between the recognized character data set and the character data set to be recognized;
identifying outline characteristic information corresponding to the text information in the identified character data set and the character data set to be identified; the outline characteristic information comprises a detected text outline;
determining convex hull information of text information in the recognized character data set and the character data set to be recognized according to the outline characteristic information, wherein the convex hull information comprises the following components: translating each text outline in the recognized character data set and the character data set to be recognized respectively, so that the left upper corner of the circumscribed rectangle in each text outline coincides with the origin of a preset coordinate system; drawing each translated text outline on the same plane, and determining the minimum geometric convex hull of each translated text outline as convex hull information of the text information;
determining, according to the convex hull information, the convex hull area of the text information and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized; the convex hull overlapping area is the overlapping area between the convex hull of the text information in the recognized character data set and the convex hull of the text information in the character data set to be recognized;
obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the convex hull area of the text information and the convex hull overlapping area of the text information;
weighting the picture parameter similarity and the text outline similarity, determining the similarity between the recognized character data set and the character data set to be recognized according to the weighted result, and taking a recognized character data set whose similarity with the character data set to be recognized satisfies a matching condition as a target character data set;
obtaining a pre-training model corresponding to the target character data set, and constructing a target training model according to the pre-training model; the pre-training model is a model that has been pre-trained to recognize the target character data set;
generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
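The similarity computation recited in claim 1 can be sketched as follows. This is an illustrative, non-authoritative reading: the dictionary-based data-set representation, the 0.5/0.5 weighting coefficients, and the IoU scoring of the hull overlap are assumptions not fixed by the claim, and the convex hull, polygon area, and polygon intersection are computed with standard algorithms (monotone chain, shoelace formula, Sutherland-Hodgman clipping) rather than any library named in the patent.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def normalize_contour(pts):
    """Translate a contour so the upper-left corner of its bounding
    rectangle coincides with the origin of the coordinate system."""
    pts = np.asarray(pts, float)
    return pts - pts.min(axis=0)

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        part = []
        for p in seq:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull += part[:-1]
    return hull

def polygon_area(poly):
    """Shoelace formula."""
    x, y = np.array(poly).T
    return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))

def clip_convex(subject, clip):
    """Sutherland-Hodgman clipping of one convex CCW polygon by another;
    returns their intersection polygon (vertex list)."""
    def inside(p, a, b):
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def isect(p, q, a, b):
        num = (a[0]-p[0])*(b[1]-a[1]) - (a[1]-p[1])*(b[0]-a[0])
        den = (q[0]-p[0])*(b[1]-a[1]) - (q[1]-p[1])*(b[0]-a[0])
        t = num / den
        return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    out = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        inp, out = out, []
        if not inp:
            break
        s = inp[-1]
        for e in inp:
            if inside(e, a, b):
                if not inside(s, a, b):
                    out.append(isect(s, e, a, b))
                out.append(e)
            elif inside(s, a, b):
                out.append(isect(s, e, a, b))
            s = e
    return out

def dataset_similarity(ds_a, ds_b, w_param=0.5, w_contour=0.5):
    """Weighted combination of picture parameter similarity and text
    outline similarity for two data sets represented as dicts with
    'rgb_mean', 'wh_mean', and 'contours' (an assumed layout)."""
    # Picture parameter similarity: sum of the two cosine terms.
    s_param = cosine_sim(ds_a['rgb_mean'], ds_b['rgb_mean']) + \
              cosine_sim(ds_a['wh_mean'], ds_b['wh_mean'])
    # Text outline similarity: overlap of the minimal convex hulls of all
    # origin-aligned contours, scored here as IoU (an assumed formula).
    hull_a = convex_hull(np.vstack([normalize_contour(c) for c in ds_a['contours']]))
    hull_b = convex_hull(np.vstack([normalize_contour(c) for c in ds_b['contours']]))
    area_a, area_b = polygon_area(hull_a), polygon_area(hull_b)
    inter = clip_convex(hull_a, hull_b)
    overlap = polygon_area(inter) if len(inter) >= 3 else 0.0
    s_contour = overlap / (area_a + area_b - overlap)
    return w_param * s_param + w_contour * s_contour
```

Two identical data sets score a cosine similarity of 1 on each picture parameter and an IoU of 1 on the hulls, so the weighted score reaches its maximum; the score drops as channel statistics, image geometry, or text-contour shapes diverge.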
2. The method according to claim 1, wherein taking the recognized character data set that matches the character data set to be recognized in similarity as the target character data set comprises:
screening, from the similarities of a plurality of recognized character data sets, at least one recognized character data set whose similarity is greater than or equal to a preset similarity threshold;
and taking the recognized character data set with the highest similarity among the at least one screened recognized character data set as the target character data set.
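The two-step selection of claim 2 (threshold screening, then highest-similarity pick) can be sketched in a few lines; the data-set names and threshold value below are purely illustrative:

```python
def select_target_dataset(similarities, threshold):
    """similarities: mapping from recognized-data-set name to its similarity
    with the character data set to be recognized."""
    # Step 1: screen data sets whose similarity reaches the preset threshold.
    candidates = {name: s for name, s in similarities.items() if s >= threshold}
    if not candidates:
        return None  # no recognized data set is similar enough
    # Step 2: the screened data set with the highest similarity is the target.
    return max(candidates, key=candidates.get)
```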
3. The method according to any one of claims 1 to 2, wherein constructing the target training model according to the pre-training model comprises:
obtaining model parameters of the pre-training model;
and applying the model parameters to a pre-constructed neural network model to obtain the target training model.
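The parameter transfer of claim 3 can be sketched framework-agnostically. As an assumption for illustration, models are represented as plain name-to-weight-array dicts: every pre-trained parameter whose name and shape also exist in the newly constructed network is copied over, while unmatched parameters (e.g. an output layer resized for the new character set) keep their fresh initialization. In a real deep-learning framework the same idea is a partial state-dict load.

```python
import numpy as np

def transfer_parameters(pretrained, target):
    """Copy matching pre-trained parameters into the new target model
    (both given as name -> weight-array dicts); return the copied names."""
    transferred = []
    for name, weight in pretrained.items():
        if name in target and target[name].shape == weight.shape:
            target[name] = weight.copy()   # reuse the pre-trained weight
            transferred.append(name)
    return transferred
```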
4. The method according to any one of claims 1 to 2, further comprising, before training the target training model according to the target training data set:
performing gamma conversion processing and histogram equalization processing on the images in the target training data set;
performing size unification on the images subjected to the gamma conversion processing and the histogram equalization processing;
and inputting the size-unified images into the target training model for training.
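A minimal numpy-only sketch of the claim-4 preprocessing pipeline, assuming uint8 grayscale inputs. The gamma value, the (32, 128) target size, and the nearest-neighbour resize are illustrative choices; the claim itself fixes only the three operations and their order.

```python
import numpy as np

def gamma_transform(img, gamma=1.5):
    """Power-law (gamma) conversion of a uint8 grayscale image."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(np.round(255.0 * norm ** gamma), 0, 255).astype(np.uint8)

def equalize_histogram(img):
    """Histogram equalization via the cumulative gray-level distribution
    (assumes a non-constant image)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]

def resize_nearest(img, size):
    """Nearest-neighbour resize to a uniform (height, width)."""
    h, w = img.shape
    th, tw = size
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return img[rows[:, None], cols]

def preprocess(img, size=(32, 128), gamma=1.5):
    """Claim-4 order: gamma conversion, equalization, size unification."""
    return resize_nearest(equalize_histogram(gamma_transform(img, gamma)), size)
```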
5. A character recognition model generation apparatus, characterized in that the apparatus comprises:
the similarity acquisition module is used for: acquiring color channel mean values and width-height mean values of all images in a recognized character data set and a character data set to be recognized, the recognized character data set and the character data set to be recognized both carrying picture parameter information and text contour information; determining, according to the color channel mean values and the width-height mean values, a cosine distance between the color channel mean values of the recognized character data set and the character data set to be recognized, and a cosine distance between the width-height mean values of the recognized character data set and the character data set to be recognized; taking the sum of the two cosine distances as the picture parameter similarity between the recognized character data set and the character data set to be recognized; identifying outline characteristic information corresponding to the text information in the recognized character data set and the character data set to be recognized, the outline characteristic information comprising a detected text outline; determining convex hull information of the text information in the recognized character data set and the character data set to be recognized according to the outline characteristic information, which comprises: translating each text outline in the recognized character data set and the character data set to be recognized so that the upper-left corner of the bounding rectangle of each text outline coincides with the origin of a preset coordinate system, drawing each translated text outline on the same plane, and determining the minimum geometric convex hull of the translated text outlines as the convex hull information of the text information; determining, according to the convex hull information, the convex hull area of the text information and the convex hull overlapping area of the text information in the recognized character data set and the character data set to be recognized, the convex hull overlapping area being the overlapping area between the convex hull of the text information in the recognized character data set and the convex hull of the text information in the character data set to be recognized; obtaining the text outline similarity between the recognized character data set and the character data set to be recognized according to the convex hull area of the text information and the convex hull overlapping area of the text information; and weighting the picture parameter similarity and the text outline similarity, determining the similarity between the recognized character data set and the character data set to be recognized according to the weighted result, and taking a recognized character data set whose similarity with the character data set to be recognized satisfies a matching condition as a target character data set;
the model construction module is used for obtaining a pre-training model corresponding to the target character data set and constructing a target training model according to the pre-training model; the pre-training model is a model that has been pre-trained to recognize the target character data set;
the model generation module is used for generating a target training data set according to the recognized character data set and the character data set to be recognized; and training the target training model according to the target training data set to obtain a character recognition model corresponding to the character data set to be recognized.
6. The apparatus of claim 5, wherein the similarity acquisition module is further configured to: screen, from the similarities of a plurality of recognized character data sets, at least one recognized character data set whose similarity is greater than or equal to a preset similarity threshold; and take the recognized character data set with the highest similarity among the at least one screened recognized character data set as the target character data set.
7. The apparatus of any one of claims 5 to 6, wherein the model construction module is further configured to: obtain model parameters of the pre-training model; and apply the model parameters to a pre-constructed neural network model to obtain the target training model.
8. The apparatus of any one of claims 5 to 6, wherein the model generation module is further configured to: perform gamma conversion processing and histogram equalization processing on the images in the target training data set; perform size unification on the images subjected to the gamma conversion processing and the histogram equalization processing; and input the size-unified images into the target training model for training.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 4.
10. A computer readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110787681.0A CN113469092B (en) | 2021-07-13 | 2021-07-13 | Character recognition model generation method, device, computer equipment and storage medium |
PCT/CN2022/104107 WO2023284608A1 (en) | 2021-07-13 | 2022-07-06 | Character recognition model generating method and apparatus, computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110787681.0A CN113469092B (en) | 2021-07-13 | 2021-07-13 | Character recognition model generation method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113469092A CN113469092A (en) | 2021-10-01 |
CN113469092B true CN113469092B (en) | 2023-09-08 |
Family
ID=77879893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110787681.0A Active CN113469092B (en) | 2021-07-13 | 2021-07-13 | Character recognition model generation method, device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113469092B (en) |
WO (1) | WO2023284608A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469092B (en) * | 2021-07-13 | 2023-09-08 | 深圳思谋信息科技有限公司 | Character recognition model generation method, device, computer equipment and storage medium |
CN113971806B (en) * | 2021-10-26 | 2023-05-05 | 北京百度网讯科技有限公司 | Model training and character recognition method, device, equipment and storage medium |
CN116664966B (en) * | 2023-03-27 | 2024-02-20 | 北京鹰之眼智能健康科技有限公司 | Infrared image processing system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871847A (en) * | 2019-03-13 | 2019-06-11 | 厦门商集网络科技有限责任公司 | OCR recognition method and terminal |
CN110377587A (en) * | 2019-07-15 | 2019-10-25 | 腾讯科技(深圳)有限公司 | Machine learning-based migration data determination method, apparatus, device, and medium |
CN110503204A (en) * | 2018-05-17 | 2019-11-26 | 国际商业机器公司 | Identifying transfer models for machine learning tasks |
KR20200013299A (en) * | 2018-07-30 | 2020-02-07 | 주식회사 한글과컴퓨터 | Apparatus for recognizing character by comparing original image and generated image and operating method thereof |
CN111461238A (en) * | 2020-04-03 | 2020-07-28 | 讯飞智元信息科技有限公司 | Model training method, character recognition method, device, equipment and storage medium |
CN111738269A (en) * | 2020-08-25 | 2020-10-02 | 北京易真学思教育科技有限公司 | Model training method, image processing device, model training apparatus, and storage medium |
CN112465012A (en) * | 2020-11-25 | 2021-03-09 | 创新奇智(南京)科技有限公司 | Machine learning modeling method and device, electronic equipment and readable storage medium |
WO2021120752A1 (en) * | 2020-07-28 | 2021-06-24 | 平安科技(深圳)有限公司 | Region-based self-adaptive model training method and device, image detection method and device, and apparatus and medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10163022B1 (en) * | 2017-06-22 | 2018-12-25 | StradVision, Inc. | Method for learning text recognition, method for recognizing text using the same, and apparatus for learning text recognition, apparatus for recognizing text using the same |
CN108446621A (en) * | 2018-03-14 | 2018-08-24 | 平安科技(深圳)有限公司 | Bank slip recognition method, server and computer readable storage medium |
CN112307858A (en) * | 2019-08-30 | 2021-02-02 | 北京字节跳动网络技术有限公司 | Image recognition and processing method, device, equipment and storage medium |
CN113469092B (en) * | 2021-07-13 | 2023-09-08 | 深圳思谋信息科技有限公司 | Character recognition model generation method, device, computer equipment and storage medium |
- 2021-07-13: CN application CN202110787681.0A granted as CN113469092B (status: active)
- 2022-07-06: PCT application PCT/CN2022/104107 published as WO2023284608A1 (status: unknown)
Also Published As
Publication number | Publication date |
---|---|
WO2023284608A1 (en) | 2023-01-19 |
CN113469092A (en) | 2021-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860670B (en) | Domain adaptive model training method, image detection method, device, equipment and medium | |
JP7236545B2 (en) | Video target tracking method and apparatus, computer apparatus, program | |
CN113469092B (en) | Character recognition model generation method, device, computer equipment and storage medium | |
Wang et al. | Detect globally, refine locally: A novel approach to saliency detection | |
US11915514B2 (en) | Method and apparatus for detecting facial key points, computer device, and storage medium | |
WO2019100724A1 (en) | Method and device for training multi-label classification model | |
CN109344742B (en) | Feature point positioning method and device, storage medium and computer equipment | |
WO2020228446A1 (en) | Model training method and apparatus, and terminal and storage medium | |
US20200356818A1 (en) | Logo detection | |
CN110930417B (en) | Training method and device for image segmentation model, and image segmentation method and device | |
CN110020582B (en) | Face emotion recognition method, device, equipment and medium based on deep learning | |
US20230021661A1 (en) | Forgery detection of face image | |
CN111968134B (en) | Target segmentation method, device, computer readable storage medium and computer equipment | |
WO2018100668A1 (en) | Image processing device, image processing method, and image processing program | |
CN113706564A (en) | Meibomian gland segmentation network training method and device based on multiple supervision modes | |
WO2022194079A1 (en) | Sky region segmentation method and apparatus, computer device, and storage medium | |
Ling et al. | Human object inpainting using manifold learning-based posture sequence estimation | |
CN111435448B (en) | Image saliency object detection method, device, equipment and medium | |
Wang et al. | Head pose estimation in complex environment based on four-branch feature selective extraction and regional information exchange fusion network | |
Zhang et al. | Augmented visual feature modeling for matching in low-visibility based on cycle-labeling of Superpixel Flow | |
CN112101386B (en) | Text detection method, device, computer equipment and storage medium | |
Ebanesar et al. | Human Ear Recognition Using Convolutional Neural Network | |
CN115984583B (en) | Data processing method, apparatus, computer device, storage medium, and program product | |
CN112307908B (en) | Video semantic extraction method and device | |
CN112101386A (en) | Text detection method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
Inventor after: Sun Kun; Yao Xufeng; Shen Xiaoyong; Lv Jiangbo
Inventor before: Sun Kun; Yao Xufeng; Yu Bei; Jia Jiaya; Shen Xiaoyong; Lv Jiangbo
GR01 | Patent grant | ||