CN116612482A - Handwriting formula recognition system and method - Google Patents

Handwriting formula recognition system and method

Info

Publication number
CN116612482A
Authority
CN
China
Prior art keywords
coarse
decoder
recognition
handwriting
symbols
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310580823.5A
Other languages
Chinese (zh)
Inventor
冯桂焕
张欣宇
应瀚
陶冶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202310580823.5A priority Critical patent/CN116612482A/en
Publication of CN116612482A publication Critical patent/CN116612482A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a handwriting formula recognition system and method, relates to the technical field of image recognition, and solves the problem in the prior art that symbols with similar appearance cannot be distinguished, resulting in poor handwritten mathematical formula recognition performance. Features are extracted from the image of the handwritten formula; fine-grained recognition is performed on the extracted features by a first decoder to generate a LaTeX sequence; and coarse-grained recognition is performed on the extracted features by a second decoder to generate a coarse-grained class sequence.

Description

Handwriting formula recognition system and method
Technical Field
The present invention relates to the field of image recognition technologies, and in particular to a handwriting formula recognition system and method.
Background
A mathematical formula is a tool and a knowledge carrier, used in almost every scientific and technical discipline and across society. Handwriting remains popular as a natural and fluid way of recording. Handwritten mathematical formula recognition has therefore become an important task in pattern recognition and plays an important role in fields such as intelligent education, academic writing aids, and office automation.
In handwritten mathematical formulas, many characters are quite similar in appearance and are sometimes difficult even for people to distinguish. For example, "2" is easily misidentified as "z", "r" is easily misidentified as "γ", and uppercase "X" is easily confused with its lowercase "x" because the two share the same shape; similar pairs include "C/c" and "K/k". In addition, compared with ordinary one-dimensional text, a mathematical formula has a complex two-dimensional structure. From simple arithmetic to complex calculus, the combination and arrangement of symbols follow certain grammar rules. This rich semantic information makes handwritten formula recognition more difficult.
Encoder-decoder architectures are widely used in recent handwritten mathematical formula recognition methods, which cast recognition as an image-to-sequence conversion problem: given a handwritten formula, such methods predict its corresponding token sequence. To improve recognition performance, most methods refine the model structure. Some work also introduces other tasks alongside handwritten mathematical formula recognition and studies the relation between these tasks and the recognition task. For example, Bohan et al. introduced a symbol counting task; their system predicts the number of each symbol in the formula while generating the formula's LaTeX sequence. Thanh-Nghia et al. proposed the task of predicting whether each symbol appears in the formula, and their experiments indicate that this task can improve the performance of handwritten mathematical formula recognition. However, such research has only examined the effect of a single task on handwritten mathematical formula recognition; how the performance changes when multiple tasks are introduced has not yet been studied. Moreover, although some mathematical symbols are similar in appearance, they belong to different coarse-grained categories: for example, 2 is a digit while z is a lowercase letter, and C is an uppercase letter while c is a lowercase letter.
Therefore, how to distinguish symbols with similar appearance and improve the recognition performance of the handwritten mathematical formula becomes a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention provides a handwriting formula recognition system and a handwriting formula recognition method, which are used for solving the problem that the handwriting mathematical formula recognition performance is poor because symbols with similar appearance cannot be distinguished in the prior art.
In order to achieve the above object, the present invention provides a handwriting formula recognition system, including: an image position encoder, a first decoder, a second decoder, and an auxiliary task module, wherein
the image position encoder is used for extracting features from the image of the handwritten formula;
the first decoder is used for performing fine-grained recognition based on the extracted features to generate a LaTeX sequence;
the second decoder is used for performing coarse-grained recognition based on the extracted features to generate a coarse-grained class sequence;
the first decoder is identical in structure to the second decoder.
Preferably, the system further comprises:
the auxiliary task module is used for completing tasks related to handwriting formula recognition, the related tasks comprising: predicting the number of all symbols in the formula, and predicting whether all symbols appear in the formula.
Preferably, the second decoder includes:
the coarse-granularity dividing module, used for dividing the mathematical symbols in the data set into a plurality of groups of coarse-grained categories, wherein symbols with similar shapes belong to different coarse-grained categories and symbols that are always used together belong to the same coarse-grained category.
Preferably, the first decoder and the second decoder adopt a bidirectional training strategy and output two prediction results at each time step.
In order to achieve the above object, the present invention also discloses a handwriting formula recognition method, which includes: extracting features of the picture of the handwriting formula;
performing fine-grained recognition based on the extracted features to generate a LaTeX sequence;
and performing coarse-grained recognition based on the extracted features to generate a coarse-grained class sequence.
Preferably, the generation of the coarse-grained class sequence is preceded by:
dividing the mathematical symbols in a dataset into a plurality of groups of coarse-grained categories, wherein symbols with similar shapes belong to different coarse-grained categories and symbols that are always used together belong to the same coarse-grained category.
Preferably, during the training process, other tasks related to handwritten mathematical formula recognition affect handwritten mathematical formula recognition by affecting parameters of the shared encoder.
Preferably, the loss function L used in the training process is the sum of the cross-entropy losses of the fine-grained recognition task and the coarse-grained recognition task:
$L = L_{\mathrm{HMER}} + \lambda_1 L_{\mathrm{GCRT}}$
where $L_{\mathrm{HMER}}$ is the loss function of the fine-grained recognition task and $L_{\mathrm{GCRT}}$ is the loss function of the coarse-grained recognition task.
Preferably, the method further comprises: predicting the number of each symbol class using a Counting-Aware Network.
Preferably, the Counting-Aware Network comprises a fully connected layer, a Dropout layer and a second fully connected layer connected in sequence, wherein the number of nodes of the second fully connected layer is equal to the number of symbol classes.
Compared with the prior art, the handwriting formula recognition system and method provided by the invention have the following beneficial effects: the method can distinguish symbols with similar appearance in the handwriting formula, and improves the recognition performance of the handwriting mathematical formula.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a coarse-granularity recognition task of a handwritten mathematical formula provided by the invention;
FIG. 2 is a framework diagram of handwritten mathematical formula recognition provided by the invention;
FIG. 3 is a diagram of a multitasking framework for handwriting mathematical formula recognition provided by the present invention;
fig. 4 is a schematic diagram of an error correction case provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A handwriting formula recognition system and method according to the present invention will be described below with reference to fig. 1 to 4.
As shown in fig. 1, the present embodiment provides a handwriting formula recognition system, including: an image position encoder, a first decoder, a second decoder, and an auxiliary task module, wherein
the image position encoder is used for extracting features from the image of the handwritten formula;
the first decoder is used for performing fine-grained recognition based on the extracted features to generate a LaTeX sequence;
the second decoder is used for performing coarse-grained recognition based on the extracted features to generate a coarse-grained class sequence;
the first decoder is identical in structure to the second decoder.
Specifically, the image position encoder is connected to the first decoder and to the second decoder.
Further, as shown in fig. 2, the image position encoder is a feature extractor consisting of 4 DenseNet blocks.
Since the coarse-grained class recognition task, like handwritten mathematical formula recognition, requires sufficient context information to generate a sequence, the first decoder and the second decoder have the same structure and are both Transformer decoders. Because the Transformer lacks attention coverage information, an ARM module is added to the Transformer decoder to introduce attention coverage information into the model, thereby improving the accuracy of handwritten formula recognition.
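To make the shared-encoder, dual-decoder structure concrete, the following is a minimal sketch under stated assumptions rather than the authors' implementation: the small convolutional stack stands in for the DenseNet feature extractor, the vocabulary sizes, layer counts and head counts are placeholders, and positional encodings, the ARM module and attention masks are omitted for brevity.

    import torch.nn as nn

    class DualDecoderHMER(nn.Module):
        """Shared image encoder feeding two structurally identical Transformer decoders:
        one for the fine-grained LaTeX sequence, one for the coarse-grained class sequence."""
        def __init__(self, d_model=256, n_latex=113, n_coarse=23, n_layers=3):
            super().__init__()
            # Simplified convolutional stand-in for the DenseNet image position encoder.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
            )
            layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
            self.latex_dec = nn.TransformerDecoder(layer, n_layers)    # fine-grained branch
            self.coarse_dec = nn.TransformerDecoder(layer, n_layers)   # coarse-grained branch, same structure
            self.latex_emb = nn.Embedding(n_latex, d_model)
            self.latex_out = nn.Linear(d_model, n_latex)
            self.coarse_emb = nn.Embedding(n_coarse, d_model)
            self.coarse_out = nn.Linear(d_model, n_coarse)

        def forward(self, images, latex_in, coarse_in):
            f = self.cnn(images)                      # (B, d_model, H', W')
            f = f.flatten(2).transpose(1, 2)          # (B, H'*W', d_model), i.e. the feature matrix F'
            latex_logits = self.latex_out(self.latex_dec(self.latex_emb(latex_in), f))
            coarse_logits = self.coarse_out(self.coarse_dec(self.coarse_emb(coarse_in), f))
            return latex_logits, coarse_logits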
The final loss function L is the sum of the cross-entropy losses of the two sequence prediction tasks above:
$L = L_{\mathrm{HMER}} + \lambda_1 L_{\mathrm{GCRT}}$ (1)
where $\lambda_1 = 0.5$, $L_{\mathrm{HMER}}$ is the loss of the fine-grained handwritten mathematical formula recognition task, and $L_{\mathrm{GCRT}}$ is the loss of the coarse-grained recognition task.
In a further optimization, as shown in fig. 3, the system further includes:
the auxiliary task module, used for completing tasks related to handwriting formula recognition, the related tasks comprising: predicting the number of all symbols in the formula, and predicting whether all symbols appear in the formula.
Further, the second decoder includes:
the coarse-granularity dividing module, used for dividing the mathematical symbols in the data set into a plurality of groups of coarse-grained categories, wherein symbols with similar shapes belong to different coarse-grained categories and symbols that are always used together belong to the same coarse-grained category.
In a further optimization, the first decoder and the second decoder adopt a bidirectional training strategy and output two prediction results (L2R and R2L) at each time step.
The position encoding in fig. 2 is a two-dimensional positional encoding applied to the features output by the encoder, while the word position encoding is a one-dimensional positional encoding applied to the prediction results previously output by the decoder.
The invention also discloses a handwriting formula recognition method, which comprises the following steps:
extracting features of the picture of the handwriting formula;
performing fine-grained recognition based on the extracted features to generate a LaTeX sequence;
and performing coarse-grained recognition based on the extracted features to generate a coarse-grained class sequence.
Although some mathematical symbols are similar in appearance, they belong to different coarse-grained categories; for example, 2 is a digit while z is a lowercase letter, and C is an uppercase letter while c is a lowercase letter. Therefore, this embodiment learns the differences between coarse-grained categories in order to distinguish symbols with similar appearance.
Further, extracting features from the picture of the handwritten formula comprises:
extracting features from the original picture of the handwritten formula to obtain a feature matrix $F$; then computing the image position encodings of all features in the feature matrix and adding the computed encodings to the feature matrix $F$ to obtain the final output $F'$.
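The exact form of the image position encoding is not specified above; the sketch below assumes the common two-dimensional sinusoidal scheme, in which half of the channels encode the row index and the other half the column index, and the result is added to the feature matrix F.

    import math
    import torch

    def image_positional_encoding(height, width, d_model):
        """2-D sinusoidal positional encoding (assumed form): channels [0, d_model/2)
        encode the row index, channels [d_model/2, d_model) encode the column index."""
        assert d_model % 4 == 0
        half = d_model // 2
        div = torch.exp(torch.arange(0, half, 2).float() * (-math.log(10000.0) / half))
        pos_h = torch.arange(height).float().unsqueeze(1) * div   # (H, half/2)
        pos_w = torch.arange(width).float().unsqueeze(1) * div    # (W, half/2)
        pe = torch.zeros(height, width, d_model)
        pe[..., 0:half:2] = pos_h.sin()[:, None, :].expand(height, width, -1)
        pe[..., 1:half:2] = pos_h.cos()[:, None, :].expand(height, width, -1)
        pe[..., half::2] = pos_w.sin()[None, :, :].expand(height, width, -1)
        pe[..., half + 1::2] = pos_w.cos()[None, :, :].expand(height, width, -1)
        return pe

For a feature map F of shape (H, W, d_model), the encoder output would then be F' = F + image_positional_encoding(H, W, d_model).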
In a further optimization, before generating the coarse-grained class sequence, the method comprises the following steps:
dividing the mathematical symbols in a dataset into a plurality of groups of coarse-grained categories, wherein symbols with similar shapes belong to different coarse-grained categories and symbols that are always used together belong to the same coarse-grained category.
As shown in fig. 1, the coarse-grained class sequence contains higher-level mathematical semantic information than the LaTeX sequence. For example, to learn the grammar rules of the four arithmetic operations from LaTeX-labeled sequences alone, a large number of labeled samples such as "3+2", "4-8" and "100\div5" are required, whereas the coarse-grained class sequence "number operator number" directly tells the model the grammar rule of the four operations. Such higher-level mathematical grammar rules help the model learn the complex semantic information in formulas and generate LaTeX sequences that conform to mathematical grammar.
As shown by the mathematical symbols contained in each coarse-grained category in Table 1, the 110 mathematical symbols in the CROHME dataset are divided into 23 coarse-grained categories. Symbols with similar shapes belong to different coarse-grained categories, e.g. '5' belongs to 'number' while 's' belongs to 'lowercase'. In addition, symbols that are always used together are placed in the same coarse-grained category: for example '\{', '\}', '(' and ')' always appear in pairs, so they are classified into the coarse-grained category 'bracket'. Although the LaTeX control characters '{' and '}' also appear in pairs, they are placed in a separate category because they do not correspond to symbols that actually exist in the handwritten formula. The recognition of spatial relationships is important in handwritten mathematical formula recognition, and categories 4 to 8 represent different spatial relationships, so they are kept separate, which helps the model distinguish between different spatial relationships.
TABLE 1
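Since the contents of Table 1 are not reproduced here, the following sketch only illustrates the kind of symbol-to-category mapping the coarse-granularity dividing module performs; the category names and members shown are illustrative assumptions, not the actual 23 categories.

    # Illustrative, partial coarse-grained division (the real Table 1 covers all 110
    # CROHME symbols in 23 categories; the groups below are assumed examples only).
    COARSE_CLASSES = {
        "number":    list("0123456789"),
        "lowercase": list("abcdefghijklmnopqrstuvwxyz"),
        "uppercase": list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"),
        "operator":  ["+", "-", "=", "\\times", "\\div"],
        "bracket":   ["(", ")", "[", "]", "\\{", "\\}"],   # paired symbols share a class
        "control":   ["{", "}"],                           # LaTeX control characters, own class
    }
    SYMBOL_TO_COARSE = {s: c for c, symbols in COARSE_CLASSES.items() for s in symbols}

    def to_coarse_sequence(latex_tokens):
        """Map a fine-grained LaTeX token sequence to its coarse-grained class sequence."""
        return [SYMBOL_TO_COARSE.get(tok, "other") for tok in latex_tokens]

    # Example: to_coarse_sequence(["3", "+", "2"]) -> ["number", "operator", "number"]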
In a further optimization, generating the coarse-grained class sequence comprises the following steps: at time step $t$, based on $F'$ and the coarse-grained class sequence predicted at time steps 0 to $t-1$, $R_{t-1} = r_0, r_1, r_2, \ldots, r_{t-1}$, the position encoding of each element in $R_{t-1}$ is computed and added to the word embedding vector of each element; then, based on $R_{t-1}$ and $F'$, the coarse-grained class at time $t$ is inferred. New results are predicted repeatedly until the end token EOS is predicted (or, in R2L decoding, until the start token SOS is predicted).
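As an illustration of this prediction loop, the greedy left-to-right decoding sketch below assumes a decoder callable that maps the current prefix R_{t-1} and the features F' to logits for step t; the start/end token ids and the maximum length are placeholders.

    import torch

    def greedy_decode_coarse(decoder, feats, sos_id, eos_id, max_len=200):
        """Greedy L2R decoding of the coarse-grained class sequence.
        `decoder(prefix, feats)` is assumed to return logits of shape (1, t, n_classes)."""
        prefix = torch.tensor([[sos_id]])                 # R_0: start token only
        for _ in range(max_len):
            logits = decoder(prefix, feats)               # uses R_{t-1} and F'
            next_id = logits[:, -1].argmax(dim=-1, keepdim=True)
            prefix = torch.cat([prefix, next_id], dim=1)
            if next_id.item() == eos_id:                  # R2L decoding stops at SOS instead
                break
        return prefix[0, 1:].tolist()                     # predicted coarse-grained class ids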
In a further optimization, the loss function L used in the training process is the sum of the cross-entropy losses of the fine-grained recognition task and the coarse-grained recognition task:
$L = L_{\mathrm{HMER}} + \lambda_1 L_{\mathrm{GCRT}}$ (1)
where $\lambda_1 = 0.5$, $L_{\mathrm{HMER}}$ is the loss function of the fine-grained recognition task, and $L_{\mathrm{GCRT}}$ is the loss function of the coarse-grained recognition task.
In a further optimization, the method further comprises: predicting the number of all symbols in the formula and predicting whether all symbols appear in the formula. During training, other tasks related to handwritten mathematical formula recognition affect recognition by affecting the parameters of the shared encoder, so the loss function is obtained by summing the losses of each task actually involved in training:
$L_{\mathrm{MultiTask}} = L_{\mathrm{HMER}} + \alpha_1 \lambda_1 L_{\mathrm{GCRT}} + \alpha_2 \lambda_2 L_2 + \cdots + \alpha_N \lambda_N L_N$ (2)
where $\lambda_N < 1$ (preferably 0.5), $L_{\mathrm{HMER}}$ is the loss function of the fine-grained recognition task, $L_{\mathrm{GCRT}}$ is the loss function of the coarse-grained recognition task, $\alpha_N$ equals 1 or 0 depending on whether the corresponding task participates in training, and $L_N$ represents the loss of the N-th auxiliary task.
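A minimal sketch of how formulas (1) and (2) can be combined in code follows; the function name and the way auxiliary losses are passed in are assumptions, and the cross-entropy calls assume logits of shape (batch, length, classes) with integer class targets.

    import torch.nn.functional as F

    def multitask_loss(latex_logits, latex_tgt, coarse_logits, coarse_tgt,
                       aux_losses=(), alphas=(), lambdas=()):
        """L_MultiTask = L_HMER + a1*l1*L_GCRT + a2*l2*L_2 + ... (formula (2));
        with no extra auxiliary tasks this reduces to formula (1) with lambda_1 = 0.5."""
        l_hmer = F.cross_entropy(latex_logits.transpose(1, 2), latex_tgt)
        l_gcrt = F.cross_entropy(coarse_logits.transpose(1, 2), coarse_tgt)
        total = l_hmer + 0.5 * l_gcrt                     # lambda_1 = 0.5, alpha_1 = 1
        for a, lam, l_aux in zip(alphas, lambdas, aux_losses):
            total = total + a * lam * l_aux               # alpha in {0, 1} gates each task
        return total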
Further, the number of each symbol class is predicted using the Counting-Aware Network (CAN): the extracted feature $F'$ is used as the input to the CAN, and multi-scale features are extracted by two parallel convolution branches with convolution kernels of different sizes (3×3 and 5×5). The two parallel convolution branches have similar structures; after the above convolution layers, the feature information is further enhanced with channel attention. After the enhanced features are obtained, the number of channels is reduced to C using a 1×1 convolution, where C is the number of symbol classes (110). After the 1×1 convolution, a sigmoid function produces values in the range (0, 1), and the resulting matrix is summed to obtain a count vector $V \in \mathbb{R}^{1 \times C}$. Finally, the two count vectors obtained from the two convolution branches are combined to obtain the final count vector $V_f$.
$L_{\mathrm{Count}}$ is the smooth L1 loss between the predicted and true counts, $L_{\mathrm{Count}} = \mathrm{SmoothL1}(V, \hat{V})$,
where $\hat{V}$ represents the true number of occurrences of each symbol in the formula and $V$ represents the predicted count.
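The counting branch described above can be sketched as follows; the exact form of the channel attention and the way the two branch outputs are combined (averaged here) are assumptions, while the parallel 3×3 and 5×5 convolutions, the 1×1 projection to C=110 channels, the sigmoid, the spatial summation and the smooth L1 loss follow the description.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CountingBranch(nn.Module):
        """conv -> channel attention -> 1x1 conv -> sigmoid -> spatial sum."""
        def __init__(self, in_ch, n_symbols=110, kernel_size=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, in_ch, kernel_size, padding=kernel_size // 2)
            self.att = nn.Sequential(nn.AdaptiveAvgPool2d(1),          # assumed channel attention
                                     nn.Conv2d(in_ch, in_ch, 1), nn.Sigmoid())
            self.proj = nn.Conv2d(in_ch, n_symbols, 1)

        def forward(self, feats):
            x = self.conv(feats)
            x = x * self.att(x)                           # channel-attention enhancement
            m = torch.sigmoid(self.proj(x))               # values in (0, 1)
            return m.sum(dim=(2, 3))                      # count vector V of length n_symbols

    class CountingAwareModule(nn.Module):
        def __init__(self, in_ch, n_symbols=110):
            super().__init__()
            self.b3 = CountingBranch(in_ch, n_symbols, 3)
            self.b5 = CountingBranch(in_ch, n_symbols, 5)

        def forward(self, feats):
            return (self.b3(feats) + self.b5(feats)) / 2  # final count vector V_f (assumed average)

    def counting_loss(v_pred, v_true):
        return F.smooth_l1_loss(v_pred, v_true)           # L_Count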
In a further optimization, the Counting-Aware Network (CAN) comprises a fully connected layer, a Dropout layer and a second fully connected layer connected in sequence, wherein the Dropout layer prevents over-fitting and the number of nodes of the second fully connected layer is equal to the number of symbol classes.
The output of the last fully connected layer is passed through a sigmoid function to obtain the existence probability of each symbol. The loss function is defined as follows:
where $t_{c,i}$ takes the value 1 or 0, with 0 meaning that symbol class c does not appear in formula i and 1 the opposite; $y_{c,i}$ is the predicted probability that symbol class c appears in formula i; and C represents the number of symbol classes.
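The existence-prediction head described above (fully connected layer, Dropout, second fully connected layer with one node per symbol class, sigmoid) can be sketched as follows; the hidden width, the dropout rate and the use of a standard binary cross-entropy loss are assumptions, since the exact loss formula is not reproduced in the text.

    import torch.nn as nn

    class SymbolExistenceHead(nn.Module):
        """Predicts, for each of the C symbol classes, the probability y_{c,i} that
        the symbol appears in formula i; trained against the 0/1 targets t_{c,i}."""
        def __init__(self, in_dim, n_symbols=110, hidden=256, p_drop=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden),
                nn.Dropout(p_drop),                       # guards against over-fitting
                nn.Linear(hidden, n_symbols),             # node count = number of symbol classes
                nn.Sigmoid(),
            )
            self.criterion = nn.BCELoss()                 # assumed binary cross-entropy loss

        def forward(self, pooled_feats, targets=None):
            probs = self.net(pooled_feats)                # existence probability of each symbol
            if targets is None:
                return probs
            return probs, self.criterion(probs, targets)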
To further verify this scheme, the following experiments were performed:
the first alohme dataset employed is the most widely used common dataset in the field of handwritten mathematical formula recognition. The training set contains 8836 handwritten mathematical expressions. Three test sets: CROHME 2014, 2016, 2019 contains 986, 1147, 1199 handwritten mathematical expressions, respectively. Expression recognition rates (ExpRate) is an evaluation index widely used in recognition of handwritten mathematical formulas. It is defined as the percentage of correctly identifying the mathematical expression. Exprate 1 indicates ExpRate, expRate when Exprate can tolerate at most one symbol level error.
Following CoMER, we perform data augmentation by scaling the original pictures. During training, the batch size is set to 8. The optimizer is SGD with the weight decay set to $10^{-4}$ and the momentum set to 0.9, and the learning rate of the model is 0.08. Our experiments were performed on a single NVIDIA GeForce RTX 3090 GPU. In addition, λ in formulas (1) and (2) is set to 0.5.
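For reference, a minimal sketch of the optimizer configuration described above; the helper function name is an assumption.

    import torch

    def make_optimizer(model):
        """SGD configuration described above: lr 0.08, momentum 0.9, weight decay 1e-4."""
        return torch.optim.SGD(model.parameters(), lr=0.08,
                               momentum=0.9, weight_decay=1e-4)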
Table 2 shows the performance of state-of-the-art methods with data augmentation on the CROHME dataset, and Table 3 shows the performance of state-of-the-art methods without data augmentation:
TABLE 2
TABLE 3
It can be observed from Table 2 that the handwritten mathematical formula recognition method provided by this scheme improves recognition accuracy. Compared with the CoMER baseline, the method improves ExpRate by 1.12%/2.45%/1.84% on the CROHME 2014/2016/2019 test sets respectively. Fig. 4 shows real cases from the test sets (to the right of each formula picture are the recognition results, where the upper row is the baseline result and the lower row is the result of our method). As can be seen from fig. 4 (a), 4 (b) and 4 (c), the method corrects recognition errors caused by symbols with similar shapes, correctly distinguishing, for example, 'P' from 'p', '0' from 'o', and 'q' from '9'. As can be seen from fig. 4 (f) and 4 (g), the method corrects mathematical syntax errors, for example correcting a misrecognized subexpression to 'C(P+1)' and recovering the intended operator in an expression misread as 'xfy'. As can be seen from fig. 4 (d) and 4 (e), our method also corrects syntax errors in which brackets do not appear in pairs.
In summary, it can be seen from Table 2 that the method achieves good results compared with other state-of-the-art methods that use data augmentation; note that CAN-DWAP and CAN-ABM in Table 2 use more complex data augmentation than ours. To observe the effect of the method without data augmentation, the augmentation was removed and the model retrained; the recognition results are shown in Table 3. Compared with other state-of-the-art methods without data augmentation, the method achieves the best results to date.
The behavior of handwritten mathematical formula recognition under the influence of a plurality of related tasks is shown in table 4:
TABLE 4
Here HMER denotes handwritten mathematical formula recognition, GCRT denotes the coarse-grained recognition task for handwritten mathematical formulas, Count denotes predicting the number of each symbol in the formula, and Exist denotes predicting whether each symbol appears in the formula. HMER is the primary task, and GCRT, Count and Exist are tasks related to HMER.
It can be seen from table 4 that with the present method, a very good recognition effect can be achieved even in the case of a plurality of auxiliary tasks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A handwriting formula recognition system, comprising: an image position encoder, a first decoder, a second decoder, and an auxiliary task module, wherein
the image position encoder is used for extracting features from the image of the handwritten formula;
the first decoder is used for performing fine-grained recognition based on the extracted features to generate a LaTeX sequence;
the second decoder is used for performing coarse-grained recognition based on the extracted features to generate a coarse-grained class sequence;
the first decoder is identical in structure to the second decoder.
2. The handwriting formula recognition system of claim 1, further comprising:
the auxiliary task module, used for completing tasks related to handwriting formula recognition, the related tasks comprising: predicting the number of all symbols in the formula, and predicting whether all symbols appear in the formula.
3. The handwriting formula recognition system of claim 1, wherein the second decoder comprises:
the coarse-granularity dividing module, used for dividing the mathematical symbols in the data set into a plurality of groups of coarse-grained categories, wherein symbols with similar shapes belong to different coarse-grained categories and symbols that are always used together belong to the same coarse-grained category.
4. The handwriting formula recognition system of claim 1, wherein the first decoder and the second decoder employ a bidirectional training strategy and output two prediction results at each time step.
5. A handwriting formula recognition method, comprising:
extracting features of the picture of the handwriting formula;
performing fine-grained recognition based on the extracted features to generate a LaTeX sequence;
and performing coarse-grained recognition based on the extracted features to generate a coarse-grained class sequence.
6. The method of claim 5, wherein generating the coarse-grained class sequence is preceded by:
dividing the mathematical symbols in a dataset into a plurality of groups of coarse-grained categories, wherein symbols with similar shapes belong to different coarse-grained categories and symbols that are always used together belong to the same coarse-grained category.
7. The method of claim 5, wherein,
during training, other tasks related to handwritten mathematical formula recognition affect handwritten mathematical formula recognition by affecting parameters of the shared encoder.
8. The method of claim 5, wherein,
the loss function L used in the training process is the sum of the cross-entropy losses of the fine-grained recognition task and the coarse-grained recognition task:
$L = L_{\mathrm{HMER}} + \lambda_1 L_{\mathrm{GCRT}}$
where $L_{\mathrm{HMER}}$ is the loss function of the fine-grained recognition task and $L_{\mathrm{GCRT}}$ is the loss function of the coarse-grained recognition task.
9. The handwriting formula recognition method according to claim 8, further comprising: predicting the number of each symbol class using a Counting-Aware Network.
10. The method of claim 9, wherein,
the Counting-Aware Network comprises a fully connected layer, a Dropout layer and a second fully connected layer connected in sequence, wherein the number of nodes of the second fully connected layer is equal to the number of symbol classes.
CN202310580823.5A 2023-05-22 2023-05-22 Handwriting formula recognition system and method Pending CN116612482A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310580823.5A CN116612482A (en) 2023-05-22 2023-05-22 Handwriting formula recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310580823.5A CN116612482A (en) 2023-05-22 2023-05-22 Handwriting formula recognition system and method

Publications (1)

Publication Number Publication Date
CN116612482A true CN116612482A (en) 2023-08-18

Family

ID=87677705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310580823.5A Pending CN116612482A (en) 2023-05-22 2023-05-22 Handwriting formula recognition system and method

Country Status (1)

Country Link
CN (1) CN116612482A (en)

Similar Documents

Publication Publication Date Title
CN110084239B (en) Method for reducing overfitting of network training during off-line handwritten mathematical formula recognition
CN109190131B (en) Neural machine translation-based English word and case joint prediction method thereof
CN111985239B (en) Entity identification method, entity identification device, electronic equipment and storage medium
CN109492202B (en) Chinese error correction method based on pinyin coding and decoding model
CN106776538A (en) The information extracting method of enterprise&#39;s noncanonical format document
CN116629275B (en) Intelligent decision support system and method based on big data
CN111310441A (en) Text correction method, device, terminal and medium based on BERT (binary offset transcription) voice recognition
CN110188827B (en) Scene recognition method based on convolutional neural network and recursive automatic encoder model
CN113571124B (en) Method and device for predicting ligand-protein interaction
CN113393370A (en) Method, system and intelligent terminal for migrating Chinese calligraphy character and image styles
CN112329767A (en) Contract text image key information extraction system and method based on joint pre-training
CN114528928A (en) Two-training image classification algorithm based on Transformer
CN115761764A (en) Chinese handwritten text line recognition method based on visual language joint reasoning
CN112052663B (en) Customer service statement quality inspection method and related equipment
CN111737470B (en) Text classification method
CN116258917B (en) Method and device for classifying malicious software based on TF-IDF transfer entropy
CN116977844A (en) Lightweight underwater target real-time detection method
CN114357186B (en) Entity extraction method, device, medium and equipment based on interactive probability coding
CN116612482A (en) Handwriting formula recognition system and method
CN116521863A (en) Tag anti-noise text classification method based on semi-supervised learning
CN112735604B (en) Novel coronavirus classification method based on deep learning algorithm
CN114416991A (en) Method and system for analyzing text emotion reason based on prompt
CN115331073A (en) Image self-supervision learning method based on TransUnnet architecture
CN114757154A (en) Job generation method, device and equipment based on deep learning and storage medium
CN113947083A (en) Document level named entity identification method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination