CN115984868A - Text processing method, device, medium and equipment - Google Patents

Text processing method, device, medium and equipment

Info

Publication number: CN115984868A
Authority: CN (China)
Prior art keywords: features, text, processing, feature, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211679873.0A
Other languages: Chinese (zh)
Inventors: 张家鑫, 黄明鑫, 黄灿, 刘禹良
Current Assignee: Douyin Vision Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Douyin Vision Co Ltd
Application filed by Douyin Vision Co Ltd
Priority to CN202211679873.0A (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Publication of CN115984868A

Abstract

The present disclosure relates to a text processing method, apparatus, medium, and device. The method comprises: receiving a text image to be processed; and inputting the text image into a text processing model to obtain a detection result and a recognition result corresponding to the text image. The text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer. The feature extraction layer is used for performing feature extraction on the text image to obtain image features corresponding to the text image; the coding layer codes the image features to obtain detection features and recognition features corresponding to the text image, and splices the detection features and the recognition features to obtain image splicing features; the decoding layer decodes the image splicing features to obtain processing features corresponding to the text image; and the prediction layer performs prediction based on the processing features to obtain the detection result and the recognition result corresponding to the text image.

Description

Text processing method, device, medium and equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a text processing method, apparatus, medium, and device.
Background
At present, text recognition tasks are generally performed in two stages: in the first stage, a detection model based on image segmentation determines the text regions in an image, and in the second stage, a recognition model based on image recognition recognizes the text content, for example by extracting the text regions from the image for text content recognition. This process involves a large amount of computation; moreover, the interaction between the detection model and the recognition model is insufficient, the multi-stage processing makes the result of the current stage depend too much on the result of the previous stage, and the stability of the final result is insufficient.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a text processing method, including:
receiving a text image to be processed;
inputting the text image into a text processing model to obtain a detection result and a recognition result corresponding to the text image;
wherein the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer; the feature extraction layer is used for performing feature extraction on the text image to obtain image features corresponding to the text image; the coding layer is used for coding the image features to obtain detection features and recognition features corresponding to the text image, and for splicing the detection features and the recognition features to obtain image splicing features; the decoding layer is used for decoding the image splicing features to obtain processing features corresponding to the text image; and the prediction layer is used for performing prediction based on the processing features to obtain the detection result and the recognition result corresponding to the text image.
In a second aspect, the present disclosure provides a text processing apparatus, the apparatus comprising:
the receiving module is used for receiving a text image to be processed;
the processing module is used for inputting the text image into a text processing model to obtain a detection result and a recognition result corresponding to the text image;
wherein the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer, and the processing module comprises: a first extraction submodule, used for performing feature extraction on the text image through the feature extraction layer to obtain image features corresponding to the text image; a first coding submodule, used for coding the image features through the coding layer to obtain detection features and recognition features corresponding to the text image, and for splicing the detection features and the recognition features to obtain image splicing features; a decoding submodule, used for decoding the image splicing features through the decoding layer to obtain the processing features corresponding to the text image; and a first processing submodule, used for performing prediction based on the processing features through the prediction layer to obtain the detection result and the recognition result corresponding to the text image.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of the method of the first aspect.
According to the above technical solution, the text image can be processed by an end-to-end text processing model to obtain the detection result and the recognition result corresponding to the text image at the same time, so that text processing efficiency can be improved to a certain extent. In addition, a shared decoder can be used in the text processing model to process the detection features and the recognition features, so that multi-modal interaction between the detection features and the recognition features can be realized during decoding, and text processing can combine the detection features and the recognition features simultaneously, thereby improving the efficiency of text processing and the accuracy of the detection results and recognition results corresponding to the text image.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flow diagram of a method of text processing provided in accordance with one embodiment of the present disclosure;
FIG. 2 is a block diagram illustrating the structure of a decoding layer in a text processing model according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of a text processing apparatus provided in accordance with one embodiment of the present disclosure;
FIG. 4 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed of the type, scope of use, usage scenarios, etc. of the personal information involved in the present disclosure, and the user's authorization should be obtained in an appropriate manner, in accordance with relevant laws and regulations.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform the user that the requested operation will require acquiring and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server or storage medium, that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control for the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and is not intended to limit the implementation of the present disclosure, and other ways of satisfying the relevant laws and regulations may be applied to the implementation of the present disclosure.
Meanwhile, it is understood that the data involved in the present technical solution (including but not limited to the data itself, the acquisition or use of the data) should comply with the requirements of the corresponding laws and regulations and the related regulations.
Fig. 1 is a flowchart of a text processing method according to an embodiment of the present disclosure; as shown in fig. 1, the method includes:
in step 11, a text image to be processed is received. The text image to be processed may be an image shot by a user to identify text content therein, or an image derived from or downloaded from a web page, which is not limited to this.
In step 12, the text image is input into a text processing model, and a detection result and a recognition result corresponding to the text image are obtained, wherein the text processing model includes a feature extraction layer, a coding layer, a decoding layer and a prediction layer.
Performing feature extraction on the text image through the feature extraction layer to obtain image features corresponding to the text image; coding the image features through the coding layer to obtain detection features and identification features corresponding to the text image, and splicing the detection features and the identification features to obtain image splicing features; decoding the image splicing characteristics through the decoding layer to obtain processing characteristics corresponding to the text image; and predicting based on the processing characteristics through the prediction layer to obtain a detection result and an identification result corresponding to the text image.
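To make the data flow between the four layers concrete, the following is a minimal PyTorch sketch of the pipeline described above; the class name, module interfaces and tensor layouts are illustrative assumptions rather than the patent's implementation:

```python
import torch
import torch.nn as nn

class TextProcessingModel(nn.Module):
    """Illustrative end-to-end pipeline: extract -> code -> decode -> predict."""
    def __init__(self, feature_extractor, coder, decoder, predictor):
        super().__init__()
        self.feature_extractor = feature_extractor  # feature extraction layer
        self.coder = coder                          # coding layer
        self.decoder = decoder                      # decoding layer
        self.predictor = predictor                  # prediction layer

    def forward(self, text_image):
        # 1. image features corresponding to the text image
        image_features = self.feature_extractor(text_image)
        # 2. detection and recognition features, spliced per text instance
        detection_feat, recognition_feat = self.coder(image_features)
        spliced = torch.cat([detection_feat, recognition_feat], dim=-2)
        # 3. processing features from the shared decoder
        processing_feat = self.decoder(spliced)
        # 4. detection result and recognition result predicted jointly
        return self.predictor(processing_feat)
```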
Therefore, with the above technical solution, the text image can be processed by an end-to-end text processing model, so that the detection result and the recognition result corresponding to the text image are obtained at the same time, and text processing efficiency can be improved to a certain extent. In addition, a shared decoder may be used in the text processing model to process the detection features and the recognition features, so that multi-modal interaction between the detection features and the recognition features can be realized during decoding, and text processing can combine the detection features and the recognition features simultaneously, which can improve the efficiency of text processing and the accuracy of the detection results and recognition results corresponding to text images.
In a possible embodiment, the performing, by the feature extraction layer, feature extraction on the text image to obtain an image feature corresponding to the text image may include:
and performing feature extraction on the text image based on a multi-scale feature extraction module in the feature extraction layer to obtain multi-scale features.
As an example, the text image may be preprocessed by resizing the input text image to a fixed aspect ratio, to ensure compatibility with and adaptation to input images of various sizes and to improve the applicability of the text processing method. Feature extraction can then be performed on the resized text image.
For example, the multi-scale feature extraction module may be implemented by a ResNet50 network, and multi-scale feature extraction may adopt the extraction manner generally used for the ResNet50 network in the art, which is not described herein again. Through multi-scale feature extraction, feature representations of the text image at multiple scales are obtained, which improves the expressive power of the multi-scale features and provides a more comprehensive feature representation for subsequent text processing.
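A possible sketch of the multi-scale extraction with a ResNet50 backbone, using torchvision's IntermediateLayerGetter helper; the choice of stages (layer2–layer4) and the input size are assumptions:

```python
import torch
from torchvision.models import resnet50
from torchvision.models._utils import IntermediateLayerGetter

# Take the feature maps of several ResNet50 stages as the multi-scale features.
backbone = resnet50(weights=None)
extractor = IntermediateLayerGetter(
    backbone, return_layers={"layer2": "0", "layer3": "1", "layer4": "2"})

image = torch.randn(1, 3, 640, 640)   # text image resized to a fixed size
multi_scale = extractor(image)        # feature maps at strides 8, 16 and 32
for name, feat in multi_scale.items():
    print(name, tuple(feat.shape))    # (1, 512, 80, 80), (1, 1024, 40, 40), ...
```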
Position codes and level codes are added to the multi-scale features, and the features with the position codes and level codes added are converted into a one-dimensional feature representation to obtain the image features.
The position encoding can be implemented based on the positional encoding in the Transformer architecture, and may be configured with any position-encoding scheme used in the Transformer architecture, such as integer labels, binary vectors or periodic functions, which is not limited in this disclosure. Moreover, since feature extraction on the text image yields multi-scale features, feature points located in different feature layers may have the same (w, h) coordinates, so position coding alone is insufficient to accurately represent the position of an element on the multi-scale feature maps. In this embodiment, a scale-level encoding may therefore be added to distinguish the different feature layers: all feature points in the same feature layer correspond to the same level code, and feature points in different feature layers correspond to different level codes. Illustratively, the level encoding may be randomly initialized and then trained with the network, i.e., it is learnable.
Further, before the features are coded by the coding layer, the features with position codes and level codes added may be converted into a one-dimensional feature representation, since the Transformer receives one-dimensional sequence features rather than two-dimensional feature maps. For example, the two-dimensional feature maps corresponding to the text image may be converted into a one-dimensional feature representation by patch embedding, so as to obtain the image features for subsequent coding.
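The following sketch shows one way the position codes and learnable level codes could be added and the feature maps flattened into a one-dimensional sequence; the sinusoidal position-encoding scheme, the three levels and the 256-dimensional features are assumptions:

```python
import math
import torch
import torch.nn as nn

def sine_position_encoding(h, w, dim):
    """2D sinusoidal position code: half the channels for x, half for y."""
    ys = torch.arange(h).float().unsqueeze(1).expand(h, w)
    xs = torch.arange(w).float().unsqueeze(0).expand(h, w)
    freq = torch.exp(torch.arange(0, dim // 2, 2).float()
                     * (-math.log(10000.0) / (dim // 2)))
    return torch.cat([
        torch.sin(xs.unsqueeze(-1) * freq), torch.cos(xs.unsqueeze(-1) * freq),
        torch.sin(ys.unsqueeze(-1) * freq), torch.cos(ys.unsqueeze(-1) * freq),
    ], dim=-1)                                     # (h, w, dim)

dim = 256
level_embed = nn.Embedding(3, dim)                 # one learnable code per level
feature_maps = [torch.randn(1, dim, s, s) for s in (80, 40, 20)]

tokens = []
for lvl, fm in enumerate(feature_maps):
    b, c, h, w = fm.shape
    pe = sine_position_encoding(h, w, c).permute(2, 0, 1).unsqueeze(0)
    fm = fm + pe + level_embed.weight[lvl].view(1, c, 1, 1)  # position + level
    tokens.append(fm.flatten(2).transpose(1, 2))   # (b, h*w, c) token sequence
image_features = torch.cat(tokens, dim=1)          # one-dimensional representation
```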
Therefore, in this technical solution, multi-scale feature extraction on the text image yields more comprehensive features corresponding to the text image; adding position codes to the extracted features ensures position sensitivity in the subsequent text processing and thus the accuracy of the subsequent detection results; and adding level codes distinguishes the different feature layers of the multi-scale features, further improving the comprehensiveness and accuracy of the image features and providing reliable data support for subsequent text processing.
In a possible embodiment, the coding, by the coding layer, of the image features to obtain the detection features and recognition features corresponding to the text image may include:
According to the image features, feature coding is performed based on a Transformer coding layer to obtain coding features corresponding to the text image.
The coding layer may be implemented by a Transformer encoder; for example, it may comprise 6 layers connected in series, where each layer may consist of a deformable attention module and an FFN (Feed Forward Network). In the deformable attention calculation, each feature in the image features can be used as a query to determine its degree of correlation with every other feature in the image features, and the top N features in descending order of correlation are taken as the relevant features of that query, so as to model the relationships between features.
Then, the weights corresponding to the relevant features are determined based on their degrees of correlation; for example, if the weights are determined in proportion to the correlation degrees, the weights of the relevant features sum to 1, and attention weights are thus assigned over the relevant features. These weights are used in the attention calculation, so that during feature coding each feature preferentially attends to the features most related to it, which reduces the amount of computation while effectively ensuring the accuracy and validity of the coding features. The feature map obtained in each layer is then output to the next layer through the FFN, and the features output by the last layer are used as the coding features.
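A simplified sketch of one such coding layer: each query keeps only its top-N most correlated features, normalizes their weights so that they sum to 1, and passes the result through an FFN. This dense top-N formulation stands in for the deformable attention module, and all dimensions are assumptions:

```python
import torch
import torch.nn as nn

class TopNAttentionLayer(nn.Module):
    """One coding layer: top-N sparse attention followed by an FFN."""
    def __init__(self, dim=256, top_n=16, ffn_dim=1024):
        super().__init__()
        self.q_proj, self.k_proj, self.v_proj = (
            nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim))
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_dim), nn.ReLU(), nn.Linear(ffn_dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.top_n = top_n

    def forward(self, x):                       # x: (batch, tokens, dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(1, 2) / x.shape[-1] ** 0.5  # correlation degrees
        topv, topi = scores.topk(self.top_n, dim=-1)   # keep top-N per query
        weights = topv.softmax(dim=-1)                 # weights sum to 1
        # gather the value vectors of the selected relevant features
        gathered = torch.gather(
            v.unsqueeze(1).expand(-1, x.shape[1], -1, -1), 2,
            topi.unsqueeze(-1).expand(-1, -1, -1, x.shape[-1]))
        attended = (weights.unsqueeze(-1) * gathered).sum(dim=2)
        x = self.norm1(x + attended)
        return self.norm2(x + self.ffn(x))      # output feeds the next layer
```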
Each feature in the coding features is predicted based on a coding fully connected layer to obtain predicted position information corresponding to each feature in the coding features, and target position information is determined from the predicted position information.
For example, the coding fully connected layer may be implemented based on a general fully connected layer structure, which is not described in detail herein. For each feature in the coding features, the predicted position information corresponding to the feature and the confidence of that predicted position information may be predicted, for example generating, for each feature, a predicted box and the confidence corresponding to the predicted box. Thereafter, the top m pieces of predicted position information in descending order of confidence may be selected as the target position information, where m can be set according to the actual application scenario, for example m is preset to 100. The predicted position information may be represented by 8 × 25 rectangular boxes.
For each piece of target position information, the recognition feature corresponding to the target position information is determined according to the position indicated by the target position information, and the feature in the coding features used for predicting the target position information is determined as the detection feature.
That is, for each piece of target position information, the recognition feature may be determined according to the position indicated by the target position information, such as the 8 × 25 rectangular box described above, based on the features within that rectangular box. For example, the features corresponding to the centre line of the rectangular box may be used as the recognition feature of the target position information, i.e., the middle 1 × 25 features may be selected. As described above, the target position information is predicted from a certain feature of the coding features, so that feature may be determined as the detection feature; the detection feature and the recognition feature corresponding to the target position information are then spliced, so that for each piece of target position information a 1 × 26 feature sequence is obtained.
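The selection of target position information and the assembly of detection and recognition features might look as follows; the two prediction heads, the random stand-in for centre-line sampling, and all shapes are assumptions:

```python
import torch
import torch.nn as nn

dim, m, num_chars = 256, 100, 25
enc_tokens = torch.randn(1, 8000, dim)      # coding features (one token per point)
box_head = nn.Linear(dim, 16)               # 8 position points (x, y) per box
score_head = nn.Linear(dim, 1)              # confidence of each predicted box

boxes = box_head(enc_tokens)                             # predicted positions
scores = score_head(enc_tokens).squeeze(-1).sigmoid()
conf, idx = scores.topk(m, dim=1)                        # top-m by confidence

# Detection feature: the coding feature that predicted each selected box.
det_feat = torch.gather(enc_tokens, 1, idx.unsqueeze(-1).expand(-1, -1, dim))
det_feat = det_feat.unsqueeze(2)                         # (1, m, 1, dim)

# Recognition feature: the middle 1 x 25 row of each box's 8 x 25 region.
# A random stand-in here; in practice it is sampled from the feature map.
rec_feat = torch.randn(1, m, num_chars, dim)             # (1, m, 25, dim)

# Splicing both gives 1 + 25 = 26 feature tokens per text instance.
spliced = torch.cat([det_feat, rec_feat], dim=2)         # (1, m, 26, dim)
```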
Therefore, with this technical solution, accurate coding features can be obtained by coding the image features, and the detection features and recognition features can be preliminarily predicted to ensure their accuracy. Determining, for each piece of target position information, the corresponding detection feature and recognition feature within the coding features can, on the one hand, improve the accuracy of the detection and recognition features and, on the other hand, provide reliable data support for the subsequent multi-modal interactive decoding based on these features.
In a possible embodiment, the decoding layer is shown as A in fig. 2, and the decoding, by the decoding layer, of the image splicing features to obtain the processing features corresponding to the text image may include:
Multi-modal attention processing is performed according to the image splicing features to obtain a first attention feature.
As shown in fig. 2, S may represent the image splicing feature, where D represents the detection feature and R represents the recognition feature. As an example, the multi-modal attention processing may be computed using multi-modal attention commonly used in the art to obtain the first attention feature.
As another example, the step of performing multi-modal attention processing according to the image splicing features to obtain the first attention feature may include:
Text classification processing is performed on the recognition features, and the feature of the last feature layer in the text classification process is taken as the text feature of the recognition features.
For example, the recognition feature may be classified and normalized over the character classes to obtain a character probability matrix corresponding to the recognition feature as the text feature, and an initial recognition result corresponding to the recognition feature may be obtained based on the text feature. The text feature may be determined as follows:

P = softmax(W_1 R)

where P denotes the text feature, W_1 ∈ R^(U×C) is a trainable weight, R denotes the recognition feature, C denotes the feature dimension, and U denotes the number of character classes.
The detection features and the text features are spliced to obtain text splicing features.
For example, the text features, which map the recognition features into language features, may be spliced with their corresponding detection features as follows:

L = cat(D, W_2 P)

where L denotes the text splicing feature, D denotes the detection feature, cat denotes the concatenation operation that joins the two features into one vector, and W_2 is a trainable weight.
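A sketch of the two steps above; the number of character classes U and all tensor shapes are assumptions:

```python
import torch
import torch.nn as nn

C, U, m = 256, 97, 100                     # feature dim, character classes, boxes
rec_feat = torch.randn(m, 25, C)           # recognition features R per instance
det_feat = torch.randn(m, 1, C)            # detection features D per instance

W1 = nn.Linear(C, U, bias=False)           # trainable weight W_1
P = W1(rec_feat).softmax(dim=-1)           # text feature P = softmax(W_1 R)
init_text = P.argmax(dim=-1)               # initial recognition result

W2 = nn.Linear(U, C, bias=False)           # trainable weight W_2, maps P back to C
L = torch.cat([det_feat, W2(P)], dim=1)    # text splicing feature L = cat(D, W_2 P)
print(L.shape)                             # (100, 26, 256)
```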
Further, multi-modal attention processing can be performed on the image splicing features and the text splicing features. In a general attention mechanism, the query vector, the key vector and the value vector are determined from the same input feature; in this embodiment, however, the multi-modal attention processing may be performed according to the image splicing features and the text splicing features to obtain the first attention feature. For example, a query vector may be determined from the image splicing features, a key vector and a value vector may be generated from the text splicing features, and the multi-modal attention processing may be performed based on the query vector, the key vector and the value vector to obtain the first attention feature.
As an example, the query vector Q, the key vector K and the value vector V may be obtained by conversion in a manner commonly used in the art, and the first attention feature may be determined based on a general attention calculation, which is not described herein again.
As another example, an attention mask matrix may be added when determining the multi-modal attention weight matrix in the multi-modal attention processing, so that the first attention feature F may be determined by:

F = softmax(QKᵀ/√d + M)V, with Q = (S + PE(S))W_Q, K = LW_K, V = LW_V

where M denotes the attention mask matrix, PE(·) denotes the position coding in the DETR model, S denotes the image splicing feature, L denotes the text splicing feature, d denotes the feature dimension, and W_Q, W_K, W_V are trainable projection weights. The attention mask matrix can prevent the query vectors from paying excessive attention to particular vectors when the multi-modal attention weights are calculated, which can improve the accuracy of the first attention feature; at the same time, the attention mask matrix allows the process to be decoded in parallel, further improving decoding efficiency.
Therefore, with this technical solution, the detection features and the recognition features can interact simultaneously during the multi-modal attention processing, so that the accuracy of the first attention feature is ensured by combining both. In addition, although the recognition features in the input image splicing features are image features, this embodiment converts them into language features, which provides effective and reliable data support for subsequent text recognition and allows feature processing from both the image perspective and the language perspective, improving its accuracy.
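A sketch of the masked multi-modal attention above, with queries taken from the image splicing features plus position codes and keys/values from the text splicing features. The diagonal-suppressing pattern of the mask M is an assumption, since the exact contents of M are not fixed here:

```python
import torch
import torch.nn as nn

dim, m, T = 256, 100, 26
S = torch.randn(m, T, dim)                # image splicing features (queries)
L = torch.randn(m, T, dim)                # text splicing features (keys/values)
pos = torch.randn(1, T, dim)              # PE(S): position code, DETR-style

Wq, Wk, Wv = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
Q, K, V = Wq(S + pos), Wk(L), Wv(L)

# Attention mask M: 0 where attention is allowed, -inf where it is suppressed.
M = torch.zeros(T, T)
M.fill_diagonal_(float("-inf"))           # assumed pattern

attn = (Q @ K.transpose(1, 2) / dim ** 0.5 + M).softmax(dim=-1)
first_attention = attn @ V                # first attention feature F, (m, T, dim)
```

Because all m instances are handled in one batched operation, decoding is parallel across instances, matching the efficiency point above.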
Factorized attention processing is then performed on the first attention feature to obtain a second attention feature.
In this embodiment, the first attention feature may be mapped by an FFN network, and the factorized attention processing may be performed on the mapped feature. The factorized attention processing may adopt a factorized self-attention algorithm known in the art: self-attention is computed within the same feature, i.e., the relationships within the text line corresponding to that feature are modeled, and self-attention is also computed between different features, i.e., the relationships between different features (different texts) are modeled, so that the representation in the second attention feature of the associations among the text contents of the text image can be further improved.
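A sketch of the factorized attention step: self-attention is computed first within each instance's 26 tokens, then across the m instances at each token position; the use of standard multi-head attention and all dimensions are assumptions:

```python
import torch
import torch.nn as nn

dim, m, T = 256, 100, 26
x = torch.randn(m, T, dim)                 # first attention features (after FFN)

intra = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
inter = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

# 1) Self-attention within the same feature: relations inside each text line
#    (sequence of T = 26 tokens per instance).
x, _ = intra(x, x, x)

# 2) Self-attention between different features: treat the m instances as the
#    sequence at every token position, modelling relations between texts.
x = x.transpose(0, 1)                      # (T, m, dim): instances as sequence
x, _ = inter(x, x, x)
second_attention = x.transpose(0, 1)       # back to (m, T, dim)
```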
Cross attention processing is performed on the second attention feature to obtain the processing features.
The cross attention processing may be calculated based on deformable cross attention known in the art, and the specific calculation is not described herein again. In this way, further feature extraction can be performed on the image features of the text image, and the extracted features are mapped by an FFN and output as the processing features, as shown by C in fig. 2.
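For illustration, the sketch below uses standard cross attention in place of the deformable cross attention named above: the second attention features query the coded image features, and an FFN maps the result to the processing features. All shapes are assumptions:

```python
import torch
import torch.nn as nn

dim, m, T, n_img = 256, 100, 26, 8000
queries = torch.randn(1, m * T, dim)       # second attention features, flattened
image_feats = torch.randn(1, n_img, dim)   # coded image features of the text image

cross = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
ffn = nn.Sequential(nn.Linear(dim, 1024), nn.ReLU(), nn.Linear(1024, dim))

# Queries look back into the full image features to pull in visual detail.
attended, _ = cross(queries, image_feats, image_feats)
processing_features = ffn(attended)        # output C of the decoding layer
```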
Therefore, with this technical solution, the detection features and the recognition features can be decoded synchronously by sharing the same decoding layer, and the multi-modal attention processing in this process effectively increases the interaction between the detection features and the recognition features, which improves to a certain extent the accuracy with which the processing features characterize the detection information and the recognition information. The factorized attention processing and the cross attention processing further model the associations within and between features while improving the comprehensiveness and richness of the processing features, providing effective data support for the subsequent accurate determination of the detection results and recognition results.
In a possible embodiment, the obtaining, by the prediction layer, the detection result and the recognition result corresponding to the text image by performing prediction based on the processing feature may include:
classifying the processing features based on a first fully connected layer in the prediction layer, and determining a target classification corresponding to the processing features;
performing position regression on the processing features based on a second full-connection layer in the prediction layer, and determining position information corresponding to the processing features;
performing text recognition on the processing features based on a third full-connection layer in the prediction layer, and determining text information corresponding to the processing features;
and determining the detection result and the identification result according to the target classification, the position information and the text information corresponding to the processing characteristics.
The first fully connected layer, the second fully connected layer and the third fully connected layer in the prediction layer may be implemented based on the fully connected layer structure common in the art, and their parameters are tuned and optimized during the training of the text processing model, so that in the trained text processing model the output of the first fully connected layer is directly used as the target classification, the output of the second fully connected layer as the position information, and the output of the third fully connected layer as the text information.
As an example, the determining the detection result and the recognition result according to the target classification, the position information, and the text information corresponding to the processing feature may include:
and for each processing feature, if the target classification corresponding to the processing feature is a foreground classification, using the position information corresponding to the processing feature as the detection result, and determining the text information corresponding to the processing feature as the identification result.
As described above, in the process of processing the text image based on the text processing model, when determining the detection feature, the first m pieces of predicted position information may be selected as the target position information, that is, m pieces of detection results and identification results may be determined in the process. Further, in this embodiment, the processing features may be classified through the first fully connected layer, and the classification category may include a foreground classification and a background classification, where the foreground classification is used to indicate that the processing features correspond to features of real text, and the background classification is used to indicate that the processing features correspond to features of noise. Accordingly, if it is determined that the classification corresponding to the processing feature is a foreground classification, that is, the detection result corresponding to the processing feature and the recognition result are the results of corresponding real texts, the position information corresponding to the processing feature may be used as the detection result, and the text information corresponding to the processing feature may be determined as the recognition result, so as to simultaneously implement the detection and recognition of the text image.
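The three prediction heads and the foreground filtering might be sketched as follows; the use of the first token for detection and the remaining 25 tokens for characters, and all dimensions, are assumptions:

```python
import torch
import torch.nn as nn

dim, m, U = 256, 100, 97
proc = torch.randn(m, 26, dim)               # processing features, m instances

cls_head = nn.Linear(dim, 2)                 # first FC: foreground / background
box_head = nn.Linear(dim, 16)                # second FC: 8 position points (x, y)
txt_head = nn.Linear(dim, U)                 # third FC: character classes

inst = proc[:, 0]                            # per-instance detection token
target_cls = cls_head(inst).argmax(dim=-1)   # target classification
positions = box_head(inst)                   # position information
texts = txt_head(proc[:, 1:]).argmax(dim=-1) # text information (25 characters)

keep = target_cls == 1                       # keep only foreground instances
detection_result = positions[keep]
recognition_result = texts[keep]
```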
As an example, the text boundaries in the text image may be irregular. In this embodiment, the position information may therefore be represented by a plurality of position points, for example 8 points, and the region enclosed by these 8 position points is used as the text region in the text image, further improving the precision and accuracy of the text processing method.
The present disclosure also provides a method for training a text processing model, which may include:
acquiring a training sample set, wherein each sample in the training sample set comprises a training text image and label detection information and label identification information corresponding to the training text image;
extracting the features of the training text images through a feature extraction layer in a preset model to obtain training image features corresponding to the training text images;
coding the training image features through a coding layer in the preset model to obtain training detection features and training recognition features corresponding to the training text images, and splicing the training detection features and the training recognition features to obtain training image splicing features;
decoding the training image splicing characteristics through a decoding layer in the preset model to obtain training processing characteristics corresponding to the training text image;
predicting through a prediction layer in the preset model based on the training processing characteristics to obtain a training detection result and a training recognition result corresponding to the training text image;
and determining the target loss of the preset model based on the label detection information and the label identification information corresponding to the training text image and the training detection result and the training identification result corresponding to the training text image, training the preset model based on the target loss, and determining the trained preset model as the text processing model.
Thus, during the training of the text processing model, the training text image may be input into the preset model so that the training detection result and the training recognition result are determined by a process similar to that described above. Determining the target loss of the preset model based on the label detection information and label identification information corresponding to the training text image and on the training detection result and training recognition result corresponding to the training text image may then proceed as follows.
For example, since the order in which the model outputs the training detection results is not necessarily the same as the order of the ground truth, bipartite matching may be performed between the label boxes in the label detection information corresponding to the training text image and the predicted boxes in the training detection results, to determine the label box corresponding to each predicted box; for example, the matching may be implemented based on bipartite graph matching.
After the label box corresponding to each predicted box is determined, the detection loss and the recognition loss may be calculated based on the matched predicted boxes and label boxes. For example, the detection loss may include a GIoU loss, an L1 loss and a class loss, and the recognition loss may include a class loss, each computed in a loss calculation manner commonly used in the art. Further, the detection loss in the detection direction and the recognition loss in the recognition direction may be combined by weighted summation to determine the target loss of the preset model, where the weights corresponding to the detection loss and the recognition loss may be set according to the actual application scenario, which is not limited by the present disclosure.
After the target loss is determined, the parameters in the feature extraction layer, coding layer, decoding layer and prediction layer of the preset model can be optimized and updated based on the target loss; for example, a gradient descent method may be adopted to optimize the parameters until the target loss of the preset model is less than a preset threshold or the number of training iterations of the preset model reaches a preset number, at which point the training of the preset model is completed.
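A sketch of the matching and loss weighting; the Hungarian algorithm (scipy) for the bipartite matching, the L1-only matching cost, and the omission of the GIoU and detection class losses are simplifying assumptions:

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def target_loss(pred_boxes, gt_boxes, text_logits, gt_texts,
                w_det=1.0, w_rec=1.0):
    """Match predictions to labels, then weight detection + recognition losses."""
    # 1) Bipartite matching: the model's output order need not match the
    #    ground-truth order, so pair each label with one prediction.
    cost = torch.cdist(pred_boxes, gt_boxes, p=1)          # pairwise L1 cost
    pi, gi = linear_sum_assignment(cost.detach().numpy())  # Hungarian algorithm

    # 2) Detection loss on the matched pairs (L1 shown; GIoU/class omitted).
    det_loss = F.l1_loss(pred_boxes[pi], gt_boxes[gi])

    # 3) Recognition loss: class loss over the matched text predictions.
    rec_loss = F.cross_entropy(text_logits[pi].flatten(0, 1),
                               gt_texts[gi].flatten())

    # 4) Weighted sum gives the target loss used to update all four layers.
    return w_det * det_loss + w_rec * rec_loss
```

In a full DETR-style setup the class and GIoU costs would also enter the matching cost matrix; they are omitted here for brevity.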
Therefore, with this technical solution, an end-to-end text processing model can be trained, so that when a text image is processed, the detection result and the recognition result corresponding to the text image can be obtained at the same time, improving text processing efficiency. Moreover, a shared decoder can be used in the text processing model to process the detection features and the recognition features, realizing multi-modal interaction between them during decoding so that text processing combines both kinds of features simultaneously; this improves the training efficiency of the text processing model, reduces its computational load, and improves the efficiency and accuracy of text processing based on the text processing model.
The present disclosure also provides a text processing apparatus, as shown in fig. 3, the apparatus 30 includes:
a receiving module 31, configured to receive a text image to be processed;
the processing module 32 is configured to input the text image into a text processing model, and obtain a detection result and an identification result corresponding to the text image;
wherein the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer, and the processing module comprises: the first extraction submodule is used for extracting the features of the text image through the feature extraction layer to obtain image features corresponding to the text image; the first coding submodule is used for coding the image features through the coding layer to obtain detection features and identification features corresponding to the text image, and splicing the detection features and the identification features to obtain image splicing features; the decoding submodule is used for decoding the image splicing characteristics through the decoding layer to obtain the processing characteristics corresponding to the text image; and the first processing submodule is used for carrying out prediction on the basis of the processing characteristics through the prediction layer to obtain a detection result and an identification result corresponding to the text image.
Optionally, the decoding sub-module includes:
the second processing submodule is used for carrying out multi-mode attention processing according to the image splicing characteristics to obtain first attention characteristics;
a third processing submodule, configured to perform factorized attention processing on the first attention feature to obtain a second attention feature;
and the fourth processing submodule is used for carrying out cross attention processing on the second attention feature to obtain the processing feature.
Optionally, the second processing sub-module includes:
the first determining sub-module is used for performing text classification processing on the recognition features, and taking the feature of the last feature layer in the text classification processing process as the text feature of the recognition features;
the splicing sub-module is used for splicing the detection features and the text features to obtain text splicing features;
and the fifth processing submodule is used for determining a query vector according to the image splicing characteristics, generating a key vector and a value vector according to the text splicing characteristics, and performing the multi-mode attention processing based on the query vector, the key vector and the value vector to obtain the first attention characteristics.
Optionally, the first extraction sub-module includes:
the second extraction submodule is used for extracting the features of the text image based on the multi-scale feature extraction module in the feature extraction layer to obtain multi-scale features;
and the adding submodule is used for adding position codes and hierarchy codes in the multi-scale features and converting the features subjected to the position codes and the hierarchy codes into one-dimensional feature representations to obtain the image features.
Optionally, the first encoding submodule includes:
the second coding submodule is used for carrying out feature coding on the basis of a transformer coding layer according to the image features to obtain coding features corresponding to the text image;
the first prediction submodule is used for predicting each feature in the coding features based on a coding fully connected layer, obtaining predicted position information corresponding to each feature in the coding features, and determining target position information from the predicted position information;
and the second determining submodule is used for determining, for each piece of target position information, the recognition feature corresponding to the target position information according to the position indicated by the target position information, and for determining, as the detection feature, the feature in the coding features used for predicting the target position information.
Optionally, the first processing sub-module includes:
the classification submodule is used for classifying the processing features based on a first full-connection layer in the prediction layer and determining target classification corresponding to the processing features;
the regression submodule is used for carrying out position regression on the processing features on the basis of a second full-connection layer in the prediction layer and determining position information corresponding to the processing features;
the recognition submodule is used for performing text recognition on the processing characteristics based on a third full-connection layer in the prediction layer and determining text information corresponding to the processing characteristics;
and the third determining submodule is used for determining the detection result and the identification result according to the target classification, the position information and the text information corresponding to the processing characteristics.
Optionally, the third determining sub-module is further configured to:
and for each processing feature, if the target classification corresponding to the processing feature is a foreground classification, using the position information corresponding to the processing feature as the detection result, and determining the text information corresponding to the processing feature as the identification result.
The present disclosure also provides a training apparatus for a text processing model, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a training sample set, and each sample in the training sample set comprises a training text image and label detection information and label identification information corresponding to the training text image;
the extraction module is used for extracting the features of the training text images through a feature extraction layer in a preset model to obtain training image features corresponding to the training text images;
the coding module is used for coding the training image features through a coding layer in the preset model to obtain training detection features and training recognition features corresponding to the training text images, and splicing the training detection features and the training recognition features to obtain training image splicing features;
the decoding module is used for decoding the training image splicing characteristics through a decoding layer in the preset model to obtain training processing characteristics corresponding to the training text image;
the prediction module is used for predicting based on the training processing characteristics through a prediction layer in the preset model to obtain a training detection result and a training recognition result corresponding to the training text image;
and the training module is used for determining the target loss of the preset model based on the label detection information and the label identification information corresponding to the training text image and the training detection result and the training identification result corresponding to the training text image, training the preset model based on the target loss and determining the trained preset model as the text processing model.
Referring now to FIG. 4, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a text image to be processed; and input the text image into a text processing model to obtain a detection result and a recognition result corresponding to the text image; wherein the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer; the feature extraction layer is used for performing feature extraction on the text image to obtain image features corresponding to the text image; the coding layer is used for coding the image features to obtain detection features and recognition features corresponding to the text image, and for splicing the detection features and the recognition features to obtain image splicing features; the decoding layer is used for decoding the image splicing features to obtain processing features corresponding to the text image; and the prediction layer is used for performing prediction based on the processing features to obtain the detection result and the recognition result corresponding to the text image.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a training sample set, wherein each sample in the training sample set comprises a training text image and label detection information and label identification information corresponding to the training text image; extracting the features of the training text images through a feature extraction layer in a preset model to obtain training image features corresponding to the training text images; coding the training image features through a coding layer in the preset model to obtain training detection features and training recognition features corresponding to the training text images, and splicing the training detection features and the training recognition features to obtain training image splicing features; decoding the training image splicing characteristics through a decoding layer in the preset model to obtain training processing characteristics corresponding to the training text image; predicting through a prediction layer in the preset model based on the training processing characteristics to obtain a training detection result and a training recognition result corresponding to the training text image; and determining the target loss of the preset model based on the label detection information and the label identification information corresponding to the training text image and the training detection result and the training identification result corresponding to the training text image, training the preset model based on the target loss, and determining the trained preset model as the text processing model.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not in some cases constitute a limitation of the module itself, and for example, the receiving module may also be described as a "module that receives a text image to be processed".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides a text processing method according to one or more embodiments of the present disclosure, wherein the method includes:
receiving a text image to be processed;
inputting the text image into a text processing model to obtain a detection result and an identification result corresponding to the text image;
the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer, and the feature extraction layer is used for extracting features of the text image to obtain image features corresponding to the text image; coding the image features through the coding layer to obtain detection features and identification features corresponding to the text image, and splicing the detection features and the identification features to obtain image splicing features; decoding the image splicing characteristics through the decoding layer to obtain processing characteristics corresponding to the text image; and predicting based on the processing characteristics through the prediction layer to obtain a detection result and an identification result corresponding to the text image.
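For orientation, the four-layer pipeline of example 1 can be sketched as follows in PyTorch. This is a structural sketch only, not the disclosed implementation; all class, argument, and dimension names are hypothetical.

    import torch
    import torch.nn as nn

    class TextProcessingModel(nn.Module):
        # Hypothetical skeleton mirroring the four layers named above.
        def __init__(self, feature_extractor, encoder, decoder, predictor):
            super().__init__()
            self.feature_extractor = feature_extractor  # feature extraction layer
            self.encoder = encoder                      # coding layer
            self.decoder = decoder                      # decoding layer
            self.predictor = predictor                  # prediction layer

        def forward(self, text_image):
            # Image features corresponding to the text image.
            image_features = self.feature_extractor(text_image)
            # The coding layer yields detection and identification features,
            # which are spliced (concatenated) along the sequence axis.
            detection_feats, identification_feats = self.encoder(image_features)
            spliced = torch.cat([detection_feats, identification_feats], dim=1)
            # The decoding layer turns the spliced features into processing features.
            processing_feats = self.decoder(spliced)
            # The prediction layer outputs the detection and identification results.
            return self.predictor(processing_feats)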
Example 2 provides the method of example 1, wherein the decoding, by the decoding layer, the image splicing feature to obtain a processing feature corresponding to the text image includes:
performing multi-modal attention processing according to the image splicing features to obtain a first attention feature;
performing factorized attention processing on the first attention feature to obtain a second attention feature;
and performing cross attention processing on the second attention feature to obtain the processing feature.
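A minimal sketch of this three-stage decoding chain follows, assuming a standard multi-head attention block for each stage; the disclosure does not fix the exact attention shapes, so nn.MultiheadAttention stands in for all three, and the factorization scheme shown (plain self-attention) is an assumption.

    import torch.nn as nn

    class DecodingLayer(nn.Module):
        def __init__(self, dim, heads=8):
            super().__init__()
            self.multimodal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.factorized_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, image_splicing_feats, text_splicing_feats, encoder_memory):
            # (1) Multi-modal attention: queries come from the image splicing
            # features, keys and values from the text splicing features.
            first_attn, _ = self.multimodal_attn(
                image_splicing_feats, text_splicing_feats, text_splicing_feats)
            # (2) Factorized attention over the first attention feature; plain
            # self-attention is used here as a stand-in for the factorization.
            second_attn, _ = self.factorized_attn(first_attn, first_attn, first_attn)
            # (3) Cross attention against the encoder memory yields the
            # processing features consumed by the prediction layer.
            processing_feats, _ = self.cross_attn(
                second_attn, encoder_memory, encoder_memory)
            return processing_feats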
Example 3 provides the method of example 2, wherein the performing multi-modal attention processing according to the image splicing features to obtain a first attention feature comprises:
performing text classification processing on the recognition features, and taking the features of the last feature layer in the text classification process as the text features of the recognition features;
splicing the detection features and the text features to obtain text splicing features;
determining a query vector according to the image splicing features, generating a key vector and a value vector according to the text splicing features, and performing the multi-modal attention processing based on the query vector, the key vector and the value vector to obtain the first attention feature.
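Sketched concretely, and again only as an assumption-laden illustration (text_classifier, w_q, w_k, w_v, and attn are hypothetical modules, e.g. an nn.Linear projection each and an nn.MultiheadAttention), example 3 amounts to:

    import torch

    def multimodal_attention(image_splicing_feats, detection_feats,
                             recognition_feats, text_classifier,
                             w_q, w_k, w_v, attn):
        # Text classification over the recognition features; the features of
        # its last feature layer serve as the text features.
        text_feats = text_classifier(recognition_feats)

        # Splice (concatenate) the detection features with the text features.
        text_splicing_feats = torch.cat([detection_feats, text_feats], dim=1)

        # Query vector from the image splicing features; key and value
        # vectors from the text splicing features.
        q = w_q(image_splicing_feats)
        k = w_k(text_splicing_feats)
        v = w_v(text_splicing_feats)

        first_attention_feats, _ = attn(q, k, v)
        return first_attention_feats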
Example 4 provides the method of example 1, wherein the extracting features of the text image by the feature extraction layer to obtain image features corresponding to the text image includes:
extracting the features of the text image based on a multi-scale feature extraction module in the feature extraction layer to obtain multi-scale features;
adding position codes and level codes to the multi-scale features, and converting the features with the position codes and the level codes added into one-dimensional feature representations to obtain the image features.
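One way this could look in code, as a sketch assuming a backbone that returns several feature maps of different scales plus learned position and level embeddings (the embedding scheme and all names are assumptions, and each map is assumed to have dim channels):

    import torch
    import torch.nn as nn

    class FeatureExtractionLayer(nn.Module):
        def __init__(self, backbone, dim, num_levels, max_tokens):
            super().__init__()
            self.backbone = backbone                       # multi-scale extractor
            self.level_embed = nn.Embedding(num_levels, dim)
            self.pos_embed = nn.Embedding(max_tokens, dim)

        def forward(self, text_image):
            tokens = []
            for level, feat_map in enumerate(self.backbone(text_image)):
                b, c, h, w = feat_map.shape                # assumes c == dim
                # Convert each 2-D map into a one-dimensional representation.
                flat = feat_map.flatten(2).transpose(1, 2)  # (b, h*w, c)
                # Add position codes and level codes to the multi-scale features.
                pos = self.pos_embed(torch.arange(h * w, device=flat.device))
                lvl = self.level_embed(torch.tensor(level, device=flat.device))
                tokens.append(flat + pos + lvl)
            # Image features: all scales joined into a single token sequence.
            return torch.cat(tokens, dim=1)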
Example 5 provides the method of example 1, wherein the coding, by the coding layer, the image features to obtain detection features and identification features corresponding to the text image includes:
performing feature coding based on a Transformer coding layer according to the image features to obtain coding features corresponding to the text image;
predicting each feature in the coding features based on a coding fully connected layer to obtain predicted position information corresponding to each feature in the coding features, and determining target position information from the predicted position information;
and for each piece of target position information, determining an identification feature corresponding to the target position information according to the position indicated by the target position information, and determining a feature used for predicting the target position information in the coding features as the detection feature.
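A rough sketch of this coding layer, assuming a standard Transformer encoder and treating the selection of target position information as a top-k over per-feature confidence scores; the selection rule, head shapes, and num_keep are assumptions, and the sampling of identification features from the indicated positions is only indicated by a comment:

    import torch.nn as nn

    class CodingLayer(nn.Module):
        def __init__(self, dim, heads=8, depth=6, num_keep=25):
            super().__init__()
            layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)
            self.position_head = nn.Linear(dim, 4)  # (cx, cy, w, h) per feature
            self.score_head = nn.Linear(dim, 1)     # confidence used for selection
            self.num_keep = num_keep

        def forward(self, image_features):
            coding_feats = self.encoder(image_features)
            boxes = self.position_head(coding_feats).sigmoid()  # predicted positions
            scores = self.score_head(coding_feats).squeeze(-1)
            # Keep the features whose predicted positions score highest; these
            # become the detection features, and their boxes the target positions.
            idx = scores.topk(self.num_keep, dim=1).indices.unsqueeze(-1)
            detection_feats = coding_feats.gather(
                1, idx.expand(-1, -1, coding_feats.size(-1)))
            target_boxes = boxes.gather(1, idx.expand(-1, -1, 4))
            # The identification features would then be sampled from the image
            # features at the positions the target boxes indicate (e.g. RoI
            # sampling); that sampler is assumed and omitted here.
            return detection_feats, target_boxes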
Example 6 provides the method of example 1, wherein the obtaining, by the prediction layer, the detection result and the recognition result corresponding to the text image by performing prediction based on the processing feature includes:
classifying the processing features based on a first fully connected layer in the prediction layer, and determining a target classification corresponding to the processing features;
performing position regression on the processing features based on a second fully connected layer in the prediction layer, and determining position information corresponding to the processing features;
performing text recognition on the processing features based on a third fully connected layer in the prediction layer, and determining text information corresponding to the processing features;
and determining the detection result and the identification result according to the target classification, the position information and the text information corresponding to the processing characteristics.
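Sketching the three heads of example 6 in the same style (the class count, character vocabulary size, and maximum text length below are illustrative assumptions):

    import torch.nn as nn

    class PredictionLayer(nn.Module):
        def __init__(self, dim, num_classes=2, vocab_size=97, max_text_len=25):
            super().__init__()
            self.cls_head = nn.Linear(dim, num_classes)  # first fully connected layer
            self.box_head = nn.Linear(dim, 4)            # second fully connected layer
            self.text_head = nn.Linear(dim, max_text_len * vocab_size)  # third
            self.max_text_len = max_text_len
            self.vocab_size = vocab_size

        def forward(self, processing_feats):
            b, n, _ = processing_feats.shape
            target_cls = self.cls_head(processing_feats)           # classification
            positions = self.box_head(processing_feats).sigmoid()  # position regression
            chars = self.text_head(processing_feats).view(
                b, n, self.max_text_len, self.vocab_size)          # text recognition
            return target_cls, positions, chars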
Example 7 provides the method of example 6, wherein the determining the detection result and the recognition result according to the target classification, the position information, and the text information corresponding to the processing feature includes:
and for each processing feature, if the target classification corresponding to the processing feature is a foreground classification, using the position information corresponding to the processing feature as the detection result, and determining the text information corresponding to the processing feature as the identification result.
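Continuing the sketch above, the foreground filter of example 7 might read as follows (foreground_index=1 is an assumption about the label layout):

    def select_results(target_cls, positions, chars, foreground_index=1):
        # For each processing feature, keep its box as a detection result and
        # its decoded text as a recognition result only when the target
        # classification is the foreground classification.
        labels = target_cls.argmax(dim=-1)            # (batch, num_feats)
        keep = labels == foreground_index
        detection_results = positions[keep]           # kept position information
        recognition_results = chars[keep].argmax(-1)  # kept character ids
        return detection_results, recognition_results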
Example 8 provides, in accordance with one or more embodiments of the present disclosure, a text processing apparatus, the apparatus comprising:
the receiving module is used for receiving a text image to be processed;
the processing module is used for inputting the text image into a text processing model to obtain a detection result and an identification result corresponding to the text image;
wherein the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer, and the processing module comprises: the first extraction submodule is used for extracting the features of the text image through the feature extraction layer to obtain the image features corresponding to the text image; the first coding submodule is used for coding the image features through the coding layer to obtain detection features and identification features corresponding to the text image, and splicing the detection features and the identification features to obtain image splicing features; the decoding submodule is used for decoding the image splicing characteristics through the decoding layer to obtain processing characteristics corresponding to the text image; and the first processing submodule is used for carrying out prediction on the basis of the processing characteristics through the prediction layer to obtain a detection result and an identification result corresponding to the text image.
Example 9 provides a computer readable medium having stored thereon a computer program that, when executed by a processing apparatus, performs the steps of the method of any of examples 1-7, in accordance with one or more embodiments of the present disclosure.
Example 10 provides, in accordance with one or more embodiments of the present disclosure, an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method of any of examples 1-7.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to technical solutions formed by the particular combinations of the features described above, but also encompasses other technical solutions formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the features described above with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.

Claims (10)

1. A method of text processing, the method comprising:
receiving a text image to be processed;
inputting the text image into a text processing model to obtain a detection result and an identification result corresponding to the text image;
the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer, and the feature extraction layer is used for extracting features of the text image to obtain image features corresponding to the text image; coding the image features through the coding layer to obtain detection features and identification features corresponding to the text image, and splicing the detection features and the identification features to obtain image splicing features; decoding the image splicing characteristics through the decoding layer to obtain processing characteristics corresponding to the text image; and predicting based on the processing characteristics through the prediction layer to obtain a detection result and an identification result corresponding to the text image.
2. The method according to claim 1, wherein the decoding, by the decoding layer, the image splicing features to obtain a processing feature corresponding to the text image comprises:
performing multi-modal attention processing according to the image splicing features to obtain a first attention feature;
performing factorized attention processing on the first attention feature to obtain a second attention feature;
performing cross-attention processing on the second attention feature to obtain the processing feature.
3. The method according to claim 2, wherein the performing multi-modal attention processing according to the image splicing features to obtain a first attention feature comprises:
performing text classification processing on the recognition features, and taking the features of the last feature layer in the text classification process as the text features of the recognition features;
splicing the detection features and the text features to obtain text splicing features;
determining a query vector according to the image splicing features, generating a key vector and a value vector according to the text splicing features, and performing the multi-modal attention processing based on the query vector, the key vector and the value vector to obtain the first attention feature.
4. The method according to claim 1, wherein the performing, by the feature extraction layer, feature extraction on the text image to obtain an image feature corresponding to the text image includes:
extracting the features of the text image based on a multi-scale feature extraction module in the feature extraction layer to obtain multi-scale features;
adding position codes and level codes to the multi-scale features, and converting the features with the position codes and the level codes added into one-dimensional feature representations to obtain the image features.
5. The method according to claim 1, wherein said encoding the image features by the encoding layer to obtain corresponding detection features and identification features of the text image comprises:
performing feature coding based on a Transformer coding layer according to the image features to obtain coding features corresponding to the text image;
predicting each feature in the coding features based on a coding fully connected layer to obtain predicted position information corresponding to each feature in the coding features, and determining target position information from the predicted position information;
and for each piece of target position information, determining an identification feature corresponding to the target position information according to the position indicated by the target position information, and determining a feature used for predicting the target position information in the coding features as the detection feature.
6. The method according to claim 1, wherein the obtaining, by the prediction layer, the detection result and the recognition result corresponding to the text image by performing prediction based on the processing feature comprises:
classifying the processing features based on a first fully connected layer in the prediction layer, and determining a target classification corresponding to the processing features;
performing position regression on the processing features based on a second fully connected layer in the prediction layer, and determining position information corresponding to the processing features;
performing text recognition on the processing features based on a third fully connected layer in the prediction layer, and determining text information corresponding to the processing features;
and determining the detection result and the identification result according to the target classification, the position information and the text information corresponding to the processing characteristics.
7. The method according to claim 6, wherein the determining the detection result and the recognition result according to the target classification, the position information and the text information corresponding to the processing feature comprises:
and for each processing feature, if the target classification corresponding to the processing feature is a foreground classification, using the position information corresponding to the processing feature as the detection result, and determining the text information corresponding to the processing feature as the identification result.
8. A text processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a text image to be processed;
the processing module is used for inputting the text image into a text processing model to obtain a detection result and an identification result corresponding to the text image;
wherein the text processing model comprises a feature extraction layer, a coding layer, a decoding layer and a prediction layer, and the processing module comprises: the first extraction submodule is used for extracting the features of the text image through the feature extraction layer to obtain the image features corresponding to the text image; the first coding submodule is used for coding the image features through the coding layer to obtain detection features and identification features corresponding to the text image, and splicing the detection features and the identification features to obtain image splicing features; the decoding submodule is used for decoding the image splicing characteristics through the decoding layer to obtain the processing characteristics corresponding to the text image; and the first processing submodule is used for predicting based on the processing characteristics through the prediction layer to obtain a detection result and an identification result corresponding to the text image.
9. A computer-readable medium, on which a computer program is stored, characterized in that the program, when being executed by processing means, carries out the steps of the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to carry out the steps of the method according to any one of claims 1 to 7.
CN202211679873.0A 2022-12-26 2022-12-26 Text processing method, device, medium and equipment Pending CN115984868A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211679873.0A CN115984868A (en) 2022-12-26 2022-12-26 Text processing method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211679873.0A CN115984868A (en) 2022-12-26 2022-12-26 Text processing method, device, medium and equipment

Publications (1)

Publication Number Publication Date
CN115984868A 2023-04-18

Family

ID=85957559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211679873.0A Pending CN115984868A (en) 2022-12-26 2022-12-26 Text processing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN115984868A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197737A (en) * 2023-09-08 2023-12-08 数字广东网络建设有限公司 Land use detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
EP3637310A1 (en) Method and apparatus for generating vehicle damage information
WO2023116507A1 (en) Target detection model training method and apparatus, and target detection method and apparatus
CN110413812B (en) Neural network model training method and device, electronic equipment and storage medium
CN110826567B (en) Optical character recognition method, device, equipment and storage medium
CN113436620B (en) Training method of voice recognition model, voice recognition method, device, medium and equipment
CN112766284B (en) Image recognition method and device, storage medium and electronic equipment
CN113449070A (en) Multimodal data retrieval method, device, medium and electronic equipment
CN113327599B (en) Voice recognition method, device, medium and electronic equipment
CN113362811B (en) Training method of voice recognition model, voice recognition method and device
CN112883967B (en) Image character recognition method, device, medium and electronic equipment
CN112883968B (en) Image character recognition method, device, medium and electronic equipment
US20240078385A1 (en) Method and apparatus for generating text
CN111310770A (en) Target detection method and device
CN115578570A (en) Image processing method, device, readable medium and electronic equipment
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN115908640A (en) Method and device for generating image, readable medium and electronic equipment
CN115294501A (en) Video identification method, video identification model training method, medium and electronic device
CN116166271A (en) Code generation method and device, storage medium and electronic equipment
CN113408507B (en) Named entity identification method and device based on resume file and electronic equipment
CN115984868A (en) Text processing method, device, medium and equipment
CN114463769A (en) Form recognition method and device, readable medium and electronic equipment
CN114067327A (en) Text recognition method and device, readable medium and electronic equipment
CN115375657A (en) Method for training polyp detection model, detection method, device, medium, and apparatus
CN116244431A (en) Text classification method, device, medium and electronic equipment
CN115375656A (en) Training method, segmentation method, device, medium, and apparatus for polyp segmentation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination