CN114155544A - Wireless form identification method and device, computer equipment and storage medium

Wireless form identification method and device, computer equipment and storage medium

Info

Publication number: CN114155544A
Application number: CN202111345146.6A
Authority: CN (China)
Family ID: 80460051
Legal status: Pending
Prior art keywords: text, position coordinate, image, coordinate information, information
Other languages: Chinese (zh)
Inventors: 黄志远, 柏英杰, 陈沭源, 陈柯树
Current and original assignee: Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd
Priority/filing date: 2021-11-15
Publication date: 2022-03-08

Classifications

    • G06N 3/045: Combinations of networks (Physics > Computing; calculating or counting > Computing arrangements based on specific computational models > Computing arrangements based on biological models > Neural networks > Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (Physics > Computing; calculating or counting > Computing arrangements based on specific computational models > Computing arrangements based on biological models > Neural networks)

Abstract

The invention discloses a wireless form identification method and device, computer equipment and a storage medium, wherein the method comprises the following steps: extracting binary segmentation images of different text regions from an input text image by using a deep residual convolutional neural network; extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image; performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information; determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information; and acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table. The invention reconstructs the complete table information by predicting the relative position relationship between cells, thereby improving the identification accuracy of wireless tables.

Description

Wireless form identification method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of computer software, in particular to a wireless form identification method, a wireless form identification device, computer equipment and a storage medium.
Background
Existing OCR technology still falls short when recognizing structured table data. Although a wired table can, in principle, be recognized accurately because its table lines reveal the table structure, a wireless table, i.e. a table without ruling lines, lacks this line information, which makes the cells in the table difficult to identify. Therefore, how to accurately identify and extract wireless tables is a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiments of the invention provide a wireless form identification method, a wireless form identification device, computer equipment and a storage medium, which aim to improve the identification accuracy of wireless tables.
In a first aspect, an embodiment of the present invention provides a wireless form identification method, including:
acquiring an input text image containing a wireless table;
extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network;
extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image;
performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information;
determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information;
and acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table.
In a second aspect, an embodiment of the present invention provides a wireless form identification apparatus, including:
an image acquisition unit for acquiring an input text image containing a wireless table;
an image extraction unit for extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network;
an edge information extraction unit for extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image;
a text recognition unit for performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information;
a region merging unit for determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information;
and a first cell arranging unit for acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the wireless form identification method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the wireless form identification method according to the first aspect.
The embodiments of the invention provide a wireless form identification method and device, computer equipment and a storage medium, wherein the method comprises: acquiring an input text image containing a wireless table; extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network; extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image; performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information; determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information; and acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table. According to the embodiments of the invention, the relative position relationship between the cells in the table is predicted from the text position information of the structured wireless table data, and the complete table information is then reconstructed, so that the identification accuracy of the wireless table can be improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a wireless form recognition method according to an embodiment of the present invention;
fig. 2 is a sub-flow diagram of a wireless table identification method according to an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a wireless form recognition apparatus according to an embodiment of the present invention;
FIG. 4 is a sub-schematic block diagram of a wireless form recognition apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a relative position prediction network structure in a wireless form identification method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flow chart of a wireless form identification method according to an embodiment of the present invention, which specifically includes: steps S101 to S106.
S101, acquiring an input text image containing a wireless table;
S102, extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network;
S103, extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image;
S104, performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information;
S105, determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information;
S106, acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table.
In this embodiment, for an input text image in which a wireless table needs to be identified, a binary segmentation image corresponding to each text region in the input text image is first extracted by the deep residual convolutional neural network, and edge information is then extracted from each binary segmentation image to obtain the absolute position coordinate information of each text region in the input text image. Meanwhile, text recognition is performed on each text region by the convolutional recurrent neural network to obtain the text information corresponding to each text region. Next, whether text regions should be merged into the same cell, i.e. whether their text information belongs to the same cell, is determined according to the absolute position coordinate information. For example, if the distance between two adjacent text regions is smaller than a preset distance threshold, the two text regions are merged into the same cell. Further, the relative position coordinate information between the cells is acquired, and the cells are then arranged according to the relative position coordinate information, so that the final wireless table is obtained.
In this embodiment, the relative position relationship between the cells in the table is predicted from the text position information of the structured wireless table data, and the complete table information is then reconstructed, which improves the identification accuracy of the wireless table.
In one embodiment, the step S102 includes:
inputting the input text image into a ResNet50 deep residual convolutional neural network, and sequentially performing down-sampling and channel expansion on the input text image through the ResNet50 deep residual convolutional neural network to obtain different feature maps corresponding to the input text image;
compressing the channels of each feature map through a convolution operation, performing up-sampling on each feature map to obtain a single-channel image whose size is consistent with that of the input text image, and outputting the single-channel image as the binary segmentation image.
In this embodiment, an end-to-end trained deep residual convolutional neural network replaces traditional manual feature extraction, which improves text detection performance. In practical applications, a deep network can extract richer image feature information, so this embodiment adopts the deep residual convolutional neural network ResNet50 as the backbone network for image feature extraction. ResNet50 consists of a series of residual units comprising convolution, nonlinear transformation, residual connection and other operations. After the input text image passes through ResNet50 for down-sampling and channel expansion, different feature maps are obtained, providing rich feature information for the subsequent, more accurate prediction of the binary segmentation image. After the ResNet50 network extracts the features of the input text image to obtain the feature maps, the image channels are compressed through a convolution operation and up-sampled to obtain a single-channel image with the same size as the input text image. In the model training stage, normal wireless table images are used as model input and binary images are used as labels, so that training is guided by the segmentation objective; once the model is fitted, a wireless table image can be input in the inference stage and the binary segmentation image is obtained by prediction.
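By way of illustration only, a minimal PyTorch sketch of this segmentation branch is given below. The single-scale 1x1-convolution head, the bilinear up-sampling and the 0.5 binarization threshold are assumptions made for the sketch; the patent only specifies a ResNet50 backbone followed by channel compression and up-sampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class TextSegmentationNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep only the convolutional stages (down-sampling + channel expansion).
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        # Compress the 2048-channel feature map to a single channel.
        self.head = nn.Conv2d(2048, 1, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feat = self.backbone(x)                    # (N, 2048, H/32, W/32)
        logits = self.head(feat)                   # (N, 1, H/32, W/32)
        # Up-sample back to the resolution of the input text image.
        logits = F.interpolate(logits, size=(h, w),
                               mode="bilinear", align_corners=False)
        return torch.sigmoid(logits)

model = TextSegmentationNet().eval()
image = torch.randn(1, 3, 640, 640)                # stand-in for an input text image
with torch.no_grad():
    binary_mask = (model(image) > 0.5).float()     # the binary segmentation image
```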
In one embodiment, as shown in fig. 2, the step S103 includes: steps S201 to S204.
S201, performing edge extraction on the binary segmentation image by adopting a dual-threshold method to obtain a high-threshold image;
S202, connecting the edges in the high-threshold image into a contour, and when a contour endpoint is reached, acquiring a target point that satisfies the low threshold based on the 8-neighborhood point method;
S203, collecting the remaining edges according to the target point until the edges of the binary segmentation image are closed, so as to obtain the edge information of the binary segmentation image;
and S204, acquiring absolute position coordinate information corresponding to different text areas in the edge information.
In this embodiment, when edge information is extracted from the binary segmentation image to obtain the absolute position coordinate information, edge extraction is first performed with a dual-threshold method, the extracted edges are then connected into a contour, and target points that satisfy the low threshold are searched for along the contour according to the 8-neighborhood point method. Based on the obtained target points, the remaining edges of the binary segmentation image are collected continuously, so that the complete edge information is obtained. From the complete edge information, accurate absolute position coordinate information can then be extracted.
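The dual-threshold tracking described above is the hysteresis scheme implemented by OpenCV's Canny detector, so steps S201 to S204 can be sketched compactly as below; the 50/150 thresholds and the use of bounding rectangles as the absolute position coordinates are assumptions made for the sketch.

```python
import cv2
import numpy as np

def text_region_boxes(binary_mask: np.ndarray) -> list:
    """Return (x, y, w, h) absolute position coordinates of the text regions."""
    # Dual-threshold edge extraction: strong edges must pass the high threshold,
    # weak edges are kept only if linked to a strong edge via their 8-neighborhood.
    edges = cv2.Canny(binary_mask, 50, 150)
    # Connect the edges into closed contours and collect them.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

mask = np.zeros((480, 640), dtype=np.uint8)        # stand-in binary segmentation image
mask[100:130, 50:300] = 255                        # one fake text region
boxes = text_region_boxes(mask)                    # one box around the fake region
```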
In one embodiment, the step S104 includes:
for any text region, extracting a feature sequence with a width of a single pixel from the text region by using the convolutional layer in the convolutional recurrent neural network;
and predicting the characters of the feature sequence by using the recurrent layer in the convolutional recurrent neural network, and taking the prediction result as the text information.
In this embodiment, text recognition is realized with a Convolutional Recurrent Neural Network (CRNN). The CRNN model mainly consists of a convolutional layer (CNN) and a recurrent layer (RNN): the convolutional layer extracts a feature sequence with a width of a single pixel from the input line-text image, and the recurrent layer takes the feature sequence output by the convolutional layer as input and outputs the predicted characters. In the training stage, the convolutional recurrent neural network takes line-text images as input and the text content of each image as its label; after convergence, the network can recognize line text in the inference stage.
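A minimal PyTorch sketch of such a CRNN follows. The layer sizes, the bidirectional LSTM and the CTC-style output head (one class per character plus a blank) are assumptions made for the sketch; the patent only specifies the convolutional-plus-recurrent structure.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Convolutional layer: turns a line-text image into a feature sequence.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),       # collapse height to one pixel
        )
        # Recurrent layer: predicts a character distribution per sequence step.
        self.rnn = nn.LSTM(256, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)      # num_classes includes the CTC blank

    def forward(self, x):                          # x: (N, 1, H, W) line-text image
        feat = self.cnn(x).squeeze(2)              # (N, 256, W'), height collapsed to 1
        feat = feat.permute(0, 2, 1)               # (N, W', 256): width-wise sequence
        out, _ = self.rnn(feat)                    # (N, W', 512)
        return self.fc(out)                        # per-step character logits

model = CRNN(num_classes=5000)                     # e.g. a Chinese character set + blank
logits = model(torch.randn(1, 1, 32, 320))         # -> (1, 80, 5000)
```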
In one embodiment, the step S105 includes:
calculating the distances between the text regions according to the absolute position coordinate information;
when the distance between two adjacent text regions does not exceed a preset distance threshold, merging the two text regions into one cell;
and when the distance between two adjacent text regions exceeds the preset distance threshold, taking each of the two text regions as a separate cell.
In this embodiment, the absolute coordinate position information of the line-text regions in the wireless table and their text content can be obtained through the text detection and recognition models, but each text region is not necessarily a cell: several line-text regions may be detected inside a single cell, and these line-text regions then need to be fused into one cell. In general, text regions that belong to the same cell are adjacent and close to each other, so the information of each cell can be obtained by calculating the distance between text regions to decide whether fusion is required.
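The sketch below illustrates one such distance-based fusion rule: two boxes are fused when they overlap horizontally and their vertical gap stays under a threshold. The gap metric and the 10-pixel default are assumptions made for the sketch; the patent leaves the distance threshold as a preset parameter.

```python
def merge_text_regions(boxes, dist_threshold=10.0):
    """boxes: list of (x, y, w, h) text-region boxes; returns fused cell boxes."""
    cells = []
    for x, y, w, h in sorted(boxes, key=lambda b: (b[1], b[0])):   # top-to-bottom
        for cell in cells:
            cx, cy, cw, ch = cell
            h_overlap = min(x + w, cx + cw) - max(x, cx)
            v_gap = y - (cy + ch)                  # vertical gap below the cell
            # Horizontally overlapping and vertically close: same cell.
            if h_overlap > 0 and v_gap <= dist_threshold:
                nx, ny = min(cx, x), min(cy, y)
                cell[:] = [nx, ny,
                           max(cx + cw, x + w) - nx,
                           max(cy + ch, y + h) - ny]   # grow to the union box
                break
        else:
            cells.append([x, y, w, h])
    return [tuple(c) for c in cells]

# Two stacked text lines 4 px apart fuse into one cell; the distant box stays alone.
print(merge_text_regions([(50, 100, 200, 20), (50, 124, 200, 20), (400, 100, 80, 20)]))
# -> [(50, 100, 200, 44), (400, 100, 80, 20)]
```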
In one embodiment, the step S106 includes:
encoding the absolute position coordinate information based on a seq2seq network structure, and taking the encoding result as target feature information;
and decoding the target feature information by using a decoder, and taking the decoding result as the relative position coordinate information.
In this embodiment, after the cell information is obtained, a complete table structure can be obtained once the row-column relationship between the cells is known. The relative position prediction network predicts (outputs) the relative position relationship between cells from their absolute position information (input). The relative position prediction model is a seq2seq-based network structure, as shown in fig. 5, consisting of an encoder and a decoder: the encoder extracts feature information from the absolute position coordinates of the input cells, and the decoder uses the feature information extracted by the encoder to predict the relative position information between the cells.
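A minimal sketch of such an encoder-decoder is shown below. The GRU layers and the formulation of the output as per-cell row and column indices are assumptions made for the sketch; the patent specifies only a seq2seq encoder-decoder that maps absolute cell coordinates to relative position information.

```python
import torch
import torch.nn as nn

class RelativePositionNet(nn.Module):
    """Encoder-decoder over absolute cell boxes -> per-cell row/column indices."""
    def __init__(self, hidden=128, max_rows=64, max_cols=64):
        super().__init__()
        self.encoder = nn.GRU(4, hidden, batch_first=True)    # 4 = (x, y, w, h)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.row_head = nn.Linear(hidden, max_rows)
        self.col_head = nn.Linear(hidden, max_cols)

    def forward(self, cell_boxes):                 # (N, num_cells, 4), normalized
        enc_out, state = self.encoder(cell_boxes)  # encoder features per cell
        dec_out, _ = self.decoder(enc_out, state)  # decoder consumes the features
        return self.row_head(dec_out), self.col_head(dec_out)

net = RelativePositionNet()
boxes = torch.rand(1, 12, 4)                       # 12 cells with absolute coordinates
row_logits, col_logits = net(boxes)                # (1, 12, 64) each
rows, cols = row_logits.argmax(-1), col_logits.argmax(-1)   # predicted grid indices
```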
In an embodiment, the step S106 further includes:
arranging the text regions from left to right and from top to bottom according to the absolute position coordinate information;
and arranging the cells from left to right and from top to bottom according to the relative position coordinate information.
In this embodiment, when the cells are sorted to obtain the final wireless table, the text regions are first arranged so that the text regions within each cell (a cell may contain more than one text region) are distributed in order, and the cells are then arranged according to the relative position coordinate information.
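As a small illustration of this arrangement step, the sketch below orders cells top-to-bottom and left-to-right; representing the predicted relative positions as (row, col) indices and the output as a list-of-rows grid are assumptions made for the sketch.

```python
def arrange_cells(cells):
    """cells: list of dicts with 'row', 'col' and 'text'; returns a 2-D grid."""
    grid = {}
    # Left-to-right, top-to-bottom ordering by the predicted relative positions.
    for cell in sorted(cells, key=lambda c: (c["row"], c["col"])):
        grid.setdefault(cell["row"], []).append(cell["text"])
    return [grid[r] for r in sorted(grid)]

table = arrange_cells([
    {"row": 0, "col": 1, "text": "Amount"},
    {"row": 0, "col": 0, "text": "Item"},
    {"row": 1, "col": 0, "text": "Fee"},
    {"row": 1, "col": 1, "text": "42"},
])
# table == [["Item", "Amount"], ["Fee", "42"]]
```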
Fig. 3 is a schematic block diagram of a wireless form recognition apparatus 300 according to an embodiment of the present invention, where the apparatus 300 includes:
an image acquisition unit 301 for acquiring an input text image containing a wireless table;
an image extraction unit 302, configured to extract binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network;
an edge information extraction unit 303, configured to extract edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image;
a text recognition unit 304, configured to perform text recognition on each text region by using the convolutional recurrent neural network to obtain corresponding text information;
a region merging unit 305, configured to determine whether to merge different text regions according to the absolute position coordinate information, so as to obtain corresponding cell information;
the first cell arranging unit 306 is configured to obtain relative position coordinate information between adjacent cells based on the absolute position coordinate information, arrange each cell according to the relative position coordinate information, and then use an arrangement result as a wireless table.
In an embodiment, the image extraction unit 302 includes:
a feature map obtaining unit, configured to input the input text image into a ResNet50 deep residual convolutional neural network, and sequentially perform down-sampling and channel expansion on the input text image through the ResNet50 deep residual convolutional neural network to obtain different feature maps corresponding to the input text image;
and a binary segmentation image output unit, configured to compress the channels of each feature map through a convolution operation, perform up-sampling on each feature map to obtain a single-channel image whose size is consistent with that of the input text image, and output the single-channel image as the binary segmentation image.
In one embodiment, as shown in fig. 4, the edge information extracting unit 303 includes:
an edge extraction unit 401, configured to perform edge extraction on the binary segmentation image by using a dual-threshold method to obtain a high-threshold image;
a target point obtaining unit 402, configured to connect the edges in the high-threshold image into a contour, and when a contour endpoint is reached, acquire a target point that satisfies the low threshold based on the 8-neighborhood point method;
an edge information obtaining unit 403, configured to collect the remaining edges according to the target point until the edges of the binary segmentation image are closed, so as to obtain the edge information of the binary segmentation image;
a coordinate information obtaining unit 404, configured to obtain, in the edge information, absolute position coordinate information corresponding to each of the different text regions.
In one embodiment, the text recognition unit 304 includes:
a feature sequence extraction unit, configured to extract, for any text region, a feature sequence with a width of a single pixel from the text region by using the convolutional layer in the convolutional recurrent neural network;
and a character prediction unit, configured to predict the characters of the feature sequence by using the recurrent layer in the convolutional recurrent neural network, and take the prediction result as the text information.
In one embodiment, the region merging unit 305 includes:
a distance calculation unit, configured to calculate the distances between the text regions based on the absolute position coordinate information;
a first judging unit, configured to merge two adjacent text regions into one cell when the distance between the two text regions does not exceed a preset distance threshold;
and a second judging unit, configured to take each of two adjacent text regions as a separate cell when the distance between the two text regions exceeds the preset distance threshold.
In one embodiment, the first cell arranging unit 306 includes:
an encoding unit, configured to encode the absolute position coordinate information based on a seq2seq network structure and take the encoding result as target feature information;
and a decoding unit, configured to decode the target feature information by using a decoder and take the decoding result as the relative position coordinate information.
In an embodiment, the first cell arranging unit 306 further includes:
a region arrangement unit, configured to arrange the text regions from left to right and from top to bottom according to the absolute position coordinate information;
and a second cell arranging unit, configured to arrange the cells from left to right and from top to bottom according to the relative position coordinate information.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
Embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the steps provided by the above embodiments can be implemented. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present invention further provides a computer device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided in the above embodiments when calling the computer program in the memory. Of course, the computer device may also include various network interfaces, power supplies, and the like.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. A wireless form recognition method, comprising:
acquiring an input text image containing a wireless table;
extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network;
extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image;
performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information;
determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information;
and acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table.
2. The wireless form recognition method of claim 1, wherein the extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network comprises:
inputting the input text image into a ResNet50 deep residual convolutional neural network, and sequentially performing down-sampling and channel expansion on the input text image through the ResNet50 deep residual convolutional neural network to obtain different feature maps corresponding to the input text image;
compressing the channels of each feature map through a convolution operation, performing up-sampling on each feature map to obtain a single-channel image whose size is consistent with that of the input text image, and outputting the single-channel image as the binary segmentation image.
3. The wireless form recognition method of claim 1, wherein the extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image comprises:
performing edge extraction on the binary segmentation image by adopting a dual-threshold method to obtain a high-threshold image;
connecting the edges in the high-threshold image into a contour, and when a contour endpoint is reached, acquiring a target point that satisfies the low threshold based on the 8-neighborhood point method;
collecting the remaining edges according to the target point until the edges of the binary segmentation image are closed, so as to obtain the edge information of the binary segmentation image;
and acquiring, from the edge information, the absolute position coordinate information corresponding to each of the different text regions.
4. The method of claim 1, wherein the performing text recognition on each text region by using the convolutional recurrent neural network to obtain corresponding text information comprises:
for any text region, extracting a feature sequence with a width of a single pixel from the text region by using the convolutional layer in the convolutional recurrent neural network;
and predicting the characters of the feature sequence by using the recurrent layer in the convolutional recurrent neural network, and taking the prediction result as the text information.
5. The method of claim 1, wherein the determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information comprises:
calculating the distances between the text regions according to the absolute position coordinate information;
when the distance between two adjacent text regions does not exceed a preset distance threshold, merging the two text regions into one cell;
and when the distance between two adjacent text regions exceeds the preset distance threshold, taking each of the two text regions as a separate cell.
6. The wireless form recognition method of claim 1, wherein the acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table comprises:
encoding the absolute position coordinate information based on a seq2seq network structure, and taking the encoding result as target feature information;
and decoding the target feature information by using a decoder, and taking the decoding result as the relative position coordinate information.
7. The wireless form recognition method of claim 1, wherein the acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table further comprises:
arranging the text regions from left to right and from top to bottom according to the absolute position coordinate information;
and arranging the cells from left to right and from top to bottom according to the relative position coordinate information.
8. A wireless form recognition apparatus, comprising:
an image acquisition unit for acquiring an input text image containing a wireless table;
an image extraction unit for extracting binary segmentation images of different text regions from the input text image by using a deep residual convolutional neural network;
an edge information extraction unit for extracting edge information of the binary segmentation images to obtain absolute position coordinate information of the different text regions in the input text image;
a text recognition unit for performing text recognition on each text region by using a convolutional recurrent neural network to obtain corresponding text information;
a region merging unit for determining whether to merge different text regions according to the absolute position coordinate information to obtain corresponding cell information;
and a first cell arranging unit for acquiring relative position coordinate information between adjacent cells based on the absolute position coordinate information, arranging the cells according to the relative position coordinate information, and taking the arrangement result as a wireless table.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the wireless form recognition method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements a wireless form recognition method according to any one of claims 1 to 7.

Cited By (1)

* Cited by examiner, † Cited by third party

CN116071770A * (Shenzhen Qianhai Huanrong Lianyi Information Technology Service Co Ltd; priority date 2023-03-06, publication date 2023-05-05): Method, device, equipment and medium for general identification of form



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination