CN111950353B - Seal text recognition method and device and electronic equipment - Google Patents

Seal text recognition method and device and electronic equipment

Info

Publication number
CN111950353B
Authority
CN
China
Prior art keywords
seal
text box
text
picture
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010619489.6A
Other languages
Chinese (zh)
Other versions
CN111950353A (en)
Inventor
高亚南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Emperor Technology Co Ltd
Original Assignee
Shenzhen Emperor Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Emperor Technology Co Ltd filed Critical Shenzhen Emperor Technology Co Ltd
Priority to CN202010619489.6A
Publication of CN111950353A
Application granted
Publication of CN111950353B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 - Document-oriented image-based pattern recognition
    • G06V 30/41 - Analysis of document content
    • G06V 30/413 - Classification of content, e.g. text, photographs or tables
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present disclosure provides a seal text recognition method, a seal text recognition apparatus and an electronic device, belonging to the technical field of image processing. The method comprises the following steps: receiving a seal picture to be recognized; acquiring position parameters of a text box of a target seal contained in the seal picture, wherein the position parameters at least comprise the vertex coordinates of the text box; extracting, according to the position parameters of the text box, a feature layer corresponding to the text box of the target seal in the seal picture; rotating the feature layer into a standard pose; and performing text recognition on the text information in the feature layer. Because the position parameters of the text box are acquired first and the corresponding feature layer is then rotated into the standard pose, the accuracy of text recognition within the seal is greatly improved, and text boxes at different angles and orientations can be recognized accurately and rapidly.

Description

Seal text recognition method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of image processing, in particular to a seal text recognition method and device and electronic equipment.
Background
Existing recognition schemes are mainly directed at seal text in invoices, and mainly perform structured recognition of various cards and licences, such as identity cards, bank cards, driver's licences, travel permits, passports, visas and real-estate certificates. In such text recognition, the positions of the elements to be recognized are fixed, the elements are located only by template matching, and text recognition is performed slice by slice on each row. Seals on other documents, such as the many kinds of seals stamped in a passport, may however appear at various rotation angles, so their text is difficult to locate and the text recognition accuracy is low.
Existing seal text recognition schemes therefore suffer from the technical problems of low recognition accuracy and poor adaptability.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a method, an apparatus, and an electronic device for identifying a seal text, which at least partially solve the problems existing in the prior art.
In a first aspect, an embodiment of the present disclosure provides a seal text recognition method, including:
receiving a seal picture to be recognized;
acquiring position parameters of a text box of a target seal contained in the seal picture, wherein the position parameters at least comprise the vertex coordinates of the text box;
extracting, according to the position parameters of the text box, a feature layer corresponding to the text box of the target seal in the seal picture;
rotating the feature layer into a standard pose;
and performing text recognition on the text information in the feature layer.
According to a specific implementation of the embodiment of the present disclosure, the step of performing text recognition on the text information in the feature layer comprises:
extracting a target feature sequence corresponding to the text information in the feature layer;
inputting the target feature sequence into a long short-term memory (LSTM) network for feature matching;
and recognizing the text information through a temporal classification loss function.
According to a specific implementation of the embodiment of the present disclosure, the step of extracting, according to the position parameters of the text box, the feature layer corresponding to the text box of the target seal in the seal picture comprises:
performing an affine transformation according to the vertex coordinates of the text box;
and acquiring the pixel features corresponding to the text box position in a preset layer of the seal picture, wherein the preset layer is the feature layer immediately preceding the output layer.
According to a specific implementation of the embodiment of the present disclosure, the position parameters at least comprise the vertex coordinates and a deflection angle of the text box;
the step of rotating the feature layer into a standard pose comprises:
determining the deflection angle between the feature layer and a reference horizontal axis according to the vertex coordinates of the text box;
and rotating all pixels of the feature layer, according to the deflection angle, until they are flush with the reference horizontal axis.
According to a specific implementation of the embodiment of the present disclosure, the step of rotating all pixels of the feature layer until they are flush with the reference horizontal axis according to the deflection angle comprises:
extracting the date text box of the feature layer;
determining the deflection angle between the date text box and the reference horizontal axis;
and rotating, according to that deflection angle, the entire pixel region of the feature layer containing the date text box until it is flush with the reference horizontal axis.
According to a specific implementation of the embodiment of the present disclosure, the step of receiving the seal picture to be recognized comprises:
receiving an initial picture, wherein the initial picture comprises at least one pixel region where a target seal is located;
feeding the initial picture to a seal detection model and detecting the pixel region where each target seal contained in the initial picture is located;
and generating, for each target seal, a seal picture containing that one target seal according to the pixel region where it is located.
According to a specific implementation of the embodiment of the present disclosure, the position parameters at least comprise the vertex pixels, head and tail pixels, and boundary pixels of the text box;
the step of acquiring the position parameters of the text box of the target seal contained in the seal picture comprises:
inputting the seal picture into a text box positioning model, wherein the text box positioning model comprises a first convolution block, a second convolution block and a third convolution block;
obtaining a first output branch, a second output branch and a third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block;
identifying, according to the first output branch, the boundary pixels located within the bounding box of the text box in the seal picture; identifying, according to the second output branch, the head and tail pixels located at the head and/or tail of the text box; and identifying, according to the third output branch, the vertex pixels located at the vertex positions of the text box.
According to a specific implementation of the embodiment of the present disclosure, the seal picture is a square picture with a side length in the range of 256 to 400 pixels, the number of channels of the first convolution block is 32, the number of channels of the second convolution block is 64, and the number of channels of the third convolution block is 128;
the step of obtaining the first output branch, the second output branch and the third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block comprises:
performing convolution on the seal picture through the first convolution block to obtain a first feature map with dimensions 128 x 128 x 32;
performing convolution on the first feature map through the second convolution block to obtain a second feature map with dimensions 64 x 64 x 64;
performing convolution on the second feature map through the third convolution block to obtain a third feature map with dimensions 32 x 32 x 128;
up-sampling the third feature map to obtain a fourth feature map with dimensions 64 x 64 x 128;
channel-merging the fourth feature map with the second feature map to obtain a fifth feature map with dimensions 64 x 64 x 192;
and performing convolution on the fifth feature map successively through a convolutional layer containing 32 1x1 filters, a convolutional layer containing 32 3x3 filters and another convolutional layer containing 32 3x3 filters, to obtain the first output branch, the second output branch and the third output branch.
In a second aspect, an embodiment of the present disclosure provides a seal text recognition apparatus, including:
a receiving module, used for receiving a seal picture to be recognized;
an acquiring module, used for acquiring position parameters of a text box of a target seal contained in the seal picture, wherein the position parameters at least comprise the vertex coordinates of the text box;
an extracting module, used for extracting, according to the position parameters of the text box, a feature layer corresponding to the text box of the target seal in the seal picture;
a rotating module, used for rotating the feature layer into a standard pose;
and a recognition module, used for performing text recognition on the text information in the feature layer.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the seal text recognition method of the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the seal text recognition method of the first aspect or any implementation manner of the first aspect.
In a fifth aspect, embodiments of the present disclosure also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the seal text recognition method of the first aspect or any implementation of the first aspect.
According to the seal text recognition scheme of the present disclosure, for a received seal picture to be recognized, the text box of the target seal is located by acquiring the position parameters of the text box, the feature layer corresponding to the text box is then rotated into a standard pose according to those position parameters, and text recognition is finally performed on the text information in the feature layer. Because the position parameters of the text box are acquired first and the corresponding feature layer is then rotated into the standard pose, the accuracy of text recognition within the seal is greatly improved, and text boxes at different angles and orientations can be recognized accurately and rapidly.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a seal text recognition method provided in an embodiment of the disclosure;
FIGS. 2 and 3 are schematic flow diagrams of specific implementations of a seal text recognition method according to embodiments of the present disclosure;
FIG. 4 is a schematic partial flow chart of another seal text recognition method according to an embodiment of the disclosure;
fig. 5 is a schematic diagram of a text box positioning model related to a seal identification method according to an embodiment of the disclosure;
Fig. 6 is a schematic structural diagram of a seal text recognition device according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Other advantages and effects of the present disclosure will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present disclosure by way of specific examples. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. The disclosure may be embodied or practiced in other different specific embodiments, and details within the subject specification may be modified or changed from various points of view and applications without departing from the spirit of the disclosure. It should be noted that the following embodiments and features in the embodiments may be combined with each other without conflict. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure are intended to be within the scope of this disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the following claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art will appreciate that one aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. In addition, such apparatus may be implemented and/or such methods practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments merely depict the basic concept of the disclosure schematically: the drawings show only the components related to the disclosure rather than the number, shape and size of components in an actual implementation, and in an actual implementation the form, quantity and proportion of the components may vary arbitrarily and their layout may be more complicated.
In addition, in the following description, specific details are provided in order to provide a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a seal text recognition method. The seal text recognition method provided in this embodiment may be executed by a computing device, which may be implemented as software, or as a combination of software and hardware, and the computing device may be integrally provided in a server, a terminal device, or the like.
Referring to fig. 1, a schematic flow chart of a seal text recognition method is provided in an embodiment of the disclosure. As shown in fig. 1, the method mainly comprises the following steps:
S101, receiving a seal picture to be recognized;
The seal text recognition method provided by this embodiment is applied to scenes in which seal text on pictures such as passport pages and invoices must be recognized, and in particular to scenes where recognition is difficult because the seals on a passport page vary widely in type, position and angle. The method is mainly used to recognize the text information in a seal picture to be recognized, so as to collect or count the parameter information carried in the seal.
The seal text recognition provided here is applied in an electronic device with an external or built-in image acquisition apparatus, so that the electronic device can first acquire the seal picture to be recognized through the image acquisition apparatus and then perform the text recognition operation on the acquired picture using the provided method. In one implementation, the image acquisition apparatus may be arranged in front of a passport identity-verification channel; the user places the passport page to be recognized against the image acquisition port, and the electronic device takes the passport page picture collected by the image acquisition apparatus as the seal picture to be recognized for the subsequent seal text recognition.
The seal picture in this embodiment is used mainly for text recognition of the text boxes inside the seals on the picture. Preferably, the seal picture to be recognized is a seal slice containing only the pixel region where one seal is located; it then contains only the pixels of the seal currently undergoing text recognition, without pixels of other seals or interfering pixels, so the amount of computation for text recognition is smaller and the accuracy is higher.
According to a specific implementation of the embodiment of the present disclosure, the step of receiving the seal picture to be recognized comprises:
receiving an initial picture, wherein the initial picture comprises at least one pixel region where a target seal is located;
feeding the initial picture to a seal detection model and detecting the pixel region where each target seal contained in the initial picture is located;
and generating, for each target seal, a seal picture containing that one target seal according to the pixel region where it is located.
When the received initial picture contains several seals or other interfering pixels, the contour position information of each seal in the initial picture can be obtained through an instance segmentation algorithm, the seal picture corresponding to each seal is then obtained from that contour position information, and each single-seal picture is used as an input picture of the seal text recognition process.
Instance segmentation means that the electronic device automatically frames the different instance regions in the picture with a target detection method and then labels the different instance regions pixel by pixel with a semantic segmentation method. Instance segmentation algorithms usable in this embodiment include the Mask-RCNN algorithm, the YOLACT algorithm and the Cascade Mask-RCNN algorithm, among others. After the contour position information of each seal in the picture to be recognized is obtained as above, the pixel features pointed to by the contour position information of each seal can be extracted, yielding the seal slice corresponding to that seal. In this way, the seal slice corresponding to each seal contains only the pixel features of that seal and no other interfering pixels that could affect text recognition, as sketched below.
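For illustration only, the following Python sketch shows how per-seal slices might be cropped once an instance-segmentation model has produced one binary mask per seal; the mask source (a Mask-RCNN/YOLACT-style detector) and all names here are assumptions, not part of the patent.

```python
# Hypothetical sketch: cut one seal slice per detected seal out of the initial picture.
# `masks` is assumed to come from any instance-segmentation model (Mask-RCNN, YOLACT, ...).
import numpy as np

def crop_seal_slices(initial_image: np.ndarray, masks: list) -> list:
    """initial_image: (H, W, 3) image; masks: list of (H, W) binary masks, one per seal."""
    slices = []
    for mask in masks:
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            continue  # empty mask, nothing to crop
        x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
        # Keep only this seal's pixels so the slice contains no interfering pixels.
        isolated = np.where(mask[..., None] > 0, initial_image, 0)
        slices.append(isolated[y0:y1 + 1, x0:x1 + 1])
    return slices
```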
S102, acquiring the position parameters of the text box of the target seal contained in the seal picture, wherein the position parameters at least comprise the vertex coordinates of the text box;
For text recognition, the text box in the seal picture must first be located, so the position parameters of the text box of the contained target seal are acquired first, for example the coordinate data of specific pixels such as the vertex pixels, head and tail pixels, or boundary pixels of the text box. As shown in figs. 2 and 3, the vertex pixels (A in fig. 2) may be the pixels at the vertex positions of the seal's text box, for example the four vertex pixels of a rectangular box; the head and tail pixels (B in fig. 2) may be the pixels at the head or tail of the text box; and the boundary pixels (C in fig. 2) may be the pixels of the edge region inside the text box.
The electronic device can be pre-loaded with a trained text box positioning model, which extracts and fuses the pixel features of an input seal picture to obtain the position parameters of the various specific pixels of the text box in that picture. The seal picture is input into the text box positioning model in the electronic device, and the various position parameters of the text box in the seal picture are obtained rapidly through the feature extraction and matching of the model.
S103, extracting, according to the position parameters of the text box, a feature layer corresponding to the text box of the target seal in the seal picture;
After the position parameters of the text box are obtained as above, the text box can be located rapidly from the parameters of the various specific pixels (vertex pixels, head and tail pixels, boundary pixels), and the feature layer corresponding to the text box can then be extracted. As shown in fig. 3, the boundary pixels at the head and at the tail each predict 2 vertex coordinates. The boundary pixels are defined as all pixels inside the dark boxes at the two ends of the text box; a weighted average of all the predictions of these boundary pixels yields the two vertices at the two ends of the short side of the head or of the tail. Since the head boundary pixels and the tail boundary pixels predict 2 vertices each, 4 vertex coordinates are finally obtained. All feature pixels of the text box in the seal picture form the feature layer of the text box.
In a specific embodiment, the step of locating the text box in the seal picture according to its position parameters may comprise:
determining an initial pixel region of the text box according to the vertex pixels;
correcting the initial pixel region to the standard pose according to the head and tail pixels;
and marking out the text box, inside the initial pixel region corrected to the standard pose, according to the edge pixels.
From the vertex pixels of the text box, a minimum circumscribed box, for example a minimum circumscribed rectangle, can be generated; all pixels inside this minimum circumscribed rectangle form the initial pixel region of the text box. The initial pixel region is then corrected according to the head and tail pixels of the text box so that it reaches the standard pose, which is usually defined as an included angle of 0 degrees with the horizontal axis. Finally, the edge pixels are screened out of the corrected initial pixel region, and the remaining pixel region is the text box. In this way, the text box containing the pixels where the text information is located can be located rapidly in the seal picture, as in the sketch below.
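A minimal sketch, under the assumption that the predicted vertex pixels are available as an (N, 2) coordinate array, of generating the minimum circumscribed rectangle and measuring how far it deviates from the standard pose (0 degrees to the horizontal axis); it illustrates the idea above rather than the patented implementation.

```python
# Illustrative only: minimum circumscribed rectangle + deflection angle from vertex pixels.
import cv2
import numpy as np

def initial_region_and_deflection(vertex_pixels: np.ndarray):
    """vertex_pixels: (N, 2) x,y coordinates of pixels predicted as text-box vertices."""
    rect = cv2.minAreaRect(vertex_pixels.astype(np.float32))  # (centre, (w, h), angle)
    corners = cv2.boxPoints(rect)                             # 4 corners of the rotated box
    (cx, cy), (w, h), angle = rect
    # Deflection of the long side relative to the reference horizontal axis, in degrees.
    deflection = angle if w >= h else angle - 90.0
    return corners, deflection
```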
After the position of the text box is determined, all pixel features corresponding to the text box are extracted from the picture to obtain a feature layer containing those pixel features, and the subsequent text recognition flow operates on this feature layer.
S104, rotating the feature layer into a standard pose;
Since a seal may be stamped at various angles, the text box of the target seal on the seal picture may correspondingly face in various directions. To improve the accuracy of text recognition, the feature layer corresponding to the text box is rotated into a standard pose according to the position parameters of the text box. For example, a rotation correction of the region of interest (Region Of Interest Rotate, ROI Rotate for short) can be used to rotate the acquired feature layer corresponding to the region of interest.
According to a specific implementation of the embodiment of the present disclosure, the position parameters may at least comprise the vertex coordinates and a deflection angle of the text box;
the rotation adjustment step may comprise:
determining the deflection angle between the feature layer and a reference horizontal axis according to the vertex coordinates of the text box;
and rotating all pixels of the feature layer, according to the deflection angle, until they are flush with the reference horizontal axis.
Further, the step of rotating all pixels of the feature layer until they are flush with the reference horizontal axis according to the deflection angle comprises:
extracting the date text box in the feature layer;
determining the deflection angle between the date text box and the reference horizontal axis;
and rotating, according to that deflection angle, the entire pixel region of the feature layer containing the date text box until it is flush with the reference horizontal axis.
The ROI Rotate correction rotates a feature map facing in any direction into the horizontal direction. The rotation correction is performed mainly according to the date text box: when the included angle between the long side of the date text box and the horizontal axis is adjusted to 0, the whole seal is rotated accordingly.
Specifically, the region of interest of the "3x3, 32" feature map in fig. 5 corresponding to the 4 vertices is recorded as the ROI, and the ROI is rotated into the horizontal direction to obtain the feature sequence of each text instance; because the four vertices of the ROI are ordered, 0 degrees can be distinguished from 180 degrees. The feature sequence of each text instance is then input into the LSTM. Each set of 4 vertices corresponds to one ROI, one text box and one text-instance feature sequence. A sketch of such an operation is given below.
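The sketch below illustrates one way an ROI-Rotate style correction could be written, assuming the 4 ROI vertices are ordered (so 0 and 180 degrees can be told apart) and given as coordinates on the feature map; it is not the exact operator used in the patent.

```python
# Illustrative ROI-Rotate style sketch: warp an arbitrarily oriented text-box region of a
# feature map to a horizontal strip of fixed height, ready to be read as a feature sequence.
import cv2
import numpy as np

def roi_rotate(feature_map: np.ndarray, vertices: np.ndarray, out_height: int = 8) -> np.ndarray:
    """feature_map: (H, W, C) float32; vertices: (4, 2) ordered top-left, top-right,
    bottom-right, bottom-left on the feature map. Returns a horizontally aligned crop."""
    tl, tr, br, bl = vertices.astype(np.float32)
    long_side = max(np.linalg.norm(tr - tl), np.linalg.norm(br - bl))
    short_side = max(np.linalg.norm(bl - tl), np.linalg.norm(br - tr))
    out_width = max(1, int(round(long_side * out_height / max(short_side, 1e-6))))
    dst = np.array([[0, 0], [out_width - 1, 0],
                    [out_width - 1, out_height - 1], [0, out_height - 1]], dtype=np.float32)
    # Map the tilted quadrilateral onto an axis-aligned rectangle (the angle becomes 0 degrees).
    matrix = cv2.getPerspectiveTransform(np.stack([tl, tr, br, bl]), dst)
    # warpPerspective handles at most 4 channels, so warp the feature map channel by channel.
    channels = [cv2.warpPerspective(feature_map[..., c], matrix, (out_width, out_height))
                for c in range(feature_map.shape[-1])]
    return np.stack(channels, axis=-1)
```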
S105, performing text recognition on the text information in the feature layer.
After the text box has been located in the seal picture and the corresponding feature layer has been adjusted to the standard pose, text recognition can be performed on the text information in the feature layer. Text recognition can be done in various ways, for example by optical character recognition (Optical Character Recognition, OCR for short).
According to a specific implementation of an embodiment of the disclosure, as shown in fig. 4, the steps of text recognition may include:
S401, extracting the target feature sequence corresponding to the text information in the feature layer;
The feature layer corresponds to the pixel features, at the layer preceding the output layer, of the text box position, so the features of the text information in the text box form a partial feature sequence within this feature layer. The target feature sequence corresponding to the text information is extracted from the feature layer; this target feature sequence is fixed in height and variable in width, and the text information mainly comprises fields such as the entry/exit date, country and airport.
Optionally, the step of extracting the target feature sequence corresponding to the text information in the feature layer comprises:
performing an affine transformation according to the vertex coordinates of the text box;
and acquiring the pixel features corresponding to the text box position in a preset layer of the seal picture, wherein the preset layer is the feature layer immediately preceding the output layer.
The affine transformation is performed according to the 4 vertex positions of the text box at the output layer to obtain the position of the text box on the feature map of the preceding layer, and the same operation as before is then performed, namely rotating the text box on that feature map, whatever its direction, into the horizontal direction through the ROI Rotate correction. A minimal coordinate-mapping sketch follows.
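A minimal sketch, assuming the output layer and the chosen feature layer differ only by a known spatial scale (in fig. 5 both are 64x64, so the scale is 1), of mapping the predicted vertex coordinates onto the preceding feature map before applying the ROI-Rotate correction; the scale handling is an assumption made for illustration.

```python
# Illustrative only: rescale text-box vertices from output-layer coordinates to the
# coordinate frame of the feature map that the ROI-Rotate correction is applied to.
import numpy as np

def map_vertices_to_feature_map(vertices_out: np.ndarray,
                                output_size: tuple = (64, 64),
                                feature_size: tuple = (64, 64)) -> np.ndarray:
    """vertices_out: (4, 2) x,y vertex coordinates at the output-layer resolution."""
    scale = np.array([feature_size[1] / output_size[1],   # x scale (width)
                      feature_size[0] / output_size[0]])  # y scale (height)
    return vertices_out * scale
```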
S402, inputting the target feature sequence into a long short-term memory network for feature matching;
S403, recognizing the text information through a temporal classification loss function.
The electronic device is also loaded with a Long Short-Term Memory network (LSTM for short); the feature sequence is input into the LSTM and text recognition is performed through a Connectionist Temporal Classification (CTC) loss function, which realizes end-to-end text recognition and increases the speed of seal text recognition. A possible sketch of such a recognition head is shown below.
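A hedged PyTorch sketch of such an LSTM + CTC recognition head; the layer sizes and character-set size are illustrative assumptions, not values taken from the patent.

```python
# Illustrative recognition head: a bidirectional LSTM over the width-wise feature sequence
# followed by a linear projection; training would use a CTC loss for end-to-end recognition.
import torch
import torch.nn as nn

class SealTextRecognizer(nn.Module):
    def __init__(self, feat_channels: int = 32, hidden: int = 128, num_classes: int = 100):
        super().__init__()
        self.lstm = nn.LSTM(feat_channels, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)  # num_classes includes the CTC blank

    def forward(self, feature_seq: torch.Tensor) -> torch.Tensor:
        # feature_seq: (batch, width, channels) taken from the ROI-rotated feature layer.
        out, _ = self.lstm(feature_seq)
        return self.fc(out).log_softmax(dim=-1)

# Training-step sketch with CTC (logits reshaped from (batch, width, classes) to (width, batch, classes)):
# logits = model(feature_seq)
# loss = nn.CTCLoss(blank=0)(logits.permute(1, 0, 2), targets, input_lengths, target_lengths)
```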
Of course, in other embodiments, other image text recognition methods may also be used to collect or recognize the text information in the seal picture quickly and accurately.
According to the seal text recognition method of the embodiment of the present disclosure, for a received seal picture to be recognized, the feature layer corresponding to the target seal is extracted by acquiring the position parameters of the text box, the feature layer is then rotated into a standard pose according to those position parameters, and text recognition is performed on the text information in the text box. By locating the text box accurately and rotating it into the standard pose, the accuracy of text recognition within the seal is greatly improved, and text boxes at different angles and orientations can be recognized accurately and rapidly.
Based on the above embodiments, according to a specific implementation of the embodiment of the present disclosure, the position parameters at least comprise the vertex pixels, head and tail pixels, and boundary pixels of the text box;
the step S102 of acquiring the position parameters of the text box of the target seal contained in the seal picture may comprise:
inputting the seal picture into a text box positioning model, wherein the text box positioning model comprises a first convolution block, a second convolution block and a third convolution block;
obtaining a first output branch, a second output branch and a third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block;
identifying, according to the first output branch, the boundary pixels located within the bounding box of the text box in the seal picture; identifying, according to the second output branch, the head and tail pixels located at the head and/or tail of the text box; and identifying, according to the third output branch, the vertex pixels located at the vertex positions of the text box.
As shown in fig. 5, the text box positioning model used may comprise a first convolution block conv block1, a second convolution block conv block2 and a third convolution block conv block3. Conv block1, conv block2 and conv block3 are convolution blocks of a modified VGG-16 with 32, 64 and 128 channels respectively; the filters in conv block1, conv block2 and conv block3 are still 3x3, and the filter stride is 2. The input dimension is, for example, 256x256x3, where 256x256 is the length and width of the picture and 3 denotes its r, g and b colour channels.
In a specific implementation, the seal picture is a square picture with a side length in the range of 256 to 400 pixels, the number of channels of the first convolution block is 32, the number of channels of the second convolution block is 64, and the number of channels of the third convolution block is 128;
the step of obtaining the first output branch, the second output branch and the third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block comprises:
performing convolution on the seal picture through the first convolution block to obtain a first feature map with dimensions 128 x 128 x 32;
performing convolution on the first feature map through the second convolution block to obtain a second feature map with dimensions 64 x 64 x 64;
performing convolution on the second feature map through the third convolution block to obtain a third feature map with dimensions 32 x 32 x 128;
up-sampling the third feature map to obtain a fourth feature map with dimensions 64 x 64 x 128;
channel-merging the fourth feature map with the second feature map to obtain a fifth feature map with dimensions 64 x 64 x 192;
and performing convolution on the fifth feature map successively through a convolutional layer containing 32 1x1 filters, a convolutional layer containing 32 3x3 filters and another convolutional layer containing 32 3x3 filters, to obtain the first output branch, the second output branch and the third output branch.
Specifically, as shown in fig. 5, the initial input is a square picture with a side length in the range of 256 to 400, for example 256x256. It first passes through conv block1 ("3x3, 32, /2", meaning 32 3x3 filters with stride 2) to give a 128x128x32 feature map, where 128x128 is the length and width of the feature map and 32 is its number of channels (as many channels as there are filters). It then passes through conv block2 ("3x3, 64, /2") to give a 64x64x64 feature map, and through conv block3 ("3x3, 128, /2") to give a 32x32x128 feature map. The 32x32x128 feature map is up-sampled to 64x64x128 and channel-merged (concat) with the 64x64x64 feature map from conv block2, yielding a 64x64x192 feature map; this is the top-down multi-layer feature fusion. The fused map then passes through a "1x1, 32" layer (32 1x1 filters) to give a 64x64x32 feature map and through two successive "3x3, 32" layers (32 3x3 filters each) to give further 64x64x32 feature maps, from which the 3 output branches are finally obtained.
The first output branch ("1x1, 1", i.e. one 1x1 filter) gives a 64x64x1 feature map indicating whether each pixel lies inside the text bounding box (1 if so, otherwise 0). The second output branch ("1x1, 2", i.e. two 1x1 filters) gives a 64x64x2 feature map indicating whether each pixel belongs to the head or to the tail of the text box: the first channel marks whether a pixel belongs to the head (1 if so, otherwise 0) and the second channel marks whether it belongs to the tail (1 if so, otherwise 0). The third output branch ("1x1, 4", i.e. four 1x1 filters) gives a 64x64x4 feature map in which each channel marks whether a pixel belongs to one of the 4 vertices (1 if so, otherwise 0). A hedged sketch of such a network is given below.
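The following PyTorch sketch mirrors the structure walked through above and in fig. 5 (three stride-2 convolution blocks of 32/64/128 channels, up-sampling and channel merging, a 1x1 and two 3x3 refinement layers of 32 filters, and three 1x1 output branches). The depth of each convolution block and the activation functions are assumptions; the patent fixes only the filter sizes, strides and channel counts.

```python
# Illustrative text-box positioning model (shapes follow fig. 5 for a 256x256x3 input).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    # "3x3, out_ch, /2": 3x3 filters with stride 2 halve the spatial size.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))

class TextBoxLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = conv_block(3, 32)    # 256x256x3  -> 128x128x32
        self.block2 = conv_block(32, 64)   # 128x128x32 -> 64x64x64
        self.block3 = conv_block(64, 128)  # 64x64x64   -> 32x32x128
        self.refine = nn.Sequential(       # 64x64x192  -> 64x64x32
            nn.Conv2d(192, 32, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.inside_branch = nn.Conv2d(32, 1, 1)     # 1x1, 1: is the pixel inside the text box?
        self.head_tail_branch = nn.Conv2d(32, 2, 1)  # 1x1, 2: head / tail membership
        self.vertex_branch = nn.Conv2d(32, 4, 1)     # 1x1, 4: one channel per text-box vertex

    def forward(self, x: torch.Tensor):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        up = F.interpolate(f3, scale_factor=2, mode="nearest")  # 32x32x128 -> 64x64x128
        fused = self.refine(torch.cat([up, f2], dim=1))         # concat    -> 64x64x192
        return self.inside_branch(fused), self.head_tail_branch(fused), self.vertex_branch(fused)
```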
In this way, the vertex pixels, boundary pixels and head and tail pixels of the text box in the seal picture can be identified rapidly and accurately.
With this method of locating the text box in a seal, the lightweight text box positioning model receives a small seal picture, has few convolution layers and few filters per layer, and locates a text box in any direction rapidly by predicting the positions of its four vertices.
In summary, the seal text recognition method provided by the embodiment of the present disclosure rapidly locates a text box in any direction by predicting the positions of its four vertices, and then, through the ROI Rotate correction and given the date text box, rotates the whole seal into the horizontal direction according to that date text box, so that text boxes in any direction are recognized rapidly. Text detection and text recognition are performed in sequence, the target of each stage is relatively clear, a lightweight network structure can be designed for each stage, and both the accuracy and the speed are relatively high.
Corresponding to the above method embodiment, referring to fig. 6, the embodiment of the present disclosure further provides a seal text recognition device 60, including:
a receiving module 601, configured to receive a seal picture to be recognized;
an acquiring module 602, configured to acquire position parameters of a text box of a target seal contained in the seal picture, wherein the position parameters at least comprise the vertex coordinates of the text box;
an extracting module 603, configured to extract, according to the position parameters of the text box, a feature layer corresponding to the text box of the target seal in the seal picture;
a rotating module 604, configured to rotate the feature layer into a standard pose;
and a recognition module 605, configured to perform text recognition on the text information in the feature layer.
The apparatus shown in fig. 6 may correspondingly perform the content in the foregoing method embodiment, and the portions not described in detail in this embodiment refer to the content described in the foregoing method embodiment and are not described herein again.
Referring to fig. 7, an embodiment of the present disclosure also provides an electronic device 70, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the seal text recognition method of the foregoing method embodiments.
The disclosed embodiments also provide a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the seal text recognition method in the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the seal text recognition method of the foregoing method embodiments.
Referring now to fig. 7, a schematic diagram of an electronic device 70 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 70 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 70 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 70 to communicate wirelessly or by wire with other devices to exchange data. While an electronic device 70 having various means is shown, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, enable the electronic device to implement the solutions provided by the method embodiments described above.
Or the computer readable medium carries one or more programs which, when executed by the electronic device, enable the electronic device to implement the solutions provided by the method embodiments described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the disclosure are intended to be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. A seal text recognition method, comprising:
receiving a seal picture to be recognized;
acquiring position parameters of a text box of a target seal contained in the seal picture, wherein the position parameters at least comprise the vertex coordinates of the text box;
extracting, according to the position parameters of the text box, a feature layer corresponding to the text box of the target seal in the seal picture;
rotating the feature layer into a standard pose;
and performing text recognition on the text information in the feature layer;
wherein the position parameters further comprise the vertex pixels, head and tail pixels or boundary pixels of the text box;
the step of acquiring the position parameters of the text box of the target seal contained in the seal picture comprises:
inputting the seal picture into a text box positioning model, wherein the text box positioning model comprises a first convolution block, a second convolution block and a third convolution block;
obtaining a first output branch, a second output branch and a third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block;
identifying, according to the first output branch, the boundary pixels located within the bounding box of the text box in the seal picture; identifying, according to the second output branch, the head and tail pixels located at the head and/or tail of the text box; and identifying, according to the third output branch, the vertex pixels located at the vertex positions of the text box.
2. The method according to claim 1, wherein the step of performing text recognition on the text information in the feature layer comprises:
extracting a target feature sequence corresponding to the text information in the feature layer;
inputting the target feature sequence into a long short-term memory network for feature matching;
and recognizing the text information through a temporal classification loss function.
3. The method according to claim 2, wherein the step of extracting, according to the position parameters of the text box, the feature layer corresponding to the text box of the target seal in the seal picture comprises:
performing an affine transformation according to the vertex coordinates of the text box;
and acquiring the pixel features corresponding to the text box position in a preset layer of the seal picture, wherein the preset layer is the feature layer immediately preceding the output layer.
4. The method of claim 1, wherein the location parameter comprises at least vertex coordinates and a deflection angle of the text box;
and the step of rotating the feature map layer to a standard pose comprises:
determining a deflection angle between the feature map layer and a reference horizontal axis according to the vertex coordinates of the text box; and
rotating all pixels of the feature map layer, according to the deflection angle, so that the layer is flush with the reference horizontal axis.
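A minimal sketch of claim 4, assuming the deflection angle is measured from the text box's top edge (vertex 0 to vertex 1); the OpenCV-based rotation and the choice of rotating about the layer centre are assumptions.

import numpy as np
import cv2

def rotate_to_standard_pose(feature_map_layer, vertices):
    (x0, y0), (x1, y1) = vertices[0], vertices[1]
    # Deflection angle between the text box and the reference horizontal axis.
    deflection_deg = np.degrees(np.arctan2(y1 - y0, x1 - x0))
    h, w = feature_map_layer.shape[:2]
    # Rotate every pixel of the layer about its centre so the box lies flat;
    # layers with many channels can be rotated channel by channel as in the
    # previous sketch.
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), deflection_deg, 1.0)
    return cv2.warpAffine(feature_map_layer, m, (w, h))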
5. The method of claim 4, wherein the step of rotating all pixels of the feature map layer to be flush with the reference horizontal axis according to the deflection angle comprises:
extracting a date frame of the feature map layer;
determining the deflection angle between the date frame and the reference horizontal axis; and
rotating, according to the deflection angle, the entire pixel area of the feature map layer containing the date frame so that it is flush with the reference horizontal axis.
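A brief sketch of claim 5's variant, where the straight date frame of a circular seal supplies the deflection angle; the date-frame mask is a hypothetical input (e.g. from a separate detector), and the line fit with cv2.fitLine is an assumed way of measuring its tilt.

import numpy as np
import cv2

def rotate_by_date_frame(feature_map_layer, date_frame_mask):
    # Fit a straight line through the date-frame pixels; its direction gives
    # the deflection angle relative to the reference horizontal axis.
    pts = np.argwhere(date_frame_mask > 0)[:, ::-1].astype(np.float32)   # (x, y)
    vx, vy = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()[:2]
    deflection_deg = np.degrees(np.arctan2(vy, vx))
    h, w = feature_map_layer.shape[:2]
    # Rotate the entire layer containing the date frame so it lies flush
    # with the reference horizontal axis.
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), deflection_deg, 1.0)
    return cv2.warpAffine(feature_map_layer, m, (w, h))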
6. The method of any one of claims 1 to 5, wherein the step of receiving a seal picture to be recognized comprises:
receiving an initial picture, wherein the initial picture comprises at least one pixel region where a target seal is located;
inputting the initial picture into a seal detection model, and detecting the pixel region where each target seal contained in the initial picture is located; and
generating, according to the pixel region where each target seal is located, a seal picture containing that target seal.
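A minimal sketch of claim 6, assuming the seal detection model is a callable that returns axis-aligned pixel regions as (x, y, w, h) tuples; the function name and the square output size are illustrative.

import cv2

def split_into_seal_pictures(initial_picture, seal_detector, out_size=256):
    seal_pictures = []
    for (x, y, w, h) in seal_detector(initial_picture):
        # Pixel region where one target seal is located.
        region = initial_picture[y:y + h, x:x + w]
        # Each generated seal picture contains exactly one target seal, resized
        # to the square side length expected by the text box positioning model.
        seal_pictures.append(cv2.resize(region, (out_size, out_size)))
    return seal_pictures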
7. The method of claim 6, wherein the seal picture is a square picture with a side length in the range of 256 to 400 pixels, the first convolution block has 32 channels, the second convolution block has 64 channels, and the third convolution block has 128 channels;
and the step of obtaining the first output branch, the second output branch and the third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block comprises:
performing convolution processing on the seal picture through the first convolution block to obtain a first feature map with dimensions of 128 × 128 × 32;
performing convolution processing on the first feature map through the second convolution block to obtain a second feature map with dimensions of 64 × 64 × 64;
performing convolution processing on the second feature map through the third convolution block to obtain a third feature map with dimensions of 32 × 32 × 128;
performing up-sampling on the third feature map to obtain a fourth feature map with dimensions of 64 × 64 × 128;
performing channel merging on the fourth feature map and the second feature map to obtain a fifth feature map with dimensions of 64 × 64 × 192; and
performing convolution processing on the fifth feature map sequentially through a convolution layer containing 32 filters of size 1 × 1, a convolution layer containing 32 filters of size 3 × 3, and a convolution layer containing 32 filters of size 3 × 3, to obtain the first output branch, the second output branch and the third output branch.
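A hedged PyTorch sketch of the fusion in claim 7 for a 256 × 256 seal picture. The claim does not spell out how the three 32-filter convolutions yield the three branches, so here each stage's output is taken as one branch; the stride-2 block design, batch normalization and nearest-neighbour up-sampling are assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # A stride-2 block that halves the spatial size: 256 -> 128 -> 64 -> 32.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TextBoxLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = conv_block(3, 32)     # first feature map:  128 x 128 x 32
        self.block2 = conv_block(32, 64)    # second feature map: 64 x 64 x 64
        self.block3 = conv_block(64, 128)   # third feature map:  32 x 32 x 128
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.head1 = nn.Conv2d(192, 32, kernel_size=1)              # 32 filters, 1 x 1
        self.head2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)    # 32 filters, 3 x 3
        self.head3 = nn.Conv2d(32, 32, kernel_size=3, padding=1)    # 32 filters, 3 x 3

    def forward(self, seal_picture):
        f1 = self.block1(seal_picture)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        f4 = self.up(f3)                    # fourth feature map: 64 x 64 x 128
        f5 = torch.cat([f4, f2], dim=1)     # fifth feature map:  64 x 64 x 192 (channel merge)
        b1 = self.head1(f5)                 # first output branch  (boundary pixels)
        b2 = self.head2(b1)                 # second output branch (head/tail pixels)
        b3 = self.head3(b2)                 # third output branch  (vertex pixels)
        return b1, b2, b3

branches = TextBoxLocator()(torch.randn(1, 3, 256, 256))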
8. A seal text recognition device, comprising:
a receiving module, configured to receive a seal picture to be recognized;
an acquiring module, configured to acquire a location parameter of a text box of a target seal contained in the seal picture, wherein the location parameter at least comprises vertex coordinates of the text box and further comprises vertex pixels, head-tail pixels, or boundary pixels of the text box; wherein acquiring the location parameter of the text box of the target seal contained in the seal picture comprises: inputting the seal picture into a text box positioning model, wherein the text box positioning model comprises a first convolution block, a second convolution block and a third convolution block; obtaining a first output branch, a second output branch and a third output branch through top-down multi-layer feature fusion among the first convolution block, the second convolution block and the third convolution block; and identifying, according to the first output branch, boundary pixels in the seal picture that are located within the bounding box of the text box, identifying, according to the second output branch, head-tail pixels located at the head and/or tail of the text box, and identifying, according to the third output branch, vertex pixels located at the vertex positions of the text box;
an extraction module, configured to extract a feature map layer corresponding to the text box of the target seal in the seal picture according to the location parameter of the text box;
a rotation module, configured to rotate the feature map layer to a standard pose; and
a recognition module, configured to perform text recognition on the text information in the feature map layer.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the seal text recognition method of any one of claims 1 to 7.
CN202010619489.6A 2020-06-30 2020-06-30 Seal text recognition method and device and electronic equipment Active CN111950353B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010619489.6A CN111950353B (en) 2020-06-30 2020-06-30 Seal text recognition method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010619489.6A CN111950353B (en) 2020-06-30 2020-06-30 Seal text recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111950353A CN111950353A (en) 2020-11-17
CN111950353B (en) 2024-04-19

Family

ID=73337850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010619489.6A Active CN111950353B (en) 2020-06-30 2020-06-30 Seal text recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111950353B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560854A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN112686236B (en) * 2020-12-21 2023-06-02 福建新大陆软件工程有限公司 Seal detection method for multi-feature fusion
CN112926511A (en) * 2021-03-25 2021-06-08 深圳市商汤科技有限公司 Seal text recognition method, device and equipment and computer readable storage medium
CN113420760A (en) * 2021-06-22 2021-09-21 内蒙古师范大学 Handwritten Mongolian detection and identification method based on segmentation and deformation LSTM
CN113436080B (en) * 2021-06-30 2024-09-10 平安科技(深圳)有限公司 Seal image processing method, device, equipment and storage medium
CN113963339B (en) * 2021-09-02 2024-08-13 泰康保险集团股份有限公司 Information extraction method and device
CN114565044B (en) * 2022-03-01 2022-08-16 北京九章云极科技有限公司 Seal identification method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447068A (en) * 2018-10-26 2019-03-08 信雅达系统工程股份有限公司 A method of it separating seal from image and calibrates seal
CN109635627A (en) * 2018-10-23 2019-04-16 中国平安财产保险股份有限公司 Pictorial information extracting method, device, computer equipment and storage medium
WO2019192397A1 (en) * 2018-04-04 2019-10-10 华中科技大学 End-to-end recognition method for scene text in any shape
CN110443250A (en) * 2019-07-31 2019-11-12 天津车之家数据信息技术有限公司 A kind of classification recognition methods of contract seal, device and calculate equipment
CN110516541A (en) * 2019-07-19 2019-11-29 金蝶软件(中国)有限公司 Text positioning method, device, computer readable storage medium and computer equipment
CN110555372A (en) * 2019-07-22 2019-12-10 深圳壹账通智能科技有限公司 Data entry method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8255822B2 (en) * 2007-12-21 2012-08-28 Microsoft Corporation Incorporated handwriting input experience for textboxes

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019192397A1 (en) * 2018-04-04 2019-10-10 华中科技大学 End-to-end recognition method for scene text in any shape
CN109635627A (en) * 2018-10-23 2019-04-16 中国平安财产保险股份有限公司 Pictorial information extracting method, device, computer equipment and storage medium
CN109447068A (en) * 2018-10-26 2019-03-08 信雅达系统工程股份有限公司 A method of it separating seal from image and calibrates seal
CN110516541A (en) * 2019-07-19 2019-11-29 金蝶软件(中国)有限公司 Text positioning method, device, computer readable storage medium and computer equipment
CN110555372A (en) * 2019-07-22 2019-12-10 深圳壹账通智能科技有限公司 Data entry method, device, equipment and storage medium
CN110443250A (en) * 2019-07-31 2019-11-12 天津车之家数据信息技术有限公司 A kind of classification recognition methods of contract seal, device and calculate equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蒋冲宇; 鲁统伟; 闵峰; 熊寒颖; 胡记伟. Invoice text detection and recognition method based on neural networks. Journal of Wuhan Institute of Technology, 2019, (06), 82-86. *

Also Published As

Publication number Publication date
CN111950353A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN111950353B (en) Seal text recognition method and device and electronic equipment
CN111598091A (en) Image recognition method and device, electronic equipment and computer readable storage medium
CN108009543B (en) License plate recognition method and device
CN111444921A (en) Scratch defect detection method and device, computing equipment and storage medium
CN107911753A (en) Method and apparatus for adding digital watermarking in video
CN111950355A (en) Seal identification method and device and electronic equipment
CN110781823B (en) Screen recording detection method and device, readable medium and electronic equipment
CN112001912B (en) Target detection method and device, computer system and readable storage medium
CN112487848B (en) Character recognition method and terminal equipment
CN111597953A (en) Multi-path image processing method and device and electronic equipment
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110211195B (en) Method, device, electronic equipment and computer-readable storage medium for generating image set
US20190286875A1 (en) Cloud detection in aerial imagery
CN111209856B (en) Invoice information identification method and device, electronic equipment and storage medium
CN111199567B (en) Lane line drawing method and device and terminal equipment
CN113436222A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN112232311A (en) Face tracking method and device and electronic equipment
CN111539341A (en) Target positioning method, device, electronic equipment and medium
CN113033715B (en) Target detection model training method and target vehicle detection information generation method
CN110633759A (en) Image fusion method and device and electronic equipment
CN112396060B (en) Identification card recognition method based on identification card segmentation model and related equipment thereof
CN110287350A (en) Image search method, device and electronic equipment
CN113762266A (en) Target detection method, device, electronic equipment and computer readable medium
CN110070042A (en) Character recognition method, device and electronic equipment
CN111950356B (en) Seal text positioning method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant