CN111178355B - Seal identification method, device and storage medium - Google Patents

Seal identification method, device and storage medium

Info

Publication number
CN111178355B
CN111178355B
Authority
CN
China
Prior art keywords
image area
image
seal
determining
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911378424.0A
Other languages
Chinese (zh)
Other versions
CN111178355A (en)
Inventor
朱永魁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinochem Capital Co ltd
Original Assignee
Sinochem Capital Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sinochem Capital Co ltd filed Critical Sinochem Capital Co ltd
Priority to CN201911378424.0A priority Critical patent/CN111178355B/en
Publication of CN111178355A publication Critical patent/CN111178355A/en
Application granted granted Critical
Publication of CN111178355B publication Critical patent/CN111178355B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 Scene text, e.g. street names
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/28 Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet
    • G06V30/287 Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet of Kanji, Hiragana or Katakana characters

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a seal identification method, a seal identification device and a storage medium. The seal identification method comprises the following steps: acquiring an image to be identified, wherein the image to be identified comprises a seal pattern; determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing seal patterns in an image to be identified; and identifying characters in the seal pattern in the first image area according to the determined first image area information.

Description

Seal identification method, device and storage medium
Technical Field
The present application relates to image recognition technology, and in particular, to a method, apparatus, and storage medium for recognizing a seal.
Background
At present, when recognizing Chinese characters in a circular seal, the position of the circular seal is located first, and the Chinese characters in the circular seal are then read according to the determined position. The problem of locating a red circular seal is usually solved with a physics-based method, for example estimating the position of the circle center with a color model such as HSI or RGB and then estimating the radius from the boundary points of the circle to obtain the position. However, if the circular seal and the document text have the same color (e.g., when a document carrying a red circular seal is a black-and-white copy), such an algorithm is no longer applicable or becomes much less effective.
No effective solution has yet been proposed for the technical problem in the prior art that the position of a seal in an image cannot be accurately determined when the seal and the document text have the same color.
Disclosure of Invention
The embodiment of the disclosure provides a seal identification method, a seal identification device and a storage medium, which at least solve the technical problem that the position of a seal in an image cannot be accurately determined when the colors of the seal and a document font are consistent in the prior art.
According to an aspect of the embodiments of the present disclosure, there is provided a stamp identifying method, including: acquiring an image to be identified, wherein the image to be identified comprises a seal pattern; determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing the seal pattern in an image to be identified; and identifying characters in the seal pattern in the first image area according to the determined first image area information.
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium including a stored program, wherein the method described above is performed by a processor when the program is run.
According to another aspect of the embodiments of the present disclosure, there is also provided a stamp identifying apparatus including: the image acquisition module is used for acquiring an image to be identified, wherein the image to be identified comprises a seal pattern; the first image area information determining module is used for determining first image area information by utilizing a preset first neural network, wherein the first image area information is used for indicating a first image area containing seal patterns in an image to be identified; and the character recognition module is used for recognizing characters in the seal graph in the first image area according to the determined first image area information.
According to another aspect of the embodiments of the present disclosure, there is also provided a stamp identifying apparatus including: a processor; and a memory, coupled to the processor, for providing instructions to the processor for processing the steps of: acquiring an image to be identified, wherein the image to be identified comprises a seal pattern; determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing seal patterns in an image to be identified; and identifying characters in the seal pattern in the first image area according to the determined first image area information.
According to the embodiments of the present disclosure, a convolutional neural network based on deep learning is used instead of the prior-art method of estimating the circle center position with a color model such as HSI or RGB, so that the image area containing the seal pattern can be accurately located even if the seal pattern has the same color as the text in the document. This solves the technical problem in the prior art that the position of the seal in an image cannot be accurately determined when the seal and the document text have the same color.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a computing device for implementing a method according to embodiment 1 of the present disclosure;
Fig. 2 is a flow chart of a stamp identifying method according to a first aspect of embodiment 1 of the present disclosure;
FIG. 3 is a schematic view of an image to be identified including a stamp graphic according to embodiment 1 of the present disclosure;
fig. 4 is a schematic view of a first image area and first image area information according to embodiment 1 of the present disclosure;
fig. 5A is a schematic diagram of a first neural network according to embodiment 1 of the present disclosure;
FIG. 5B is a schematic diagram of output data of a first neural network according to embodiment 1 of the present disclosure;
FIG. 6 is a schematic view of a denoised first image region according to embodiment 1 of the present disclosure;
FIG. 7 is a schematic view of the center point and outer edge radius of a stamp graphic according to embodiment 1 of the present disclosure;
FIG. 8 is a schematic view of a first image area and a sector area containing text according to embodiment 1 of the present disclosure;
fig. 9 and 10 are schematic views showing an image area after flattening the sector area shown in fig. 8, respectively;
fig. 11 is a schematic view of a stamp identifying apparatus according to embodiment 2 of the present disclosure; and
Fig. 12 is a schematic view of a stamp identifying apparatus according to embodiment 3 of the present disclosure.
Detailed Description
In order to better understand the technical solutions of the present disclosure, the following clearly and completely describes the technical solutions of the embodiments of the present disclosure with reference to the drawings in those embodiments. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this disclosure without inventive effort shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to the present embodiment, there is also provided a method embodiment of a stamp identifying method, it being noted that the steps shown in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that herein.
The method embodiments provided by the present embodiment may be performed in a mobile terminal, a computer terminal, a server, or a similar computing device. Fig. 1 shows a block diagram of the hardware architecture of a computing device for implementing the stamp identification method. As shown in Fig. 1, the computing device may include one or more processors (which may include, but are not limited to, processing means such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory for storing data, and transmission means for communication functions. In addition, the computing device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in Fig. 1 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the computing device may also include more or fewer components than shown in Fig. 1, or have a different configuration than shown in Fig. 1.
It should be noted that the one or more processors and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any other combination. Furthermore, the data processing circuit may be a single stand-alone processing module, or be incorporated in whole or in part into any of the other elements in the computing device. As referred to in the embodiments of the present disclosure, the data processing circuit acts as a kind of processor control (e.g., selection of a variable-resistance termination path connected to an interface).
The memory may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the seal identification method in the embodiments of the present disclosure, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the seal identification method of the application program. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory may further include memory remotely located with respect to the processor, which may be connected to the computing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the computing device. In one example, the transmission means includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices via the base station to communicate with the Internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device.
It should be noted here that, in some alternative embodiments, the computing device shown in Fig. 1 may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should also be noted that Fig. 1 is only one specific example and is intended to illustrate the types of components that may be present in the computing device described above.
In the above-described operating environment, according to a first aspect of the present embodiment, there is provided a stamp identifying method implemented by the computing device shown in fig. 1. Fig. 2 shows a schematic flow chart of the method, and referring to fig. 2, the method includes:
s202: acquiring an image to be identified, wherein the image to be identified comprises a seal pattern of a seal;
S204: determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing seal patterns in an image to be identified; and
S206: and identifying characters in the seal in the first image area according to the determined first image area information.
Fig. 3 exemplarily shows a specific example of an image to be recognized that includes a stamp graphic. As described in the background art, at present, when recognizing Chinese characters in a circular seal, the position of the circular seal is located first, and the Chinese characters in the circular seal are then read according to the determined position. The problem of locating a red circular seal is usually solved with a physics-based method, for example estimating the position of the circle center with a color model such as HSI or RGB and then estimating the radius from the boundary points of the circle to obtain the position. However, if the circular seal and the document text have the same color (e.g., when a document carrying a red circular seal is a black-and-white copy), such an algorithm is no longer applicable or becomes much less effective.
In view of this, the present disclosure proposes a stamp identification method, which may be performed, for example, by a computing device as shown in Fig. 1. First, the computing device obtains an image to be identified. Referring to Fig. 3, the image to be recognized includes the stamp pattern of a stamp; however, the stamp pattern has the same color as the document text, and both are black.
Then, the computing device determines first image area information by using a preset first neural network, wherein the first image area information is used to indicate a first image area containing the seal pattern in the image to be identified. For example, the area information may be the information of a rectangular area containing the stamp pattern (including the position and size of the rectangular area, etc.). Thus, by means of the deep-learning convolutional neural network, the computing device can accurately determine the area information of the image area containing the seal pattern even if the seal pattern has the same color as the document text.
The computing device may then perform text recognition in the region based on the determined region information, thereby recognizing text on the stamp.
In summary, the present disclosure replaces the method of estimating the center position of a circle using a color model such as HSI or RGB in the prior art with a convolutional neural network based on deep learning, so that even if the seal pattern is consistent with the text color in the document, the image area containing the seal pattern can be accurately located. Therefore, the technical problem that the position of the seal in the image cannot be accurately determined when the colors of the seal and the document fonts are consistent in the prior art is solved.
Optionally, the stamp is a circular stamp, and determining the first image area information by using the preset first neural network includes: determining, by using the first neural network, the area information of a rectangular area circumscribing the circular seal pattern of the circular seal in the image to be identified as the first image area information. Referring to Figs. 3 and 4, the technical scheme of the present disclosure can identify a circular stamp. Specifically, referring to Fig. 4, the technical solution of the present disclosure can identify, in the image to be identified, the area information of the circumscribed square of the circular stamp, namely (x, y, w, h), by using the first neural network. Here (x, y) may be used, for example, to indicate the coordinates of the upper-left vertex of the circumscribed square, w the width of the circumscribed square, and h its height. Area information defined in this way can be directly output by a convolutional neural network model, which makes it convenient to locate the region of the circular seal in the image by means of a convolutional neural network.
Specifically, the present disclosure locates the circumscribed square of a circular stamp through a specific deep convolutional neural network model. The specific deep convolutional neural network model is obtained by training on a plurality of sample images containing circular seals. Each training sample image contains one or more circular stamps that do not overlap one another; the stamps may be of any color, and the labeling data (label) of each image is one or more positions, each given as (x, y, w, h). In this way, the specific deep convolutional neural network model of the embodiment of the application can be trained.
Because the training data used for training the deep convolutional neural network model are sample images containing circular seals, and the labeling data are the positions of the circumscribed squares of the circular seals in the training samples, once the image to be identified is input into the deep convolutional neural network model, the model can output the positions of the circumscribed squares of one or more circular seals in the image to be identified.
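To illustrate the labeling format just described, the following is a minimal sketch (an assumption, not the patent's actual training code) that wraps (x, y, w, h) annotations in the (x1, y1, x2, y2) box form expected by a typical detection framework such as torchvision; the JSON layout and file names are hypothetical.

```python
# Hypothetical sketch: wrap (x, y, w, h) seal annotations as a torchvision-style
# detection dataset. File layout and annotation format are assumptions.
import json
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor

class SealDataset(Dataset):
    def __init__(self, image_paths, annotation_path):
        self.image_paths = image_paths
        # annotations: {"img_001.png": [[x, y, w, h], ...], ...}
        with open(annotation_path) as f:
            self.annotations = json.load(f)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        path = self.image_paths[idx]
        image = to_tensor(Image.open(path).convert("RGB"))
        boxes = []
        for x, y, w, h in self.annotations[path.split("/")[-1]]:
            boxes.append([x, y, x + w, y + h])      # (x1, y1, x2, y2)
        target = {
            "boxes": torch.tensor(boxes, dtype=torch.float32),
            # a single foreground class: "circular seal"
            "labels": torch.ones(len(boxes), dtype=torch.int64),
        }
        return image, target
```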
Optionally, determining the first image area information by using the preset first neural network includes: determining the first image area information by using a preset Faster R-CNN convolutional neural network.
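For orientation only, a hedged sketch of how such a detector might be invoked with an off-the-shelf Faster R-CNN implementation (torchvision); the checkpoint file name, class count and score threshold are assumptions and not part of the patent, which trains its own model on seal images.

```python
# Hedged sketch: using torchvision's Faster R-CNN as a stand-in for the
# patent's detector. "seal_detector.pth" is a hypothetical fine-tuned checkpoint.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=2)          # background + circular seal
model.load_state_dict(torch.load("seal_detector.pth"))  # hypothetical weights
model.eval()

def detect_seals(image_tensor, score_threshold=0.7):
    """Return (x, y, w, h) boxes for detected circular seals."""
    with torch.no_grad():
        output = model([image_tensor])[0]
    boxes = []
    for (x1, y1, x2, y2), score in zip(output["boxes"], output["scores"]):
        if score >= score_threshold:
            boxes.append((float(x1), float(y1), float(x2 - x1), float(y2 - y1)))
    return boxes
```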
Specifically, a reasonable structure of the deep convolutional neural network model of this embodiment includes convolution layers, pooling layers and fully connected layers. A large number of experiments show that, when the following network structure is adopted (the Faster R-CNN network structure shown in Fig. 5A), detection of the circular seal position with the deep convolutional neural network model achieves better accuracy.
In the process of identifying the circular seal, the main functions of each layer in the network structure are as follows:
1) A convolutional neural network layer (CNN layer), which uses convolution layers + ReLU activation functions + pooling layers to extract a feature map from the input image; this feature map is used by the subsequent candidate region network layer and the fully connected layers. Specifically, for an image matrix of size 3×m×n (e.g., an RGB image matrix), the convolutional neural network layer convolves the three RGB channels. Each convolution layer contains a plurality of 3×3 convolution kernels, and the image matrix is zero-padded before the convolution operation to ensure that the input and output matrices have the same size. Each pooling layer halves the width and height of the feature map, i.e., changes an m×n map into an (m/2)×(n/2) map, so the 4 pooling layers eventually reduce the size of the feature map to (m/16)×(n/16). The last convolution layer may have, for example, 256 convolution kernels, so 256 feature maps of size (m/16)×(n/16) are output (a shape sketch follows this list).
2) A candidate region network layer (Region Proposal Network layer), which is mainly used to generate candidate regions. For example, the candidate region network layer first convolves the feature map with a convolution layer having 256 3×3 convolution kernels. It then uses a convolution layer of 36 1×1 convolution kernels (with stride 1) to generate 36 feature maps of size (m/16)×(n/16), representing the positions and sizes of 9×(m/16)×(n/16) candidate regions, and a convolution layer of 18 1×1 convolution kernels (with stride 1) to generate 18 feature maps of size (m/16)×(n/16), representing the positive/negative classification of those 9×(m/16)×(n/16) candidate regions. The candidate region network layer then selects a predetermined number of candidate regions (for example, 300 candidate regions) from the 9×(m/16)×(n/16) candidate regions according to preset steps, and maps the 300 candidate regions onto the 256×(m/16)×(n/16) feature map output by the convolutional neural network layer (CNN layer).
3) A RoI pooling layer, which uses the candidate regions generated by the candidate region network layer together with the feature map obtained from the CNN layer to produce candidate region feature maps of fixed size (for example, converting an (m/16)×(n/16) candidate region into a 7×7 feature map); these then enter the next step, where object recognition and localization are performed through fully connected operations.
4) A classifier layer, which performs a fully connected operation on the fixed-size feature maps produced by the RoI pooling layer, performs category classification with softmax, and at the same time performs bounding-box regression with a Smooth L1 loss to obtain the precise position of the object. Specifically, Fig. 5B shows a schematic diagram of the form of the output data of the Faster R-CNN neural network. Referring to Fig. 5B, fully connected layer 2 outputs a multi-dimensional vector in which every two values correspond to one candidate region and indicate the score that the candidate region contains the stamp pattern and the score that it does not. For example, a1 indicates the score that candidate region 1 contains the seal pattern, b1 the score that candidate region 1 does not, a2 the score that candidate region 2 contains the seal pattern, and b2 the score that candidate region 2 does not. Accordingly, the output of each softmax classifier gives, for each candidate region, the probability that it contains the stamp graphic and the probability that it does not. For example, c1 indicates the probability that candidate region 1 contains the seal pattern, d1 the probability that it does not, c2 the probability that candidate region 2 contains the seal pattern, d2 the probability that it does not, and so on. Furthermore, the output of the bounding-box regression in fully connected layer 3 is a multi-dimensional vector in which every four values correspond to one candidate region; for example, (x1, y1, w1, h1) represents the position and size of candidate region 1, and (x2, y2, w2, h2) the position and size of candidate region 2. In this way, the candidate regions matching the stamp pattern, together with their positions and sizes, can be determined.
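As an illustration of the tensor shapes described in items 1) and 2) above, here is a minimal sketch; the intermediate layer widths are assumptions, since the patent fixes only the 256/36/18 channel counts and the 9-anchor convention.

```python
# Minimal shape sketch for the CNN backbone and the two RPN heads described
# above. Four 2x2 poolings reduce an m x n input to (m/16) x (n/16); the RPN
# then predicts 9 anchors per position (36 = 9x4 box values, 18 = 9x2 scores).
import torch
import torch.nn as nn

def stage(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),  # zero-padding keeps H and W
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),                         # halves H and W
    )

backbone = nn.Sequential(stage(3, 64), stage(64, 128),
                         stage(128, 256), stage(256, 256))
rpn_shared = nn.Sequential(nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True))
rpn_bbox = nn.Conv2d(256, 36, kernel_size=1, stride=1)   # 9 anchors x (x, y, w, h)
rpn_cls = nn.Conv2d(256, 18, kernel_size=1, stride=1)    # 9 anchors x (pos, neg)

x = torch.randn(1, 3, 640, 480)                 # 3 x m x n image
fmap = backbone(x)                              # 1 x 256 x (m/16) x (n/16)
h = rpn_shared(fmap)
print(fmap.shape, rpn_bbox(h).shape, rpn_cls(h).shape)
# torch.Size([1, 256, 40, 30]) torch.Size([1, 36, 40, 30]) torch.Size([1, 18, 40, 30])
```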
Optionally, after determining the first image area information by using the preset first neural network, the method further includes: performing, in the first image area, denoising by utilizing the color difference between the seal graphic and the background characters. Specifically, Fig. 6 shows a schematic diagram of the first image region after the noise-reduction processing. In an actual copy, although the stamp graphic in the copy is not red but black, its shade still differs from that of the text in the copy (see Figs. 3 and 4). The computing device may therefore exploit the color difference between the stamp graphic and the background characters to remove noise (e.g., text characters, etc.) from the first image region. Specifically, the gray-level distribution in the first image area may be counted with a histogram, a gray threshold lying between the gray values of the background characters and of the stamp graphic is then determined from that distribution, and pixels whose gray value falls below the threshold are set to white, thereby removing the noise of the background characters. Of course, other denoising methods may also be used, for example BM3D (block-matching and 3D filtering), which are not enumerated here.
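A hedged sketch of the histogram-based variant described above follows; the exact threshold-selection rule (first valley of the smoothed histogram) is an assumption, since the patent only requires a threshold lying between the gray levels of the background characters and the stamp graphic.

```python
# Illustrative sketch of the histogram-based denoising step. The valley-picking
# rule is an assumption, not the patent's prescribed method.
import cv2
import numpy as np

def denoise_seal_region(region_bgr):
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()

    # Assumed rule: pick the darkest valley of the smoothed histogram as the
    # threshold separating (darker) background text from the (lighter) stamp.
    smooth = np.convolve(hist, np.ones(9) / 9, mode="same")
    valleys = [i for i in range(1, 255)
               if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]]
    threshold = valleys[0] if valleys else 128

    cleaned = gray.copy()
    cleaned[gray < threshold] = 255   # push darker background characters to white
    return cleaned
```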
Optionally, the operation of identifying the text in the seal pattern in the first image area according to the determined first image area information includes: determining the center position of the circular seal pattern and the radius of the outer edge of the circular seal pattern according to the first image area information; characters in the stamp graphic are identified in a second image area defined by the determined center position and radius.
In particular, referring to Fig. 7, the computing device may determine the center position (x, y) of the circular stamp pattern and the radius r of its outer edge based on the determined circumscribed square. A circular image area bounded by the outer edge of the circular stamp can then be determined from the center position (x, y) and the radius r, and the characters in the seal pattern can be identified more accurately within this circular image area.
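As a simple illustration of this step, the center and radius can be derived from the detected circumscribed square (x, y, w, h); averaging width and height is a small robustness assumption for slightly non-square detections, not something stated in the patent.

```python
# Sketch: derive circle center and outer-edge radius from the circumscribed
# square (x, y, w, h). For an exact circumscribed square, w == h.
def circle_from_box(x, y, w, h):
    cx = x + w / 2.0
    cy = y + h / 2.0
    r = (w + h) / 4.0          # half of the average side length
    return cx, cy, r

cx, cy, r = circle_from_box(120, 80, 200, 198)
print(cx, cy, r)               # 220.0, 179.0, 99.5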
Further optionally, the operation of identifying text in the stamp graphic within an area defined by the determined center location and radius further comprises: determining a sector image area containing characters in the seal pattern in a second image area by using a preset second neural network; converting the sector image area into an image area that can be identified using an OCR model; and identifying the converted image area by utilizing the OCR model, and determining the characters in the seal graph.
In particular, referring to FIG. 8, the computing device utilizes a second neural network to determine fan-shaped image areas 101 and 102 within second image area 100 that contain text in the stamp graphic.
Then, referring to Figs. 9 and 10, the computing device geometrically flattens the fan-shaped image areas 101 and 102, thereby generating the image areas shown in Figs. 9 and 10, which can be recognized using an OCR model. The computing device then uses the OCR model to perform text recognition on the image areas shown in Figs. 9 and 10 to determine the text in the stamp graphic. In this way, text information distributed in a fan shape within the seal can be recognized with existing OCR technology.
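The patent does not specify the flattening transform; one common way to realize such geometric flattening of ring-shaped text, sketched here purely as an assumption, is a polar unwarp around the seal center (OpenCV's warpPolar).

```python
# Hedged sketch of the "flattening" step via a polar unwarp. The choice of
# warpPolar and the output orientation are assumptions, not the patent's method.
import cv2
import numpy as np

def flatten_ring(seal_gray, cx, cy, r):
    """Unwrap the circular seal around (cx, cy) into a rectangular strip."""
    out_w = int(r)                     # columns sample the radius
    out_h = int(2 * np.pi * r)         # rows sample the angle (full circle)
    strip = cv2.warpPolar(seal_gray, (out_w, out_h), (cx, cy), r,
                          cv2.WARP_POLAR_LINEAR)
    # rows correspond to angle; rotate so the unrolled text runs left-to-right
    return cv2.rotate(strip, cv2.ROTATE_90_COUNTERCLOCKWISE)
```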
Optionally, the operation of locating, within the defined area, the sector image area containing the text in the stamp by using the preset second neural network includes: determining, within the defined area, a sector image area containing the characters in the seal by using a preset PSENet neural network.
In addition, it should be noted that:
Advantages of PSENet:
The PSENet neural network can locate text of arbitrary shape, and the text in a circular seal is one kind of arbitrarily shaped text.
In contrast, existing regression-based text detection methods have difficulty locating arbitrarily shaped text, since such text is hard to enclose fully in rectangles, while most pixel-based segmentation detectors may fail to separate text instances that lie very close to each other.
The network structure and the processing flow of PSENet in this embodiment are approximately as follows:
First, its backbone network is ResNet-34 (since this is a deep convolutional network with a fixed structure, it is not described in further detail here).
Next is an operation similar to an FPN (feature pyramid network), which generates a series of FPN feature maps.
Third, the post-processing flow of the algorithm (a sketch of this fusion step follows the list):
3-1: the FPN feature maps are split into the first map and the remaining maps, and the remaining maps are up-sampled to the scale of the first feature map;
3-2: the feature maps of the different scales are added;
3-3: a 1×1 convolution is applied and a plurality of prediction results are output.
Further, referring to fig. 1, according to a second aspect of the present embodiment, there is provided a storage medium. The storage medium includes a stored program, wherein the method of any one of the above is performed by a processor when the program is run.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Example 2
Fig. 11 shows a stamp identifying apparatus 1100 according to the first aspect of the present embodiment, the apparatus 1100 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 11, the apparatus 1100 includes: an image acquisition module 1110, configured to acquire an image to be identified, where the image to be identified includes a seal pattern; the first image area determining module 1120 is configured to determine first image area information by using a preset first neural network, where the first image area information is used to indicate a first image area including a seal pattern in an image to be identified; and a text recognition module 1130 for recognizing text in the stamp graphic in the first image area based on the determined first image area information.
Optionally, the seal pattern is a circular seal pattern, and the first image area determining module 1120 includes: and the first image area determining submodule is used for determining area information of a square area circumscribed with the circular seal graph in the image to be identified by using the first neural network as first image area information.
Optionally, the first image region determining module 1120 includes: a first image area determining submodule, configured to determine the first image area information by using a preset Faster R-CNN convolutional neural network.
Optionally, the apparatus further comprises: and the denoising module is used for denoising in the first image area by utilizing the color difference between the seal graph and the background character.
Optionally, the text recognition module includes: the circular feature determining submodule is used for determining the center position of the circular seal pattern and the radius of the outer edge of the circular seal pattern according to the first image area information; and a character recognition sub-module for recognizing characters in the seal pattern in a second image area defined by the determined center position and radius.
Optionally, the text recognition sub-module further includes: a sector image area determining unit, configured to determine a sector image area containing characters in the seal pattern in the second image area by using a preset second neural network; an image area conversion module for converting the sector image area into an image area that can be recognized using an OCR model; and the character recognition unit is used for recognizing the converted image area by utilizing the OCR model and determining characters in the seal.
Optionally, the fan-shaped image area determining unit includes: a sector image area determining subunit, configured to determine, within the defined area, a sector image area containing the characters in the seal by using a preset PSENet neural network.
In summary, the present disclosure replaces the method of estimating the center position of a circle using a color model such as HSI or RGB in the prior art with a convolutional neural network based on deep learning, so that even if the seal pattern is consistent with the text color in the document, the image area containing the seal pattern can be accurately located. Therefore, the technical problem that the position of the seal in the image cannot be accurately determined when the colors of the seal and the document fonts are consistent in the prior art is solved.
Example 3
Fig. 12 shows a stamp identifying apparatus 1200 according to the present embodiment, the apparatus 1200 corresponding to the method according to the first aspect of embodiment 1. Referring to fig. 12, the apparatus 1200 includes: a processor 1210; and a memory 1220, coupled to the processor 1210, for providing instructions to the processor 1210 for processing the steps of: acquiring an image to be identified, wherein the image to be identified comprises a seal pattern; determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing seal patterns in an image to be identified; and identifying characters in the seal pattern in the first image area according to the determined first image area information.
Optionally, the seal pattern is a circular seal pattern, and the operation of determining the first image area information by using a preset first neural network includes: and determining the area information of a square area circumscribed with the circular seal graph in the image to be identified as first image area information by using the first neural network.
Optionally, determining the first image area information by using the preset first neural network includes: determining the first image area information by using a preset Faster R-CNN convolutional neural network.
Optionally, after determining the first image area information by using a preset first neural network, the method further includes: and in the first image area, denoising by utilizing the color difference between the seal graph and the background character.
Optionally, the operation of identifying the text in the seal pattern in the first image area according to the determined first image area information includes: determining the center position of the circular seal pattern and the radius of the outer edge of the circular seal pattern according to the first image area information; and identifying text in the stamp graphic within a second image area defined by the determined center location and radius.
Optionally, the operation of identifying text in the stamp graphic in a second image area defined by the determined center position and radius further comprises: determining a sector image area containing characters in the seal pattern in a second image area by using a preset second neural network; converting the sector image area into an image area that can be identified using an OCR model; and identifying the converted image area by utilizing the OCR model, and determining the characters in the seal.
Optionally, the operation of locating, within the defined area, the sector image area containing the text in the stamp by using the preset second neural network includes: determining, within the defined area, a sector image area containing the characters in the seal by using a preset PSENet neural network.
In summary, the present disclosure replaces the method of estimating the center position of a circle using a color model such as HSI or RGB in the prior art with a convolutional neural network based on deep learning, so that even if the seal pattern is consistent with the text color in the document, the image area containing the seal pattern can be accurately located. Therefore, the technical problem that the position of the seal in the image cannot be accurately determined when the colors of the seal and the document fonts are consistent in the prior art is solved.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division into units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, units or modules, and may be electrical or of other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art, in whole or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (7)

1. A method of stamp identification, comprising:
acquiring an image to be identified, wherein the image to be identified comprises a seal pattern;
Determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing the seal pattern in the image to be identified;
The seal pattern is a circular seal pattern, and the operation of determining the first image region information by utilizing a preset first neural network comprises the following steps:
Determining area information of a square area circumscribed with the circular seal graph in the image to be identified as the first image area information by utilizing the first neural network;
The operation of determining the first image area information by using a preset first neural network comprises the following steps: determining the first image region information by using a preset Faster R-CNN convolutional neural network; and
Identifying characters in the seal graph in the first image area according to the determined first image area information;
The operation of identifying the characters in the seal graph in the first image area according to the determined first image area information comprises the following steps:
determining the center position of the circular seal pattern and the radius of the outer edge of the circular seal pattern according to the first image area information; and
And identifying characters in the seal graph in a second image area defined by the determined center position and the radius.
2. The method of claim 1, further comprising, after determining the first image region information using a preset first neural network: and in the first image area, denoising by utilizing the color difference between the seal graph and the background character.
3. The method of claim 2, wherein the operation of identifying text in the stamp graphic within a second image area defined by the determined center location and radius, further comprises:
Determining a sector image area containing characters in the seal graph in the second image area by using a preset second neural network;
converting the sector image area into an image area which can be identified by an OCR model; and
And identifying the converted image area by utilizing the OCR model, and determining the characters in the seal.
4. A method according to claim 3, wherein positioning a sector image area containing text in the stamp within the defined area using a second neural network set in advance comprises: determining a sector image area containing the characters in the seal in the defined area by utilizing a preset PSENet neural network.
5. A storage medium comprising a stored program, wherein the method of any one of claims 1 to 4 is performed by a processor when the program is run.
6. A seal recognition device, comprising:
The image acquisition module is used for acquiring an image to be identified, wherein the image to be identified comprises a seal pattern;
The first image area determining module is used for determining first image area information by utilizing a preset first neural network, wherein the first image area information is used for indicating a first image area containing the seal pattern in the image to be identified;
The seal pattern is a circular seal pattern, and the operation of determining the first image region information by utilizing a preset first neural network comprises the following steps:
Determining area information of a square area circumscribed with the circular seal graph in the image to be identified as the first image area information by utilizing the first neural network;
The operation of determining the first image area information by using a preset first neural network comprises the following steps: determining the first image region information by using a preset Faster R-CNN convolutional neural network; and
The character recognition module is used for recognizing characters in the seal graph in the first image area according to the determined first image area information;
The operation of identifying the characters in the seal graph in the first image area according to the determined first image area information comprises the following steps:
determining the center position of the circular seal pattern and the radius of the outer edge of the circular seal pattern according to the first image area information; and
And identifying characters in the seal graph in a second image area defined by the determined center position and the radius.
7. A seal recognition device, comprising:
a processor; and
A memory, coupled to the processor, for providing instructions to the processor to process the following processing steps:
acquiring an image to be identified, wherein the image to be identified comprises a seal pattern;
Determining first image area information by using a preset first neural network, wherein the first image area information is used for indicating a first image area containing the seal pattern in the image to be identified;
The seal pattern is a circular seal pattern, and the operation of determining the first image region information by utilizing a preset first neural network comprises the following steps:
Determining area information of a square area circumscribed with the circular seal graph in the image to be identified as the first image area information by utilizing the first neural network;
The operation of determining the first image area information by using a preset first neural network comprises the following steps: determining the first image region information by using a preset Faster R-CNN convolutional neural network; and
Identifying characters in the seal graph in the first image area according to the determined first image area information;
The operation of identifying the characters in the seal graph in the first image area according to the determined first image area information comprises the following steps:
determining the center position of the circular seal pattern and the radius of the outer edge of the circular seal pattern according to the first image area information; and
And identifying characters in the seal graph in a second image area defined by the determined center position and the radius.
CN201911378424.0A 2019-12-27 2019-12-27 Seal identification method, device and storage medium Active CN111178355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911378424.0A CN111178355B (en) 2019-12-27 2019-12-27 Seal identification method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911378424.0A CN111178355B (en) 2019-12-27 2019-12-27 Seal identification method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111178355A CN111178355A (en) 2020-05-19
CN111178355B true CN111178355B (en) 2024-05-10

Family

ID=70654086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911378424.0A Active CN111178355B (en) 2019-12-27 2019-12-27 Seal identification method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111178355B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767911B (en) * 2020-06-22 2024-06-28 平安科技(深圳)有限公司 Method, device and medium for detecting and identifying seal characters facing complex environment
CN111950356B (en) * 2020-06-30 2024-04-19 深圳市雄帝科技股份有限公司 Seal text positioning method and device and electronic equipment
CN112507946A (en) * 2020-12-18 2021-03-16 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN112784835B (en) * 2021-01-21 2024-04-12 恒安嘉新(北京)科技股份公司 Method and device for identifying authenticity of circular seal, electronic equipment and storage medium
CN113033325A (en) * 2021-03-04 2021-06-25 杭州睿胜软件有限公司 Image processing method and device, intelligent invoice recognition equipment and storage medium
CN112766275B (en) * 2021-04-08 2021-09-10 金蝶软件(中国)有限公司 Seal character recognition method and device, computer equipment and storage medium
CN113378821B (en) * 2021-07-02 2023-02-28 人民中科(济南)智能技术有限公司 Official seal character recognition method, system, equipment and storage medium based on deep learning


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3022236U (en) * 1995-08-31 1996-03-22 耀元電子股▲ふん▼有限公司 A seal matching system similar to a neural network
JPH11110411A (en) * 1997-10-03 1999-04-23 Ebaretto Sogo Kenkyusho:Kk System for making document including seal impression or signature into data base
WO2005064524A1 (en) * 2003-12-30 2005-07-14 Ruizhen Liu Digital seal system
CN107944452A (en) * 2017-12-12 2018-04-20 深圳市创业印章实业有限公司 A kind of circular stamp character recognition method
WO2019174130A1 (en) * 2018-03-14 2019-09-19 平安科技(深圳)有限公司 Bill recognition method, server, and computer readable storage medium
CN109145723A (en) * 2018-07-09 2019-01-04 长江大学 A kind of seal recognition methods, system, terminal installation and storage medium
CN109376658A (en) * 2018-10-26 2019-02-22 信雅达系统工程股份有限公司 A kind of OCR method based on deep learning
CN109657666A (en) * 2018-11-23 2019-04-19 汉王科技股份有限公司 A kind of circular stamp recognition methods and device
CN110443250A (en) * 2019-07-31 2019-11-12 天津车之家数据信息技术有限公司 A kind of classification recognition methods of contract seal, device and calculate equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of convolutional neural networks in seal number recognition; Yang Zhe (阳哲); Modern Computer (Professional Edition), No. 04; full text *

Also Published As

Publication number Publication date
CN111178355A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111178355B (en) Seal identification method, device and storage medium
US10410353B2 (en) Multi-label semantic boundary detection system
CN111160352B (en) Workpiece metal surface character recognition method and system based on image segmentation
CN110659647B (en) Seal image identification method and device, intelligent invoice identification equipment and storage medium
US10817741B2 (en) Word segmentation system, method and device
CN107944450B (en) License plate recognition method and device
EP3806064A1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN109753953B (en) Method and device for positioning text in image, electronic equipment and storage medium
US11823358B2 (en) Handwritten content removing method and device and storage medium
CN111680690B (en) Character recognition method and device
CN109409377B (en) Method and device for detecting characters in image
CN111259878A (en) Method and equipment for detecting text
CN112418216A (en) Method for detecting characters in complex natural scene image
CN112699775A (en) Certificate identification method, device and equipment based on deep learning and storage medium
CN111223065A (en) Image correction method, irregular text recognition device, storage medium and equipment
CN110443235B (en) Intelligent paper test paper total score identification method and system
KR101842535B1 (en) Method for the optical detection of symbols
CN114444565B (en) Image tampering detection method, terminal equipment and storage medium
CN114170468B (en) Text recognition method, storage medium and computer terminal
WO2024002014A1 (en) Traffic marking identification method and apparatus, computer device and storage medium
CN111274863A (en) Text prediction method based on text peak probability density
CN111178153A (en) Traffic sign detection method and system
CN111199240A (en) Training method of bank card identification model, and bank card identification method and device
CN116311290A (en) Handwriting and printing text detection method and device based on deep learning
CN110414497A (en) Method, device, server and storage medium for electronizing object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant