CN117523586A - Check seal verification method and device, electronic equipment and medium - Google Patents


Info

Publication number
CN117523586A
Authority
CN
China
Prior art keywords
check
image
seal
feature map
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311535772.0A
Other languages
Chinese (zh)
Inventor
汤慧新
张杨锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202311535772.0A priority Critical patent/CN117523586A/en
Publication of CN117523586A publication Critical patent/CN117523586A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/22 Character recognition characterised by the type of writing
    • G06V30/224 Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/191 Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19147 Obtaining sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Character Input (AREA)

Abstract

A check seal verification method and apparatus, an electronic device, and a medium are provided, which can be applied to the technical fields of big data and artificial intelligence. The method comprises the following steps: collecting image information of a check; performing first image segmentation on the image information by using a foreground extraction model to extract the foreground and remove the background pattern, so as to obtain a check foreground image; inputting the check foreground image into a segmentation model for second image segmentation to separate the seal and text regions, so as to obtain a check seal area and a check text area; performing text recognition on the check text area, and acquiring a target seal picture from a check information database based on the text recognition result; inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result; and obtaining the verification result of the check seal based on the authenticity verification result.

Description

Check seal verification method and device, electronic equipment and medium
Technical Field
The present invention relates to the technical fields of big data and artificial intelligence, and in particular to a check seal verification method, a check seal verification apparatus, an electronic device, and a medium.
Background
As a payment instrument, the check has long been one of the important transaction forms in the financial industry. However, the legitimacy and authenticity of checks have long been a challenge for banks and financial institutions: checks may be counterfeited or tampered with, which can result in financial losses, damage the reputation of the financial institution, and erode customer trust. Currently, one of the main ways to forge a check is to forge its seal; the seal is an important component of the check and is used to verify its authenticity. Therefore, seal authenticity identification is critical to the security and legitimacy of checks.
However, current check seal verification still depends primarily on manual handling and expertise. This approach has many limitations: it is time-consuming, error-prone, and unsuitable for large-scale processing. In addition, counterfeiters can use advanced image processing techniques, such as blurring seal boundaries, to make false seals that are harder to verify, so seal verification methods must continually improve to keep pace with evolving counterfeiting techniques.
Disclosure of Invention
In view of the foregoing, according to a first aspect of the present invention, there is provided a check seal verification method, the method comprising: collecting image information of a check; performing first image segmentation on the image information by utilizing a foreground extraction model to extract the foreground and remove the background pattern, so as to obtain a check foreground image, wherein the foreground extraction model is trained by utilizing a U2-Net model and comprises M convolution layers and N pooling layers which are alternately arranged, where M and N are positive integers; inputting the check foreground image into a segmentation model for second image segmentation to separate the seal and text regions, so as to obtain a check seal area and a check text area; performing text recognition on the check text area, and acquiring a target seal picture from a check information database based on the text recognition result, wherein the check information database comprises check identification information stored in advance; inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result; and obtaining the verification result of the check seal based on the authenticity verification result.
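The claimed steps can be sketched as a simple processing pipeline. The sketch below is purely structural: each stage is a hypothetical stub standing in for the trained models described above (foreground extraction, segmentation, database lookup, and seal identification), not an implementation of them.

```python
import numpy as np

def extract_foreground(image):
    """Stub for the U2-Net-based foreground extraction model (first segmentation)."""
    return image  # a trained model would remove the background pattern here

def segment_seal_and_text(foreground):
    """Stub for the second segmentation: split into seal area and text area."""
    h, w = foreground.shape[:2]
    return foreground[:, : w // 2], foreground[:, w // 2 :]

def lookup_target_seal(text_area):
    """Stub: OCR the text area and fetch the registered seal from the database."""
    return np.zeros_like(text_area)

def authenticate(seal_area, target_seal):
    """Stub for the seal identification model: probability the seal is genuine."""
    return 0.5

def verify_check(image):
    foreground = extract_foreground(image)
    seal_area, text_area = segment_seal_and_text(foreground)
    target_seal = lookup_target_seal(text_area)
    probability = authenticate(seal_area, target_seal)
    return probability >= 0.5  # final check seal verification result

result = verify_check(np.zeros((16, 32)))
```

Each stub would be replaced by the corresponding trained model or database client in a real deployment.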
According to some exemplary embodiments, the performing of first image segmentation on the image information by using the foreground extraction model to extract the foreground and remove the background pattern specifically includes: processing the image information using the M convolution layers and the N pooling layers to obtain an image feature map, wherein the convolution layers are used to extract image features and the pooling layers are used to reduce the spatial resolution of the image information; performing an interpolation operation on the image feature map to generate an interpolated image feature map; fusing the image feature map and the interpolated image feature map to obtain a fused image feature map; outputting a binary segmentation image based on the fused image feature map, wherein the binary segmentation image comprises foreground pixels and background pixels; and generating the check foreground image based on the foreground pixels of the binary segmentation image.
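These operations can be sketched in numpy. The 3x3 averaging kernel, 2x2 max pooling, nearest-neighbour interpolation, additive fusion, and mean threshold below are all illustrative assumptions; the actual model is a trained U2-Net, not these hand-written operators.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (feature extraction by a convolution layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, k=2):
    """k x k max pooling (reduces spatial resolution)."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour interpolation back to a target shape."""
    rows = (np.arange(shape[0]) * img.shape[0] // shape[0]).clip(0, img.shape[0] - 1)
    cols = (np.arange(shape[1]) * img.shape[1] // shape[1]).clip(0, img.shape[1] - 1)
    return img[np.ix_(rows, cols)]

image = np.random.rand(16, 16)
features = max_pool(conv2d(image, np.ones((3, 3)) / 9.0))  # conv then pool
interp = upsample(features, image.shape)                   # interpolated feature map
fused = image + interp                                     # additive fusion
mask = (fused > fused.mean()).astype(np.uint8)             # binary segmentation image
foreground = image * mask                                  # check foreground image
```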
According to some exemplary embodiments, the fusing the image feature map and the interpolated image feature map specifically includes: adding the corresponding channels of the image feature map and the interpolated image feature map; or multiplying the image feature map and the corresponding channel of the interpolated image feature map; or splicing the image characteristic diagram and the corresponding channel of the interpolated image characteristic diagram.
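The three fusion options can be expressed in a few lines of numpy; the channels-first shapes (C x H x W) are illustrative assumptions:

```python
import numpy as np

feat = np.random.rand(4, 8, 8)    # image feature map
interp = np.random.rand(4, 8, 8)  # interpolated image feature map

fused_add = feat + interp                           # per-channel addition
fused_mul = feat * interp                           # per-channel multiplication
fused_cat = np.concatenate([feat, interp], axis=0)  # channel concatenation ("splicing")
```

Addition and multiplication keep the channel count unchanged, while concatenation doubles it, so the choice affects the shape of downstream layers.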
According to some exemplary embodiments, after the fusing of the image feature map and the interpolated image feature map to obtain a fused image feature map, the method further includes: performing activation function processing and normalization processing on the fused image feature map to obtain an optimized fusion feature map; the step of outputting a binary segmentation image based on the fused image feature map is then expressed as: outputting a binary segmentation image based on the optimized fusion feature map.
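A sketch of this optimization step, assuming a sigmoid activation and min-max normalization (the patent does not name specific functions, so both are illustrative choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

fused = np.random.randn(8, 8)  # fused image feature map
activated = sigmoid(fused)     # activation function processing
# min-max normalization to [0, 1] (epsilon guards against a constant map)
normalized = (activated - activated.min()) / (activated.max() - activated.min() + 1e-8)
binary = (normalized >= 0.5).astype(np.uint8)  # binary segmentation image
```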
According to some exemplary embodiments, the segmentation model is trained using a U-Net model or a DeepLab model.
According to some exemplary embodiments, the seal identification model is trained using a deep neural network; and the inputting of the check seal area and the target seal picture into the pre-constructed seal identification model for seal authenticity verification and the outputting of an authenticity verification result specifically include: extracting seal identification features from the check seal area and the target seal picture and forming a feature map, wherein the seal identification features comprise edge, texture, and/or shape features corresponding to the check seal area and the target seal picture; and performing nonlinear transformation and combination on the feature map through the multi-layer convolution and fully connected layers of the seal identification model, and outputting an authenticity verification probability value.
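The forward pass described here can be illustrated with random stand-in weights. This is not a trained model: the layer sizes, ReLU nonlinearity, and sigmoid output are arbitrary assumptions used only to show the transformation-and-combination structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

feature_map = rng.standard_normal((4, 4))  # features from seal area + target seal
w_hidden = rng.standard_normal((16, 8))    # stand-in hidden-layer weights
w_out = rng.standard_normal(8)             # stand-in fully connected output weights

hidden = relu(feature_map.reshape(-1) @ w_hidden)  # nonlinear transformation
probability = sigmoid(hidden @ w_out)              # authenticity verification probability
```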
According to some exemplary embodiments, a training dataset of the stamp authentication model is formed based on check stamp images acquired from a plurality of data sources, including in particular: acquiring check seal images from a plurality of data sources; labeling the check seal image, wherein the labeling comprises labeling the position and the authenticity label of each seal, and obtaining the check seal image with the label; randomly rotating, translating, zooming and turning the check seal image with the label to obtain supplementary image data; and forming a training dataset based on the tagged check seal image and supplemental image data.
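The augmentation step can be sketched with simple numpy operations, using 90-degree rotations, integer shifts, and nearest-neighbour scaling as stand-ins for the general rotations, translations, scalings, and flips the patent describes:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    out = np.rot90(img, k=rng.integers(0, 4))              # random rotation
    out = np.roll(out, shift=rng.integers(-2, 3), axis=1)  # random translation
    if rng.random() < 0.5:
        out = out[::-1]                                    # random flip
    scale = rng.choice([1, 2])                             # random scaling factor
    out = np.repeat(np.repeat(out, scale, axis=0), scale, axis=1)
    return out

seal = np.arange(64, dtype=np.uint8).reshape(8, 8)   # labeled check seal image
supplemental = [augment(seal) for _ in range(4)]     # supplemental image data
```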
According to some exemplary embodiments, the inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result, specifically includes: calculating the Euclidean distance between the check seal area and the target seal picture; and outputting an authenticity verification result based on a comparison result of the Euclidean distance and a preset authenticity verification threshold value.
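This distance-based variant can be sketched as follows; the normalization step and the 0.5 threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def euclidean_verify(seal_area, target_seal, threshold=0.5):
    a = seal_area.astype(float).ravel()
    b = target_seal.astype(float).ravel()
    a /= (np.linalg.norm(a) + 1e-8)      # normalize so intensity scale doesn't dominate
    b /= (np.linalg.norm(b) + 1e-8)
    distance = np.linalg.norm(a - b)     # Euclidean distance between the two seals
    return distance <= threshold         # True = passes the authenticity check

genuine = euclidean_verify(np.ones((8, 8)), np.ones((8, 8)))  # identical seals
forged = euclidean_verify(np.ones((8, 8)), np.eye(8))         # dissimilar seals
```

In practice the distance would more likely be computed between learned embeddings of the two images rather than raw pixels.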
According to some exemplary embodiments, before the first image segmentation of the image information using the foreground extraction model to extract the foreground and remove the background pattern, the method further comprises: judging whether the image information meets preset resolution and sharpness requirements; and re-acquiring the image information of the check in response to the image information not meeting the preset resolution and sharpness requirements.
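One common way to implement such a pre-check is a minimum-resolution test plus a variance-of-Laplacian focus measure; the patent does not specify a method, so the approach and thresholds below are illustrative assumptions:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a 4-neighbour Laplacian response."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def image_ok(img, min_h=32, min_w=32, min_sharpness=0.01):
    h, w = img.shape[:2]
    if h < min_h or w < min_w:
        return False                      # resolution requirement not met
    return laplacian_variance(img) >= min_sharpness

sharp = np.tile(np.array([0.0, 1.0]), (64, 32))  # high-contrast test pattern
blurry = np.full((64, 64), 0.5)                  # flat image with no detail
```

Images that fail the check would trigger re-acquisition rather than being passed to the foreground extraction model.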
According to some exemplary embodiments, the performing text recognition on the check text area specifically includes: identifying characters or words in the check text area using optical character recognition techniques; combining the identified characters or words into a text string; and correcting spelling errors, processing segmentation problems and/or standardizing text formats for the text character strings to obtain text recognition results.
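The post-OCR steps (combining characters into a string, correcting spelling, standardizing the format) can be sketched as below. The recognized characters and the correction table are hypothetical examples; a real system would use an OCR engine and a domain dictionary.

```python
# Hypothetical spelling-fix table; a real system would use a domain dictionary.
CORRECTIONS = {"0ctober": "October"}

def postprocess(chars):
    text = "".join(chars)                      # combine recognized characters
    for wrong, right in CORRECTIONS.items():
        text = text.replace(wrong, right)      # correct spelling errors
    return " ".join(text.split())              # standardize whitespace/format

result = postprocess(list("PAY  TO  ACME   0ctober 12"))
```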
According to some exemplary embodiments, after the obtaining of the verification result of the check seal based on the authenticity verification result, the method further comprises: and storing the image information, the check foreground image, the check seal area, the check text area and the text recognition result into the check information database according to an encryption protocol in response to the check seal verification result being passed.
According to a second aspect of the present invention, there is provided a check seal verification apparatus, the apparatus comprising: the acquisition module is used for: collecting image information of checks; a first image segmentation module for: performing first image segmentation on the image information by utilizing a foreground extraction model to extract a foreground and remove a background pattern to obtain a foreground image of a check, wherein the foreground extraction model is trained by utilizing a U2-Net model and comprises M convolution layers and N pooling layers which are alternately arranged, and M, N is a positive integer; a second image segmentation module for: inputting the foreground image of the check into a segmentation model for carrying out second image segmentation to segment the seal and the text region, and obtaining the seal region and the text region of the check; the target seal picture acquisition module is used for: performing character recognition on the check text area, and acquiring a target seal picture from a check information database based on a character recognition result, wherein the check information database comprises check recognition information stored in advance; the true and false verification result acquisition module is used for: inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result; and a check seal verification result acquisition module for: and acquiring the verification result of the check seal based on the authenticity verification result.
According to some example embodiments, the first image segmentation module may include an image feature map acquisition unit, an interpolated image feature map generation unit, a fused image feature map acquisition module, a binary segmentation image acquisition unit, and a check foreground image acquisition unit.
According to some exemplary embodiments, the image feature map obtaining unit may be configured to obtain an image feature map by processing the M convolution layers and the N pooling layers based on the image information, wherein the image feature is extracted by the convolution layers, and the spatial resolution of the image information is reduced by the pooling layers.
According to some exemplary embodiments, the interpolated image feature map generating unit may be configured to perform an interpolation operation on the image feature map to generate an interpolated image feature map.
According to some exemplary embodiments, the fused image feature map obtaining module may be configured to fuse the image feature map and the interpolated image feature map to obtain a fused image feature map.
According to some exemplary embodiments, the binary segmented image obtaining unit may be configured to output a binary segmented image based on the fused image feature map, wherein the binary segmented image includes foreground pixels and background pixels.
According to some exemplary embodiments, the check foreground image acquisition unit may be configured to generate the check foreground image based on the foreground pixels of the binary segmentation image.
According to some example embodiments, the fused image feature map acquisition module may include a fusion module and an optimization module.
According to some example embodiments, the fusion module may include a corresponding channel adding unit, a corresponding channel multiplying unit, or a corresponding channel stitching unit.
According to some example embodiments, the corresponding channel adding unit may be configured to add the image feature map and the corresponding channels of the interpolated image feature map.
According to some exemplary embodiments, the corresponding channel multiplication unit may be configured to multiply the image feature map with a corresponding channel of the interpolated image feature map.
According to some exemplary embodiments, the corresponding channel stitching unit may be configured to stitch corresponding channels of the image feature map and the interpolated image feature map.
According to some example embodiments, the optimization module may include a processing unit and an output unit.
According to some exemplary embodiments, the processing unit may be configured to perform an activation function processing and a normalization processing on the fused image feature map, to obtain an optimized fused feature map.
According to some exemplary embodiments, the output unit may be configured to output a binary segmentation image based on the optimized fusion feature map.
According to some exemplary embodiments, the target stamp picture obtaining module may include a character or word recognition unit, a text character string combining unit, and a text recognition result obtaining unit.
According to some exemplary embodiments, the character or word recognition unit may be configured to recognize characters or words in the check text area using optical character recognition techniques.
According to some exemplary embodiments, the text string combining unit may be used to combine the recognized characters or words into a text string.
According to some exemplary embodiments, the text recognition result obtaining unit may be configured to correct spelling errors, process segmentation problems, and/or normalize text formats for the text string, and obtain a text recognition result.
According to some exemplary embodiments, the authentication result obtaining module may include a deep neural network module and a euclidean distance module.
According to some example embodiments, the deep neural network module may include a training data set generation module and an authentication probability value output module.
According to some example embodiments, the training data set generation module may include a check seal image acquisition unit, a labeling unit, a supplemental image data acquisition unit, and a training data set formation unit.
According to some exemplary embodiments, the check seal image acquisition unit may be configured to acquire check seal images from a plurality of data sources.
According to some exemplary embodiments, the labeling unit may be configured to label the check seal image by marking the position and the authenticity label of each seal, so as to obtain a labeled check seal image.
According to some example embodiments, the supplemental image data acquisition unit may be configured to randomly rotate, translate, scale, and flip the tagged check seal image to acquire supplemental image data.
According to some example embodiments, the training data set forming unit may be configured to form a training data set based on the tagged check seal image and supplemental image data.
According to some example embodiments, the authentication probability value output module may include a feature map forming unit and an authentication probability value output unit.
According to some exemplary embodiments, the feature map forming unit may be configured to extract seal identification features from the check seal region and the target seal picture and form a feature map, where the seal identification features include edge, texture, and/or shape features corresponding to the check seal region and the target seal picture.
According to some exemplary embodiments, the authenticity verification probability value output unit may be configured to output an authenticity verification probability value by performing nonlinear transformation and combination on the feature map through a multi-layer convolution and a full connection layer of the stamp authentication model.
According to some example embodiments, the euclidean distance module may include a euclidean distance calculation unit and a comparison unit.
According to some example embodiments, the euclidean distance calculating unit may be configured to calculate a euclidean distance of the check seal area and the target seal picture.
According to some exemplary embodiments, the comparing unit may be configured to output an authentication result based on a comparison result of the euclidean distance and a preset authentication threshold.
According to some exemplary embodiments, the verification device of the check seal may further comprise a storage unit.
According to some exemplary embodiments, the storing unit may be configured to store the image information, the check foreground image, the check seal area, the check text area, and the result of the text recognition in the check information database according to an encryption protocol in response to the verification result of the check seal being passed.
According to some exemplary embodiments, the check seal verification apparatus may further include a resolution and clarity determination module.
According to some example embodiments, the resolution and sharpness determination module may include a resolution and sharpness determination unit and a re-acquisition unit.
According to some exemplary embodiments, the resolution and sharpness determination unit may be configured to determine whether the image information meets a preset resolution and sharpness requirement.
According to some exemplary embodiments, the re-acquisition unit may be configured to re-acquire image information of the check in response to the image information not meeting preset resolution and sharpness requirements.
According to a third aspect of the present invention, there is provided an electronic device comprising: one or more processors; and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method as described above.
According to a fourth aspect of the present invention there is provided a computer readable storage medium having stored thereon executable instructions which when executed by a processor cause the processor to perform a method as described above.
According to a fifth aspect of the present invention there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
One or more of the above embodiments have the following advantages or benefits. The check seal verification method provided by the invention uses a foreground extraction model comprising M convolution layers and N pooling layers arranged alternately, together with a segmentation model that separates the seal and text regions, so that a higher-quality foreground image and partitions can be extracted; this reduces unnecessary consumption of computing resources and improves processing efficiency. Through image segmentation, text recognition, and seal authenticity verification, the system can provide more accurate seal verification results, thereby reducing manual intervention and manual misjudgment and improving the user experience.
Drawings
The foregoing and other objects, features and advantages of the invention will be apparent from the following description of embodiments of the invention with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of a check seal verification method, apparatus, device, medium according to an embodiment of the present invention.
Fig. 2 schematically illustrates a flow chart of a method of validating a check seal in accordance with an embodiment of the present invention.
Fig. 3 schematically shows a flow chart of a method of first image segmentation of image information according to an embodiment of the invention.
Fig. 4 schematically shows a flow chart of a method of fusing an image feature map with an interpolated image feature map according to an embodiment of the invention.
Fig. 5 schematically shows a flow chart of a method of optimizing a fused image feature map according to an embodiment of the invention.
FIG. 6 schematically illustrates a flow chart of a method of text recognition of the check text area according to an embodiment of the invention.
FIG. 7 schematically illustrates a flow chart of a method of training data set generation based on a deep neural network model, in accordance with an embodiment of the present invention.
Fig. 8 schematically shows a flowchart of a method of outputting a verification result of authenticity by a deep neural network model according to an embodiment of the invention.
Fig. 9 schematically shows a flowchart of a method of outputting a verification result of authenticity by calculating a euclidean distance according to an embodiment of the invention.
Fig. 10 schematically illustrates a flow chart of a method of storing confidential information in accordance with an embodiment of the invention.
Fig. 11 schematically shows a flowchart of a method of resolution and sharpness determination of image information according to an embodiment of the present invention.
Fig. 12 schematically shows a block diagram of a check seal verification apparatus according to an embodiment of the present invention.
Fig. 13 schematically shows a block diagram of an electronic device adapted for a method of verification of a check seal according to an embodiment of the invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted according to the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
In the technical solution of the present invention, the collection, storage, and use of the personal information involved comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
First, technical terms described herein are explained and illustrated as follows.
The U2-Net (U²-Net) model is a deep learning model for image segmentation tasks. It is a variant of the U-Net architecture, modified and optimized to perform well across different image segmentation tasks. U2-Net is primarily used to separate foreground from background in images and is typically applied to semantic segmentation, instance segmentation, and other tasks concerned with the boundaries and contours of objects in images.
Deep neural networks (Deep Neural Network, DNN) are a machine learning model built based on artificial neurons and multiple neuron layers, widely used to handle complex nonlinear relationships and large-scale data sets. Deep neural networks are composed of multiple layers of neurons, typically including an input layer, a hidden layer, and an output layer. The input layer accepts the raw data, the output layer generates the prediction or classification results of the model, and the hidden layer is used to learn the features and patterns in the data.
The Euclidean distance (Euclidean Distance) is a straight line distance between two points in space, typically used to measure the similarity or distance between the points. Euclidean distance is a common distance measurement method, and is particularly suitable for calculating the distance between data points in a multidimensional space.
Optical character recognition (Optical Character Recognition, OCR) is an automated process for converting printed or handwritten text into editable text. Its main objective is to extract characters, words and text information from an image or scanned document, enabling a computer to understand and process text data.
Checks have long been one of the important payment instruments in the financial industry and play a critical role in commercial transactions. However, the legitimacy and authenticity of checks has been a major challenge for banks and financial institutions. Checks may be counterfeited or tampered with, which may result in financial losses, damage the reputation of the financial institution, and erode customer trust. Therefore, the security and legitimacy of checks is a focus of the financial industry.
One of the main ways of check counterfeiting is to forge a seal. Seals are an important component of checks and are commonly used to verify the authenticity of checks. However, verification of current check seals still relies primarily on manual handling and expertise, which has many limitations. First, manual verification is time-consuming and error-prone and requires specialized knowledge and experience, which makes processing large-scale check transactions complex and expensive. Second, counterfeiters can use advanced image processing techniques to make false seals, for example by blurring boundaries and applying other deceptive tricks, thereby increasing the difficulty of verification.
Therefore, in order to increase the security and legitimacy of check transactions, there is a need for more efficient, accurate and automated check seal authenticity identification methods.
Based on this, an embodiment of the present invention provides a check seal verification method, which is characterized in that the method includes: collecting image information of checks; performing first image segmentation on the image information by utilizing a foreground extraction model to extract a foreground and remove a background pattern to obtain a foreground image of a check, wherein the foreground extraction model is trained by utilizing a U2-Net model and comprises M convolution layers and N pooling layers which are alternately arranged, and M, N is a positive integer; inputting the foreground image of the check into a segmentation model for carrying out second image segmentation to segment the seal and the text region, and obtaining the seal region and the text region of the check; performing character recognition on the check text area, and acquiring a target seal picture from a check information database based on a character recognition result, wherein the check information database comprises check recognition information stored in advance; inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result; and obtaining the verification result of the check seal based on the authenticity verification result. 
According to the method provided by the invention, the foreground extraction model comprising M convolution layers and N pooling layers which are alternately arranged can be utilized, and the segmentation model is utilized to segment the seal and the text region, so that a higher-quality foreground image and segmented regions can be extracted, thereby reducing unnecessary computing resources and improving the processing efficiency of a computer; through image segmentation, character recognition and seal authenticity verification, the system can provide more accurate seal verification results, so that manual intervention can be reduced, manual misjudgment is reduced, and user experience is improved.
The check seal verification method, device, equipment and medium can be used in the technical field of big data and the technical field of artificial intelligence, can also be used in the financial field, and can also be used in various fields except the technical field of big data and the technical field of artificial intelligence and the financial field. The application fields of the check seal verification method, the check seal verification device, the check seal verification equipment and the check seal verification medium provided by the embodiment of the invention are not limited.
In the technical scheme of the invention, the related user information (including but not limited to user personal information, user image information, and user equipment information such as position information) and data (including but not limited to data for analysis, stored data, and displayed data) are information and data authorized by the user or fully authorized by all parties; the collection, storage, use, processing, transmission, provision, disclosure, application and other processing of the related data are all conducted in accordance with the relevant laws, regulations and standards of the relevant countries and regions; necessary security measures are adopted; the public welfare is not prejudiced; and corresponding operation entries are provided for the user to choose to authorize or refuse.
FIG. 1 schematically illustrates an application scenario diagram of a check seal verification method, apparatus, device, medium according to an embodiment of the present invention.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the check seal verification method provided by the embodiment of the present invention may be generally executed by the server 105. Accordingly, the verification device for check seals provided by embodiments of the present invention may be generally disposed in the server 105. The check seal verification method provided by the embodiment of the present invention may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the verification device for check seal provided in the embodiment of the present invention may also be provided in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates a flow chart of a method of validating a check seal in accordance with an embodiment of the present invention.
As shown in fig. 2, the check seal verification method 200 of this embodiment may include operations S210 to S260.
In operation S210, image information of a check is acquired.
In the embodiment of the invention, in order to acquire a clearer check image for subsequent processing, the embodiment of the invention also provides a method for judging the resolution and the definition of the image information.
Fig. 11 schematically shows a flowchart of a method of resolution and sharpness determination of image information according to an embodiment of the present invention.
As shown in fig. 11, the method of determining resolution and sharpness of image information of this embodiment may include operations S1110 to S1120.
In operation S1110, it is determined whether the image information meets a preset resolution and definition requirement.
In embodiments of the present invention, the system may perform a preliminary evaluation of the acquired check image to determine whether it meets preset resolution and sharpness requirements. In particular, image resolution inspection and image sharpness evaluation may be included. The system will check the resolution of the image to ensure that it meets the required criteria; the system will evaluate the sharpness of the image to ensure that the text and image elements are legible, which can be done by an image quality evaluation algorithm, checking the details and edge sharpness in the image. If the image meets the preset standard, the subsequent processing can be continued; otherwise, it will be marked as requiring a re-acquisition.
In operation S1120, in response to the image information not meeting the preset resolution and sharpness requirements, the image information of the check is re-acquired.
In the embodiment of the invention, if the image information does not meet the preset resolution and definition requirements, the operation of re-acquisition is triggered. Wherein the system may issue a warning or indication to the user that rescanning or capturing of the check image is required. After re-acquisition, the system can automatically verify the image again to ensure that the newly acquired image meets the requirements.
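As an illustrative sketch only (the patent does not disclose a concrete quality metric), the resolution check and sharpness evaluation of operations S1110–S1120 might be realized with a variance-of-Laplacian blur measure; the threshold constants below are hypothetical and would be tuned per scanner:

```python
import numpy as np

# Hypothetical quality thresholds; real values would be tuned per device.
MIN_WIDTH, MIN_HEIGHT = 800, 400
MIN_SHARPNESS = 100.0

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response over the image: low values indicate blur."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):          # 'valid' 3x3 convolution with the Laplacian kernel
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def meets_quality(gray: np.ndarray) -> bool:
    """Operation S1110: resolution check first, then edge/detail sharpness check."""
    h, w = gray.shape
    if w < MIN_WIDTH or h < MIN_HEIGHT:
        return False
    return laplacian_variance(gray) >= MIN_SHARPNESS
```

An image failing `meets_quality` would trigger the re-acquisition of operation S1120.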
Referring back to fig. 2, in operation S220, the image information is subjected to first image segmentation using a foreground extraction model to extract a foreground and remove a background pattern, and a check foreground image is obtained, wherein the foreground extraction model may be trained using a U2-Net model, and the foreground extraction model includes M convolution layers and N pooling layers alternately arranged, wherein M, N is a positive integer.
Fig. 3 schematically shows a flow chart of a method of first image segmentation of image information according to an embodiment of the invention.
As shown in fig. 3, the method for performing the first image segmentation on the image information of the embodiment may include operations S310 to S350, and the operations S310 to S350 may at least partially perform the operation S220.
In operation S310, processing is performed using the M convolution layers and the N pooling layers based on the image information to obtain an image feature map, wherein image features are extracted using the convolution layers, and spatial resolution of the image information is reduced using the pooling layers.
In an embodiment of the invention, each of the M convolution layers generally includes a plurality of convolution kernels, each for detecting a different feature in the image, the operation of the convolution kernels converting the image into a feature map, wherein each channel corresponds to a detected feature. The pooling layer is used to reduce the spatial resolution of the feature map, typically employing a max-pooling or average pooling operation, which helps reduce computational complexity while preserving the most important features.
In an embodiment of the invention, the image feature map generated after passing through the convolution and pooling layers of the U2-Net model contains various features extracted from the original image, including edges, textures, colors, and other abstract features. Image feature maps typically have a higher level of semantic information for subsequent tasks such as segmentation or classification.
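To make the alternating convolution/pooling of operation S310 concrete, the following minimal numpy sketch (not the U2-Net implementation itself) shows one feature-detecting kernel followed by 2x2 max pooling, repeated M = N = 2 times; the edge kernel is a hypothetical example:

```python
import numpy as np

def conv2d_valid(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Single-channel 'valid' convolution: extracts a local image feature."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * x[i:i + h - kh + 1, j:j + w - kw + 1]
    return out

def max_pool2(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling: halves spatial resolution, keeping the strongest responses."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]            # crop odd borders
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

EDGE_KERNEL = np.array([[1.0, -1.0]])        # hypothetical horizontal-edge detector

def feature_map(image: np.ndarray, layers: int = 2) -> np.ndarray:
    """Alternate convolution (feature extraction) and pooling (downsampling)."""
    x = image
    for _ in range(layers):
        x = max_pool2(conv2d_valid(x, EDGE_KERNEL))
    return x
```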
In operation S320, an interpolation operation is performed on the image feature map, and an image feature map after interpolation is generated.
In embodiments of the present invention, interpolation operations may generate new data points from known data points to preserve the spatial relationship and structure of image features. Specifically, bilinear interpolation and bicubic interpolation implementations may be selected as desired. Wherein bilinear interpolation uses the weighted average of the four nearest neighbor pixels to generate a new pixel value; bicubic interpolation uses a weighted average of 16 nearest neighbor pixels to generate the value of the new pixel, which provides higher quality when processing more complex image cases, but is computationally expensive.
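The bilinear case described above can be sketched as follows; this is a plain numpy illustration (align-corners convention), not the interpolation code of any particular framework:

```python
import numpy as np

def bilinear_upsample(fmap: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Bilinear interpolation: each output pixel is a weighted average of the
    four nearest input pixels, preserving the spatial structure of the features."""
    in_h, in_w = fmap.shape
    ys = np.linspace(0, in_h - 1, out_h)     # sample positions in input coords
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = fmap[np.ix_(y0, x0)] * (1 - wx) + fmap[np.ix_(y0, x1)] * wx
    bot = fmap[np.ix_(y1, x0)] * (1 - wx) + fmap[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Bicubic interpolation would replace the 4-neighbour weighting with a 16-neighbour cubic kernel, at higher computational cost.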
In operation S330, the image feature map and the interpolated image feature map are fused, and a fused image feature map is obtained.
In accordance with embodiments of the present invention, to provide a richer and comprehensive feature representation to support subsequent image processing or analysis tasks, increase the performance of the model, and improve understanding and abstraction of image content, embodiments of the present invention also provide a fusion operation that fuses feature maps.
Fig. 4 schematically shows a flow chart of a method of fusing an image feature map with an interpolated image feature map according to an embodiment of the invention.
As shown in fig. 4, the method of fusing an image feature map and an interpolated image feature map of this embodiment may include operations S410, S420, or S430, and the operations S410, S420, or S430 may at least partially perform operation S330.
In operation S410, the image feature map and the corresponding channels of the interpolated image feature map are added.
In an embodiment of the present invention, the pixel values of the image feature map of the corresponding channel and the interpolated image feature map may be added. The channel addition method is generally used for preserving the complementary information of the two feature maps, thereby enhancing the characterization capability of the features. Channel addition helps to increase the response of the feature map, enabling it to capture more image detail.
In operation S420, the image feature map and the corresponding channels of the interpolated image feature map are multiplied.
In an embodiment of the present invention, the image feature map of the corresponding channel and the pixel values of the interpolated image feature map may be multiplied. Channel multiplication methods are typically used to enhance co-occurring features, reduce the weight of uncorrelated features, which helps the model learn better about the correlation between features.
In operation S430, the image feature map and the corresponding channels of the interpolated image feature map are spliced.
In the embodiment of the invention, the image feature images of the corresponding channels and the interpolated image feature images can be spliced in the channel dimension to generate a new feature image, and the number of channels of the feature image is increased to contain more feature information. The method of channel stitching helps to combine information from different sources together to obtain a more comprehensive representation of the features.
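The three fusion modes of operations S410–S430 reduce to simple tensor operations on channel-first feature maps; a minimal sketch (function and argument names are illustrative, not from the patent):

```python
import numpy as np

def fuse(a: np.ndarray, b: np.ndarray, mode: str) -> np.ndarray:
    """Fuse two feature maps of shape (channels, H, W).

    'add'    - element-wise sum (S410): preserves complementary information
    'mul'    - element-wise product (S420): emphasizes co-occurring features
    'concat' - channel concatenation (S430): more channels, richer joint features
    """
    if mode == "add":
        return a + b
    if mode == "mul":
        return a * b
    if mode == "concat":
        return np.concatenate([a, b], axis=0)
    raise ValueError(f"unknown fusion mode: {mode}")
```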
In addition, in order to optimize the feature map to prepare for a subsequent segmentation task, after the image feature map and the interpolated image feature map are fused to obtain a fused image feature map, the invention further provides a method for optimizing the fused image feature map.
Fig. 5 schematically shows a flow chart of a method of optimizing a fused image feature map according to an embodiment of the invention.
As shown in fig. 5, the method of optimizing the fused image feature map of this embodiment may include operations S510 to S520.
In operation S510, an activation function process and a normalization process are performed on the fused image feature map, so as to obtain an optimized fused feature map.
In embodiments of the present invention, an activation function may be used to introduce non-linear properties and increase the characterizability of the feature map; whereas normalization processing may be used to ensure that feature maps have similar scales and ranges. Among them, normalization helps to stabilize the training process, preventing gradient explosions or vanishing, common normalization methods include batch normalization (Batch Normalization) or layer normalization (Layer Normalization), etc.
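Assuming ReLU as the activation and per-channel (batch-style) normalization — both common choices, though the patent does not fix them — operation S510 might be sketched as:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    """Nonlinear activation: zero out negative responses."""
    return np.maximum(x, 0.0)

def channel_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each channel of a (C, H, W) map to zero mean / unit variance;
    eps prevents division by zero and stabilizes training."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def optimize_fused(fused: np.ndarray) -> np.ndarray:
    """Operation S510: activation function processing, then normalization."""
    return channel_norm(relu(fused))
```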
In operation S520, a binary-segmentation image is output based on the optimized fusion feature map.
Referring back to fig. 3, in operation S340, a binary-divided image including foreground pixels and background pixels is output based on the fused image feature map.
In an embodiment of the invention, the binary segmented image comprises two classes of pixels, namely foreground pixels and background pixels. The foreground pixels represent an object of interest in the check image, such as text, a stamp, or other foreground element, while the background pixels represent the background of the check image.
In operation S350, the check foreground image is generated based on foreground pixels of the binary split image.
In an embodiment of the present invention, the process of generating the foreground image may include: traversing pixels of the binary-segmented image; for foreground pixels, copy it to the check foreground image; for background pixels, they may be set to transparent or other suitable values to ensure that the foreground image contains only the object of interest.
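The traversal described above is equivalent to a vectorized mask application; a sketch that copies foreground pixels and makes background pixels transparent via an alpha channel (the RGBA output format is one possible choice, not mandated by the patent):

```python
import numpy as np

def extract_foreground(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Operation S350 sketch: build the check foreground image.

    image: (H, W, 3) uint8 colour image; mask: (H, W) binary segmentation
    (1 = foreground pixel, 0 = background pixel). Returns an (H, W, 4) RGBA
    image whose background pixels are fully transparent.
    """
    h, w, _ = image.shape
    out = np.zeros((h, w, 4), dtype=np.uint8)
    out[..., :3] = image * mask[..., None]   # keep colour only where foreground
    out[..., 3] = mask * 255                 # alpha: opaque foreground, clear background
    return out
```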
Referring back to fig. 2, in operation S230, the check foreground image is input into a segmentation model for second image segmentation to segment the seal and text region, and the check seal region and the check text region are acquired.
In embodiments of the invention, the segmentation model may be trained using a U-Net model or a DeepLab model. The check foreground image is input into a segmentation model that classifies different regions in the image into different categories. In particular, it identifies and segments the check seal area and the check text area.
In operation S240, text recognition is performed on the check text region, and a target stamp picture is acquired from a check information database based on a result of the text recognition, wherein the check information database includes check recognition information stored in advance.
FIG. 6 schematically illustrates a flow chart of a method of text recognition of the check text area according to an embodiment of the invention.
As shown in fig. 6, the method for recognizing the text area of the check according to the embodiment may include operations S610 to S630, and operations S610 to S630 may at least partially perform operation S240.
In operation S610, characters or words in the check text area are recognized using an optical character recognition technique.
In operation S620, the recognized characters or words are combined into a text character string.
In operation S630, spelling errors are corrected, segmentation problems are processed, and/or text formats are normalized for the text strings, and text recognition results are obtained.
In the embodiment of the invention, in order to obtain the optimized recognition result, possible spelling errors in the text can be detected and corrected so as to improve the accuracy of the text; the segmentation or disconnection problem in the text character string can be processed, so that the continuity and structure of the text are ensured; the format of the text may be standardized according to requirements, such as case-to-case conversion, addition or deletion of punctuation marks, etc.
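The post-processing of operation S630 can be sketched as a small text-normalization pass; the correction table below contains hypothetical OCR confusions for illustration:

```python
import re

# Hypothetical correction table for common OCR misreads on check fields.
CORRECTIONS = {"0ctober": "October", "Arnount": "Amount"}

def normalize_ocr_text(raw: str) -> str:
    """Operation S630 sketch: correct known misreads, repair segmentation
    breaks (collapse whitespace runs), and standardize punctuation spacing."""
    text = raw
    for wrong, right in CORRECTIONS.items():
        text = text.replace(wrong, right)
    text = re.sub(r"\s+", " ", text).strip()    # segmentation / line-break repair
    text = re.sub(r"\s+([,.;:])", r"\1", text)  # no space before punctuation
    return text
```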
Referring back to fig. 2, in operation S250, the check seal area and the target seal picture are input into a pre-constructed seal authentication model for seal authenticity verification, and an authenticity verification result is output.
In embodiments of the invention, the stamp identification model may be trained using a deep neural network.
FIG. 7 schematically illustrates a flow chart of a method of training data set generation based on a deep neural network model, in accordance with an embodiment of the present invention.
As shown in fig. 7, the method of generating a training data set based on a deep neural network model of this embodiment may include operations S710 to S740.
In operation S710, check seal images are acquired from a plurality of data sources.
In embodiments of the present invention, check seal images may be collected from multiple data sources. These data sources may be stamp images from different checks, which may include real checks and simulated checks, so that the model can learn different types of stamps.
In operation S720, the check seal image is marked, which includes marking the position and the authenticity of each seal, and obtaining a check seal image with a label.
In an embodiment of the invention, the check seal image may be annotated. Labeling may include marking the location of each stamp and the authenticity of the tag. Wherein the position mark typically comprises a bounding box or mask of the stamp so that the model knows the position of the stamp in the image; the authenticity label is used to indicate whether the stamp is authentic or counterfeit.
In operation S730, the tagged check seal image is randomly rotated, translated, scaled, and flipped to obtain supplemental image data.
In the embodiment of the invention, in order to increase the diversity of training data, the model can be better generalized to different types of seals and different transformations, and the data enhancement operations such as random rotation, translation, scaling, overturning and the like can be performed on the check seal image with the label to obtain the supplementary image data.
In operation S740, a training data set is formed based on the tagged check seal image and supplemental image data.
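The augmentation and dataset assembly of operations S730–S740 might be sketched as below; for simplicity this illustration uses 90-degree rotations, flips, and integer-pixel translations only, whereas a production pipeline would also apply arbitrary-angle rotation and scaling:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Operation S730 sketch: random rotation, flip, and translation."""
    out = np.rot90(image, k=int(rng.integers(0, 4)))
    if rng.random() < 0.5:
        out = np.fliplr(out)
    if rng.random() < 0.5:
        out = np.flipud(out)
    shift = rng.integers(-2, 3, size=2)          # small random translation
    return np.roll(out, shift=tuple(shift), axis=(0, 1))

def build_training_set(labeled, n_aug, seed=0):
    """Operation S740: tagged seal images plus augmented copies, labels kept."""
    rng = np.random.default_rng(seed)
    data = list(labeled)
    for img, label in labeled:
        data += [(augment(img, rng), label) for _ in range(n_aug)]
    return data
```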
Fig. 8 schematically shows a flowchart of a method of outputting a verification result of authenticity by a deep neural network model according to an embodiment of the invention.
As shown in fig. 8, the method for outputting the authenticity verification result through the deep neural network model of this embodiment may include operations S810 to S820, and operations S810 to S820 may at least partially perform operation S250.
In operation S810, seal identification features are extracted from the check seal area and the target seal picture and form a feature map, where the seal identification features include edge, texture, and/or shape features corresponding to the check seal area and the target seal picture.
In operation S820, the feature map is non-linearly transformed and combined through the multi-layer convolution and full-connection layer of the stamp authentication model, and an authenticity verification probability value is output.
In embodiments of the present invention, the extracted stamp identifying features may be non-linearly transformed and combined by multi-layer convolution and full-connection layers of the deep neural network model. The purpose of these layers is to perform a deep processing of the extracted features to capture complex patterns in the image. Finally, the model will output a true or false verification probability value indicating whether the stamp is authentic or counterfeit.
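As a toy illustration of operation S820 (the real model's architecture and weights are not disclosed), a fully-connected head with ReLU hidden layers and a sigmoid output maps extracted seal features to an authenticity probability:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def authenticity_probability(features: np.ndarray, weights) -> float:
    """Sketch of the classification head: nonlinear transforms over extracted
    seal features, ending in a single authenticity probability in (0, 1).

    `weights` is a list of (W, b) layer pairs; the last pair must produce a
    scalar logit. All shapes here are hypothetical.
    """
    x = features
    for W, b in weights[:-1]:
        x = np.maximum(W @ x + b, 0.0)   # hidden layers with ReLU activation
    w, b = weights[-1]
    return float(sigmoid(w @ x + b))     # probability that the seal is genuine
```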
In addition, in order to improve the operation speed and the verification efficiency, a simple scheme for verifying authenticity is also provided.
Fig. 9 schematically shows a flowchart of a method of outputting a verification result of authenticity by calculating a euclidean distance according to an embodiment of the invention.
As shown in fig. 9, the method of outputting the authenticity verification result by calculating the euclidean distance of this embodiment may include operations S910 to S920, and operations S910 to S920 may at least partially perform operation S250.
In operation S910, a euclidean distance between the check seal area and the target seal picture is calculated.
In the embodiment of the invention, it is assumed that P and Q are the two seal images to be compared, with shape feature vectors $p_i$ and $q_i$, $i = 1, 2, \ldots, n$, respectively. The degree of match or difference is then calculated by the following formula:

$$D(P, Q) = \sum_{i=1}^{n} \left\| p_i - q_i \right\|$$

wherein $\|\cdot\|$ denotes the Euclidean distance, and the formula represents the sum of the distances between corresponding feature vectors of the two stamp images. According to the above formula, the contours can be matched and compared using a least squares method or the like. In particular, the transformation parameters that minimize the degree of difference can be found by enumerating different transformation modes such as translation, rotation, and scaling.
In operation S920, a verification result of authenticity is output based on the comparison result of the euclidean distance and a preset verification threshold value of authenticity.
In the embodiment of the invention, the transformation parameters to be matched are a rotation angle $\theta$, a scaling factor $s$, and a translation vector $t$. The distance between the two stamp images after transformation can be calculated by the following formula:

$$D(P, Q; \theta, s, t) = \sum_{i=1}^{n} \left\| s R_{\theta} p_i + t - q_i \right\|$$

wherein $R_{\theta}$ represents a rotation matrix with a counterclockwise rotation angle $\theta$ around the origin. The difference between the two seals is then:

$$D_{\min}(P, Q) = \min_{\theta, s, t} D(P, Q; \theta, s, t)$$

A decision threshold $\gamma$ can be set; when $D_{\min}(P, Q) \le \gamma$, the two seals are considered consistent and the verification succeeds.
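The enumeration of rotation and scaling parameters described above might be sketched as follows; the angle/scale grids, the threshold value, and the centroid-based translation (a least-squares heuristic) are illustrative assumptions:

```python
import numpy as np

def transform(points: np.ndarray, theta: float, s: float, t: np.ndarray) -> np.ndarray:
    """Apply scaling s, counterclockwise rotation by theta, then translation t
    to an (n, 2) array of feature points."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * points @ R.T + t

def min_difference(P, Q, thetas, scales):
    """Enumerate rotation/scale; for each, align centroids (a least-squares
    translation heuristic) and sum Euclidean distances between corresponding
    feature points, keeping the minimum difference D_min."""
    best = np.inf
    for theta in thetas:
        for s in scales:
            moved = transform(P, theta, s, t=np.zeros(2))
            t_opt = Q.mean(axis=0) - moved.mean(axis=0)
            d = np.linalg.norm(moved + t_opt - Q, axis=1).sum()
            best = min(best, d)
    return best

def seals_match(P, Q, gamma=1e-6,
                thetas=np.linspace(0, 2 * np.pi, 72, endpoint=False),
                scales=(0.9, 1.0, 1.1)):
    """Seals verify as consistent when the minimal difference is within gamma."""
    return min_difference(P, Q, thetas, scales) <= gamma
```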
Referring back to fig. 2, in operation S260, a verification result of the check seal is obtained based on the authenticity verification result.
In addition, in order to track the information and history of the check and ensure confidentiality of the data, the information related to the check may be securely stored in a database for future use and retrieval in the event of verification passing.
Fig. 10 schematically illustrates a flow chart of a method of storing confidential information in accordance with an embodiment of the invention.
As shown in fig. 10, the method of storing secret-related information of this embodiment may include operation S1010.
In response to the verification result of the check seal being passed, storing the image information, the check foreground image, the check seal area, the check text area, and the result of the text recognition in the check information database according to an encryption protocol in operation S1010.
According to the check seal verification method provided by the invention, the foreground extraction model comprising M convolution layers and N pooling layers which are alternately arranged is utilized, and the segmentation model is utilized to segment the seal and the text region, so that a higher-quality foreground image and segmented regions can be extracted, thereby reducing unnecessary computing resources and improving the processing efficiency of a computer; through image segmentation, character recognition and seal authenticity verification, the system can provide more accurate seal verification results, so that manual intervention can be reduced, manual misjudgment is reduced, and user experience is improved. Specifically, the following beneficial effects are brought:
1. Automation and efficiency: the method greatly improves the efficiency of verifying the authenticity of the seal through automated image processing and a deep learning model; compared with manual verification, the automated method can process a large number of check images more quickly, saving time and human resources;
2. Improved accuracy: advanced technologies such as the deep neural network model and the U2-Net model improve the accuracy of authenticity verification and can more accurately identify the seal and text areas, thereby reducing the possibility of misjudgment;
3. User friendliness: for banks and financial institutions, this automated verification method provides a better user experience; the customer does not have to wait long for the check to be verified, and the risk of human error is reduced;
4. More comprehensive information extraction: besides authenticity verification, the method also includes character recognition, so that various pieces of information on the check, such as date and amount, can be extracted, providing a more comprehensive information extraction capability;
5. Database integration: the verification result and related information are stored in the check information database, which facilitates database integration and management and improves the accessibility and retrieval efficiency of the information;
6. Security: the security of seal authenticity verification can be improved through the deep learning model, as it is difficult for counterfeiters to deceive these models through advanced image processing techniques.
Based on the check seal verification method, the invention also provides a check seal verification device. The device will be described in detail below in connection with fig. 12.
Fig. 12 schematically shows a block diagram of a check seal verification apparatus according to an embodiment of the present invention.
As shown in fig. 12, the check seal verification apparatus 1200 according to this embodiment includes an acquisition module 1210, a first image segmentation module 1220, a second image segmentation module 1230, a target seal picture acquisition module 1240, a true-false verification result acquisition module 1250, and a check seal verification result acquisition module 1260.
The acquisition module 1210 may be used to acquire image information of the check. In an embodiment, the acquisition module 1210 may be configured to perform the operation S210 described above, which is not described herein.
The first image segmentation module 1220 may be configured to perform a first image segmentation on the image information using a foreground extraction model to extract foreground and remove background patterns to obtain a check foreground image, wherein the foreground extraction model is trained using the U2-Net model and includes M convolution layers and N pooling layers alternately arranged, wherein M, N is a positive integer. In an embodiment, the first image segmentation module 1220 may be configured to perform the operation S220 described above, which is not described herein.
The second image segmentation module 1230 may be configured to input the check foreground image into a segmentation model for second image segmentation to segment the seal and text regions, and obtain a check seal region and a check text region. In an embodiment, the second image segmentation module 1230 may be configured to perform the operation S230 described above, which is not described herein.
The target seal picture obtaining module 1240 may be configured to perform text recognition on the check text area, and obtain a target seal picture from a check information database based on a result of the text recognition, where the check information database includes check identification information stored in advance. In an embodiment, the target stamp image obtaining module 1240 may be configured to perform the operation S240 described above, which is not described herein.
The authenticity verification result obtaining module 1250 may be configured to input the check seal area and the target seal picture into a pre-constructed seal authentication model to perform seal authenticity verification, and output an authenticity verification result. In an embodiment, the authentication result obtaining module 1250 may be configured to perform the operation S250 described above, which is not described herein.
The check seal verification result acquisition module 1260 may be configured to acquire a verification result of the check seal based on the authenticity verification result. In an embodiment, the check seal verification result obtaining module 1260 may be configured to perform the operation S260 described above, which is not described herein.
According to an embodiment of the present invention, the first image segmentation module 1220 may include an image feature map acquisition unit, an interpolated image feature map generation unit, a fused image feature map acquisition module, a binary segmentation image acquisition unit, and a check foreground image acquisition unit.
The image feature map obtaining unit may be configured to obtain an image feature map by processing the image information with the M convolution layers and the N pooling layers, where the convolution layers extract image features and the pooling layers reduce the spatial resolution of the image information. In an embodiment, the image feature map obtaining unit may be configured to perform the operation S310 described above, which is not described herein.
The interpolated image feature map generating unit may be configured to perform interpolation operation on the image feature map to generate an interpolated image feature map. In an embodiment, the interpolated image feature map generating unit may be configured to perform the operation S320 described above, which is not described herein.
The fused image feature map obtaining module may be configured to fuse the image feature map and the interpolated image feature map to obtain a fused image feature map. In an embodiment, the fused image feature map obtaining module may be configured to perform the operation S330 described above, which is not described herein.
The binary segmentation image acquisition unit may be configured to output a binary segmentation image based on the fused image feature map, wherein the binary segmentation image includes foreground pixels and background pixels. In an embodiment, the binary segmentation image acquisition unit may be configured to perform the operation S340 described above, which is not described herein.
The check foreground image acquisition unit may be configured to generate the check foreground image based on foreground pixels of the binary segmentation image. In an embodiment, the check foreground image acquisition unit may be configured to perform the operation S350 described above, which is not described herein.
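The masking step described above — keeping foreground pixels and discarding background ones — can be sketched as follows. This is an illustrative sketch, not code from the patent; the function name `apply_foreground_mask` and the white (255) background fill value are assumptions.

```python
def apply_foreground_mask(image, mask, background_value=255):
    """Keep pixels where mask == 1 (foreground); replace the rest
    with a uniform background value."""
    return [
        [pix if m == 1 else background_value
         for pix, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]


if __name__ == "__main__":
    image = [[10, 20, 30],
             [40, 50, 60]]
    mask = [[0, 1, 1],
            [1, 0, 0]]
    # Foreground pixels survive; background pixels are blanked out.
    print(apply_foreground_mask(image, mask))
```

A production version would operate on a real image array rather than nested lists, but the per-pixel selection logic is the same.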
According to an embodiment of the present invention, the fused image feature map obtaining module may include a fusion module and an optimization module.
According to an embodiment of the present invention, the fusion module may include a corresponding channel adding unit, a corresponding channel multiplying unit, or a corresponding channel splicing unit.
The corresponding channel adding unit may be configured to add the corresponding channels of the image feature map and the interpolated image feature map. In an embodiment, the corresponding channel adding unit may be used to perform the operation S410 described above, which is not described herein.
The corresponding channel multiplication unit may be configured to multiply the image feature map with a corresponding channel of the interpolated image feature map. In an embodiment, the corresponding channel multiplying unit may be used to perform the operation S420 described above, which is not described herein.
The corresponding channel stitching unit may be configured to stitch the image feature map and the corresponding channel of the interpolated image feature map. In an embodiment, the corresponding channel stitching unit may be configured to perform the operation S430 described above, which is not described herein.
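The three channel-wise fusion strategies just described (addition, multiplication, splicing) can be illustrated on simple per-channel value lists. The function name and the `mode` parameter are illustrative assumptions, not from the source.

```python
def fuse_channels(a, b, mode="add"):
    """Fuse two per-channel feature lists:
    - "add": element-wise sum of corresponding channels
    - "multiply": element-wise product of corresponding channels
    - "concat": splice the channels together (doubles channel count)
    """
    if mode == "add":
        return [x + y for x, y in zip(a, b)]
    if mode == "multiply":
        return [x * y for x, y in zip(a, b)]
    if mode == "concat":
        return a + b
    raise ValueError(f"unknown fusion mode: {mode}")
```

Addition and multiplication keep the channel count unchanged, while splicing grows it; a real network would follow the spliced map with a convolution to restore the expected channel dimension.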
According to an embodiment of the invention, the optimization module may comprise a processing unit and an output unit.
The processing unit can be used for performing activation function processing and normalization processing on the fusion image feature images to obtain optimized fusion feature images. In an embodiment, the processing unit may be configured to perform the operation S510 described above, which is not described herein.
The output unit may be configured to output a binary segmentation image based on the optimized fusion feature map. In an embodiment, the output unit may be configured to perform the operation S520 described above, which is not described herein.
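The activation-then-threshold step above can be sketched minimally as follows, assuming a sigmoid activation and a 0.5 threshold (the patent specifies neither; both are common defaults).

```python
import math


def sigmoid(x):
    """Squash a raw feature value into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))


def binarize(feature_map, threshold=0.5):
    """Apply sigmoid activation per element, then threshold the
    normalized scores into foreground (1) / background (0) pixels."""
    return [[1 if sigmoid(v) >= threshold else 0 for v in row]
            for row in feature_map]
```

Values well above zero map to foreground, values well below zero to background, matching the binary segmentation image described in the text.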
According to an embodiment of the present invention, the target stamp picture obtaining module 1240 may include a character or word recognition unit, a text character string combining unit, and a text recognition result obtaining unit.
The character or word recognition unit may be configured to recognize characters or words in the check text area using optical character recognition techniques. In an embodiment, the character or word recognition unit may be used to perform the operation S610 described above, which is not described herein.
The text string combining unit may be used to combine the recognized characters or words into a text string. In an embodiment, the text string combining unit may be configured to perform the operation S620 described above, which is not described herein.
The text recognition result obtaining unit may be configured to correct spelling errors in the text string, resolve word-segmentation issues, and/or normalize the text format, obtaining a text recognition result. In an embodiment, the text recognition result obtaining unit may be configured to perform the operation S630 described above, which is not described herein.
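The OCR post-processing steps (combine tokens into a string, correct recognition errors, normalize whitespace) can be sketched as below; the correction dictionary and the function name are illustrative assumptions, not part of the patent.

```python
def postprocess_ocr(tokens, corrections=None):
    """Combine recognized tokens into one string, apply a small
    correction dictionary for common OCR confusions, and normalize
    whitespace."""
    corrections = corrections or {}
    words = [corrections.get(t, t) for t in tokens]
    text = " ".join(words)
    # Collapse runs of whitespace introduced by stray tokens.
    return " ".join(text.split()).strip()
```

For example, a confusion like the digit zero recognized in place of the letter "o" can be repaired via the correction dictionary before the string is used to query the check information database.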
According to an embodiment of the present invention, the authentication result obtaining module 1250 may include a deep neural network module and a euclidean distance module.
According to an embodiment of the present invention, the deep neural network module may include a training data set generation module and an authentication probability value output module.
According to the embodiment of the invention, the training data set generation module can comprise a check seal image acquisition unit, a labeling unit, a supplementary image data acquisition unit and a training data set forming unit.
The check seal image acquisition unit may be configured to acquire check seal images from a plurality of data sources. In an embodiment, the check seal image capturing unit may be configured to perform the operation S710 described above, which is not described herein.
The labeling unit may be used to label the check seal image, which includes marking the position and the authenticity label of each seal, obtaining a tagged check seal image. In an embodiment, the labeling unit may be configured to perform the operation S720 described above, which is not described herein.
The supplementary image data obtaining unit may be configured to randomly rotate, translate, scale and flip the tagged check seal image to obtain supplementary image data. In an embodiment, the supplementary image data obtaining unit may be used to perform the operation S730 described above, which is not described herein.
The training data set forming unit may be configured to form a training data set based on the tagged check seal image and supplemental image data. In an embodiment, the training data set forming unit may be configured to perform the operation S740 described above, which is not described herein.
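The augmentation and data-set assembly steps above can be sketched with simple 90-degree rotations and horizontal flips on 2D pixel lists. A real implementation would also translate and scale (e.g., via an image library), and all names here are illustrative.

```python
import random


def rotate90(img):
    """Rotate a 2D pixel list 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]


def hflip(img):
    """Flip a 2D pixel list horizontally."""
    return [row[::-1] for row in img]


def augment(labeled_images, n_extra, seed=0):
    """Create supplemental samples by randomly transforming labeled
    (image, label) pairs; the authenticity label is preserved."""
    rng = random.Random(seed)
    ops = [rotate90, hflip]
    extra = []
    for _ in range(n_extra):
        img, label = rng.choice(labeled_images)
        extra.append((rng.choice(ops)(img), label))
    # Training set = originals plus supplemental image data.
    return labeled_images + extra
```

Because the transforms only reposition pixels, each augmented sample keeps the authenticity label of the original it was derived from, which is what makes the supplemental data usable for training.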
According to an embodiment of the present invention, the authentication probability value output module may include a feature map forming unit and an authentication probability value output unit.
The feature map forming unit may be configured to extract seal identification features from the check seal region and the target seal picture and form a feature map, where the seal identification features include edge, texture, and/or shape features corresponding to the check seal region and the target seal picture. In an embodiment, the feature map forming unit may be configured to perform the operation S810 described above, which is not described herein.
The authenticity verification probability value output unit may be used to perform nonlinear transformation and combination on the feature map through the multi-layer convolution and fully-connected layers of the seal identification model, and to output an authenticity verification probability value. In an embodiment, the authenticity verification probability value output unit may be used to perform operation S820 described above, which is not described herein.
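As a minimal sketch of the final fully-connected stage, the flattened feature vector can be mapped to an authenticity probability via a single output neuron with a sigmoid activation. The actual model architecture is not specified to this level of detail in the patent, so the layer shape and names below are assumptions.

```python
import math


def dense(x, weights, bias):
    """One fully-connected layer: per-output weighted sum plus bias."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]


def authenticity_probability(feature_vec, weights, bias):
    """Map a flattened feature vector to a probability in (0, 1)
    via one dense output neuron and a sigmoid activation."""
    (logit,) = dense(feature_vec, weights, bias)
    return 1.0 / (1.0 + math.exp(-logit))
```

With zero weights and bias the output is exactly 0.5 (maximal uncertainty); training would push the weights so that genuine seal pairs score near 1 and forgeries near 0.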
According to an embodiment of the present invention, the euclidean distance module may include a euclidean distance calculating unit and a comparing unit.
The Euclidean distance calculating unit may be used to calculate the Euclidean distance of the check seal area and the target seal picture. In an embodiment, the euclidean distance calculating unit may be configured to perform the operation S910 described above, which is not described herein.
The comparing unit may be configured to output an authentication result based on a comparison result of the euclidean distance and a preset authentication threshold. In an embodiment, the comparing unit may be configured to perform the operation S920 described above, which is not described herein.
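The Euclidean-distance comparison above can be sketched directly on feature vectors. The threshold value and the convention that a small distance means "genuine" are assumptions for illustration; the patent only states that the result follows from comparing the distance with a preset threshold.

```python
import math


def euclidean_distance(feat_a, feat_b):
    """Euclidean (L2) distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))


def verify_seal(feat_a, feat_b, threshold=1.0):
    """Genuine (True) if the feature distance does not exceed the
    preset authenticity verification threshold."""
    return euclidean_distance(feat_a, feat_b) <= threshold
```

In practice `feat_a` and `feat_b` would be embeddings extracted from the check seal area and the target seal picture, so the distance measures how similar the two seals are in feature space.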
According to an embodiment of the present invention, the check seal verification device 1200 may further include a storage unit.
The storage unit may be configured to store the image information, the check foreground image, the check seal area, the check text area, and the result of the text recognition in the check information database according to an encryption protocol in response to the verification result of the check seal being passed. In an embodiment, the storage unit may be used to perform the operation S1010 described above, which is not described herein.
According to an embodiment of the present invention, the check seal verification apparatus 1200 may further include a resolution and sharpness determination module.
According to an embodiment of the present invention, the resolution and sharpness determination module may include a resolution and sharpness determination unit and a re-acquisition unit.
The resolution and definition judging unit may be configured to judge whether the image information meets a preset resolution and definition requirement. In an embodiment, the resolution and sharpness determination unit may be used to perform the operation S1110 described above, which is not described herein.
The re-acquisition unit may be configured to re-acquire image information of the check in response to the image information not meeting a preset resolution and sharpness requirement. In an embodiment, the reacquisition unit may be configured to perform the operation S1120 described above, which is not described herein.
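The quality gate and re-acquisition loop can be sketched as follows, with illustrative resolution/sharpness thresholds and a bounded retry count; the patent specifies none of these values, and the sharpness score is assumed to be precomputed (e.g., a variance-of-Laplacian measure).

```python
def meets_quality(width, height, sharpness,
                  min_w=640, min_h=480, min_sharpness=100.0):
    """True when the captured image meets the preset resolution and
    sharpness requirements; otherwise the caller should re-capture."""
    return width >= min_w and height >= min_h and sharpness >= min_sharpness


def capture_until_ok(capture_fn, max_tries=3):
    """Re-acquire the check image until the quality gate passes or the
    retry budget is exhausted; returns None on failure."""
    for _ in range(max_tries):
        frame = capture_fn()  # yields (width, height, sharpness)
        if meets_quality(*frame):
            return frame
    return None
```

Bounding the retries keeps the device from looping forever on a camera that can never satisfy the requirements, at which point the failure can be surfaced to the operator.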
Any of the acquisition module 1210, the first image segmentation module 1220, the second image segmentation module 1230, the target stamp picture acquisition module 1240, the authenticity verification result acquisition module 1250, and the check stamp verification result acquisition module 1260 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules, according to an embodiment of the invention. Alternatively, at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the acquisition module 1210, the first image segmentation module 1220, the second image segmentation module 1230, the target stamp picture acquisition module 1240, the authenticity verification result acquisition module 1250, and the check stamp verification result acquisition module 1260 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging circuitry, or by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the acquisition module 1210, the first image segmentation module 1220, the second image segmentation module 1230, the target stamp picture acquisition module 1240, the authenticity verification result acquisition module 1250, and the check stamp verification result acquisition module 1260 may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
Fig. 13 schematically shows a block diagram of an electronic device adapted for a method of verification of a check seal according to an embodiment of the invention.
As shown in fig. 13, an electronic device 1300 according to an embodiment of the present invention includes a processor 1301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1302 or a program loaded from a storage section 1308 into a Random Access Memory (RAM) 1303. Processor 1301 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 1301 may also include on-board memory for caching purposes. Processor 1301 may include a single processing unit or multiple processing units for performing different actions of the method flow according to an embodiment of the invention.
In the RAM 1303, various programs and data necessary for the operation of the electronic apparatus 1300 are stored. The processor 1301, the ROM 1302, and the RAM 1303 are connected to each other through a bus 1304. The processor 1301 performs various operations of the method flow according to the embodiment of the present invention by executing programs in the ROM 1302 and/or the RAM 1303. Note that the program may be stored in one or more memories other than the ROM 1302 and the RAM 1303. Processor 1301 may also perform various operations of the method flow according to embodiments of the present invention by executing programs stored in the one or more memories.
According to an embodiment of the invention, the electronic device 1300 may also include an input/output (I/O) interface 1305, which is also connected to the bus 1304. The electronic device 1300 may also include one or more of the following components connected to the I/O interface 1305: an input portion 1306 including a keyboard, a mouse, and the like; an output portion 1307 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 1308 including a hard disk or the like; and a communication portion 1309 including a network interface card such as a LAN card, a modem, or the like. The communication portion 1309 performs communication processing via a network such as the Internet. A drive 1310 is also connected to the I/O interface 1305 as needed. A removable medium 1311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1310 as needed, so that a computer program read therefrom is installed into the storage portion 1308 as needed.
The present invention also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present invention.
According to embodiments of the present invention, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the invention, the computer-readable storage medium may include ROM 1302 and/or RAM 1303 described above and/or one or more memories other than ROM 1302 and RAM 1303.
Embodiments of the present invention also include a computer program product comprising a computer program that contains program code for performing the method shown in the flowcharts. When the computer program product runs on a computer system, the program code causes the computer system to carry out the methods provided by embodiments of the present invention.
The above-described functions defined in the system/apparatus of the embodiment of the present invention are performed when the computer program is executed by the processor 1301. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the invention.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed via the communication portion 1309, and/or installed from the removable medium 1311. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
According to embodiments of the present invention, program code for carrying out the computer programs provided by embodiments of the present invention may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, "C", or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present invention are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the invention, and such alternatives and modifications are intended to fall within the scope of the invention.

Claims (15)

1. A method of validating a check seal, the method comprising:
collecting image information of checks;
performing first image segmentation on the image information by utilizing a foreground extraction model to extract a foreground and remove a background pattern to obtain a foreground image of a check, wherein the foreground extraction model is trained by utilizing a U2-Net model and comprises M convolution layers and N pooling layers which are alternately arranged, wherein M and N are positive integers;
inputting the foreground image of the check into a segmentation model for carrying out second image segmentation to segment the seal and the text region, and obtaining the seal region and the text region of the check;
performing character recognition on the check text area, and acquiring a target seal picture from a check information database based on a character recognition result, wherein the check information database comprises check recognition information stored in advance;
inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result; and
and acquiring the verification result of the check seal based on the authenticity verification result.
2. The method according to claim 1, wherein the first image segmentation of the image information using a foreground extraction model to extract foreground and remove background patterns, comprises:
processing by using the M convolution layers and the N pooling layers based on the image information to obtain an image feature map, wherein the convolution layers are used for extracting image features, and the pooling layers are used for reducing the spatial resolution of the image information;
performing interpolation operation on the image feature map to generate an interpolated image feature map;
fusing the image feature map and the interpolated image feature map to obtain a fused image feature map;
outputting a binary segmentation image based on the fused image feature map, wherein the binary segmentation image comprises foreground pixels and background pixels; and
generating the check foreground image based on foreground pixels of the binary segmentation image.
3. The method according to claim 2, wherein the fusing the image feature map and the interpolated image feature map specifically comprises:
adding the corresponding channels of the image feature map and the interpolated image feature map; or
multiplying the corresponding channels of the image feature map and the interpolated image feature map; or
splicing the corresponding channels of the image feature map and the interpolated image feature map.
4. A method according to claim 3, wherein after said fusing the image feature map and the interpolated image feature map to obtain a fused image feature map, the method further comprises:
performing activation function processing and normalization processing on the fusion image feature images to obtain optimized fusion feature images; and
wherein the outputting a binary segmentation image based on the fused image feature map comprises:
outputting the binary segmentation image based on the optimized fusion feature map.
5. The method according to any one of claims 1 to 4, wherein the segmentation model is trained using a U-Net model or a DeepLab model.
6. The method of any one of claims 1-4, wherein the stamp identification model is trained using a deep neural network;
Inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result, wherein the method specifically comprises the following steps of:
extracting seal identification features from the check seal area and the target seal picture and forming feature mapping, wherein the seal identification features comprise edges, textures and/or shape features corresponding to the check seal area and the target seal picture; and
and carrying out nonlinear transformation and combination on the feature map through the multi-layer convolution and the full-connection layer of the seal identification model, and outputting an authenticity verification probability value.
7. The method of claim 6, wherein the training dataset of the stamp authentication model is formed based on check stamp images acquired from a plurality of data sources, which specifically comprises:
acquiring check seal images from a plurality of data sources;
labeling the check seal image, wherein the labeling comprises labeling the position and the authenticity label of each seal, and obtaining the check seal image with the label;
randomly rotating, translating, scaling and flipping the tagged check seal image to obtain supplementary image data; and
forming a training dataset based on the tagged check seal image and the supplementary image data.
8. The method according to any one of claims 1 to 4, wherein inputting the check seal area and the target seal picture into a pre-constructed seal authentication model for seal authenticity verification, and outputting an authenticity verification result, specifically comprises:
calculating the Euclidean distance between the check seal area and the target seal picture; and
and outputting an authenticity verification result based on a comparison result of the Euclidean distance and a preset authenticity verification threshold value.
9. The method of claim 7, wherein prior to the first image segmentation of the image information using the foreground extraction model to extract foreground and remove background patterns, the method further comprises:
judging whether the image information meets the preset resolution and definition requirements or not; and
and re-acquiring the image information of the check in response to the image information not meeting the preset resolution and definition requirements.
10. The method of claim 9, wherein said performing text recognition on said check text area specifically comprises:
Identifying characters or words in the check text area using optical character recognition techniques;
combining the identified characters or words into a text string; and
correcting spelling errors, processing segmentation problems and/or standardizing text formats for the text character strings to obtain text recognition results.
11. The method of claim 10, wherein after the obtaining the verification result of the check seal based on the authenticity verification result, the method further comprises:
and storing the image information, the check foreground image, the check seal area, the check text area and the text recognition result into the check information database according to an encryption protocol in response to the check seal verification result being passed.
12. A check seal verification apparatus, said apparatus comprising:
the acquisition module is used for: collecting image information of checks;
a first image segmentation module for: performing first image segmentation on the image information by utilizing a foreground extraction model to extract a foreground and remove a background pattern to obtain a foreground image of a check, wherein the foreground extraction model is trained by utilizing a U2-Net model and comprises M convolution layers and N pooling layers which are alternately arranged, wherein M and N are positive integers;
A second image segmentation module for: inputting the foreground image of the check into a segmentation model for carrying out second image segmentation to segment the seal and the text region, and obtaining the seal region and the text region of the check;
the target seal picture acquisition module is used for: performing character recognition on the check text area, and acquiring a target seal picture from a check information database based on a character recognition result, wherein the check information database comprises check recognition information stored in advance;
the true and false verification result acquisition module is used for: inputting the check seal area and the target seal picture into a pre-constructed seal identification model for seal authenticity verification, and outputting an authenticity verification result; and
the check seal verification result acquisition module is used for: and acquiring the verification result of the check seal based on the authenticity verification result.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-11.
14. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1-11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 11.
CN202311535772.0A 2023-11-17 2023-11-17 Check seal verification method and device, electronic equipment and medium Pending CN117523586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311535772.0A CN117523586A (en) 2023-11-17 2023-11-17 Check seal verification method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311535772.0A CN117523586A (en) 2023-11-17 2023-11-17 Check seal verification method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117523586A true CN117523586A (en) 2024-02-06

Family

ID=89762203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311535772.0A Pending CN117523586A (en) 2023-11-17 2023-11-17 Check seal verification method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117523586A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118278358A (en) * 2024-05-29 2024-07-02 云南香农信息技术有限公司 Cross-platform PDF online preview and conversion system

Similar Documents

Publication Publication Date Title
Sun et al. Template matching-based method for intelligent invoice information identification
RU2695489C1 (en) Identification of fields on an image using artificial intelligence
US9576221B2 (en) Systems, methods, and devices for image matching and object recognition in images using template image classifiers
CA3154393A1 (en) System and methods for authentication of documents
EP4085369A1 (en) Forgery detection of face image
US20200294130A1 (en) Loan matching system and method
CN111753496B (en) Industry category identification method and device, computer equipment and readable storage medium
CN117523586A (en) Check seal verification method and device, electronic equipment and medium
CN111898544B (en) Text image matching method, device and equipment and computer storage medium
Ji et al. Uncertainty-guided learning for improving image manipulation detection
Wang et al. Open-Set source camera identification based on envelope of data clustering optimization (EDCO)
Nandanwar et al. Forged text detection in video, scene, and document images
Alkhowaiter et al. Evaluating perceptual hashing algorithms in detecting image manipulation over social media platforms
CN114282258A (en) Screen capture data desensitization method and device, computer equipment and storage medium
CN111414889B (en) Financial statement identification method and device based on character identification
CN117195319A (en) Verification method and device for electronic part of file, electronic equipment and medium
CN117115833A (en) Certificate classification method, device, equipment and storage medium
CN116756281A (en) Knowledge question-answering method, device, equipment and medium
CN115035533B (en) Data authentication processing method and device, computer equipment and storage medium
US11935331B2 (en) Methods and systems for real-time electronic verification of content with varying features in data-sparse computer environments
CN116092094A (en) Image text recognition method and device, computer readable medium and electronic equipment
CN111881778B (en) Method, apparatus, device and computer readable medium for text detection
CN114820211B (en) Method, device, computer equipment and storage medium for checking and verifying quality of claim data
CN118366175B (en) Document image classification method based on word frequency
US20230316795A1 (en) Auto-Document Detection & Capture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination