
Document image assessment

Info

Publication number
WO2024145466A1
Authority
WO
WIPO (PCT)
Prior art keywords
document
under test
valid
image
blur
Application number
PCT/US2023/086219
Other languages
French (fr)
Inventor
Stuart Wells
Attila Balogh
Anshuman Vikram SINGH
Thomas Krump
Daryl Huff
Original Assignee
Jumio Corporation
Application filed by Jumio Corporation
Publication of WO2024145466A1


Abstract

The disclosure includes a system and method for determining whether an inconsistency in a set of measures of blur values associated with the document under test is present. The disclosure includes a system and method for generating a set of derived checks based on a set of bounding boxes and generating a document assembly object describing valid instances of the document and the set of derived checks usable to determine validity of a document under test. The disclosure includes a system and method for applying a set of checks including one or more of: a first check determining whether the document holder image in the document under test complies with one or more rules relating to valid document holder images and a second check determining whether the first visible characteristic as described in the document content is consistent with the first visible characteristic as visible in the document holder image.

Description

Document Image Assessment
BACKGROUND
[0001] The present disclosure relates to document verification. More specifically, the present disclosure relates to confirming the authenticity of a document.
[0002] Documents are provided in many contexts. For example, documents may be provided to prove a person’s age or identity, as is the case with identification documents, as proof of ownership, as is the case with documents such as title documents, as proof of authenticity (e.g., a certificate of authenticity), as proof of address, etc. Those contexts may have significant financial, legal, or safety implications.
SUMMARY
[0003] This specification relates to methods and systems for determining, using one or more processors, a first measure of blur value associated with a first portion of a document under test; determining, using the one or more processors, a second measure of blur value associated with a second portion of the document under test; determining, using the one or more processors, whether an inconsistency in a set of measures of blur values associated with the document under test is present, wherein the set of measures of blur values associated with the document under test includes the first measure of blur value and the second measure of blur value; and modifying, using the one or more processors, a likelihood that the document is accepted or rejected based on whether the inconsistency is absent or present, respectively.
[0004] Other implementations of one or more of these aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
[0005] These and other implementations may each optionally include one or more of the following features. For instance, the features include that the first portion of the document under test is associated with a first bounding box generated using optical character recognition, and the second portion of the document under test is associated with a second bounding box generated using optical character recognition. For instance, the features include that an inconsistency exists when a difference between the first measure of blur and the second measure of blur satisfies a threshold. For instance, the features include that the first portion of the document under test is a first character in a first text string and the second portion of the document under test is a second character in the first text string. For instance, the features include that the first portion of the document under test is a first character in a first text string and the second portion of the document under test is a second character in the first text string, determining a third measure of blur associated with the first text string at a field level; determining a fourth measure of blur associated with a second text string at the field level; comparing the third measure of blur and the fourth measure of blur; and determining based on the comparison whether a difference in blur at the field level exists. For instance, the features include that the first portion of the document under test is associated with a first text string and the second portion of the document under test is associated with a second text string. For instance, the features include that the first portion of the document under test is associated with a field label and the second portion of the document under test is a text field associated with the field label. For instance, the features include that the first measure of blur is determined by applying Canny edge detection to the first portion of the document under test and the second measure of blur is determined by applying Canny edge detection to the second portion of the document under test. For instance, the features include that the first measure of blur is determined by applying Laplacian variance detection to the first portion of the document under test and the second measure of blur is determined by applying Laplacian variance detection to the second portion of the document under test. For instance, the features include that the first measure of blur is determined by applying Cepstral techniques to the first portion of the document under test and the second measure of blur is determined by applying Cepstral techniques to the second portion of the document under test.
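By way of a non-limiting illustration only, the blur-consistency comparison described above might be sketched as follows using the variance of the Laplacian as the measure of blur; the region coordinates, metric choice, and threshold below are assumptions for illustration and are not taken from the disclosure.

```python
# Hypothetical sketch of comparing measures of blur between two portions
# of a document under test; assumes OpenCV (cv2) and NumPy are available.
import cv2
import numpy as np

def laplacian_blur_measure(gray_region: np.ndarray) -> float:
    # Variance of the Laplacian: lower variance indicates more blur.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def blur_inconsistency(image: np.ndarray, box_a, box_b, threshold: float = 50.0) -> bool:
    """Return True if two portions (e.g., two OCR bounding boxes) differ
    in sharpness by more than `threshold`, suggesting localized edits."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    blur_a = laplacian_blur_measure(gray[ya:ya + ha, xa:xa + wa])
    blur_b = laplacian_blur_measure(gray[yb:yb + hb, xb:xb + wb])
    return abs(blur_a - blur_b) > threshold
```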
[0006] This application also relates to methods and systems for obtaining a document specification in an electronic format, wherein the document specification is associated with a first document, and describes features present in valid instances of the first document; determining a set of labels describing the first document from the document specification; obtaining one or more digital images of at least one valid instance of the first document from the document specification; obtaining information describing a set of bounding boxes resulting from application, to the one or more images of the at least one valid instance of the first document, of optical character recognition, object detection, or both; generating a set of derived checks based on the set of bounding boxes; and generating a document assembly object describing valid instances of the document and the set of derived checks usable to determine validity of a document under test.
[0007] Other implementations of one or more of these aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
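The disclosure does not fix a serialization for the document assembly object, but since it is described as both human and machine readable (see paragraph [0008]), one hypothetical sketch is a JSON-like structure; every key and value below is an illustrative assumption, not the actual schema.

```python
# Hypothetical document assembly object for the CADL example; coordinates
# are normalized to the rectified document image. Illustrative only.
document_assembly_object = {
    "labels": {"type": "DL", "country": "US", "subtype": "REGULAR-DL",
               "state": "CA", "year": 2018, "tag": 0},
    "bounding_boxes": [
        {"field": "last_name", "x": 0.32, "y": 0.18, "w": 0.25, "h": 0.05},
        {"field": "dob", "x": 0.32, "y": 0.40, "w": 0.20, "h": 0.05},
    ],
    "derived_checks": [
        {"check": "field_present", "field": "dob"},
        {"check": "box_location", "field": "dob", "tolerance": 0.02},
    ],
}
```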
[0008] These and other implementations may each optionally include one or more of the following features. For instance, the features include obtaining a set of test images representing multiple instances of the first document; determining, based on a first derived check in the document assembly object, whether each image in the set of test images is valid with respect to the first derived check or invalid with respect to the first derived check; and adjusting how subsequent determinations are made based on a presence of a false positive or false negative in the determination of a test image with respect to the first derived check. For instance, the features include that adjusting how subsequent determinations are made includes: retraining a machine learning model associated with the derived check to reduce an instance of a false positive or a false negative; adjusting a tolerance; or both retraining the machine learning model associated with the derived check to reduce an instance of a false positive or a false negative and adjusting a tolerance. For instance, the features include obtaining a set of valid document images, wherein each image in the set of valid document images represents a valid instance of the first document; applying pattern recognition to the set of valid document images; generating, based on a first detected pattern, a newly derived check; and adding the newly derived check to the document assembly object. For instance, the features include that the newly derived check is associated with an unpublished security feature present in the first document. For instance, the features include that the pattern recognition identifies a repetition in at least a portion of personally identifiable information (PII) text between two or more bounding boxes associated with a common, valid document instance in the set of valid document images, and wherein the newly derived check, when applied to a document image under test, checks for: (1) whether a bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is present in the document under test; (2) whether the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, in the document under test is in a location consistent with a valid instance of the first document; (3) whether text content of the bounding box repeats an appropriate portion of PII text found elsewhere in the document under test; (4) whether a bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is present in the document under test and whether the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, in the document under test is in a location consistent with a valid instance of the first document; (5) whether the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, in the document under test is in a location consistent with a valid instance of the first document and whether text content of the bounding box repeats an appropriate portion of PII text found elsewhere in the document under test; (6) whether a bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is present in the document under test and whether text content of the bounding box repeats an appropriate portion of PII text found elsewhere in the document under test; or (7) whether a bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is present in the document under test, whether the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, in the document under test is in a location consistent with a valid instance of the first document, and whether text content of the bounding box repeats an appropriate portion of PII text found elsewhere in the document under test. For instance, the features include that the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is a portion of a ghost image. For instance, the features include that the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is indiscernible to a human eye absent magnification. For instance, the features include that the electronic format is one of hypertext markup language and portable document format and is published by a trusted source. For instance, the features include that the document assembly object is human and machine readable.
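As a non-limiting sketch of the PII-repetition derived check enumerated above, the three constituent tests (presence, location consistency, and textual repetition) might be combined as follows; the data shapes and tolerance are illustrative assumptions.

```python
# Hypothetical check: a bounding box known to repeat PII (e.g., part of a
# ghost image) must (1) be present, (2) sit near its expected location,
# and (3) repeat PII text found elsewhere in the document under test.
def check_pii_repetition(ocr_boxes: dict, expected: dict, pii_text: str) -> bool:
    box = ocr_boxes.get(expected["field"])            # (1) presence
    if box is None:
        return False
    if (abs(box["x"] - expected["x"]) > expected["tolerance"]
            or abs(box["y"] - expected["y"]) > expected["tolerance"]):
        return False                                  # (2) location
    return box["text"] in pii_text                    # (3) repetition
```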
[0009] This specification also relates to methods and systems for obtaining, using one or more processors, a document assembly object associated with a document under test subsequent to receiving an electronic image of the document under test, wherein the document assembly object indicates that valid instances of the document under test include a document holder image and document content describing a first visible characteristic of the document holder; automatically obtaining, using one or more processors, the document holder image from the electronic image of the document under test using object detection; automatically obtaining, using the one or more processors, document content describing a first visible characteristic of the document holder from the electronic image of the document under test using optical character recognition, object detection, or both optical character recognition and object detection; applying, using the one or more processors, a set of checks associated with the document assembly object to evaluate the document under test image for validity, the set of checks including: (1) a first check determining whether the document holder image in the document under test complies with one or more rules relating to valid document holder images, as defined in the document assembly object associated with the document under test; (2) a second check determining whether the first visible characteristic as described in the document content is consistent with the first visible characteristic as visible in the document holder image; or (3) the first check determining whether the document holder image in the document under test complies with one or more rules relating to valid document holder images, as defined in the document assembly object associated with the document under test; and the second check determining whether the first visible characteristic as described in the document content is consistent with the first visible characteristic as visible in the document holder image; and further modifying, using the one or more processors, a likelihood that the document under test is accepted or rejected based on the first check, the second check, or both the first check and the second check.
[0010] Other implementations of one or more of these aspects include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
[0011] These and other implementations may each optionally include one or more of the following features. For instance, the features further include that the one or more rules relating to valid document holder images includes a first rule explicitly defined by an issuer of the document in a document specification. For instance, the features further include that the one or more rules relating to valid document holder images includes a first rule inferred from an analysis of a plurality of valid document instances. For instance, the features further include that the document content includes field content. For instance, the features further include that the document content includes a ghost image. For instance, the features further include that the one or more rules relating to valid document holder images include one or more dimensional requirements selected from the set of document holder image height, document holder image width, document holder image aspect ratio, a valid range for document holder head height, a valid range for document holder head width, and a margin. For instance, the features further include that the one or more rules relating to valid document holder images include at least one feature-based requirement, wherein the feature-based requirement is associated with a feature that is either present in, or absent from, the document holder image in valid instances of the document, and wherein a machine learning model is applied to the document holder image in the electronic image of the document under test to determine whether a feature is present or absent. For instance, the features further include that the feature is associated with one or more of headwear, glasses, hair coverage of one or more facial features, background color, presence of an object in a background, facial shadowing, background shadowing, facial expression, eyes being open, and direction of gaze. For instance, the features further include that the first visible characteristic includes one or more of a sex, hair color, eye color, height, weight, a head size ratio, and a head outline of the document holder. For instance, the features further include that one or more machine learning models are used to determine the first visible characteristic in the document holder image, which is compared to field content obtained using optical character recognition.
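As an illustrative sketch only of the first and second checks described above, a dimensional rule from the document assembly object and a consistency comparison between OCR’d field content and a model prediction might look as follows; `detect_face_box` and `predict_eye_color` are hypothetical stand-ins for the object-detection and machine learning models, and the rule values are assumptions.

```python
# Hypothetical application of the first check (document holder image rule)
# and second check (visible characteristic consistency).
def check_holder_image(image, rules, ocr_fields, detect_face_box, predict_eye_color):
    x, y, w, h = detect_face_box(image)  # face box inside the holder image
    # First check: e.g., face height as a proportion of image height must
    # fall within the range encoded in the document assembly object.
    proportion = h / image.shape[0]
    first_ok = rules["min_face_prop"] <= proportion <= rules["max_face_prop"]
    # Second check: eye color predicted from the photo must be consistent
    # with the eye color OCR'd from the document content (e.g., "BRN").
    predicted = predict_eye_color(image[y:y + h, x:x + w])
    second_ok = predicted == ocr_fields.get("eye_color")
    return first_ok, second_ok
```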
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The disclosure is illustrated by way of example, and not by way of limitation in the figures of the accompanying drawings in which like reference numerals are used to refer to similar elements.
[0013] Figure 1 is a block diagram of one example implementation of a system for document evaluation in accordance with some implementations.
[0014] Figure 2 is a block diagram of an example computing device in accordance with some implementations.
[0015] Figure 3 is a block diagram of an example document evaluator in accordance with some implementations.
[0016] Figure 4 is a block diagram of an example document configuration engine in accordance with some implementations.
[0017] Figure 5 is an image of an example of a California Driver’s License, which is an example document in accordance with some implementations.
[0018] Figure 6 is an image of the example California Driver’s License with examples of bounding boxes superimposed in accordance with some implementations.
[0019] Figure 7 is an example illustration of the bounding boxes without the example California Driver’s License in accordance with some implementations.
[0020] Figure 8 is an example document assembly object derived in part from a set of valid instances of a document and describing valid instances of the associated document in accordance with some implementations.
[0021] Figure 9 illustrates example snippets derived from the example California Driver’s License (CADL) in accordance with some implementations.
[0022] Figure 10 illustrates an example representation of a bounding box template included in a document assembly object in accordance with some implementations.
[0023] Figure 11 illustrates an example of microprint from a valid instance of another example CADL in accordance with some implementations.
[0024] Figure 12 illustrates an example document assembly object in accordance with some implementations.
[0025] Figure 13 is a block diagram of an example decision engine in accordance with some implementations.
[0026] Figures 14A and 14B illustrate an example of a CADL under test and portions thereof in accordance with some implementations.
[0027] Figure 15 illustrates examples of an intra-bounding box text evaluation in accordance with some implementations.
[0028] Figure 16 is a block diagram of an example document database in accordance with some implementations.
[0029] Figure 17 is a flowchart of an example method for generating a document assembly object in accordance with some implementations.
[0030] Figure 18 is a flowchart of an example method for processing a request to verify a document under test using a document assembly object in accordance with some implementations.
[0031] Figure 19 is a flowchart of an example method for evaluating a document under test in accordance with some implementations.
[0032] Figure 20 is a flowchart of an example method for determining whether an inconsistency in blur within a portion of the document under test is present in accordance with some implementations.
[0033] Figure 21 is a flowchart of an example method for determining whether an inconsistency in blur between two portions of a document under test is present in accordance with some implementations.
[0034] Figure 22 is a flowchart of an example method for using a plurality of measures of blur values per set of text in accordance with some implementations.
[0035] Figure 23 illustrates a portion of an example document under test in accordance with some implementations.
[0036] Figure 24A illustrates a first example result of Canny edge detection on the example portion of a document under test in accordance with some implementations.
[0037] Figure 24B illustrates a second example result of Canny edge detection on the example portion of a document under test in accordance with some implementations.
[0038] Figure 25 illustrates example results of Laplacian variance, with a threshold applied, on the example portion of a document under test in accordance with some implementations.
[0039] Figure 29 illustrates an example of a portion of a document under test with microprint in accordance with some implementations.
[0040] Figure 30 is a flowchart of another example method for generating a document assembly object in accordance with some implementations.
[0041] Figures 31, 31A, and 31B illustrate an example Italian document specification and portions thereof in accordance with some implementations.
[0042] Figure 32 illustrates an example document assembly object associated with the Italian document in accordance with some implementations.
[0043] Figures 33A and 33B illustrate an example Indian passport with a ghost image under varying levels of magnification in accordance with some implementations.
[0044] Figure 34 illustrates a portion of a CADL from which a derived check may be generated in accordance with some implementations.
[0045] Figure 35 is a flowchart of another example method for using one or more checks based at least in part on a document holder image in accordance with some implementations.
[0046] Figures 36A and 36B illustrate examples of document holder image specifications provided by document issuers in accordance with some implementations.
[0047] Figure 37 illustrates examples of valid and invalid document holder images in accordance with some implementations.
DETAILED DESCRIPTION
[0048] The present disclosure is described in the context of an example document evaluator and use cases; however, those skilled in the art should recognize that the document evaluator may be applied to other environments and use cases without departing from the disclosure herein.
[0049] Documents are provided in many contexts. For example, documents may be provided to prove a person’s age or identity, as is the case with identification documents, as proof of ownership, as is the case with documents such as title documents, as proof of authenticity (e.g., a certificate of authenticity), etc. Those contexts may have significant financial, legal, or safety implications. For example, documents may be provided to confirm an identity of a user prior to a financial transaction. If an invalid document is accepted and used for identification, identity theft, circumvention of sanctions, watchlists, or anti-money laundering mechanisms may occur.
[0050] Accordingly, it is desirable to verify a document, particularly before that document is relied upon, for example, as a reference for a comparison between an attribute (e.g., a biometric such as a signature, voice, face, retina, palm print, fingerprint, etc.) of a person present and the document.
[0051] A user wishing to establish his/her identity with an entity, e.g., a government agency or a commercial enterprise, may be asked to submit an image of a document through the entity’s application on his/her mobile phone or through the entity’s portal on a web browser. The entity may, depending on the implementation, request verification of the document by the document evaluation systems and methods described herein.
[0052] Fraudsters may leverage technology to automate a series of repeated, fraudulent attempts to mislead an entity until a successful vector of attack is discovered, and their attacks may become increasingly more sophisticated (e.g., using photo editing software, such as Photoshop to modify images of valid documents to create fake/invalid documents, such as fake IDs). The document evaluator 226 described herein may beneficially detect such fraudulent documents.
[0053] Figure 1 is a block diagram of an example system 100 for document evaluation in accordance with some implementations. As depicted, the system 100 includes a server 122 and a client device 106 coupled for electronic communication via a network 102.
[0054] The client device 106 is a computing device that includes a processor, a memory, and network communication capabilities (e.g., a communication unit). The client device 106 is coupled for electronic communication to the network 102 as illustrated by signal line 114. In some implementations, the client device 106 may send and receive data to and from other entities of the system 100 (e.g., a server 122). Examples of client devices 106 may include, but are not limited to, mobile phones (e.g., feature phones, smart phones, etc.), tablets, laptops, desktops, netbooks, portable media players, personal digital assistants, etc.
[0055] Although a single client device 106 is shown in Figure 1, it should be understood that there may be any number of client devices 106. It should be understood that the system 100 depicted in Figure 1 is provided by way of example and the system 100 and/or further systems contemplated by this present disclosure may include additional and/or fewer components, may combine components and/or divide one or more of the components into additional components, etc. For example, the system 100 may include any number of client devices 106, networks 102, or servers 122.
[0056] The network 102 may be a conventional type, wired and/or wireless, and may have numerous different configurations including a star configuration, token ring configuration, or other configurations. For example, the network 102 may include one or more local area networks (LAN), wide area networks (WAN) (e.g., the Internet), personal area networks (PAN), public networks, private networks, virtual networks, virtual private networks, peer-to-peer networks, near field networks (e.g., Bluetooth®, NFC, etc.), cellular (e.g., 4G or 5G), and/or other interconnected data paths across which multiple devices may communicate.
[0057] The server 122 is a computing device that includes a hardware and/or virtual server that includes a processor, a memory, and network communication capabilities (e.g., a communication unit). The server 122 may be communicatively coupled to the network 102, as indicated by signal line 116. In some implementations, the server 122 may send and receive data to and from other entities of the system 100 (e.g., one or more client devices 106).
[0058] Other variations and/or combinations are also possible and contemplated. It should be understood that the system 100 illustrated in Figure 1 is representative of an example system and that a variety of different system environments and configurations are contemplated and are within the scope of the present disclosure. For example, various acts and/or functionality described herein may be moved from a server to a client, or vice versa, data may be consolidated into a single data store or further segmented into additional data stores, and some implementations may include additional or fewer computing devices, services, and/or networks, and may implement various functionality client or server-side. Furthermore, various entities of the system may be integrated into a single computing device or system or divided into additional computing devices or systems, etc.
[0059] For example, as depicted, the server 122 includes an instance of the document evaluator 226. However, in some implementations, the components and functionality of the document evaluator 226 may be entirely client-side (e.g., at client device 106; not shown), entirely server-side (i.e., at server 122, as shown), or divided among the client device 106 and server 122.
[0060] Figure 2 is a block diagram of an example computing device 200 including an instance of the document evaluator 226. In the illustrated example, the computing device 200 includes a processor 202, a memory 204, a communication unit 208, an optional display device 210, and a data storage 214. In some implementations, the computing device 200 is a server 122, the memory 204 stores the document evaluator 226, and the communication unit 208 is communicatively coupled to the network 102 via signal line 116. In some implementations, the computing device 200 is a client device 106, which may occasionally be referred to herein as a user device, and the client device 106 optionally includes at least one sensor (not shown), and the communication unit 208 is communicatively coupled to the network 102 via signal line 114.
[0061] The processor 202 may execute software instructions by performing various input/output, logical, and/or mathematical operations. The processor 202 may have various computing architectures to process data signals including, for example, a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, and/or an architecture implementing a combination of instruction sets. The processor 202 may be physical and/or virtual and may include a single processing unit or a plurality of processing units and/or cores. In some implementations, the processor 202 may be capable of generating and providing electronic display signals to a display device, supporting the display of images, capturing and transmitting images, and performing complex tasks and determinations. In some implementations, the processor 202 may be coupled to the memory 204 via the bus 206 to access data and instructions therefrom and store data therein. The bus 206 may couple the processor 202 to the other components of the computing device 200 including, for example, the memory 204 and the communication unit 208.
[0062] The memory 204 may store and provide access to data for the other components of the computing device. The memory 204 may be included in a single computing device or distributed among a plurality of computing devices. In some implementations, the memory 204 may store instructions and/or data that may be executed by the processor 202. The instructions and/or data may include code for performing the techniques described herein. For example, in one implementation, the memory 204 may store an instance of the document evaluator 226. The memory 204 is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc. The memory 204 may be coupled to the bus 206 for communication with the processor 202 and the other components of the computing device 200.
[0063] The memory 204 may include one or more non-transitory computer-usable (e.g., readable, writeable) devices, such as a static random access memory (SRAM) device, a dynamic random access memory (DRAM) device, an embedded memory device, a discrete memory device (e.g., a PROM, FPROM, ROM), a hard disk drive, or an optical disk drive (CD, DVD, Blu-ray™, etc.) medium, which can be any tangible apparatus or device that can contain, store, communicate, or transport instructions, data, computer programs, software, code, routines, etc., for processing by or in connection with the processor 202. In some implementations, the memory 204 may include one or more of volatile memory and non-volatile memory. It should be understood that the memory 204 may be a single device or may include multiple types of devices and configurations. In some implementations, the memory 204 stores a document database 242. In some implementations, the document database 242 is stored on a portion of the memory 204 comprising a network accessible storage device.
[0064] The communication unit 208 is hardware for receiving and transmitting data by linking the processor 202 to the network 102 and other processing systems. The communication unit 208 receives data and transmits the data via the network 102. The communication unit 208 is coupled to the bus 206. In one implementation, the communication unit 208 may include a port for direct physical connection to the network 102 or to another communication channel. For example, the computing device 200 may be the server 122, and the communication unit 208 may include an RJ45 port or similar port for wired communication with the network 102. In another implementation, the communication unit 208 may include a wireless transceiver (not shown) for exchanging data with the network 102 or any other communication channel using one or more wireless communication methods, such as IEEE 802.11, IEEE 802.16, Bluetooth® or another suitable wireless communication method.
[0065] In yet another implementation, the communication unit 208 may include a cellular communications transceiver for sending and receiving data over a cellular communications network such as via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, WAP, e-mail or another suitable type of electronic communication. In still another implementation, the communication unit 208 may include a wired port and a wireless transceiver. The communication unit 208 also provides other connections to the network 102 for distribution of files and/or media objects using standard network protocols such as TCP/IP, HTTP, HTTPS, and SMTP as will be understood to those skilled in the art.
[0066] The display device 210 is a conventional type such as a liquid crystal display (LCD), light emitting diode (LED), touchscreen, or any other similarly equipped display device, screen, or monitor. The display device 210 represents any device equipped to display electronic images and data as described herein. In some implementations, the display device 210 is optional and may be omitted.
[0067] It should be apparent to one skilled in the art that other processors, operating systems, inputs (e.g., keyboard, mouse, one or more sensors, etc.), outputs (e.g., a speaker, display, haptic motor, etc.), and physical configurations are possible and within the scope of the disclosure.
[0068] In some implementations, the document evaluator 226 provides the features and functionalities described below responsive to a request, for example, a request on behalf of an entity (not shown) to evaluate an image of a document. In some implementations, the evaluation of the document determines whether the document is accepted (e.g., determined to be valid) or rejected (e.g., invalid, abused, modified, fraudulent, etc.).
[0069] Referring now to Figure 3, a block diagram of an example document evaluator 226 is illustrated in accordance with one implementation. As illustrated in Figure 3, in some implementations, the document evaluator 226 may include an image preprocessor 302, a document configurator 304, an optical character recognition (OCR) engine 306, an object detection engine 308, and a decision engine 310. The components 302, 304, 306, 308, and 310, subcomponents, sub-subcomponents, etc. thereof are communicatively coupled to one another and/or to the document database 242 to perform the features and functionalities described herein.
[0070] In some implementations, the image preprocessor 302 receives one or more images representing a document, also referred to occasionally as an image of a document or a document image, and preprocesses the one or more document images to generate a set of post-processed images of the document for subsequent use by one or more of the other components of the document evaluator 226. The image preprocessor 302 is communicatively coupled to receive the one or more document images (e.g., from a camera sensor on the client device 106 via a web browser, mobile application, or API and the network 102).
[0071] The preprocessing performed by the image preprocessor 302, and accordingly the set of post-processed images generated, may vary depending on the implementation and use case. Examples of preprocessing performed by the image preprocessor 302 may include one or more of document extraction, rectification, composite image generation, edge detection, etc. In some implementations, the image preprocessor 302 may extract the portion of the image depicting the document (e.g., from the background or surrounding environment). In some implementations, the image preprocessor 302 may rectify the image data, or a portion thereof, by performing one or more of a rotation, a translation, and a de-skew. For example, in some implementations, the image preprocessor 302 determines the polygon associated with a document portion within the image and rotates and de-skews the polygon, e.g., to generate a normalized, rectangular representation of the document.
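As a minimal sketch, assuming the document extraction step yields the four corners of the document polygon, the rotation/translation/de-skew described above can be realized with a perspective warp; the corner ordering and output size are assumptions for illustration.

```python
# Illustrative rectification of a detected document polygon into a
# normalized rectangle (output size approximates an ID-1 card).
import cv2
import numpy as np

def rectify(image: np.ndarray, corners, out_w: int = 856, out_h: int = 540):
    # `corners`: four (x, y) points ordered top-left, top-right,
    # bottom-right, bottom-left.
    target = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    matrix = cv2.getPerspectiveTransform(np.float32(corners), target)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))
```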
[0072] In some implementations, the image preprocessor 302 may receive multiple images of the same document instance (e.g., multiple frames from a video clip recording an identification document) and generate a composite image based on the multiple images. For example, some documents, such as government issued identification documents, may have optically dynamic security features such as color shifting ink, hologram, kinegrams, etc., which may not be represented in a single image. In some implementations, the image preprocessor 302 may make a composite document image that represents the optically dynamic security feature, when present, so that the document evaluator 226 may use those optically dynamic security features, or their absence, in the evaluation.
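One hypothetical way to build such a composite, assuming the frames have already been rectified and aligned, is a per-pixel maximum across frames, which tends to retain the transient highlights of color-shifting or holographic features that any single frame may miss; this is a sketch, not the disclosed method.

```python
# Illustrative composite of aligned frames from a short video of the
# document; the per-pixel max preserves optically dynamic highlights.
import numpy as np

def composite(frames: list) -> np.ndarray:
    stacked = np.stack(frames, axis=0)  # shape: (n_frames, H, W, 3)
    return stacked.max(axis=0).astype(np.uint8)
```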
[0073] In some implementations, the image preprocessor 302 may perform other image processing on a document image or snippets thereof. For example, in some implementations, the image preprocessor 302 may perform portions of the image processing described below with reference to Figure 13 and the blur determiner 1346.
[0074] In some implementations, the image preprocessor 302 may perform edge detection. For example, some documents, such as government issued identification documents, may include a watermark which may not be captured under normal conditions (e.g., because the user’s client device 106 and associated camera does not have/use a UV light source, backlighting with sufficient light to show a watermark is problematic for a user to capture, etc.). In some implementations, the image preprocessor 302 may perform an edge detection, such as a Canny edge detection, to identify edges associated with a border of a watermark. In some implementations, the edge detection may be used, or applied, by the object detection engine 308 to detect missing or partially occluded security features, e.g., microprint, hologram, ghost image, watermark, etc.
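As a brief sketch of the watermark-border case, Canny edge detection over a grayscale image can surface faint edges; the thresholds below are illustrative assumptions that would presumably be tuned per document class.

```python
# Illustrative Canny edge detection for a faint watermark border.
import cv2

def watermark_edges(gray_image):
    return cv2.Canny(gray_image, threshold1=30, threshold2=90)
```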
[0075] In some implementations, a subset of the preprocessing performed by the image preprocessor 302 may be conditional based on a classification of the document. For example, in some implementations, the image preprocessor may extract the document portion and perform rectification for a document image. A document classifier may identify the document (e.g., a CA Driver’s License issued in 2022), and the image preprocessor 302 may perform edge detection and/or composite image generation based on whether valid instances of that document class include a watermark or optically dynamic security feature.
[0076] In some implementations, the set of post-processed document images includes one or more of a rectified image, a composite document image, and an output of an edge detection. In some implementations, the image preprocessor 302 communicates the set of one or more post-processed document images to, or stores the set (e.g., in the document database 242) for retrieval by, one or more of the document configurator 304, the object detection engine 308, and the decision engine 310. In some implementations, the features and functionalities of one or more of the document configurator 304, the object detection engine 308, and the decision engine 310 described below with reference to a valid sample, image under test, or document image operate on post-processed version(s) of the referenced document image (e.g., valid/invalid sample image or image of the document under test).
[0077] The document configurator 304 generates a document assembly object describing a valid document. In some implementations, the document assembly object describing a valid document is used by the decision engine 310, described further below, to at least partially determine whether a document under test is valid. In some implementations, the document configurator 304, by generating the document assembly object describing a particular valid document, adds support for a new document, where the new document may be a newly issued document (e.g., new driver’s license) or a previously unsupported document (e.g., an identification document not previously supported by the document evaluator 226 and/or its decision engine 310).
[0078] It should be recognized that for artificial intelligence/machine learning many instances (e.g., hundreds or thousands) are needed to train a reliably accurate model. In cases such as a newly issued document (e.g., a new driver’s license), this poses a challenge sometimes referred to as the “cold start problem,” i.e., there is not enough data (e.g., valid and/or invalid instances of that document) to begin training a reliable model. Additionally, when sufficient data is available, it may take weeks or months to train, validate, and optimize an AI/ML model. This delay may result in suboptimal outcomes such as not supporting the document, which may anger and frustrate users or customers, as their valid documents cannot be used and they may not have an alternative, supported document available.
[0079] In some implementations, the document evaluator 226, by using a document assembly object generated by the document configurator 304 using as few as three valid samples in some implementations, may quickly support and process a new document, thereby reducing or eliminating the cold start problem and substantially shortening the time to add support for a new document. In some implementations, the number of valid samples may be subsequently or iteratively supplemented, e.g., to better account for variations in the document issuer’s printing equipment (e.g., standard deviations in alignment and/or spacing).
[0080] For clarity and convenience, the description herein makes repeated reference to documents that are government issued identification documents, such as ID cards, voter cards, passports, driver’s licenses, visas, etc., and more particularly to an example California Driver’s License (CADL) 500 depicted in Figure 5, sometimes referred to herein as the CADL example. However, it should be recognized that other documents exist and may be supported by the system 100. For example, financial documents (e.g., check, bearer bonds, stock certificates, bills, etc.) or other documents may be supported and evaluated by the system 100.
[0081] Referring now to Figure 4, a block diagram of an example document configurator 304 is illustrated in accordance with some implementations. As illustrated in Figure 4, the document configurator 304 includes a sample obtainer 402, a document class labeler 404, an issuer information encoder 406, and a derived information encoder 408.
[0082] The sample obtainer 402 obtains a set of one or more valid samples of a document, where a valid sample includes an image of a valid instance of a document. For example, a valid sample may be an example published by the document’s issuer or other verified document instances. Referring now to Figure 5, an image of an example of a CADL 500 published by its issuer, the California Department of Motor Vehicles (DMV), is illustrated. The illustrated example, despite indicating an issue date 1440 of “08/31/2009”, is an example of a CADL that started being issued January 22, 2018. Referring to Figure 5, in some implementations, the sample obtainer 402 may obtain the illustrated CADL 500, e.g., directly from the issuer’s website or other electronic publication, as a valid sample.
[0083] An issuer’s electronic publication is merely one example of a potential source of one or more valid samples. Valid samples may be obtained from different or additional sources depending on the implementation and use case. For example, in some implementations, a valid sample or set of valid samples may be obtained from a manual/human review (e.g., in the case of a newly issued ID) and/or from the decision engine 310 (e.g., as more instances of the document are processed and valid instances are determined by the decision engine 310). In some implementations, the image of a valid instance of a document may be a postprocessed image of a valid instance of that document obtained via the image preprocessor 302.
[0084] Referring again to Figure 4, the document class labeler 404 obtains a set of labels describing the document and associates the set of labels with the document assembly object describing a particular (e.g., a new) valid document, which is being generated by the document configurator 304.
[0085] In some implementations, the document class labeler 404 may obtain labels describing one or more of the document and the document’s issuer. For example, in some implementations, the document class labeler 404 may obtain labels describing one or more of a document type (e.g., ID card, passport, driver’s license, or visa), a country code (e.g., associated with the issuer, such as US for the CA Driver’s License), a subtype (e.g., national ID, resident permit, work permit, voter card, driver’s license in the case of the CADL example, etc.), an edition (e.g., a resident permit may have citizen and non-citizen versions, a driver’s license may have commercial or regular editions instead of different endorsements, or an ID may have different versions for an adult and a minor, etc.), a state (e.g., CA in the case of the CADL example), a time of first issuance of the document (e.g., January 22, 2018 or just the year 2018 in the CADL example, depending on the implementation), and a tag (e.g., a version number if multiple versions are released in the same year). In some implementations, the document class labeler 404 may associate, or assign, a document identifier, e.g., a unique number assigned sequentially, to the document assembly object. In some implementations, the document class labeler 404 may receive a parent identifier, where the parent identifier identifies a parent document, and the “child” may inherit at least a portion of the document assembly object.
[0086] In some implementations, the document class labeler 404 obtains one or more labels based on user input. For example, in some implementations, the document class labeler 404 presents a user interface and a user selects one or more of the document identifier, a parent identifier, the type (e.g., ID type), the country code, subtype, state, year, tag, etc. via the user interface (e.g., via selection in a drop-down menu or entry into a field). In some implementations, the document class labeler 404 obtains one or more labels based on a source of a valid document sample. For example, the document class labeler 404 obtains the labels DL_US_REGULAR-DL_CA_2018_0 for the CADL example when the valid sample is obtained from the CA Department of Motor Vehicles, which issues driver’s licenses in the US state of California, and the CADL example began issuing in 2018 and was the only version issued that year.
[0087] In some implementations, the set of labels is consistent with an output of a document classifier. For example, during production, when the document evaluator 226 receives an image of a document under test, the decision engine 310 may, in some implementations, use a document classifier 1302, described below in Figure 13, to identify the class of the document under test. In some implementations, that identified class is represented by a document class label, and that document class label may be used by the decision engine 310 to obtain a document assembly object associated with the corresponding label. For example, in some implementations, the document classifier may output a class including a concatenation of labels, such as DL_US_REGULAR-DL_CA_2018_0 (i.e., ID type, country, subtype, state, year, tag), which may uniquely identify a document assembly object describing valid instances of the document being evaluated under test. It should be recognized that the preceding is merely an example of an output class and the labels and order of those labels comprising the output class may vary based on the document and implementation.
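As an illustrative sketch of the label convention in the example above (ID type, country, subtype, state, year, tag joined by underscores), composition and parsing might look as follows; the helper names are hypothetical, and parsing assumes the subtype may contain hyphens but not underscores.

```python
# Hypothetical composition/parsing of a class label such as
# "DL_US_REGULAR-DL_CA_2018_0".
def compose_label(id_type, country, subtype, state, year, tag):
    return "_".join([id_type, country, subtype, state, str(year), str(tag)])

def parse_label(label: str) -> dict:
    id_type, country, subtype, state, year, tag = label.split("_")
    return {"type": id_type, "country": country, "subtype": subtype,
            "state": state, "year": int(year), "tag": int(tag)}
```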
[0088] The issuer information encoder 406 obtains information provided by a document issuer and encodes the issuer provided information into the document assembly object. In some implementations, the issuer provided information encoded into the document assembly object includes one or more of a set of document components and a set of direct checks. Examples of document components may include, but are not limited to, whether a photograph is present (e.g., in the case of a photo ID), fields (e.g., first name, last name, address, gender, date of birth, issue date, expiry date, ID number, endorsements, restrictions, class, etc.), field labels (e.g., “HGT” for height, “WGT” for weight, etc.), what fields are optional and mandatory, what the available options are (e.g., the available set of abbreviations used for eye color, hair color, sex, etc.), the security features (e.g., presence of a watermark or optically dynamic security feature such as a hologram or kinegram), etc.
[0089] Examples of direct checks may include, but are not limited to, checking the presence, or absence, of issuer-specified-mandatory components (e.g., field(s) and/or security feature(s), such as laser perforations); checking an issuer prescribed rule, e.g., to ensure that a driver’s license number has one or more of a valid format, composition, or length, or falls within a range specified by the issuer; etc.
[0090] In some implementations, the issuer information encoder 406 may automatically obtain and/or encode the issuer provided information. For example, in some implementations, the issuer information encoder 406 may crawl issuer websites (e.g., including the CA DMV’s website) for electronic publications of new documents and associated technical documentation, parse the technical documentation, and encode the set of document components and direct checks extracted therefrom.
[0091] In some implementations, the issuer information encoder 406 may obtain a specification related to a document holder image, when the document includes an image of the document holder, as is the case with many passports, driver’s licenses, and other forms of photo ID, and encode one or more checks associated therewith. For example, referring to Figure 36A, an example of a portion of a Netherlands passport document specification associated with the passport holder’s image is available online and illustrated. As another example, referring to Figure 36B, an example portion of a UK passport document specification associated with the passport holder’s image is available online and illustrated. As still another example, referring to Figure 37, examples of valid and invalid document holder images based on UK passport guidance are illustrated. For example, the UK passport issuer permits religious headwear, an example of which is shown in image 3702 and is indicated at 3722, but does not permit fashion hair accessories, an example of which is shown in image 3704 and is indicated at 3724, or fashion headwear, an example of which is shown in image 3706 and is indicated at 3726. The UK passport permits plain light-colored backgrounds, an example of which is shown in image 3708 and is indicated at 3728, but does not permit a textured background, an example of which is shown in image 3710 and is indicated at 3730, or objects in the background, an example of which is shown in image 3712 and is indicated at 3732. The UK passport requires that the face be fully visible, an example of which is shown in image 3732 and is indicated at 3752, and does not permit glasses covering the eyes, an example of which is shown in image 3734 and is indicated at 3754, or hair covering the eyes, an example of which is shown in image 3736 and is indicated at 3756. The UK passport requires even lighting and no shadow, an example of which is shown in image 3738 and is indicated at 3758, and does not permit shadow on the face, an example of which is shown in image 3740 and is indicated at 3760, or a shadow in the background, an example of which is shown in image 3742 and is indicated at 3762.
[0092] The foregoing are merely examples of document specifications relating to a document holder image and are not intended to be comprehensive. It should be recognized that other features may be permitted, required, not permitted, or forbidden, depending on the issuer and document. It should further be recognized that variations exist from issuer to issuer, even for a common requirement. For example, the dimensional requirements relating to the size of the face within the document holder image vary between the Netherlands, as illustrated at Figure 36A, and the UK, as illustrated at Figure 36B. As another example, while the UK requires only that the hair not cover the user’s face, the Netherlands may require that hair be tucked behind the ears, so the ears are visible. As another example, another jurisdiction may permit glasses provided that the glasses do not have glare. The document evaluator 226 may encode a document-specific set of checks in the document assembly object and apply the associated set of checks to a document under test. Therefore, while a human security officer may not be aware of the differences in document holder image rules, sometimes also referred to as “requirements,” (e.g., facial dimension requirements, whether the ears need to be visible, whether glasses are permitted, etc.) between various documents, or is unable to detect a deviation (e.g., a mm or fraction of a mm difference in facial size), the document evaluator 226, according to some implementations, may be able to do so, thereby identifying invalid documents/fraud undetected, or undetectable, by humans.
[0093] In some implementations, the issuer information encoder 406 encodes one or more checks related to the document holder image as derived checks. For example, the issuer information encoder 406 encodes a check for one or more of the dimensional requirements associated with Figure 36A in a document assembly object associated with the associated version of the Netherlands passport and/or one or more of the dimensional requirements associated with Figure 36B in a document assembly object associated with the associated version of the UK passport.
[0094] As used herein, a check may, depending on the implementation, be explicit or implicit. For example, an explicit check may include “confirm height of face is between 32 and 36 mm.” In some implementations, an implicit check may be represented by a set of information in a document assembly object that a document under test can be checked against. For example, an acceptable face height range of between 32 mm and 36 mm, or the max face height: 36 mm and min face height: 32 mm, or glasses permitted: False may be encoded in the document assembly object.
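A minimal sketch of the implicit-check idea, assuming hypothetical key names: the document assembly object stores only the acceptable values, and a generic evaluator compares a measured value against them.

```python
# Illustrative encoding of implicit checks and a generic range evaluator.
rules = {"face_height_mm": {"min": 32, "max": 36}, "glasses_permitted": False}

def within_range(rules: dict, key: str, measured: float) -> bool:
    rule = rules[key]
    return rule["min"] <= measured <= rule["max"]

# e.g., within_range(rules, "face_height_mm", 34.5) -> True
```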
[0095] In some implementations, information from a document specification may be encoded as a direct check. In some implementations, information from the specification may be encoded as a derived check by the derived information encoder 408, described below. For example, in some implementations, in addition to or instead of a derived check to determine whether the document holder image is 45 mm tall and 35 mm wide and that the face height is between 32 mm and 36 mm tall, the derived information encoder 408 may generate a derived check for the aspect ratio for the document holder image (based on the 45 mm tall and 35 mm wide specification) and an acceptable proportion range for the face (e.g., face height is between 72% and 80% of the image height, based on the 32-36 mm range and 45 mm image height). As another example, a human head typically has an aspect ratio of 3 to 4 (width to height). In some implementations, the ratio or range of ratios typical of a human head may be derived from images and used to check the document holder image in the document under test and catch instances where a nefarious user stretches or squishes (e.g., horizontally or vertically) a facial image to fit in the space in the document.
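A sketch of the derived proportion check above: a 32-36 mm face in a 45 mm tall image corresponds to roughly 71%-80% of image height (the disclosure cites 72%-80%); working in pixels makes the check independent of print scale. Illustrative only.

```python
# Illustrative proportion check derived from the 32-36 mm / 45 mm spec.
def face_proportion_ok(face_height_px: float, image_height_px: float,
                       lo: float = 32 / 45, hi: float = 36 / 45) -> bool:
    return lo <= face_height_px / image_height_px <= hi
```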
[0096] The derived information encoder 408 derives information describing valid instances based on one or more valid sample images (e.g., post-processed) and encodes the derived information into the document assembly object. In some implementations, the derived information encoded into the document assembly object includes one or more of a set of document features and a set of derived checks. [0097] In some implementations, derived information may refer to information not explicitly provided by an issuer in technical documentation. In some implementations, the derived information and/or derived security checks may be initially based on valid instances and be modified or supplemented based on subsequent valid and/or fraudulent samples. In some implementations, the direct checks and indirect checks in combination may determine whether any security feature has been violated in any way.
[0098] For example, while examples of prescribed dimensional requirements and feature-based requirements related to document holder images are described above and in reference to Figures 36A, 36B, and 37, not all document issuers may publish, or explicitly define, those requirements, even when such requirements exist. In some implementations, the derived information encoder 408 may derive one or more checks associated with a document holder image, such as those discussed with reference to Figures 36A, 36B, and 37. For example, the derived information encoder 408 may analyze a plurality of document holder images (e.g., using a bounding box determined by the object detector 308) from valid document instances, and determine a set of derived checks associated with one or more of a dimensional requirement related to the document holder image or a face included therein and a feature-based requirement. Examples of dimensional requirements include, but are not limited to, document holder image height, document holder image width, document holder image aspect ratio, a valid range for document holder head height, a valid range for document holder head width, and a margin.
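The following is a hedged sketch of how such a dimensional requirement might be derived from measurements taken off a set of valid samples; the padding factor and the sample values are assumptions for illustration only.

```python
# Sketch: infer a valid head-height range (as a fraction of document holder
# image height) from valid samples. The 5% padding is an assumed tolerance,
# not a value from any issuer specification.
import statistics

def derive_range(measurements: list[float], pad: float = 0.05) -> dict:
    lo, hi = min(measurements), max(measurements)
    spread = hi - lo
    return {
        "min": lo - pad * spread,   # widen slightly beyond observed extremes
        "max": hi + pad * spread,
        "mean": statistics.mean(measurements),
    }

# Head height fractions measured from five hypothetical valid samples.
valid_head_fractions = [0.74, 0.76, 0.75, 0.78, 0.73]
derived_check = derive_range(valid_head_fractions)
print(derived_check)  # {'min': 0.7275, 'max': 0.7825, 'mean': 0.752}
```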
[0099] Examples of feature-based requirements include, but are not limited to, the presence or absence of one or more of: (A) headwear or a certain type of headwear, (B) glasses, (C) hair coverage of one or more facial features (e.g., an ear or an eye), (D) background color, (E) presence of an object in the background, (F) facial shadowing, (G) background shadowing, (H) facial expression (e.g., mouth closed, smiling if permitted, or neutral expression if required), (I) eyes being open, and (J) direction of gaze. Accordingly, example combinations of feature-based requirements may include, but are not limited to, any combination of one or more of requirements (A) through (J), e.g., A; A and B; A and C; B, C, and D; A, B, C, and D; and so on, up to and including all ten of requirements (A) through (J) in combination.
[0100] In some implementations, the derived information encoder 408 includes a bounding box obtainer 412, a templating engine 414, and a background/microprint reconstructor 416. The bounding box obtainer 412 receives information regarding the one or more bounding boxes generated by one or more of the optical character recognition (OCR) engine 306 and the object detection engine 308.
[0101] Referring to Figure 3, it should be understood that, while text/characters may be detectable objects, this description generally refers separately to the detection of text and textual characters with reference to the OCR engine 306 and other objects (e.g., holograms, seals, watermarks, laser perforations, etc., which if present, absent, or occluded may indicate tampering) with reference to the object detection engine 308 for clarity and convenience. Depending on the implementation, only textual characters, only other objects, or a combination of textual characters and other objects may be bound in box(es) and used to evaluate a document. It should further be understood that the use of bounding boxes may reduce the area of the document being processed to the area(s) likely to be tampered with, thereby reducing the amount of processing without, or with minimal, loss in accuracy.
[0102] It should be understood that, while a single OCR engine 306 and a single object detection engine 308 are illustrated in Figure 3, different implementations may use one or more OCR engines 306 and/or one or more object detectors 308. For example, in some implementations, the OCR engine 306 represents a bank of multiple OCR engines with different detection qualities. As another example, in some implementations, the object detection engine 308 includes multiple different object detectors (e.g., a first object detector for detecting holes such as a punch or laser perforation, a second object detector for detecting a facial image in a photo ID, etc.).
[0103] The document evaluator 226 includes one or more of an OCR engine 306 and an object detection engine 308 according to some implementations. In some implementations, the OCR engine 306 and/or the object detection engine 308 are executed, at the request of the document configurator 304, during configuration and provide at least a subset of the derived information describing valid instances of a document (e.g., the valid CADL 500 example shown in Figure 5), which the derived information encoder 408 encodes into the document assembly object associated with that class of document. In some implementations, the OCR engine 306 and/or object detection engine 308 are executed at the request of the decision engine 310 during production and provide information derived from the image of the document under test (e.g., the post-processed image of the document under test) for comparison, by the decision engine 310, to the document assembly object associated with that class of document.
[0104] The OCR engine 306 converts text in an image into machine-readable text. In some implementations, when the OCR engine 306 executes, the presence of text is recognized in the input image (e.g., a valid sample during configuration or a document under test image during production) and bounding boxes are generated around that text. In some implementations, derived information describing these bounding boxes, which enclose text in the input image, is made accessible to one or more of the document configurator 304 (e.g., when the document image is of a valid sample) and the decision engine 310 (e.g., when the document image is of a document under test). In some implementations, the OCR engine 306 derives information describing one or more of a size, position, orientation (e.g., horizontal or vertical), and textual content of each bounding box. For example, the size and position of the bounding box around the DL number could be represented by a set of coordinates associated with the four vertices of the bounding box, and the content could be represented as "11234568."
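For illustration, one plausible representation of such derived bounding-box information is sketched below; the class and field names are hypothetical rather than an actual data model, and the coordinates are illustrative.

```python
# Sketch of a derived bounding-box record per paragraph [0104]: four vertices
# plus the recognized content.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    vertices: list[tuple[int, int]]  # (x, y) per corner, listed clockwise
    content: str                     # recognized text (or detected object label)

    @property
    def width(self) -> int:
        xs = [x for x, _ in self.vertices]
        return max(xs) - min(xs)

    @property
    def height(self) -> int:
        ys = [y for _, y in self.vertices]
        return max(ys) - min(ys)

# The DL-number example from the text, with illustrative coordinates.
dl_number = BoundingBox(
    vertices=[(120, 310), (260, 310), (260, 334), (120, 334)],
    content="11234568",
)
print(dl_number.width, dl_number.height)  # 140 24
```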
[0105] It should be understood that other representations are within the scope of this description (e.g., the center, width, and height of the bounding box instead of the vertices/corners; the description of the content may include other or additional information than the text, such as font characteristics including one or more of a font such as "Arial," a font size such as 10 pt., a font style such as bold, capitalization such as use of all caps or small caps, etc.). It should further be understood that, while the description herein refers to bounding boxes that are quadrilateral with four vertices, a bounding box may be any shape with any number of vertices.
[0106] Referring to Figure 6, the CADL 500 of Figure 5 is illustrated with examples of bounding boxes superimposed, in accordance with some implementations. In some implementations, the OCR engine 306 generates the illustrated bounding boxes including bounding boxes 602, 604, 606, 608, 610, 612, 614, 616, 618, and 620. Referring to Figure 7, the bounding boxes of Figure 6 are illustrated without the background of the CADL 500 example of Figure 5, in accordance with some implementations. Referring to Figure 8, an example of derived information describing a subset of the bounding boxes illustrated in Figures 6 and 7, which may be generated by the OCR engine 306, is shown in accordance with some implementations. For example, portion 802 describes the bounding box 602 shown in Figures 6 and 7. In the illustrated implementation, portion 802 textually describes the textual content (i.e., "'description': 'California'") and the size and position of the polygon associated with the bounding box (i.e., the x and y coordinates of the four vertices, where the x and y axes and associated labels may be seen in Figures 6 and 7). Similarly, portion 804 describes bounding box 604 in Figures 6 and 7, portion 806 describes bounding box 606, and so on. It should be understood that, while portions 802-814 corresponding to bounding boxes 602-614 are illustrated as examples in Figure 8, additional portions (not shown) describing the other bounding boxes of Figures 6 and 7 may be generated but are not shown for the sake of brevity and conciseness.
[0107] Referring to Figure 3, the object detection engine 308 detects one or more objects in a document image. In some implementations, when the object detection engine 308 executes, the presence of an object is recognized in the input image (e.g., a valid sample during configuration or a document under test image during production) and bounding box(es) are generated around the object(s). Examples of objects may include one or more of a hole punched in the document (often indicating that the document is expired or invalid), the overall shape of the document (e.g., a clipped bottom-right corner may be used by the system 100 to quickly determine invalidity for certain jurisdictions/issuers), signatures, facial images, ghost images, holograms, watermarks, kinegrams, seals, symbols, laser perforations, etc. For example, referring to Figure 5, in some implementations, the object detection engine 308 may detect the facial image 510 and ghost image 520, and generate derived information describing the bounding boxes associated with the detected objects. For example, the object detection engine 308 may generate bounding boxes such as those illustrated in Figure 5 around the images 510 and 520 and may generate derived information (not shown) analogous to that in Figure 8 but describing the detected objects in the image and associated bounding boxes 510 and 520. As another example, in some implementations, the object detection engine 308 may perform edge detection to identify the edge of a document holder image 510 or an edge within the document holder image (e.g., a facial outline or silhouette), which may be used to determine whether microprint in the background is present and/or consistent with that of a valid document and catch instances where a nefarious user copy-pasted in a rectangular image with an approved background color, thereby destroying microprint in the background of the facial image, or copy-pasted an image from another document instance with microprint but misaligned the microprint between the document holder image and an adjacent or surrounding portion of the document under test.
[0108] In some implementations, the derived information includes a description of the content, size, and position of the generated bounding box(es), e.g., as illustrated in Figure 8. In some implementations, the derived information generated by the OCR engine 306 and/or the object detection engine 308 includes one or more snippets based on the generated bounding boxes. For example, in some implementations, the bounding boxes are used to crop the (e.g., post-processed) image of the document and generate a snippet of the associated text or object contained therein.
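A minimal sketch of such snippet generation, assuming a rectified image and axis-aligned bounding boxes, is shown below; the file names and coordinates are hypothetical.

```python
# Sketch of snippet generation per paragraph [0108]: crop the post-processed
# document image to each bounding box, using Pillow.
from PIL import Image

def make_snippet(document_image: Image.Image,
                 vertices: list[tuple[int, int]]) -> Image.Image:
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    # Pillow's crop takes a (left, upper, right, lower) box.
    return document_image.crop((min(xs), min(ys), max(xs), max(ys)))

doc = Image.open("cadl_under_test.png")  # hypothetical file name
snippet = make_snippet(doc, [(120, 310), (260, 310), (260, 334), (120, 334)])
snippet.save("snippet_dl_number.png")
```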
[0109] Referring to Figure 9, examples of snippets are illustrated in accordance with some implementations. In Figure 9, snippet 902 corresponds to the portion of the CADL 500 in bounding box 602 as illustrated in Figure 6, snippet 904 corresponds to the portion of the CADL 500 in bounding box 604 as illustrated in Figure 6, snippet 912 corresponds to the portion of the CADL 500 in bounding box 612 as illustrated in Figure 6, snippet 910 corresponds to the portion of the CADL 500 in bounding box 510 as illustrated in Figure 5, snippet 916 corresponds to the portion of the CADL 500 in bounding box 616, snippet 918 corresponds to the card holder's signature (which was two distinct bounding boxes 618 and 620 in Figure 6 for the first and last name, but may be treated as a single signature unit in a single bounding box and snippet 918 as illustrated in Figure 9), and snippet 920 corresponds to the portion of the CADL 500 in bounding box 520 as illustrated in Figure 5. Figure 9 illustrates other snippets associated with other content visible within other bounding boxes illustrated in Figure 6, which are not referenced or described herein for the sake of brevity and conciseness. [0110] Referring to Figure 4, the illustrated derived information encoder 408 includes a bounding box obtainer 412, which is communicatively coupled to receive or retrieve the derived information generated by the OCR engine 306 and/or the object detection engine 308. [0111] The templating engine 414 may generate a template based on the derived information from one or more valid instances of the document. In some implementations, the templating engine 414 generates a bounding box template describing valid instances of the document, which may be included in the document assembly object for that document. In some implementations, the templating engine 414 labels the obtained bounding boxes based on different types of content. For example, referring to Figure 10, an example of a template generated by the templating engine 414 is illustrated in accordance with some implementations. In the illustrated example, the template is shown overlaid on the CADL 500.
[0112] In some implementations, the templating engine 414 determines a set of template bounding boxes based on the bounding boxes generated from the valid samples. For example, in Figure 10, a set of bounding boxes is illustrated and includes bounding boxes 1010, 1012, 1014, 1032, and others not identified with a reference number. In some implementations, the template bounding boxes may be sized and positioned such that a template bounding box would cover the associated text or object in all instances of the valid samples, as sketched below. For example, the bounding box 1032 is wider than necessary to contain the first name "IMA," but a second valid instance may have had a much longer first name, so the width of the template bounding box 1032 is larger based on that instance. In some implementations, the templating engine 414 may label bounding boxes within the bounding box template. For example, bounding box 1010 may be labeled as a "field prefix" and bounding box 1012 may be labeled as a "field."
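The following sketch illustrates one way a template bounding box could be computed, consistent with the description above, as the smallest box covering the corresponding box in every valid sample; the (x, y, width, height) representation and the coordinates are illustrative assumptions.

```python
# Sketch of templating per paragraph [0112]: a template box is the union of the
# corresponding boxes across valid samples, so a longer first name in one
# sample widens the template. Boxes are (x, y, w, h) on rectified images.

def template_box(sample_boxes: list[tuple[int, int, int, int]]) -> tuple[int, int, int, int]:
    lefts   = [x for x, y, w, h in sample_boxes]
    tops    = [y for x, y, w, h in sample_boxes]
    rights  = [x + w for x, y, w, h in sample_boxes]
    bottoms = [y + h for x, y, w, h in sample_boxes]
    x, y = min(lefts), min(tops)
    return (x, y, max(rights) - x, max(bottoms) - y)

# First-name field boxes from three hypothetical valid samples ("IMA" is
# short; another sample's longer name stretches the template).
samples = [(300, 180, 60, 22), (300, 181, 150, 22), (301, 180, 95, 21)]
print(template_box(samples))  # (300, 180, 150, 23)
```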
[0113] It should be understood that the bounding box template of Figure 10 and the labels described herein are merely examples and may be modified without departing from the description herein. For example, while Figure 10 illustrates bounding boxes associated with text fields and their prefixes, the template may include bounding boxes (not shown) associated with one or more objects, such as the facial image 510, the ghost image 520, the gold star bear in the top-right corner, etc. As another example, the illustrated template of Figure 10 only includes a subset of the potential bounding boxes that may comprise the template. For instance, the bounding boxes may include a field prefix bounding box (not shown) for the "SEX" field prefix and a field bounding box for the associated "F" or "M" (not shown). As another example, the cardholder's signature below the facial image may be associated with a bounding box (not shown) to conduct a comparison to a signature on the back of the card (not shown) and/or to computer-generated fonts, e.g., Lucida Console, posing as human-written text/signatures. As yet another example, the DOB printed over the facial image (i.e., 083177 in the example CADL 500) may be associated with a bounding box (not shown) to determine whether that text is tactile text, which may be characteristic of a valid CADL instance.
[0114] In some implementations, the templating engine 414 may determine other derived information for the template. For example, the templating engine 414 may determine characteristics of the font associated with each bounding box (e.g., each field prefix and field) in the document and include that in the template. In some implementations, the templating engine 414 may determine background/microprint information for each of the bounding boxes in the template. For example, in some implementations, the templating engine 414 may obtain snippets associated with a template bounding box from a reconstructed background/microprint generated by the background/microprint reconstructor 416.
[0115] The background/microprint reconstructor 416 generates a background and/or microprint associated with the one or more valid instances of a document. In some implementations, the background/microprint reconstructor 416 extracts the text and objects present in an image (e.g., post-processing) of a valid document to obtain the microprint and/or background. For example, in the CADL 500 of Figure 5, the microprint background includes flowers in the bottom-left corner, the man panning for gold on the right side with a dotted outline of a bear and the outline of the state of California superimposed, sailboats in the bottom-center, mountains in the top-center, a depiction of the outline of the state of California in the center, some clouds in the top-right corner, and fine visual texture (e.g., swirls, fine lines, shadows reminiscent of topography, patterns, etc.) throughout the document.
[0116] As another example, referring to Figure 11, an example of a microprint background 1100, obtained from a first instance of a CA driver's license that is a different version from the CADL 500 illustrated in Figures 5, 6, and 10, is shown. In some implementations, the background/microprint reconstructor 416 obtains the microprint background 1100 from a first valid sample, obtains one or more other microprint backgrounds (not shown) from other valid instances of the same document class, and combines the microprint backgrounds to reconstruct the microprint/background of the document. For example, if a first instance includes an "O" or a "0" and the second instance includes an "I" or a "1" in the same position, since there is little overlap between the portions obscured by those two instances, the background/microprint reconstructor 416 may be able to reconstruct most of the background/microprint in that area, thereby reconstructing a representation of the microprint or background without obstructions by text and/or objects. By using all 10 Arabic numerals and the 26 letters of the English alphabet, nearly all the occluded portions may be reconstructed. While the disclosure herein refers to the English alphabet and Arabic numerals, the application to one or more other numerical and alphabetical systems, including, but not limited to, Greek, Cyrillic, Kanji, Arabic, Hebrew, etc., is within the scope of this disclosure. [0117] The document configurator 304 generates a document assembly object describing valid instances of a document. The document assembly object may vary in content depending on the document it describes (e.g., some documents may lack fields or security features, the direct and indirect validation checks may differ, as may the relative positions of the fields and security features). However, in some implementations, the document assembly object has a common structure or framework across document assembly object instances. In some implementations, the document assembly object is low code or no code. For example, a user provides the class labels using drop-down menus, and the template and checks are automatically derived by the document evaluator 226 and its subcomponents from the valid samples and/or extracted from issuer information.
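Returning to the background/microprint reconstruction of paragraph [0116], the following is a minimal sketch assuming the valid samples are already rectified and pixel-aligned; a real reconstruction would likely involve explicit occlusion masks and alignment, and the file names are hypothetical.

```python
# Sketch of one reconstruction approach: wherever a glyph occludes the
# background in one sample, another sample with different text likely exposes
# it, so a per-pixel median across samples recovers most of the microprint.
import numpy as np
import cv2

# Load several aligned, rectified valid samples (hypothetical file names).
samples = [cv2.imread(f"valid_sample_{i}.png") for i in range(1, 6)]
stack = np.stack(samples, axis=0).astype(np.float32)

# Median over samples: occluding glyphs differ per sample while the background
# repeats, so the median converges to the unobstructed background/microprint.
reconstruction = np.median(stack, axis=0).astype(np.uint8)
cv2.imwrite("reconstructed_microprint.png", reconstruction)
```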
[0118] In some implementations, the document assembly object includes encoded issuer information (e.g., for US driver's licenses this may include mandatory fields, optional fields, images, security features, document layout, etc., as defined by the American Association of Motor Vehicle Administrators) and/or direct checks on that issuer information. In some implementations, the document assembly object includes derived information (e.g., bounding boxes associated with document fields and relative positions, fonts, reconstructed microprint images, color information, etc.) and/or derived checks on the derived information (e.g., spacing between field prefix and field text, etc.).
[0119] In some implementations, the document assembly object includes or is associated (e.g., via a link) with context information associated with the document represented by the document assembly object. Examples of context information may include, but are not limited to, IP addresses and/or locations (e.g., physical and/or network) and/or device information (e.g., associated with submissions of valid document under test images and/or invalid document under test images), a risk assessment associated with the document (e.g., a tier or metric, which may indicate a level of scrutiny or fraud risk, since some documents may be more frequently used in attempted fraud), etc. In some implementations, the context information is aggregated based on one or more requests including a document under test that is associated with the document represented by the document assembly object. For example, the context information includes information associated with the documents under test (e.g., IP addresses, snippets including the facial image, device IDs, document numbers, etc.), which may be used by the decision engine 310 to evaluate the document (e.g., an IP address associated with a number of invalid attempts may increase the likelihood that a document under test received from that IP address is determined to be invalid and/or subjected to greater scrutiny). To summarize and simplify, in some implementations, the context information may be gathered with the image of the document under test and used to identify and neutralize repeated fraud attempts.
[0120] The document assembly object may vary in its data representation depending on the implementation. In some implementations, the document assembly object comprises a data object. In some implementations, the document assembly object is in a format that is both machine and human readable. For example, the document assembly object is a JavaScript Object Notation (JSON) object. Referring to Figure 12, example portions of a document assembly object 1200 are illustrated in accordance with some implementations. In portion 1202, example class labels are represented. More specifically, the document type is indicated as an "ID CARD," the country is "FRA," indicating France, and the state is non-applicable (i.e., "null") since the document is a national ID; the version, printed document name, and other properties are also included. In some implementations, portion 1202 may be generated by the document class labeler 404.
[0121] In portion 1204, an example description of the document number field is represented. More specifically, the expected data type (i.e., a string), the length of the string (i.e., 0-60), etc. are defined in portion 1204. In portion 1206, some examples of direct checks related to the document number field are represented. For example, the document number must be 7 characters in length, the first two characters must be alphabetic, and the third through seventh characters must be numeric. In some implementations, portions 1204 and 1206 may be generated by the issuer information encoder 406.
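The direct checks of portion 1206 lend themselves to a compact executable rendering, sketched below; the regular expression is simply a restatement of the stated constraints, not an actual implementation from this disclosure.

```python
# Sketch of the portion 1206 direct checks: 7 characters total, first two
# alphabetic, remaining five numeric.
import re

DOC_NUMBER_RULE = re.compile(r"^[A-Za-z]{2}[0-9]{5}$")

def check_document_number(value: str) -> bool:
    return bool(DOC_NUMBER_RULE.fullmatch(value))

print(check_document_number("AB12345"))  # True: matches the stated format
print(check_document_number("A123456"))  # False: only one leading letter
```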
[0122] In portion 1208, an example of derived information associated with the document number field is represented. More specifically, portion 1208 identifies that the field is a human-readable zone (HRZ), as opposed to a machine-readable zone, such as a barcode or QR code. Portion 1208 also identifies the position (i.e., x, y coordinates) and size (i.e., height and width) of the bounding box associated with the document number field, the side of the document on which the document number is found, and the font, "Arial Bold," used for the document number, which may be compared to a detected font in a document under test as a derived check. In some implementations, portion 1208 of the data assembly object is generated by the derived information encoder 408 from derived information. For example, the position and size of the bounding box and the font are generated by the templating engine 414. [0123] It should be recognized that Figure 12 is merely one example of a section of an example document assembly object and that the document assembly object may differ therefrom without departing from the disclosure herein. For example, the document assembly object (not shown) for the CADL 500 may include derived checks to (1) determine whether the numbers present in the DOB field bounding box are consistent with the numbers in the bounding box overlaying the facial image and the numbers in the bounding box on the right side of the CADL next to the boot of the man panning for gold; (2) determine whether the face in the facial image 510 and the ghost image 520 are the same; (3) determine whether the sex (e.g., as determined by an AI/ML model such as a classifier using the facial image 510) of the person pictured is consistent with the sex identified in the "SEX" field (i.e., "F" as illustrated); and (4) determine whether the age (e.g., as determined by an AI/ML model such as a regression model using the facial image 510) of the person pictured is consistent with the age indicated by the DOB; etc.
[0124] As another example, automated generation of a document assembly object is described in accordance with some implementations with reference to Figures 31-33. For clarity and convenience, some features and functionality of the document configurator 304 are described, in accordance with some implementations, with reference to an example Italian document specification 3100 represented in Figures 31, 31A, and 31B. For example, the Italian document specification 3100 of Figure 31 represents a specification maintained by the Public Register of Authentic identity and travel Documents Online (PRADO), which may be presently found at the URL https://www.consilium.europa.eu/prado/en/ITA-BO-04004/index.html. PRADO is an online and publicly available resource describing valid documents and their associated security features for participating member states. In some implementations, the document configurator 304 may parse multiple languages. For example, the document configurator 304 may include one or more of a natural language processing model, a natural language understanding model, and a large language model for each of the approximately twenty-four different languages used by PRADO member states and may, therefore, generate a document assembly object independent of the language in which the document specification is provided.
[0125] It should be recognized that PRADO is just one example source of a document specification in an electronic format, and other sources and formats exist. For example, PRADO is merely one example of a trusted source for information describing valid instances of a document. Other examples exist and may be used without departing from the present description. For example, issuer websites, trade groups or journals, etc. may be trustworthy sources or publishers of valid samples and/or document specifications. Additionally, while the present example is associated with an online, hypertext markup language (HTML) format, other formats exist (e.g., PDF, a scanned and OCRed version of a paper specification, etc.) and may be used by the document configurator 304 without departing from the present description.
[0126] A PRADO specification for a particular document has multiple portions: a first portion with metadata (e.g., title, document category, version, first issue date, validity, purpose/legal status, construction/size, validity information including any min/max ages, duration of validity from issue, etc.), and a second portion with images (e.g., of security features). For example, Figure 31A illustrates a first, partial portion 3102 of the PRADO specification 3100 for the example Italian document. More specifically, the first illustrated portion 3102 includes the document's title, issuing country, document category, document type, document version, etc. In some implementations, the document class labeler 404 or issuer info encoder 406 may scrape, or otherwise automatically obtain, that information from the HTML of PRADO's website and encode a set of class labels for a newly created (e.g., blank) document assembly object (e.g., a JSON object) for that Italian document, which is subsequently populated automatically by the document configuration engine 304 and its subcomponents, thereby generating a document assembly object for the document. A portion of a document assembly object for the example Italian document described by the specification 3100 is illustrated in Figure 32. While not shown, these labels may, or may not, be used to support inheritance of features from an earlier document version depending on the implementation and use case (e.g., whether a prior version of the document exists).
[0127] In some implementations, the issuer info encoder 406 may scrape, or otherwise automatically obtain, the document's format and size (i.e., card, and 86 mm in width and 54 mm in height) from the HTML specification 3100 and encode that information into the document assembly object. Still referring to Figure 31A, the issuer info encoder 406 may similarly scrape, or otherwise automatically obtain, the validity information from the HTML specification 3100 and encode that information into the document assembly object. For example, at 3142 in portion 3104 of Figure 31B, the PRADO specification 3100 indicates that the document issued to an individual 18+ years of age is valid for a maximum of 10 years. In some implementations, the issuer info encoder 406 encodes, into the document assembly object, a direct check to determine whether the present date is within 10 years of the issue date in the document under test when the individual was 18+ years of age at the date of issuance in the document under test (e.g., if userAgeAtIssue >= 18, then (if currentDate - issueDate <= 10 years, valid = Yes; else valid = No)). In some implementations, the security features identified in the "Recto-identity" subsection 3152 of portion 3104 in Figure 31B may be identified by the issuer info encoder 406, and a section for each of the enumerated security features may be added to the document assembly object by the issuer info encoder 406 and further populated by the document configurator 304 or its subcomponents.
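The encoded validity check described above might be rendered, for illustration, along the following lines; the function and field names are hypothetical, the python-dateutil package is used for calendar arithmetic, and the under-18 validity rules are intentionally not modeled.

```python
# Sketch of the direct validity check derived from the PRADO entry: a document
# issued to an individual 18 or older is valid for at most 10 years from issue.
from datetime import date
from dateutil.relativedelta import relativedelta

def still_valid(issue_date: date, holder_birth_date: date,
                today: date | None = None) -> bool:
    today = today or date.today()
    age_at_issue = relativedelta(issue_date, holder_birth_date).years
    if age_at_issue >= 18:
        return today <= issue_date + relativedelta(years=10)
    # Documents issued to minors have different validity periods (not shown).
    raise NotImplementedError("under-18 validity rules not modeled in this sketch")

print(still_valid(date(2016, 5, 1), date(1990, 3, 2), today=date(2024, 1, 1)))  # True
print(still_valid(date(2012, 5, 1), date(1990, 3, 2), today=date(2024, 1, 1)))  # False
```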
[0128] In some implementations, the sample obtainer 402 parses the HTML of the specification 3100 and extracts the images 3162 and 3164 as valid samples of the document and/or other images (not shown) of valid instances of security features. In some implementations, the sample obtainer 402 provides the sample(s) to one or more of the OCR engine 306 and the object detection engine 308. For example, the front image 3162 of the document is sent to one or more of the OCR engine 306 and the object detection engine 308. In some implementations, the derived information encoder 408 derives checks based on the sample images and/or bounding box(es) determined therefrom. For example, the bounding box obtainer 412 obtains a set of one or more bounding boxes from one or more of the OCR engine 306 and the object detection engine 308, and the bounding box obtainer 412 supplements the document assembly object, e.g., with bounding box location coordinates and dimensions for one or more security features.
[0129] The derived information encoder 408 encodes one or more derived checks into the document assembly object. It should be recognized that the distinction between derived and direct checks may not be consistent from document to document, as different documents may have more, or less, comprehensive documentation. For example, a US driver's license may be associated with and comply with the American Association of Motor Vehicle Administrators DL/ID Card Design Standard, whose 2020 version is over 120 pages in length, as well as documentation released by the particular issuer (e.g., the state of California's DMV). Accordingly, what may be a direct check for a security feature in the context of a CA driver's license may be a derived check in a different document that is associated with less thorough documentation. As another example, passport issuers may publish more information about the requirements for a document holder image than driver's license issuers, which may be a byproduct of passport issuers more frequently using document-holder-provided images than photos taken at/by the document issuer, the latter being more common among driver's license issuers.
[0130] Therefore, while in some implementations and use cases, dimensional or feature-based requirements may be explicitly defined (e.g., as may be the case with a passport) and used in a direct check for one document, in some implementations, the analogous dimensional or feature-based requirements may be inferred from valid instances and used in a derived check for another document or type of document (e.g., by analysis of valid instances of a CADL to infer the dimensional and feature-based requirements). For example, in some implementations, a derived check encoded into the document assembly object, by the derived information encoder 408, is based on a bounding box. For example, the templating engine 414 may encode the location and dimensions of a bounding box, which may be derived information, as the specification may only specify an approximate location, e.g., "top-right corner," or no location at all, leaving it to a person to visually reference the specification, neither of which is directly useable for an automated evaluation of a document under test using computer vision techniques. In some implementations, the templating engine 414, by encoding the coordinates of a bounding box from a sample, creates a derived check, i.e., to check that a corresponding bounding box in the document under test is (1) present, (2) present at that location, and (3) of the same size within a margin of error. In some implementations, the templating engine 414 may encode a file location of an image of a security feature (e.g., a seal or hologram), which may be used by the decision engine 310 to compare to a corresponding bounding box in a document under test, so the decision engine 310 may determine whether the content of the bounding box in the document under test is consistent with a valid document.
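A minimal sketch of such a derived bounding-box check, i.e., presence, location, and size within a margin of error, follows; the tolerance values are assumptions rather than values from any specification.

```python
# Sketch of the three-part derived check from paragraph [0130]: the box in the
# document under test must (1) exist, (2) sit where the template says, and
# (3) match the template's size within a margin of error.

def box_matches_template(test_box, template_box,
                         pos_tol: float = 5.0, size_tol: float = 0.10) -> bool:
    """Boxes are (x, y, w, h) in pixels on rectified images."""
    if test_box is None:                                    # (1) presence
        return False
    tx, ty, tw, th = test_box
    mx, my, mw, mh = template_box
    if abs(tx - mx) > pos_tol or abs(ty - my) > pos_tol:    # (2) location
        return False
    if abs(tw - mw) > size_tol * mw or abs(th - mh) > size_tol * mh:  # (3) size
        return False
    return True

print(box_matches_template((302, 181, 148, 23), (300, 180, 150, 22)))  # True
print(box_matches_template((330, 181, 148, 23), (300, 180, 150, 22)))  # False: shifted
```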
[0131] In some implementations, the document assembly object may be subjected, by the document configuration engine 304, to a validation process. For example, after generating the document assembly object by automatically scraping an electronic version of a document specification, the document assembly object is used, by the decision engine 310, on a sample set of documents under test to determine the accuracy of the decisions using that document assembly object. For example, the documents under test may have known validity or invalidity statuses, and adjustments may be made, e.g., to reduce false positives and/or false negatives, with respect to one or more of a derived and a direct check in subsequent determinations.
[0132] The adjustment(s) may vary based on the implementation and use case. For example, a threshold for the position, dimensions, orientation, etc. of a bounding box may be adjusted to account for a margin of error or tolerances due to variations in the manufacturing of the document or that are introduced by the processing of the document (e.g., a margin of error in the rectification or other preprocessing of the document under test image). Depending on the implementation, the threshold may be present in, and modified in, the document assembly object and/or a threshold used by the decision engine 310. Examples of adjustments include, but are not limited to, retraining one or more machine learning algorithms, adding or adjusting a threshold, modifying the set of rules (e.g., adding or removing a derived rule), etc. For example, in some implementations, a machine learning algorithm for an object detector may be retrained to better identify a particular object or type of object in a document image, or a weighting or parameterization in a model used by the decision engine 310 may be adjusted.
[0133] In some implementations, the document configurator 304 may apply pattern recognition to a plurality of valid samples and derive one or more derived checks. A document specification may not include all useable security features. The omission of a security feature may be intentional (e.g., so that a particular feature is not publicly known and is therefore less likely to be circumvented by nefarious users) or unintentional (e.g., a general lack of issuer-provided documentation, which may be due to funding or other external drivers). In some implementations, the document configurator 304 may identify and incorporate derived checks into the document assembly object for unpublished security features.
[0134] For example, referring now to Figure 33A, an example of an Indian passport 3302 is illustrated. Assume that the specification for the Indian passport 3302 indicates that a ghost image 3312 of the facial image 3304 is present in a valid document instance. However, assume that the specification for the Indian passport does not indicate that the ghost image 3312 is a letter screen image, i.e., that the ghost image 3312 comprises personally identifiable information (PII) reproduced from other fields in the passport, which, due to the small print, may not be apparent to the human eye absent magnification. For example, the ghost image is magnified in Figure 33A, and portion 3322 of the ghost image 3312 is further magnified in Figure 33B.
[0135] In some implementations, the document configurator 304 may surface such non-readily apparent, undiscernible, and/or unpublished security features. For example, the sample obtainer 402 may obtain samples at various levels of magnification including, e.g., the ghost image 3312, at increased magnification. The bounding box obtainer 412 obtains, from the OCR engine 306, the bounding boxes associated with text in the ghost image 3312. For example, the bounding box obtainer 412 receives the bounding box 3332 around "1986," the bounding box 3334 around "GUPTA," and the bounding box 3336 around "KOPAL," in Figure 33B. In some implementations, the document configurator 304 applies pattern recognition to the text and/or bounding boxes comprising the ghost image 3312, determines that bounding box 3332 repeats, or is consistent with, the birth year, i.e., 1986, of the individual to which the document is issued, that bounding box 3334 repeats, or is consistent with, the individual's first name, i.e., GUPTA at 3314, and that bounding box 3336 repeats, or is consistent with, the individual's last name, i.e., KOPAL at 3336. In some implementations, the derived information encoder 408, or the templating engine 414, generates and encodes a derived check in the document assembly object to determine whether, e.g., bounding boxes consistent with the first name, last name, and birth year fields are present in the ghost image 3312. While other PII may be used in generating the ghost image 3312, the bounding boxes 3332, 3334, and 3336 and the text therein are not comprehensive and are used by way of example. [0136] Referring now to Figure 34, a CADL security feature is illustrated in accordance with some implementations. More specifically, the individual's first name initial (i.e., S), the individual's last name initial (i.e., W), and the last two digits of the individual's birth year (i.e., 56) are reproduced as illustrated in bounding box 3402. In some implementations, the derived information encoder 408 may generate a derived check to determine whether bounding box 3402 in a document under test is present, whether the location and size of the bounding box are within tolerances, and whether the content of the bounding box 3402 is consistent with information elsewhere on the document under test, e.g., the initials and two-digit birth year.
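For illustration, the derived content check of paragraph [0136] might be sketched as follows; the field names and sample values are hypothetical.

```python
# Sketch of the paragraph [0136] derived check: rebuild the expected security
# string from the name and date-of-birth fields, then compare it to the OCR'd
# content of the security-feature bounding box (3402 in Figure 34).

def expected_security_text(first_name: str, last_name: str, dob_year: int) -> str:
    return f"{first_name[0]}{last_name[0]}{dob_year % 100:02d}".upper()

# Hypothetical OCR output from the document under test's fields and feature box.
ocr_fields = {"first_name": "Sam", "last_name": "Wong", "dob_year": 1956}
ocr_security_box_text = "SW56"

consistent = expected_security_text(
    ocr_fields["first_name"], ocr_fields["last_name"], ocr_fields["dob_year"]
) == ocr_security_box_text
print(consistent)  # True when the feature matches the document's other fields
```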
[0137] It should be understood that the document assembly object and its subsequent use by the decision engine 310, as described herein, may provide a number of potential benefits. One example is the detection of variations undetectable to a human eye but indicative of fraud (e.g., slight variations in alignment, font size, color, spacing, etc.). The detection and use of new, unknown, or unpublished security features is another potential benefit. These are merely examples; other benefits are discussed herein and should be recognized by one having ordinary skill in the art.
[0138] The decision engine 310 obtains an image of a document under test (e.g., a post-processed document under test) and evaluates the document under test to determine whether the document under test is valid or invalid (e.g., void, modified, tampered with, or forged). Referring to Figure 13, a block diagram of an example decision engine 310 is illustrated in accordance with some implementations. In the illustrated implementation, the decision engine 310 includes a document classifier 1302, a document assembly object obtainer 1304, a document under test derived information obtainer 1306, a bounding box presence/absence evaluator 1308, an inter-bounding box evaluator 1310, an intra-bounding box evaluator 1312, and a verification determiner 1314. [0139] The document classifier 1302 obtains an image of a document and determines a document classification associated with the document under test. For example, the document classifier 1302 receives a post-processed version of a document image taken by a user's smartphone camera and determines a class of the document. For example, referring to Figure 14A, an image of a document under test, which is CADL 1400, is illustrated. In some implementations, the CADL 1400 illustrated is a post-processed image based on an image of the CADL taken using a user's cellphone camera and rectified by the image preprocessor 302 to generate the CADL 1400 image shown. In some implementations, the document classifier 1302 determines that the CADL 1400 belongs to the same class as CADL 500. For example, the document classifier 1302 returns the concatenated set of labels DL_US_REGULAR-DL_CA_2018_0 as the CADL 1400's class.
[0140] The document assembly object obtainer 1304 obtains the document assembly object associated with that class or set of labels. For example, the document assembly object obtainer 1304 queries the document database 242 using at least a subset of the set of labels and obtains the document assembly object generated, at least in part, based on the CADL 500 example of a valid instance discussed above.
[0141] The document under test derived information obtainer 1306 obtains derived information associated with the document under test. For example, the document under test derived information obtainer 1306 passes the CADL 1400 image to the OCR engine 306 and/or the object detection engine 308 and receives derived information therefrom. For example, the document under test derived information obtainer 1306 receives one or more of at least one bounding box associated with an object from the object detection engine 308 and at least one bounding box associated with text from the OCR engine 306, along with information describing the bounding box content (e.g., the textual content and font from the OCR engine 306 or the object detected from the object detection engine 308).
[0142] The bounding box presence/absence evaluator 1308 evaluates whether a bounding box associated with content is present or absent. For example, in some implementations, the bounding box presence/absence evaluator 1308 determines whether a particular security feature object (e.g., laser perforations or a ghost image) is present or absent, the latter being indicative of invalidity. As another example, in some implementations, the bounding box presence/absence evaluator 1308 determines whether a mandatory field is absent. As another example, in some implementations, the bounding box presence/absence evaluator 1308 determines whether an object indicative of invalidity (e.g., a hole punch, a clipped bottom-right corner, or vertical text, which may indicate that the document is expired or otherwise void) is present.
[0143] The inter-bounding box evaluator 1310 evaluates one or more of a relationship between a plurality of bounding boxes, or the contents therein, and a relationship between a bounding box and the document itself. Examples of a relationship between a plurality of bounding boxes include, but are not limited to, a relative position between two bounding boxes, such as a bounding box associated with a field prefix and the field, and a consistency of content between the plurality of bounding boxes, such as the eye color of the document holder in the document holder image and the eye color listed in the field associated with eye color. Examples of a relationship between a bounding box and the document itself include, but are not limited to, a size or position of a bounding box relative to a reference point (e.g., a corner or edge) of the document. The example inter-bounding box evaluator 1310 illustrated in Figure 13 includes a prefix to field position evaluator 1322, a content consistency evaluator 1324, a relative position evaluator 1326, and a 3D consistency evaluator 1328.
[0144] In some implementations, the OCR engine 306 may assign a bounding box to individual characters. For example, the OCR engine 306 may assign a bounding box to each character in a field, or other text string, and the inter-bounding box evaluator 1310 may evaluate the relationship(s) between those bounding boxes and/or their content. For example, the inter-bounding box size and spacing, representative of inter-character spacing and relative heights, may be analyzed to identify inconsistencies associated with a single character in a field being changed (e.g., a single digit in the year to make the document appear to still be valid, or so the cardholder appears older to satisfy a minimum age requirement). As another example, the OCR engine 306 may assign a bounding box to the individual characters of the first name "IMANOTTA" at 1442 of Figure 14A, and the characters may be compared to one another to indicate an inconsistency in blur between the original, more blurred "IMA" characters and the added "NOTTA" characters.
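A hedged sketch of such a per-character blur comparison follows, using the common variance-of-Laplacian sharpness measure; the 2x spread threshold and the synthetic test data are assumptions for illustration, not values from this disclosure.

```python
# Sketch of the per-character blur check from paragraph [0144]: characters
# pasted in during tampering (e.g., "NOTTA") often measure sharper than the
# original print (e.g., "IMA"), so a large spread in sharpness across the
# characters of one field suggests tampering.
import cv2
import numpy as np

def blur_measure(char_snippet: np.ndarray) -> float:
    gray = cv2.cvtColor(char_snippet, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()  # higher variance = sharper

def blur_inconsistency(char_snippets: list[np.ndarray], ratio: float = 2.0) -> bool:
    scores = [blur_measure(s) for s in char_snippets]
    return max(scores) > ratio * min(scores)  # large spread flags the field

# Synthetic demonstration: a sharp patch vs. a Gaussian-blurred copy.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (24, 18, 3), dtype=np.uint8)
blurred = cv2.GaussianBlur(sharp, (7, 7), 0)
print(blur_inconsistency([sharp, sharp, blurred]))  # True: blur spread detected
```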
[0145] In some implementations, the inter-bounding box evaluator 1310 includes a prefix to field position evaluator 1322. The prefix to field position evaluator 1322 determines whether the relative positions of the bounding boxes for a field prefix and corresponding field are consistent with the bounding box template of the document assembly object. For example, the prefix to field position evaluator 1322 evaluates the spatial relationship between a bounding box associated with a field prefix (e.g., "DOB" in Figure 14A) and the bounding box associated with the field (e.g., "08/31/22" in Figure 14A). In the example CADL 1400, it may not be visually apparent to a person, but the text "08/31/22" and its associated field are slightly closer to the "DOB" prefix in Figure 14A than in Figure 5 and slightly misaligned vertically.
[0146] It should be recognized that, while some of the issues in the CADL 1400 under test are readily apparent to a human, the illustrated, invalid document under test (i.e., CADL 1400) is intentionally unsophisticated, and the example issues are relatively apparent and numerous for discussion purposes and clarity of demonstration. Digital image manipulation (e.g., using Photoshop) is increasingly available and used by nefarious individuals to generate fraudulent documents, and fraud attempts vary in levels of sophistication. The computer-vision-based methods (e.g., OCR, object detection, similarity matching, and anomaly detection) described herein may beneficially detect even sophisticated fraud attempts by identifying issues undetectable to a human eye, such as an imperceptible (to a human) discrepancy in the relative positions between bounding boxes or within the document itself, artifacts generated by the digital manipulation, microprint errors, differences in bounding box dimensions (e.g., due to usage of a slightly larger font or exceeding a width for the field), discrepancies based on the ghost image, etc. In some implementations, the computer-vision-based methods described herein account for potential errors, or variances, in the computer-vision-assigned bounding boxes (e.g., position and/or bounding box dimensions), thereby reducing false positives for invalidity or manipulation due to such variances or errors. For example, one or more of at least one position or at least one dimension may have an acceptable margin of error (e.g., a threshold value or percentage/factor) associated therewith.
[0147] In some implementations, the inter-bounding box evaluator 1310 includes a content consistency evaluator 1324. The content consistency evaluator 1324 evaluates whether content in two or more bounding boxes in the document under test, which are expected to contain consistent content per one or more checks (direct and/or derived) in the document assembly object, is consistent. Examples of inter-bounding box content consistency checks include, but are not limited to, a consistency of content between two or more fields (e.g., a consistency of repeated information such as DOB where repeated), and a consistency of content between the document holder image and the content of a field (e.g., visible eye color in the document holder image vs. listed eye color in the eye color field, or text comprising the ghost image, such as the DOB and holder’s name, vs. the DOB and holder’s name in those document fields), etc. [0148] In some implementations, the content consistency evaluator 1324 evaluates a consistency of content between two or more fields. For example, the content consistency evaluator 1324 evaluates whether the content of the DOB field 1432 (i.e., 08/31/22) is consistent with the DOB in field 1434 (i.e., 08311977), and field 1436 (i.e., 083177), which is not the case, as the year 2022 is not consistent with the year 1977 in fields 1434 and 1436 of Figure 14A. As another example, in some implementations, the inter-bounding box evaluator 1310 may compare the face in a grayscaled version of the facial image 1410 to the face in the ghost image 1420 to determine similarity, or lack thereof, as illustrated in Figure 14A. As another example, in some documents, there is an equivalent of a checksum (e.g., an alpha-numeric reference number that may be a composite of information in various fields, such as the initials concatenated with the date of birth or year of issuance), and the checksum may be evaluated to determine whether it is consistent with the content in the bounding boxes from which the checksum is derived. It should be recognized that, while the preceding examples refer to consistencies between information associated with a single side of a document (i.e., the front as described), in some implementations, the inter-bounding box evaluator 1310 may evaluate consistency between bounding boxes on different sides of the document (e.g., by performing a similarity check between the signature 1438 and a signature on the back (not shown) of the CADL 1400 under test).
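By way of non-limiting illustration, the following is a minimal Python sketch of such a checksum evaluation; the field names and the composition rule (first and last initials concatenated with the birth year) are hypothetical assumptions, not the rule of any particular document.

```python
# Minimal sketch of a derived checksum consistency check. The field names and
# the composition rule (initials + birth year) are hypothetical assumptions.
def checksum_consistent(fields: dict) -> bool:
    """Check whether a composite reference number matches its source fields."""
    expected = (fields["first_name"][0] + fields["last_name"][0] +
                fields["dob"][-4:])  # e.g., "I" + "C" + "1977"
    return fields["reference_number"].upper().startswith(expected.upper())

fields = {"first_name": "Ima", "last_name": "Cardholder",
          "dob": "08/31/1977", "reference_number": "IC1977X42"}
print(checksum_consistent(fields))  # True under the assumed composition rule
```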
[0149] In some implementations, the content consistency evaluator 1324 evaluates whether a document holder image is externally consistent, i.e., whether the (visible) content in the document holder image is consistent with content external to that document holder image. For example, the content consistency evaluator 1324 evaluates whether a consistency of content between the document holder image and at least one field is present, e.g., by comparing visible characteristics in the image to those characteristics listed in a field. As another example, the content consistency evaluator 1324 evaluates whether a consistency of content between document holder images is present. As another example, document content such as face, address, age, etc. from the document under test may be compared with externally obtained content, such as content from government databases and/or commercial providers.
[0150] In some implementations, the content in the document holder image that is evaluated by the content consistency evaluator 1324 is a physical characteristic that may be visible in or determined from the document holder image. Examples of physical characteristics include, but are not limited to, sex, hair color, eye color, height, weight, a head size ratio, and a head outline/silhouette of the document holder.
[0151] In some implementations, the content consistency evaluator 1324 may train, validate, optimize, or apply one or more machine learning models. For example, the content consistency evaluator 1324 may apply one or more machine learning models to the document holder image to determine the physical characteristics present in the document holder image, and then compare the extracted physical characteristic to the text content associated with a corresponding field. For example, referring to Figure 14A, in some implementations, the content consistency evaluator 1324 may determine the sex of the face (i.e., male in 1410) in the facial image 1410 (e.g., using an AI/ML model) and compare that to the sex field (i.e., “F” for female in CADL 1400) for consistency, or lack thereof, as is the case in CADL 1400. As another example, the content consistency evaluator 1324 may determine the eye color in document holder image 1410, which is brown as illustrated, compare that to the color listed for the eye color field, “EYES,” which is indicated as “BRN,” or brown, in CADL 1400 under test, and determine that the eye color is consistent. It should be recognized that, while the description herein focuses on consistency between portions of the document under test, the consistency evaluation may use external sources of content. For example, the above describes comparing the eye color listed on the document under test to the eye color in the document holder image under test. However, it should be recognized that content may be compared to an external source, such as an eye color in a selfie or video obtained for a liveness check or the eye color listed in a government or commercial database. [0152] In some implementations, the content consistency evaluator 1324 evaluates a consistency of content between two document holder images. For example, the content consistency evaluator 1324 compares a primary document holder image (e.g., document holder image 1410) to a secondary document holder image (e.g., ghost image 1420) to determine whether the images are consistent. Depending on the implementation and use case, the analysis may vary. In some implementations, facial recognition (e.g., using a machine learning model) may be used to compare key points (not shown) in image 1410 to key points (not shown) in the ghost image 1420 to determine a likelihood of a match. In some implementations, the content consistency evaluator 1324 may determine the silhouette of the document holder’s head and shoulders in 1410 and compare that to the silhouette in the ghost image 1420, as sketched below. In some implementations and use cases, utilization of the silhouette may be preferable. For example, in some documents, such as certain versions of the Indian passport, the ghost image is made from text (e.g., repeating portions of the cardholder’s personally identifiable information, such as name, DOB, etc.), and usage of the silhouette may achieve better accuracy and/or lower resource utilization than applying a key point-based facial recognition model. In some implementations, the content consistency evaluator 1324 may determine, e.g., using edge detection, the outline of the face (e.g., hairline, jawline, etc.) and compare that facial outline of the document holder image to the facial outline in the ghost image 1420.
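By way of non-limiting illustration, the following is a minimal Python sketch of such a silhouette comparison between a primary document holder image and a ghost image; the Otsu thresholding, the resizing step, and the intersection-over-union cutoff are assumptions.

```python
import cv2
import numpy as np

def silhouette(img_gray: np.ndarray) -> np.ndarray:
    """Approximate a head/shoulders silhouette as a binary foreground mask."""
    _, mask = cv2.threshold(img_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

def silhouettes_consistent(primary_gray: np.ndarray, ghost_gray: np.ndarray,
                           iou_min: float = 0.7) -> bool:
    """Compare silhouettes via intersection over union; iou_min is an assumed cutoff."""
    ghost_gray = cv2.resize(ghost_gray, (primary_gray.shape[1], primary_gray.shape[0]))
    a = silhouette(primary_gray) > 0
    b = silhouette(ghost_gray) > 0
    iou = np.logical_and(a, b).sum() / max(np.logical_or(a, b).sum(), 1)
    return iou >= iou_min
```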
[0153] The content consistency evaluator 1324 architecture is adaptive and dynamic over time. For example, in some implementations, the content consistency evaluator 1324 may have an initial set of machine learning models available to it to obtain an initial set of various visible characteristics from the document holder image (e.g., eye color, hair color, sex, and approximate age), which the content consistency evaluator 1324 may compare to corresponding field content (e.g., listing the eye color, hair color, sex, and DOB, which is used to calculate age, etc.), but the initial set does not have a consistency check for weight. Buccal (cheek) fat generally diminishes with age, and a face includes fat stores that can grow with a person’s weight; assume a machine learning model is subsequently developed (e.g., trained, validated, and optimized) to accurately use facial fat visible in the document holder image to approximate the individual’s age and/or weight (possibly also based on the height listed in the height field) and to compare those estimates to the DOB and/or weight fields. The content consistency evaluator 1324 may modularly add the model, thereby adding support for a weight consistency check. Depending on the implementation, use case, and relative accuracy, the new model may replace or supplement (e.g., by generating a second age approximation) the existing age estimation model. In some implementations, the content consistency evaluator 1324 may change or improve its evaluations and/or extend the scope of evaluations which it may perform over time.
[0154] In some implementations, the inter-bounding box evaluator 1310 includes a relative position evaluator 1326. For example, in some implementations, the relative position evaluator 1326 determines the relative position of a bounding box within the document under test. For example, in some implementations, the relative position evaluator 1326 determines that the position of the facial image 1410 is too close to the left edge of the document and/or the signature 1438 bounding box extends too far up from the bottom edge of the document under test, i.e., CADL 1400, based on a bounding box template included in the document assembly object.
[0155] In some implementations, functionality of one or more of the bounding box presence/absence evaluator 1308 and the inter-bounding box evaluator 1310 is at least partially performed by comparing the bounding box template to the bounding boxes derived from the document under test to determine whether overlap exists. For example, a determination is made as to whether the bounding boxes in the document under test are within the template bounding boxes or within a predetermined margin of error, which may account for variances and misalignments that may occur during the printing of valid documents, as sketched below. In some implementations, when an overlap exists, the content of the overlapping bounding boxes (e.g., a security feature object, field, field prefix, etc.) expected to be present is present and in the expected relative position. In some implementations, when there is no overlap, e.g., a detected object is not present in the bounding box template of the document assembly object or a bounding box associated with an expected (e.g., mandatory) object or text is absent, the bounding box presence/absence evaluator 1308 and/or the inter-bounding box evaluator 1310 may extend the area of search.
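The following is a minimal, non-limiting Python sketch of comparing a detected bounding box to a template bounding box with a predetermined margin of error; the (x, y, width, height) box format and the 5-pixel margin are assumptions.

```python
def within_template(detected, template, margin: int = 5) -> bool:
    """True if the detected box lies within the template box padded by margin,
    accommodating printing variances in valid documents."""
    dx, dy, dw, dh = detected  # (x, y, width, height); format is an assumption
    tx, ty, tw, th = template
    return (dx >= tx - margin and dy >= ty - margin and
            dx + dw <= tx + tw + margin and dy + dh <= ty + th + margin)

print(within_template((102, 48, 80, 20), (100, 50, 80, 20)))  # True with a 5px margin
```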
[0156] In some implementations, the inter-bounding box evaluator 1310 includes a blur comparator 1328, which is described further below.
[0157] The intra-bounding box evaluator 1312 performs one or more intra-bounding box evaluations. Examples of intra-bounding box evaluations include, but are not limited to, one or more of an evaluation of the microprint within a bounding box (e.g., using color information and/or a reconstructed microprint or snippet thereof), an evaluation of the textual content within a bounding box (e.g., the textual content, font, font size, font style, capitalization, font color, intercharacter spacing, blur, bounding box width consistency with expectation for number of characters present, etc.), an evaluation of the object in the box (e.g., to see if an object such as a seal is intact, is occluded, or is modified), an evaluation of whether a purported signature is consistent with a font (e.g., Lucida Console, which is used by some as a font for electronic signatures), an evaluation of whether a document holder image is consistent with one or more requirements (e.g., whether the dimensional and/or feature-based requirements are met), etc.
[0158] In some implementations, the intra-bounding box evaluator 1312 includes a background/microprint evaluator 1342. The background/microprint evaluator 1342 analyzes (e.g., using similarity matching or anomaly detection) the background/microprint within a bounding box associated with a document under test to determine whether the background/microprint, or lack thereof, indicates manipulation.
[0159] Referring now to Figure 14B, a portion 1460 of the example CADL 1400 under test is enlarged and illustrated. In Figure 14B, the date of birth field has been modified by adding new, red text, i.e., 08/31/22, in a text box with a white background, thereby destroying the microprint background in the area associated with the DOB field 1432. In some implementations, the destruction, or alteration, of microprint is determined by the background/microprint evaluator 1342 and is indicative of manipulation and increases the likelihood that the document under test is invalid. While the destruction of the microprint in the DOB field 1432 is fairly apparent for clarity of demonstration, it should be recognized that in some documents under test, the destruction may be more limited and more difficult to detect with the human eye. For example, assume a nefarious person wanted to change the day in the DOB from “31” to “01” and carefully deleted the “3” before adding the “0” in its place. Such a manipulation would result in some white pixels in the center and at the edge of the “0,” which may be difficult to see with the human eye due to the small size. If the nefarious user chose to fill those pixels with some adjacent color, rather than leaving them white, the manipulation could be undetectable to the human eye. The background/microprint evaluator 1342 may detect such manipulations by evaluating the background/microprint in some implementations.
[0160] In Figure 14B, the first name field 1442 has been modified to read “IMANOTTA” by adding “NOTTA” as a suffix to the “IMA” present in CADL 500. In some implementations, the background/microprint evaluator 1342 evaluates one or more boundaries within a bounding box. For example, in some implementations, the background/microprint evaluator 1342 evaluates one or more boundaries between the background microprint and the edge of the text. When comparing the edge of the “IMA” text to that of the “NOTTA” at 1442, it is apparent that the “NOTTA” text has a crisper edge. In some implementations, the background/microprint evaluator 1342 detects such differences, which may be indicative of digital manipulation, e.g., by detecting sharp changes in pixel intensity that may indicate tampering.
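By way of non-limiting illustration, the following Python sketch quantifies edge crispness as the mean gradient magnitude along detected edges, which is one plausible way to capture the crisper “NOTTA” boundary described above; the Sobel/Canny formulation and threshold values are assumptions.

```python
import cv2
import numpy as np

def mean_edge_sharpness(snippet_gray: np.ndarray) -> float:
    """Average gradient magnitude along detected edges; digitally inserted text
    tends to score higher (crisper edges) than printed-and-photographed text."""
    gx = cv2.Sobel(snippet_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(snippet_gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    edges = cv2.Canny(snippet_gray, 100, 200) > 0
    return float(magnitude[edges].mean()) if edges.any() else 0.0
```

Comparing the value for a snippet of the “IMA” characters to that for the “NOTTA” characters may surface the difference in edge crispness.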
[0161] In some implementations, the background/microprint evaluator 1342 may evaluate for continuity along an edge, such as a boundary between a bounding box and its surroundings. For example, referring to Figure 29, a portion 2900 of a document under test with microprint is illustrated in accordance with some implementations. The illustrated portion 2900 includes a partial facial image 2902 and an adjoining portion 2906 in the document under test. The boundary line 2904 illustrates the boundary between 2902 and 2906. As can be seen, there are continuous microprint features (i.e., lines) from 2902 to 2906 (or vice versa) that cross the boundary 2904. In some implementations, the boundary 2904 may be associated with a bounding box (not shown) surrounding the facial image. In some implementations, the background/microprint evaluator 1342 may evaluate edges/boundaries such as 2904 to determine whether a discontinuity in the microprint/background exists at or near (e.g., within a specified distance of) the boundary.
[0162] As another example, referring now to Figure 27, an image snippet 2702 of a string of numerals (i.e., 1603513645) from a document under test in grayscale is illustrated in magnified format. In the image snippet 2702, even though it is magnified, the destruction of the microprint/background is difficult for the human eye to discern. For example, the numeral “3” at 2712 appears to have been copy-pasted at 2714 based on the background, and some of the other numerals appear to have been copy-pasted (e.g., from other positions in this document under test or another instance of the document).
[0163] In some implementations, the background/microprint evaluator 1342 may convert at least a portion of the image to greyscale and apply a threshold. For example, referring to Figure 28, the background/microprint evaluator 1342 has masked out the glyphs of the numeric field and applied a threshold to the greyscale image 2802, so that pixels with a value above 160, in the pixel range of 0 to 255, are white, and pixel values equal to or less than 160 are black. In this representation, it can be seen that some regions of microprint/background are darker than expected, others are lighter, and some expected patterns are not present, as indicated in Figure 28.
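The following is a minimal, non-limiting Python sketch of the greyscale-and-threshold step described above; the input file name is hypothetical, and the 160 cutoff mirrors the example.

```python
import cv2

# Hypothetical input: a greyscale snippet with the glyphs already masked out.
img = cv2.imread("masked_snippet.png", cv2.IMREAD_GRAYSCALE)
# Pixels with a value above 160 (range 0-255) become white; the rest become black.
_, binary = cv2.threshold(img, 160, 255, cv2.THRESH_BINARY)
cv2.imwrite("threshold_view.png", binary)  # regions darker/lighter than expected stand out
```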
[0164] The background/microprint evaluation performed by the background/microprint evaluator 1342 may vary depending on the implementation and use case. Examples of background/microprint evaluations that may be applied by the background/microprint evaluator 1342 include, but are not limited to, one or more of an average value difference within a bounding box, a comparison between the reconstructed background/microprint and that present in the document under test, and a machine learning model (e.g., a convolutional neural network or other AI/ML model) trained on digitally manipulated text fields over microprint areas.
[0165] In some implementations, the background/microprint evaluator 1342 applies an average value difference. For example, referring to Figure 27, the background/microprint evaluator 1342 determines a background (e.g., a portion in the bounding box snippet not obscured by the text or object therein) in the document under test, such as the background for the first instance of the numeral “3” in the grayscale representation of 2702, and takes an average grayscale value of that background/microprint. The background/microprint evaluator 1342 determines the corresponding background in the reconstructed background/microprint, also converted to grayscale, and obtains that average grayscale value, which is compared to the average grayscale value associated with the document under test to determine whether a match exists. Such an evaluation may detect destroyed or manipulated backgrounds or microprint, e.g., by determining where the unobstructed microprint/background is too light or dark, and may be relatively inexpensive computationally. [0166] In some implementations, the background/microprint evaluator 1342 may analyze grayscale (or color) information in the frequency domain, as tall and narrow spikes in the frequency domain may indicate a level of uniformity in gray (or in one or more colors) atypical of what would be expected in an image of a document that was not digitally manipulated.
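The following is a minimal, non-limiting Python sketch of the average-value-difference evaluation; the boolean glyph mask and the tolerance value are assumptions.

```python
import numpy as np

def background_average_matches(test_gray: np.ndarray, reference_gray: np.ndarray,
                               glyph_mask: np.ndarray, tolerance: float = 10.0) -> bool:
    """Compare the mean grayscale of the unobstructed background in the document
    under test against the corresponding reconstructed background."""
    background = ~glyph_mask  # pixels not covered by text or objects
    diff = abs(float(test_gray[background].mean()) -
               float(reference_gray[background].mean()))
    return diff <= tolerance  # a background that is too light or too dark fails
```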
[0167] In some implementations, a color version of a snippet, such as a color version of snippet 2702 (not shown), may be analyzed by the background/microprint evaluator 1342. For example, the background/microprint evaluator 1342 converts the color snippet into a different color space, such as a hue saturation value (HSV) color scale, to control for variations in different camera sensors, lighting conditions, etc., and in some implementations, the background/microprint evaluator 1342 may analyze that HSV color information in the frequency domain, as tall and narrow spikes in the frequency domain may indicate a level of uniformity in color atypical of what would be expected in an image of a document that was not digitally manipulated.
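By way of non-limiting illustration, the following Python sketch pairs the HSV conversion with one plausible frequency-domain uniformity check; treating a dominant non-DC spike in the 2D spectrum of the hue channel as a “tall and narrow” spike is an assumed formulation, and the spike ratio is illustrative.

```python
import cv2
import numpy as np

def hue_spectrum_spike(snippet_bgr: np.ndarray, spike_ratio: float = 0.05) -> bool:
    """Flag atypical color uniformity via a dominant spike in the hue spectrum."""
    hsv = cv2.cvtColor(snippet_bgr, cv2.COLOR_BGR2HSV)
    spectrum = np.abs(np.fft.fft2(hsv[:, :, 0].astype(np.float64)))
    spectrum[0, 0] = 0.0  # ignore the DC (mean) component
    # A single frequency holding an outsized share of the energy suggests
    # uniformity atypical of an unmanipulated document image.
    return spectrum.max() / max(spectrum.sum(), 1e-9) >= spike_ratio
```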
[0168] In some implementations, the background/microprint evaluator 1342 compares a snippet of the document under test to a corresponding snippet from the reconstructed background/microprint to determine whether a difference exists between the portion(s) of the background/microprint in the document under test that are unobstructed by text or an object and the reconstructed microprint.
[0169] In some implementations, the background/microprint evaluator 1342 trains and applies a machine learning (ML) model trained on digitally manipulated text fields over microprint areas. For example, the background/microprint evaluator 1342 trains and applies a convolutional neural network, or other machine learning model, to detect such manipulations (e.g., to identify whether a boundary of the text or associated artifacts are indicative of fraud).
[0170] In some implementations, the intra-bounding box evaluator 1312 includes a text evaluator 1344. The text evaluator 1344 determines one or more of a similarity and an anomaly between the text of a document under test and the text described in the document assembly object, which describes and/or represents (e.g., using snippets) valid instances of the document or portions thereof.
[0171] In some implementations, the text evaluator 1344 evaluates one or more of a textual content, font, font size, font style, orientation (e.g., horizontal or vertical), capitalization, font color, intercharacter spacing, bounding box width consistency with expectation for number of characters present, blur, etc. associated with text in the document under test and determines whether the one or more of the textual content, font, font size, font style, orientation, capitalization, font color, intercharacter spacing, bounding box width consistency with expectation for number of characters present, blur, etc. are consistent with that/those of a valid document. For example, assume that, when the CADL 1400 under test is processed by the OCR engine 306, bounding boxes analogous to 602 and 604 in Figure 6 and associated snippets, as represented by snippets 1502 and 1504 in Figure 15, respectively, are generated.
[0172] Referring to Figure 15, in some implementations, the text evaluator 1344 may analyze the text (e.g., in a snippet). In the illustrated implementation, the text evaluator 1344 has analyzed snippet 1502, thereby generating the result set 1512, and analyzed the snippet 1504, thereby generating the result set 1514. In the illustrated implementation, the result 1512 includes the text (i.e., “California”) present in snippet 1502; a set, or subset, of fonts recognized by the text evaluator (e.g., “Arial Bold,” “Roboto Medium,” etc.) and a similarity, or dissimilarity, score associated with each font in the provided set (e.g., “14567.448...” and “14709.592...,” etc., respectively); the font determined to be present in snippet 1502 (i.e., “Arial Bold”); and a tag or label (i.e., “state” in result 1512). The result 1514 includes analogous components. The text evaluator 1344 may compare the text content (e.g., “California”) and the font characteristics (e.g., “Arial Bold”) to the text content and font characteristics included in the document assembly object to determine whether a match exists.
[0173] It should be recognized that the snippets 1502 and 1504, the results 1512 and 1514, and components of the results, e.g., 1522, 1524, and 1526, are merely examples, and variations are expected and within the scope of this disclosure. For example, while snippets that are more likely to be modified (e.g., associated with a name field, DOB, etc.) are not shown, such snippets are evaluated in some implementations. As another example, the illustrated results show a determined font (i.e., “Arial Bold” at 1526), which may be compared to the font in the document assembly object determined, from one or more valid instances of the document, for that portion of the ID. In some implementations, the text evaluator 1344 may determine other or additional characteristics of the text such as, but not limited to, one or more of a font size (e.g., 8 pt.), font color (e.g., using the red, green, blue (RGB) or cyan, magenta, yellow, black (CMYK) or other color representation model), font style (e.g., italic, bold, underlined), orientation (e.g., horizontal or vertical), and the capitalization scheme (e.g., all caps, caps and small caps, or caps and lower case letters), which may be compared to corresponding information in the document assembly object. [0174] It should be noted that, while blur is described above with reference to the text evaluator 1344, blur may be applied to non-text fields, in some implementations, without departing from the description herein. For clarity and convenience, the evaluation of blur in a document image is described in greater detail with reference to the blur determiner 1346 and the blur comparator 1328. However, it should be recognized that the features and functionality described with reference to the blur determiner 1346 and the blur comparator 1328 may be moved to other components, subcomponents, sub-subcomponents, etc. of the system 100 described herein without departing from this disclosure. For example, the processing of an image snippet to determine the measure(s) of blur is described below in reference to the blur determiner 1346. However, in some implementations, the image preprocessor 302 may process text (e.g., in a bounding box defined snippet) and determine the measure(s) of blur, which is/are provided to the blur comparator 1328.
[0175] For clarity and convenience, the features and functions of the blur determiner 1346 and the blur comparator 1328 are described with reference to an example portion 2302 of text extracted from a document under test illustrated in Figure 23. In Figure 23, a portion of a passport document under test is shown unmagnified at 2302 and magnified at 2302b for ease of explanation and reference. As labeled in the magnified portion 2302b, the illustrated portion of the passport document includes a field label 2312 for an individual’s surname (i.e., “Surname/Nom”), the individual’s surname 2314 (i.e., “KYRSTIN”), a field label 2316 for the individual’s given, or first, name (i.e., “Given names/Prenoms”), and the individual’s given name 2318 (i.e., “POLK”). When an image of a document is taken, some amount of blurring may occur. For example, the blurring may be introduced by the image format and associated compression algorithms (e.g., JPEG, which uses lossy compression) and/or the camera’s resolution. When a nefarious user modifies a document image, e.g., using photo editing software such as Adobe Photoshop, the nefarious user may type the desired information over the image of an otherwise valid document under test. When this is done, it is atypical for the nefarious user to blur that inserted text. In the magnified instance 2302b, the “KYRSTIN” at 2314 and “POLK” at 2318 do exhibit some blurring due to enlargement from 2302, but it may be apparent that “KYRSTIN” at 2314 and “POLK” at 2318 are not as blurred as their respective field labels 2312 and 2316. When looking at the unmagnified instance at 2302, it may or may not be clear to the human eye that the “KYRSTIN” and “POLK” are not blurred, and if the image is being evaluated by the human eye, the manipulation may go undetected. However, the system 100 may beneficially use one or more of images captured at different magnifications, computer vision, and measures of blur to detect differences that may be undetectable by the human eye. [0176] In some implementations, the intra-bounding box evaluator 1312 includes a blur determiner 1346. The blur determiner 1346 determines one or more measures of blur for a given portion of the document under test. Depending on the implementation, the portion of the document under test may be an individual character or subset of characters (e.g., for a component-by-component analysis within a field/string), a string of characters (e.g., a string such as a field label, the field content, etc.), or both. In some implementations, the portion of the document is associated with a bounding box. For example, the blur determiner 1346 determines one or more measures of blur for a snippet representing the given portion of the document under test.
[0177] The one or more measures of blur determined, by the blur determiner 1346, for a given portion of the document under test may vary depending on the implementation. For example, a set of one or more measure of blur values for a given portion of the document under test may be determined by applying one or more of Canny edge detection, Laplacian variance, and Cepstral techniques. However, it should be understood that other measures of blur values, and methods for determining those values, exist and may be applied without departing from the description herein.
[0178] In some implementations, the blur determiner 1346 determines a Canny edge detection value as a measure of blur value. Referring now to Figure 24A, an output 2402 of an application of Canny edge detection to portion 2302 is illustrated in accordance with some implementations. As illustrated, the edges 2412 and 2416 corresponding to the field labels 2312 and 2316, respectively, are relatively incomplete when compared to the edges 2414 and 2418 corresponding to the field content 2314 and 2318. This is further indicated by the maximum Canny edge detection value of 150 for text 2312, 2314, 2316, and 2318 in portion 2302, which is half the maximum Canny edge detection value of 300 determined when analyzing only the “KYRSTIN” and “POLK” fields at 2314 and 2318 in portion 2302, as illustrated in Figure 24B.
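The following is a minimal, non-limiting Python sketch of a Canny-based measure of blur; using edge-pixel density over a snippet is an assumed formulation (the maximum-value metric above may be computed differently), and the hysteresis thresholds are illustrative.

```python
import cv2
import numpy as np

def canny_edge_density(snippet_gray: np.ndarray, lo: int = 100, hi: int = 200) -> float:
    """Fraction of pixels on detected edges; sharper (less blurred) text yields
    more complete edges and therefore a higher value."""
    edges = cv2.Canny(snippet_gray, lo, hi)
    return float((edges > 0).mean())
```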
[0179] In some implementations, the blur determiner 1346 determines a Laplacian variance as a measure of blur value. For example, referring to Figure 25, a portion of a document under test is illustrated with bounding boxes and associated Laplacian variances in accordance with some implementations. In Figure 25, bounding box 2502 is associated with the field label for an individual’s surname (i.e., “Surname/Nom”), bounding box 2504 is associated with the individual’s surname 2314 content field (i.e., “KYRSTIN”), bounding box 2506 is associated with the field label 2316 for the individual’s given, or first, name (i.e., “Given names/Prenoms”), and bounding box 2508 is associated with the individual’s given name 2318 field content (i.e., “POLK”). In some implementations, the bounding boxes 2502-2508 are generated by the OCR engine 306, and the blur determiner 1346 applies Laplacian variance to each image snippet generated from and representing the portions within the bounding boxes 2502-2508. Although the foregoing examples are fields and field labels, a measure of blur may be determined on a character-by-character basis or for individual fonts.
[0180] As illustrated, the blur determiner 1346 determines, at 2512, a Laplacian variance of 25.6 for the text within bounding box 2502; determines, at 2514, a Laplacian variance of 82.7 for the text within bounding box 2504; determines, at 2516, a Laplacian variance of 22.1 for the text within bounding box 2506; and determines, at 2518, a Laplacian variance of 94.3 for the text within bounding box 2508.
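The following is a minimal, non-limiting Python sketch of the Laplacian variance measure of blur applied to a bounding-box snippet; the snippet is assumed to be a grayscale image array.

```python
import cv2
import numpy as np

def laplacian_variance(snippet_gray: np.ndarray) -> float:
    """Variance of the Laplacian; lower values indicate more blur (compare the
    ~22-26 values for the field labels above to ~83-94 for the crisper content)."""
    return float(cv2.Laplacian(snippet_gray, cv2.CV_64F).var())
```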
[0181] In some implementations, the blur determiner 1346 applies Cepstral techniques to determine a measure of blur value. In some implementations, Cepstral analysis techniques may apply a double Fast Fourier Transform (FFT) to determine a Point Spread Function, which may convert the portion of the image into the frequency domain (not shown). In some implementations, the blur determiner 1346 applies Cepstral techniques to a first portion of the document comprising invariant information, i.e., information that does not change from document instance to document instance, such as field labels, seals, document titles, etc., thereby obtaining a first measure of blur value; applies Cepstral techniques to personally identifiable information that changes from document instance to document instance to identify the particular document holder, thereby obtaining a second measure of blur value; and makes the two measure of blur values available to the blur comparator 1328 for comparison, with a greater difference being indicative of a greater likelihood of manipulation according to some implementations.
[0182] In some implementations, the blur determiner 1346 applies a histogram analysis to determine a measure of blur. Referring now to Figure 26, graphs associated with the blue and red color channels for a manipulated field (using black text) and an unmanipulated field (using black text) are illustrated. It should be recognized that green (not shown) may be used instead of, or in addition to, the red and blue channels. It should also be recognized that the example uses an RGB color representation model and that others exist and may be used. When comparing the graphs 2602 and 2604 of a snippet of a manipulated field to the respective graphs 2612 and 2614 of an unmanipulated field snippet, some characteristics are apparent, which may be captured by the blur determiner 1346 in one or more measure of blur values and used by the blur comparator 1328. For example, the maximum value/bar height is twice as high in the manipulated snippet, and the bars to the right are much lower. This conceptually makes sense, as text added using a photo editor would be consistent (e.g., in color) and create a concentration (spike) in the graph. For example, unmanipulated black text will have more blur and not be “as black,” so the peak at zero on the X axis will not be as high in unmanipulated fields, and there will be more of a tail to the right of the graph. In some implementations, the one or more measure of blur values based on an application of histogram analysis may include, but are not limited to, one or more of a mean, median, mode, standard deviation, range, inter-quartile range, variance, etc. for one or more color channels in the document under test. In some implementations, the one or more measure of blur values based on an application of histogram analysis may include a distance measure (e.g., Chi-squared, Euclidean distance, normalized Euclidean distance, intersection, normalized intersection, etc.) between two histograms: one histogram from the document under test (e.g., for a first channel) and one reference histogram (e.g., for the first channel based on one or more valid instances for the same portion of the document).
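The following is a minimal, non-limiting Python sketch of a per-channel histogram comparison consistent with the distance measures listed above; the OpenCV channel indexing, the 256-bin layout, and the normalization are assumptions.

```python
import cv2
import numpy as np

def channel_histogram(snippet_bgr: np.ndarray, channel: int) -> np.ndarray:
    """Normalized 256-bin histogram for one color channel (OpenCV order: 0=B, 1=G, 2=R)."""
    hist = cv2.calcHist([snippet_bgr], [channel], None, [256], [0, 256])
    return cv2.normalize(hist, hist).ravel()

def histogram_distance(test_hist: np.ndarray, ref_hist: np.ndarray) -> float:
    """Chi-squared distance between a document-under-test histogram and a reference
    histogram from valid instances; a larger distance suggests manipulation."""
    return float(cv2.compareHist(test_hist, ref_hist, cv2.HISTCMP_CHISQR))
```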
[0183] In some implementations, multiple measures of blur using different approaches may be determined. For example, in some implementations, one or more measures of blur determined by histogram analysis of the color channel(s) may be used in addition to, e.g., the result of the Laplacian variance and/or Canny edge detection, to measure the blur, and subsequently identify the document under test as valid or invalid (e.g., digitally manipulated) based on the measures of blur.
[0184] In some implementations, the blur comparator 1328 is communicatively coupled to the blur determiner 1346. For example, in some implementations, the blur comparator 1328 is communicatively coupled to receive the one or more measures of blur associated with two or more portions of the document under test.
[0185] The blur comparator 1328 compares two or more measure of blur values. In some implementations, the blur comparator 1328 compares the measure of blur value for one portion of a document under test to the measure of blur value for another portion of the same document under test. In some implementations, the blur comparator 1328 compares the measure of blur values in kind. For example, the blur comparator 1328 compares a first Canny edge detection value for a first portion of a document under test to a second Canny edge detection value for a second portion of that document under test and/or compares the Laplacian variances for those portions of the document under test.
[0186] In some implementations, the blur comparator 1328 determines, based on the comparison of two or more measure of blur values, whether a threshold is satisfied. The threshold may vary based on one or more of the implementation, use case, and measure of blur value(s) used. Examples of thresholds include, but are not limited to, a raw difference (e.g., a difference in Laplacian variance greater than 40), a factor (e.g., a max Canny difference greater than a factor of 1.5), a percentage (e.g., where the larger of the two Laplacian variances is greater than 300% of the lower value), etc.
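The following is a minimal, non-limiting Python sketch of the three example threshold forms; the cutoff values mirror the examples above but remain assumptions.

```python
def exceeds_raw_difference(a: float, b: float, limit: float = 40.0) -> bool:
    """E.g., a difference in Laplacian variance greater than 40."""
    return abs(a - b) > limit

def exceeds_factor(a: float, b: float, limit: float = 1.5) -> bool:
    """E.g., max Canny values differing by more than a factor of 1.5."""
    return max(a, b) / max(min(a, b), 1e-6) > limit

def exceeds_percentage(a: float, b: float, limit: float = 3.0) -> bool:
    """E.g., the larger Laplacian variance exceeding 300% of the smaller."""
    return max(a, b) > limit * min(a, b)
```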
[0187] In some implementations, the threshold may be dynamic. For example, the blur comparator 1328 uses machine learning (e.g., supervised machine learning using snippets labeled as invalid or valid) to set the threshold(s), and periodically retrains to adjust the threshold(s). As another example, in some implementations, a customer for whom the documents are being validated may adjust the thresholds to change or maintain one or more of a number of false positives and false negatives.
[0188] In some implementations, a threshold is used to provide a tolerance or margin of error, as some degree of variability (e.g., noise) in a measure of blur is to be expected even absent document manipulation. For example, compare the “25.6” Laplacian variance value at 2512 for the unmanipulated field label 2312 to the “22.1” Laplacian variance value at 2516 for the unmanipulated field label 2316. The presence of some degree of variability may be independent of the actual measure of blur, so some variation is expected whether using Canny edge detection, Laplacian variance, Cepstral techniques, another valuation method, or a composite of multiple valuation methods; however, the natural variance, or noise, may vary in degree based on the method of valuation, and the blur comparator 1328 may set different thresholds for different measures of blur accordingly. Therefore, in some implementations, the blur comparator 1328 performs a comparison not to determine whether just any difference exists, but to determine whether a degree of difference between two measure of blur values indicates that an inconsistency is present, or, stated differently, whether the difference is indicative of an inconsistency or manipulation of the document. For example, referring to Figures 24A and 24B, the blur comparator 1328 compares the Canny edge detection max values (i.e., 150 and 300, respectively) and determines that an inconsistency is present. As another example, referring to Figure 25, the blur comparator 1328 compares the Laplacian variances of 25.6, 82.7, 22.1, and 94.3 at 2512, 2514, 2516, and 2518, respectively, and determines that an inconsistency indicative of manipulation is present.
[0189] It should be recognized that Figures 24A and 24B compare relative blurriness between field content and field labels, and Figure 25 compares blurriness on a field-by-field (or string-by-string) basis. In some implementations, measure of blur value(s) may be determined on a character-by-character, or other subcomponent-by-subcomponent, basis, which may identify partial manipulation of a string. For example, referring now to Figure 14B, a comparison of one or more measure of blur values generated by the blur determiner 1346 for a first character selected from the subset “IMA” within 1442 to one or more measure of blur values generated by the blur determiner 1346 for a second character selected from the subset “NOTTA” within 1442 would, when compared by the blur comparator 1328, result in the blur comparator 1328 determining that an inconsistency in the blur is present. The inconsistency in the blur may be indicative of document, or document image, manipulation and/or invalidity.
[0190] In some implementations, the blur may be evaluated for inconsistencies within a string of text (e.g., between characters within the personally identifiable information of a content field), which may be referred to as “at the character level” or similar, between strings of text (e.g., between one or more characters comprising one string of text and one or more characters comprising another string of text) within a document under test, which may occasionally be referred to as “at the field level” or similar, between strings of text from different documents under test, or a combination thereof. Additionally, while the discussion herein focuses on blur with reference to text, the evaluation of blur may be extended to other security features, e.g., seals, facial images, etc.
[0191] In some implementations, the intra-bounding box evaluator 1312 includes a document image holder evaluator 1348. The document image holder evaluator 1348 analyzes a document holder image to determine whether the document holder image is internally consistent with one or more rules associated with valid document instances. For example, the document image holder evaluator 1348 determines whether a document holder image from a UK passport under test (not shown) complies with a set of checks based on the dimensional requirements illustrated in Figure 21B. For example, the document image holder evaluator 1348 determines whether a document holder image from a UK passport under test (not shown) complies with one or more checks based on the feature-based requirements, or prohibitions, illustrated in Figure 22.
[0192] In some implementations, the document image holder evaluator 1348 may train, validate, optimize, or apply one or more machine learning models to analyze the document holder image. In some implementations, the document image holder evaluator 1348 uses one or more machine learning models to analyze one or more dimensional requirements associated with the document holder image. For example, the document image holder evaluator 1348 uses one or more machine learning models to extract one or more dimensions associated with the document holder image, and the document image holder evaluator 1348 determines whether the extracted dimension(s), aspect ratio(s), etc. are consistent with a valid document instance based on the document assembly object. In some implementations, the one or more models may use a post-processed version of the document holder image so the image is normalized, de-skewed, etc., which may improve the accuracy of generated dimensional information.
[0193] In some implementations, the document image holder evaluator 1348 uses one or more machine learning models to analyze one or more feature-based requirements (or prohibitions) associated with the document holder image. For example, the document image holder evaluator 1348 may apply one or more machine learning models to determine whether headwear (i.e., an object) is present in the document holder image and/or classify headwear (e.g., as fashion headwear, a hair accessory, or religious headwear). As another example, the document image holder evaluator 1348 may apply one or more machine learning models to determine whether a shadow, object, texture, or unacceptable color is present in the background. As another example, the document image holder evaluator 1348 may apply one or more machine learning models to determine whether hair, glasses, or a shadow is obstructing a portion of the document holder’s face, whether eyes are open, a direction of gaze, whether the head is square to the camera or tilted, a facial expression (e.g., neutral, smiling, other), etc.
[0194] In some implementations, the document image holder evaluator 1348 architecture is adaptive and dynamic over time. For example, in some implementations, the document image holder evaluator 1348 may have a large set of machine learning models available to it to check for various features or dimensions and, based on the document under test and the checks associated with that document, apply only those machine learning models, or a subset thereof, relevant to the document under test. As another example, when a new machine learning model is trained, e.g., one that outperforms an existing model or identifies a new feature or dimension, that new machine learning model may be added to the set of machine learning models available to the document image holder evaluator 1348. Therefore, in some implementations, the document image holder evaluator 1348 may continue to improve its evaluations and/or extend the scope of evaluations which it may perform over time.
[0195] In some implementations, the evaluations by one or more of the bounding box presence/absence evaluator 1308, the inter-bounding box evaluator 1310, the intra-bounding box evaluator 1312, or the subcomponents 1322, 1324, 1326, 1328, 1342, 1344, 1346, 1348 thereof may use a direct check or derived check included in the document assembly object. For example, referring to portion 1206 of Figure 12, three heuristic rules are included as checks. In some implementations, the intra-bounding box evaluator 1312 may use these rules from the document assembly object to generate the intermediate results of whether the document number has the correct length and alphanumeric composition.
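By way of non-limiting illustration, the following Python sketch expresses document-number heuristics of the kind described; the pattern (one letter followed by seven digits) is a hypothetical rule, not necessarily the rule set in portion 1206.

```python
import re

def document_number_checks(doc_number: str) -> dict:
    """Generate intermediate results for assumed length/composition rules."""
    return {
        "correct_length": len(doc_number) == 8,
        "alphanumeric_composition": re.fullmatch(r"[A-Z]\d{7}", doc_number) is not None,
    }

print(document_number_checks("I1234568"))
# {'correct_length': True, 'alphanumeric_composition': True}
```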
[0196] In some implementations, the outcome of any one of the evaluations performed by one or more of the bounding box presence/absence evaluator 1308, the inter-bounding box evaluator 1310, the intra-bounding box evaluator 1312, or the subcomponents 1322, 1324, 1326, 1328, 1342, 1344, 1346, 1348 thereof may not be definitive for determining whether the document under test is valid or invalid. For example, an inconsistency between the font determined by the text evaluator 1344 and the font in the document assembly object may not definitively indicate that the document is invalid, since the font determination (e.g., a font classifier applied by the text evaluator 1344) may have trouble distinguishing between two similar fonts. Accordingly, the results of the evaluations performed by one or more of the bounding box presence/absence evaluator 1308, the inter-bounding box evaluator 1310, the intra-bounding box evaluator 1312, or the subcomponents 1322, 1324, 1326, 1328, 1342, 1344, 1346, 1348 thereof are occasionally used and referred to as intermediary results.
[0197] The verification determiner 1314 determines whether to verify the document under test. In some implementations, the verification determiner 1314 obtains at least a subset of the intermediary results generated by one or more of the bounding box presence/absence evaluator 1308, the inter-bounding box evaluator 1310 or its subcomponent(s), and the intra-bounding box evaluator 1312 or its subcomponent(s) and, based on at least a subset of the intermediary results, determines whether the document under test is a valid instance of the document. In some implementations, the verification determiner 1314 may obtain the intermediary results from the document database 242.
[0198] In some implementations, the verification determiner 1314 obtains other information (e.g., context information, a decision history, etc.) and, based at least in part on the other information, determines whether the document under test is a valid instance of the document. For example, the verification determiner 1314 may query the document database 242 to determine whether the user’s information (e.g., client device 106 identifier) is associated with previously received documents rejected as invalid, or to determine whether the document ID number in the document under test (e.g., a driver’s license number) has been associated with other verification requests and whether the document was determined to be verified/valid or invalid and/or associated with different information (e.g., different names appearing on different documents with the same doc ID). [0199] Depending on the implementation and use case, the verification determiner 1314 may apply one or more of heuristics, statistical analysis, and AI/ML model(s) to determine whether the document under test is verified. For example, the verification determiner 1314 may apply one or more heuristics, such as rejecting the document under test as invalid when the facial image and ghost image do not match, or rejecting the document under test as invalid when the content in the DOB field is inconsistent with the content of other related bounding boxes (e.g., not repeated in those portions of the ID). As another example, the verification determiner 1314 may use statistical analysis, such as assigning a value of “1” to an intermediate result that indicates a match/similarity/consistency and a “0” to an intermediary result that indicates an anomaly/mismatch/inconsistency is detected, and determining whether an average or weighted average satisfies a verification threshold, as sketched below. As another example, the verification determiner 1314 may use machine learning to perform feature set reduction to reduce (e.g., based on information gain) the number of intermediary results (and associated evaluations) used for a particular document and tune associated parameters (e.g., their relative weighting in a weighted average). It should be noted that the above are merely examples of heuristics, statistical analysis, and AI/ML models that may be used by the verification determiner 1314. The verification determiner 1314 may use other or different mechanisms without departing from the disclosure herein.
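The following is a minimal, non-limiting Python sketch of the weighted-average analysis described above; the intermediary result names, the weights, and the verification threshold are assumptions.

```python
def verify(intermediate_results: dict, weights: dict, threshold: float = 0.8) -> bool:
    """Each intermediary result is 1 (consistent) or 0 (anomaly); verify when the
    weighted mean satisfies the verification threshold."""
    total = sum(weights.values())
    score = sum(weights[name] * intermediate_results[name] for name in weights) / total
    return score >= threshold

results = {"font_match": 1, "dob_consistency": 0, "microprint": 1}
weights = {"font_match": 1.0, "dob_consistency": 2.0, "microprint": 1.5}
print(verify(results, weights))  # False: the heavily weighted DOB inconsistency dominates
```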
[0200] The verification determiner 1314 returns a verification result. For example, the verification determiner 1314 returns a result to a requesting customer, such as a bank, indicating that the document (e.g., the imaged photo ID) is not verified/invalid or is valid. As another example, the verification determiner 1314 returns a result to other system components, such as a liveness detector (not shown). In some implementations, a liveness detection may be performed before, or in parallel, with evaluation of the document by the document evaluator 226.
[0201] In some implementations, the verification determiner 1314 triggers an action or inaction based on the verification result. The liveness detector (not shown) may, e.g., compare a selfie of the user that provided the document image to the facial image in the document. In some implementations, the liveness detector (not shown) may be triggered by the verification determiner 1314 to execute based on the document being verified, as it may not be worth the time and computational resources to determine whether the person in the selfie is the same person in the fake ID document. In some implementations, the verification determiner 1314 may trigger other actions such as contacting law enforcement of the jurisdiction in which the user’s client device 106 is located (e.g., to report the attempted fraud or identity theft and providing associated information).
[0202] Referring now to Figure 16, an example of a document database 242 is illustrated in accordance with some implementations. The document database 242 manages, stores, and provides information related to documents, which may be used by the system 100 to perform the features and functionalities described herein. The document database 242 may comprise at least one relational database and/or at least one nonrelational database; therefore, the document database 242 is not necessarily a document-oriented database. In some implementations, the document database 242 may comprise a look up table (not shown) or relational database (not shown) with columns for class labels (e.g., document type, country, state, etc.) and a location or pointer of the associated document assembly object. In some implementations, the document assembly objects and snippets may be stored in a nonrelational/NoSQL portion of the document database 242, such as an object-oriented or document-oriented database. In some implementations, the document database 242 may include a graphical database, e.g., a dependency graph defining an order and dependency of various data lookups and verification checks.
[0203] The information related to documents stored by the document database 242 may include, but is not limited to, valid samples 1652 (whether provided by the issuer, determined to be verified/valid by the system 100, or both), unverified/invalid samples 1654 (whether provided by the issuer, determined to be unverified/invalid by the system 100, or both), preprocessed images of document(s) under test (not shown), post-processed images of document(s) under test (not shown), one or more document assembly objects 1656 each associated with a document supported by the system 100, the snippets (not shown) derived from valid samples and/or documents under test, context information 1658, intermediate results 1660 associated with one or more documents under test, and decision history 1662 describing the final verification (valid/invalid) decision for documents under test.
[0204] In some implementations, the document database 242 includes representations of fraudulent users, e.g., one or more of a snippet of the facial image from a document determined to be invalid; a biometric associated with a liveness check, such as a selfie image of a face, a video, a voice, etc. associated with an invalid document; and the information provided, or used, by the fraudulent user (e.g., images of the documents, signatures, document class/type used, etc.), which may be used by the system 100 to generate new checks and/or train an AI/ML model to generate validity checks targeting known fraudulent users and/or their methods (e.g., documents of choice). [0205] In some implementations, an instance of a document assembly object(s) 1656 stored by the document database 242 may include one or more of a set of class labels 1672 identifying the document described by the document assembly object 1656, one or more fields 1674 (e.g., mandatory fields, optional fields, field prefixes, etc.), one or more objects 1676 (e.g., security features such as images, holograms, watermarks, kinegrams, laser perforations, microprint, etc.), one or more properties 1678 (e.g., font, font color, font size, font style, orientation, capitalization scheme, microprint, text, etc.), position data 1680 (e.g., a bounding box template describing position(s) of one or more of at least one field, at least one field prefix, and at least one object), and a set of validation checks (e.g., direct check(s) and/or derived check(s)).
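The following is a minimal, non-limiting Python sketch of a record mirroring the document assembly object elements 1672-1680 listed above; the attribute names and types are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DocumentAssemblyObject:
    class_labels: List[str]        # 1672, e.g., ["drivers_license", "US", "CA"]
    fields: List[str]              # 1674: mandatory/optional fields, field prefixes
    objects: List[str]             # 1676: security features (holograms, microprint, ...)
    properties: dict               # 1678: font, font color, capitalization scheme, ...
    position_data: List[Tuple[str, Tuple[int, int, int, int]]]  # 1680: bounding box template
    validation_checks: List[str] = field(default_factory=list)  # direct/derived checks
```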
[0206] In some implementations, a subset of checks included in an instance of a document assembly object 1656 is a “local” check, which may be specific to that document, and, in some cases, those documents related (e.g., via inheritance) to that document. In some implementations, “global” security checks may be used and applied by the document evaluator 226 to multiple documents, e.g., security checks generalized to many documents using common security features.
[0207] In some implementations, a document assembly object instance includes one or more links. For example, at least one instance of the document assembly object(s) 1656 may include links to one or more snippets (e.g., from a valid sample), where the one or more snippets may be represented in a binary image format to be used in computer-vision and similarity checks, such as those described with reference to the decision engine 310 and/or its subcomponents. Examples of context information 1658 include, but are not limited to, location (physical and/or network), IP address, device identifier (e.g., MAC, electronic serial number, etc.), user ID (e.g., name, username, etc.), facial images (e.g., from selfies and/or documents), etc. As described herein, in some implementations, the context information may be used by the decision engine 310, e.g., to identify repeated fraudulent attempts and/or users or devices associated therewith and determine a level of risk or scrutiny to which a document under test is subjected.
[0208] In some implementations, intermediate results 1660 associated with one or more documents under test are stored by the document database 242. In some implementations, the intermediate results 1660 are stored beyond the time needed by the system 100 to evaluate and verify (or not) the document under test. For example, in some implementations, the intermediary results and other information associated with that document under test (e.g., one or more of a preprocessed image, post-processed image, and at least one snippet, etc.) may be archived for future use to enhance system 100 performance. For example, such information may be used to determine which intermediate results are the most frequently encountered and/or most highly predictive of fraud or invalidity so that, e.g., those evaluations may be applied by the associated component(s) of the decision engine 310 as a first tier of analysis to more efficiently triage documents under test. For example, such data may reveal that it would be more efficient in terms of time and computational resources to compare the inter-bounding box consistency of the repeated DOB information in the CADL 500 example as an early step, and only proceed to more intensive analysis (e.g., of the microprint) when that intermediate result is a “pass” and not a “fail.” As another example, the intermediate results may be useful in enhancing individual evaluators, e.g., as training and/or test data, or may be used to train other models.
[0209] The intermediate results 1660 may provide transparency. For example, the intermediary results may be used to explain to the user (e.g., the person providing the image of the document), or a requesting customer (e.g., a bank requesting verification of the document), why the document under test is not verified or is rejected.
[0210] In some implementations, the intermediate results 1660 may provide auditability. For example, assume it becomes apparent that the text evaluator 1344 cannot detect a particular attack vector involving a document number and provides a false negative (e.g., the text evaluator did not previously check that the initials and DOB comprised a portion of the document number for this document); in some implementations, the document database 242 may query the decision history 1662 for documents under test of that document class that passed (e.g., as a final verification decision) and pull the OCRed document number text from the associated intermediate results, so that those document numbers can be evaluated to determine which and/or how many documents were incorrectly verified and, potentially, trigger remedial action.
[0211] In some implementations, the decision history 1662 describes an overall verification decision (valid/invalid or accepted/rejected) for one or more documents under test processed by the document evaluator 226.
[0212] It should be apparent that the systems, methods, features, and functionalities described herein provide a number of potential benefits. For example, the systems, methods, features, and functionalities described herein may provide a highly flexible decision architecture that can rapidly adapt to keep up with the highly dynamic nature of document fraud and/or provide decisions quickly and/or efficiently, even on newly issued documents.

[0213] In some implementations, the cold start problem is reduced using the computer-vision based approaches described herein. In some implementations, the computer-vision based approaches described herein may allow a previously unsupported document (e.g., newly issued) to be supported and evaluated by the system 100 more quickly (e.g., in a day or two instead of weeks or months, as may be the case with (re)training an AI/ML model for the new document).
[0214] In some implementations, the systems, methods, features, and functionalities described herein may detect modifications or fraud undetectable by humans. For example, a sophisticated user of photo-editing software may be able to modify a document so that the modification/anomaly is indistinguishable to a human eye, but the systems, methods, features, and functionalities described herein may, in some implementations, identify such modifications.
[0215] In some implementations, the document assembly objects may be dynamic. For example, the document assembly object may be continuously improved as newly derived security features or checks are learned and added (e.g., via a feedback loop). For example, computer-vision based approaches described herein may be layered with AI/ML models to extract new combinations of features that may be indicative of validity or invalidity or detect and neutralize new vectors of attack (e.g., fraudulent modification).
[0216] In some implementations, the systems, methods, features, and functionalities described herein provide a modular architecture wherein components may be reused in the processing of multiple different documents, which may allow greater investment in the refinement and optimization of those components and allow those components to be “plug-and-play” for new documents. For example, in some implementations, one or more object detections performed by the object detection engine 308 and/or one or more evaluations performed by the decision engine 310 may be reused/reapplied on multiple different documents. For example, in some implementations, a laser perforation detection model may be trained, validated, retrained, optimized, etc., to detect laser perforations using edge detection and a circular Hough transformation, and the object detection engine 308 may apply that previously developed model to a valid sample to generate the document assembly object and/or to documents under test to determine the presence of such security features in a newly supported document, thereby lowering the barrier for supporting a new document.
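A minimal sketch of such a detector is shown below, assuming OpenCV; the pre-blur, thresholds, and radius range are illustrative assumptions rather than values from the disclosure.

```python
# Sketch: find small circular laser perforations via the Canny-based circular
# Hough transform. All parameter values are assumptions.
import cv2
import numpy as np

def detect_perforations(image_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress print texture before circle detection
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT,
        dp=1,            # accumulator at full image resolution
        minDist=8,       # minimum spacing between perforation centers
        param1=100,      # upper Canny threshold used internally
        param2=15,       # accumulator votes required per circle
        minRadius=1,
        maxRadius=8)     # laser perforations are small
    # Rows of (x, y, r) candidates, or an empty array if none are found.
    return np.empty((0, 3)) if circles is None else circles[0]
```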
[0217] In some implementations, the modularity provides efficient and quick support of newly developed security features. For example, assume that watermarks are a newly developed security feature not previously used by issuers and are starting to be implemented in new documents; in some implementations, a model or algorithm to detect that new security feature as an object may be trained, and the object detection engine 308 may then call and apply that object detection model/algorithm moving forward, thereby incrementally building out support for new security features as they are developed without disruption to existing systems or architecture. A previously generated document assembly object may be modified to indicate that the document includes a watermark, along with associated information (e.g., bounding box location) and a verification check, e.g., when the document included the watermark but the system 100 did not previously support and evaluate watermarks because the method/model for detecting UV watermarks had not been developed at the time the document assembly object was initially created.
[0218] In some implementations, the systems, methods, features, and functionalities described herein allow for faster processing and return of result(s). For example, in some implementations, the intermediate evaluations, sometimes also referred to as verification checks, are decoupled and/or may be performed asynchronously. As an example, the microprint of multiple snippets may be evaluated in series and/or in parallel, which may occur in series or in parallel with other evaluations, such as consistency checks between the content of multiple text fields and/or objects. As another example, evaluations/verification checks may be tiered, so that results may be returned more quickly. For example, a set of security features associated with recent fraud attempts using a particular document may be checked/evaluated first to triage requests involving that document classification, and when those initial checks are passed, additional checks may or may not be performed. As another example, the number and/or types of checks and evaluations may vary depending on a risk assessment, e.g., how likely the document under test is to be invalid, so documents that are more frequently used by fraudsters, or that come from sources (e.g., devices, IP addresses, countries, etc.) associated with prior invalid attempts, etc., may receive additional scrutiny via the use of more evaluations, while lower risk documents may be evaluated using fewer and/or less (time or computationally) intensive evaluations, such as an average color value comparison vs. a CNN for evaluating the microprint, thereby improving system throughput, efficiency, and costs while mitigating the risk of false negatives.
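For instance, the lower-cost average color value comparison mentioned above might look like the following sketch; the reference statistics and tolerance are assumptions presumed to be learned from valid samples.

```python
# Sketch of a cheap first-tier check: does the snippet's average color fall
# near the average observed in valid samples? Values are illustrative.
import numpy as np

def average_color_check(snippet_bgr: np.ndarray,
                        reference_mean_bgr: np.ndarray,
                        tolerance: float = 12.0) -> bool:
    mean_bgr = snippet_bgr.reshape(-1, 3).mean(axis=0)
    # True means consistent with valid samples; a CNN would only run later,
    # and only for higher-risk documents.
    return float(np.linalg.norm(mean_bgr - reference_mean_bgr)) <= tolerance
```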
[0219] In some implementations, the generation and/or persistence in the document database of the intermediary results may provide auditability. For example, assume it becomes apparent that the decision engine 310 is not detecting a particular attack vector and provides a false negative (e.g., the text evaluator did not previously check that the initials and DOB comprised a portion of the document number for a particular class of document). In some implementations, the document assembly object may be updated to include a verification check regarding whether a first identified portion of the document number is consistent with the DOB and a second identified portion of the document number is consistent with the initials extracted from the name fields, as illustrated below. In some implementations, the document database 242 may query the decision history 1662 for documents of that document class that passed (e.g., as an overall verification decision) and had valid intermediate result(s) associated with the document number. In some implementations, the decision engine 310 or a portion thereof (e.g., the inter-bounding box content consistency evaluator 1324) may be executed to determine whether, which, or how many documents were incorrectly verified and, potentially, trigger remedial action.
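By way of a purely hypothetical example of such a verification check, suppose valid document numbers for a class begin with the holder's initials followed by the DOB as DDMMYY; this encoding is invented for illustration, and real document-number schemes vary by issuer.

```python
# Hypothetical derived check: the document number must embed the initials and
# DOB. The initials-then-DDMMYY layout is invented for illustration only.
from datetime import date

def document_number_consistent(doc_number: str, first_name: str,
                               last_name: str, dob: date) -> bool:
    initials = (first_name[:1] + last_name[:1]).upper()
    dob_digits = dob.strftime("%d%m%y")
    return doc_number.upper().startswith(initials + dob_digits)

# Example: "JD010190..." passes for John Doe born 1 Jan 1990.
assert document_number_consistent("JD010190XYZ", "John", "Doe", date(1990, 1, 1))
```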
[0220] In some implementations, the generation and/or persistence in the document database of the intermediary results may provide transparency. For example, the intermediate result(s) may be used to at least partially explain a rejection or acceptance of a document under test. Such transparency may help in compliance, e.g., to demonstrate that acceptances or rejections are based on appropriate criteria and not inappropriate or forbidden criteria (e.g., race, sex, country of origin, etc.).
[0221] In some implementations, the systems, methods, features and functionalities described herein may be layered with others. For example, the systems, methods, features and functionalities described herein may, in some implementations, be used in conjunction with liveness detection, so that, when an identification document is valid, a liveness detector (not shown) may determine whether a user that submitted the document is live and whether his/her face matches the photo in the ID.
[0222] As another example, the systems, methods, features, and functionalities described herein may, in some implementations, be layered with human auditors or reviewers, who may confirm and/or reject an intermediate or overall result or may be looped in under certain circumstances or predefined criteria.
[0223] As yet another example, in some implementations, the systems, methods, features, and functionalities described herein may be layered with machine learning, e.g., to perform additional validity checks or to modify the evaluations performed by the decision engine 310 (e.g., change an order of evaluations, change a risk tier in a document assembly object thereby changing the evaluations to which those documents under test are subjected, perform a feature set reduction and reduce the number of verification checks in the document assembly object or which verification checks are performed on a document, etc.). In some implementations, the use of computer vision and simple matching algorithms is robust compared to a more volatile machine learning data extraction pipeline, which it may supplement, and/or may provide a set of signals, which may be weak individually, for stacking in a machine learning model.
[0225] Example Methods
[0226] Figures 17-22, 30, and 35 are flowcharts of example methods that may, in accordance with some implementations, be performed by the systems described above with reference to Figures 1-4, 13, and 16. The example methods 1700, 1800, 1900, 2000, 2100, 2200, 3000, and 3500 of Figures 17-22, 30, and 35 are provided for illustrative purposes, and it should be understood that many variations exist and are within the scope of the disclosure herein.

[0227] Figure 17 is a flowchart of an example method 1700 for generating a document assembly object in accordance with some implementations. At block 1702, the document class labeler 404 obtains a set of labels describing a document. At block 1704, the sample obtainer 402 obtains one or more images of the document, wherein the document in the one or more images is a valid sample of the document. At block 1706, the issuer information encoder 406 identifies a set of document components based on document issuer provided information and a set of direct checks. At block 1708, the derived information encoder 408 derives a set of document features based at least in part on the one or more images of the document and a set of derived checks. At block 1710, the document configurator 304 generates a document assembly object describing valid instances of the document including the set of document components, the set of derived document features, and a set of verification checks including the set of direct checks and the set of derived checks.
[0228] Figure 18 is a flowchart of an example method 1800 for processing a request to verify a document under test using a document assembly object in accordance with some implementations. At block 1802, the document database 242 obtains a query including a document assembly object identifier, the query associated with a request to verify a document under test present in an image. At block 1804, the document database 242 obtains a document assembly object describing a valid document uniquely associated with the identifier, the document assembly object including: a set of document components, a set of derived document features, and a set of verification checks including one or more of a direct check and a derived check. At block 1806, the document database 242 obtains aggregated context information associated with the document under test. At block 1808, the document database 242 sends the document assembly object and aggregated context information for use in verification of the document under test.

[0229] Figure 19 is a flowchart of an example method 1900 for evaluating a document under test in accordance with some implementations. At block 1902, the document classifier 1302 obtains at least one image of a document under test. At block 1904, the document classifier 1302 determines a classification of the document under test. At block 1906, the document assembly object obtainer 1304 obtains a document assembly object associated with the classification determined at block 1904. At block 1908, the OCR engine 306 and/or object detection engine 308 performs object (e.g., text or other object) detection on the document under test. At block 1910, one or more of the bounding box presence/absence evaluator 1308, the inter-bounding box evaluator 1310, and the intra-bounding box evaluator 1312 evaluate the objects detected in the document under test against the document assembly object obtained at block 1906. At block 1912, the verification determiner 1314 determines whether the document under test is a valid or abused document.
[0230] Figure 20 is a flowchart of an example method 2000 for determining whether an inconsistency in blur within a portion of the document under test is present in accordance with some implementations. At block 2002, the blur determiner 1346 obtains a first image snippet partially representing a first portion of a document under test. At block 2004, the blur determiner 1346 determines a first measure of blur value associated with the first image snippet. At block 2006, the blur determiner 1346 obtains an Nth (e.g., second) image snippet partially representing the first portion of the document under test. At block 2008, the blur determiner 1346 determines an Nth measure of blur value associated with the Nth image snippet. At block 2010, the blur comparator 1328 determines, based on the first measure of blur value determined at 2004 and the Nth measure of blur value determined at 2008, whether an inconsistency is present. In some implementations, N is incremented by 1, at block 2012, and blocks 2006 through 2010 may be repeated, for example, to process each character in a field, where the first portion of the document is the field and the 1st through Nth snippets correspond to the first through Nth characters of that field, thereby identifying any inconsistency of blurring within that field. It should be noted that, while the illustrated method 2000 compares the Nth snippet(s) to the first snippet, in some implementations, the method 2000 may be modified to compare other combinations of snippets. For example, in some implementations, every combination of characters within the field may be compared (not shown). At block 2014, the verification determiner 1314 modifies a likelihood that the document under test is accepted as valid, or rejected as invalid, based on the determination(s) at block 2010.

[0231] Figure 21 is a flowchart of an example method 2100 for determining whether an inconsistency in blur between two portions of a document under test is present in accordance with some implementations. Depending on the implementation and use case, the two portions may vary. For example, the first portion may be a first text character and the second portion may be a second text character in the same, or a different (depending on the implementation), field. As another example, the first portion and the second portion may be associated with distinct fields of text. As another example, the first portion and the second portion may be associated with distinct but related fields of text, e.g., both associated with invariant data within the document, which may include field labels, such as “FN” and “LN”, or both associated with personally identifiable information, such as the actual first name and last name appearing on the document.
[0232] At block 2102, the blur determiner 1346 determines a first set of one or more measure of blur values associated with a first portion of a document under test. At block 2104, the blur determiner 1346 determines a second set of one or more measure of blur values associated with a second portion of the document under test. At block 2106, the blur determiner 1346 determines, based on the first and second sets of measure of blur values determined at 2102 and 2104, respectively, whether an inconsistency is present. At block 2108, the verification determiner 1314 modifies a likelihood that the document under test is accepted as valid, or rejected as invalid, based on the determination at block 2106.
[0233] It should be noted that, while the illustrated method 2100 compares two sets of measure of blur values associated with two portions of the document under test, in some implementations, the method 2100 may be modified to compare the measure of blur values for more or different portions of the document under test. For example, in some implementations, every combination of document portions may have their respective sets of one or more measure of blur values compared (not shown).
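A minimal sketch of the comparison at blocks 2106 and 2108 follows, assuming the threshold-based rule recited in the claims (an inconsistency exists when the difference between measures satisfies a threshold); the aggregation by mean and the threshold value are assumptions.

```python
# Sketch: flag a blur inconsistency between two document portions when their
# aggregate measure-of-blur values differ by more than a threshold.
def blur_inconsistent(first_measures: list[float],
                      second_measures: list[float],
                      threshold: float = 0.35) -> bool:
    first_mean = sum(first_measures) / len(first_measures)
    second_mean = sum(second_measures) / len(second_measures)
    # True increases the likelihood of rejection; False leaves it unchanged
    # or increases the likelihood of acceptance.
    return abs(first_mean - second_mean) > threshold
```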
[0234] Figure 22 is a flowchart of an example method 2200 for using a plurality of measure of blur values per set of text in accordance with some implementations. At block 2202, the blur determiner 1346 determines a first measure of blur associated with a first set of text based on Canny edge detection. At block 2204, the blur determiner 1346 determines a second measure of blur associated with the first set of text based on Laplacian variance. At block 2206, the blur determiner 1346 determines a third measure of blur associated with the first set of text based on Cepstral techniques. At block 2208, the blur comparator 1328 compares, respectively, the first, second, and third measures of blur associated with the first set of text to a first, second, and third measure of blur associated with a second set of text. At block 2210, the blur comparator 1328 determines, based on the comparison at block 2208, whether an inconsistency in blur indicative of document manipulation is present.
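The three measures may be computed, for example, along the following lines; these are standard formulations chosen for illustration, the parameter values are assumptions, and the cepstral proxy shown is one of several possibilities.

```python
# Sketch of three per-snippet blur measures; parameter choices are assumptions.
import cv2
import numpy as np

def canny_edge_density(gray: np.ndarray) -> float:
    edges = cv2.Canny(gray, 100, 200)
    return float(np.count_nonzero(edges)) / edges.size   # fewer edges: blurrier

def laplacian_variance(gray: np.ndarray) -> float:
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())  # lower variance: blurrier

def cepstral_measure(gray: np.ndarray) -> float:
    spectrum = np.fft.fft2(gray.astype(np.float64))
    cepstrum = np.abs(np.fft.ifft2(np.log1p(np.abs(spectrum))))
    # Energy away from the cepstrum origin tends to fall as blur increases.
    return float(cepstrum[1:, 1:].mean())
```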
[0235] It should be noted that, while the illustrated method 2200 generates three measures of blur values, using Canny edge detection, Laplacian variance, and Cepstral techniques, other implementations may determine different measures of blur, whether differing in the number of measure of blur values determined for a common set of text and/or differing in the method(s) of determining a measure of blur value. In some implementations, a set of text, as referred to with reference to Figure 22, may be analogous to a portion of a document under test as referred to in reference to Figure 21. Accordingly, the comparison at 2210 may be between two sets in the same string of text (e.g., the same field, field label, doc. identifier, etc.) or between two strings of text (e.g., a field and its respective field label or two other fields).

[0236] Figure 30 is a flowchart of an example method 3000 for automatically generating a document assembly object from an electronic document specification in accordance with some implementations. At block 3002, the document configuration engine 304 obtains a document specification in an electronic format, wherein the document specification is associated with a first document and describes features present in valid instances of the first document. At block 3004, the document class labeler 404 determines a set of labels describing the first document from the document specification. At block 3006, the sample obtainer 402 obtains one or more digital images of at least one valid instance of the first document from the document specification. At block 3008, the bounding box obtainer 412 obtains information describing a set of bounding boxes resulting from application, to the one or more images of the at least one valid instance of the first document, of one or more of optical character recognition and object detection. At block 3010, the derived information encoder 408 generates a set of derived checks based on the set of bounding boxes. At block 3012, the document configurator 304 generates a document assembly object describing valid instances of the document and the set of derived checks usable to determine validity of a document under test.
[0237] Figure 35 is a flowchart of an example method 3500 for applying one or more checks based at least in part on a document holder image in accordance with some implementations. At block 3502, the document assembly object obtainer 1304 obtains a document assembly object associated with a document under test subsequent to receiving an electronic image of the document under test. At block 3504, the decision engine 310 obtains the image of the document holder from the electronic image of the document under test using object detection. At block 3506, the document under test derived info obtainer 1306 obtains document content describing a first visible characteristic of the document holder. At block 3508, the intra-bounding box evaluator 1312 applies a first check determining whether the document holder image in the document under test complies with the one or more issuer prescribed rules relating to valid document holder images, as defined in the document assembly object associated with the document under test, and/or an inter-bounding box evaluator 1310 applies a second check determining whether the first visible characteristic as described in the document content is consistent with the first visible characteristic as visible in the document holder image. At block 3510, the verification determiner 1314 modifies a likelihood that the document under test is accepted or rejected based on one or more of the first check and second check.
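As an illustration of the first check at block 3508, the sketch below tests detected document holder image dimensions against issuer rules pulled from the document assembly object; the rule names and numeric bounds are hypothetical, not issuer-published values.

```python
# Hypothetical dimensional check for the document holder image; rule keys and
# example values are assumptions for illustration.
def holder_image_complies(img_w: int, img_h: int, head_h: int, rules: dict) -> bool:
    aspect = img_w / img_h
    head_ratio = head_h / img_h
    return (rules["min_aspect"] <= aspect <= rules["max_aspect"]
            and rules["min_head_ratio"] <= head_ratio <= rules["max_head_ratio"])

# Example usage with invented rules for a portrait-format photo:
rules = {"min_aspect": 0.70, "max_aspect": 0.85,
         "min_head_ratio": 0.50, "max_head_ratio": 0.75}
assert holder_image_complies(350, 450, 280, rules)  # 0.78 aspect, 0.62 head ratio
```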
[0239] Other Considerations
[0240] It should be understood that the above-described examples are provided by way of illustration and not limitation and that numerous additional use cases are contemplated and encompassed by the present disclosure. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it should be understood that the technology described herein may be practiced without these specific details. Further, various systems, devices, and structures are shown in block diagram form in order to avoid obscuring the description. For instance, various implementations are described as having particular hardware, software, and user interfaces. However, the present disclosure applies to any type of computing device that can receive data and commands, and to any peripheral devices providing services.
[0241] Reference in the specification to “one implementation” or “an implementation” or “some implementations” means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation. The appearances of the phrase “in some implementations” in various places in the specification are not necessarily all referring to the same implementations.
[0242] In some instances, various implementations may be presented herein in terms of algorithms and symbolic representations of operations on data bits within a computer memory. An algorithm is here, and generally, conceived to be a self-consistent set of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
[0243] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout this disclosure, discussions utilizing terms including “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0244] Various implementations described herein may relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, including, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memories including USB keys with non-volatile memory, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
[0245] The technology described herein can take the form of a hardware implementation, a software implementation, or implementations containing both hardware and software elements. For instance, the technology may be implemented in software, which includes but is not limited to firmware, resident software, microcode, etc. Furthermore, the technology can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any non-transitory storage apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0246] A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
[0247] Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems, storage devices, remote printers, etc., through intervening private and/or public networks. Wireless (e.g., Wi-Fi™) transceivers, Ethernet adapters, and modems are just a few examples of network adapters. The private and public networks may have any number of configurations and/or topologies. Data may be transmitted between these devices via the networks using a variety of different communication protocols including, for example, various Internet layer, transport layer, or application layer protocols. For example, data may be transmitted via the networks using transmission control protocol / Internet protocol (TCP/IP), user datagram protocol (UDP), transmission control protocol (TCP), hypertext transfer protocol (HTTP), secure hypertext transfer protocol (HTTPS), dynamic adaptive streaming over HTTP (DASH), real-time streaming protocol (RTSP), real-time transport protocol (RTP) and the real-time transport control protocol (RTCP), voice over Internet protocol (VOIP), file transfer protocol (FTP), WebSocket (WS), wireless access protocol (WAP), various messaging protocols (SMS, MMS, XMS, IMAP, SMTP, POP, WebDAV, etc.), or other known protocols.
[0248] Finally, the structure, algorithms, and/or interfaces presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method blocks. The required structure for a variety of these systems will appear from the description above. In addition, the specification is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the specification as described herein.
[0249] The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the specification to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the disclosure be limited not by this detailed description, but rather by the claims of this application. As should be understood by those familiar with the art, the specification may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, routines, features, attributes, methodologies and other aspects are not mandatory or significant, and the mechanisms that implement the specification or its features may have different names, divisions and/or formats.

[0250] Furthermore, the modules, routines, features, attributes, methodologies, engines, and other aspects of the disclosure can be implemented as software, hardware, firmware, or any combination of the foregoing. Also, wherever an element, an example of which is a module, of the specification is implemented as software, the element can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future. Additionally, the disclosure is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the subject matter set forth in the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method comprising:
determining, using one or more processors, a first measure of blur value associated with a first portion of a document under test;
determining, using the one or more processors, a second measure of blur value associated with a second portion of the document under test;
determining, using the one or more processors, whether an inconsistency in a set of measure of blur values associated with the document under test is present, wherein the set of measure of blur values associated with the document under test includes the first measure of blur value and the second measure of blur value; and
modifying, using the one or more processors, a likelihood that the document is accepted or rejected based on whether the inconsistency is absent or present, respectively.
2. The method of claim 1, wherein the first portion of the document under test is associated with a first bounding box generated using optical character recognition, and the second portion of the document under test is associated with a second bounding box generated using optical character recognition.
3. The method of claim 1, wherein an inconsistency exists when a difference between the first measure of blur and the second measure of blur satisfies a threshold.
4. The method of claim 1, wherein the first portion of the document under test is a first character in a first text string and the second portion of the document under test is a second character in the first text string.
5. The method of claim 4, the method further comprising: determining a third measure of blur associated with the first text string at a field level; determining a fourth measure of blur associated with a second text string at the field level; comparing the third measure of blur and the fourth measure of blur; and determining based on the comparison whether a difference in blur at the field level exists.
6. The method of claim 1, wherein the first portion of the document under test is associated with a first text string and the second portion of the document under test is associated with a second text string.
7. The method of claim 1, wherein the first portion of the document under test is associated with a field label and the second portion of the document under test is a text field associated with the field label.
8. The method of claim 1, wherein the first measure of blur is determined by applying Canny edge detection to the first portion of the document under test and the second measure of blur is determined by applying Canny edge detection to the second portion of the document under test.
9. The method of claim 1, wherein the first measure of blur is determined by applying Laplacian variance to the first portion of the document under test and the second measure of blur is determined by applying Laplacian variance to the second portion of the document under test.
10. The method of claim 1, wherein the first measure of blur is determined by applying Cepstral techniques to the first portion of the document under test and the second measure of blur is determined by applying Cepstral techniques to the second portion of the document under test.
11. A system comprising:
a processor; and
a memory, the memory storing instructions that, when executed by the processor, cause the system to:
determine a first measure of blur value associated with a first portion of a document under test;
determine a second measure of blur value associated with a second portion of the document under test;
determine whether an inconsistency in a set of measure of blur values associated with the document under test is present, wherein the set of measure of blur values associated with the document under test includes the first measure of blur value and the second measure of blur value; and
modify a likelihood that the document is accepted or rejected based on whether the inconsistency is absent or present, respectively.
12. The system of claim 11, wherein the first portion of the document under test is associated with a first bounding box generated using optical character recognition, and the second portion of the document under test is associated with a second bounding box generated using optical character recognition.
13. The system of claim 11, wherein an inconsistency exists when a difference between the first measure of blur and the second measure of blur satisfies a threshold.
14. The system of claim 11, wherein the first portion of the document under test is a first character in a first text string and the second portion of the document under test is a second character in the first text string.
15. The system of claim 14, wherein the instructions, when executed, cause the system to: determine a third measure of blur associated with the first text string at a field level; determine a fourth measure of blur associated with a second text string at the field level; compare the third measure of blur and the fourth measure of blur; and determine based on the comparison whether a difference in blur at the field level exists.
16. The system of claim 11, wherein the first portion of the document under test is associated with a first text string and the second portion of the document under test is associated with a second text string.
17. The system of claim 11, wherein the first portion of the document under test is associated with a field label and the second portion of the document under test is a text field associated with the field label.
18. The system of claim 11, wherein the first measure of blur is determined by applying Canny edge detection to the first portion of the document under test and the second measure of blur is determined by applying Canny edge detection to the second portion of the document under test.
19. The system of claim 11, wherein the first measure of blur is determined by applying Laplacian variance to the first portion of the document under test and the second measure of blur is determined by applying Laplacian variance to the second portion of the document under test.
20. The system of claim 11, wherein the first measure of blur is determined by applying Cepstral techniques to the first portion of the document under test and the second measure of blur is determined by applying Cepstral techniques to the second portion of the document under test.
21. A method comprising:
obtaining, using one or more processors, a document specification in an electronic format, wherein the document specification is associated with a first document, and describes features present in valid instances of the first document;
determining, using the one or more processors, a set of labels describing the first document from the document specification;
obtaining, using the one or more processors, one or more digital images of at least one valid instance of the first document from the document specification;
obtaining, using the one or more processors, information describing a set of bounding boxes resulting from application, to the one or more images of the at least one valid instance of the first document, of one or more of optical character recognition and object detection;
generating, using the one or more processors, a set of derived checks based on the set of bounding boxes; and
generating, using the one or more processors, a document assembly object describing valid instances of the document and the set of derived checks usable to determine validity of a document under test.
22. The method of claim 21, the method further comprising:
obtaining a set of test images representing multiple instances of the first document;
determining, based on a first derived check in the document assembly object, whether each image in the set of test images is valid with respect to the first derived check or invalid with respect to the first derived check; and
adjusting how subsequent determinations are made based on a presence of a false positive or false negative in the determination of a test image with respect to the first derived check.
23. The method of claim 21, wherein adjusting how subsequent determinations are made includes one or more of: retraining a machine learning model associated with the derived check to reduce an instance of a false positive or a false negative; and adjusting a tolerance.
24. The method of claim 21, the method further comprising: obtaining a set of valid document images, wherein each image in the set of valid document images represents a valid instance of the first document; applying pattern recognition to the set of valid document images; generating, based on a first detected pattern, a newly derived check; and adding the newly derived check to the document assembly object.
25. The method of claim 24, wherein the newly derived check is associated with an unpublished security feature present in the first document.
26. The method of claim 24, wherein the pattern recognition identifies a repetition in at least a portion of personally identifiable information (PII) text between two or more bounding boxes associated with a common, valid document instance in the set of valid document images, and wherein the newly derived check, when applied to a document image under test, checks for one or more of: whether a bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is present in the document under test; whether the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, in the document under test is in a location consistent with a valid instance of the first document; and whether text content of the bounding box repeats an appropriate portion of PII text found elsewhere in the document under test.
27. The method of claim 21, wherein the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is a portion of a ghost image.
28. The method of claim 21, wherein the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is undiscernible to a human eye absent magnification.
29. The method of claim 21, wherein the electronic format is one of hypertext markup language and portable document format and published by a trusted source.
30. The method of claim 21, wherein the document assembly object is human and machine readable.
31. A system comprising:
a processor; and
a memory, the memory storing instructions that, when executed by the processor, cause the system to:
obtain a document specification in an electronic format, wherein the document specification is associated with a first document, and describes features present in valid instances of the first document;
determine a set of labels describing the first document from the document specification;
obtain one or more digital images of at least one valid instance of the first document from the document specification;
obtain information describing a set of bounding boxes resulting from application, to the one or more images of the at least one valid instance of the first document, of one or more of optical character recognition and object detection;
generate a set of derived checks based on the set of bounding boxes; and
generate a document assembly object describing valid instances of the document and the set of derived checks usable to determine validity of a document under test.
32. The system of claim 31, wherein the instructions, when executed, cause the system to:
obtain a set of test images representing multiple instances of the first document;
determine, based on a first derived check in the document assembly object, whether each image in the set of test images is valid with respect to the first derived check or invalid with respect to the first derived check; and
adjust how subsequent determinations are made based on a presence of a false positive or false negative in the determination of a test image with respect to the first derived check.
33. The system of claim 31, wherein adjusting how subsequent determinations are made includes one or more of: retraining a machine learning model associated with the derived check to reduce an instance of a false positive or a false negative; and adjusting a tolerance.
34. The system of claim 31, wherein the instructions, when executed, cause the system to: obtain a set of valid document images, wherein each image in the set of valid document images represents a valid instance of the first document; apply pattern recognition to the set of valid document images; generate, based on a first detected pattern, a newly derived check; and add the newly derived check to the document assembly object.
35. The system of claim 34, wherein the newly derived check is associated with an unpublished security feature present in the first document.
36. The system of claim 34, wherein the pattern recognition identifies a repetition in at least a portion of personally identifiable information (PII) text between two or more bounding boxes associated with a common, valid document instance in the set of valid document images, and wherein the newly derived check, when applied to a document image under test, checks for one or more of: whether a bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is present in the document under test; whether the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, in the document under test is in a location consistent with a valid instance of the first document; and whether text content of the bounding box repeats an appropriate portion of PII text found elsewhere in the document under test.
37. The system of claim 31, wherein the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is a portion of a ghost image.
38. The system of claim 31, wherein the bounding box, which is associated with at least a partial repetition of PII in valid instances of the first document, is undiscernible to a human eye absent magnification.
39. The system of claim 31, wherein the electronic format is one of hypertext markup language and portable document format and published by a trusted source.
40. The system of claim 31, wherein the document assembly object is human and machine readable.
41. A method comprising:
obtaining, using one or more processors, a document assembly object associated with a document under test subsequent to receiving an electronic image of the document under test, wherein the document assembly object indicates that valid instances of the document under test include a document holder image and document content describing a first visible characteristic of the document holder;
automatically obtaining, using the one or more processors, the document holder image from the electronic image of the document under test using object detection;
automatically obtaining, using the one or more processors, document content describing a first visible characteristic of the document holder from the electronic image of the document under test using one or more of optical character recognition and object detection;
applying, using the one or more processors, a set of checks associated with the document assembly object to evaluate the document under test image for validity, the set of checks including one or more of: a first check determining whether the document holder image in the document under test complies with one or more rules relating to valid document holder images, as defined in the document assembly object associated with the document under test; and a second check determining whether the first visible characteristic as described in the document content is consistent with the first visible characteristic as visible in the document holder image; and
modifying, using the one or more processors, a likelihood that the document under test is accepted or rejected based on one or more of the first check and second check.
42. The method of claim 41, wherein the one or more rules relating to valid document holder images includes a first rule explicitly defined by an issuer of the document in a document specification.
43. The method of claim 41, wherein the one or more rules relating to valid document holder images includes a first rule inferred from an analysis of a plurality of valid document instances.
44. The method of claim 41, wherein the document content includes field content.
45. The method of claim 41, wherein the document content includes a ghost image.
46. The method of claim 41, wherein the one or more rules relating to valid document holder images include one or more dimensional requirements selected from the set of: document holder image height, document holder image width, document holder image aspect ratio, a valid range for document holder head height, a valid range for document holder head width, and a margin.
47. The method of claim 41, wherein the one or more rules relating to valid document holder images include at least one feature-based requirement, wherein the feature-based requirement is associated with a feature that is either present in, or absent from, the document holder image in valid instances of the document, and wherein a machine learning model is applied to the document holder image in the electronic image of the document under test to determine whether a feature is present or absent.
48. The method of claim 47, wherein the feature is associated with one or more of: headwear, glasses, hair coverage of one or more facial features, background color, presence of an object in a background, facial shadowing, background shadowing, facial expression, eyes being open, and direction of gaze.
49. The method of claim 41, wherein the first visible characteristic includes one or more of a sex, hair color, eye color, height, weight, a head size ratio, and a head outline of the document holder.
50. The method of claim 49, wherein one or more machine learning models are used to determine the first visible characteristic in the document holder image, which is compared to field content obtained using optical character recognition.
51. A system comprising:
a processor; and
a memory, the memory storing instructions that, when executed by the processor, cause the system to:
obtain a document assembly object associated with a document under test subsequent to receiving an electronic image of the document under test, wherein the document assembly object indicates that valid instances of the document under test include a document holder image and document content describing a first visible characteristic of the document holder;
automatically obtain the document holder image from the electronic image of the document under test using object detection;
automatically obtain document content describing a first visible characteristic of the document holder from the electronic image of the document under test using one or more of optical character recognition and object detection;
apply a set of checks associated with the document assembly object to evaluate the document under test image for validity, the set of checks including one or more of: a first check determining whether the document holder image in the document under test complies with one or more rules relating to valid document holder images, as defined in the document assembly object associated with the document under test; and a second check determining whether the first visible characteristic as described in the document content is consistent with the first visible characteristic as visible in the document holder image; and
modify a likelihood that the document under test is accepted or rejected based on one or more of the first check and second check.
52. The system of claim 51, wherein the one or more rules relating to valid document holder images includes a first rule explicitly defined by an issuer of the document in a document specification.
53. The system of claim 51, wherein the one or more rules relating to valid document holder images includes a first rule inferred from an analysis of a plurality of valid document instances.
54. The system of claim 51, wherein the document content includes field content.
55. The system of claim 51, wherein the document content includes a ghost image.
56. The system of claim 51, wherein the one or more rules relating to valid document holder images include one or more dimensional requirements selected from the set of: document holder image height, document holder image width, document holder image aspect ratio, a valid range for document holder head height, a valid range for document holder head width, and a margin.
57. The system of claim 51, wherein the one or more rules relating to valid document holder images include at least one feature-based requirement, wherein the feature-based requirement is associated with a feature that is either present in, or absent from, the document holder image in valid instances of the document, and wherein a machine learning model is applied to the document holder image in the electronic image of the document under test to determine whether a feature is present or absent.
58. The system of claim 57, wherein the feature is associated with one or more of: headwear, glasses, hair coverage of one or more facial features, background color, presence of an object in a background, facial shadowing, background shadowing, facial expression, eyes being open, and direction of gaze.
59. The system of claim 51, wherein the first visible characteristic includes one or more of a sex, hair color, eye color, height, weight, a head size ratio, and a head outline of the document holder.
60. The system of claim 59, wherein one or more machine learning models are used to determine the first visible characteristic in the document holder image, which is compared to field content obtained using optical character recognition.