EP3132381A1 - Smart optical input/output (I/O) extension for context-dependent workflows - Google Patents

Smart optical input/output (I/O) extension for context-dependent workflows

Info

Publication number
EP3132381A1
EP3132381A1
Authority
EP
European Patent Office
Prior art keywords
optical input
user
textual information
input
mobile device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15779936.2A
Other languages
German (de)
English (en)
Other versions
EP3132381A4 (fr)
Inventor
Anthony Macciola
Jan W. Amtrup
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tungsten Automation Corp
Original Assignee
Kofax Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/686,644 (US9349046B2)
Application filed by Kofax Inc filed Critical Kofax Inc
Publication of EP3132381A1
Publication of EP3132381A4
Legal status: Withdrawn (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/1444 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • G06V30/1452 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields based on positionally close symbols, e.g. amount sign or URL-specific characters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/1444 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • G06V30/1456 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields based on user interactions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/26 Techniques for post-processing, e.g. correcting the recognition result
    • G06V30/262 Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition

Definitions

  • the present inventive disclosures relate to input/output (I/O) using optical component(s) of mobile devices. More specifically, the present concepts relate to integrating optical input functions of a mobile device into output functions of the mobile device, and even more specifically performing context-dependent integration of optical input from a mobile device camera into textual output for a mobile workflow or application.
  • Mobile devices occupy an increasingly prominent niche in the evolving marketplace, serving as access points at various stages of conducting a seemingly infinite number of activities. As this trend continues, mobile devices and mobile network capabilities provided thereby are leveraged in an increasing number and breadth of scenarios. Recent examples include the extension of mobile technology to provide a host of financial services such as check deposit, bill payment, account management, etc. In addition, location data gathered via mobile devices are utilized in an increasing number of applications, e.g. to provide targeted advertising, situational awareness, etc.
  • a first inefficiency is small screen size typical to mobile devices, particularly mobile phones. Since the conventional "smartphone" excludes a physical keyboard and pointer device, relying instead on touchscreen technology, the amount of physical space allocated to a given key on a virtual "keyboard" displayed on the mobile device screen is much smaller than a human finger can accurately and precisely invoke. As a result, typographical errors are common when considering textual user input received via a mobile device.
  • typical mobile devices employ powerful predictive analytics and dictionaries to "learn" a given user's input behavior. Based on the predictive model developed, the mobile device is capable of predicting the user's intended input text when the user's actual input corresponds to text that does not fit within defined norms, patterns, etc.
  • the most visible example of utilizing such a predictive analysis and dictionary is embodied in conventional "autocorrect" functionality available with most typical mobile devices.
  • Audio input may be received via integrating an extension into the mobile virtual keyboard that facilitates the user providing input other than the typical tactile input received via the mobile device display.
  • the audio extension appears as a button depicting a microphone icon or symbol, immediately adjacent the space bar (at left).
  • a user may interact with a field configured to accept textual input, e.g. a field on an online form, PDF, etc.
  • the mobile device leverages the operating system to invoke the mobile virtual keyboard user interface in response to detecting the user's interaction with a field.
  • the user then optionally provides tactile input to enter the desired text, or interacts with the audio extension to invoke an audio input interface.
  • this technique is commonly known as "speech-to-text" functionality that accepts audio input and converts received audio input into textual information.
  • Upon invoking the audio input interface, and optionally in response to receiving additional input from the user via the mobile device display (e.g. tapping the audio extension a second time to indicate initiation of audio input), the user provides audio input, which is analyzed by the mobile device voice recognition component, converted into text, and input into the field with which the user interacted to invoke the mobile virtual keyboard.
  • Via integration of audio input into the textual input/output capabilities of a mobile device, a user is enabled to input textual information in a hands-free approach that broadens the applicable utility of the device to a whole host of contexts otherwise not possible. For example, a user may generate a text message exclusively using audio input, according to these approaches.
  • these approaches are also plagued by similarly-frustrating and performance-degrading inaccuracies and inconsistencies well known for existing voice recognition technology. As a result, current voice recognition approaches to supplementing or replacing textual input are unsatisfactory.
  • Voice recognition currently available is known for being subject to failure - often the voice recognition software is simply incapable of recognizing the unique vocalization exhibited by a particular individual. Similarly, voice recognition is prone to "audiographical" errors (i.e. errors analogous to "typographical" errors for audio input, such as falsely "recognizing" a vocalized word).
  • voice recognition typically relies on a predetermined set of rules, e.g. a set of assumptions or conditions that may be defined based on the language being spoken.
  • audio input is often an unworkable alternative to tactile input in circumstances where the expected form of expression and/or usage (which often define the "rules" upon which vocal recognition relies) do not correspond to the written form of a language.
  • Voice recognition is also an inferior tool to utilize for acquiring or validating user input corresponding to information not typically expressed, or not capable of expression, in words.
  • the prototypical example of these limitations is demonstrable from the perspective of user input that includes symbols, such as often utilized to label units of measure.
  • these vocalizations are not necessarily unique usages of the corresponding term (e.g. "pounds" may correspond to either a unit of measuring weight, i.e. "lbs.", or a unit of currency, e.g. "£", depending on context).
  • Voice recognition is also unsuitable for receiving and processing textual input that includes grammatical symbols (e.g. one or more "symbols" used to convey grammatical information, such as a comma ",", semicolon ";", period ".", etc.) or formatting input, which includes symbols that do not necessarily have any corresponding physical representation in the language expressed (e.g. a carriage return, tab, space, particular text alignment, etc.).
  • Upon the user interacting with this separate button, the device facilitates including previously-captured optical input, or alternatively invoking a capture interface to capture new optical input, and including the previously- or newly-captured optical input in addition to any textual information input by the user providing tactile input to the mobile virtual keyboard.
  • FIG. 1A illustrates a mobile device user interface configured to receive user input, in accordance with one embodiment.
  • FIG. 1B illustrates a mobile device user interface configured to receive user input, in accordance with one embodiment.
  • FIG. 2 is a flowchart of a method, according to one embodiment.
  • FIG. 3 is a flowchart of a method, according to one embodiment.
  • a method includes invoking a user input interface on a mobile device; invoking an optical input extension of the user input interface; capturing optical input via one or more optical sensors of the mobile device; determining textual information from the captured optical input; and providing the determined textual information to the user input interface.
  • a method includes receiving optical input via one or more optical sensors of a mobile device; analyzing the optical input using a processor of the mobile device to determine a context of the optical input; and automatically invoking a contextually-appropriate workflow based on the context of the optical input.
  • computer program product includes a computer readable storage medium having program code embodied therewith.
  • the program code is readable/executable by a processor to: invoke a user input interface on a mobile device; invoke an optical input extension of the user input interface; capture optical input via one or more optical sensors of the mobile device; determine textual information from the captured optical input; and provide the determined textual information to the user input interface.
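The claimed method above (invoke a user input interface, invoke its optical input extension, capture optical input, determine textual information from it, and provide that text back to the interface) can be sketched as follows. All class and function names here are hypothetical stand-ins rather than any real mobile OS API, and the camera and character-recognition steps are stubbed.

```python
def ocr(image):
    """Stand-in for the device's character-recognition component."""
    return image.get("text", "")

class UserInputInterface:
    """Minimal stand-in for a virtual keyboard interface bound to a field."""
    def __init__(self):
        self.field_value = None

    def receive_text(self, text):
        self.field_value = text

class OpticalInputExtension:
    """Optical input extension of the interface, backed by a camera callable."""
    def __init__(self, interface, camera):
        self.interface = interface
        self.camera = camera

    def capture_and_populate(self):
        image = self.camera()              # capture optical input
        text = ocr(image)                  # determine textual information
        self.interface.receive_text(text)  # provide it to the input interface
        return text

# Usage with a stubbed camera that "captures" an image containing text.
ui = UserInputInterface()
extension = OpticalInputExtension(ui, camera=lambda: {"text": "123 Main St."})
extension.capture_and_populate()
print(ui.field_value)  # -> 123 Main St.
```

The point of the sketch is the data flow: the field is populated from recognized optical input without any tactile keystrokes.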
  • the present application refers to image processing of images (e.g. pictures, figures, graphical schematics, single frames of movies, videos, films, clips, etc.) captured by cameras, especially cameras of mobile devices.
  • a mobile device is any device capable of receiving data without having power supplied via a physical connection (e.g. wire, cord, cable, etc.) and capable of receiving data without a physical data connection (e.g. wire, cord, cable, etc.).
  • Mobile devices within the scope of the present disclosures include exemplary devices such as a mobile telephone, smartphone, tablet, personal digital assistant, iPod®, iPad®, BLACKBERRY® device, etc.
  • One benefit of using a mobile device is that with a data plan, image processing and information processing based on captured images can be done in a much more convenient, streamlined and integrated way than previous methods that relied on presence of a scanner.
  • an image may be captured by a camera of a mobile device.
  • the term "camera” should be broadly interpreted to include any type of device capable of capturing an image of a physical object external to the device, such as a piece of paper.
  • the term "camera" does not encompass a peripheral scanner or multifunction device. Any type of camera may be used. Preferred embodiments may use cameras having a higher resolution, e.g. 8 MP or more, ideally 12 MP or more.
  • the image may be captured in color, grayscale, black and white, or with any other known optical effect.
  • image as referred to herein is meant to encompass any type of data corresponding to the output of the camera, including raw data, processed data, etc.
  • the term "voice recognition" is to be considered equivalent to, or encompassing, the so-called "speech-to-text" functionality provided with some mobile devices (again, e.g. "Siri") that enables conversion of audio input to textual output.
  • inventive techniques discussed herein may be referred to as "image-to-text” or "video-to-text” functionality.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as "logic,” “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, processor, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband, as part of a carrier wave, an electrical connection having one or more wires, an optical fiber, etc. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • An application may be installed on the mobile device, e.g., stored in a nonvolatile memory of the device.
  • the application includes instructions to perform processing of an image on the mobile device.
  • the application includes instructions to send the image to a remote server such as a network server.
  • the application may include instructions to decide whether to perform some or all processing on the mobile device and/or send the image to the remote site.
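The decision described above, whether to process an image on the device or send it to a remote server, can be illustrated with a simple heuristic. The thresholds and parameter names below are assumptions for illustration only; a real application would also weigh device load, battery, privacy, and server availability.

```python
def choose_processing_site(image_bytes, network_available, bandwidth_kbps,
                           max_local_bytes=2_000_000, min_remote_kbps=256):
    """Return 'local' or 'remote' for a captured image (illustrative rules)."""
    # Without a usable network connection, processing must happen on-device.
    if not network_available or bandwidth_kbps < min_remote_kbps:
        return "local"
    # Small images are cheap enough to handle on-device either way.
    if len(image_bytes) <= max_local_bytes:
        return "local"
    # Large images are shipped to the remote server for processing.
    return "remote"

print(choose_processing_site(b"x" * 100, network_available=False, bandwidth_kbps=0))
# -> local
print(choose_processing_site(b"x" * 5_000_000, network_available=True, bandwidth_kbps=10_000))
# -> remote
```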
  • a method includes invoking a user input interface on a mobile device; invoking an optical input extension of the user input interface; capturing optical input via one or more optical sensors of the mobile device; determining textual information from the captured optical input; and providing the determined textual information to the user input interface.
  • a method includes receiving optical input via one or more optical sensors of a mobile device; analyzing the optical input using a processor of the mobile device to determine a context of the optical input; and automatically invoking a contextually-appropriate workflow based on the context of the optical input.
  • computer program product includes a computer readable storage medium having program code embodied therewith.
  • the program code is readable/executable by a processor to: invoke a user input interface on a mobile device; invoke an optical input extension of the user input interface; capture optical input via one or more optical sensors of the mobile device; determine textual information from the captured optical input; and provide the determined textual information to the user input interface.
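A minimal sketch of the context-dependent method above (receive optical input, determine its context, automatically invoke a contextually-appropriate workflow), assuming keyword matching on the recognized text as a deliberately simplified context classifier. The workflow names and keyword rules are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical mapping from context to trigger keywords found in OCR'd text.
CONTEXT_RULES = {
    "invoice": ("invoice", "amount due", "remit to"),
    "insurance_claim": ("policy number", "claimant", "insurer"),
    "tax_return": ("taxable income", "w-2", "irs"),
}

def determine_context(text):
    """Classify recognized text by the first rule whose keywords match."""
    lowered = text.lower()
    for context, keywords in CONTEXT_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return context
    return "unknown"

def invoke_workflow(context):
    # A real implementation would launch the corresponding mobile workflow;
    # here we simply report which one would be invoked.
    return f"launching {context} workflow"

text = "INVOICE #4711 Amount due: $120.00"
print(invoke_workflow(determine_context(text)))  # -> launching invoice workflow
```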
  • the presently disclosed methods, systems and/or computer program products may optionally utilize and/or include any of the functionalities disclosed in related U.S. Patent No. 8,855,375, filed January 11, 2013; U.S. Patent Application No. 13/948,046, filed July 22, 2013; U.S. Patent Publication No. 2014/0270349, filed March 13, 2013; U.S. Patent
  • Digital images suitable for processing according to the presently disclosed algorithms may be subjected to any image processing operations disclosed in the aforementioned Patent Application, such as page detection, rectangularization, detection of uneven illumination, illumination normalization, resolution estimation, blur detection, classification, data extraction, document validation, etc.
  • the presently disclosed methods, systems, and/or computer program products may be utilized with, implemented in, and or include one or more user interfaces configured to facilitate performing any functionality disclosed herein and/or in the aforementioned related Patent Application, such as an image processing mobile application, a case management application, a classification application, and/or a data extraction application, in multiple embodiments.
  • the presently disclosed systems, methods and/or computer program products may be advantageously applied to one or more of the use methodologies and/or scenarios disclosed in the aforementioned related Patent Application, among others that would be appreciated by one having ordinary skill in the art upon reading these descriptions.
  • embodiments presented herein may be provided in the form of a service deployed on behalf of a customer to offer service on demand.
  • the presently disclosed inventive concepts concern the integration of optical input into the I/O capabilities of a mobile device in an intelligent manner that facilitates accurate and facile input of textual information.
  • the exemplary scenarios in which these concepts will be most applicable include inputting textual information to a document, form, web page, etc. as would be understood by one having ordinary skill in the art upon reading the present specification.
  • the presently disclosed techniques accomplish input of textual information without suffering from the inherent disadvantages of utilizing audio input (e.g. poor accuracy of voice recognition) or tactile input via a virtual mobile keyboard (e.g. inaccurate input due to small "key” size, improper "correction” using a predictive dictionary or “autocorrect” function, etc.).
  • the present techniques provide superior performance and convenience to the user.
  • Superior performance includes features such as improved accuracy and reduced input time (especially where the optical input depicts information suitable for use in multiple contexts or fields) of providing textual input via the mobile device.
  • the performance benefits are due to the inventive approach disclosed herein being configured to capture, analyze, and provide textual information from optical input without relying on tactile feedback from the user.
  • these techniques are free from the disadvantages common to an input interface that utilizes a miniaturized virtual keyboard as described above.
  • the present techniques offer superior performance over existing integrations of optical input for use in combination with textual input.
  • the present techniques advantageously integrate the optical input capabilities of the mobile device with textual I/O such that a user need not provide tactile input to convey textual information.
  • optical input may be captured, analyzed and converted to textual information in a context-dependent manner. Context-dependent invocation, capture and analysis of optical input will be discussed in further detail below.
  • optical input functionality is provided via leveraging native tools, procedures, calls, components, libraries, etc. to capture optical input and tactile input according to the particular mobile operating system in which the functionality is included.
  • the present techniques represent a seamless integration of optical input into contexts typically limited to capturing textual information via either tactile or audio input.
  • some mobile operating systems may further provide the capability to analyze captured image data and identify, locate, and/or interpret textual information depicted therein (e.g. via an optical character recognition (OCR) or other similar function as would be recognized by one having ordinary skill in the art).
  • these rare embodiments do not present any integration of native OS capabilities that allow a user to leverage the combined power of optical input capture and analysis to effectively accomplish inputting textual information via capturing the optical input.
  • no presently known technique enables a user to input textual information, e.g. into a field of a form, directly by capturing optical input depicting identifiers comprising the desired textual information, or other information that may be utilized to determine or obtain the desired textual information.
  • "Other" information may include any type of information that would be understood by one having ordinary skill in the art upon reading the present descriptions as suitable or useful to obtain or determine the desired textual information.
  • identifiers suitable for extraction in the context of the present optical input extension and context-sensitive invocation applications may include any type of identifying information (preferably textual information) which may be useful in the process of performing a business workflow such as an insurance claim or application; an accounts-payable process such as invoicing; a navigation process, a communications process, a tracking process, a financial transaction or workflow such as a tax return or account statement review; a browsing process; an admissions or customer on-boarding process, etc. as would be understood by a person having ordinary skill in the art upon reading the present descriptions.
  • While identifiers may generally include any type of identifying information suitable in processes such as the exemplary embodiments above, it should be understood that several types of information are particularly useful in select applications, e.g. unique identifiers that may be necessary to access a particular resource or complete a particular workflow.
  • the extracted identifier preferably comprises any one or more of a phone number, a complete or partial address, a universal resource locator (URL), a tracking number; a vehicle identification number (VIN), vehicle make/model and/or year, a social security number (SSN), a product name or code (such as a universal product code (UPC) or stock keeping unit (SKU)) or other similar textual information typically depicted on an invoice; an insurance group number and/or policy number, an insurance provider name, a person's name, a date (e.g. a date of birth or due date), a (preferably handwritten) signature, etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • "other information" may be obtained or determined using any suitable technique(s), including known technique(s) such as a lookup operation, reverse lookup, authentication, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
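As a rough illustration of extracting identifiers such as those listed above from recognized text, a regular-expression pass might look like the following. The patterns are simplified assumptions (a North-American phone format, basic URL and SSN shapes); a production extractor would need to tolerate OCR noise and locale variation.

```python
import re

# Illustrative, deliberately narrow patterns for a few identifier types.
IDENTIFIER_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "url": re.compile(r"https?://\S+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def extract_identifiers(text):
    """Return {identifier_type: [matches]} for every pattern that fires."""
    found = {}
    for name, pattern in IDENTIFIER_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            found[name] = matches
    return found

sample = "Call 555-867-5309 or visit https://example.com/claim (SSN 123-45-6789)"
print(extract_identifiers(sample))
```

Extracted values of this kind could then feed a lookup or reverse-lookup step to obtain the "other information" mentioned above.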
  • extension refers to a functionality that is included in an otherwise-existing feature of the mobile device.
  • the microphone "button" depicted in the figures above may be considered an audio extension of the mobile virtual keyboard user interface.
  • a standalone application, function, or feature that requires independent invocation by the user (e.g. invoking the application, function or feature without interacting with one of the standard user interfaces provided with the mobile operating system)
  • the optical input extension is configured to facilitate a user seamlessly navigating throughout plural fields presented via a user interface (e.g. a web page, application, form, field, etc.) in the course of capturing optical input.
  • this functionality may be embodied as a "next" or “finished” button, gesture, symbol, option, etc. included with the optical input capture interface.
  • a user may wish to capture data corresponding to textual information intended for input into a plurality of different fields of a form, web page, etc.
  • a data entry field which may be a first data entry field among a plurality of such data entry fields present on the user interface
  • the native user input/virtual keyboard interface including the optical input extension is invoked.
  • the user may interact with a first data entry field, invoke the optical input extension, e.g. by tapping a "camera” button displayed on the virtual keyboard.
  • the user may be presented with a capture interface comprising a "preview" of the optical input being captured (e.g. substantially representing a "viewfinder" on a camera or other optical input device).
  • the "preview" and capture capabilities of the optical input extension may be utilized without switching the mobile device focus from the browser, application, etc. upon which the data entry field the user is interacting with is displayed.
  • the optical input extension of the virtual keyboard interface described herein is preferably a seamless integration of functionalities that enables a user to locate data entry fields, invoke an optical input extension, capture optical input via the optical input extension, and populate the data entry field(s) with textual information determined from the captured optical input.
  • the entirety of the foregoing process is "seamless" in that the user may complete all constituent functionalities without needing to switch between multiple independent applications, e.g. via a multitasking capability of the mobile device, or using a clipboard configured to "copy and paste" data between independent applications executable on the mobile device, etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • the user may preferably navigate between the multiple data entry fields utilizing an additional functionality provided via the optical input extension. In this manner, the user may selectively utilize the optical input extension to input textual information via capturing optical input for a desired subset of the total number of data fields presented. Similarly, the user may utilize the optical input extension to input textual information in any number of the multiple data entry fields sequentially.
  • user navigation among multiple data entry fields is accomplished via a button or gesture with which the optical input interface is configured.
  • exemplary embodiments may employ, for example, a "next" and/or “previous” button, or be configured to interpret one or more swipe(s) or multi-touch gesture(s) to navigate among the multiple data entry fields.
  • the optical input interface also includes a functionality via which the user may terminate or otherwise indicate completion of the optical input capture process.
  • the optical input interface may include a "last" button, a "finished" or "done" button, etc., to enable the user to terminate the optical input capture process, and preferably to resume interaction with the browser page, application interface, etc.
  • a critical functionality of the presently-disclosed inventive concepts is that the optical input capability is integrated directly into an existing interface provided with the mobile operating system.
  • the optical input capability is specifically integrated into the native virtual keyboard user interface provided with the mobile operating system as an extension of that virtual keyboard user interface.
  • the present techniques are therefore to be distinguished from approaches that might seek to ineffectively "stitch" together existing capabilities such as conveyed via separate (i.e. not integrated) mobile device camera and virtual keyboard user interface components.
  • techniques that simply leverage a combination of tactile input and optical input as received via entirely separate interfaces, functions, applications, etc. complicate, rather than facilitate, ease and accuracy of input.
  • a standalone application or function configured to capture optical input and analyze that optical input to determine the presence of textual information (and optionally to determine and/or output the depicted text) is incapable of performing such capture and/or analysis of the optical input in a context-dependent manner.
  • the standalone application, function, feature, etc. is not configured to yield desired textual information in the context of a particular field or form displayed, for example, on a website that the standalone application, function, feature, etc. is not configured to render in the first place.
  • an exemplary process utilizing an integrated optical input and tactile input functionality via an optical input extension to a virtual keyboard user interface would be substantially more efficient (both with respect to consumption of system resources as well as from the perspective of the user's convenience and time), as illustrated according to one embodiment in method 200 of FIG. 2.
  • Method 200 may be performed in any suitable environment, including those depicted in FIGS. 1A-1B, and any other suitable environment that would be appreciated by a person having ordinary skill in the art upon reading the present descriptions.
  • a user input user interface is invoked on a mobile device.
  • optical input is captured via one or more optical sensors of the mobile device.
  • Method 200 may include any one or more additional or alternative features as disclosed herein. In various approaches, method 200 may additionally and/or alternatively include functionality such as selective identification, normalization, validation, and provision of textual information from the optical input to the user input UI.
  • the user input interface is preferably invoked in response to detecting a user interaction with a user interface element configured to receive textual information.
  • the method may advantageously include analyzing the optical input to determine the textual information. Accordingly, the analyzing may include one or more of performing optical character recognition (OCR); identifying desired textual information among the determined textual information based on the OCR; and selectively providing the desired textual information to the user input interface.
  • the desired textual information includes a plurality of identifiers, and each identifier corresponds to one of a plurality of user interface elements configured to receive textual information.
  • some or all of the identifiers include textual information required by one of the user interface elements.
  • the method includes one or more of validating and normalizing at least one of the identifiers to conform with one or more of an expected format of the desired textual information and an expected range of values for the desired textual information.
  • Validation, in various approaches, may include determining one or more of reference content from a complementary document and business rules applicable to at least one of the identifiers. This determination is preferably based on the element corresponding to the identifier(s), and the validating is based on one or more of the reference content and the business rules.
  • normalization may include determining formatting from a complementary document, business rules, and/or the element invoked by the user.
  • the method may also include one or more of validating (i.e. checking for accuracy of content and/or format, e.g. against reference content) and normalizing (i.e. modifying format or presentation to match expected format or other business rules, etc.) the desired textual information to conform with either or both of an expected format of the desired textual information and an expected range of values for the desired textual information.
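By way of illustration only, the validation and normalization operations described above may be sketched as follows. The function names, the regular-expression patterns, and the zip-code mask are hypothetical placeholders, not part of the disclosed embodiments:

```python
import re

def validate_identifier(value, pattern, value_range=None):
    """Validate an extracted identifier against an expected format
    (a regular expression) and, optionally, an expected range of values."""
    if not re.fullmatch(pattern, value):
        return False
    if value_range is not None:
        low, high = value_range
        return low <= int(value) <= high
    return True

def normalize_zip(raw):
    """Normalize an extracted zip code: strip non-digit characters, then
    re-apply the expected "#####-####" mask when nine digits are present."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 9:
        return f"{digits[:5]}-{digits[5:]}"
    return digits[:5]
```

In such a sketch, an identifier that fails validation could then be routed to correction against reference content from a complementary document, consistent with the correction approaches described elsewhere herein.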
  • the validating and the normalizing are based on one or more of reference content from a complementary document and predefined business rules.
  • the method may also include determining one or more of the complementary document and the business rules based on the element with which the user interacted.
  • the optical input extension is presented simultaneously with the invoked user input interface.
  • the user input interface comprises a virtual keyboard displayed on the mobile device, which includes a camera button displayed on the virtual keyboard.
  • the method may additionally and/or alternatively include automatically invoking an optical input capture interface in response to detecting the invocation of the optical input extension.
  • the method may additionally and/or alternatively include pre-analyzing the optical input prior to capturing the optical input. Pre-analyzing includes operations such as: detecting an object depicted in the optical input; determining one or more characteristics of the object depicted in the optical input; and determining one or more analysis parameters based at least in part on the determined characteristic(s).
  • the one or more analysis parameters preferably include OCR parameters.
  • the presently disclosed inventive optical input techniques may leverage contextual information concerning the optical or textual information; the data input operation; the form, field, etc. into which the data are to be input; etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • optical input may be preferred where a large volume and/or complex collection of textual information is required. For instance, if a user engaging in an activity via their mobile device wishes to complete a form having several fields requesting different types of textual information, and some or all of the textual information are depicted on one or more documents, then it may be advantageous to determine or obtain the textual information via capturing optical input comprising an image of the document depicting that textual information, rather than requiring the user to manually input each individual piece of the desired textual information.
  • a user may utilize a document as a source of textual information to be provided via optical input.
  • the document may take any form, and may exhibit unique characteristics that are indicative of the document belonging to a predetermined class of documents (e.g. a credit card, credit report, driver's license, financial statement, tax form, etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions).
  • where a document class is characterized by known dimensions, a known orientation, a known layout or organization of the textual information, etc., it may be advantageous to utilize analysis parameters, settings, etc. configured to produce superior analytical results for that layout, organization, orientation, etc.
  • the predetermined analysis parameters, settings, techniques, etc. employed preferably include one or more OCR parameters, settings, techniques, etc.
  • the mobile device may determine characteristics of the optical input, including but not limited to whether the optical input comprises an identifiable object or object(s), and ideally an identity or classification of any such detected object(s). Based on the determination reached by this pre-analysis, predetermined capture settings known to yield ideal optical input for subsequent analysis may be employed.
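One way to embody the pre-analysis-driven selection of capture settings is a simple dispatch keyed on the detected object class. The class names and setting values below are illustrative assumptions only, not settings disclosed by any particular embodiment:

```python
# Hypothetical table of predetermined capture settings keyed by the
# document class detected during pre-analysis.
CAPTURE_SETTINGS = {
    "drivers_license": {"resolution": "high",   "flash": False, "color_mode": "color"},
    "invoice":         {"resolution": "medium", "flash": False, "color_mode": "grayscale"},
    "credit_card":     {"resolution": "high",   "flash": True,  "color_mode": "color"},
}
DEFAULT_SETTINGS = {"resolution": "medium", "flash": False, "color_mode": "color"}

def settings_for(detected_class):
    """Select predetermined capture settings assumed to yield optical
    input well suited to subsequent analysis of the detected class;
    fall back to generic defaults for unrecognized objects."""
    return CAPTURE_SETTINGS.get(detected_class, DEFAULT_SETTINGS)
```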
  • the optical input may be analyzed based on contextual information determined from or based on the web page, application, form, field, etc. with which the user interacts to invoke the user input interface (e.g. virtual keyboard and/or optical input extensions thereof), in various embodiments.
  • existing techniques allow a user interface to restrict input a user may provide to the user interface, e.g. by selectively invoking a restricted input interface (e.g. an interface consisting of numerical characters for inputting a date of birth or social security number, an interface consisting of alphabetic characters for inputting a "name", etc.).
  • the presently described optical input extension may influence, determine, or restrict the analytical parameters employed to analyze optical input captured using the extension.
  • analytical parameters employed for a field accepting only numerical characters may include an OCR alphabet that is restricted to numerals, or conversely an OCR alphabet restricted to letters for a field accepting only alphabetic characters.
  • the optical input extension may automatically and transparently define the analytical parameters based on the type, format, etc. of acceptable input for a given data entry field, and the defining may be performed directly in response to receiving instructions identifying a type of acceptable input for the particular field upon the user interacting with the data entry field.
  • a user interacts with a fillable data entry field expecting a telephone number as input.
  • conventionally, this data entry field is presented with a keyboard consisting of numerals 0-9.
  • a user interacting with the same data entry field and utilizing an optical input extension as described herein may employ analytical parameters including an OCR alphabet limited to numerals 0-9.
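The effect of a restricted OCR alphabet can be sketched as a post-processing step over per-character OCR candidates. The candidate-list representation used here (a list of (character, confidence) pairs per position) is an assumption for illustration, not a feature of any particular OCR engine:

```python
def restrict_ocr_alphabet(ocr_candidates, alphabet):
    """Given per-character OCR candidate lists of (character, confidence)
    pairs, discard candidates outside the field's permitted alphabet and
    keep the highest-confidence surviving candidate at each position."""
    result = []
    for candidates in ocr_candidates:
        allowed = [c for c in candidates if c[0] in alphabet]
        if allowed:
            result.append(max(allowed, key=lambda c: c[1])[0])
    return "".join(result)
```

For a telephone-number field, the alphabet would be restricted to the digits 0-9, so a high-confidence letter "O" loses to a lower-confidence digit "0".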
  • a user may navigate to a web page, form, mobile application, etc. using a mobile device.
  • the user may interact with one or more fillable fields presented on the web page, a navigation bar of the web browser, or any other element of the medium with which the user is interacting that accepts textual information as suitable input.
  • the mobile device may invoke an optical capture interface substantially representing a "camera" application, e.g. as typically included in native OS functionality provided with conventional mobile devices.
  • the mobile device display represents a "viewfinder" depicting the mobile device optical sensor's field of view, preferably in real- or near-real time.
  • the mobile device either in response to user input or (preferably) in an automatic manner transparent to the user, may perform pre-analysis as described above utilizing optical input received by the mobile device optical sensor(s) (e.g. the optical input utilized to generate the viewfinder display).
  • the pre-analysis may include identifying any textual information depicted in a portion of the optical sensor's field of view (e.g. a bounding box) and displaying a preview of any identified textual information. Even more preferably, identified text may be displayed in the data entry field with which the user interacted to invoke the user input interface and/or optical input extension thereof.
  • the presently disclosed methods, systems, and/or computer program products may be utilized with, implemented in, and/or include one or more user interfaces (UIs) configured to facilitate receiving user input and producing corresponding output.
  • the user input UI(s) may be in the form of a standard UI included with a mobile device operating system, such as a keyboard interface as employed with standard SMS messaging functionality and applications, browser applications, etc.; a number pad interface such as employed with standard telephony functionality and applications; or any other standard operating system UI configured to receive user input, particularly input comprising or corresponding to textual information (i.e. user input comprising taps on various locations of a screen or speech which will be converted to textual information).
  • user input UI 100 includes a navigation UI 110, a form or page 120, and a keyboard UI 130.
  • Each of UIs 110, 120, and 130 may be a standard UI provided via a mobile device operating system, a standard browser or mobile application included with the mobile device operating system, or may be provided via a separately installed, standalone application. Standalone application embodiments are preferred due to the ability to efficiently integrate context-dependent functionality and capture/extraction functionality in a seamless workflow and user experience.
  • the navigation UI 110 includes a navigation component 112 such as an address bar of a mobile browser, forward and/or back buttons (not shown) to assist navigating through various stages of the workflow, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • Form/Page 120 of the workflow includes a plurality of fields 122-128 which are preferably configured to receive as input a plurality of identifiers (optionally normalized and/or validated as described herein) output from a capture and extraction operation of the workflow.
  • the fields include a City field 122, a Zip Code field 124, a Phone Number field 126 and a State field 128.
  • additional fields may be included in the form/page 120, and the user may navigate around the form/page 120 to selectively display the various fields thereof using any suitable technique that would be appreciated by a person having ordinary skill in the art upon reading the present descriptions.
  • each field may be associated with an expected format and/or value or range of values for textual information received as input thereto.
  • City field 122 may expect a string of alphabetic characters beginning with a capital letter and followed by a plurality of lowercase letters, optionally including one or more spaces or hyphen characters but excluding numerals and other special characters.
  • Zip Code field 124 may expect a string of either five numerals or ten characters including numerals and an optional hyphen or space. Zip Code field 124 may further expect the ten-character string to obey a particular format, such as "#####-####".
  • Phone Number field 126 may expect seven numerals and optionally one or more spaces, parenthesis, periods, commas, and/or hyphens. Phone number field 126 may also expect the textual information input therein to obey a mask corresponding to one of several standard phone number formats, such as "(XXX) ###-####" in the United States, or other corresponding known conventions depending on the locality in which the device is being used. State field 128 may expect a two-character string of capital letters. Of course, other fields may be similarly associated with expected format(s) and/or value(s) or ranges of values according to known conventions, standards, etc. associated with information intended for receipt as input thereto.
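The per-field format expectations above can be captured as regular-expression masks; the exact patterns below are illustrative simplifications (e.g. they assume one U.S. phone layout rather than all locale-specific conventions):

```python
import re

# Illustrative masks for the exemplary fields 122-128; a real deployment
# would likely tolerate additional locale-specific variants.
FIELD_PATTERNS = {
    "city":  r"[A-Z][a-z]+(?:[ -][A-Z]?[a-z]+)*",   # City field 122
    "zip":   r"\d{5}(?:[- ]?\d{4})?",               # Zip Code field 124
    "phone": r"\(\d{3}\) \d{3}-\d{4}",              # Phone Number field 126
    "state": r"[A-Z]{2}",                           # State field 128
}

def accepts(field, text):
    """Return True when the text conforms to the field's expected format."""
    return re.fullmatch(FIELD_PATTERNS[field], text) is not None
```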
  • a user may interact with one of the fields 122-128 using any suitable means, e.g. via tapping on a region of the mobile device display corresponding to the field, and in response the keyboard interface 130 may be invoked.
  • the keyboard interface may not be invoked if the field does not accept user-defined textual information, e.g. in the case of a drop-down menu field such as State field 128.
  • a user's interaction with the field may be indicated by presence of a cursor 121.
  • the user's interaction with a particular field may also invoke or schedule a context-dependent component of the workflow, e.g. a component configured to apply particular business rules, perform validation, document classification, etc. as described in further detail herein.
  • the keyboard interface 130 may selectively include an alphabetic character set (e.g. as shown in FIG. 1A in response to user interaction with City field 122) or a numerical/symbolic character set (e.g. as shown in FIG. 1B in response to user interaction with Zip Code field 124), based on the context of the field interacted with by the user (e.g. the expected value or value range of textual information input to the field).
  • keyboard interface 130 includes a plurality of keys 132 configured to facilitate the user "typing" textual information into the fields, as well as a function button 134 configured to execute one or more operations using an I/O component of the mobile device, such as a microphone and/or camera of the mobile device.
  • a function button 134 of the keyboard interface 130 may be interacted with by the user to invoke an optical input extension of the mobile application or workflow.
  • the optical input extension invokes a capture interface and initiates a capture and extraction operation (optionally including validation, classification, etc.) as described in further detail below.
  • the optical input extension may be displayed separately from the keyboard interface 130, e.g. as a separate button 136 within form/page 120 as depicted generally in FIG. 1B.
  • an image of a document may be captured or received by the mobile device, and an image processing operation such as optical character recognition (OCR) may be performed on the image.
  • the extracted identifier may be compared with reference content or analyzed in view of one or more business rules.
  • the reference content and/or business rules are preferably stored locally on the mobile device to facilitate efficient comparison and/or analysis, and may be provided in any suitable form.
  • reference content may take the form of a complementary document to the document from which the identifier is to be extracted.
  • Complementary documents may include a document, file, or any other suitable source of textual information against which a simple comparison of the extracted identifier may be performed.
  • a mobile application includes a data store having one or more complementary documents, each complementary document corresponding to at least one identifier or type of identifier utilized in one or more workflows of the mobile application.
  • the complementary document may comprise identifiers, e.g. as may be obtained and stored in the data store based on previous capture and extraction operations using the mobile application.
  • the complementary document may comprise a processed image of a document depicting the identifier, said processing being configured to improve the quality of the image for purposes of data extraction (e.g. via custom binarization based on color profile, correction of projective effects, orientation correction, etc.).
  • the document image may serve as a validation tool to ensure accuracy of an identifier extracted from a document imaged in subsequent invocations of the mobile application or particular workflows therein.
  • similar functionality may be achieved when the complementary document comprises simply a validated identifier, e.g. a string of characters and/or symbols of an identifier that is known to be accurate.
  • business rules may indicate an expected format of the extracted identifier, and may further include rules regarding how to selectively extract the identifier (e.g. using OCR parameters based on a particular color profile of the document, or OCR parameters that restrict a location within the document for which the identifier is searched), and/or how to modify the extracted identifier to fit the expected format, for example using a mask, a regular expression, or modifying OCR parameters such as via changing an OCR alphabet to exclude certain symbols or character sets, etc., as would be understood by a person having ordinary skill in the art upon reading the present descriptions.
  • business rules may indicate that only a portion of information properly considered an identifier within the scope of the present disclosures is needed or desired in the context of a particular workflow.
  • a workflow may require only a zip code of an address, only the last four digits of a social security number or credit card number, only a month and year of a date, only a portion of a line item on an invoice, such as a price or product code but not both, etc. as would be understood by a person having ordinary skill in the art upon reading the present descriptions.
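Business rules that keep only the needed portion of an identifier might be sketched as small transformations like the following; the rule names and the assumed MM/DD/YYYY date layout are hypothetical illustrations:

```python
import re

def zip_only(address):
    """Business rule: keep only the zip code portion of a full address."""
    match = re.search(r"\d{5}(?:-\d{4})?", address)
    return match.group() if match else None

def ssn_last_four(ssn):
    """Business rule: keep only the last four digits of an SSN."""
    return re.sub(r"\D", "", ssn)[-4:]

def month_year(date_mdy):
    """Business rule: reduce an MM/DD/YYYY date to its month and year."""
    month, _, year = date_mdy.split("/")
    return f"{month}/{year}"
```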
  • A particular advantage of utilizing business rules with the presently disclosed inventive concepts is that the business rule applied to a particular extraction operation may be context-sensitive, such that appropriate business rules are automatically determined and applied to each extraction attempt.
  • the extracted identifier may be corrected.
  • the extracted identifier is corrected using the textual information from the complementary document and/or predefined business rules.
  • Predefined business rules may preferably include business-oriented criteria/conditions for processing data, such as setting a threshold for the acceptable amount of mismatch to which correction may be applied (e.g. correction may be applied to mismatches of less than a maximum threshold number of characters, a maximum percentage of characters, etc.; corrections may only be applied to mismatches fitting within a predefined set of "acceptable" errors, e.g. a numeral "1" instead of a letter "l" and vice-versa, a dash "—" instead of a hyphen "-", etc.) and other similar business-oriented criteria/conditions as would be understood by one having ordinary skill in the art upon reading the present descriptions.
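A mismatch threshold combined with a set of "acceptable" OCR confusions could be implemented along these lines; the specific confusion pairs and the two-character threshold are hypothetical examples of such business rules:

```python
# Hypothetical set of OCR confusion pairs deemed "acceptable" mismatches,
# e.g. numeral "1" vs. letter "l", numeral "0" vs. letter "O",
# and em-dash vs. hyphen.
ACCEPTABLE = {("1", "l"), ("l", "1"), ("0", "O"), ("O", "0"),
              ("-", "\u2014"), ("\u2014", "-")}

def correctable(extracted, reference, max_mismatches=2):
    """Return True when the extracted identifier differs from the
    reference only at positions whose character pairs are acceptable
    OCR confusions, and the mismatch count is within the threshold."""
    if len(extracted) != len(reference):
        return False
    mismatches = [(a, b) for a, b in zip(extracted, reference) if a != b]
    return (len(mismatches) <= max_mismatches
            and all(pair in ACCEPTABLE for pair in mismatches))
```

An extracted identifier that passes this check may then be corrected by substituting the reference value; one that fails is treated as a true mismatch.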
  • an extracted identifier may be modified. For example, discrepancies arising from OCR errors may be automatically handled using the present techniques.
  • an identifier is expected to be in a predetermined format. For instance, in the context of a tender document such as a credit card, the identifier may be an account number expected in a 16-digit numerical format substantially fitting "####-####-####-####" as seen typically on conventional credit/debit cards, or an expiration date in a "MM/YY" format, etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • the extracted identifier may be accurately extracted, but nonetheless be presented in a different format than expected, e.g. the identifier may include or exclude expected symbols or formatting, such as spaces, dashes, or impermissible characters (e.g. a month designation in a date, such as "Jan" or "January", including alphabetic characters where the expected format is strictly numerical, such as "01").
  • Discrepancies of this nature may be automatically resolved by leveraging data normalization functionalities.
  • an extracted identifier comprises a date
  • there are a finite set of suitable formats in which the date data may be expressed, such as 01 January, 2001; January 01, 2001; 01/01/01; Jan. 1, 01; etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • Other types of identifier data may similarly be expressed in a finite number of formats, including account number (e.g. conventional 16-digit account numbers in the format ####-####-####-####, ################, etc.), cardholder name (e.g. Last, First; Last, First, Middle Initial (MI); First Last; First MI Last; etc.), security code (e.g. either a three-digit or four-digit number, an alphanumeric string including both letters and numbers, etc.).
  • the presently disclosed techniques may be configured to automatically normalize data obtained (e.g. via extraction) from the imaged financial document in a manner that the data obtained from the financial document matches an expected format of corresponding data, e.g. contained/depicted in textual information of the complementary document. For example, upon determining that extracted data such as a date is in a particular format (e.g. Jan. 01, 2001) other than an expected format (e.g. 01/01/01), the extracted data may be automatically converted to the expected format.
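Because a date identifier can only arrive in a finite set of formats, normalization reduces to trying each known format and re-emitting the value in the expected one. The format list below is illustrative, not exhaustive:

```python
from datetime import datetime

# Hypothetical finite set of formats a date identifier might arrive in.
DATE_FORMATS = ["%d %B, %Y", "%B %d, %Y", "%m/%d/%y", "%b. %d, %y", "%m/%d/%Y"]

def normalize_date(raw, target="%m/%d/%y"):
    """Try each known source format until one parses, then re-emit the
    date in the expected target format; raise when nothing matches."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime(target)
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")
```

The same try-each-format approach extends to account numbers, cardholder names, and security codes, each with its own finite format set.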
  • a first iteration operates substantially as described above - extracting an identifier from an image of a document and comparing the extracted identifier to corresponding data from one or more data sources (e.g. the textual information from the complementary document, database record, the predefined business rules, etc.).
  • the first iteration comparison fails to yield any match between the extracted identifier and the corresponding data from the data source(s).
  • the mismatches may be a result of OCR errors rather than true mismatch between the identifier on the imaged document and the corresponding data from the one or more data sources.
  • OCR errors of this nature may be corrected, in some approaches, by determining one or more characteristics of data corresponding to the identifier.
  • the first OCR iteration may extract the identifier in an unacceptable format (e.g. the data is not properly normalized) and/or perform the OCR in a manner such that the extracted identifier contains one or more OCR errors.
  • the extracted identifier fails to match any corresponding data in the one or more data sources, despite the fact that the "true" identifier as depicted on the document actually matches at least some of the corresponding data. False negative results of this variety may be mitigated or avoided by modifying parameters, rules and/or assumptions underlying the OCR operation based on identifier characteristics.
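The two-iteration flow described above, retrying OCR with parameters adapted to the inferred identifier characteristics, can be sketched with the OCR engine abstracted as a callable; the restricted alphabet shown assumes a numeric account number, and the engine interface is hypothetical:

```python
def extract_with_retry(ocr, image, reference_values,
                       restricted_alphabet="0123456789-"):
    """First iteration: unrestricted OCR, compared against reference
    data. On a failed match, retry with an OCR alphabet restricted to
    the characters expected for the inferred identifier type."""
    first = ocr(image, alphabet=None)
    if first in reference_values:
        return first
    second = ocr(image, alphabet=restricted_alphabet)
    return second if second in reference_values else None
```

For example, a stub engine that reads a letter "l" on the unrestricted pass but the digit "1" once the alphabet is restricted would produce a false negative on the first iteration and succeed on the second.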
  • an identifier is extracted, and compared to corresponding data from one or more data sources.
  • the string of characters comprising the extracted identifier does not match any account number in the corresponding data.
  • the extracted identifier is further analyzed to determine characteristics thereof.
  • the extracted identifier may be compared to a plurality of predefined identifier types (e.g. "First Name," "Last Name," "Account Number," "expiration date," "PIN," etc.) to determine whether the extracted identifier exhibits any characteristic(s) corresponding to one of the predefined identifier types.
  • the extracted identifier and the predefined identifier types may be compared to determine the existence of any similarities with respect to data formatting and/or data values.
  • Exemplary identifier characteristics suitable for such comparison include string length, string alphabet (i.e. the set of characters from which the identifier may be formed, such as "alphabetic," "numeral," "alphanumeric," etc.), presence of one or more discernable pattern(s) common to identifiers of a particular type, or any other characteristic that would be recognized by a skilled artisan reading these descriptions.
  • identifier characteristics may include any pattern recognizable using known pattern-matching tools, for example regular expressions.
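The pattern-based identifier-type determination described in the bullets above can be sketched with regular expressions. This is an illustrative sketch, not the patent's implementation; the identifier-type names mirror the examples given above, but the exact patterns (16-digit account number, MM/YY expiration date, 4-digit PIN) are assumptions for the example.

```python
import re

# Assumed patterns for a few of the predefined identifier types named above.
IDENTIFIER_TYPES = {
    "Account Number": re.compile(r"^\d{16}$"),      # 16-digit card-style number
    "expiration date": re.compile(r"^\d{2}/\d{2}$"),  # MM/YY
    "PIN": re.compile(r"^\d{4}$"),
    "Last Name": re.compile(r"^[A-Za-z'-]+$"),
}

def classify_identifier(extracted: str) -> list[str]:
    """Return every predefined identifier type whose pattern the extracted
    string matches; an empty list means no known characteristics matched."""
    return [name for name, pattern in IDENTIFIER_TYPES.items()
            if pattern.fullmatch(extracted)]
```

In this sketch an extracted string matching several patterns would report all of them, leaving disambiguation to document characteristics such as the extraction location discussed below.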
  • the identifier type may be determined in whole or in part based on one or more document characteristics, such as: a location in the document from which the identifier is extracted; a classification of the document from which the identifier is extracted (such as disclosed in related U.S. Patent Application No. 13/802,226, filed March 13, 2013, published as U.S. Patent Publication No. 2014/0270349 on September 18, 2014, and herein incorporated by reference); and/or characteristic(s) of data located adjacent, above, below, or otherwise spatially proximate to the identifier on the document, etc. as would be understood by skilled artisans upon reading the instant descriptions.
  • identifier characteristics may be determined based on a location from which an identifier is extracted being located below data depicting related information, such as an identifier being located below a street address line, which typically corresponds to a city, state, and/or zip code, particularly in documents depicting a mailing address.
  • identifier characteristic(s) may be determined based on an identifier being extracted from a location horizontally adjacent to related data, for example as is the case for an expiration date or account number, respectively, as depicted on typical credit and debit card documents.
  • an extracted identifier is analyzed, and determined to have characteristics of a "payment amount" identifier type.
  • the identifier may be determined to exhibit characteristics such as consisting of characters expressed only in numerical digits, such as a street or room number of an address, etc.
  • the extracted identifier may be analyzed to determine whether any convention(s) or rule(s) describing the identifier characteristics are violated, which may be indicative of the extracted identifier including OCR errors, improper data normalization, or both, in various approaches.
  • an extracted identifier fails to match any of the corresponding data in the one or more data sources based on a first comparison therebetween.
  • the extracted identifier is analyzed and determined to be of an identifier type "account number,” based at least in part on the extracted string being sixteen characters in length. The extracted identifier is further analyzed and determined to violate an "account number" characteristic.
  • the extracted identifier includes a non-numeral character, e.g. because one character in the extracted identifier string was improperly determined to be a letter "B" instead of a numeral "8," a letter “i” instead of a numeral “1 ", a letter “O” instead of a numeral “0,” etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • the OCR error may be corrected using a second OCR iteration based at least in part upon establishing the identifier characteristic(s). In the foregoing example of an account number erroneously including an alphabetic character instead of a numeral, the OCR engine may be restricted to an alphabet of candidate characters consisting entirely of numerical digits.
  • the decision to restrict the OCR alphabet is based on predefined business rules applying to the format of the account number, i.e. that the account number consists of numerical digits.
  • the second iteration, accordingly, properly recognizes the "8" numeral in the identifier, rather than the "B" letter erroneously determined from the first iteration.
  • the identifier complies with at least one business rule, such as described above. More preferably, the business rule(s) may be expressed as at least one logical expression (e.g. a rule, formula, a pattern, convention, structure, organization, etc. or any number or combination thereof).
  • a business rule may indicate that a particular alphabet of symbols should be used, e.g. as opposed to a more complete or different alphabet of symbols.
  • the business rule indicates an account number follows a convention including hyphen symbol characters (i.e. "-"), but excludes dash symbol characters and underscore symbol characters (i.e. "_").
  • a second iteration may be performed using a more restricted alphabet to normalize the extraction results according to the expectations reflected in the business rule(s).
  • a user working within a mobile application or workflow may interact with a field of the application, a webpage, etc. and based on the particular field, a unique business rule may be applied to a subsequent capture and extraction task.
  • a field requesting a ZIP code (e.g. field 124 of FIG. 1A)
  • the extracted identifier should have a format of five (or nine) digits, all characters should be numerical (or include hyphens), and alphabetic characters adjacent to a five (or nine) number string should not be included in the extracted identifier.
  • the user's interaction with the particular field can provide context-sensitive determination of the proper business rule to apply in a subsequent capture and extraction of an identifier from a document.
  • a user could selectively capture only the ZIP code from a document depicting a full street address, and populate the ZIP code field of the corresponding mobile application or workflow, all without providing any instruction to the mobile application or workflow and without having to input any textual information to the field.
  • business rules may be based partially or entirely on context of the document in view of the mobile application or workflow. For example, in a similar situation as described immediately above, a user may interact with a field of a form or web page expecting a ZIP code. However, the form or page also includes other fields requesting different information, such as a phone number, city and state of an address, name, social security number, expiration date, credit card number, etc. The fact that the field with which the user interacted is part of a form/page requesting other information that is likely to be on a single document (e.g. a driver's license, utility bill, credit card, etc.) may invoke a business rule whereby a subsequent capture and extraction operation attempts to extract multiple identifiers and populate multiple fields of the form in a single process, even though the user may not have interacted with the other fields.
  • a document within the viewfinder may be analyzed to determine the type of document, in one approach. Based on this determination the multiple identifier extraction and field population process may be performed (e.g. if the document type is a type of document likely to contain multiple identifiers corresponding to the multiple fields) or circumvented (e.g. if the document is not an appropriate type of document to attempt multi-extraction because the document type typically does not depict information corresponding to the other fields on the form/page).
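The decision in the bullet above — attempt multi-field extraction only when the detected document type is likely to depict the other requested fields — can be sketched as a lookup plus an overlap test. The document types and their assumed field sets here are illustrative, not taken from the patent:

```python
# Assumed mapping of document types to the fields they typically depict.
FIELDS_BY_DOC_TYPE = {
    "driver license": {"name", "address", "license number"},
    "credit card": {"name", "credit card number", "expiration date"},
    "utility bill": {"name", "address", "account number"},
}

def should_multi_extract(doc_type: str, form_fields: set[str]) -> bool:
    """True when the classified document type typically depicts more than
    one of the fields present on the form/page, so a single capture can
    populate multiple fields; False circumvents multi-extraction."""
    available = FIELDS_BY_DOC_TYPE.get(doc_type, set())
    return len(available & form_fields) > 1
```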
  • this dual-context approach enables an optical input-based autofill functionality without relying on any prior data entry. Autofill can be performed on first capture in near-real time.
  • a user may capture an image of one or more documents.
  • the image is preferably captured using a capture component (e.g. "camera" as described above) of a mobile device by invoking the capture interface via an optical I/O extension (e.g. extension 134 or 136 of FIGS. 1A and 1B, respectively).
  • the captured image may be optionally stored to a memory, e.g. a memory of the mobile device, for future use and/or re-use as described herein.
  • embodiments of the present disclosures also encapsulate scenarios where a document image is not captured, but otherwise received at a device (preferably a device having a processor, such as a mobile device) for subsequent use in extracting and/or validating information depicted on or associated with the document (e.g. a corresponding identifier depicted on a different document).
  • the image of the document is analyzed by performing OCR thereon.
  • the OCR may be utilized substantially as described above to identify and/or extract characters, and particularly text characters, from the image.
  • the extracted characters include an identifier that uniquely identifies the document.
  • the identifier may take any suitable form known in the art, and in some approaches may be embodied as an alphanumeric string of characters, e.g. a tender document account number (such as a 16-digit account number typically associated with credit/debit card accounts), a security code (such as a CCV code on a debit/credit card, a scratch-off validation code, a personal identification number (PIN), etc.), an expiration date (e.g. in the format "MM/YY"), etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions.
  • the presently disclosed techniques may leverage a number of advantageous features to provide a document owner with useful information and/or services regarding their document. For example, and optionally taking into account contextual information such as a mobile application running on the mobile device, data may be automatically provided to the mobile application without requiring the user to input any textual information, thus avoiding time consuming processes, user error, predictive dictionary bias, and other problems common to conventional, user-based textual input for mobile devices.
  • a mobile application which may be a standard browser displaying a particular web page, a standalone application, etc., includes a workflow configured to facilitate a user applying for automobile insurance.
  • the workflow may include fields requesting information such as the applicant's name, driver license number, vehicle make, model, and/or year, state of residence, etc.
  • the capture interface may include a prompt directing a user to capture image(s) of one or more documents, e.g. a driver license and vehicle registration, depicting some or all of the information requested for the fields of the workflow.
  • the capture interface is configured to automatically detect a document depicted in the viewfinder, and capture an image thereof when optimal capture conditions (e.g. illumination, perspective, and zoom/resolution) have been achieved.
  • the viewfinder may include a reticle, such as four corners arranged in a rectangular fashion to facilitate capturing an image of the entire document, a rectangular box to facilitate capturing a line, field, etc. of textual information depicted on the document, etc. as would be understood by a person having ordinary skill in the art upon reading the present descriptions.
  • the reticle is preferably configured to assist a user in orienting the device and/or document to achieve optimal capture conditions.
  • the capture operation is contextually aware to facilitate accurate and precise extraction of identifiers from the document, as well as accurate and precise output of corresponding textual information in the fields of the workflow.
  • the corresponding textual information may be identical to the extracted identifier(s), or may be normalized according to an expected format and/or to correct OCR errors, in various approaches.
  • the identifiers may be validated against reference content or business rules as described in further detail herein, to facilitate precise, accurate extraction and output.
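As one concrete instance of validating an extracted identifier against a business rule, a 16-digit account number could be checked with the standard Luhn checksum. The patent does not name a specific check; the Luhn algorithm below is a well-known example of the kind of rule such validation could apply:

```python
def luhn_valid(account_number: str) -> bool:
    """Return True when the digit string passes the Luhn checksum, a
    common validity rule for payment card account numbers."""
    digits = [int(ch) for ch in account_number]
    # Double every second digit from the right (check digit untouched),
    # subtracting 9 whenever the doubled value exceeds 9.
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0
```

An extracted identifier failing such a check could trigger the second OCR iteration described earlier rather than being reported as a mismatch.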
  • the document may be analyzed and classified to determine the context of the document and/or determine whether to attempt a multi-field extraction operation, as described further herein.
  • Method 300 may be performed in any suitable environment, including those shown in the preceding figures, and any other suitable environment that would be appreciated by a person having ordinary skill in the art upon reading the present descriptions.
  • method 300 includes operations 302-306.
  • optical input is received via one or more optical sensors of a mobile device, such as occurs when a viewfinder interface is invoked and a video feed depicting the field of view of the mobile device optical sensor(s) is displayed.
  • the optical input is analyzed using a processor of the mobile device to determine a context of the optical input.
  • a contextually-appropriate workflow is invoked based on the context of the optical input.
  • the context may include any suitable information relevant to performing operations within the corresponding workflow, and preferably comprises one or more of: a type of document represented in the optical input; and content of the document represented in the optical input.
  • the type of document is selected from a group consisting of: a contract, a lender document, an identity document, an insurance document, a title, a quote, and a vehicle registration.
  • the document content is preferably selected from: a phone number, a social security number, a signature, a line item of an invoice, a partial or complete address, a universal resource locator, an insurance group number, a credit card number, a tracking number, a photograph, and a distribution of fields depicted on the document.
  • a user may position a document depicting a signature, such as a driver's license, personal or business check, contract, etc. within range of the mobile device's optical sensors.
  • the mobile device may detect the presence of the signature, preferably in conjunction with one or more other identifying characteristics (e.g. a photograph on the driver's license, a particular font such as magnetic ink character recognition fonts on a check, a distribution of fields on a form, etc.) of the document and automatically or semi-automatically invoke an appropriate mobile application on the mobile device.
  • a context-dependent business process or workflow may similarly be invoked.
  • Different information may indicate the proper workflow to invoke is either an insurance quote, a health care admission process, a signing ceremony, a deposit, or any combination thereof.
  • a driver license number and vehicle identification number may indicate propriety of an automobile insurance quote.
  • a health insurance provider name, policyholder (patient name) and/or group number may indicate propriety of the health care admission workflow or a health insurance quote workflow, alternatively.
  • a document containing textual information common to a loan agreement such as a mortgage or loan application, in conjunction with a signature or signature block may indicate propriety of a signing ceremony workflow.
  • a document including a signature and account number or deposit amount may indicate propriety of a deposit workflow.
  • the presently disclosed inventive concepts may be applied to other workflows as would be understood by a person having ordinary skill in the art upon reading the present disclosure, without departing from the scope of the instant descriptions.
  • a mobile application may invoke an insurance quote workflow to facilitate a user obtaining automobile insurance.
  • a mobile check deposit workflow may be invoked.
  • a mortgage application process or document signing ceremony process may be invoked.
  • the mobile device may invoke an application configured to facilitate the contextualiy-appropriate action such as described above, in various embodiments.
  • context-sensitive process invocation may include any one or more of the following: in response to detecting that a document depicted in view of the mobile device optical sensors is an invoice (e.g. by detecting presence of the word "invoice," an invoice number, a known service-provider entity name, address, etc.), invoking a systems, applications, products (SAP) or other similar enterprise application and automatically displaying a status of the invoice.
  • in response to detecting textual information depicted in view of the mobile device optical sensors is a phone number, a phone application of the mobile device operating system may be invoked, and the number may be automatically entered and/or dialed.
  • in response to detecting textual information depicted in view of the mobile device optical sensors is a universal resource locator (URL), a web browser application of the mobile device may be invoked, the URL may be entered into the navigation or address bar, and/or the browser may automatically direct to resources indicated by the URL.
  • a financial services application or credit card company website may be invoked (via a browser in cases where a website is invoked) and a credit account statement, balance, due date, etc. may be displayed to the user.
  • a tax preparation application or website may be invoked.
  • a user input UI of a workflow may be contextually invoked based on optical input in the mobile device's field of view, and any appropriate information also within the mobile device field of view is automatically captured and output into an appropriate field of the invoked UI with appropriate formatting and any OCR errors already corrected.
  • a system within the scope of the present descriptions may include a processor and logic in and/or executable by the processor to cause the processor to perform steps of a method as described herein.
  • a computer program product within the scope of the present descriptions may include a computer readable storage medium having program code embodied therewith, the program code readable/executable by a processor to cause the processor to perform steps of a method as described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Character Discrimination (AREA)

Abstract

Systems, methods, and computer program products for smart, automated capture of textual information using optical sensors of a mobile device are disclosed. The textual information is provided to a mobile application or workflow without requiring the user to manually input or transfer the data, and without requiring user intervention such as a copy-and-paste operation. The capture and provision are context-sensitive, and may normalize or validate the captured textual information prior to its entry into the workflow or mobile application. Other information that the workflow needs and that is available to the optical sensors of the mobile device may also be captured and provided, in a single automatic process. As a result, the overall process of capturing information from optical input using a mobile device is significantly simplified and improved in terms of data transfer/entry accuracy, speed and efficiency of workflows, and ease of use.
EP15779936.2A 2014-04-15 2015-04-15 Extension d'entrée/sortie (e/s) optique intelligente pour flux de tâches dépendant du contexte Withdrawn EP3132381A4 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201461979949P 2014-04-15 2014-04-15
US14/686,644 US9349046B2 (en) 2009-02-10 2015-04-14 Smart optical input/output (I/O) extension for context-dependent workflows
PCT/US2015/026022 WO2015160988A1 (fr) 2014-04-15 2015-04-15 Extension d'entrée/sortie (e/s) optique intelligente pour flux de tâches dépendant du contexte

Publications (2)

Publication Number Publication Date
EP3132381A1 true EP3132381A1 (fr) 2017-02-22
EP3132381A4 EP3132381A4 (fr) 2017-06-28

Family

ID=54324552

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15779936.2A Withdrawn EP3132381A4 (fr) 2014-04-15 2015-04-15 Extension d'entrée/sortie (e/s) optique intelligente pour flux de tâches dépendant du contexte

Country Status (4)

Country Link
EP (1) EP3132381A4 (fr)
JP (1) JP2017514225A (fr)
CN (1) CN106170798A (fr)
WO (1) WO2015160988A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10210384B2 (en) 2016-07-25 2019-02-19 Intuit Inc. Optical character recognition (OCR) accuracy by combining results across video frames
CN108416681B (zh) * 2017-11-28 2021-05-28 中国平安财产保险股份有限公司 Insurance quote information display method, storage medium, and server

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6512848B2 (en) * 1996-11-18 2003-01-28 Canon Kabushiki Kaisha Page analysis system
US6980331B1 (en) * 1999-12-02 2005-12-27 Lucent Technologies Inc. Automatic send to embedded fax/e-mail address
WO2004015619A1 (fr) * 2002-08-07 2004-02-19 Matsushita Electric Industrial Co., Ltd. Character recognition processing device, character recognition processing method, and mobile terminal
WO2005091235A1 (fr) * 2004-03-16 2005-09-29 Maximilian Munte Mobile paper document processing system
CN1773523A (zh) * 2004-11-08 2006-05-17 乐金电子(昆山)电脑有限公司 Device and method for character recognition and voice output in a camera-equipped portable information terminal
KR100664421B1 (ko) * 2006-01-10 2007-01-03 주식회사 인지소프트 Portable terminal for business card recognition using an equipped camera, and business card recognition method
US20080040753A1 (en) * 2006-08-10 2008-02-14 Atul Mansukhlal Anandpura Video display device and method for video display from multiple angles each relevant to the real time position of a user
US8345159B2 (en) * 2007-04-16 2013-01-01 Caption Colorado L.L.C. Captioning evaluation system
US20080267504A1 (en) * 2007-04-24 2008-10-30 Nokia Corporation Method, device and computer program product for integrating code-based and optical character recognition technologies into a mobile visual search
US20090285445A1 (en) * 2008-05-15 2009-11-19 Sony Ericsson Mobile Communications Ab System and Method of Translating Road Signs
CN101620680B (zh) * 2008-07-03 2014-06-25 三星电子株式会社 Method and apparatus for recognizing and translating character images
US8774516B2 (en) * 2009-02-10 2014-07-08 Kofax, Inc. Systems, methods and computer program products for determining document validity
CN101609365B (zh) * 2009-07-21 2012-10-31 上海合合信息科技发展有限公司 Character input method and system, electronic device and keyboard thereof
CN101639760A (zh) * 2009-08-27 2010-02-03 上海合合信息科技发展有限公司 Contact information input method and system
US20120092329A1 (en) * 2010-10-13 2012-04-19 Qualcomm Incorporated Text-based 3d augmented reality
CN102663124A (zh) * 2012-04-20 2012-09-12 上海合合信息科技发展有限公司 Method and system for managing contact information on a mobile device
US9916514B2 (en) * 2012-06-11 2018-03-13 Amazon Technologies, Inc. Text recognition driven functionality

Also Published As

Publication number Publication date
EP3132381A4 (fr) 2017-06-28
JP2017514225A (ja) 2017-06-01
WO2015160988A1 (fr) 2015-10-22
CN106170798A (zh) 2016-11-30

Similar Documents

Publication Publication Date Title
US10380237B2 (en) Smart optical input/output (I/O) extension for context-dependent workflows
US10643164B2 (en) Touchless mobile applications and context-sensitive workflows
AU2017302250B2 (en) Optical character recognition in structured documents
CN107785021B (zh) 语音输入方法、装置、计算机设备和介质
US20170109610A1 (en) Building classification and extraction models based on electronic forms
EP3430567B1 (fr) Reconnaissance optique de caractères à l'aide de modèles hachés
US12008543B2 (en) Systems and methods for enrollment and identity management using mobile imaging
US11743216B2 (en) Digital file recognition and deposit system
US11995905B2 (en) Object recognition method and apparatus, and electronic device and storage medium
US10440197B2 (en) Devices and methods for enhanced image capture of documents
US20140279642A1 (en) Systems and methods for enrollment and identity management using mobile imaging
WO2015160988A1 (fr) Extension d'entrée/sortie (e/s) optique intelligente pour flux de tâches dépendant du contexte
US20230132261A1 (en) Unified framework for analysis and recognition of identity documents
US20210064864A1 (en) Electronic device and method for recognizing characters
JP2020021458A (ja) 情報処理装置、情報処理方法および情報処理システム
US20200065423A1 (en) System and method for extracting information and retrieving contact information using the same
US20150227787A1 (en) Photograph billpay tagging
RU2587406C2 (ru) Способ обработки визуального объекта и электронное устройство, используемое в нем
US20220230235A1 (en) Financial management using augmented reality systems
CN112508550A (zh) 一种转账处理方法、装置、设备及存储介质

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160922

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20170529

RIC1 Information provided on ipc code assigned before grant

Ipc: G06K 9/72 20060101ALI20170522BHEP

Ipc: G06K 9/20 20060101AFI20170522BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20180131

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20180611