US10735615B1 - Approach for cloud EMR communication via a content parsing engine - Google Patents

Info

Publication number
US10735615B1
US10735615B1
Authority
US
United States
Prior art keywords
data
superbill
parsing
received
image data
Prior art date
Legal status (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Active
Application number
US16/355,629
Inventor
Jayasimha Nuggehalli
James Woo
Current Assignee (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to US16/355,629
Assigned to RICOH COMPANY, LTD. Assignment of assignors' interest (see document for details). Assignors: NUGGEHALLI, JAYASIMHA; WOO, JAMES
Priority claimed from JP2020043579A external-priority patent/JP6849121B2/en
Application granted granted Critical
Publication of US10735615B1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00129Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a display device, e.g. CRT or LCD monitor
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/55Rule-based translation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00442Document analysis and understanding; Document recognition
    • G06K9/00449Layout structured with printed lines or input boxes, e.g. business forms, tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00442Document analysis and understanding; Document recognition
    • G06K9/00456Classification of image contents, e.g. text, photographs, tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/20Image acquisition
    • G06K9/2054Selective acquisition/locating/processing of specific regions, e.g. highlighted text, fiducial marks, predetermined fields, document type identification
    • G06K9/2081Selective acquisition/locating/processing of specific regions, e.g. highlighted text, fiducial marks, predetermined fields, document type identification based on user interaction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00405Output means
    • H04N1/00408Display of information to the user, e.g. menus
    • H04N1/0044Display of information to the user, e.g. menus for image preview or review, e.g. to help the user position a sheet
    • H04N1/00461Display of information to the user, e.g. menus for image preview or review, e.g. to help the user position a sheet marking or otherwise tagging one or more displayed image, e.g. for selective reproduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00912Arrangements for controlling a still picture apparatus or components thereof not otherwise provided for
    • H04N1/00938Software related arrangements, e.g. loading applications
    • H04N1/00949Combining applications, e.g. to create workflows
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/14Tree-structured documents
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K2209/00Indexing scheme relating to methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K2209/01Character recognition

Abstract

An approach provides sending captured Superbill image data and output data generated based on results of parsing the captured Superbill image data to an external system which manages Superbill data using a cloud system. The cloud system creates parsing rule data for parsing a captured Superbill image in accordance with user operation at a client device. The cloud system receives, via one or more computer networks from an input device, captured Superbill image data. The cloud system parses the received image data based on the created parsing rule and generates output data based on results of parsing. The cloud system sends the generated output data and the received image data to the external system via the one or more computer networks.

Description

RELATED APPLICATION DATA AND CLAIM OF PRIORITY
This application is related to U.S. patent application Ser. No. 15/942,414 entitled “Approach for Providing Access to Cloud Services on End-User Devices Using End-to-End Integration”, filed Mar. 30, 2018, U.S. patent application Ser. No. 15/942,415 entitled “Approach for Providing Access to Cloud Services on End-User Devices Using Local Management of Third-Party Services”, filed Mar. 30, 2018, U.S. patent application Ser. No. 15/942,415 entitled “Approach for Providing Access to Cloud Services on End-User Devices Using Local Management of Third-Party Services and Conflict Checking”, filed Mar. 30, 2018, and U.S. patent application Ser. No. 15/942,417 entitled “Approach for Providing Access to Cloud Services on End-User Devices Using Direct Link Integration”, filed Mar. 30, 2018, the contents of all of which are incorporated by reference in their entirety for all purposes as if fully set forth herein.
FIELD
Embodiments relate generally to processing electronic documents. SUGGESTED GROUP ART UNIT: 2625; SUGGESTED CLASSIFICATION: 358.
BACKGROUND
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, the approaches described in this section may not be prior art to the claims in this application and are not admitted prior art by inclusion in this section.
The continued growth of network services, and in particular Internet-based services, provides access to an ever-increasing amount of functionality. Cloud services in particular continue to grow into new areas. One of the issues with cloud services is that they can be difficult to use with certain types of end-user devices, such as multi-function peripherals (MFPs), for several reasons. First, even assuming that end-user devices have the requisite computing capability, some cloud services have interfaces that require a high level of programming skill and customization to implement on end-user devices. In addition, implementing a workflow that uses multiple cloud services requires an even higher level of skill and knowledge, because each service has different settings requirements and the results of one cloud service have to be stored, reformatted, and provided to another cloud service. Further, some cloud services require that certain data, such as user authorization data, be maintained on the client side, e.g., at an MFP, which is impractical and, in some situations, not possible.
SUMMARY
An apparatus comprises one or more processors and one or more memories communicatively coupled to the one or more processors. The one or more memories store instructions which, when processed by the one or more processors, cause source data of a sample Superbill to be received via one or more computer networks from a client device. As used herein, the term “Superbill” refers to an itemized form used by healthcare providers that details services provided to a patient and is the main data source for the creation of a healthcare claim. As used herein, the term “Superbill image data” refers to image data of a Superbill. Parsing rule data for parsing a Superbill is created, based on the received source data, in accordance with user operation at the client device. The parsing rule data includes at least region information indicating where data in one or more data fields is to be extracted from the Superbill, and one or more field labels to be associated with the extracted data in the one or more data fields. The one or more field labels are managed by an external system which manages Superbill data. Captured Superbill image data is received via the one or more computer networks from an input device. The received Superbill image data is parsed based on the created parsing rule data. Output data is generated based on results of parsing the received Superbill image data, and the generated output data and the received Superbill image data are sent, via the one or more computer networks, to the external system.
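For illustration only, the parsing rule data described above can be sketched as a simple data structure; all names and values here are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    # Pixel coordinates of the area of the Superbill image to be OCR'd.
    x: int
    y: int
    width: int
    height: int

@dataclass
class ParsingRule:
    # Bibliographic information entered by the user.
    name: str
    description: str
    # Region from which data in the data fields is to be extracted.
    region: Region
    # Field labels managed by the external (Cloud EMR) system.
    field_labels: list[str] = field(default_factory=list)

# A hypothetical rule for one sample Superbill layout.
rule = ParsingRule(
    name="Clinic A Superbill",
    description="Extracts practice and patient fields",
    region=Region(x=40, y=120, width=600, height=300),
    field_labels=["Practice Id", "Name", "Address"],
)
```

The essential point is that a rule couples region information with the externally managed field labels, so the same structure drives both OCR and output generation.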
Embodiments may be implemented by one or more non-transitory computer-readable media and/or one or more computer-implemented methods.
BRIEF DESCRIPTION OF THE DRAWINGS
In the figures of the accompanying drawings, like reference numerals refer to similar elements. Embodiments of the disclosure have other advantages and features which will be more readily apparent from the following detailed description and the appended claims, when taken in conjunction with the examples in the accompanying drawings, in which:
FIG. 1 is a block diagram that depicts an arrangement for accessing cloud services using end-user devices.
FIG. 2 is a flow diagram that depicts an approach for uploading and registering Superbill data to a cloud service.
FIG. 3 is a flow diagram that depicts a process for creating a parsing rule.
FIG. 4A is a user interface screen that allows a user to login to a Cloud EMR (Electronic Medical Records) Application Manager.
FIG. 4B is a user interface screen that displays a parsing rule list for parsing Superbill data.
FIG. 4C is a user interface screen that allows a user to create a parsing rule.
FIG. 4D is a user interface screen that allows a user to select Superbill data to be uploaded to a Cloud EMR Application Manager.
FIG. 4E is a user interface screen that displays identification information of the selected Superbill data to be uploaded.
FIG. 4F is a user interface screen that displays a preview image of Superbill source data.
FIG. 4G is a user interface screen that allows a user to select a region of the Superbill.
FIG. 4H is a user interface screen that allows a user to select or deselect the extracted one or more field labels.
FIG. 4I is a user interface screen that displays the data in the data fields of the Superbill image that corresponds to the extracted one or more field labels selected by the user.
FIG. 4J is a user interface screen that allows a user to enter bibliographic information for the parsing rule.
FIG. 4K is a user interface screen that allows a user to preview the created parsing rule before submitting it.
FIG. 5 is a block diagram that illustrates an example computer system with which an embodiment may be implemented.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid unnecessarily obscuring the embodiments.
FIG. 1 is a block diagram that depicts an arrangement 100 for accessing cloud services using end-user devices. In the example depicted in FIG. 1, arrangement 100 includes end-user devices 110, a Cloud System 130, and third-party services 190. This approach is depicted in the context of preparing Superbill data for sending to Cloud EMR system 192 which is included in third-party services 190 and manages Superbill data.
End-user devices 110 include one or more input devices 112 and one or more client devices 114. According to one embodiment, an input device 112 is a device that captures a Superbill image, such as a scanner device, a smart device, etc. According to another embodiment, a client device 114 is a device that displays an operation screen, for example in a web browser, for creating, deleting, or editing a parsing rule for parsing a captured Superbill image. End-user devices 110 may include computer hardware, computer software, and any combination of computer hardware and computer software for capturing an image. This may include, for example, scanning components, one or more processors, one or more memories, one or more communications interfaces, such as wired and wireless interfaces, an operating system, one or more processes for supporting and performing the functionality provided by end-user devices 110, a display such as a touch screen display, etc. According to one embodiment, end-user devices 110 include one or more elements for communicating with other processes. In the example depicted in FIG. 1, a client device 114 may include a Web browser 120. An end-user device 110 may include a client application 116, or alternatively, a Web browser 118. The client application 116 is an application that is configured to communicate with Cloud System 130 and implements at least a portion of one or more application program interfaces (APIs) of Cloud System 130. For example, a client application 116 may implement Web browser functionality to communicate with Cloud System 130. Web browser 118 and Web browser 120 may be any type of Web browser for communicating with Cloud System 130 and allow end-user devices 110 to access functionality provided by Cloud System 130.
Cloud System 130 is a cloud environment that includes an Application System 140 and a Content Parsing Engine (CPE) 150. Cloud System 130 may be implemented on one or more computing elements, such as network elements, servers, etc., and embodiments are not limited to any particular implementation for Cloud System 130. Cloud System 130 may include fewer or additional elements than the elements depicted in FIG. 1, depending upon a particular implementation, and Cloud System 130 is not limited to any particular implementation. For example, Cloud System 130 may be implemented by one or more processes executing on one or more computing systems.
Cloud System 130 includes cloud applications 142. Cloud applications 142 may be managed by one or more application server processes. Cloud applications 142 may include a wide variety of applications for performing various functions, including as connectors to services provided by CPE 150, as described in more detail hereinafter. In the example depicted in FIG. 1, Cloud applications 142 include at least a Cloud EMR application 144. Cloud EMR application 144 provides a user interface to a user of end-user devices 110 and provides access to Cloud EMR application 144 in Cloud System 130. Cloud EMR system 192 may have specific requirements for registering Superbill data on Cloud EMR system 192 including, for example, formatting requirements, bibliographical information, etc. Bibliographical information to be registered on Cloud EMR system 192 has to be in accordance with items managed by Cloud EMR system 192.
Embodiments are not limited to this application and other applications may be provided, depending upon a particular implementation. One example application not depicted in FIG. 1 is an application for converting data for mobile applications, e.g., a PDF conversion application for mobile printing.
CPE 150 includes a Cloud EMR Application Manager 160, Content Parsing Engine (CPE) modules 170 and Content Parsing Engine (CPE) data 180. CPE 150 may be implemented on one or more computing elements, such as network elements, servers, etc., and embodiments are not limited to any particular implementation for CPE 150. CPE 150 may include fewer or additional elements than the elements depicted in FIG. 1, depending upon a particular implementation, and CPE 150 is not limited to any particular implementation. For example, CPE 150 may be implemented by one or more processes executing on one or more computing systems.
Cloud EMR Application Manager 160 manages processes including at least processing of parsing controlling, processing of requests to and from Application System 140 and third-party services 190, and performing various administrative tasks for CPE 150. Further, Cloud EMR Application Manager 160 manages creating, editing and deleting Parsing rule data 182.
CPE modules 170 are processes that each implement one or more functions. The functions may be any type of function and may vary depending upon a particular implementation. According to one embodiment, the functions include input functions, output functions, and process functions. In the example depicted in FIG. 1, CPE modules 170 include several functions, including Optical Character Recognition (OCR) 172, Parsing 174, Data Generating 176 and Data Sending 178. OCR 172 performs optical character recognition. Parsing 174 parses image data. Data Generating 176 generates output data for sending to third-party services 190. Data Sending 178 sends output data to third-party services 190. These example modules are provided for explanation purposes and embodiments are not limited to these example modules. CPE modules 170 may be used by other processes to perform the specified function. For example, an application external to Cloud system 130 may use OCR 172 to convert image data to text using optical character recognition, although this requires that the application implement the API of OCR 172.
Content Parsing Engine (CPE) data 180 includes data used to configure and manage CPE 150. In the example depicted in FIG. 1, CPE data 180 includes Parsing Rule data 182 and Cloud EMR Communication data 184. Parsing Rule data 182, for example, may include a rule for parsing image data, including at least information identifying an area of a Superbill from which information may be extracted so that OCR may be performed on the extracted information. Cloud EMR Communication data 184 may include information indicating where data is to be sent for Cloud EMR system 192 included in third-party services 190. This data may include, for example, a Uniform Resource Identifier (URI) of a Web API or a File Transfer Protocol (FTP) address. This data is used, for example, to access Cloud EMR system 192, as described in more detail hereinafter.
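As an illustration, Cloud EMR Communication data 184 might be represented along these lines; the URIs, field names, and selection logic are hypothetical assumptions, not part of the patent:

```python
# Hypothetical shape of Cloud EMR Communication data: it records where
# generated output data and captured image data are to be sent.
communication_data = {
    "web_api_uri": "https://emr.example.com/api/v1/superbills",
    "ftp_address": "ftp://emr.example.com/incoming/",
}

def endpoint_for(payload_type: str) -> str:
    """Return the destination for a payload: structured output data goes
    to the Web API, while image data may instead be sent over FTP."""
    if payload_type == "output":
        return communication_data["web_api_uri"]
    if payload_type == "image":
        return communication_data["ftp_address"]
    raise ValueError(f"unknown payload type: {payload_type}")
```

Keeping both addresses in one record reflects the option, described later, of sending output data and image data over different interfaces.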
FIG. 2 is a flow diagram 200 that depicts an approach for uploading and registering Superbill data captured by Input device 112 to Cloud EMR system 192. In step 202, Cloud EMR Application Manager 160 receives Superbill source data from Client device 114. For example, Cloud EMR Application Manager 160 provides a user interface to Web Browser 120 of Client device 114 via a network. The user interface receives an input of Superbill source data. This Superbill source data is sample data that will be used to create a parsing rule in step 204. The communication protocol between Cloud EMR Application Manager 160 and Web Browser 120 is, for example, Hypertext Transfer Protocol (HTTP) or Hypertext Transfer Protocol Secure (HTTPS). In step 204, Cloud EMR Application Manager 160 creates a parsing rule for parsing a Superbill image based on the Superbill source data which has been received from Client device 114. The created parsing rule may be stored as Parsing Rule Data 182 in CPE Data 180. The creation process of the parsing rule for parsing a Superbill image is described in more detail hereinafter.
After creating the parsing rule, in step 206, Cloud EMR application 144 receives image data from Input device 112. The image data sent from Input device 112 is Superbill image data captured by Input device 112 or another capturing device. The image data is actual Superbill image data, as opposed to the sample Superbill image data obtained in step 202 to create the parsing rule in step 204. Cloud EMR application 144 transfers the received image data to CPE 150 through a network. For example, the image data may be transferred via a Web Application Programming Interface (API) provided by CPE 150. In step 208, CPE 150 parses the image data transferred from Cloud EMR application 144 based on Parsing Rule data 182. In detail, OCR module 172 performs OCR on a region of the image data defined by Parsing Rule data 182. After OCR is performed, Parsing module 174 conducts a parsing process on the result of the OCR process in accordance with Parsing Rule data 182.
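The parsing of step 208 can be sketched as follows, assuming (purely for illustration) that the OCR result is plain text with one "Label: value" pair per line; the patent does not specify the OCR output format:

```python
def parse_ocr_text(ocr_text: str, field_labels: list[str]) -> dict[str, str]:
    """Match OCR'd lines of the form 'Label: value' against the field
    labels in the parsing rule and collect the extracted values."""
    results: dict[str, str] = {}
    for line in ocr_text.splitlines():
        label, sep, value = line.partition(":")
        if sep and label.strip() in field_labels:
            results[label.strip()] = value.strip()
    return results

parsed = parse_ocr_text(
    "Practice Id: 308\nName: Jane Doe\nNotes: n/a",
    field_labels=["Practice Id", "Name"],
)
# parsed == {"Practice Id": "308", "Name": "Jane Doe"}
```

Lines whose labels are not listed in the rule (here "Notes") are ignored, mirroring the user's label selection described later in FIG. 4H.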
In step 210, CPE 150 generates output data for sending to Cloud EMR System 192 based on results of the parsing process. In detail, Data Generating module 176 generates output data suitable for a communication interface provided by Cloud EMR System 192. Output data, for example, may be generated in Extensible Markup Language (XML). The communication interface may include, for example, a Web API of Cloud EMR System 192. In step 212, CPE 150 sends, to Cloud EMR System 192 through a network, the generated output data and the image data transferred from Cloud EMR application 144. The output data and the image data may be sent via the Web API of Cloud EMR System 192. Alternatively, the output data and the image data may be sent via different interfaces. For example, the output data may be sent via the Web API of Cloud EMR System 192 and the image data may be sent using the File Transfer Protocol (FTP).
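A minimal sketch of the XML output generation in step 210, using Python's standard library; the element and attribute names are hypothetical, since the patent does not define the XML schema:

```python
import xml.etree.ElementTree as ET

def build_output_xml(parsed_fields: dict[str, str]) -> bytes:
    """Wrap parsed Superbill fields in a simple XML document of the form
    <superbill><field label="...">value</field>...</superbill>."""
    root = ET.Element("superbill")
    for label, value in parsed_fields.items():
        field_el = ET.SubElement(root, "field", attrib={"label": label})
        field_el.text = value
    return ET.tostring(root, encoding="utf-8")

xml_bytes = build_output_xml({"Practice Id": "308", "Name": "Jane Doe"})
```

In a real deployment the resulting bytes would be posted to the Web API recorded in Cloud EMR Communication data 184, while the image data could travel over FTP as described above.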
FIG. 3 is a flow diagram 300 that depicts a process for creating a parsing rule for parsing a Superbill image by user operation on Web browser 120. The details of the creation process are described with reference to the user interface screens shown in FIG. 4A through FIG. 4K. In step 302, Cloud EMR Application Manager 160 sends Web content data including a preview image of Superbill source data to Web browser 120 in response to receiving the source data from Web browser 120 as described in step 202. FIG. 4A through FIG. 4F are user interface screens which receive an input of Superbill source data and display a preview image of the source data on Web Browser 120. FIG. 4A is a user interface screen 400 that allows a user to login to Cloud EMR Application Manager 160 by receiving an input of User ID and Password from a user of Client device 114. FIG. 4B is a user interface screen 402 that displays a parsing rule list for parsing Superbill data. For example, the user interface screen 402 is shown in response to a successful login of the user on the user interface screen 400. In this example, the user has not yet added any parsing rules, so no parsing rules are listed on the screen. The user of Client device 114 starts to create a parsing rule on the user interface screen 402. FIG. 4C is a user interface screen 404 that allows a user to create a parsing rule. For example, a four-step process is shown. The first step 405A is for the user to upload a source file, such as a PDF or other document upon which OCR can be performed. The second step 405B is for the user to add or modify one or more field labels in the parsing rule. The source file will be processed to detect the one or more field labels, but the user can also add or modify the one or more field labels. The third step 405C is for the user to add additional information, such as a parsing rule name or description. The fourth step 405D is for the user to finish, that is, to preview the parsing rule before submitting it.
FIG. 4D is a user interface screen 406 that allows a user to select Superbill data to be uploaded to Cloud EMR Application Manager 160. This is the first step 405A shown in FIG. 4C. For example, the user selects a PDF file of Superbill data. The Superbill data may serve as the basis for creating one or more parsing rules. FIG. 4E is a user interface screen 408 that displays identification information of the selected Superbill data to be uploaded. After receiving user input on the user interface screen 408 to select the identified Superbill data, for example selection of the NEXT button 409, the selected Superbill data is sent to Cloud EMR Application Manager 160. FIG. 4F is a user interface screen 410 that displays a preview image of Superbill source data.
In step 304, Cloud EMR Application Manager 160 receives region information which was selected on the displayed preview image on Web browser 120 by the user, for example, by using a computer mouse. FIG. 4G is a user interface screen 412 that allows a user to select a region of the Superbill. For example, a region 414 on the user interface screen 412 is a region which is selected by the user, for example by clicking on the region with a computer mouse. In this example, the user may also select where one or more data fields are located relative to the one or more field labels, such as “Name,” “Address,” etc. within region 414. In this example, a pop-up window shows the options top, right, bottom, and left for selection by the user. The user selected “right” in this example, meaning that the data fields are located to the right of the field labels in region 414. In step 306, Cloud EMR Application Manager 160 performs OCR on the selected region 414. In step 308, Cloud EMR Application Manager 160 extracts the one or more field labels corresponding to field labels managed by a bill data management system, for example, Cloud EMR System 192. The field labels may be prestored in CPE data 180, or alternatively obtained from Cloud EMR System 192 by access via a Web API of Cloud EMR System 192. FIG. 4H is a user interface screen 416 that allows a user to select or deselect the extracted one or more field labels. This is the second step 405B shown in FIG. 4C, where the user can add or modify data fields in the parsing rule. For example, the extracted one or more field labels may be displayed on a popup screen 418 on the user interface screen 416. In this example, extracted field labels from region 414 are listed, such as “Practice ID,” “Practice Details,” “Name,” “MRN,” “Address,” “Referral Source,” and “Comments.” In step 310, Cloud EMR Application Manager 160 receives user selection of the field labels to include in the parsing rule via the popup screen 418. In this example, the user may then “Select All,” “Unselect All,” or select any number of the field labels to be added to the parsing rule.
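The “right” relative-position option described above can be illustrated with a small sketch; the word coordinates, tuple layout, and vertical tolerance are hypothetical assumptions, not details from the patent:

```python
def value_right_of_label(words, label, max_dy=5):
    """Given OCR words as (text, x, y) tuples, return the text nearest
    to the right of the given label on (roughly) the same line,
    mimicking the 'right' relative-position option."""
    label_x, label_y = next((x, y) for text, x, y in words if text == label)
    candidates = [
        (x, text)
        for text, x, y in words
        if x > label_x and abs(y - label_y) <= max_dy
    ]
    # Nearest candidate by horizontal position, if any.
    return min(candidates)[1] if candidates else None

# Hypothetical OCR output for part of region 414.
words = [
    ("Name", 10, 100), ("Jane Doe", 80, 101),
    ("Address", 10, 130), ("12 Elm St", 80, 131),
]
```

Here `value_right_of_label(words, "Name")` would return `"Jane Doe"`: the candidate must lie to the right of the label and on roughly the same line, which excludes words belonging to other field labels.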
In step 312, Cloud EMR Application Manager 160 sends Web content data including results of the extraction. FIG. 4I is a user interface screen 420 that displays the data in the data fields of the Superbill image that corresponds to the extracted one or more field labels selected by the user. In detail, in this example, the extracted one or more field labels 421A selected by the user and the data in the data fields 421B of the Superbill image are displayed on the user interface screen 420 side-by-side. For example, in user interface screen 420, “Practice Id” is one of the one or more field labels, and “308” is the actual data in the data field corresponding to “Practice Id.” In step 314, Cloud EMR Application Manager 160 creates a parsing rule including the region information from which OCR will extract field label information. A parsing rule may further include bibliographic information, for example, a parsing rule name and a parsing rule description. This is the third step 405C of FIG. 4C, in which the user can provide such bibliographic information for the parsing rule. Cloud EMR Application Manager 160 stores the created parsing rule in CPE data 180 as Parsing rule data 182. FIG. 4J is a user interface screen 422 that allows a user to enter bibliographic information for the parsing rule. A parsing rule name and description may be input into the user interface screen 422. FIG. 4K is a user interface screen 424 that allows a user to preview the created parsing rule before submitting it. This is the fourth and last step 405D of FIG. 4C, in which the user can preview the parsing rule before submitting it to be saved. In this example, field labels and corresponding data fields of the parsing rule are shown side-by-side with the Superbill image for the user to preview.
Some advantages of these parsing rule procedures are that they automate processing of EMR records, reduce the amount of data entry and time needed by a user in processing EMR records, and allow a user to create a parsing rule while viewing a preview image of the Superbill.
According to one embodiment, the techniques described herein are implemented by at least one computing device. The techniques may be implemented in whole or in part using a combination of at least one server computer and/or other computing devices that are coupled using a network, such as a packet data network. The computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as at least one application-specific integrated circuit (ASIC) or field programmable gate array (FPGA) that is persistently programmed to perform the techniques, or may include at least one general purpose hardware processor programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the described techniques. The computing devices may be server computers, workstations, personal computers, portable computer systems, handheld devices, mobile computing devices, wearable devices, body mounted or implantable devices, smartphones, smart appliances, internetworking devices, autonomous or semi-autonomous devices such as robots or unmanned ground or aerial vehicles, any other electronic device that incorporates hard-wired and/or program logic to implement the described techniques, one or more virtual computing machines or instances in a data center, and/or a network of server computers and/or personal computers.
FIG. 5 is a block diagram that illustrates an example computer system with which an embodiment may be implemented. In the example of FIG. 5, a computer system 500 and instructions for implementing the disclosed technologies in hardware, software, or a combination of hardware and software, are represented schematically, for example as boxes and circles, at the same level of detail that is commonly used by persons of ordinary skill in the art to which this disclosure pertains for communicating about computer architecture and computer systems implementations.
Computer system 500 includes an input/output (I/O) subsystem 502 which may include a bus and/or other communication mechanism(s) for communicating information and/or instructions between the components of the computer system 500 over electronic signal paths. The I/O subsystem 502 may include an I/O controller, a memory controller and at least one I/O port. The electronic signal paths are represented schematically in the drawings, for example as lines, unidirectional arrows, or bidirectional arrows.
At least one hardware processor 504 is coupled to I/O subsystem 502 for processing information and instructions. Hardware processor 504 may include, for example, a general-purpose microprocessor or microcontroller and/or a special-purpose microprocessor such as an embedded system or a graphics processing unit (GPU) or a digital signal processor or ARM processor. Processor 504 may comprise an integrated arithmetic logic unit (ALU) or may be coupled to a separate ALU.
Computer system 500 includes one or more units of memory 506, such as a main memory, which is coupled to I/O subsystem 502 for electronically digitally storing data and instructions to be executed by processor 504. Memory 506 may include volatile memory such as various forms of random-access memory (RAM) or other dynamic storage device. Memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Such instructions, when stored in non-transitory computer-readable storage media accessible to processor 504, can render computer system 500 into a special-purpose machine that is customized to perform the operations specified in the instructions.
Computer system 500 further includes non-volatile memory such as read only memory (ROM) 508 or other static storage device coupled to I/O subsystem 502 for storing information and instructions for processor 504. The ROM 508 may include various forms of programmable ROM (PROM) such as erasable PROM (EPROM) or electrically erasable PROM (EEPROM). A unit of persistent storage 510 may include various forms of non-volatile RAM (NVRAM), such as FLASH memory, or solid-state storage, magnetic disk or optical disk such as CD-ROM or DVD-ROM, and may be coupled to I/O subsystem 502 for storing information and instructions. Storage 510 is an example of a non-transitory computer-readable medium that may be used to store instructions and data which when executed by the processor 504 cause performing computer-implemented methods to execute the techniques herein.
The instructions in memory 506, ROM 508 or storage 510 may comprise one or more sets of instructions that are organized as modules, methods, objects, functions, routines, or calls. The instructions may be organized as one or more computer programs, operating system services, or application programs including mobile apps. The instructions may comprise an operating system and/or system software; one or more libraries to support multimedia, programming or other functions; data protocol instructions or stacks to implement TCP/IP, HTTP or other communication protocols; file format processing instructions to parse or render files coded using HTML, XML, JSON, JPEG, MPEG or PNG; user interface instructions to render or interpret commands for a graphical user interface (GUI), command-line interface or text user interface; application software such as an office suite, internet access applications, design and manufacturing applications, graphics applications, audio applications, software engineering applications, educational applications, games or miscellaneous applications. The instructions may implement a web server, web application server or web client. The instructions may be organized as a presentation layer, application layer and data storage layer such as a relational database system using structured query language (SQL) or no SQL, an object store, a graph database, a flat file system or other data storage.
Computer system 500 may be coupled via I/O subsystem 502 to at least one output device 512. In one embodiment, output device 512 is a digital computer display. Examples of a display that may be used in various embodiments include a touch screen display or a light-emitting diode (LED) display or a liquid crystal display (LCD) or an e-paper display. Computer system 500 may include other type(s) of output devices 512, alternatively or in addition to a display device. Examples of other output devices 512 include printers, ticket printers, plotters, projectors, sound cards or video cards, speakers, buzzers or piezoelectric devices or other audible devices, lamps or LED or LCD indicators, haptic devices, actuators or servos.
At least one input device 514 is coupled to I/O subsystem 502 for communicating signals, data, command selections or gestures to processor 504. Examples of input devices 514 include touch screens, microphones, still and video digital cameras, alphanumeric and other keys, keypads, keyboards, graphics tablets, image scanners, joysticks, clocks, switches, buttons, dials, slides, and/or various types of sensors such as force sensors, motion sensors, heat sensors, accelerometers, gyroscopes, and inertial measurement unit (IMU) sensors and/or various types of transceivers such as wireless, such as cellular or Wi-Fi, radio frequency (RF) or infrared (IR) transceivers and Global Positioning System (GPS) transceivers.
Another type of input device is a control device 516, which may perform cursor control or other automated control functions such as navigation in a graphical interface on a display screen, alternatively or in addition to input functions. Control device 516 may be a touchpad, a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512. The input device may have at least two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Another type of input device is a wired, wireless, or optical control device such as a joystick, wand, console, steering wheel, pedal, gearshift mechanism or other type of control device. An input device 514 may include a combination of multiple different input devices, such as a video camera and a depth sensor.
In another embodiment, computer system 500 may comprise an internet of things (IoT) device in which one or more of the output device 512, input device 514, and control device 516 are omitted. Or, in such an embodiment, the input device 514 may comprise one or more cameras, motion detectors, thermometers, microphones, seismic detectors, other sensors or detectors, measurement devices or encoders and the output device 512 may comprise a special-purpose display such as a single-line LED or LCD display, one or more indicators, a display panel, a meter, a valve, a solenoid, an actuator or a servo.
When computer system 500 is a mobile computing device, input device 514 may comprise a global positioning system (GPS) receiver coupled to a GPS module that is capable of triangulating to a plurality of GPS satellites, determining and generating geo-location or position data such as latitude-longitude values for a geophysical location of the computer system 500. Output device 512 may include hardware, software, firmware and interfaces for generating position reporting packets, notifications, pulse or heartbeat signals, or other recurring data transmissions that specify a position of the computer system 500, alone or in combination with other application-specific data, directed toward host 524 or server 530.
Computer system 500 may implement the techniques described herein using customized hard-wired logic, at least one ASIC or FPGA, firmware and/or program instructions or logic which when loaded and used or executed in combination with the computer system causes or programs the computer system to operate as a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 500 in response to processor 504 executing at least one sequence of at least one instruction contained in main memory 506. Such instructions may be read into main memory 506 from another storage medium, such as storage 510. Execution of the sequences of instructions contained in main memory 506 causes processor 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage 510. Volatile media includes dynamic memory, such as memory 506. Common forms of storage media include, for example, a hard disk, solid state drive, flash drive, magnetic data storage medium, any optical or physical data storage medium, memory chip, or the like.
Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus of I/O subsystem 502. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Various forms of media may be involved in carrying at least one sequence of at least one instruction to processor 504 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a communication link such as a fiber optic or coaxial cable or telephone line using a modem. A modem or router local to computer system 500 can receive the data on the communication link and convert the data to a format that can be read by computer system 500. For instance, a receiver such as a radio frequency antenna or an infrared detector can receive the data carried in a wireless or optical signal and appropriate circuitry can provide the data to I/O subsystem 502, for example by placing the data on a bus. I/O subsystem 502 carries the data to memory 506, from which processor 504 retrieves and executes the instructions. The instructions received by memory 506 may optionally be stored on storage 510 either before or after execution by processor 504.
Computer system 500 also includes a communication interface 518 coupled to bus 502. Communication interface 518 provides a two-way data communication coupling to network link(s) 520 that are directly or indirectly connected to at least one communication network, such as a network 522 or a public or private cloud on the Internet. For example, communication interface 518 may be an Ethernet networking interface, integrated-services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of communications line, for example an Ethernet cable or a metal cable of any kind or a fiber-optic line or a telephone line. Network 522 broadly represents a local area network (LAN), wide-area network (WAN), campus network, internetwork or any combination thereof. Communication interface 518 may comprise a LAN card to provide a data communication connection to a compatible LAN, or a cellular radiotelephone interface that is wired to send or receive cellular data according to cellular radiotelephone wireless networking standards, or a satellite radio interface that is wired to send or receive digital data according to satellite wireless networking standards. In any such implementation, communication interface 518 sends and receives electrical, electromagnetic or optical signals over signal paths that carry digital data streams representing various types of information.
Network link 520 typically provides electrical, electromagnetic, or optical data communication directly or through at least one network to other data devices, using, for example, satellite, cellular, Wi-Fi, or BLUETOOTH technology. For example, network link 520 may provide a connection through a network 522 to a host computer 524.
Furthermore, network link 520 may provide a connection through network 522 or to other computing devices via internetworking devices and/or computers that are operated by an Internet Service Provider (ISP) 526. ISP 526 provides data communication services through a world-wide packet data communication network represented as internet 528. A server computer 530 may be coupled to internet 528. Server 530 broadly represents any computer, data center, virtual machine or virtual computing instance with or without a hypervisor, or computer executing a containerized program system such as DOCKER or KUBERNETES. Server 530 may represent an electronic digital service that is implemented using more than one computer or instance and that is accessed and used by transmitting web services requests, uniform resource locator (URL) strings with parameters in HTTP payloads, API calls, app services calls, or other service calls. Computer system 500 and server 530 may form elements of a distributed computing system that includes other computers, a processing cluster, server farm or other organization of computers that cooperate to perform tasks or execute applications or services. Server 530 may comprise one or more sets of instructions that are organized as modules, methods, objects, functions, routines, or calls. The instructions may be organized as one or more computer programs, operating system services, or application programs including mobile apps. 
The instructions may comprise an operating system and/or system software; one or more libraries to support multimedia, programming or other functions; data protocol instructions or stacks to implement TCP/IP, HTTP or other communication protocols; file format processing instructions to parse or render files coded using HTML, XML, JPEG, MPEG or PNG; user interface instructions to render or interpret commands for a graphical user interface (GUI), command-line interface or text user interface; application software such as an office suite, internet access applications, design and manufacturing applications, graphics applications, audio applications, software engineering applications, educational applications, games or miscellaneous applications. Server 530 may comprise a web application server that hosts a presentation layer, application layer and data storage layer such as a relational database system using structured query language (SQL) or no SQL, an object store, a graph database, a flat file system or other data storage.
Computer system 500 can send messages and receive data and instructions, including program code, through the network(s), network link 520 and communication interface 518. In the Internet example, a server 530 might transmit a requested code for an application program through Internet 528, ISP 526, local network 522 and communication interface 518. The received code may be executed by processor 504 as it is received, and/or stored in storage 510, or other non-volatile storage for later execution.
The execution of instructions as described in this section may implement a process, that is, an instance of a computer program that is being executed, consisting of program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently. In this context, a computer program is a passive collection of instructions, while a process may be the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed. Multitasking may be implemented to allow multiple processes to share processor 504. While each processor 504 or core of the processor executes a single task at a time, computer system 500 may be programmed to implement multitasking to allow each processor to switch between tasks that are being executed without having to wait for each task to finish. In an embodiment, switches may be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts. Time-sharing may be implemented to allow fast response for interactive user applications by rapidly performing context switches to provide the appearance of concurrent execution of multiple processes. In an embodiment, for security and reliability, an operating system may prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
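As recited in the claims that follow, the generated output data may associate the external system's field labels with the extracted values, be formatted as XML, and be sent via a Web API while the Superbill image data travels separately via FTP. The sketch below shows only the output-generation step; the endpoints in the trailing comments are hypothetical, and no real EMR API is implied.

```python
import xml.etree.ElementTree as ET

def build_output_xml(fields):
    """Serialize extracted {field_label: value} pairs as XML output data,
    associating each external-system field label with its extracted value.
    The element and attribute names here are illustrative assumptions."""
    root = ET.Element("superbill")
    for label, value in fields.items():
        field = ET.SubElement(root, "field", label=label)
        field.text = value
    return ET.tostring(root, encoding="unicode")

xml_out = build_output_xml({"Practice Id": "308"})
# -> '<superbill><field label="Practice Id">308</field></superbill>'

# Dispatch over different interfaces, per claim 7 (hypothetical endpoints):
#   requests.post("https://emr.example.com/api/superbills", data=xml_out)  # Web API
#   ftplib.FTP("ftp.example.com").storbinary("STOR scan.jpg", image_file)  # FTP
```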

Claims (20)

What is claimed is:
1. An apparatus comprising:
one or more processors; and
one or more memories communicatively coupled to the one or more processors, the one or more memories storing instructions which, when processed by the one or more processors, cause:
receiving source data of a sample Superbill via one or more computer networks from a client device;
creating parsing rule data for parsing a Superbill which is based on the received source data of the sample Superbill by a user operation on the client device, the parsing rule data including at least region information where data in one or more data fields of the Superbill should be extracted and one or more field labels to be associated with the extracted data in the one or more data fields, the one or more field labels being managed by an external system which manages Superbill data;
receiving, via the one or more computer networks from an input device, actual captured Superbill image data;
parsing the received Superbill image data based on the created parsing rule data;
generating output data based on results of parsing the received Superbill image data; and
sending, via the one or more computer networks to the external system, the generated output data and the received Superbill image data.
2. The apparatus as recited in claim 1, wherein processing of the instructions further causes:
sending a web content including a preview image of the source data of the sample Superbill via the one or more computer networks to the client device; and
wherein the user operation on the client device includes a region selecting operation on the preview image displayed on the client device for the region information.
3. The apparatus as recited in claim 1, wherein the parsing parses the received image data by performing Optical Character Recognition (OCR) to the received image data based on the region information included in the parsing rule and extracting the data in the one or more data fields from a result of OCR.
4. The apparatus as recited in claim 3, wherein the generating generates the output data including an association between the one or more field labels managed by the external system and the data in the one or more data fields extracted from the result of OCR.
5. The apparatus as recited in claim 1, wherein the generating generates the output data by Extensible Markup Language (XML) format.
6. The apparatus as recited in claim 1, wherein the sending sends the output data and the received image data respectively via different interfaces.
7. The apparatus as recited in claim 6, wherein the different interfaces include a Web Application Program Interface (API) and File Transfer Protocol (FTP), and the sending sends the output data via the Web API and the received image data via the FTP.
8. A method comprising:
receiving source data of a sample Superbill via one or more computer networks from a client device;
creating parsing rule data for parsing a Superbill which is based on the received source data of the sample Superbill by a user operation on the client device, the parsing rule data including at least region information where data in one or more data fields of the Superbill should be extracted and one or more field labels to be associated with the extracted data in the one or more data fields, the one or more field labels being managed by an external system which manages Superbill data;
receiving, via the one or more computer networks from an input device, actual captured Superbill image data;
parsing the received Superbill image data based on the created parsing rule data;
generating output data based on results of parsing the received Superbill image data; and
sending, via the one or more computer networks to the external system, the generated output data and the received Superbill image data.
9. The method as recited in claim 8, further comprising:
sending a web content including a preview image of the source data of the sample Superbill via the one or more computer networks to the client device, wherein the user operation on the client device includes a region selecting operation on the preview image displayed on the client device for the region information.
10. The method as recited in claim 8, wherein the parsing parses the received image data by performing Optical Character Recognition (OCR) to the received image data based on the region information included in the parsing rule and extracting the data in the one or more data fields from a result of OCR.
11. The method as recited in claim 10, wherein the generating generates the output data including an association between the one or more field labels managed by the external system and the data in the one or more data fields extracted from the result of OCR.
12. The method as recited in claim 8, wherein the generating generates the output data by Extensible Markup Language (XML) format.
13. The method as recited in claim 8, wherein the sending sends the output data and the received image data respectively via different interfaces.
14. The method as recited in claim 13, wherein the different interfaces include a Web Application Program Interface (API) and File Transfer Protocol (FTP), and the sending sends the output data via the Web API and the received image data via the FTP.
15. One or more non-transitory computer-readable media providing an improvement in an external system which manages Superbill data, the one or more non-transitory computer-readable media storing instructions which, when processed by one or more processors, cause:
a Web application executing on an apparatus to perform:
receiving source data of a sample Superbill via one or more computer networks from a client device;
creating parsing rule data for parsing a Superbill which is based on the received source data of the sample Superbill by a user operation on the client device, the parsing rule data including at least region information where data in one or more data fields of the Superbill should be extracted and one or more field labels to be associated with the extracted data in the one or more data fields, the one or more field labels being managed by an external system which manages Superbill data;
receiving, via the one or more computer networks from an input device, actual captured Superbill image data;
parsing the received Superbill image data based on the created parsing rule data;
generating output data based on results of parsing the received Superbill image data; and
sending, via the one or more computer networks to the external system, the generated output data and the received Superbill image data.
16. The one or more non-transitory computer-readable media as recited in claim 15, further comprising additional instructions which, when processed by the one or more processors, cause the Web application to perform:
sending a web content including a preview image of the source data of the sample Superbill via the one or more computer networks to the client device; and
wherein the user operation on the client device includes a region selecting operation on the preview image displayed on the client device for the region information.
17. The one or more non-transitory computer-readable media as recited in claim 15, wherein the parsing parses the received image data by performing Optical Character Recognition (OCR) to the received image data based on the region information included in the parsing rule and extracting the data in the one or more data fields from a result of OCR.
18. The one or more non-transitory computer-readable media as recited in claim 17, wherein the generating generates the output data including an association between the one or more field labels managed by the external system and the data in the one or more data fields extracted from the result of OCR.
19. The one or more non-transitory computer-readable media as recited in claim 15, wherein the generating generates the output data by Extensible Markup Language (XML) format.
20. The one or more non-transitory computer-readable media as recited in claim 15, wherein the sending sends the output data and the received image data respectively via different interfaces.
US16/355,629 2019-03-15 2019-03-15 Approach for cloud EMR communication via a content parsing engine Active US10735615B1 (en)


Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/355,629 US10735615B1 (en) 2019-03-15 2019-03-15 Approach for cloud EMR communication via a content parsing engine
JP2020043579A JP6849121B2 (en) 2019-03-15 2020-03-13 Approach for Cloud EMR communication by content analysis engine

Publications (1)

Publication Number Publication Date
US10735615B1 true US10735615B1 (en) 2020-08-04

Family

ID=71838975


Citations (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5390259A (en) 1991-11-19 1995-02-14 Xerox Corporation Methods and apparatus for selecting semantically significant images in a document image without decoding image content
US5832100A (en) 1991-08-30 1998-11-03 Trw Inc. Method and apparatus for converting documents between paper medium and electronic media using a user profile
US20010016068A1 (en) 1996-12-02 2001-08-23 Kazuki Shibata Electronic document generating apparatus, electronic document generating method, and program thereof
US20050062991A1 (en) 2003-08-08 2005-03-24 Takezo Fujishige Image processing apparatus, and computer product
US20070294614A1 (en) 2006-06-15 2007-12-20 Thierry Jacquin Visualizing document annotations in the context of the source document
US20080115057A1 (en) 2006-11-15 2008-05-15 Ebay Inc. High precision data extraction
US7840891B1 (en) 2006-10-25 2010-11-23 Intuit Inc. Method and system for content extraction from forms
US20110078098A1 (en) 2009-09-30 2011-03-31 Lapir Gennady Method and system for extraction
US20110249286A1 (en) 2010-04-09 2011-10-13 Actuate Corporation Document data access
US20110270760A1 (en) 2010-04-30 2011-11-03 Tobsc Inc. Methods and apparatus for a financial document clearinghouse and secure delivery network
US20120303636A1 (en) 2009-12-14 2012-11-29 Ping Luo System and Method for Web Content Extraction
US20130036347A1 (en) 2011-08-01 2013-02-07 Intuit Inc. Interactive technique for collecting information
US20130194613A1 (en) 2012-01-31 2013-08-01 Bruce A. Link Processing images from multiple scanners
US20140229312A1 (en) 2013-02-11 2014-08-14 Ricoh Company, Ltd. Auction item listing generation
US20140241631A1 (en) 2013-02-28 2014-08-28 Intuit Inc. Systems and methods for tax data capture and use
US20140268250A1 (en) 2013-03-15 2014-09-18 Mitek Systems, Inc. Systems and methods for receipt-based mobile image capture
US20150146984A1 (en) 2013-11-22 2015-05-28 Parchment System and method for identification and extraction of data
US20150256712A1 (en) 2014-03-04 2015-09-10 Xerox Corporation Methods and devices for form-independent registration of filled-out content
US20160216851A1 (en) 2015-01-27 2016-07-28 Abbyy Development Llc Image segmentation for data verification
US20160274745A1 (en) 2015-03-20 2016-09-22 Ryoichi Baba Information processing apparatus, information processing system, and method
US20160283444A1 (en) 2015-03-27 2016-09-29 Konica Minolta Laboratory U.S.A., Inc. Human input to relate separate scanned objects
US20170070623A1 (en) 2015-09-03 2017-03-09 Konica Minolta, Inc. Document generation system, document server, document generation method, and computer program
US20170372439A1 (en) 2016-06-23 2017-12-28 Liberty Pipeline Services, LLC Systems and methods for generating structured data based on scanned documents
US20180025222A1 (en) 2016-07-25 2018-01-25 Intuit Inc. Optical character recognition (ocr) accuracy by combining results across video frames
US20180107755A1 (en) 2015-02-10 2018-04-19 Researchgate Gmbh Online publication system and method
US9984471B2 (en) 2016-07-26 2018-05-29 Intuit Inc. Label and field identification without optical character recognition (OCR)
US10114906B1 (en) 2015-07-31 2018-10-30 Intuit Inc. Modeling and extracting elements in semi-structured documents
US20180330202A1 (en) 2015-08-27 2018-11-15 Longsand Limited Identifying augmented features based on a bayesian analysis of a text document
US20190050639A1 (en) 2017-08-09 2019-02-14 Open Text Sa Ulc Systems and methods for generating and using semantic images in deep learning for classification and data extraction
US20190095709A1 (en) 2017-09-28 2019-03-28 Kyocera Document Solutions Inc. Image forming apparatus
US10354134B1 (en) 2017-08-28 2019-07-16 Intuit, Inc. Feature classification with spatial analysis
US20190251192A1 (en) 2018-02-12 2019-08-15 Wipro Limited Method and a system for recognition of data in one or more images

US20170372439A1 (en) 2016-06-23 2017-12-28 Liberty Pipeline Services, LLC Systems and methods for generating structured data based on scanned documents
US20180025222A1 (en) 2016-07-25 2018-01-25 Intuit Inc. Optical character recognition (ocr) accuracy by combining results across video frames
US9984471B2 (en) 2016-07-26 2018-05-29 Intuit Inc. Label and field identification without optical character recognition (OCR)
US20190050639A1 (en) 2017-08-09 2019-02-14 Open Text Sa Ulc Systems and methods for generating and using semantic images in deep learning for classification and data extraction
US10354134B1 (en) 2017-08-28 2019-07-16 Intuit, Inc. Feature classification with spatial analysis
US20190095709A1 (en) 2017-09-28 2019-03-28 Kyocera Document Solutions Inc. Image forming apparatus
US20190251192A1 (en) 2018-02-12 2019-08-15 Wipro Limited Method and a system for recognition of data in one or more images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nuggehalli, U.S. Appl. No. 16/355,630, filed Mar. 15, 2019, Final Office Action dated Apr. 15, 2020.
Nuggehalli, U.S. Appl. No. 16/355,630, filed Mar. 15, 2019, Office Action dated Dec. 16, 2019.

Similar Documents

Publication Publication Date Title
US9851953B2 (en) Cloud based editor for generation of interpreted artifacts for mobile runtime
US10073825B2 (en) Model-driven tooltips in excel
US10402038B2 (en) Stack handling using multiple primary user interfaces
US10127206B2 (en) Dynamic column groups in excel
US10854013B2 (en) Systems and methods for presenting building information
US9729542B2 (en) Compartmentalizing application distribution for disparate electronic devices
US20180239747A1 (en) Mobile reports
US9887884B2 (en) Cloud services platform
US10038698B2 (en) External platform extensions in a multi-tenant environment
KR102196894B1 (en) Infrastructure for synchronization of mobile device with mobile cloud service
EP2875425B1 (en) Providing access to a remote application via a web client
US9948700B2 (en) ADFDI support for custom attribute properties
US10613916B2 (en) Enabling a web application to call at least one native function of a mobile device
US9894115B2 (en) Collaborative data editing and processing system
EP3244301A1 (en) User interface application and digital assistant
US9729635B2 (en) Transferring information among devices using sensors
US10868866B2 (en) Cloud storage methods and systems
JP2017529793A (en) Proxy server in the computer subnetwork
US8761811B2 (en) Augmented reality for maintenance management, asset management, or real estate management
US10540661B2 (en) Integrated service support tool across multiple applications
EP3213199B1 (en) Json stylesheet language transformation
US10038839B2 (en) Assisted text input for computing devices
US20150254074A1 (en) Method and system for platform-independent application development
CN105264492B (en) The automatic discovery of system action
US10505982B2 (en) Managing security agents in a distributed environment

Legal Events

Date Code Title Description
FEPP Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STCF Information on status: patent grant
Free format text: PATENTED CASE