CA3002447C - System and method for patient health record identifier scanner - Google Patents

System and method for patient health record identifier scanner

Info

Publication number
CA3002447C
CA3002447C (application CA3002447A)
Authority
CA
Canada
Prior art keywords
ocr
patient health
font
mobile device
subsystem
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CA3002447A
Other languages
French (fr)
Other versions
CA3002447A1 (en)
Inventor
Jaime Wong Chujoy
Paul Shortt
Current Assignee
Royal Bank of Canada
Original Assignee
Royal Bank of Canada
Priority date
Filing date
Publication date
Application filed by Royal Bank of Canada filed Critical Royal Bank of Canada
Priority to CA3002447A
Publication of CA3002447A1
Application granted
Publication of CA3002447C
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/146 Aligning or centring of the image pick-up or image-field
    • G06V30/1463 Orientation detection or correction, e.g. rotation of multiples of 90 degrees
    • G06V30/16 Image preprocessing
    • G06V30/164 Noise filtering
    • G06V30/166 Normalisation of pattern dimensions
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/412 Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G16H15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Artificial Intelligence (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

What is disclosed is a system for a user with a mobile device to scan a patient health record identifier. The mobile device is coupled via a network to a health record database. The system comprises a patient health application installed on the mobile device and configured to select a font, produce a region of interest detection window on the mobile device display, capture an image of a patient health record identifier, perform one or more pre-processing operations on the captured image to produce a pre-processed image, perform one or more optical character recognition (OCR) operations on the pre-processed image using the selected font to extract said patient health record identifier, perform one or more post-processing operations on the extracted health record identifier to verify said extracted patient health record identifier, and retrieve patient information from the health record database based on the verified and extracted patient health record identifier.

Description

SYSTEM AND METHOD FOR PATIENT HEALTH RECORD IDENTIFIER SCANNER
FIELD OF THE INVENTION
[0001] The present disclosure relates to optical character recognition for electronic medical records.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] The foregoing and other advantages of the disclosure will become apparent upon reading the following detailed description and upon reference to the drawings.
[0003] FIG. 1 is an illustration of a typical setup for scanning a patient health record identifier from a source using a mobile device.
[0004] FIG. 2 is an illustration of an example embodiment of a mobile device.
[0005] FIG. 3 is a detailed illustration of a patient health application.
[0006] FIGS. 4 and 4B are illustrations of a process to extract patient health record identifiers.
[0007] FIG. 4C is an illustration of a screen to achieve some of the steps of the process detailed in FIG. 4.
[0008] FIG. 4D illustrates a patient health record identifier surrounded by other data.
[0009] FIG. 4E illustrates use of a region of interest detection window to capture an image of a patient health record identifier.
[0010] FIG. 4F illustrates the presentation of a prompt for visual confirmation to a user.
[0011] While the present disclosure is susceptible to various modifications and alternative forms, specific embodiments or implementations have been shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the disclosure is not intended to be limited to the particular forms disclosed.
Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of an invention as defined by the appended claims.
BACKGROUND
[0012] Implementing optical character recognition (OCR) for scanning patient health record identifiers for electronic health records (EHR) presents challenges not typically seen in other applications for OCR. Firstly, a high level of accuracy is necessary for OCR systems scanning for EHR purposes to be feasibly deployed; otherwise the risks of, among others, misdiagnosis of patients or incorrect billing are too high.
[0013] Secondly, there are large variations in the fonts used in the data sources which are scanned via OCR, as these data sources originate from different facilities such as hospitals, clinics and testing centres. This makes it more challenging to achieve the high accuracy necessary for feasible deployment of OCR scanning systems for EHR scanning purposes.
[0014] Mobile networked devices are increasingly being used in the medical field. Using mobile networked devices allows users in the medical field such as physicians, nurses, emergency medical technicians, medical administrative staff and other medical personnel to scan patient health record identifier data using OCR and, for example, access EHR without needing to go to a fixed workstation.
[0015] Therefore there is a need for highly accurate OCR patient health record identifier scanning systems for EHR which can be implemented on mobile devices, and which can handle a large variety of fonts so as to accommodate the diverse requirements of various medical facilities.
[0016] Training of OCR engines for scanning patient health record identifiers has been proposed as a way to reduce errors and improve accuracy for mobile OCR systems in the medical field. However, when dealing with a wide variety of fonts and printers, training on its own does not yield sufficiently accurate results for feasible OCR deployment in the medical billing field. As an example, in "Digits Recognition on Medical Device" by C. Liu, Master's thesis, University of Western Ontario, April 2016, training was performed on the Tesseract OCR engine using a variety of seven-segment fonts used in devices such as glucometers. In addition, "region of interest" detection and identification is used to improve the accuracy of the Tesseract OCR engine. In region of interest identification and detection, regions containing data of interest are identified and the data within these regions are captured and processed using OCR. Despite these enhancements, an insufficient level of accuracy was achieved for feasible deployment of OCR for medical records in Liu.
[0017] Furthermore, in some cases an OCR engine trained with a wide variety of fonts was found to be less accurate than an OCR engine trained with a smaller number of fonts. Given the variety of fonts likely to be encountered while performing OCR scanning, this means that training on its own is likely to be insufficient to yield highly accurate results for feasible mobile deployment for EHR purposes.
[0018] Previous works of prior art have attempted to address these issues in other fields for mobile devices. For example, in US Patent 9,251,431 to Doepke et al., filed on May 30, 2014, a mobile device with OCR functionality is used to capture credit card information from a credit card. In US Patent 9,251,431, region of interest detection and identification was used to improve OCR accuracy. However, the techniques described in US Patent 9,251,431 are used mainly to identify credit card numbers. Credit card numbers use a standard font as described in, for example, "Part 1: Embossing" of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 7811 standard on "Identification cards - Recording technique". Furthermore, US Patent 9,251,431 describes only the use of training to improve accuracy. As a consequence, the techniques described in US Patent 9,251,431 are likely to be insufficiently accurate when dealing with the wide variety of fonts seen in the data sources encountered in the medical field. Other works of prior art, such as:
- US Patent 7,779,032 to Garfinkel et al., filed on September 6, 2006; and
- US Patent 8,561,185 to Muthusrinivasan et al., filed on May 17, 2011,
are also targeted towards credit cards and suffer from the same issue as US Patent 9,251,431.
DETAILED DESCRIPTION
[0019] The foregoing and additional aspects and embodiments of the present disclosure will be apparent to those of ordinary skill in the art in view of the detailed description of various embodiments and/or aspects, which is made with reference to the drawings, a brief description of which is provided next.
[0020] To address the challenges detailed above, the following details a system and method for a system deployed on mobile devices, and which uses optical character recognition (OCR) to scan patient health record identifiers for electronic health record (EHR) purposes with a high level of accuracy, when the sources employ a wide variety of fonts.
[0021] FIG. 1 illustrates an embodiment of the system and method that is the subject of this specification. User 105 has associated mobile device 101. Mobile device 101 is, for example, a smartphone, tablet, hand-held scanner or any appropriate mobile computing device.
[0022] An embodiment of mobile device 101 is shown in FIG. 2. Processor 207 performs processing functions and operations necessary for the operation of mobile device 101, using data and programs stored in mobile storage 201. An example of such a program is patient health application 202 which will be discussed in more detail below.
[0023] Mobile storage 201 is used to store programs and data necessary for the operation of mobile device 101. Mobile storage 201 is implemented using storage technologies and techniques for mobile devices as known to those of skill in the art.
[0024] Display 205 performs the function of displaying information for user 105. Display 205 is implemented using display techniques and technologies for mobile devices as known to those of skill in the art.
[0025] Input devices 203 allow user 105 to enter information. This includes, for example, devices such as a touch screen, mouse, keypad, keyboard, microphone, camera, audio or voice capture, video camera and so on. In some embodiments, display 205 is a touchscreen which means it is also part of input devices 203.
[0026] Communications subsystem 204 allows mobile device 101 to couple to devices and networks external to mobile device 101 such as network 102 so as to communicate. This includes, for example, communications via BLUETOOTH, Wi-Fi, Near Field Communications (NFC), Radio Frequency Identification (RFID), 3G, Long Term Evolution (LTE), Universal Serial Bus (USB) and other protocols known to those of skill in the art.
[0027] Image capture subsystem 206 is, for example, a camera. Image capture subsystem 206 allows a user to capture, for example, images and videos using display 205. In some embodiments, image capture subsystem 206 comprises one or more commands to allow users to change image capture parameters using input devices 203 and display 205. This includes, for example:
- Turning a flash on and off,
- Turning on a light to illuminate an area before capturing the image of that area,
- Allowing user 105 to zoom in and zoom out,
- Changing the resolution of captured images, and
- Changing between capturing still images and videos.
[0028] As shown in FIG. 2, power unit 208 is coupled to the other components of mobile device 101 so as to supply power to these components. In some embodiments, power unit 208 comprises an alternating current (AC) adapter for connection to a mains supply. In some embodiments, power unit 208 comprises one or more batteries. In some embodiments, the one or more batteries comprise a rechargeable battery. In embodiments where power unit 208 is a rechargeable battery, power unit 208 is charged using:
- wired charging techniques known to those of skill in the art; or
- wireless charging techniques known to those of skill in the art, such as the Qi protocol.
[0029] In further embodiments, power unit 208 comprises one or more batteries and an AC adapter. Then, if power unit 208 is not connected to a mains supply, mobile device 101 is powered by the one or more batteries. However, if power unit 208 is connected to a mains supply then the mobile device 101 stops being powered by the one or more batteries.
In some of the embodiments where the one or more batteries comprises a rechargeable battery, when mobile device 101 is connected to a mains supply, the rechargeable battery is then charged.
[0030] As shown in FIG. 2, the components of mobile device 101 are coupled to each other via interconnection 209. Interconnection 209 is implemented using one or more suitable techniques known to those of skill in the art.
[0031] Sensors 210 are sensors used to, for example, detect location, motion and environmental conditions surrounding the mobile device 101. These include, for example, Global Positioning Satellite (GPS) sensors, accelerometers and temperature sensors.
[0032] In FIG. 1, mobile device 101 is coupled to font selection server 103 and electronic health record (EHR) server 104 via network 102. Network 102 may be implemented in a variety of ways. For example, in one embodiment, network 102 comprises one or more subnetworks. In another embodiment, network 102 is implemented using one or more types of network technologies known to those of skill in the art. These types of network technologies include, for example, wireless networks, wired networks, Ethernet networks, local area networks, personal area networks, metropolitan area networks, satellite networks and optical networks.
[0033] Font selection server 103 is coupled to font database 107. Font selection server 103 is used to receive, process and fulfil requests for various types of fonts which are stored in font database 107. Font selection server 103 is implemented using, for example, hardware, software, or some combination of hardware and software. In some embodiments, font selection server 103 is distributed across one or more locations. In other embodiments, font selection server 103 is comprised of one or more servers networked together and distributed across one or more locations. In other embodiments, font selection server 103 is comprised of a single server.
[0034] Font database 107 stores a plurality of fonts to assist in performing OCR operations, as will be explained further below. Font database 107 is coupled to font selection server 103. The coupling is achieved by, for example, connection via wired or wireless techniques. In some embodiments, coupling is achieved by, for example, using a network-based connection. In some embodiments, font database 107 is searchable.
[0035] In some embodiments, font selection server 103 and font database 107 are located within the same server or servers. In other embodiments, font selection server 103 and font database 107 are implemented separately from each other.
[0036] EHR server 104 is used to receive, process and fulfil requests for data stored in health record database 108. EHR server 104 is implemented using, for example, hardware, software, or some combination of hardware and software. In some embodiments, EHR server 104 is distributed across one or more locations. In other embodiments, EHR server 104 is comprised of one or more servers networked together and distributed across one or more locations. In yet other embodiments, EHR server 104 is comprised of a single server.
[0037] Health record database 108 is used to store health records corresponding to a plurality of patients. These health records include patient information such as:
- Patient health numbers issued by a provider such as a government health authority, for example, an Ontario Health Insurance Plan number, or an insurance provider;
- Patient name;
- Patient date of birth;
- Patient gender;
- Billing codes associated with the patient;
- Patient address;
- Patient payment information such as credit card information; and
- Patient medical information.
[0038] In some embodiments, EHR server 104 is coupled to one or more external servers 109 to retrieve and send data as necessary. External servers 109 include, for example, Ontario Ministry of Health servers or other servers hosted by other providers. In some embodiments, external servers 109 are coupled to external databases 110. External databases 110 store patient health records similar to health record database 108.
[0039] In some embodiments, EHR server 104 and one or more external servers 109 are coupled via, for example, network 102. In other embodiments, EHR server 104 is coupled to one or more external servers 109 via a direct connection.
[0040] Data source, or more simply source 106 contains patient health record identifier information that needs to be scanned by user 105 using mobile device 101.
Source 106 is, for example, a document or any other medium with visible patient health record identifier information, for example, a sticker, letter, file record, Addressograph imprint, plastic identification card, or display screen.
[0041] As shown in FIG. 2, patient health application 202 is stored on mobile storage 201 residing on mobile device 101. Patient health application 202 facilitates the OCR scanning of patient health record identifiers via mobile device 101.
[0042] Patient health application 202 is provided to mobile device 101 where it is installed on mobile storage 201. In some embodiments, patient health application 202 is provided to mobile device 101 by making patient health application 202 available to a third party application store such as the GOOGLE PLAY store or APPLE App Store, from where it is downloaded onto mobile device 101. In some other embodiments, patient health application 202 is provided to mobile device 101 by making it available via a server, from where it is downloaded. In yet other embodiments, patient health application 202 is provided to mobile device 101 via other known techniques such as email, or from an external storage device such as a micro Secure Digital (SD) card or other storage device connected to mobile device 101 via, for example, a Universal Serial Bus (USB) connection. In further embodiments, updates for patient health application 202 on mobile device 101 are provided for download by the user. In some embodiments, updates of patient health application 202 are provided for automatic download. An example of this is where the updates are provided to the third party application stores mentioned above, and automatically downloaded by mobile device 101 as part of an updating or refreshing process initiated by mobile device 101.
[0043] FIG. 3 shows a detailed description of patient health application 202.
In FIG. 3, authentication and security subsystem 301 performs the functions of authenticating and authorizing users such as user 105 before they are allowed to use patient health application 202.
[0044] User interface and navigation subsystem 302 allows user 105 to enter inputs to patient health application 202 using, for example, input devices 203 of mobile device 101. In one embodiment, user interface and navigation subsystem 302 receives entered inputs, and transmits these entered inputs to other coupled components of patient health application 202. User interface and navigation subsystem 302 also handles generation of the various user interfaces and screens which form part of patient health application 202. Furthermore, user interface and navigation subsystem 302 handles navigation between these various user interfaces and screens.
[0045] Application storage and encryption subsystem 303 performs the function of storing information for the operation of patient health application 202, and encrypting sensitive data before storage. Information stored on application storage and encryption subsystem 303 includes, for example:
- Fonts for user 105 to select,
- Parameters for use in pre- and post-OCR operations,
- Parameters for use in image processing and OCR operations,
- Images captured by patient health application 202,
- Results from performing pre-OCR, OCR and post-OCR operations, and
- Health record information of patients.
[0046] Image processing and OCR subsystem 305 performs a variety of operations including:
- Region of interest identification and detection,
- Creation of region of interest window for capture of images,
- Pre-OCR processing operations,
- OCR operations, and
- Post-OCR processing operations.
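As an illustration of the pre-OCR processing operations listed above, the following is a minimal sketch of one common step, binarizing a grayscale image with a fixed threshold. The function name and threshold value are illustrative assumptions; a real deployment would more likely use an adaptive method (e.g. Otsu's thresholding) from an image processing library.

```python
def binarize(gray, threshold=128):
    """Map a 2-D list of 0-255 grayscale values to pure black/white
    pixels, a typical pre-OCR cleanup step. Pixels at or above the
    threshold become white (255); the rest become black (0)."""
    return [[255 if px >= threshold else 0 for px in row] for row in gray]

# A tiny 2x3 patch: light background with two dark (ink) pixels.
patch = [
    [250, 240, 30],
    [245, 20, 25],
]
print(binarize(patch))  # -> [[255, 255, 0], [255, 0, 0]]
```

A fixed threshold is the simplest choice; noise filtering and normalisation of character dimensions, also named in the classification codes above, would follow the same pattern of small, composable image transforms.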
[0047] Image processing and OCR subsystem 305 comprises one or more OCR engines to perform OCR operations. In some embodiments, the one or more OCR engines are obtained from third party providers, for example, the Tesseract OCR system.
[0048] Prior to selection and usage of a font for OCR, the one or more OCR engines must be trained with that font. In some embodiments, training is achieved using the following steps:
- a font is created by, for example, a calligrapher using a format such as the TrueType format (TTF) or the OpenType format (OTF), and saved to either a TTF or OTF file;

- the TTF or OTF files are used to train the one or more OCR engines;
- a "trained data" file for the particular font is produced, and this file is then stored in the application storage and encryption subsystem 303 or on the font selection database 107;
and - the trained data file is then loaded to the one or more OCR engines from application storage and encryption subsystem 303 or over network 102 from font selection database 107.
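For the Tesseract engine named above, loading a font-specific trained data file amounts to pointing the engine at a directory of `.traineddata` files and selecting one by name. The sketch below shows this with the third-party pytesseract wrapper; the directory path and font name are hypothetical, and only `--tessdata-dir` and the `lang` selector are real Tesseract mechanisms.

```python
def tesseract_config(tessdata_dir):
    """Build the Tesseract CLI options needed to locate custom
    .traineddata files. '--tessdata-dir' is a real Tesseract flag;
    the directory used by callers here is illustrative."""
    return "--tessdata-dir {}".format(tessdata_dir)

def ocr_with_font(image_path, font_name, tessdata_dir):
    """Run OCR with a font-specific trained data file, i.e.
    <tessdata_dir>/<font_name>.traineddata. Requires the third-party
    pytesseract and Pillow packages, so the imports are deferred."""
    import pytesseract            # pip install pytesseract
    from PIL import Image         # pip install Pillow
    return pytesseract.image_to_string(
        Image.open(image_path),
        lang=font_name,                        # selects <font_name>.traineddata
        config=tesseract_config(tessdata_dir),
    )

print(tesseract_config("/data/tessdata"))  # -> --tessdata-dir /data/tessdata
```

In this scheme, downloading a new trained data file from the font database simply means adding one file to the tessdata directory; no application update is required, which matches the server-distribution alternative described below.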
[0049] There are a variety of methods to facilitate training. For example, where the patient health application 202 is provided by a service provider, then in some embodiments the training is done by the service provider using one or more training tools. In some embodiments, the one or more sub-programs within image processing and OCR subsystem 305 are used to perform the font training.
[0050] In some embodiments, the patient health application 202 comprises one or more pre-loaded trained data files, stored in application storage and encryption subsystem 303. Then, when training is performed for a new font, the new trained data file is made available to the user as part of an update of the patient health application 202 on mobile device 101. An example is as follows: A new trained data file is uploaded to a third party application store as part of an update to be made available to mobile device 101. When the update is downloaded from the third party application store, the new trained data file is stored in application storage and encryption subsystem 303.
[0051] Making new trained data files available to the user via a third party application store may be complex and difficult. In other embodiments, the new trained data file is uploaded to font database 107. Then, upon being notified of the existence of one or more new trained data files, patient health application 202 contacts font selection server 103 and downloads the new trained data file from font database 107.
[0052] In some further embodiments, user 105 supplies new fonts for OCR operations. In these embodiments, the training is performed and the corresponding trained data files are made available via, for example, the third party application stores or font selection server 103 and font database 107.
[0053] Secure network communication subsystem 304 performs the function of processing information received from patient health application 202. It also processes information to be sent to patient health application 202. In one embodiment, this processing comprises functions such as encryption of information to be sent, and decryption of received information.
[0054] Information receiving subsystem 306 performs the function of receiving information sent from external devices and servers such as EHR server 104, via network 102, communications subsystem 204 and secure network communication subsystem 304.
In one embodiment, it does so by performing application programming interface (API) calls to, for example, EHR server 104.
[0055] Information sending subsystem 307 performs the function of sending information to external devices and servers such as EHR server 104, via secure network communication subsystem 304, communications subsystem 204 and network 102. In one embodiment, it does so by performing API calls to, for example, EHR server 104.
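The API calls mentioned in the two paragraphs above can be sketched as building an authenticated HTTP request to the EHR server. The endpoint path, query parameter and bearer-token scheme below are assumptions for illustration only; a real EHR API would define its own contract, and the request would pass through the secure network communication subsystem before being sent.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_record_request(base_url, identifier, token):
    """Build a GET request for the patient record matching an extracted
    and verified patient health record identifier. The '/records'
    path, 'id' parameter and bearer token are hypothetical."""
    url = "{}/records?{}".format(base_url, urlencode({"id": identifier}))
    return Request(url, headers={"Authorization": "Bearer " + token})

req = build_record_request("https://ehr.example.org/api", "9876543217KT", "s3cret")
print(req.full_url)  # -> https://ehr.example.org/api/records?id=9876543217KT
```

Sending the request (e.g. with `urllib.request.urlopen`) and decrypting the response would correspond to the information receiving subsystem 306 path described above.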
[0056] As shown in FIG. 3, the components of patient health application 202 are coupled to each other so as to receive and send data and commands between each other as necessary.
[0057] FIGS. 4 and 4B present a process to perform OCR scanning using a mobile device so as to extract patient health record identifiers. This process will be described below in combination with FIGS. 4C to 4F.
[0058] In step 401 of FIG. 4, the user 105 initiates the process of extracting a patient health record identifier. In some embodiments, this is performed after the user is prompted to authenticate themselves with patient health application 202, and the user successfully authenticates themselves.
[0059] In step 402 of FIG. 4, a font is selected. In some embodiments, the user is prompted to provide inputs to user interface and navigation subsystem 302 to select a font. FIG. 4C shows an example embodiment of a screen to achieve some of the steps of FIGS. 4 and 4B. In FIG. 4C, screen 4B-00 is presented on display 205 of mobile device 101. Pressing on button 4B-02 presents options to user 105 to select a font. In some embodiments, the user is presented with one or more fonts stored in application storage and encryption subsystem 303. In other embodiments, the user is presented with one or more fonts stored in font database 107. In yet other embodiments, the user is first presented with one or more fonts stored in application storage and encryption subsystem 303, and a choice to view more font options. If the user wishes to view more font options, then patient health application 202 retrieves and presents fonts stored in font database 107 to user 105. In yet other embodiments, the user 105 is prompted to upload one or more fonts before selecting one of those fonts.
[0060] In some embodiments, the font is selected automatically by the patient health application 202. In some embodiments, automatic selection is achieved by, for example, utilizing one or more algorithms which use one or more inputs such as:
- Location of the mobile device 101 where the application 202 is used. In some embodiments, determination of location is performed by:
  o acquiring results from sensors 210 on mobile device 101,
  o performing geo-location, and
  o detecting the location of mobile device 101;
- Recency of usage of fonts;
- Amount of usage of fonts; and
- One or more factors related to user 105's usage patterns, such as:
  o History of usage of billing codes,
  o Type of patient, and
  o Date and time of usage.
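The automatic selection described above can be sketched as a simple scoring function over candidate fonts. The weights, the data shape and the location bonus below are illustrative assumptions, not values taken from the patent; a deployed system would tune these against real usage data.

```python
# Rank candidate fonts by a weighted blend of how often and how
# recently each font was used, with a bonus when the font was last
# used at the facility where the device currently is.

def select_font(candidates, here):
    def score(font):
        s = 2.0 * font["uses"] - 1.0 * font["days_since_use"]
        if font["facility"] == here:   # same clinic/hospital as last use
            s += 50.0
        return s
    return max(candidates, key=score)["name"]

fonts = [
    {"name": "ocr-b",    "uses": 40, "days_since_use": 2,  "facility": "clinic-a"},
    {"name": "arial",    "uses": 90, "days_since_use": 30, "facility": "hosp-b"},
    {"name": "sevenseg", "uses": 5,  "days_since_use": 1,  "facility": "clinic-a"},
]
print(select_font(fonts, "clinic-a"))  # -> arial
```

Billing-code history, patient type and time of day, also listed above, would enter the same way: as additional weighted terms in the score.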
[0061] As previously explained, solely relying on training of the one or more OCR engines within image processing and OCR subsystem 305 when faced with sources having a wide variety of fonts is insufficient to yield the highly accurate results needed for deployment in EHR systems. Using font selection to supplement font training enhances the accuracy of the one or more OCR engines in performing OCR operations on sources having a wide variety of fonts, which as a consequence improves the feasibility of OCR deployment.
[0062] In step 403 of FIG. 4, image processing and OCR subsystem 305 in combination with image capture subsystem 206 and processor 207 produces a region of interest detection window. This region of interest detection window is then displayed by display 205. An example is shown with reference to FIG. 4C. In FIG. 4C, the region of interest detection window 4B-04 is presented to user 105.
[0063] In step 404 of FIG. 4, the user 105 is prompted to adjust image capture parameters. Image capture parameters are parameters surrounding the capture of images by image capture subsystem 206. These include, for example:
- illumination levels,
- choice of flash,
- image resolution levels,
- distance from object,
- focus levels, and
- zoom levels.
Examples of prompting are presented in screen 4B-00 of FIG. 4C. In some embodiments, objects such as object 4B-01 are presented to user 105 so as to effect adjustment of image capture parameters. Object 4B-01 is, for example, a button, a slider, or any other control to allow the user to adjust image capture parameters. In some embodiments, recommendations are presented to the user in a field to adjust image capture parameters, such as field 4B-03 of screen 4B-00. These instructions include, for example, instructions to:
- point mobile device 101 at a patient health record identifier,
- avoid shadows or direct light, and
- allow a camera which is part of image capture subsystem 206 to focus.
[0064] In step 405 of FIG. 4, an image of the patient health record identifier is captured using the region of interest detection window. This is explained with reference to FIGS. 4D and 4E. An example of a patient health record identifier surrounded by other data is presented in FIG.
4D. In FIG. 4D, field 4C-01 is, for example, part of source 106. Field 4C-01 comprises
- other data 4C-03, and
- patient health record identifier 4C-02.
[0065] Patient health record identifier 4C-02 is, for example,
- a provincial health card number such as an Ontario Health Insurance Policy number,
- an insurance policy number provided by an insurance company, and
- a medical institution record identifier supplied by a medical institution such as a clinic, hospital or testing facility.
In some embodiments, patient health record identifier 4C-02 follows a pre-set format. In some embodiments, patient health record identifier 4C-02 is alphanumeric. In other embodiments, patient health record identifier 4C-02 consists only of letters. In yet other embodiments, patient health record identifier 4C-02 consists only of numbers. As would be appreciated by one of skill in the art, different types of patient health record identifier 4C-02 can have different lengths.
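Because each identifier type follows a pre-set format, format checking can be sketched as a per-type regular-expression table. Only the OHIP shape (10 digits followed by 2 letters) comes from the text; the other patterns and the table name are illustrative placeholders.

```python
import re

# Hypothetical per-type format table. Only the OHIP shape (10 digits
# followed by a 2-letter code) is described in the text; the other
# entries are illustrative placeholders.
FORMATS = {
    "ohip": re.compile(r"^\d{10}[A-Z]{2}$"),   # e.g. 1234567890AB
    "numeric_id": re.compile(r"^\d{8,12}$"),   # numbers only
    "alpha_id": re.compile(r"^[A-Z]{6,10}$"),  # letters only
}

def matches_format(identifier: str, kind: str) -> bool:
    """Return True when the scanned identifier matches the pre-set format."""
    pattern = FORMATS.get(kind)
    return bool(pattern and pattern.fullmatch(identifier.strip().upper()))
```

Such a check is cheap enough to run on every OCR result before any network call is made.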
[0066] Then, the user is prompted to capture an image of the patient health record identifier 4C-02 using region of interest detection window 4B-04. An example embodiment of the use of the region of interest detection window to capture an image is described in FIG. 4E. In some embodiments, when region of interest detection window 4B-04 is brought in proximity to patient health record identifier 4C-02, a plurality of boxes 4D-01 appears on screen 4B-00. In some embodiments, the plurality of boxes 4D-01 appears in response to commands sent by image processing and optical character recognition subsystem 305 upon detecting text.
[0067] In some embodiments, the user is prompted by the image capture subsystem 206 and the patient health application 202 to align this plurality of boxes 4D-01 with the patient health record identifier 4C-02 displayed on screen 4B-00 before capturing the image. In some embodiments, this aligning comprises either zooming in or zooming out using display 205 as necessary. The image is then captured. In some embodiments, the number of boxes in plurality of boxes 4D-01 is set according to the format of the patient health record identifier 4C-02. For example, since an Ontario Health Insurance Policy number is a 12-character alphanumeric code comprising 10 digits followed by a 2-letter code, plurality of boxes 4D-01 contains 12 boxes.
[0068] In step 406 of FIG. 4B, one or more pre-processing operations are performed on the captured image to produce a pre-processed image. Examples of pre-processing operations include:
- image cleaning to remove noise;
- de-skewing;
- de-speckling;
- binarisation;
- line removal;
- layout analysis;
- line detection;
- word detection;
- script recognition; and
- normalization of aspect ratio and scale.

One of skill in the art would appreciate that in other embodiments, pre-processing operations other than those listed above are performed. In some embodiments, these pre-processing operations also include enhancement operations.
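Binarisation, one of the pre-processing operations listed above, can be sketched in pure Python as global thresholding of a grayscale image. A real deployment would use an imaging library and an adaptive method; the mean-intensity threshold here is a deliberately simplistic assumption.

```python
def binarise(gray, threshold=None):
    """Binarise a grayscale image given as a list of rows of 0-255 ints.

    If no threshold is given, use the mean intensity as a crude global
    threshold - a stand-in for the Otsu-style methods used in practice.
    """
    pixels = [p for row in gray for p in row]
    if threshold is None:
        threshold = sum(pixels) / len(pixels)
    # Dark pixels (ink) -> 1, light pixels (background) -> 0.
    return [[1 if p < threshold else 0 for p in row] for row in gray]

# Toy 2x4 "image": two dark characters on a light background.
img = [
    [250, 240,  30, 245],
    [ 20, 235, 230,  25],
]
binary = binarise(img)
```

Other operations in the list (de-skewing, de-speckling, line removal) follow the same pattern of a pure pixel transform feeding the OCR engine.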
[0069] In step 407 of FIG. 4B, one or more OCR operations are performed on the pre-processed image using the font selected in step 402, so as to extract the patient health record identifier 4C-02. In one embodiment, this is performed using the one or more OCR engines within image processing and OCR subsystem 305.
[0070] In step 408 of FIG. 4B, one or more post-processing operations are performed to verify the extracted patient health record identifier. Examples of post-processing operations include:
- testing based on confidence settings;
- regular expressions;
- Luhn algorithm checking or mod-10 checking: An example of such a check is presented in "Technical Specifications - Ministry Of Health And Long Term Care: Interface to Health Care Systems – Machine Readable Input Specifications", Section 5.13: Check Digit, Ministry of Health, retrieved on Mar 31, 2018 from http://health.gov.on.ca/english/providers/pub/ohip/tech_specific/pdf/5 13.pdf; and
- specific lexicons.
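The mod-10 (Luhn) check referenced above can be implemented directly. This is a generic Luhn implementation, not the Ministry's exact specification, and the assumption that the check applies to the 10-digit numeric portion of an OHIP-style identifier is illustrative.

```python
def luhn_valid(digits: str) -> bool:
    """Generic mod-10 (Luhn) check: double every second digit from the
    right, subtract 9 from any doubled digit above 9, sum, and require
    the total to be divisible by 10."""
    if not digits.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Assumption: for an identifier of 10 digits plus a 2-letter version
# code, the check would apply to the numeric portion only.
def check_numeric_part(identifier: str) -> bool:
    return luhn_valid(identifier[:10])
```

An extracted identifier failing this check can trigger the retry flow of step 409 without any round trip to a server.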
[0071] In step 409 of FIG. 4B, a determination is made whether the image capture operation was successful. In some embodiments, this comprises determining whether one or more results from the performing of one or more of the post-processing operations detailed above in step 408 met one or more minimum thresholds. For example, for Ontario Health Insurance Policy patient health numbers, a determination is made as to whether the extracted patient health number passed the mod-10 check above.
[0072] In some embodiments, this determination comprises presenting the extracted patient health record identifier to the user 105 for visual confirmation, as shown in FIG. 4F. In FIG. 4F, confirmation 4E-00 comprises patient health record identifier 4C-02, confirmation prompting 4E-01, "Retry" button 4E-02 and "Proceed" button 4E-03. If, upon reading confirmation prompting 4E-01 the user declines the extracted patient health record identifier and activates retry button 4E-02, then the determination is unsuccessful. If, upon reading confirmation prompting 4E-01 the user accepts the extracted patient health record identifier and activates proceed button 4E-03, then the determination is successful.
[0073] If capture is determined to be successful in step 409 of FIG. 4B, in step 413 of FIG.
4B, the verified, extracted and confirmed patient health record identifier is used to retrieve patient information. The retrieved information comprises, for example: patient demographic information; medical, financial, or any other health-related information for that particular patient. The information is retrieved from, for example, one of:
- application storage and encryption subsystem 303,
- health record database 108, or
- external databases 110 connected to external servers 109.
In one embodiment, a hierarchical retrieval process analogous to a process of retrieving data from a cache and storage in a computer system is used. First, the application storage and encryption subsystem 303 is checked to see if the corresponding patient health record is available. If it is not available in the application storage and encryption subsystem 303, then an API call is made by, for example, information sending subsystem 307 to EHR
server 104 via network 102 to retrieve the data from health record database 108. If it is still not available on health record database 108, then a call is made to other servers 109 to retrieve the data from databases connected to other servers 109.
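The cache-like retrieval order just described (local application storage first, then the health record database, then external servers) can be sketched as a chain of lookups. The callable layers and dictionary stand-ins below are assumptions replacing the subsystems named in the text.

```python
def retrieve_patient_info(identifier, layers):
    """Try each storage layer in order, mirroring a cache hierarchy:
    local application storage, then the health record database, then
    external servers. Each layer is a (name, lookup) pair whose lookup
    returns a record or None."""
    for name, lookup in layers:
        record = lookup(identifier)
        if record is not None:
            return name, record
    return None, None

# Illustrative stand-ins for subsystem 303, database 108, and servers 109.
local_store = {"1234567890AB": {"name": "A. Patient"}}
ehr_db = {"9999999999ZZ": {"name": "B. Patient"}}

layers = [
    ("application storage", local_store.get),
    ("health record database", ehr_db.get),
    ("external servers", lambda _id: None),  # placeholder remote call
]
source, record = retrieve_patient_info("9999999999ZZ", layers)
```

Ordering the layers from cheapest to most expensive is what yields the cache-like latency benefit described in the next paragraph.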
[0074] Similar to a caching arrangement, this is likely to reduce the time needed to retrieve data. Furthermore, in situations where mobile device 101 cannot connect to network 102, patient information can still be retrieved and used as necessary.
[0075] In step 413 of FIG. 4B, this retrieved patient information is then used to populate an interface for presentation to the user 105 via the mobile device 101, along with any related messages. In embodiments where the hierarchical retrieval process detailed above is utilized, user 105 is prompted to confirm and save the retrieved patient record to the application storage and encryption subsystem 303. In some embodiments, patient information corresponding to a number of scanned patient health record identifiers is stored on the application storage and encryption subsystem 303. For example, patient information corresponding to the last five (5) scanned patient health record identifiers is stored on the application storage and encryption subsystem 303. In other embodiments, patient health information corresponding to patient health record identifiers scanned in a fixed period of time is stored on the application storage and encryption subsystem 303. For example, patient information corresponding to patient health record identifiers scanned in the last five (5) days is stored on the application storage and encryption subsystem 303.
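The bounded local-storage policy described above (keep records for only the last N scanned identifiers) can be sketched with an ordered mapping. The capacity of five matches the example in the text; the class name and eviction details are illustrative assumptions.

```python
from collections import OrderedDict

class RecentRecordStore:
    """Keep patient records for only the last `capacity` scanned
    identifiers, evicting the oldest first (the last-five example)."""
    def __init__(self, capacity=5):
        self.capacity = capacity
        self._records = OrderedDict()

    def save(self, identifier, record):
        if identifier in self._records:
            self._records.move_to_end(identifier)  # re-scan refreshes it
        self._records[identifier] = record
        while len(self._records) > self.capacity:
            self._records.popitem(last=False)      # evict the oldest scan

    def get(self, identifier):
        return self._records.get(identifier)

store = RecentRecordStore(capacity=5)
for i in range(7):                  # simulate seven scans
    store.save(f"ID{i}", {"seq": i})
```

A time-window policy (records scanned in the last five days) would store a timestamp with each record and evict by age instead of count.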
[0076] In other embodiments, once the information is retrieved, other steps are performed. Some of these steps are facilitated by, for example, user prompting. These other steps comprise, for example:
- Retrieving and viewing additional health information for the patient,
- Amending information to existing health information records for the patient, and
- Creating new health records for the patient.
[0077] If capture is unsuccessful in step 409 of FIG. 4B, then image capture is repeated. In some embodiments, the image capture process of step 405 of FIG. 4 is repeated without prompting the user (step 415 of FIG. 4B). In other embodiments, in step 410 of FIG. 4B the user is prompted to try adjusting image capture parameters or reselect the font before repeating image capture. If in step 411 of FIG. 4B the user elects not to adjust the image capture parameters or reselect font, then the process returns to step 405 of FIG. 4. If in step 411 of FIG. 4B, the user elects to either adjust image capture parameters or reselect the font, then:
- If in step 412 of FIG. 4B the user elects to only adjust image capture parameters, the process returns to step 404 of FIG. 4; and
- If in step 412 of FIG. 4B the user elects to also reselect the font, the process returns to step 402 of FIG. 4.
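The retry branches of steps 409–412 can be sketched as a small control loop. The step functions below are placeholders for the capture, OCR, and verification operations described above, and the attempt cap is an assumption.

```python
def scan_with_retries(capture, verify, max_attempts=3,
                      ask_adjust=None, ask_refont=None):
    """Sketch of the retry flow: on an unsuccessful determination
    (step 409), optionally reselect the font (back to step 402) or
    re-adjust capture parameters (back to step 404) before recapturing
    (step 405); otherwise recapture without prompting (step 415)."""
    for _attempt in range(max_attempts):
        image = capture()
        identifier = image  # stand-in for the OCR extraction of step 407
        if verify(identifier):           # step 409 determination
            return identifier
        if ask_refont and ask_refont():  # user elects to reselect the font
            continue                     # ...then capture again
        if ask_adjust and ask_adjust():  # user elects to adjust parameters
            continue                     # ...then capture again
        # Neither elected: repeat image capture without prompting.
    return None

# Toy run: the "camera" yields a valid 12-character code on the third try.
frames = iter(["bad", "bad", "1234567890AB"])
result = scan_with_retries(
    lambda: next(frames),
    lambda ident: ident.isalnum() and len(ident) == 12)
```

Bounding the attempts keeps the loop from spinning on a source the OCR engine cannot read; the described system instead relies on the user to abandon the scan.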
[0078] One of skill in the art would recognize that there may be variations to the above embodiments. For example, in some embodiments the OCR operations explained in step 407 are performed using one or more remote OCR engines coupled to mobile device 101 via network 102. In other embodiments, font selection is performed prior to initiating the process of extracting a patient health record identifier.
[0079] The systems and implementations detailed above and in FIGS. 1–4F may be deployed in a variety of ways.
[0080] In some embodiments patient health application 202 is provided to mobile device 101 by a service provider. In some of these embodiments, font selection server 103, font database 107, EHR server 104 and health record database 108 are also operated by the service provider. The service provider also performs font training as explained previously.
[0081] In other embodiments, patient health application 202 is provided to mobile device 101 as part of an organization's in-house system. Then, font selection server 103, font database 107, EHR server 104 and health record database 108 are also operated by the organization, which also performs font training as explained previously.
[0082] In yet other embodiments, some parts of the systems and methods detailed above are part of an organization's in-house system and other parts are operated by one or more service providers. For example, the service provider provides patient health application 202 to mobile device 101 and administers font selection server 103 and font database 107, while EHR server 104 and health record database 108 are operated by the organization.
[0083] While the above has been described with regard to patient health record identifiers, one of skill in the art would know that it could be applied to other situations where the sources for OCR scanning utilize a variety of fonts and a high level of accuracy is required.
[0084] Although the algorithms described above including those with reference to the foregoing flow charts have been described separately, it should be understood that any two or more of the algorithms disclosed herein can be combined in any combination.
Any of the methods, algorithms, implementations, or procedures described herein can include machine-readable instructions for execution by: (a) a processor, (b) a controller, and/or (c) any other suitable processing device. Any algorithm, software, or method disclosed herein can be embodied in software stored on a non-transitory tangible medium such as, for example, a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), or other memory devices, but persons of ordinary skill in the art will readily appreciate that the entire algorithm and/or parts thereof could alternatively be executed by a device other than a controller and/or embodied in firmware or dedicated hardware in a well known manner (e.g., it may be implemented by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), discrete logic, etc.). Also, some or all of the machine-readable instructions represented in any flowchart depicted herein can be implemented manually as opposed to automatically by a controller, processor, or similar computing device or machine. Further, although specific algorithms are described with reference to flowcharts depicted herein, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the example machine readable instructions may alternatively be used.

For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
[0085] It should be noted that the algorithms illustrated and discussed herein are described as having various modules which perform particular functions and interact with one another. It should be understood that these modules are merely segregated based on their function for the sake of description and represent computer hardware and/or executable software code which is stored on a computer-readable medium for execution on appropriate computing hardware.
The various functions of the different modules and units can be combined or segregated as hardware and/or software stored on a non-transitory computer-readable medium as above as modules in any manner, and can be used separately or in combination.
[0086] While particular implementations and applications of the present disclosure have been illustrated and described, it is to be understood that the present disclosure is not limited to the precise construction and compositions disclosed herein and that various modifications, changes, and variations will be apparent from the foregoing descriptions without departing from the spirit and scope of an invention as defined in the appended claims.

Claims (60)

CLAIMS:
1. A system to enable a user to utilize a mobile device to scan a patient health record identifier, wherein said mobile device is coupled to a network; and said system further comprising a patient health application installed on said mobile device and deployed for use within one or more medical facilities, the patient health record identifier associated with a patient within the one or more medical facilities, wherein the patient health application comprises an image processing and optical character recognition (OCR) subsystem, further wherein the image processing and OCR subsystem is trained for a plurality of fonts based on at least one of training performed within the patient health application, training performed using one or more training tools provided by a service provider, and an update of the patient health application, and a trained data file corresponding to each of the plurality of fonts is made available to the image processing and OCR subsystem after the image processing and OCR subsystem is trained for each of the plurality of fonts, and the patient health application is configured to select a font from the plurality of fonts automatically using one or more algorithms, wherein said one or more algorithms perform the selection using one or more inputs, and Date recue/date received 2021-10-26 further wherein at least one of said one or more inputs is based on a location of the mobile device, and produce a region of interest detection window on a display of said mobile device, capture an image of said patient health record identifier from one of a plurality of sources using said region of interest detection window, perform one or more pre-processing operations on said captured image to produce a pre-processed image, perform one or more OCR operations on said pre-processed image using said selected font to extract said patient health record identifier, further wherein the one or more OCR operations are performed by the image processing and OCR subsystem
using the trained data file corresponding to the selected font, perform one or more post-processing operations on said extracted health record identifier to verify said extracted patient health record identifier, and retrieve patient information based on said verified and extracted patient health record identifier.
2. The system of claim 1, wherein the plurality of sources comprises stickers, letters, file records, Addressograph imprints, or plastic identification cards.
3. The system of claim 1, wherein said region of interest detection window comprises a plurality of boxes.
4. The system of claim 3, wherein said patient health record identifier is displayed on said display; and said capture of said image is performed by said patient health application after said displayed health record identifier is aligned with said plurality of boxes on said display.
5. The system of claim 1, wherein said one or more pre-processing operations comprise at least one of image cleaning to remove noise;
de-skewing;
de-speckling;
binarisation;
line removal;
layout analysis;
line detection;
word detection;
script recognition; and normalization of aspect ratio and scale.
6. The system of claim 1, wherein said at least one of said one or more inputs is the location of the mobile device.
7. The system of claim 6, wherein said one or more inputs further comprise at least one of:
a recency of usage of said font;
an amount of usage of said font; and one or more factors related to a usage pattern of said user.
8. The system of claim 1, wherein said patient health application comprises an application storage and encryption subsystem.
9. The system of claim 8, wherein said font is selected from either said application storage and encryption subsystem or from a font database coupled to said mobile device via said network.

10. The system of claim 1, wherein said one or more pre-processing operations are performed by the image processing and optical character recognition subsystem.
11. The system of claim 7, wherein said one or more factors related to said usage pattern of said user comprise at least one of a history of usage of one or more billing codes; and a type of patient.
12. The system of claim 1, wherein said one or more post-processing operations are performed by the image processing and optical character recognition subsystem.
13. The system of claim 1, wherein said one or more post-processing operations are based on at least one of testing based on confidence settings;
regular expressions;
Luhn algorithm; and specific lexicons.
14. The system of claim 1, wherein said one or more optical character recognition operations are repeated based on one or more results of said performing of said one or more post-processing operations.
15. The system of claim 1, wherein said retrieved patient information is used by said patient health application to populate an interface for presentation to the user.
16. The system of claim 1, wherein said patient health application is configured to select a different font, capture a different image of a different patient health record identifier from a different source, and perform one or more OCR operations using said different image and said selected different font.

17. The system of claim 1, wherein said image processing and OCR subsystem comprises one or more OCR engines;
said one or more OCR engines perform said one or more OCR operations; and said training of the image processing and OCR subsystem comprises training said one or more OCR engines for said selected font.
18. The system of claim 16, wherein said image processing and OCR subsystem comprises one or more OCR engines; said one or more OCR engines perform said one or more OCR operations; and said training of the image processing and OCR subsystem comprises training said one or more OCR
engines for said selected font and said different font.
19. The system of claim 18, wherein after said one or more OCR engines is trained for said different font, a trained data file corresponding to the different font is made available to said mobile device.
20. The system of claim 19, wherein said trained data file corresponding to the different font is made available to said mobile device either as part of an update of said patient health application; or via a font selection database.
21. A method to enable a user to utilize a mobile device to scan a patient health record identifier, wherein said mobile device is coupled to a network; and said method further comprising providing a patient health application comprising an image processing and optical character recognition (OCR) subsystem for said mobile device for deployment for use within one or more medical facilities, the patient health record identifier associated with a patient within the one or more medical facilities, the image processing and OCR subsystem is trained for a plurality of fonts based on at least one of training performed within the patient health application, training performed using one or more training tools provided by a service provider, and an update of the patient health application, and a trained data file corresponding to each of the plurality of fonts is made available to the image processing and OCR subsystem after the image processing and OCR subsystem is trained for each of the plurality of fonts, wherein said patient health application is configured to select a font from the plurality of fonts automatically based on one or more algorithms, wherein said one or more algorithms have one or more inputs, and at least one of said one or more inputs is based on a location of the mobile device, and produce a region of interest detection window on a display of the mobile device, capture an image of said patient health record identifier from one of a plurality of sources using said region of interest detection window, perform one or more pre-processing operations on said captured image to produce a pre-processed image, perform one or more OCR operations on said pre-processed image using said selected font to extract said patient health record identifier, further wherein the one or more OCR operations are performed by the image processing and OCR
subsystem using the trained data file corresponding to the selected font, perform one or more post-processing operations on said extracted health record identifier to verify said extracted patient health record identifier, and retrieve patient information based on said verified and extracted patient health record identifier.
22. The method of claim 21, wherein said plurality of sources comprises stickers, letters, file records, Addressograph imprints, or plastic identification cards.
23. The method of claim 21, wherein said region of interest detection window comprises a plurality of boxes.
24. The method of claim 23, wherein said patient health record identifier is displayed on said display; and said patient health application is configured to perform said capture of image after said displayed health record identifier is aligned with said plurality of boxes on said display.
25. The method of claim 21, wherein said one or more pre-processing operations comprise at least one of image cleaning to remove noise;
de-skewing;
de-speckling;
binarisation;
line removal;
layout analysis;
line detection;
word detection;
script recognition; and normalization of aspect ratio and scale.
26. The method of claim 21, wherein said at least one of said one or more inputs comprises a location of the mobile device.
27. The method of claim 26, wherein said one or more inputs further comprise at least one of:
a recency of usage of said font;
an amount of usage of said font; and one or more factors related to a usage pattern of said user.
28. The method of claim 21, wherein said patient health application comprises an application storage and encryption subsystem.
29. The method of claim 28, wherein said font is selected from either said application storage and encryption subsystem or from a font database coupled to said mobile device via said network.
30. The method of claim 21, wherein said one or more pre-processing operations are performed by the image processing and OCR subsystem.
31. The method of claim 27, wherein said one or more factors related to a usage pattern of said user comprise at least one of a history of usage of one or more billing codes; and a type of patient.
32. The method of claim 21, wherein said one or more post-processing operations are performed by the image processing and OCR subsystem.
33. The method of claim 21, wherein said one or more post-processing operations are based on at least one of testing based on confidence settings;
regular expressions;

Luhn algorithm; and specific lexicons.
34. The method of claim 21, further comprising repeating said one or more OCR operations based on one or more results of said performing of said one or more post-processing operations.
35. The method of claim 21, further comprising populating an interface for presentation to the user based on said retrieved patient information.
36. The method of claim 21, further wherein said method comprises selecting a different font, capturing a different image of a different patient health record identifier from a different source, and performing, by said patient health application, said one or more OCR
operations using said different image and said selected different font.
37. The method of claim 21, further wherein one or more OCR engines are used for said performing of one or more OCR operations;
said method further comprising training said one or more OCR engines for said selected font prior to said selecting of said selected font.
38. The method of claim 36, further wherein one or more OCR engines are used for said performing of one or more OCR operations; and said method further comprising training said one or more OCR engines for said selected font and said different font prior to either selecting of said selected font or selecting of said different font.
39. The method of claim 38, further comprising making a trained data file corresponding to said different font available to said mobile device after said training of said one or more OCR engines for said different font.

40. The method of claim 27, wherein the method further comprises determining the location of the mobile device based on one or more results acquired from the mobile device.
41. A method to enable a user to utilize a mobile device to scan a patient health record identifier, wherein said mobile device is coupled via a network to an electronic health record server and a health record database, further wherein said electronic health record server is coupled to one or more external servers and an external database, and said method further comprising providing a patient health application comprising an image processing and optical character recognition (OCR) subsystem for installation on said mobile device, wherein the patient health application is provided for deployment for use within one or more medical facilities, the patient health record identifier associated with a patient within the one or more medical facilities, the image processing and OCR subsystem is trained for a plurality of fonts based on at least one of training performed within the patient health application, training performed using one or more training tools provided by a service provider, and an update of the patient health application, and a trained data file corresponding to each of the plurality of fonts is made available to the image processing and OCR subsystem after the image processing and OCR subsystem is trained for each of the plurality of fonts, and said patient health application comprises an application storage and encryption subsystem, and said patient health application is configured to select a first font from the plurality of fonts automatically using one or more algorithms, wherein said one or more algorithms have one or more inputs, and at least one of said one or more inputs is based on a location of the mobile device, and produce a region of interest detection window on a display of said mobile device, capture a first image of a first patient health record identifier from a first of a plurality of sources using said region of interest detection window and an image capture subsystem of said mobile device, perform
one or more pre-processing operations on said captured first image to produce a pre-processed image;
perform a first one or more OCR operations on said pre-processed image using said selected font to extract said patient health record identifier, further wherein the first one or more OCR operations are performed by the image processing and OCR subsystem using the trained data file corresponding to the selected font, and perform one or more post-processing operations on said extracted health record identifier to verify said extracted patient health record identifier; and retrieve patient information, based on said verified and extracted patient health record identifier, from one of said application storage and encryption subsystem, said health record database via said network, or said one or more external servers.
42. The method of claim 41, wherein said patient health application is further configured to select a second font from the plurality of fonts automatically using the one or more algorithms, capture a second image of a second patient health record identifier from a second of the plurality of sources, and perform a second one or more OCR operations using said captured second image and said selected second font, wherein the second one or more OCR operations are performed using the image processing and OCR subsystem.
43. The method of claim 41, further wherein one or more OCR engines within the image processing and OCR subsystem are used for said performing of the first one or more OCR
operations.
44. The method of claim 41, wherein said one or more post-processing operations are based on a specific lexicon.
45. The method of claim 41, wherein said one or more post-processing operations are based on a regular expression.
46. The method of claim 41, wherein said one or more post-processing operations are based on a Luhn algorithm.
47. The method of claim 42, wherein the first source is one of a sticker, letter, file record, Addressograph imprint, or plastic identification card, and the second source is a display screen.
48. The method of claim 41, wherein the image processing and OCR subsystem is trained using a file comprising the first font, wherein the file is created by a calligrapher.
49. The method of claim 41, wherein said making of said trained data file available to said mobile device is performed within an update of the patient health application.
50. The method of claim 41, wherein said making of said trained data file available to said mobile device is performed via a third party application store.
51. The method of claim 41, wherein said making of said trained data file available to said mobile device is performed via a font selection server.
52. The method of claim 42, wherein the second font is supplied by the user.
53. The method of claim 42, wherein the selecting of the second font is based on inputs received after prompting the user.
54. The method of claim 53, wherein said prompting comprises presenting a screen.
55. The method of claim 54, wherein said prompting comprises presenting options.
56. The method of claim 54, wherein the received inputs are based on the user interacting with the screen.
57. A system to enable a user to utilize a mobile device to scan a patient health record identifier, wherein said mobile device is coupled to a network; and said system further comprising a patient health application installed on said mobile device and deployed for use within one or more medical facilities, the patient health record identifier associated with a patient within the one or more medical facilities, wherein the patient health application comprises an image processing and optical character recognition (OCR) subsystem, further wherein the image processing and OCR subsystem is trained for a plurality of fonts based on at least one of training performed within the patient health application, training performed using one or more training tools provided by a service provider, and an update of the patient health application, and a trained data file corresponding to each of the plurality of fonts is made available to the image processing and OCR subsystem after the image processing and OCR subsystem is trained for each of the plurality of fonts, and the patient health application is configured to select a font from the plurality of fonts either automatically using one or more algorithms, wherein said one or more algorithms perform the selection using one or more inputs, further wherein at least one of the one or more inputs is based on a location of the mobile device, or based on inputs received after prompting the user, produce a region of interest detection window on a display of said mobile device, capture an image of said patient health record identifier from one of a plurality of sources using said region of interest detection window, perform one or more pre-processing operations on said captured image to produce a pre-processed image, perform one or more OCR operations on said pre-processed image using said selected font to extract said patient health record identifier, further wherein the one or more OCR operations are performed by the image processing and OCR subsystem using the trained data file corresponding to the selected font, and perform one or more post-processing operations on said extracted patient health record identifier to verify said extracted patient health record identifier, and retrieve patient information based on said verified and extracted patient health record identifier.
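Claim 57 includes automatic font selection driven by inputs such as the mobile device's location. One plausible realization is a nearest-facility lookup; the facility table, coordinates, and font names below are hypothetical illustrations, not data from the patent:

```python
# Hypothetical mapping of medical facilities to the identifier font used
# on their stickers, wristbands, or cards, keyed by facility coordinates.
FACILITY_FONTS = [
    {"name": "General Hospital", "lat": 43.65, "lon": -79.38, "font": "ocr_b"},
    {"name": "Riverside Clinic", "lat": 45.42, "lon": -75.69, "font": "ocr_a"},
]

def select_font(device_lat: float, device_lon: float) -> str:
    """Pick the trained font for the facility nearest the device."""
    def sq_dist(facility):
        # Squared planar distance is enough to rank nearby facilities.
        return ((facility["lat"] - device_lat) ** 2
                + (facility["lon"] - device_lon) ** 2)
    nearest = min(FACILITY_FONTS, key=sq_dist)
    return nearest["font"]
```

The selected font name would then determine which trained data file the OCR subsystem loads; if no facility is close enough, the application could instead fall back to prompting the user, as the claim's second branch allows.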

58. The system of claim 57, wherein said prompting comprises presenting a screen.
59. The system of claim 58, wherein said prompting comprises presenting options.
60. The system of claim 58, wherein the received inputs are based on user interaction with the screen.

CA3002447A 2018-04-24 2018-04-24 System and method for patient health record identifier scanner Active CA3002447C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3002447A CA3002447C (en) 2018-04-24 2018-04-24 System and method for patient health record identifier scanner

Publications (2)

Publication Number Publication Date
CA3002447A1 CA3002447A1 (en) 2019-10-24
CA3002447C true CA3002447C (en) 2023-03-14

Family

ID=68318141

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3002447A Active CA3002447C (en) 2018-04-24 2018-04-24 System and method for patient health record identifier scanner

Country Status (1)

Country Link
CA (1) CA3002447C (en)

Also Published As

Publication number Publication date
CA3002447A1 (en) 2019-10-24
