WO2023166328A2 - Augmented documentation through computer vision detection, alignment, and application of paper markup to digital maps and forms - Google Patents


Info

Publication number
WO2023166328A2
WO2023166328A2 (PCT/IB2022/000793)
Authority
WO
WIPO (PCT)
Prior art keywords
digital
record
paper
information
markup
Prior art date
Application number
PCT/IB2022/000793
Other languages
French (fr)
Other versions
WO2023166328A3 (en)
Inventor
Matthew A. Molenda
Original Assignee
Mofaip, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mofaip, Llc filed Critical Mofaip, Llc
Priority to GB2408005.3A priority Critical patent/GB2627661A/en
Priority to US18/225,872 priority patent/US20230368878A1/en
Priority to PCT/IB2023/000535 priority patent/WO2024023584A2/en
Publication of WO2023166328A2 publication Critical patent/WO2023166328A2/en
Publication of WO2023166328A3 publication Critical patent/WO2023166328A3/en

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition
    • G06V 30/22 - Character recognition characterised by the type of writing
    • G06V 30/224 - Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 - Document-oriented image-based pattern recognition
    • G06V 30/41 - Analysis of document content
    • G06V 30/412 - Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 - Document-oriented image-based pattern recognition
    • G06V 30/41 - Analysis of document content
    • G06V 30/413 - Classification of content, e.g. text, photographs or tables
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/40 - Document-oriented image-based pattern recognition
    • G06V 30/41 - Analysis of document content
    • G06V 30/414 - Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the ROI information is reconciled and placed into the correct spots on a digital version of the printed form or record.
  • the digital information from the paper form may be applied to multiple digital forms and records simultaneously, thus also creating a propagation point for new paper forms, such as a Mohs surgery form for a patient needing a second surgical layer as one exemplar.
  • the digital information allows for propagation and auto-fill of new forms, or for opening the correct area of an application automatically.
  • the digital interactive forms, maps, and applications 409 can automatically open the correct patient chart in an electronic health record, document the detections into various forms and maps within the application, and be ready for interactivity and additional augmented input from the user. These processes all combine into augmented documentation 410 workflows.
  • a pin is automatically placed on a multi-dimensional anatomy map to represent a procedure, such as a shave biopsy, and has the correct diagnosis, procedure type, order in a list, procedure description, category, color, map placement, anatomic description, billing code, patient information, and visual preview, and the application is ready to associate clinical photographs with the pin, or the application is ready to print a new form from the digital material, like a pathology requisition form.
  • a paper Mohs map used in micrographic dermatologic surgery. Multiple paper maps are used to mark areas of removed tissue for skin cancer removal and photographs are taken before, during, and after the surgery.
  • the paper Mohs maps processed in the system become a digital interactive form or map 409 capable of accepting the digital photos and creating augmented documentation of the procedure.
  • FIG. 2 depicts an exemplar image capture 403 (photo) of a printed form 401 with multiple handwritten annotations as the user markup 402.
  • User markup 402 may include different colors representing shapes, characters (alone or clustered), labels, arrows, pinpoints, pin orders, shading, coloring, and other markup.
  • Through image capture 403 or the detection processor 405, the system detects the user markup 402.
  • FIG. 3 is a representation of the detected user markup 402 from the printed form in FIG. 2.
  • FIG. 4 depicts the digital interactive map 409 generated after the user markup is processed through the system. The user markup has been converted to digital markup 411 with precise coordinates relative to the document corners, refined by the images, and placed onto a digital version of the printed form, here a digital interactive map 409.
  • FIG. 5 is another exemplar of an image capture 403 (photo) of a printed form 401 depicting an anatomic map with handwritten annotations as user markup 402.
  • an English anatomic map is printed on letter sized paper.
  • the image capture 403 in this exemplar is at an angle and has a distorted perspective, and it is contemplated that computer vision 404 will rotate, crop, and perspective warp this image.
  • computer vision 404 conversion of the handwritten annotations to digital markup and categorization occurs.
  • User markup 402 includes patient demographic information like the patient’s name and date of birth, which was handwritten in. (These paper maps can be preprinted with patient demographics, which can also be detected, not shown).
  • FIG. 6 shows the digital markup 411 overlaid on the image capture 403 of the printed form 401.
  • the digital detections are appropriately and automatically applied to the correct diagnostic, procedural, map, diagram, drawing, coordinates, and demographic input areas of the application, applying the extracted digital documentation to the session which syncs to other session workflows, such as automatic billing code calculation, diagnosis categorization, and more.
  • the map contains orientation corners 413 and a QR code 412 containing map information, including map language, which is used by computer vision as the default interpretation language unless otherwise specified by the user.
  • Paper size and orientation corners 413 are detected for automatic map alignment, rotation, cropping, perspective warping, detection, and other processing. Alignments are simultaneously refined further to the map version that was automatically detected.
  • this information is automatically applied to anatomic visualization 10, or digital map, with automatic documentation 414 of correct procedures, diagnoses, anatomic sites, notes, patient demographics, and billing codes.
  • the documentation can now be modified, enhanced, or augmented by attaching photos, attachments, forms and other data directly to the dynamic anatomic addresses, thus creating augmented documentation. It is contemplated that additional forms can be propagated from the electronic record.
  • Augmented documentation allows for attaching photos, attachments, forms and other data directly to the dynamic anatomic addresses. Additionally, data blocks can be changed, modified, rearranged, or added. Other workflows, such as label printing with isolated visual previews and other dynamic anatomic address information, become instantly available. Furthermore, the session, data, and visualizations are still translatable to any coded, linguistic, or symbolic language.
  • FIGs. 8-10 depict another exemplar conversion of printed form 401 to digital information.
  • the printed form 401 in FIG. 8 is a Chinese version of an anatomic map. Again, user markup 402 is depicted, this time depicting distributions of anatomy for different diagnoses represented by different manually shaded in colors on the paper form 401.
  • FIG. 9 is the generated electronic record of the paper form with detected color, area, intensity, and anatomic distribution in Chinese.
  • In this exemplar, each color represented a diagnosis, and the anatomic distribution is reported automatically along with the diagnosis.
  • the exemplar depicts automatic conversion of detections to digital map, including appropriate selection and coloring of hierarchical anatomic site components of dynamic anatomic addresses, visualizations, diagnostic categories, and anatomic groupings. Additionally, it is contemplated that surface area calculations, intensities, and overlaps are detected and applied. It is further contemplated that augmented documentation allows for attaching photos, attachments, forms, and other data directly to the documented dynamic anatomic addresses. Additionally, data blocks and the diagnosis can be changed, modified, rearranged, or added.
  • FIG. 10 shows the automatic English translation of the generated electronic version in FIG. 9.
  • paper form markup can occur in one language, and that the digital information can automatically be translated and applied to an electronic record in another language.
  • FIG. 11 is an exemplar of an automatically generated paper form 300.
  • the form is a Mohs map used in micrographic dermatologic surgery.
  • the form 300 contains a diagram, map, alerts, and country specific information all generated from the dynamic anatomy library and the non-anatomy data. Patient demographics, encounter demographics, information from the pathology report, diagnosis information, and diagnosis extensions are automatically filled in. A QR code (redacted) automatically links this form and other documentation, such as photos during surgery, to the correct dynamic anatomic address.
  • augmented documentation workflows use computer vision to automatically detect, categorize, digitize, and place handwritten markup, apply it to the map, and attach it to the correct dynamic anatomic address in the correct position in the healthcare and encounter timeline.
  • Anatomic site specific and procedure specific alerts and checklists are shown, with this one being related to the Mohs surgery on the nose.
  • Electronic markup and form filling can also be done. This achieves seamless blending of paper and digital workflows related to surgical documentation, and the QR code provides quick access to add photos to the correct dynamic anatomic address from any device, even one that is not logged in.
  • a consent form 301 and a surgical whiteboard 302 that can be printed, modified, signed, or marked up, and processed through the system of this invention, to automatically update and file the electronic records.
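The hand-colored distribution detection described for FIGs. 8-9 above can be sketched as classifying each shaded pixel against a reference palette and reporting per-diagnosis area fractions. Everything here (the palette, the diagnosis labels, the nearest-color rule) is an illustrative assumption, not the method claimed in the specification:

```python
# Illustrative sketch only: classify shaded pixels by nearest reference
# color and report the fraction of the map covered by each diagnosis.

# Hypothetical reference palette: ink color (r, g, b) -> diagnosis label.
PALETTE = {
    (220, 40, 40): "psoriasis",   # red shading
    (40, 90, 220): "eczema",      # blue shading
}
WHITE = (255, 255, 255)           # unshaded paper

def nearest(color, candidates):
    """Return the candidate color closest to `color` (squared RGB distance)."""
    return min(candidates, key=lambda c: sum((a - b) ** 2 for a, b in zip(color, c)))

def distribution(pixels):
    """pixels: 2D list of (r, g, b) tuples. Returns {diagnosis: area fraction}."""
    counts, total = {}, 0
    for row in pixels:
        for px in row:
            total += 1
            ref = nearest(px, list(PALETTE) + [WHITE])
            if ref != WHITE:
                counts[PALETTE[ref]] = counts.get(PALETTE[ref], 0) + 1
    return {dx: n / total for dx, n in counts.items()}
```

On a toy 2x2 "scan" with one reddish, one bluish, and two near-white pixels, each diagnosis would be reported as covering 25% of the area, which mirrors the surface-area calculation contemplated for FIG. 9.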

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The present invention includes a method that applies computer vision to detect annotation, coloring, and markup performed on paper forms that may include diagrams, typed language, fields, codes such as QR codes, orientation markers, images, workflow initiators (such as checkboxes), form information such as a version number, and language indicators, all of which are also detected. The invention automatically detects, categorizes, aligns, and converts the detections to digital annotation, coloring, markup, and coordinates. The detected annotation, coloring (including intensity, shades, and patterns), and markup include labels, pins, areas, regions, characters, symbols, shapes, drawings, text, handwriting, and codes such as QR codes. The detections are all digitized and refined in alignment with coordinate normalization. Workflow initiators like checkboxes, when checked, may trigger a digital event like a refill on a medication. An electronic record may be started or appended with digital information detected, aligned, and placed from the paper form, and the electronic record can then be modified or augmented with additional details. The application of digital information extracted from the paper form can be as simple as overlaying the detections on a digital copy of the form, or as complicated as applying neural networks for dropping pins, distribution segments, and health data onto multidimensional anatomic maps to automatically document anatomic locations, anatomic distributions, medical procedures, and diagnoses with automatic application of the documentation to calculate the correct code sets based on country and language. The digital documentation can then be modified or augmented with additional details, photos, and attachments linked directly to the anatomic site, diagnostic, or other record elements.
In one such iteration, photographs are added to biopsy sites that are already mapped, labeled with correct anatomic site descriptions, and ordered with a diagnosis in place from markup on a paper map. This process is called "Augmented Documentation."

Description

UTILITY PATENT APPLICATION
CONFIDENTIAL INFORMATION
Applicant: MoFalP, LLC
Address: 4327 Pine Ridge Circle, Monclova, OH 43542, USA
Title: Augmented documentation through computer vision detection, alignment, and application of paper markup to digital maps and forms
First Named Inventor: Matthew A. Molenda
Attorney: McCarthy, Lebit, Crystal & Liftman, Co. L.P.A.
Customer No.: 113863
Attorney Docket No.: AML.P027.PCT
RELATED APPLICATIONS
[0001] This application claims priority from U.S. Provisional Patent Application Serial No. 63/265,216, filed December 10, 2021, which is hereby incorporated by reference in its entirety for all purposes.
FIELD OF THE INVENTION
[0002] This invention relates to converting paper documentation to digital documentation, and more particularly, to utilizing combined technology to more accurately and effectively capture handwritten and paper records and convert them to digital form and augmenting electronic records with the digitally converted information.
SUMMARY OF THE INVENTION
[0003] Currently, documentation in healthcare can be on paper records, electronic records, or often both. Many records systems do not have a way to automatically document anatomic site names, and the ones that do may have three-dimensional models that require rotation and manipulation, or multiple screens to click through, thus creating challenges with documentation efficiency. Associating the correct diagnosis, treatment, or plan with the correct anatomic site label also requires medical knowledge and knowledge of how to navigate the electronic health record input system. Medical documentation is easier, more time-efficient, and requires fewer human training hours when performed on paper, but having related documentation in different formats makes having a consolidated record difficult. Additionally, traditional methods of converting from one form to the other lose valuable characteristics and features of the record type or require dual entry and transcription from paper to electronic records. For example, scanning a paper record into digital format does not necessarily create a digital file with the same degree of functionality and information as one originally created as a digital file. However, it is not always convenient, possible, or preferred to create a digital record, particularly when treating patients. Internet connectivity issues and server issues also arise, necessitating backup paper charts. Furthermore, many workflows and industries still rely on paper documentation and markup on diagrams, with an exemplar workflow being Mohs micrographic surgery, which almost universally documents Mohs maps on printed diagrams.
By creating a system capable of accurately detecting the handwritten annotations, coloring, and markup on a paper form and categorizing, aligning and converting them to digital information, an electronic record that is already correctly input from the paper form can be further augmented with photos, attachments, links, and additional electronic information.
[0004] Optical character recognition, computer vision (CV), and handwriting recognition technologies are well established. The present invention combines these technologies to apply detections to automatically create a digital medical record that precisely aligns detected data to digital forms and multidimensional anatomic maps. The present invention creates interactive health data points on digital forms that contain not only the detections, but meaningful health data such as diagnostic and procedural information that becomes interactive.
[0005] The present invention includes a method that applies computer vision to detect annotation, coloring, and markup performed on paper forms that may include diagrams, typed language, fields, codes such as QR codes, orientation markers, images, workflow initiators (such as checkboxes), form information such as a version number, and language indicators, all of which are also detected. The invention automatically detects, categorizes, aligns, and converts the detections to digital annotation, coloring, markup, and coordinates. The detected annotation, coloring (including intensity, shades, and patterns), and markup include labels, pins, areas, regions, characters, symbols, shapes, drawings, text, handwriting, and codes such as QR codes. The detections are all digitized and refined in alignment with coordinate normalization. Workflow initiators like checkboxes, when checked, may trigger a digital event like a refill on a medication.
An electronic record may be started or appended with information detected, aligned, and placed from the paper form, and the electronic record can then be modified or augmented with additional details. In one such iteration, photographs are added to biopsy sites that are already mapped, labeled with correct anatomic site descriptions, and ordered with a diagnosis in place from markup on a paper map. This process is called "Augmented Documentation." The digital copy of the paper form can be as simple as overlaying the detections on a digital copy, or as complicated as applying neural networks for dropping pins, distribution segments, and health data onto multidimensional anatomic maps to automatically document anatomic locations, anatomic distributions, medical procedures, and diagnoses with automatic application of the documentation to calculate the correct code sets based on country and language. The digital documentation can then be modified or augmented with additional details, photos, and attachments linked directly to the anatomic site, diagnostic, or other record elements.
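A checkbox-style workflow initiator as described in paragraph [0005] could be evaluated roughly as sketched below; the fill threshold, grayscale encoding, and function names are assumptions chosen for illustration, not details taken from the specification:

```python
# Illustrative sketch (assumed details): a checkbox "workflow initiator"
# fires a digital event, such as a medication refill, when enough of its
# detected region of interest is inked.

FILL_THRESHOLD = 0.15   # assumed: >= 15% dark pixels counts as "checked"

def is_checked(roi, dark_level=128):
    """roi: 2D list of grayscale values (0 = black ink, 255 = white paper)."""
    dark = sum(1 for row in roi for v in row if v < dark_level)
    total = sum(len(row) for row in roi)
    return dark / total >= FILL_THRESHOLD

def process_initiator(roi, event):
    """Return the digital event to trigger, or None if the box is unchecked."""
    return event if is_checked(roi) else None
```

A half-inked box would trigger the event, while a blank box would not; real detection would of course operate on the rectified scan produced by the computer vision stage.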
[0006] It is contemplated that this system has application in other industries as well. For example, an architect meeting with a client could annotate a blueprint with the client's requested changes and later convert those notes into a digital record that could even be manipulatable in electronic form to directly edit the blueprint. Another application could be a civil engineer assessing infrastructure for deterioration and repairs in the field; by taking notes and marking up representative maps or photographs of the site, the engineer would be able to create an electronic record and later update that record when repairs are made or to continue to track deterioration. Still other benefits and advantages of the invention will become apparent to those skilled in the art to which it pertains upon a reading and understanding of the following detailed specification.
BRIEF DESCRIPTION OF DRAWINGS
[0007] FIG. 1 is a flowchart of the system of the present invention.
[0008] FIG. 2 is a representative annotated paper record with handwritten markings.
[0009] FIG. 3 illustrates a digital interpretation of only the handwritten markings of FIG. 2.
[0010] FIG. 4 is the generated electronic version of the paper record in FIG. 2.
[0011] FIG. 5 is a representative anatomic map in paper form with handwritten annotations.
[0012] FIG. 6 illustrates the digital interpretation of the handwritten markings overlaid on the digital photo of the paper form in FIG. 5.
[0013] FIG. 7 is an electronic record generated from the paper form in FIG. 5 with automatic documentation and mapping of correct procedures, diagnoses, anatomic sites, notes, patient demographics, and billing codes.
[0014] FIG. 8 is an alternative representative anatomic map in Chinese in paper form with handcolored anatomic distributions.
[0015] FIG. 9 is the generated electronic version of the paper record in FIG. 8 with detected color, area, intensity, and distribution in Chinese.
[0016] FIG. 10 is the generated electronic version depicted in FIG. 9 translated to English.
[0017] FIG. 11 depicts an exemplar automatically generated Mohs map that contains pre-filled documentation, blanks, and an anatomic diagram.
DETAILED DESCRIPTION OF DRAWINGS
[0018] Referring now to the figures, FIG. 1 depicts a flowchart of the augmented document system 400 of the present invention wherein a printed form 401 and the user markup 402 are converted into digital interactive forms, data, and maps 409 and ultimately augmented documentation 410. A printed form 401 can include diagrams, typed language, fields, codes such as QR codes or bar codes, orientation markers, images, workflow initiators (such as checkboxes), form information such as a version number, language indicators, labels such as those associated with images, or any other contemplated information depicted in paper form. User markup 402 can include annotation, coloring (including intensity, shades, and patterns), and markup (such as labels, pins, areas, regions, characters, symbols, shapes, drawings, text, handwriting, and codes such as QR codes and bar codes placed on with a sticker, such as a patient label as one exemplar). The system takes in the user markup 402 from the printed form 401 through image capture 403 or alternately directly to a detection processor 405. In one embodiment, using an electronic pen with coordinate detection on a specialized form to read the user markup 402 would be directly interpreted by a detection processor 405. In another embodiment, taking a digital photograph or scanning the paper form to capture the user markup 402 would be using image capture 403.
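The FIG. 1 flow described in paragraph [0018] can be sketched as a chain of small functions. All names, data shapes, and the letter-size pixel dimensions below are hypothetical scaffolding that mirrors the flowchart's numbered stages, not a published API:

```python
# Minimal runnable sketch of the FIG. 1 pipeline (hypothetical names;
# the numbers 401-410 mirror the flowchart stages).

def image_capture(photo):                        # 403: photo or scan
    return {"image": photo}

def computer_vision(capture):                    # 404: detect markup + form info
    return {"rois": capture["image"].get("marks", []),
            "form_version": capture["image"].get("version")}

def detection_processor(det, form):              # 405: categorize against the form
    det["form"] = form
    return det

def axis_normalization(det, size=(850, 1100)):   # 406: pixels -> 0..1 coordinates
    w, h = size                                  # assumed letter-size pixel grid
    det["rois"] = [(x / w, y / h, label) for x, y, label in det["rois"]]
    return det

def to_digital_record(det):                      # 408-409: interactive form/map
    return {"form": det["form"], "pins": det["rois"], "attachments": []}

def augment(record, attachment):                 # 410: augmented documentation
    record["attachments"].append(attachment)
    return record
```

As a usage sketch, a photographed mark at pixel (425, 550) on an anatomic map would normalize to the pin (0.5, 0.5, "biopsy"), and attaching a clinical photograph afterward models the augmentation step.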
[0019] The image capture 403 would be processed by computer vision 404. In this step, the captured image is automatically rotated, cropped, perspective warped, and rid of any artifacts
(e.g., shadows on an image taken with a photo camera). Computer vision 404 also detects form-determined information such as diagrams, typed language (e.g., forms that are already populated with name, date of birth, and demographic information), filled-in fields, empty fields, codes such as QR codes, orientation markers, images, workflow initiators (e.g., checkboxes), form information such as a version number, language indicators, and labels such as those associated with images (e.g., laterality labels in one form version that contains anatomic maps). Computer vision 404 would further detect user markup 402 and its coordinates, colors, intensities, and properties. The detections, or regions of interest (ROIs), determined in the computer vision 404 stage move on to the detection processor 405. Here the ROI information is categorized, organized, grouped, collated, and refined based on the context and language of the printed form
401 (for example, who the user is and what their preferences are, or what the form's specialization is, for example dermatology versus dentistry). In the present embodiment, orientation markers on the printed form 401 serve as defined axis points for the form, which allow for axis normalization 406. This allows the system to automatically account for size variation in paper forms as well as different printable margins and zoom settings, ultimately eliminating user and printer errors in the printing process. In one embodiment, the corners of the form may contain hash marks that not only serve as orientation markers, but also serve as coordinate definitions used for axis normalization 406. The ROI information is then processed for alignment refinement
407 where detected borders of form-printed content, such as a line art drawing, are used to even more precisely place the detections into the correct locations in forms and in particular on digital maps, diagrams, and images.
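As a concrete illustration of how corner-based axis normalization 406 can eliminate print-scale and camera-resolution variation, consider the following minimal sketch. It is not the patented implementation; it assumes the capture has already been rotated and perspective-corrected by computer vision 404, so that the hash marks at two opposite corners define an axis-aligned coordinate frame.

```python
def normalize_point(point, top_left, bottom_right):
    """Map a pixel coordinate to form-relative coordinates in [0, 1].

    Assumes the captured image has already been rotated and
    perspective-corrected, so the corner hash marks define an
    axis-aligned frame.
    """
    x0, y0 = top_left
    x1, y1 = bottom_right
    return ((point[0] - x0) / (x1 - x0), (point[1] - y0) / (y1 - y0))


def to_form_coords(point, top_left, bottom_right, form_w_mm, form_h_mm):
    """Convert a detected markup pixel to physical form coordinates (mm),
    removing print-scale, margin, and camera-resolution variation."""
    nx, ny = normalize_point(point, top_left, bottom_right)
    return (nx * form_w_mm, ny * form_h_mm)


# A pin detected at pixel (550, 800) on a capture whose corner marks were
# found at (100, 100) and (1000, 1500), printed on a form whose marks are
# 180 mm x 280 mm apart:
pin_mm = to_form_coords((550, 800), (100, 100), (1000, 1500), 180.0, 280.0)
print(pin_mm)  # (90.0, 140.0)
```

The same normalized coordinates place correctly regardless of whether the form was printed at 100% or shrunk to fit a printer's margins, which is the stated purpose of the corner marks.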
[0020] Once the ROI information has been normalized and refined, detection placement 408 on the digital interactive forms, maps, records, and applications 409 occurs. The ROI information is reconciled and placed into the correct spots on a digital version of the printed form or record. In one embodiment, the digital information from the paper form may be applied to multiple digital forms and records simultaneously, thus also creating a propagation point for new paper forms, such as a Mohs surgery form for a patient needing a second surgical layer as one exemplar. To restate, the digital information allows for propagation and auto-fill of new forms, or for opening the correct area of an application automatically. In one embodiment, the digital interactive forms, maps, and applications 409 can automatically open the correct patient chart in an electronic health record, document the detections into various forms and maps within the application, and be ready for interactivity and additional augmented input from the user. These processes all combine into augmented documentation 410 workflows. In one embodiment, a pin is automatically placed on a multi-dimensional anatomy map to represent a procedure, such as a shave biopsy, and has the correct diagnosis, procedure type, order in a list, procedure description, category, color, map placement, anatomic description, billing code, patient information, and visual preview, and the application is ready to associate clinical photographs with the pin, or the application is ready to print a new form from the digital material, like a pathology requisition form. In another embodiment, a paper Mohs map is used in micrographic dermatologic surgery. Multiple paper maps are used to mark areas of removed tissue for skin cancer removal, and photographs are taken before, during, and after the surgery.
The paper Mohs maps processed in the system become a digital interactive form or map 409 capable of accepting the digital photos and creating augmented documentation of the procedure.
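The categorization and placement steps described above (stages 405 through 408) might be organized around a simple record structure. The sketch below uses hypothetical names and a deliberately simplified schema — real detections would carry far more context, such as diagnosis, billing code, and map version — but it shows how detected ROIs can be reconciled into a digital interactive record, with pins keeping the order that downstream steps rely on.

```python
from dataclasses import dataclass, field

@dataclass
class ROI:
    category: str      # e.g. "pin", "checkbox", "text", "qr_code"
    x: float           # normalized form coordinates in [0, 1]
    y: float
    value: str = ""    # recognized text, code payload, or color name

@dataclass
class DigitalRecord:
    form_version: str
    language: str
    pins: list = field(default_factory=list)
    fields: dict = field(default_factory=dict)

def place_detections(rois, form_version, language):
    """Reconcile detected ROIs into a digital interactive record (sketch)."""
    record = DigitalRecord(form_version, language)
    for roi in rois:
        if roi.category == "pin":
            # Pins keep their order, which downstream steps use to associate
            # procedures, diagnoses, and billing codes with the right entry.
            record.pins.append((len(record.pins) + 1, roi.x, roi.y, roi.value))
        elif roi.category == "text":
            key, _, val = roi.value.partition(":")
            record.fields[key.strip()] = val.strip()
    return record

rois = [ROI("text", 0.10, 0.05, "Name: Jane Doe"),
        ROI("pin", 0.42, 0.31, "shave biopsy"),
        ROI("pin", 0.60, 0.55, "punch biopsy")]
rec = place_detections(rois, "v3-dermatology", "en")
print(rec.fields["Name"])   # Jane Doe
print(len(rec.pins))        # 2
```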
[0021] FIG. 2 depicts an exemplar image capture 403 (photo) of a printed form 401 with multiple handwritten annotations as the user markup 402. User markup 402 may include different colors representing shapes, characters (alone or clustered), labels, arrows, pinpoints, pin orders, shading, coloring, and other markup. Using image capture 403 or detection processor 405, the system detects the user markup 402. FIG. 3 is a representation of the detected user markup 402 from the printed form in FIG. 2. FIG. 4 depicts the digital interactive map 409 generated after the user markup is processed through the system. The user markup has been converted to digital markup 411 with precise coordinates related to the document corners, refined by the form-printed images, and placed onto a digital version of the printed form, here a digital interactive map 409.
[0022] FIG. 5 is another exemplar of an image capture 403 (photo) of a printed form 401 depicting an anatomic map with handwritten annotations as user markup 402. In this embodiment, an English anatomic map is printed on letter-sized paper. It is noted that the image capture 403 in this exemplar is at an angle and has a distorted perspective, and it is contemplated that computer vision 404 will rotate, crop, and perspective warp this image. Using computer vision 404, the handwritten annotations are converted to digital markup and categorized. User markup 402 includes patient demographic information like the patient's name and date of birth, which was handwritten in. (These paper maps can be preprinted with patient demographics, which can also be detected; not shown.)
[0023] FIG. 6 shows the digital markup 411 overlaid on the image capture 403 of the printed form 401. The digital detections are appropriately and automatically applied to the correct diagnostic, procedural, map, diagram, drawing, coordinate, and demographic input areas of the application, applying the extracted digital documentation to the session, which syncs to other session workflows, such as automatic billing code calculation, diagnosis categorization, and more.
The map contains orientation corners 413 and a QR code 412 containing map information, including the map language, which is used by computer vision as the default interpretation language unless otherwise specified by the user. Paper size and orientation corners 413 are detected for automatic map alignment, rotation, cropping, perspective warping, detection, and other processing. Alignments are simultaneously refined further against the map version that was automatically detected.
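The QR code 412 carries form-determined context such as the map version and the default interpretation language. The payload format sketched below is an assumption made for illustration only — this description does not specify the actual encoding — but it shows how such context could be recovered from a decoded QR string.

```python
import json

def parse_map_qr(payload: str) -> dict:
    """Parse a (hypothetical) JSON QR payload printed on the form.

    The payload supplies form-determined context: map version, paper
    size, and the default interpretation language for computer vision,
    used unless the user specifies otherwise.
    """
    info = json.loads(payload)
    return {
        "version": info.get("v", "unknown"),
        "language": info.get("lang", "en"),   # default interpretation language
        "paper_size": info.get("paper", "letter"),
    }

qr = '{"v": "anatomic-map-12", "lang": "zh", "paper": "A4"}'
print(parse_map_qr(qr)["language"])  # zh
```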
[0024] In FIG. 7, this information is automatically applied to anatomic visualization 10, or digital map, with automatic documentation 414 of correct procedures, diagnoses, anatomic sites, notes, patient demographics, and billing codes. The documentation can now be modified, enhanced, or augmented by attaching photos, attachments, forms and other data directly to the dynamic anatomic addresses, thus creating augmented documentation. It is contemplated that additional forms can be propagated from the electronic record.
[0025] Augmented documentation allows for attaching photos, attachments, forms and other data directly to the dynamic anatomic addresses. Additionally, data blocks can be changed, modified, rearranged, or added. Other workflows, such as label printing with isolated visual previews and other dynamic anatomic address information, become instantly available. Furthermore, the session, data, and visualizations are still translatable to any coded, linguistic, or symbolic language.
[0026] FIGs. 8-10 depict another exemplar conversion of printed form 401 to digital information.
The printed form 401 in FIG. 8 is a Chinese version of an anatomic map. Again, user markup 402 is depicted, this time depicting distributions of anatomy for different diagnoses represented by different manually shaded in colors on the paper form 401. FIG. 9 is the generated electronic record of the paper form with detected color, area, intensity, and anatomic distribution in
Chinese. In this exemplar, each color represents a diagnosis, and the anatomic distribution is reported automatically along with the diagnosis. The exemplar depicts automatic conversion of detections to a digital map, including appropriate selection and coloring of hierarchical anatomic site components of dynamic anatomic addresses, visualizations, diagnostic categories, and anatomic groupings. Additionally, it is contemplated that surface area calculations, intensities, and overlaps are detected and applied. It is further contemplated that augmented documentation allows for attaching photos, attachments, forms, and other data directly to the documented dynamic anatomic addresses. Additionally, data blocks and the diagnosis can be changed, modified, rearranged, or added.
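One way to realize the contemplated color, area, and intensity detection is per-pixel classification against the known shading palette. The following is a minimal sketch under that assumption; the palette entries, diagnosis names, and the premise that background pixels have already been masked out are all hypothetical, introduced only for illustration.

```python
def nearest_color(pixel, palette):
    """Classify an RGB pixel to the closest palette entry (squared distance)."""
    return min(palette, key=lambda name: sum((p - q) ** 2 for p, q in zip(pixel, palette[name])))

def color_distributions(pixels, palette, diagnoses):
    """Report relative shaded area and mean intensity per diagnosis color.

    `pixels` is an iterable of (r, g, b) tuples taken from the shaded
    regions of the anatomic map; background pixels are assumed to have
    been excluded beforehand.
    """
    counts, intensity = {}, {}
    for px in pixels:
        name = nearest_color(px, palette)
        counts[name] = counts.get(name, 0) + 1
        intensity[name] = intensity.get(name, 0) + sum(px) / 3
    total = sum(counts.values())
    return {diagnoses[name]: {"area_fraction": counts[name] / total,
                              "mean_intensity": intensity[name] / counts[name]}
            for name in counts}

palette = {"red": (220, 40, 40), "blue": (40, 60, 220)}
diagnoses = {"red": "psoriasis", "blue": "eczema"}
shaded = [(210, 50, 45)] * 3 + [(50, 70, 210)]
report = color_distributions(shaded, palette, diagnoses)
print(round(report["psoriasis"]["area_fraction"], 2))  # 0.75
```

Overlap detection would require keeping pixel coordinates rather than bare counts, but the per-color area and intensity summaries above mirror what the generated electronic record in FIG. 9 reports.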
[0027] FIG. 10 shows the automatic English translation of the generated electronic version in FIG. 9. It is contemplated that paper form markup can occur in one language, and that the digital information can automatically be translated and applied to an electronic record in another language.
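Because dynamic anatomic addresses are coded rather than stored as free text, translating a generated record can reduce to re-rendering the same codes against a different label table. The sketch below illustrates that idea with a hypothetical two-language table; the codes and terms shown are invented for illustration and are not the actual anatomy library.

```python
# Language-neutral codes render in any language via a label table lookup.
LABELS = {
    "zh": {"site.nose.ala.left": "左鼻翼", "dx.bcc": "基底细胞癌"},
    "en": {"site.nose.ala.left": "left nasal ala", "dx.bcc": "basal cell carcinoma"},
}

def render_record(record, language):
    """Render a coded record's labels in the requested language.

    Unknown codes fall back to the code itself rather than failing.
    """
    table = LABELS[language]
    return {field: table.get(code, code) for field, code in record.items()}

record = {"site": "site.nose.ala.left", "diagnosis": "dx.bcc"}
print(render_record(record, "en")["site"])  # left nasal ala
```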
[0028] FIG. 11 is an exemplar of an automatically generated paper form 300. In the present embodiment, the form is a Mohs map used in micrographic dermatologic surgery. The form 300 contains a diagram, map, alerts, and country specific information all generated from the dynamic anatomy library and the non-anatomy data. Patient demographics, encounter demographics, information from the pathology report, diagnosis information, and diagnosis extensions are automatically filled in. A QR code (redacted) automatically links this form and other documentation, such as photos during surgery, to the correct dynamic anatomic address.
[0029] If this form is printed, augmented documentation workflows use computer vision to automatically detect, categorize, digitize, and place handwritten markup, apply it to the map, and attach it to the correct dynamic anatomic address in the correct position in the healthcare and encounter timeline. Anatomic site specific and procedure specific alerts and checklists are shown, with this one being related to the Mohs surgery on the nose. Electronic markup and form filling can also be done. This achieves seamless blending of paper and digital workflows related to surgical documentation, and the QR code provides quick access to add photos to the correct dynamic anatomic address from any device, even one that is not logged in. Also depicted are a consent form 301 and a surgical whiteboard 302 that can be printed, modified, signed, or marked up, and processed through the system of this invention, to automatically update and file the electronic records.
[0030] The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive nor are they intended to limit the invention to the precise forms disclosed and, obviously, many modifications and variations are possible in light of the above teaching. The embodiments are chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and its various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined broadly by the drawings and specification appended hereto and by their equivalents. Therefore, no adverse inference under the rulings of Warner-Jenkinson Company v. Hilton Davis Chemical Co., 520 U.S. 17 (1997), or Festo Corp. v. Shoketsu Kinzoku Kogyo Kabushiki Co., 535 U.S. 722 (2002), or other similar caselaw or subsequent precedent, should be made if any future claims are added or amended subsequent to this patent application.


CLAIMS
What is claimed is:
1. A method of creating an improved electronic healthcare record, said method comprising:
converting a paper form to digital images through an image capture device;
receiving the digital images on a computing device;
employing computer vision on the computing device to analyze the digital images to detect information on the paper form;
detecting healthcare record information on the paper form to convert it to digital information;
detecting non-medical information on the paper form to capture context and language characteristics of the paper form;
categorizing the converted healthcare record information according to the captured context and language characteristics;
organizing the categorized healthcare record information according to the captured context and language characteristics;
processing the organized information to refine alignment and determine precise locations for the organized information;
placing the processed information on a digital interactive record representative of the paper record to create a digital version of the paper record; and
displaying the digital interactive record.
2. The method of claim 1, wherein the image capture device directly analyzes the digital images to detect information and places the detected information on a digital interactive record representative of the paper record.
3. The method of claim 1, wherein the paper form has orientation markers to serve as defined axis points for the form and allows for axis normalization.
4. The method of claim 1, wherein the paper form has a QR code or detectable text containing information to define the context and language characteristics of the paper form.
5. The method of claim 1, further comprising augmenting the digital interactive record with additional data.
6. The method of claim 5, wherein the additional data is attached directly to a dynamic anatomic address and becomes instantly available to users.
7. The method of any one of the preceding claims, wherein the information is translatable to any coded, linguistic, or symbolic language.
8. A computerized electronic healthcare record management system for improved consolidation of medical data from varying types of healthcare records, the system configured to:
convert, through an image capture device, at least one paper form with markings, wherein the markings represent healthcare record information;
receive images of the paper form;
interpret the received images using computer vision, wherein the markings on the paper form are digitized to an electronic form;
using the digitized data, create or append to a digital interactive record representative of the paper record to create or append to a digital version of the paper record;
augment the digital interactive record with additional information, wherein the additional information is already digital; and
display the augmented record on a graphical user interface, wherein the resulting augmented record is a combination of records with different original formats.
9. The system of claim 8, wherein the image capture device directly analyzes the digital images to detect information and places the detected information on a digital interactive record representative of the paper record.
10. The system of claim 8, wherein the additional information is attached directly to a dynamic anatomic address and becomes instantly available to users.
PCT/IB2022/000793 2021-12-10 2022-12-12 Augmented documentation through computer vision detection, alignment, and application of paper markup to digital maps and forms WO2023166328A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB2408005.3A GB2627661A (en) 2022-03-01 2022-12-12 Augmented documentation through computer vision detection, alignment, and application of paper markup to digital maps and forms
US18/225,872 US20230368878A1 (en) 2021-12-10 2023-07-25 Systems and methods using multidimensional language and vision models and maps to categorize, describe, coordinate, and track anatomy and health data
PCT/IB2023/000535 WO2024023584A2 (en) 2022-07-26 2023-07-25 Systems and methods using multidimensional language and vision models and maps to categorize, describe, coordinate, and track anatomy and health data

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US202263315289P 2022-03-01 2022-03-01
US63/315,289 2022-03-01
US202263269516P 2022-03-17 2022-03-17
US63/269,516 2022-03-17
US202263362791P 2022-04-11 2022-04-11
US63/362,791 2022-04-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/225,872 Continuation-In-Part US20230368878A1 (en) 2021-12-10 2023-07-25 Systems and methods using multidimensional language and vision models and maps to categorize, describe, coordinate, and track anatomy and health data

Publications (2)

Publication Number Publication Date
WO2023166328A2 true WO2023166328A2 (en) 2023-09-07
WO2023166328A3 WO2023166328A3 (en) 2023-11-09

Family

ID=87883134

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/000793 WO2023166328A2 (en) 2021-12-10 2022-12-12 Augmented documentation through computer vision detection, alignment, and application of paper markup to digital maps and forms

Country Status (2)

Country Link
GB (1) GB2627661A (en)
WO (1) WO2023166328A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118230333A (en) * 2024-05-23 2024-06-21 安徽安天利信工程管理股份有限公司 OCR technology-based image preprocessing system



Also Published As

Publication number Publication date
GB202408005D0 (en) 2024-07-17
WO2023166328A3 (en) 2023-11-09
GB2627661A (en) 2024-08-28


Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 22929679; Country of ref document: EP; Kind code of ref document: A2
ENP  Entry into the national phase
     Ref document number: 202408005; Country of ref document: GB; Kind code of ref document: A
     Free format text: PCT FILING DATE = 20221212