WO2023166329A2 - Coordinated visualization and translation from uncoordinated and mixed descriptions of anatomy - Google Patents


Info

Publication number
WO2023166329A2
Authority
WO
WIPO (PCT)
Prior art keywords
anatomic
input
inputs
linguistic
mixed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2022/000813
Other languages
French (fr)
Other versions
WO2023166329A3 (en)
Inventor
Matthew A. MOLENDA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mofaip LLC
Original Assignee
Mofaip LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mofaip LLC filed Critical Mofaip LLC
Priority to GB2407999.8A priority Critical patent/GB2627660A/en
Priority to PCT/IB2023/000535 priority patent/WO2024023584A2/en
Priority to EP23845770.9A priority patent/EP4562645A4/en
Priority to US18/225,872 priority patent/US20230368878A1/en
Publication of WO2023166329A2 publication Critical patent/WO2023166329A2/en
Publication of WO2023166329A3 publication Critical patent/WO2023166329A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G16H70/00 ICT specially adapted for the handling or processing of medical references
    • G16H70/60 ICT specially adapted for the handling or processing of medical references relating to pathologies

Definitions

  • The user can verbalize “left mano” and will receive relevant visualizations and coordinated maps 226, translations 228, records, and visual definitions 230 related to the “left hand,” and simultaneously receive the translation 228 “mano izquierda.”
  • Relevant visualizations and coordinated maps 226, translations 228, records, and visual definitions 230 related to the nose may be found by using a nose emoji in the code string 222.
  • A code string 222 incorporating “left” and a nose emoji would deliver visualizations 226, translations 228, and visual definitions 230 related to the “left nose,” whereas searching for just a nose emoji would deliver visualizations 226, translations 228, and visual definitions 230 related to the entire nose.
  • The present invention enables medical records to be searched for uncoordinated anatomic site descriptors, dissection of those descriptors, and delivery of visualizations, maps, avatars, and records associated with the anatomy of interest.
  • FIG. 4 depicts anatomic visualizations 10 in which, moving from left to right, each visualization progressively sub-segments the previous one until the rightmost visualization achieves pinpoint precision for the dynamic anatomic address: a reproducible location across multiple images, maps, and diagrams.
  • Above each visualization 10 are the English linguistic anatomic site descriptions 120 of the progressively sub-segmented anatomic sites.
  • Each subsequent diagram adds enhanced modification language and simultaneous visualization. It is contemplated that the same sub-segmentation could be applied to a diagram as shown or a patient photo, avatar, video, live camera for augmented reality, virtual reality avatar, or other multimedia.
  • Anatomic site descriptions 120 could be linguistic, symbolic, coded, mathematical, or some combination of descriptors.
  • An enhanced modifier for sequence sensitivity is turned on, causing the term “lateral” to be shown before “superior” in the anatomic site descriptions 120.
  • Sequence insensitivity would visualize the entire upper right aspect of the highlighted area (combining the last two rightmost visualizations 10 in this figure). Keeping sequence sensitivity on, visualization of the last rightmost diagram color-codes the description for the {lower medial aspect of} left (superior lateral) paramedian forehead to pinpoint precision.
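The sequence-sensitive, progressive sub-segmentation illustrated in FIG. 4 can be sketched as repeated halving of a region. In the minimal sketch below, the anatomic site is treated as a unit square, and each directional modifier, applied in order, halves the current region; the axis conventions (x running medial to lateral, y inferior to superior) and the function name are assumptions for illustration, not the invention's actual coordinate model.

```python
def subsegment(region, modifiers):
    """Apply directional modifiers in order; each halves the current region.

    region is (x0, y0, x1, y1) in a unit square where x runs medial (0)
    to lateral (1) and y runs inferior (0) to superior (1). These axis
    conventions are illustrative assumptions.
    """
    x0, y0, x1, y1 = region
    for m in modifiers:
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        if m == "lateral":
            x0 = xm  # keep the outer (lateral) half
        elif m == "medial":
            x1 = xm  # keep the inner (medial) half
        elif m == "superior":
            y0 = ym  # keep the upper half
        elif m == "inferior":
            y1 = ym  # keep the lower half
    return (x0, y0, x1, y1)
```

Because every modifier halves the remaining area, a short chain of ordered modifiers converges rapidly toward the pinpoint precision described above.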
  • The present invention is further capable of dissecting anatomic site descriptions into components in any language, categorizing those components, and translating them to other languages while simultaneously applying natural linguistic sequencing and providing visualizations, anatomic maps, and coded translations (shown in FIG. 3).
  • FIG. 5 depicts the anatomic site name to code translator 220 dissecting English 240 code strings 222 representative of anatomic site descriptions into data blocks, shown as chips, that are categorized into anatomic description components.
  • The human readable input 244 that describes an enhanced anatomic site and dynamic anatomic address is entered in the input box 224 and dissected and categorized.
  • FIG. 6 depicts the anatomic site code translator 220 translating an English description 244 of “left (inferior) lateral forehead” into dissected Spanish 242 data blocks shown in a different linguistic sequence from FIG. 5.
  • Natural linguistic sequencing is shown using a natural language processor, where the laterality category is listed after the anatomic site, along with other automatic sequence changes.
  • The input box 224 can accept code strings, any language (including written and spoken language), symbolic representations, or a mixture of these components and dissect them; categorize and organize them; sequence them; visualize them in all views (with additional enhanced visualization and segmentation); and target them as components of the dynamic anatomic address located in multidimensional anatomic maps and avatars.
  • Synonyms in any language can be detected, dissected, categorized, looked up, and visualized.
  • For example, “pinna” could be found by searching its synonym “ear”; the system would detect that the input is English and a synonym, and show the chips, records, and visualizations for “pinna.”
  • Synonym detection will recognize “belly button” or “navel” and return the relevant visualizations, translations, records, and visual definitions associated with the “umbilicus” chip.
  • In that case, the language is automatically detected as English, the “umbilicus” chip is loaded into the “anatomic site” field automatically, and the input is further translated in real time to any coded, linguistic, or symbolic language along with a real-time visual preview on standardized diagrams, 3D avatars, or imaging/multimedia that contain a visible “belly button.”
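The synonym handling described in the preceding examples might be sketched as a simple lookup that normalizes lay terms to canonical site chips. The table contents and function name below are illustrative assumptions, not the actual lexicon; a real system would span many languages, codes, and symbols.

```python
# Hypothetical synonym table; the mappings follow the examples in the text.
SYNONYMS = {
    "belly button": "umbilicus",
    "navel": "umbilicus",
    "back of ear": "posterior surface of pinna",
}

def canonical_site(term: str) -> str:
    """Resolve a lay synonym to its canonical anatomic site chip."""
    key = term.strip().lower()
    return SYNONYMS.get(key, key)
```

With such a table, either “belly button” or “navel” resolves to the same “umbilicus” chip before visualization and translation.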

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Machine Translation (AREA)

Abstract

The present invention includes a method for detecting and translating uncoordinated, mixed linguistic, coded, and symbolic anatomic site data into coordinates, axes, visualizations, maps, avatars, record results, and sequenced data. The mixed inputs may be text terms or verbal inputs in any linguistic language, or coded inputs such as ICD-11 codes for anatomy, or numerical codes corresponding to an anatomic site, region or other descriptive term, or symbolic inputs such as emojis. In an exemplar of symbolic code, a "nose" emoji categorizes nasal sites. The mixed inputs are automatically detected, categorized, and organized into fully sequenced translations, including synonyms, and into visualizations, records, and coordinates relative to anatomic sites. Linguistic inputs that are categorized can include identifiers, lateralities, prefixes, suffixes, anatomic site names, categories, modifier terms such as directional modifiers, custom descriptors, anatomic distributions, distribution segments, and synonyms for each of the preceding inputs. The present invention delivers language translations in a natural linguistic sequence for the language, with corresponding visualizations, including anatomic maps and avatars which themselves can include multidimensional, custom axes defined coordinate systems, and records related to a patient that have anatomic descriptions or images associated with them. The delivered anatomic maps and avatars can be overlaid or underlaid and aligned to other images that contain anatomy. Additionally, this invention includes progressive sub-segmentation of the mixed inputs to deliver human readable, accurate, and precise descriptions with corresponding visualizations and coordinates up to a pinpoint level, as well as coded descriptions.
Finally, the present invention includes axial mirroring to display the mixed-input-derived visualizations and anatomic maps, to show and interact with the visualizations and maps in both outside-observer view and in selfie-view.

Description

UTILITY PATENT APPLICATION
CONFIDENTIAL INFORMATION
Applicant: MoFaIP, LLC
Address: 4327 Pine Ridge Circle, Monclova, OH 43542, USA
Title: Coordinated visualization and translation from uncoordinated and mixed descriptions of anatomy
First Named Inventor: Matthew A. Molenda
Attorney: McCarthy, Lebit, Crystal & Liffman Co., L.P.A.
Customer No.: 113863
Attorney Docket No.: AML.P024.PCT
RELATED APPLICATIONS
[0001] This application claims priority from each of U.S. Provisional Patent Application Serial No. 63/265,216, filed December 10, 2021, and U.S. Provisional Patent Application Serial No. 63/369,717, filed July 28, 2022. Each of these applications is hereby incorporated by reference in its entirety for all purposes.
FIELD OF THE INVENTION
[0002] This invention relates to medical systems and, more particularly, to a superior system for creating enhanced anatomic descriptions, coordinated visualizations, and translations based upon uncoordinated and mixed linguistic, coded, and symbolic anatomic site data.
SUMMARY OF THE INVENTION
[0003] Standardized lexicons for anatomy are usually limited to a single language, or fewer than a handful of languages at best, and recent publicly available lexicons like the International Classification of Diseases (“ICD-11”) anatomy chapter simply list terms without visualizations. Existing anatomic references, including digital references, are unable to detect and translate mixed inputs automatically and in real time. Furthermore, existing anatomic references do not assign coordinates to anatomic sites derived from mixed inputs, are unable to apply progressive sub-segmentation to enhance visualization to near pinpoint precision, and do not supply anatomic maps relevant to the anatomy lookup. A practical application of the present invention would be to look up and apply precise visualizations and multidimensional anatomic maps to coded descriptions of anatomy.
[0004] In the present invention, visual definitions, relevant anatomic maps, visualizations, photos, and records relevant to a patient, and dynamic custom coordinates of anatomic addresses can be looked up by anatomic site name and components (such as laterality, prefixes, and suffixes) in any combination of linguistic language, code, or symbols, and progressively sub-segmented with directional and magnitude modifier terms in mixed code and language order (on which the present invention applies coded and linguistic dissection to provide precise visualizations and translations). Those definitions can be detected and visualized in different views, images, and multimedia automatically. Additionally, the mixed input delivers relevant multidimensional anatomic maps and avatars, with automatically targeted, enhanced, optionally progressively sub-segmented, and color-coded visualization. Furthermore, descriptions of anatomy extracted from medical records, in linguistic, coded, or symbolic form, can be used as mixed inputs to collate patient records for an anatomic area of interest while simultaneously visualizing the anatomic area of interest.
[0005] The present invention includes a method for detecting and translating uncoordinated, mixed linguistic, coded, and symbolic anatomic site data into coordinates, axes, visualizations, maps, avatars, record results, and sequenced data. The mixed inputs may be text terms or verbal inputs in any linguistic language, or coded inputs such as ICD-11 codes for anatomy, or numerical codes corresponding to an anatomic site, region or other descriptive term, or symbolic inputs such as emojis. In an exemplar of symbolic code, a “nose” emoji categorizes nasal sites. The mixed inputs are automatically detected, categorized, and organized into fully sequenced translations, including synonyms, and into visualizations and coordinates relative to anatomic sites. Linguistic inputs that are categorized can include identifiers, lateralities, prefixes, suffixes, anatomic site names, categories, modifier terms such as directional modifiers, custom descriptors, anatomic distributions, distribution segments, and synonyms for each of the preceding inputs. The present invention delivers language translations in a natural linguistic sequence for the language, with corresponding visualizations, including anatomic maps and avatars which themselves can include multidimensional, custom axes defined coordinate systems, and records related to a patient that have anatomic descriptions or images associated with them. The delivered anatomic maps and avatars can be overlaid or underlaid and aligned to other images that contain anatomy. Additionally, this invention includes progressive sub-segmentation of the mixed and uncoordinated inputs to deliver human readable, accurate, and precise descriptions with corresponding visualizations and coordinates up to a pinpoint level, as well as coded descriptions. 
Finally, the present invention includes axial mirroring to display the mixed-input-derived visualizations and anatomic maps, to show and interact with the visualizations and maps in both outside-observer view and in selfie-view.
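As an illustrative sketch of the detection and categorization step summarized above, the following Python fragment dissects a mixed linguistic, coded, and symbolic input into categorized components. The lexicon contents, category labels, and function name are assumptions for illustration drawn from the document's examples (e.g., the ICD-11 laterality code “XK8G” for “left”), not the actual implementation.

```python
# Hypothetical lexicon mapping terms in any language, codes, or emojis
# to a (category, canonical English form) pair. Illustrative only.
LEXICON = {
    "left": ("laterality", "left"),
    "izquierdo": ("laterality", "left"),
    "izquierda": ("laterality", "left"),
    "xk8g": ("laterality", "left"),   # ICD-11 extension code for "left"
    "ear": ("anatomic_site", "ear"),
    "pinna": ("anatomic_site", "pinna"),
    "👂": ("anatomic_site", "ear"),
    "lateral": ("modifier", "lateral"),
    "superior": ("modifier", "superior"),
}

def dissect(mixed_input: str) -> dict:
    """Split a mixed input into categorized chips, ignoring unknown tokens."""
    chips = {"laterality": [], "anatomic_site": [], "modifier": []}
    for token in mixed_input.replace("(", " ").replace(")", " ").split():
        entry = LEXICON.get(token.lower())
        if entry:
            category, canonical = entry
            chips[category].append(canonical)
    return chips
```

For example, dissecting the Spanish-plus-emoji input “izquierdo 👂” yields the same categorized chips as the English “left ear,” which is the language-agnostic behavior the summary describes.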
BRIEF DESCRIPTION OF DRAWINGS
[0006] FIG. 1 is a screenshot of an anatomic site name builder associated with a specific pin.
[0007] FIG. 2 is a flowchart of a system in accordance with an aspect of the present invention.
[0008] FIG. 3 is a screenshot of an anatomic site code translator.
[0009] FIG. 4 depicts progressive linguistic and visual sub-segmentation of a dynamic anatomic site.
[0010] FIG. 5 is a screenshot of an exemplar anatomic site name to code translator.
[0011] FIG. 6 is a screenshot of another exemplar anatomic site name to code translator.
DETAILED DESCRIPTION OF DRAWINGS
[0012] The present invention is a superior system to supply specific, relevant, enhanced, and translated anatomic site descriptions, coordinated visualizations, maps, and records based upon uncoordinated and mixed linguistic, coded, and symbolic anatomic site data. The system generates automatic, real-time, enhanced visualizations, coordinate assignments, avatars, maps, record delivery, and translations of uncoordinated and mixed linguistic, coded, and symbolic anatomy data inputs. It is contemplated that in a preferred embodiment the components of an anatomic site description or name are capable of being reordered.
[0013] Referring now to the figures, the system delivering an enhanced anatomic site description or name with natural linguistic sequencing automatically separates the anatomic site description or name components into laterality, prefixes, enhanced modifiers, suffixes, anatomic sites, anatomic distributions, distribution segments, custom descriptions, codes (which can be alphanumeric, numeric, or other codes), and symbolic groupings (such as emoji groups). FIG. 1 is a screenshot of an anatomic site name builder 110 generating a name for a particular anatomic site 18. The visible name components 114 include but are not limited to laterality, enhanced modifiers, prefixes, suffixes, automatic coded translations, and symbolic groupings (here depicted showing ICD-11 anatomy codes, Foundation ID codes, and Anatomy Mapper IDs). The visibility toggle 112 has been toggled for the ICD-11 codes in this example, the code displayed being “XA1Z38&XK8G (XK4H)”. In the present embodiment, select name components and crossmappings have been hidden with the visibility toggle 112. These hidden name components 116 can be made visible again at the user's discretion. The anatomic site name builder 110 displays linguistically dissected, reorderable, uncoordinated components of the selected hierarchy level’s site name delivered by the mapping engine from a visual input selecting an anatomic site 18 on a multidimensional anatomic map. Name components are automatically placed into digital “chips” in the appropriate category. The label reordering capability of the system allows for both reordering and deletion of dissected name components. In one embodiment, custom triangulation can be added.
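The chip model just described, with its visibility toggle and label reordering, might be sketched as follows. The class and method names are hypothetical illustrations of the behavior attributed to the name builder 110 and toggle 112, not the invention's actual data structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chip:
    category: str       # e.g. "laterality", "prefix", "anatomic_site", "code"
    text: str
    visible: bool = True

@dataclass
class SiteNameBuilder:
    chips: List[Chip] = field(default_factory=list)

    def toggle(self, category: str) -> None:
        # Visibility toggle (cf. toggle 112): hide or show all chips of a category.
        for chip in self.chips:
            if chip.category == category:
                chip.visible = not chip.visible

    def reorder(self, src: int, dst: int) -> None:
        # Label reordering: move a dissected name component to a new position.
        self.chips.insert(dst, self.chips.pop(src))

    def name(self) -> str:
        # Assemble the displayed site name from the visible chips, in order.
        return " ".join(chip.text for chip in self.chips if chip.visible)
```

Toggling the "code" category off, for instance, hides a coded translation chip from the assembled name while leaving it available to be shown again, mirroring the hidden name components 116.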
[0014] FIG. 2 illustrates a system that incorporates mixed inputs to generate various outputs 214. Inputs may be provided by a user as written inputs 200 or verbal inputs 201, or as extracted inputs 203 from an existing record, such as a detected text description that contains an anatomic site name synonym and a laterality. Extracted inputs may include linguistic input 202, coded input 204, or symbolic input 205. Linguistic input 202 may include linguistic descriptions of an anatomic site that include any of the following: identifiers, lateralities, prefixes, suffixes, anatomic site names, categories, modifier terms such as directional modifiers, custom descriptors, anatomic distributions, distribution segments, and synonyms for each of the preceding inputs. Linguistic input 202 is a mixed input category in that it may comprise written 200 or verbal 201 input. The linguistic input 202 is language agnostic and therefore can also be in a mixed language, such as a mixture of English, Chinese, and Spanish to describe different components of the anatomic site. Written input 200 is derived from text-based linguistic descriptions of an anatomic site, as defined above, including synonyms, that are typed, pasted, or extracted from handwriting or optical character recognition. Verbal input 201 is spoken live or via a recording, in any language. In one embodiment, linguistic input 202 is “ear” (anatomic site) or “left” (laterality). Such extractions may be in any linguistic language. Other mixed inputs include coded input 204 and symbolic input 205. Coded input 204 is numeric or alphanumeric in most cases but includes other Unicode characters. In one embodiment, coded input 204 is the ICD code for a “left” laterality, “XK8G.” Symbolic input 205 is a drawing, character, icon, image, emoji, or Unicode character, or a string of these symbols. In one embodiment, symbolic input 205 for ear is the “ear” emoji or a picture or diagram of an ear.
Each of these mixed inputs is detected as a descriptor of anatomy, a modifier of anatomy, or a laterality and is organized into digital “chips” or blocks of data by a dissection and categorization engine 206. The system 215 uses the dissected and categorized inputs in a neural network to generate outputs 214, including visualizations 207, avatars 208, maps 209, records (e.g., medical records that describe the left ear) 210, linguistic translations (e.g., left posterior surface of pinna) 211, synonyms (e.g., left back of ear) 212, and code strings (e.g., XA3S47&XK8G) 213.
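One of the outputs above, the postcoordinated code string, can be sketched from categorized chips. The code values mirror the examples given in the text (XA3S47 for the ear, XA1Z38 for the lateral forehead, XK8G for left laterality, XK4H as an enhanced modifier code); the "&" joining and parenthesized modifier follow the patterns shown in the figures, and this is an illustrative sketch rather than a full ICD-11 implementation.

```python
# Sketch of assembling an ICD-11-style postcoordinated code string from
# chips. Only code values quoted in the text are included; a real
# mapping table would cover the full anatomy vocabulary.
SITE_CODES = {"ear": "XA3S47", "forehead": "XA1Z38"}
LATERALITY_CODES = {"left": "XK8G"}

def code_string(site, laterality=None, modifier_code=None):
    """Join site and laterality codes with '&'; append an enhanced
    modifier code in parentheses, matching the displayed format."""
    parts = [SITE_CODES[site]]
    if laterality:
        parts.append(LATERALITY_CODES[laterality])
    result = "&".join(parts)
    if modifier_code:
        result += f" ({modifier_code})"
    return result

code_string("ear", "left")                  # 'XA3S47&XK8G'
code_string("forehead", "left", "XK4H")     # 'XA1Z38&XK8G (XK4H)'
```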
[0015] The present invention is further capable of automatically converting coded input. FIG. 3 depicts a screenshot of an anatomic site code translator 220. In the present embodiment the code string 222 identified is the ICD-11 code string “XA1Z38&XK8G (XK4H),” which corresponds with the “left (inferior) lateral forehead.” Inputting the code string 222 into the input box 224 returns all relevant visualizations 226 related to the patient, maps, and avatars, including highlighted, targeted, enhanced, and color-coded locations on multidimensional maps. In another embodiment, the code string 222 could be “lateral forehead izquierdo (XK4H)” or any other combination of uncoordinated and mixed inputs to reach the same result. Translations 228 into other coded, linguistic, and symbolic languages are delivered automatically and simultaneously based on user preference, the depicted embodiment showing three different code string translations. Visual definitions 230 are also delivered, with enhanced modification visualized through automatic color-coding. In one embodiment, the anatomic site and laterality components of the anatomic site could be visualized in red, and the enhanced modifier showing the possible zoned area of interest within the anatomic site could be shown in blue. In the present embodiment, the outside observer view is shown. In another embodiment, the selfie/mirror view can be depicted as well, shown with a silver gradient background to represent a mirror 226. The input code string 222 can also be mixed. In an example scenario, the code string 222 could be in “Spanglish,” where an English-speaking user is familiar with Spanish but cannot remember the term for “left”.
The user can verbalize “left mano” and will receive relevant visualizations and coordinated maps 226, translations 228, records, and visual definitions 230 related to the “left hand” and simultaneously receive the translation 228 “mano izquierda.” In an alternate example scenario, relevant visualizations and coordinated maps 226, translations 228, records, and visual definitions 230 related to the nose may be found by using a nose emoji in the code string 222. A code string 222 incorporating “left” and a nose emoji would deliver relevant visualizations 226, translations 228, and visual definitions 230 related to the “left nose,” whereas searching for just a nose emoji would deliver relevant visualizations 226, translations 228, and visual definitions 230 related to the entire nose. By generating enhanced anatomic site names, the present invention enables medical records to be searched for uncoordinated anatomic site descriptors, the descriptors to be dissected, and the visualizations, maps, avatars, and records associated with the anatomy of interest to be delivered.
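The mixed-language and symbolic lookups described above ("left mano", a nose emoji) can be sketched as normalization against a language-agnostic synonym table. The table contents here are illustrative assumptions standing in for the system's full multilingual and symbolic detection.

```python
# Sketch of normalizing mixed ("Spanglish") and symbolic input to
# canonical chips. U+1F443 is the nose emoji; the synonym table is an
# illustrative stand-in for full language detection.
SYNONYMS = {
    "left": ("laterality", "left"),
    "izquierda": ("laterality", "left"),
    "hand": ("anatomic_site", "hand"),
    "mano": ("anatomic_site", "hand"),
    "nose": ("anatomic_site", "nose"),
    "\U0001F443": ("anatomic_site", "nose"),  # nose emoji
}

def normalize(raw):
    """Map each recognized token, in any language or symbol, to a
    canonical (category, value) chip."""
    chips = {}
    for token in raw.lower().split():
        if token in SYNONYMS:
            category, canonical = SYNONYMS[token]
            chips[category] = canonical
    return chips

normalize("left mano")   # {'laterality': 'left', 'anatomic_site': 'hand'}
```

Because both "mano" and "hand" resolve to the same chip, "left mano" and "left hand" yield the same visualizations and translations.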
[0016] The present invention further applies progressive linguistic and visual sub-segmentation to achieve pinpoint precision in detecting and visualizing anatomic sites 18 on a diagram or image. FIG. 4 depicts anatomic visualizations 10 in which (moving from left to right) each visualization progressively sub-segments the previous one until the rightmost visualization achieves pinpoint precision for the dynamic anatomic address, a reproducible location across multiple images, maps, and diagrams. Above each visualization 10 are the English linguistic anatomic site descriptions 120 of the progressively sub-segmented anatomic sites. Each subsequent diagram adds enhanced modification language and simultaneous visualization. It is contemplated that the same sub-segmentation could be applied to a diagram as shown, or to a patient photo, avatar, video, live camera for augmented reality, virtual reality avatar, or other multimedia. It is further contemplated that the anatomic site descriptions 120 could be linguistic, symbolic, coded, mathematical, or some combination of descriptors. In the present embodiment, an enhanced modifier for sequence sensitivity is turned on, causing the term “superior” to be shown before “lateral” in the anatomic site descriptions 120. Sequence insensitivity would visualize the entire upper right aspect of the highlighted area (combining the two rightmost visualizations 10 in this figure). With sequence sensitivity on, the visualization in the rightmost diagram color-codes the description for the {lower medial aspect of} left (superior lateral) paramedian forehead to pinpoint precision. In this embodiment, since “superior” is listed before “lateral” and sequence sensitivity is on for the modifier terms, the upper right of the highlighted area is more accurately and precisely targeted. It is contemplated that the progressive sub-segmentation also applies to a mirrored axis.
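The sequence-sensitive refinement described above can be sketched geometrically: each directional modifier, applied in order, halves the current region. The unit-square coordinate convention (x increasing laterally, y increasing superiorly) is an assumption for illustration, not the patent's coordinate system; sequence insensitivity would instead combine the regions rather than intersect them in order.

```python
# Sketch of sequence-sensitive progressive sub-segmentation on a unit
# square. Convention (assumed): x grows laterally, y grows superiorly.
def subsegment(region, modifiers):
    """Apply directional modifiers in order, each halving the region.
    region is (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    for modifier in modifiers:
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        if modifier == "superior":
            y0 = ym          # keep upper half
        elif modifier == "inferior":
            y1 = ym          # keep lower half
        elif modifier == "lateral":
            x0 = xm          # keep outer half
        elif modifier == "medial":
            x1 = xm          # keep inner half
    return (x0, y0, x1, y1)

# "(superior lateral)" with sequence sensitivity on: the upper half,
# then the outer half of that, i.e. the upper-outer quarter.
subsegment((0.0, 0.0, 1.0, 1.0), ["superior", "lateral"])
# (0.5, 0.5, 1.0, 1.0)
```

Repeating the process with further modifiers (e.g. the "{lower medial aspect of}" prefix) continues shrinking the target toward pinpoint precision.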
[0017] The present invention is further capable of dissecting anatomic site descriptions into components in any language, categorizing those components, and translating those components to other languages while simultaneously applying natural linguistic sequencing and providing visualizations, anatomic maps, and coded translations (shown in FIG. 3). FIG. 5 depicts the anatomic site name to code translator 220 dissecting English 240 code strings 222 representative of anatomic site descriptions into data blocks shown as chips that are categorized into anatomic description components. In other words, the human-readable input 244 that describes an enhanced anatomic site and dynamic anatomic address is put in the input box 224 and dissected and categorized. FIG. 6 depicts the anatomic site code translator 220 translating an English description 244 of “left (inferior) lateral forehead” into dissected Spanish 242 data blocks shown in a different linguistic sequence from FIG. 5. When changing the language to Spanish 242, natural linguistic sequencing is applied using a natural language processor, whereby the laterality category is listed after the anatomic site, among other automatic sequence changes. The input box 224 can accept code strings, any language (including written and spoken language), symbolic representations, or a mixture of these components and can dissect them; categorize and organize them; sequence them; visualize them in all views (with additional enhanced visualization and segmentation); and target them as components of the dynamic anatomic address located in multidimensional anatomic maps and avatars. Additionally, synonyms in any language can be detected, dissected, categorized, looked up, and visualized.
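The natural linguistic sequencing just described, where the same chips render in a different order per language, can be sketched with per-language ordering templates. The templates and the Spanish terms below are illustrative assumptions, not the system's actual natural language processor.

```python
# Sketch of natural linguistic sequencing: identical chips, rendered in
# a per-language category order (laterality-first in English,
# laterality-last in Spanish). Templates and terms are illustrative.
SEQUENCES = {
    "en": ["laterality", "modifier", "anatomic_site"],
    "es": ["anatomic_site", "modifier", "laterality"],
}
TERMS = {
    "en": {"left": "left", "lateral": "lateral", "forehead": "forehead"},
    "es": {"left": "izquierda", "lateral": "lateral", "forehead": "frente"},
}

def sequence(chips, language):
    """Render categorized chips in the language's natural order."""
    words = []
    for category in SEQUENCES[language]:
        words.extend(TERMS[language][chip] for chip in chips.get(category, []))
    return " ".join(words)

chips = {"laterality": ["left"], "modifier": ["lateral"],
         "anatomic_site": ["forehead"]}
sequence(chips, "en")  # 'left lateral forehead'
sequence(chips, "es")  # 'frente lateral izquierda'
```

The chips themselves never change; only the rendering template does, which is what allows simultaneous, consistent translation into every configured language.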
In one embodiment, “pinna” could be found by searching its synonym “ear”; the system would detect that the language is English and that the term is a synonym, and would show the chips, records, and visualizations for “pinna.” In another embodiment, synonym detection will recognize “belly button” or “navel” and return the relevant visualizations, translations, records, and visual definitions associated with the “umbilicus” chip. [0018] The language is automatically detected as English, the “umbilicus” chip is loaded into the “anatomic site” field automatically, and the chip is further translated in real time to any coded, linguistic, or symbolic language along with a real-time visual preview on standardized diagrams; 3D avatars; or imaging/multimedia that contain a visible “belly button.”
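The synonym normalization in these embodiments can be sketched as a lookup that collapses lay and formal terms onto one canonical chip. The mappings shown ("ear" to "pinna", "belly button"/"navel" to "umbilicus") come from the text; the table structure is an illustrative assumption.

```python
# Sketch of synonym detection: lay and formal terms resolve to one
# canonical anatomic chip, so every synonym loads the same records.
CANONICAL = {
    "belly button": "umbilicus",
    "navel": "umbilicus",
    "umbilicus": "umbilicus",
    "ear": "pinna",
    "pinna": "pinna",
}

def canonical_chip(term):
    """Return the canonical chip for a term, or the term itself if no
    synonym mapping is known."""
    return CANONICAL.get(term.strip().lower(), term)

canonical_chip("Belly Button")  # 'umbilicus'
canonical_chip("ear")           # 'pinna'
```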
[0019] The foregoing descriptions of specific embodiments of the present invention are presented for purposes of illustration and description. They are not intended to be exhaustive, nor are they intended to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in light of the above teaching. The embodiments are chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and its various embodiments with such modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined broadly by the drawings and specification appended hereto and their equivalents. Therefore, no adverse inference under the rulings of Warner-Jenkinson Company v. Hilton Davis Chemical Co., 520 U.S. 17 (1997), or Festo Corp. v. Shoketsu Kinzoku Kogyo Kabushiki Co., 535 U.S. 722 (2002), or other similar caselaw or subsequent precedent, should be made to limit the scope of the invention if any future claims are added or amended subsequent to this patent application.

Claims

CLAIMS What is claimed is:
1. A computerized electronic visualization, map, coordinate, and description generation system for creating enhanced, universal anatomic references for an anatomic site, the system configured to:
receive uncoordinated mixed input data correlating to the anatomic site wherein the data may be input by a user or extracted from existing records; detect descriptors of anatomy, modifiers of anatomy, and/or laterality in the inputs; use a dissection and categorization engine to organize the descriptors into digital data blocks; and use the organized data blocks to generate outputs wherein the outputs are translated visualizations, maps, coordinates, and descriptions incorporating the mixed input data that correlates to the anatomic site.
2. The system of claim 1, wherein the data input by a user is written input.
3. The system of claim 1, wherein the data input by the user is verbal input.
4. The system of claim 1, wherein the data input is linguistic, coded, and/or symbolic.
5. The system of claim 1, wherein the outputs are visualizations, avatars, maps, records, linguistic translations, synonyms, symbolic translations, and coded translations.
6. A computerized electronic visualization, map, coordinate, and description generation system for creating enhanced, universal anatomic references for an anatomic site, the system configured to: receive uncoordinated mixed input data correlating to the anatomic site wherein the data may be input by a user or extracted from existing records; detect descriptors of anatomy, modifiers of anatomy, and/or laterality in the inputs; use a dissection and categorization engine to organize the descriptors into digital data blocks; apply progressive linguistic and visual sub-segmentation to achieve pinpoint precision for the anatomic site and automatic sequencing of inputs into coded translations and linguistic translations that apply natural linguistic sequencing; and use the organized data blocks to generate outputs wherein the outputs are translated visualizations, maps, coordinates, and descriptions incorporating the mixed input data that correlates to the anatomic site.
7. The system of claim 6, wherein the data input by a user is written input.
8. The system of claim 6, wherein the data input by the user is verbal input.
9. The system of claim 6, wherein the data input is linguistic, coded, and/or symbolic.
10. The system of claim 6, wherein the outputs are real-time visualizations, avatars, maps, records, linguistic translations, synonyms, symbolic translations, and coded translations.
11. A method of creating enhanced, universal anatomic references for an anatomic site, the method comprising: receiving mixed input data wherein the mixed input correlates to the anatomic site; detecting descriptors of anatomy, modifiers of anatomy, and/or laterality in the inputs wherein the descriptors can be anatomic or non-anatomic descriptors; categorizing and organizing the descriptors into digital data blocks; and progressively sub-segmenting the inputs to achieve pinpoint precision and automatic sequencing of inputs into coded translations and linguistic translations that apply natural linguistic sequencing.
12. The method of claim 11, wherein the mixed data may be input by a user or extracted from existing records.
13. The method of claim 11, wherein the mixed input data is linguistic, coded, and/or symbolic.
PCT/IB2022/000813 2021-12-10 2022-12-12 Coordinated visualization and translation from uncoordinated and mixed descriptions of anatomy Ceased WO2023166329A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB2407999.8A GB2627660A (en) 2022-03-01 2022-12-12 Coordinated visualization and translation from uncoordinated and mixed descriptions of anatomy
PCT/IB2023/000535 WO2024023584A2 (en) 2022-07-26 2023-07-25 Systems and methods using multidimensional language and vision models and maps to categorize, describe, coordinate, and track anatomy and health data
EP23845770.9A EP4562645A4 (en) 2022-07-26 2023-07-25 SYSTEMS AND METHODS USING MULTI-DIMENSIONAL LANGUAGE AND VISUAL MODELS AND MAPS FOR CATEGORIZING, DESCRIBING, COORDINATING AND TRACKING ANATOMY AND HEALTH DATA
US18/225,872 US20230368878A1 (en) 2021-12-10 2023-07-25 Systems and methods using multidimensional language and vision models and maps to categorize, describe, coordinate, and track anatomy and health data

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US202263315289P 2022-03-01 2022-03-01
US63/315,289 2022-03-01
US202263269516P 2022-03-17 2022-03-17
US63/269,516 2022-03-17
US202263362791P 2022-04-11 2022-04-11
US63/362,791 2022-04-11
US202263369717P 2022-07-28 2022-07-28
US63/369,717 2022-07-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/225,872 Continuation-In-Part US20230368878A1 (en) 2021-12-10 2023-07-25 Systems and methods using multidimensional language and vision models and maps to categorize, describe, coordinate, and track anatomy and health data

Publications (2)

Publication Number Publication Date
WO2023166329A2 true WO2023166329A2 (en) 2023-09-07
WO2023166329A3 WO2023166329A3 (en) 2023-11-02

Family

ID=87883113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/000813 Ceased WO2023166329A2 (en) 2021-12-10 2022-12-12 Coordinated visualization and translation from uncoordinated and mixed descriptions of anatomy

Country Status (2)

Country Link
GB (1) GB2627660A (en)
WO (1) WO2023166329A2 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10796390B2 (en) * 2006-07-03 2020-10-06 3M Innovative Properties Company System and method for medical coding of vascular interventional radiology procedures
EP2191399A1 (en) * 2007-09-21 2010-06-02 International Business Machines Corporation System and method for analyzing electronic data records
US11663463B2 (en) * 2019-07-10 2023-05-30 Adobe Inc. Center-biased machine learning techniques to determine saliency in digital images

Also Published As

Publication number Publication date
GB202407999D0 (en) 2024-07-17
WO2023166329A3 (en) 2023-11-02
GB2627660A (en) 2024-08-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22929680; Country of ref document: EP; Kind code of ref document: A2)
ENP Entry into the national phase (Ref document number: 202407999; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20221212)
WWE Wipo information: entry into national phase (Ref document number: 2407999.8; Country of ref document: GB)
WWP Wipo information: published in national office (Ref document number: 2407999.8; Country of ref document: GB)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22929680; Country of ref document: EP; Kind code of ref document: A2)