WO2024044573A1 - Methods and apparatus for generating synthetic surgical data - Google Patents

Methods and apparatus for generating synthetic surgical data

Info

Publication number
WO2024044573A1
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
statistical models
radiological images
variations
patient
Prior art date
Application number
PCT/US2023/072627
Other languages
English (en)
Inventor
Chandra Jonelagadda
Original Assignee
Kaliber Labs Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kaliber Labs Inc. filed Critical Kaliber Labs Inc.
Publication of WO2024044573A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H70/00 ICT specially adapted for the handling or processing of medical references
    • G16H70/60 ICT specially adapted for the handling or processing of medical references relating to pathologies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/031 Recognition of patterns in medical or anatomical images of internal organs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/034 Recognition of patterns in medical or anatomical images of medical instruments
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images

Definitions

  • the present embodiments relate generally to surgical data and more specifically to generating synthetic surgical data.
  • the surgical output may be generated based on one or more neural networks.
  • the surgical output may include imaging (video, stills, or both) data, including actual and/or synthetic imaging data.
  • the surgical output may describe detected surgical procedures.
  • the surgical output may be reviewed and corrected by the patient’s surgeon. These corrections may be used to generate a billing report that may be used to generate a bill for a surgical operation.
  • these surgical reports may be referred to as initial surgical reports, although they may be reviewed and/or subsequently finalized and/or modified.
  • the methods and apparatuses may be part of an intraoperative guidance and assistance system that may reduce variability in the quality of patient care and may reduce time in the operating room (OR) and/or may reduce complication rates.
  • These methods and apparatuses may provide AI-powered surgical training and/or feedback to allow surgeons to better understand the patient’s anatomy as well as actual or potential conditions and complications before, during, or after a surgical procedure. In some examples these methods and apparatuses may use synthetic patient data.
  • Any of the methods and apparatuses may be used to generate surgical outputs describing detected surgical features and potential activities.
  • Any of the methods may include receiving a surgical video of a surgical procedure performed on a patient, identifying one or more surgical tools in the surgical video, detecting surgical activity within the surgical video, and determining one or more activities based on the identified surgical tools and the detected surgical activities.
  • Any of the methods described herein may also include recognizing a patient’s anatomy in the surgical video, where determining the one or more surgical activities is based, at least in part, on the recognized patient’s anatomy.
  • recognizing the patient’s anatomy may include executing a neural network trained to recognize anatomy in a surgical video.
  • these methods may include generating synthetic user data, based on real user/patient data, which may be useful for research, training and/or treatment.
  • the methods and apparatuses may include identifying the one or more surgical tools by executing a neural network trained to identify surgical tools in a surgical video. Any of the methods may further include recognizing a pathology in a surgical video, where determining the one or more surgical activities is based, at least in part, on the recognized pathology. In some examples, recognizing the pathology may include executing a neural network trained to recognize pathology in a surgical video.
  • In any of the methods described herein, the surgical activities may include a video clip of the detected surgical activity. Still, in any of the methods, the surgical activities may include descriptive text based at least in part on the detected surgical activity.
  • Any of the methods described herein may include generating an initial surgical output based at least in part on the one or more determined surgical activities.
  • the surgical video may be captured with an arthroscopic camera.
  • detecting surgical activity may include executing a neural network trained to detect surgical activity in a surgical video.
  • Any of the systems described herein may include one or more processors and a memory configured to store instructions that, when executed by the one or more processors, cause the system to receive a surgical video of a surgical procedure performed on a patient, identify one or more surgical tools in the surgical video, detect surgical activity within the surgical video, and determine one or more surgical activities based on the identified surgical tools and the detected surgical activities.
  • Any of the methods described herein may provide an initial surgical report describing detected surgical and/or treatment activities.
  • the methods may include determining a plurality of video clips from a surgical video of a surgical procedure performed on a patient, determining a plurality of recommended key frames from the plurality of video clips, and determining one or more surgical activities based on the plurality of key frames. Any of these may include synthetic data that is matched to the actual patient anatomy and/or patient treatment, e.g., based on statistical matching to real-world data. Thus surgical complications and/or conditions may be generated based on statistical likelihood from one or more patient features (e.g., age, race, health, etc.), as illustrated in the sketch below.
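  • For illustration only, the following minimal Python sketch shows how a complication might be sampled based on statistical likelihood from patient features. The feature effects, baseline rates, and the sample_complication helper are hypothetical assumptions, not values from this disclosure.

```python
import random

# Hypothetical baseline complication rates conditioned on coarse patient
# features; real rates would be estimated from real-world surgical data.
BASE_RATES = {"infection": 0.02, "re_tear": 0.05, "stiffness": 0.08}

def sample_complication(age: int, smoker: bool, rng: random.Random) -> list:
    """Sample complications whose likelihoods are adjusted by patient features."""
    complications = []
    for name, base in BASE_RATES.items():
        p = base
        if age > 60:
            p *= 1.5          # assumed age effect (illustrative)
        if smoker:
            p *= 1.3          # assumed smoking effect (illustrative)
        if rng.random() < min(p, 1.0):
            complications.append(name)
    return complications

rng = random.Random(7)
print(sample_complication(age=67, smoker=True, rng=rng))
```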
  • Any of the methods and apparatuses described herein may further include detecting a plurality of surgical phases from the plurality of video clips, where the key frames are based, at least in part, on the plurality of surgical phases.
  • any of the methods may further include recognizing one or more stages within at least one of the plurality of surgical phases, where the key frames are based, at least in part, on the one or more stages.
  • the key frames may include diagnostic key frames, site preparation key frames, suture passing key frames, anchor insertion key frames, post treatment key frames, or a combination thereof. Furthermore, any of the methods described herein may further include generating an initial surgical output (including synthetic data) based at least in part on the key frames.
  • any of the methods described herein may include recognizing patient anatomy in one or more of the key frames, where the billable activities are based, at least in part, on the recognized patient anatomy.
  • recognizing patient anatomy may include executing a neural network trained to recognize patient anatomy.
  • any of the methods described herein may include recognizing a pathology in one or more of the key frames, where the billable activities are based, at least in part, on the recognized pathology.
  • recognizing the pathology may include executing a neural network trained to recognize patient pathology.
  • Any of the methods described herein may include recognizing a surgical tool in one or more of the key frames, where the billable activities are based, at least in part, on the recognized surgical tool.
  • recognizing the surgical tool may include executing a neural network trained to recognize one or more surgical tools.
  • Any of the systems described herein may include one or more processors and a memory configured to store instructions that, when executed by the one or more processors, cause the system to determine a plurality of video clips from a surgical video of a surgical procedure performed on a patient, determine a plurality of recommended key frames from the plurality of video clips, and determine one or more billable activities based on the plurality of key frames.
  • Any of the methods described herein may include receiving video clips of an operation performed on a patient, determining any modifications to billable activities based, at least in part, on the video clips, and generating a billing report based, at least in part, on the determined modifications.
  • determining any modifications to surgical activities may include verifying that at least one of the video clips includes a particular surgical procedure. In any of the methods, determining any modifications to surgical activities may include verifying that at least one of the video clips includes a particular surgical tool, patient anatomy, or pathology.
  • verifying may include executing a neural network trained to recognize surgical tools, patient anatomy, or pathology.
  • generating the billing report may include mapping detected surgical activity to surgical procedures.
  • Any of the systems described herein may include one or more processors, and a memory configured to store instructions that, when executed by the one or more processors, cause the system to receive video clips of an operation performed on a patient, determine any modifications to billable activities based, at least in part, on the video clips, and generate a billing report based, at least in part, on the determined modifications.
  • All of the methods and apparatuses described herein, in any combination, are herein contemplated and can be used to achieve the benefits as described herein.
  • FIG. 1 shows an example system for generating synthetic medical images.
  • FIG. 2 is a flowchart showing an example method for generating synthetic medical images.
  • FIG. 3 shows example surgical video images.
  • FIG. 4 shows another example of surgical video images.
  • FIG. 5 shows a block diagram of a device that may be one example of any device, system, or apparatus that may provide any of the functionality described herein.
  • the generated images may include video images and/or radiological images.
  • the radiological images may include x-ray images, ultrasound images, or any other feasible images.
  • the generated images may be based on actual patient video images or radiological images. Any feasible patient images may be analyzed for anatomies or pathologies. In some cases, the analysis may be based on execution of one or more trained neural networks. Furthermore, a processor or processing device may determine one or more statistics associated with any identified anatomies or pathologies. Based on the determined statistics, the processor or processing device may generate synthetic video and/or radiological images. In some cases, the synthetic video or radiological image generation may be based on an execution of trained generative adversarial networks.
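  • As a minimal, illustrative sketch of this pipeline (recognize features, summarize their variation, synthesize new data consistent with that variation), the Python below uses simple stand-ins for the trained recognition and generation networks; the function names and the scalar "feature" are assumptions for illustration, not the disclosure's models.

```python
import numpy as np

def recognize_features(images: np.ndarray) -> np.ndarray:
    """Stand-in for a trained recognition network: here, one scalar
    'pathology size' per image (e.g., a tear length surrogate)."""
    return images.reshape(len(images), -1).std(axis=1)

def fit_statistics(features: np.ndarray) -> dict:
    """Summarize the observed range of variation of a feature."""
    return {"mean": features.mean(), "std": features.std(),
            "lo": features.min(), "hi": features.max()}

def synthesize(stats: dict, n: int, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a generative network conditioned on the statistical
    model: draw feature values consistent with the observed variation."""
    draws = rng.normal(stats["mean"], stats["std"], size=n)
    return draws.clip(stats["lo"], stats["hi"])

rng = np.random.default_rng(0)
real_images = rng.random((100, 64, 64))   # placeholder for patient images
stats = fit_statistics(recognize_features(real_images))
print(synthesize(stats, n=5, rng=rng))
```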
  • FIG. 1 shows an example system 100 for generating synthetic medical images.
  • the system 100 may include a compute node 110.
  • the compute node 110 may include a processor, computer, or the like.
  • the compute node 110 may be, for example, located in or near a surgeon’s medical office or clinic.
  • the compute node 110 may be a remote, virtual, or cloud-based processor, computer, or the like, located away from the surgeon, doctor, or other clinician.
  • the compute node 110 may include one or more processors, memory (including dynamic, non-volatile, mechanical, solid-state, or the like), and any number of interfaces, including user interfaces and communication interfaces (serial, parallel, wired, wireless, and the like).
  • the system 100 may include surgical video data 140 and radiological image data 150.
  • the surgical video data 140 may include surgical video associated with a variety of surgeries that may be associated with a variety of different patients.
  • the surgical video data 140 may include arthroscopic video images from any feasible number of patients.
  • the surgical video data 140 may include any other feasible (e.g., non-arthroscopic) video images.
  • the radiological image data 150 may include x-ray data, ultrasound data, or any other feasible radiological data that may be associated with a variety of different patients and a variety of different surgeries.
  • the compute node 110 may generate statistical models 160 from the surgical video data 140 and the radiological image data 150.
  • the statistical models may be built, and the networks (GANs) may be trained during a learning phase.
  • the statistical models 160 may describe variations of features that are denoted or identified within the surgical video data 140 and the radiological image data 150.
  • the statistical models 160 may include statistical models of pathological variations determined from the surgical video data 140.
  • the statistical models 160 may include statistical models of morphological variations determined from the radiological image data 150.
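  • For illustration, a statistical model of variation for a single identified feature might be fit and sampled as below; the tear-length measurements and the use of a kernel density estimate are assumptions, not values or methods specified by this disclosure.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements extracted from annotated surgical video:
# tear lengths in mm for one pathology class.
tear_mm = np.array([3.1, 4.8, 5.0, 5.5, 6.2, 7.9, 8.4, 9.0, 11.3, 12.1])

# A simple statistical model of the variation: a kernel density estimate.
model = stats.gaussian_kde(tear_mm)

# Sampling the model yields new, plausible feature values that stay
# within the range of variation present in the data.
samples = model.resample(size=4, seed=42).ravel()
print(samples.round(1))
```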
  • a given patient's radiological images may be used to synthesize pathology. This could be used for report generation purposes.
  • the report to the care provider may therefore use synthesized images.
  • the compute node 110 may use the statistical models 160 to generate synthesized images 130.
  • the compute node 110 may generate a set of synthesized surgical video images based on one or more of the statistical models 160.
  • features within the synthesized surgical videos may be within a range of variations included in the surgical video data 140.
  • the compute node 110 may use the statistical models 160 to generate a set of synthesized radiological images.
  • Features within the synthesized radiological images may be within a range of variations included in the radiological image data 150.
  • features may be enlarged or expanded, e.g., to expand the range of morphological variations, even beyond the actual variations in the dataset.
  • these methods and apparatuses may synthesize pathology images for a larger person (e.g., a person who is 7’2”) even when the dataset only contains actual images for people who are smaller (e.g., under 6’).
  • These methods and apparatuses may also produce new combinations, e.g., large tears in people who are slightly built.
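  • A minimal sketch of such range expansion, assuming heights are summarized by a fitted normal distribution; the expansion factor is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed patient heights (cm) in the dataset, all under ~183 cm (6 ft).
heights = rng.normal(172.0, 6.0, size=500).clip(150.0, 183.0)

mu, sigma = heights.mean(), heights.std()

# Widen the fitted distribution so synthesis can cover morphologies
# beyond the actual variation in the dataset (e.g., a 218 cm patient).
expand = 2.5                      # assumed expansion factor
extrapolated = rng.normal(mu, sigma * expand, size=5)
print(extrapolated.round(1))
```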
  • FIG. 2 is a flowchart showing an example method 200 for generating synthetic medical images. Some examples may perform the operations described herein with additional operations, fewer operations, operations in a different order, operations in parallel, or with some operations performed differently. The method 200 is described below with respect to the system 100 of FIG. 1; however, the method 200 may be performed by any other suitable system or device.
  • the method 200 begins in block 210 as surgery videos are obtained.
  • the compute node 110 may obtain or receive the surgical video data 140 of FIG. 1.
  • the surgery videos may be actual surgical videos that were recorded during a patient’s surgery.
  • the videos may be arthroscopic videos recorded from arthroscopic cameras.
  • anatomy recognition is performed.
  • the compute node 110 can identify various anatomical parts and/or features visible and/or identifiable from the surgery video.
  • the compute node 110 may execute a neural network trained to identify patient anatomy from the surgery video (from the surgical video data 140).
  • pathology recognition is performed.
  • the compute node 110 may identify various patient pathologies that may be visible and/or identifiable from the surgery video.
  • Example pathologies may include, but are not limited to, tendon damage, torn ligaments, rotator cuff injury, meniscus damage, and the like.
  • the compute node 110 may execute a neural network trained to identify pathologies from the surgery video (from the surgical video data 140).
  • pathology quantification and classification is performed.
  • the compute node 110 may quantify and classify any of the pathologies identified in block 214.
  • the apparatus/method may quantify and classify pathologies. For example, some classifications are categorical (major tear, minor tear, etc.). Others are binary: a partial-thickness tear implies that the tendon is frayed but some residual tendon is still holding, as opposed to a total tear, where one can see through the tendon. These methods and apparatuses may report the measured sizes of tears and defects (e.g., 5mm partial thickness, 5mm cartilage lesion, 5mm x 6mm cartilage defect, etc.).
  • the compute node 110 may execute a neural network trained to quantify and classify pathologies from the surgery video (from the surgical video data 140) based on the pathologies identified in block 214.
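  • As a hedged illustration of quantification and classification, the snippet below maps measured tear geometry to the categorical, binary, and size labels described above; all thresholds and field names are hypothetical assumptions.

```python
def classify_tear(thickness_fraction: float, length_mm: float) -> dict:
    """Map measured tear geometry to categorical and binary labels.
    Thresholds are illustrative, not from the disclosure."""
    kind = "partial thickness" if thickness_fraction < 1.0 else "total"
    return {
        "type": kind,                                          # binary label
        "severity": "major tear" if length_mm >= 10.0 else "minor tear",
        "report": f"{length_mm:.0f}mm {kind} tear",            # measured size
    }

print(classify_tear(thickness_fraction=0.6, length_mm=5.0))
# {'type': 'partial thickness', 'severity': 'minor tear',
#  'report': '5mm partial thickness tear'}
```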
  • the compute node 110 may determine statistics regarding anatomies and pathologies that are included within the surgery videos.
  • the statistics may describe, at least in part, the ranges of various features (anatomies and pathologies) that have been identified and/or recognized from within the surgery videos.
  • radiological images are obtained.
  • the compute node 110 may obtain or receive radiological image data 150.
  • the radiological images may be actual radiological images (x-rays, ultrasounds, and the like) that were collected during diagnosis or treatment of a patient.
  • radiological anatomy recognition is performed.
  • the compute node 110 may identify various anatomical parts and/or features visible and/or identifiable from the obtained radiological images.
  • the compute node 110 may execute a neural network trained to identify patient anatomy from the radiological images (from the radiological image data 150).
  • radiological pathology recognition is performed.
  • the compute node 110 may identify various patient pathologies that may be visible and/or identifiable from the radiological images.
  • Example pathologies may include, but are not limited to, tendon damage, torn ligaments, rotator cuff injury, meniscus damage, and the like.
  • the compute node 110 may execute a neural network trained to identify pathologies from the radiological images (from the radiological image data 150).
  • Morphological variations may include variations identified from the literature, such as small variations in the curvature of the glenoid and humeral heads, i.e., the socket and ball of the shoulder (and hip) joints. Even in the absence of a significant number of real examples to train the models, these methods and apparatuses may start with a normal patient’s radiological image, use computer vision algorithms to alter the curvatures (within realistic limits), and synthesize optical images based on these images. Other variations could be straightforwardly related to patient height and body structure. For example, larger people have larger bones (and joint spaces), so the field of view might look slightly different.
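  • A minimal sketch of the curvature-altering idea, using a simple parametric radial warp on a placeholder image; the bulge helper and its parameters are illustrative assumptions, not the disclosure's computer vision algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def bulge(image: np.ndarray, center: tuple, strength: float) -> np.ndarray:
    """Radially warp an image around `center`, slightly inflating or
    deflating local curvature (strength ~ +/-0.1 stays subtle)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - center[0], xx - center[1]
    r = np.hypot(dy, dx) + 1e-9
    # Shrink the sampling radius near the center to inflate the structure.
    scale = 1.0 - strength * (1.0 - r / r.max())
    src_y = center[0] + dy * scale
    src_x = center[1] + dx * scale
    return map_coordinates(image, [src_y, src_x], order=1, mode="nearest")

img = np.random.default_rng(3).random((128, 128))  # placeholder radiograph
warped = bulge(img, center=(64, 64), strength=0.08)
print(warped.shape)
```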
  • the compute node 110 may determine statistics regarding anatomies and pathologies that are included within the radiological images.
  • the statistics may describe, at least in part, the ranges of various features (anatomies and pathologies) that have been identified and/or recognized from within the radiological images.
  • statistical models may capture the variations in the size of the bones, curvature of structures such as the humeral heads, glenoids, condyle, etc., and these variations may be controlled for appropriate factors.
  • Synthetic surgery videos and/or synthetic radiological images may be generated that are consistent with the statistical models of pathological variations (block 218) and statistical models of morphological variations (block 228).
  • the synthesized images, although representative of actual pathologies, are not directly attributable to any one patient or individual.
  • the synthesized images can effectively anonymize surgery videos and radiological images.
  • the synthetic images may be generated through execution of a neural network, e.g., one or more generative adversarial networks (GANs).
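  • For illustration only, a minimal GAN training loop (PyTorch) showing the adversarial mechanism by which a generator and discriminator are trained; the architectures, sizes, and training schedule are arbitrary assumptions, as the disclosure does not specify them. In practice the generator would be conditioned on, or its outputs filtered against, the statistical models 160.

```python
import torch
from torch import nn

latent_dim, img_dim = 32, 64 * 64

# Generator maps noise to a flattened synthetic image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator scores how realistic an image looks.
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(16, img_dim) * 2 - 1        # placeholder real images
for _ in range(3):                            # a few illustrative steps
    z = torch.randn(16, latent_dim)
    fake = G(z)
    # Train D to separate real from synthetic images.
    d_loss = (bce(D(real), torch.ones(16, 1))
              + bce(D(fake.detach()), torch.zeros(16, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Train G to fool D, iteratively improving realism.
    g_loss = bce(D(G(z)), torch.ones(16, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
print(float(d_loss), float(g_loss))
```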
  • the surgical video (such as the surgical video data 140) may be processed independently from radiological images (such as the radiological image data 150).
  • operations associated with blocks 210, 212, 214, 216, and 218 may be performed separately from operations associated with blocks 220, 222, 224, and 228.
  • the method 200 may process only surgical video data or only radiological image data. In some other cases, the method 200 may process both surgical video data and radiological image data.
  • FIG. 3 shows example surgical video images 300.
  • a first image 310 shows a reference image.
  • the first image 310 may be an actual surgical video image associated with an individual patient.
  • a second image 320 is a synthetic image that is generated in accordance with the method 200.
  • FIG. 3 is an example of the ACL (anterior cruciate ligament) in the knee joint.
  • the apparatus (e.g., system) may take a reference pathology, i.e., the image on the left in FIG. 3, and produce a variation based on other images it has seen.
  • the apparatus may generate a plurality of different candidate images; some of these candidates may look more or less realistic.
  • the system may reject unrealistic images and settle on images that look sufficiently different from the original image but retain attributes (in an abstract sense) of the original. This may be combined with variations in the radiological images, which may be used to synthesize data covering variations that would not easily be seen in real life.
  • the second image 320 may include similar features (anatomies) that are included within the first image 310. However, the features of the first image 310 may be modified based on the statistics determined through the method 200.
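  • A hedged sketch of the candidate-selection idea described above: generate several candidates, then keep those that differ enough from the reference to be new, yet retain enough of its attributes to stay plausible. The similarity measure and thresholds are illustrative assumptions.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Crude stand-in for a perceptual similarity score: normalized
    cross-correlation between two images, roughly in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def select_candidates(reference, candidates, lo=0.3, hi=0.8):
    """Keep candidates that are sufficiently different from the
    reference (score below `hi`) but still plausible (above `lo`)."""
    return [c for c in candidates
            if lo <= similarity(reference, c) <= hi]

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
cands = [ref + rng.normal(0, s, ref.shape) for s in (0.05, 0.5, 2.0)]
print(len(select_candidates(ref, cands)))
```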
  • FIG. 4 shows example surgical video images 400.
  • a first image 410 shows a reference image.
  • the first image 410 may be an actual surgical video image associated with an individual patient.
  • a second image 420 is a synthetic image that is generated in accordance with the method 200.
  • FIG. 4 is an example of how the system can generate variations in a meniscal tear in the knee joint.
  • FIG. 5 shows a block diagram of a device 500 that may be one example of any device, system, or apparatus that may provide any of the functionality described herein.
  • the device 500 may include a transceiver 520, a processor 530, and a memory 540.
  • the transceiver 520, which is coupled to the processor 530, may be used to interface with any other feasible device and/or network.
  • the transceiver 520 may include any feasible wired and/or wireless interface to transmit and/or receive data.
  • the transceiver 520 may include a wired transceiver that includes a wired network (Ethernet or the like) interface.
  • the transceiver 520 may include a wireless transceiver that conforms to Wi-Fi, Bluetooth, Zigbee, Long Term Evolution (LTE) or other wireless protocols.
  • the processor 530, which is also coupled to the memory 540, may be any one or more suitable processors capable of executing scripts or instructions of one or more software programs stored in the device 500 (such as within the memory 540).
  • the memory 540 may include image data 542 that may include surgical video data (such as the surgical video data 140 of FIG. 1) and/or radiological image data (such as the radiological image data 150).
  • the device 500 may obtain or receive the image data (e.g., the surgical video data 140 and/or the radiological image data 150) through the transceiver 520.
  • the memory 540 may also include synthetic image data 543.
  • the synthetic image data 543 may include synthetic surgery video data and/or synthetic radiological image data that may be generated in accordance with the method 200 of FIG. 2.
  • the memory 540 may also include a non-transitory computer-readable storage medium (e.g., one or more nonvolatile memory elements, such as EPROM, EEPROM, Flash memory, a hard drive, etc.) that may store the following software modules: an anatomy recognizer module 544 to recognize and/or identify patient anatomy; a pathology recognizer module 545 to recognize and/or identify pathologies; a pathology quantification and classification module 546 to quantify and classify any feasible pathology; a statistical model generation module 547 to generate statistical models; and an image generation module 548 to generate surgical and/or radiological images.
  • Each software module includes program instructions that, when executed by the processor 530, may cause the device 500 to perform the corresponding function(s).
  • the non-transitory computer-readable storage medium of memory 540 may include instructions for performing all or a portion of the operations described herein.
  • the processor 530 may execute the anatomy recognizer module 544 to identify and/or recognize patient anatomies within surgical video clips and/or radiological image data. In some examples, execution of the anatomy recognizer module 544 may cause the processor 530 to execute a neural network trained to identify and/or recognize any feasible patient anatomy.
  • the processor 530 may execute the pathology recognizer module 545 to identify and/or recognize pathologies within surgical video clips and/or radiological image data.
  • execution of the pathology recognizer module 545 may cause the processor 530 to execute a neural network trained to identify and/or recognize any feasible pathology.
  • the processor 530 may execute the pathology quantification and classification module 546 to quantify and/or classify any recognized (identified) pathologies within the surgical video data and/or the radiological image data.
  • execution of the pathology quantification and classification module 546 may cause the processor 530 to execute a neural network trained to quantify and/or classify any recognized (identified) pathologies.
  • the processor 530 may execute statistical model generation module 547 to generate any feasible statistical models in accordance with the recognized anatomy and pathology from the surgical video data and the radiological image data.
  • the processor 530 may execute the image generation module 548 to generate synthetic surgery videos and synthetic radiological images.
  • execution of the image generation module 548 may cause the processor 530 to execute one or more neural networks trained to generate surgery videos and radiological images based on one or more statistical models.
  • execution of the image generation module 548 may cause the processor 530 to execute one or more GAN neural networks trained to iteratively generate surgery videos and radiological images.
  • any of the methods and apparatuses may include creating synthetic data that may match real-world statistical distributions for the patient-specific context.
  • these methods and apparatuses may include generating individual readings (attributes) of patient health datasets; replicating the statistical distributions present in the real-world data; combining the readings into synthetic patient records which feed into the datasets.
  • when matching distributions, these methods and apparatuses may balance how and which parameters can be combined. This may work with both quantified and categorical data attributes, as sketched below.
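  • As an illustrative sketch of combining per-attribute readings into synthetic patient records, the snippet below draws each attribute from a hypothetical fitted marginal distribution; all cohort statistics and attribute names are assumptions, not values from this disclosure.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical real-world cohort statistics (marginal distributions).
AGE = {"mean": 54.0, "std": 12.0}
SEX_FREQ = {"F": 0.46, "M": 0.54}
TEAR_FREQ = {"partial": 0.6, "total": 0.4}

def synthetic_record() -> dict:
    """Draw each attribute from its fitted marginal and combine the
    readings into one synthetic patient record."""
    return {
        "age": int(rng.normal(AGE["mean"], AGE["std"]).clip(18, 95)),
        "sex": str(rng.choice(list(SEX_FREQ), p=list(SEX_FREQ.values()))),
        "tear": str(rng.choice(list(TEAR_FREQ), p=list(TEAR_FREQ.values()))),
    }

cohort = [synthetic_record() for _ in range(3)]
print(cohort)
```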
  • the methods or apparatuses may include generating synthetic data to mimic subject responses to various dosings, which may rely on assumptions about the drug toxicities, statistical models of subject reactions, etc.
  • any of the methods (including user interfaces) described herein may be implemented as software, hardware, or firmware, and may be described as a non-transitory computer-readable storage medium storing a set of instructions capable of being executed by a processor (e.g., computer, tablet, smartphone, etc.) that, when executed by the processor, causes the processor to perform any of the steps, including but not limited to: displaying, communicating with the user, analyzing, modifying parameters (including timing, frequency, intensity, etc.), determining, alerting, or the like.
  • any of the methods described herein may be performed, at least in part, by an apparatus including one or more processors having a memory storing a non-transitory computer-readable storage medium storing a set of instructions for the process(es) of the method.
  • computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein.
  • these computing device(s) may each comprise at least one memory device and at least one physical processor.
  • “Memory” or “memory device,” as used herein, generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions.
  • a memory device may store, load, and/or maintain one or more of the modules described herein.
  • Examples of memory devices comprise, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.
  • “Processor” or “physical processor,” as used herein, generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions.
  • a physical processor may access and/or modify one or more modules stored in the above-described memory device.
  • Examples of physical processors comprise, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application- Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.
  • the method steps described and/or illustrated herein may represent portions of a single application.
  • one or more of these steps may represent or correspond to one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks, such as the method step.
  • one or more of the devices described herein may transform data, physical devices, and/or representations of physical devices from one form to another. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another form of computing device by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.
  • “Computer-readable medium,” as used herein, generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions.
  • Examples of computer-readable media comprise, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.
  • the processor as described herein can be configured to perform one or more steps of any method disclosed herein. Alternatively or in combination, the processor can be configured to combine one or more steps of one or more methods as disclosed herein.
  • the device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
  • the terms “upwardly”, “downwardly”, “vertical”, “horizontal” and the like are used herein for the purpose of explanation only unless specifically indicated otherwise.
  • first and second may be used herein to describe various features/elements (including steps), these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed below could be termed a second feature/element, and similarly, a second feature/element discussed below could be termed a first feature/element without departing from the teachings of the present invention.
  • any of the apparatuses and methods described herein should be understood to be inclusive, but all or a sub-set of the components and/or steps may alternatively be exclusive, and may be expressed as “consisting of” or alternatively “consisting essentially of” the various components, steps, sub-components or sub-steps.
  • a numeric value may have a value that is +/- 0.1% of the stated value (or range of values), +/- 1% of the stated value (or range of values), +/- 2% of the stated value (or range of values), +/- 5% of the stated value (or range of values), +/- 10% of the stated value (or range of values), etc.
  • Any numerical values given herein should also be understood to include about or approximately that value unless the context indicates otherwise. For example, if the value "10" is disclosed, then “about 10" is also disclosed. Any numerical range recited herein is intended to include all sub-ranges subsumed therein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)

Abstract

A system and method for generating synthetic surgical data are described. The systems and methods may receive surgical video data and/or radiological image data. The surgical video and radiological image data may be analyzed to recognize the anatomies and/or pathologies they include. Based on the recognized anatomies and pathologies, statistics may be determined that describe ranges of variation of the anatomies and pathologies. Synthetic surgical data, including synthetic surgical video and synthetic radiological data, may be generated based on the determined statistics.
PCT/US2023/072627 2022-08-22 2023-08-22 Methods and apparatus for generating synthetic surgical data WO2024044573A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263400032P 2022-08-22 2022-08-22
US63/400,032 2022-08-22

Publications (1)

Publication Number Publication Date
WO2024044573A1 2024-02-29

Family

ID=90014006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/072627 WO2024044573A1 (fr) 2022-08-22 2023-08-22 Methods and apparatus for generating synthetic surgical data

Country Status (1)

Country Link
WO (1) WO2024044573A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200265273A1 (en) * 2019-02-15 2020-08-20 Surgical Safety Technologies Inc. System and method for adverse event detection or severity estimation from surgical data
WO2021144230A1 (fr) * 2020-01-16 2021-07-22 Koninklijke Philips N.V. Method and system for automatic detection of anatomical structures in a medical image
US20210350934A1 (en) * 2020-05-06 2021-11-11 Quantitative Imaging Solutions, Llc Synthetic tumor models for use in therapeutic response prediction

Similar Documents

Publication Publication Date Title
US20220230750A1 (en) Diagnosis assistance system and control method thereof
JP7309605B2 (ja) Deep learning medical system and method for image acquisition
US20190110754A1 (en) Machine learning based system for identifying and monitoring neurological disorders
Sait et al. A deep-learning based multimodal system for Covid-19 diagnosis using breathing sounds and chest X-ray images
US20200258616A1 (en) Automated identification and grading of intraoperative quality
US10984894B2 (en) Automated image quality control apparatus and methods
JP7382306B2 (ja) Diagnosis support device, program, trained model, and learning device
Mazumder et al. Synthetic PPG signal generation to improve coronary artery disease classification: Study with physical model of cardiovascular system
CN110930373A (zh) A pneumonia recognition device based on neural networks
WO2024044573A1 (fr) Methods and apparatus for generating synthetic surgical data
CN105578958B (zh) System and method for context-aware imaging
US20230144621A1 (en) Capturing diagnosable video content using a client device
Keserwani et al. A comparative study: prediction of Parkinson’s disease using machine learning, deep learning and nature inspired algorithm
JP7144370B2 (ja) Diagnosis support device, diagnosis support method, and diagnosis support program
Schuster et al. Quantitative detection of substitute voice generator during phonation in patients undergoing laryngectomy
KR20220123518A (ko) Method and device for generating an improved surgical report using machine learning
JP2019107453A (ja) Image processing device and image processing method
US20240203567A1 (en) Systems and methods for ai-assisted medical image annotation
WO2021066039A1 (fr) Medical information processing program and medical information processing device
WO2023220646A2 (fr) System for providing post-operative care and monitoring via the human voice
US20240173011A1 (en) Artificial Intelligence System for Determining Expected Drug Use Benefit through Medical Imaging
WO2023220674A2 (fr) Surgery evidence report generation
Maugeon, Translation of patient uniqueness and treatment plan into pre-surgical planning of cranioplasty surgery using web access 3D model-based ICT tools (S16-02, Session 16: Planning/Imaging, Part II)
Tan et al. COVID-19 Chest X-Ray Classification Using Residual Network
WO2020255228A1 (fr) Image recording device, information processing method, image recording device, and image recording program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23858228

Country of ref document: EP

Kind code of ref document: A1