CN117615705A - System and device control using shape clustering - Google Patents
- Publication number: CN117615705A (application CN202280048502.4A)
- Authority: CN (China)
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Abstract
A computer-implemented system (C-SYS) and related methods control one or more devices in a procedure performed on a patient. A shape sensing system acquires shape measurement data. Prediction logic (PL) predicts an anatomical location, a type of procedure, or a phase of such a procedure based on the shape measurement data. The system may support navigation of a device within the patient, without images having to be acquired for navigation during the procedure. The prediction logic (PL) may be based on a machine learning model. Systems (TS, TDS) and methods for training such models and for generating training data are also described herein.
Description
Technical Field
The present invention relates to a system for controlling the operation of at least one device in a medical procedure, to a related method, to methods and systems for training a machine learning module, to methods and systems for generating training data, and to a computer program element and a computer-readable medium.
Background
Minimally invasive surgery (also known as "keyhole surgery") benefits both the patient and the healthcare provider. Instead of open surgery, interventional devices (such as catheters and surgical instruments) are introduced into the patient at an entry point, typically one or more tiny incisions. During the navigation phase, the interventional device is advanced from the entry point to a lesion within the patient for treatment and/or diagnosis. With minimally invasive surgery, the patient recovers faster, can be discharged earlier, and patient comfort is generally greatly improved compared to conventional invasive open surgery.
The intervention, in particular its navigation phase, is usually performed in an image-guided manner. During an interventional procedure, images of the internal anatomy of the patient are acquired by an imaging device, possibly as a real-time stream. Imaging devices typically use ionizing radiation. Such images are very useful, but come at the cost of radiation dose to the patient and the interventionalist. Ionizing radiation is a health hazard and can cause a range of health problems, including cancer.
Shape sensing techniques are described in US 2009/0137552 A1. Early on, the radiopacity of some interventional devices was exploited, as this allowed the current shape of such a device to be discerned in projection images. During operation of the device, its shape or state often changes, which can be monitored by acquiring a stream of X-ray images. However, as mentioned, this again comes at the cost of radiation dose. Where a shape sensing device as described in US 2009/0137552 A1 is integrated into the interventional device, the current shape of the interventional device can be inferred from shape measurements of the shape sensing device without acquiring an image.
Disclosure of Invention
There may be a need for additional support in interventional procedures.
The object of the invention is achieved by the subject matter of the independent claims, with further embodiments incorporated in the dependent claims. It should be noted that the aspects of the invention described below apply equally to the related method, to the methods and systems for training a machine learning module, to the methods and systems for generating training data, and to the computer program element and the computer-readable medium.
The systems and methods according to the invention are as defined in the appended claims.
The proposed systems and methods can be applied to a range of procedures, such as interventions, including vascular (guidewires, catheters, stent sheaths, deployment systems, etc.), endoluminal (endoscopy or bronchoscopy), orthopedic (K-wires and screwdrivers), and non-medical applications. The proposed control system and method may facilitate navigation and/or control of a large number of different devices to improve efficiency in the procedure.
Shape sensing as used herein may be of any type or technology, preferably one that does not use ionizing radiation. For example, shape sensing may be based on Rayleigh scattering (enhanced or conventional) or on fiber Bragg grating implementations in one or more shape sensing fibers. In particular, the shape sensing system may be based on continuously distributed shape sensing elements, or on discrete sensing elements arranged along a deformable (preferably flexible) elongated portion or body, referred to herein as the "(shape sensing) arm". At least a portion of the arm is positioned within the patient. The system may interpolate the discretely sensed data points to produce an instance of shape measurement data, or, alternatively, sensing may be continuously distributed along the interventional device. Preferably, such a shape sensing system is an optical shape sensing system comprising shape sensing elements disposed on one or more optical fibers over at least a portion of the length of the fiber/arm. Such optical shape sensing elements may comprise fiber Bragg gratings, whose optical properties change with a change in strain, shape, or temperature. They may be provided in a single optical fiber, or in multiple cores of a multicore optical fiber, forming the arm. The optical fiber or core is arranged to retrieve the shape of an interventional device (e.g., a guidewire or catheter, or other such generally elongated device) with which the optical fiber is integrated, to which it is attached, or with which it is otherwise spatially associated so as to extend with it. For example, such a device may include a central optical fiber or core, with other optical fibers or cores disposed at an angle and wrapped around the central one, as is known in the art. Instead of, or in addition to, such optical shape sensing, changes in the resistance of a piezoresistive flexible shape sensor, or changes in an electromagnetic field sensed by an electromagnetic sensor, may also be used to measure shape changes.
Shape measurement data may be generated in response to deformation of the portion of the arm as a user or robot moves the arm within the patient. The arm may be integrated with or coupled to the interventional device, so that a portion of the interventional device deforms with the arm. However, such an arm is not a strict requirement herein, and any other shape sensing technique is also contemplated. For example, the shape measurement data may include measurements of any one or more of curvature, stress, and strain at different locations along the arm. Other measurements representing, or convertible to, the shape of the arm/interventional device are also contemplated herein.
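Purely as an illustration (not the patent's implementation), the following sketch shows how discretely sensed data points along the arm might be interpolated into one instance of shape measurement data. The helper name, the number of sensing elements, and the cubic-spline choice are all assumptions:

```python
# Minimal sketch: interpolate discrete sensed 3D points along the arm into a
# uniformly sampled 3D curve usable as one shape measurement s. Illustrative only.
import numpy as np
from scipy.interpolate import CubicSpline

def resample_shape(arc_positions, points_3d, n_samples=128):
    """arc_positions: (k,) arc-length positions of k sensing elements [mm];
    points_3d: (k, 3) reconstructed coordinates at those positions."""
    spline = CubicSpline(arc_positions, points_3d, axis=0)
    u = np.linspace(arc_positions[0], arc_positions[-1], n_samples)
    return spline(u)                          # (n_samples, 3) dense curve

# Example: 8 discrete sensing elements along a 100 mm arm segment (toy data)
arc = np.linspace(0.0, 100.0, 8)
pts = np.stack([arc, 5.0 * np.sin(arc / 30.0), np.zeros_like(arc)], axis=1)
curve = resample_shape(arc, pts)              # one shape measurement instance
```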
In a particular embodiment, the shape sensing interventional device (e.g., a guidewire extending with a shape sensing arm) may be used coaxially within or around other, non-shape-sensing interventional devices or tools (e.g., catheters, sheaths, other surgical tools). By registering the two interventional devices together, a computing device may reconstruct the shape of the non-shape-sensing interventional device. Such registration may be performed, for example, by a hub that receives both devices, or by any other linking device capable of communicating shape information. Thus, the scope of use of the shape data presented herein extends even to non-shape-sensing interventional devices, as long as the latter are registered to a shape sensing interventional device.
The logic of the present invention may be based on a trained model, such as a machine learning model. The training is based on training data. Training may be a one-time operation or, preferably, repeated in a dynamic learning scheme in which measurements encountered during deployment are added to augment or build up the training dataset. An uncertainty value associated with the output or prediction may be used to decide which new measurements to add to the training dataset. Interactive user interface options for deciding which measurements to add are also contemplated herein.
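A hedged sketch of this uncertainty-gated augmentation follows. The threshold, the confidence-based uncertainty score, and the `confirm` callback (standing in for the interactive user interface option) are assumptions, not the claimed mechanism:

```python
# Illustrative sketch only: queue uncertain measurements for addition to the
# training set, optionally after user confirmation. Assumes a scikit-learn-style
# classifier exposing predict_proba; tau and all names are assumptions.
def maybe_augment(model, measurement, training_set, tau=0.2, confirm=None):
    probs = model.predict_proba(measurement.reshape(1, -1))[0]
    uncertainty = 1.0 - probs.max()      # low top-class probability = unsure
    if uncertainty > tau:                # model is unsure: candidate for learning
        label = confirm(measurement) if confirm is not None else None
        if label is not None:            # e.g., label supplied via a GUI annotator
            training_set.append((measurement, label))
    return uncertainty
```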
The training data may or may not include image data; preferably, it does not. The logic is capable of calculating a result based on the input shape measurement sensed by the shape sensing system, on previous shape measurements, and optionally on context data. The shape measurement and the associated context data may together form "enriched data". Using such enriched data allows for more robust outputs or predictions. For example, the variance of the training data set may be increased. The enriched data may provide better context and thereby support the training algorithm. The context data may be annotated or tagged by an annotator module ("annotator"). The annotator may be operated by a user through a suitable user interface (e.g., a graphical user interface). Alternatively, the annotator may be semi-automatic (with some user input) or automatic (no user input).
The previous shape measurement data on which the logic operates (in addition to the current shape measurement data) may relate to one or more patients other than the "current" patient. Thus, previous measurement data may be drawn, for example, from historical data of previous procedures collected from one or more different patients. The previous data form the training data on which the model of the logic is trained. The previous measurement data used as training data need not be historical data; it may instead be synthetic training data, generated manually by a user or automatically by a computer, and may be image-based or non-image-based. The previous data may be exclusively historical data, exclusively synthetic data, or a hybrid of synthetic and historical data.
During deployment, the "previous shape measurement data" may be substituted for or otherwise correlated with early shape measurement data collected during the same procedure for the same patient. Thus, rather than being based solely on current shape measurement data (which is also contemplated herein), the logic may co-process n shape measurements (n.gtoreq.1) previously (in time) with the current shape measurement data, preferably in a chronological order of those shape measurements obtained early by the shape sensing system. The block or buffer length n may be varied to account for the initial portion of the procedure. The buffer length n may be initially selected to be smaller and then expanded at a later stage in the procedure.
The predicted or output result comprises an anatomical location, a type of procedure, or a phase of a procedure. The procedure is preferably a medical procedure, such as an intervention relating to an organ or anatomical structure, a lesion, a target site, etc. The predicted or output anatomical location is an estimate of where in the patient the arm or interventional device currently is; the predicted type or phase of procedure is an estimate of which type or phase of procedure the arm/interventional device is currently being used for.
During deployment, the logic can calculate the output or prediction without using image data. During deployment, there is no need to acquire images of the patient (real-time or otherwise). The logic may operate solely on the shape measurement data, in particular to the exclusion of image data. The X-ray dose can thus be reduced or avoided entirely. The predicted or output result allows, for example, navigation without relying on acquired images. From the predicted or output result, the user can learn the position of the device/arm and which procedure, or which phase of the procedure, is underway.
An indication (graphical or otherwise) of the output or predicted result and/or the current shape measurement may be displayed on a display device to aid user navigation. However, controlling such a display device is only one example of how a predicted or output result may be used to control a device. Other control operations on one or more other devices are contemplated, in lieu of or in addition to controlling the display device as described and claimed herein.
The control system with the logic may be coupled with a training system to retrain the model in dynamic learning, based on shape measurements encountered during deployment. The current training data set may be augmented with new data, and the training system may rerun one or more training sessions, on user request or triggered automatically, for example based on the uncertainty associated with the context data, predictions, or outputs.
The model may be a clustering model, trained and built on training data that includes historical and/or synthetically generated shape measurements. The predicted or output categories are then represented by clusters in the training data. Training may be performed independently of images, such as X-ray images, although such images may alternatively still be used, particularly when synthetic training data is generated by a training data generator system as claimed and described herein.
Each category (e.g., cluster) of shape measurements may be associated with an anatomical feature or corresponding anatomical portion/location, or with a procedure type or phase, which may then be retrieved without any additional information beyond the shape data, although context data may alternatively still be used if desired. If a clustering scheme is used, the current measurement may be displayed together with a graphical presentation of some or all of the clusters. However, the models contemplated herein for the logic are not limited to clustering models. Other (machine learning) models, such as artificial neural networks, autoencoders, and others, may also be used in embodiments. In other words, the predicted or output categories need not represent clusters in the training data, but may relate to classifier results, etc. A graphical indication of the category may also be displayed.
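As a hedged illustration of the clustering variant (not the claimed implementation), the following sketch clusters flattened shape curves and maps a live measurement to the anatomical label of its nearest cluster. The toy data, cluster count, and label list are assumptions:

```python
# Illustrative sketch only: cluster flattened shape measurements and predict
# an anatomical label for a new measurement from its nearest cluster.
import numpy as np
from sklearn.cluster import KMeans

X_train = np.random.rand(500, 128 * 3)    # 500 flattened (128, 3) curves (toy data)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train)

# One anatomical label per cluster, e.g., assigned via the annotator module
labels_per_cluster = ["aorta", "iliac artery", "renal artery",
                      "femoral artery", "heart"]

def predict_location(measurement):
    cluster = int(kmeans.predict(measurement.reshape(1, -1))[0])
    return labels_per_cluster[cluster]

print(predict_location(np.random.rand(128 * 3)))
```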
To enhance performance, training and/or deployment may be based not on the raw shape measurements (although it may still be), but on measurements first converted into a new representation. Training/deployment is then based on the transformed shape measurements. As used herein, a "representation" relates to the system, parameters, and numbers used to describe or express a shape measurement. A change of representation may include changing the number and/or nature of such systems, parameters, and quantities. Changes of representation may include coordinate transformations, changes of primitives (basis elements), dimension reduction, changes of sparsity relative to a set of primitives, and the like. The transformation may map elements of one parameter space (e.g., a vector space) to elements of another space or subspace, etc. Such transformations may help reduce the response time of the logic, may make training and/or deployment more robust, and/or may reduce memory requirements.
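A minimal sketch of such a representation change, assuming principal component analysis as one possible dimension-reduction transform (the sizes are arbitrary toy values):

```python
# Sketch only: learn a compact linear representation of raw shape measurements
# and transform them before training/deployment. PCA is one assumed choice.
import numpy as np
from sklearn.decomposition import PCA

X_raw = np.random.rand(500, 128 * 3)      # raw flattened shape measurements (toy)
pca = PCA(n_components=16).fit(X_raw)     # learn the reduced representation
X_low = pca.transform(X_raw)              # training/deployment then use X_low
```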
The interventions supported herein include manual and/or robotic manipulation of interventional devices.
As noted, although not required herein, the proposed systems and methods may be used in conjunction with any imaging system, including X-ray, ultrasound, magnetic resonance imaging, CT, OCT, IVUS, endoscopy, and the like. Alternatively, the predicted or output category may be used to decide whether to register the interventional device/arm with a given image.
"user" refers to a person (e.g., medical or other person) operating an imaging device or supervising an imaging process. In other words, the user is typically not a patient.
In general, the term "machine learning" includes computerized devices (or modules) that implement a machine learning ("ML") algorithm. Some such ML algorithms may use explicit modeling. They operate to adjust the machine learning model. The model may contain one or more functions with parameters. The parameters are adjusted based on the training data. The adjustment of the parameters may be performed under the direction of an objective function defined in parameter space. The model is configured to perform ("learn") tasks. Some ML models (e.g., implicit modeling) include training data. Such adjustment or updating of the training dataset is referred to as "training". Processing data to perform tasks using the trained model is referred to herein as deployment or reasoning. Training may be a one-time operation or may be repeated after different deployments. Generally, with training experience, the task performance of the ML module can be significantly improved. Training experience may include contacting appropriate training data. Training may be supervised or unsupervised. Task performance improves when the data better represents the task to learn. "training experience helps improve performance if the training data well represents an example distribution that measures the final system performance. Performance may be measured by objective testing based on output generated by the module in response to input of test data to the module. Performance may be defined in terms of a certain error rate to be achieved for a given test data. See, for example, T.M. Mitchell, "Machine Learning" (page 2, section 1.1, page 6, 1.2.1, mcGraw-Hill, 1997).
The terms "tag"/"annotation"/"label" are used interchangeably herein to a great extent and relate to correlating additional/tandem associated data with shape measurements during training or deployment. The term "tag" is used herein primarily (but not exclusively) in relation to training data, in particular in relation to supervised learning arrangements, wherein the tag relates to a corresponding target.
The term "anatomical location" need not be a point location, but may include a region or may relate to anatomical features.
The term "category" is the prediction result or output result of a logical prediction/calculation. In general, the category may be associated with a cluster or category label. The categories may be associated with anatomical locations, types of surgery, or phases of surgery. Although in the examples/embodiments described herein reference is mainly made to the indication that the category is an anatomical location, this is not a limitation herein. In particular, each reference to an "anatomical location" in this disclosure may be interpreted as replacing, or otherwise referring to, the "surgical type" or "surgical stage".
An "interventional device", preferably any medical device, tool, instrument or instrument, having a generally elongated shape or such portion for at least partial introduction into a patient. Preferably, such an interventional device is used with a shape sensing arm of a shape sensing system.
As used herein, "context data" refers to data that is associated with a shape measurement value. This includes data that affects shape measurements (and the type of prediction or output) and/or describes the manner in which shape measurements are collected. Data associated in tandem herein may be used to improve the performance of the logic.
"prediction" as used herein refers to the computation or operation of input data by logic (e.g., computational means or computer program) to output results based on a model.
Drawings
Exemplary embodiments of the present invention will now be described with reference to the following drawings, in which:
FIG. 1 shows a block diagram of an apparatus that supports interventions;
FIG. 2 illustrates the operation of a shape sensing system that may be used in the apparatus of FIG. 1;
FIG. 3 shows a block diagram of a control system that may be used with the apparatus described above and that is capable of processing shape measurements and a training system for training a machine learning module;
FIG. 4 illustrates a more detailed block diagram of a control system including a trained machine learning module;
FIG. 5 illustrates exemplary shape measurements defining clusters representing anatomical locations;
FIG. 6 illustrates a block diagram of a training algorithm for training a machine learning model for classifying shape measurements;
FIG. 7 illustrates an architecture of a neural network type machine learning model;
FIG. 8 illustrates a network of encoder-decoder types that may be used in embodiments herein to learn representations of shape measurements;
FIG. 9 illustrates a training system for training a machine learning model;
FIG. 10 illustrates an embodiment of a control system with dynamic learning capabilities;
FIG. 11 illustrates image-based generation of synthetic training data for use in training a machine learning model;
FIG. 12 illustrates image-based generation of training data using registration;
FIG. 13 illustrates a verification module for verifying classification of shape measurements;
FIG. 14 illustrates the operation of the rebalancing module for changing the size of clusters in training data;
FIG. 15 illustrates the evolution of shape measurements and their annotations and context-dependent data over time;
FIG. 16 shows a graphical representation of the evolution of a shape measurement over time;
FIG. 17 illustrates shape measurements representing an undesired bending event;
- FIG. 18 shows a control operation of registration means based on the predicted category;
FIG. 19 shows a flow chart of a computer-implemented method of controlling a medical device;
FIG. 20 illustrates a computer-implemented method of generating training data; and
FIG. 21 illustrates a computer-implemented method of training a machine learning model.
Detailed Description
Referring to fig. 1, a block diagram of an apparatus that may be used to support medical interventions or other medical procedures is shown. The apparatus includes an imaging apparatus IA (sometimes referred to herein as an "imager") and a computer-implemented system SYS for processing some data generated during an intervention.
The supported intervention may be one using a medical tool or instrument, such as a balloon catheter BC, to treat a stenosis in a human or animal patient. Other catheter types, and other medical interventions or procedures using different tools, such as mitral valve repair procedures, are also contemplated herein.
Preferably, the imaging device is an X-ray imaging device that generates a single X-ray exposure, as in radiography, or, as in fluoroscopy or angiography applications, a series of X-ray exposures at a suitable frame rate, forming a moving picture in real time. Still images are also contemplated. A contrast agent may be injected through an injector to enhance image contrast. The images generated by the imager IA may be displayed on the display device DD to assist a medical user (e.g., an interventional cardiologist) in performing the intervention.
It is mainly envisaged herein that online support is provided by the computer system SYS during an intervention, but the computer system SYS may instead be used for pre-or post-intervention investigation. For example, the system SYS may be used for preparing or planning an intervention, or for analyzing data generated during a previous intervention or interventions, to plan or inform subsequent treatments, interventions or other measures. Thus, the computer system may be a stand-alone system, not necessarily associated with any imaging device, and the processed data may not necessarily be real-time data, but may instead be historical data retrieved from a database or other storage device.
Broadly, the system SYS may include a (sub)system C-SYS for controlling some device/system Dj during a medical procedure. In some medical procedures (e.g., interventions), certain tools or devices (referred to herein as "interventional devices ID") may be used. One or more interventional devices ID are introduced at least partially into the patient for treatment or diagnosis. The introduced portion of the device ID is occluded from view, so there is generally no, or only a very limited, line of sight. During the navigation phase of the intervention, the introduced portion of the device must be navigated to a target within the patient. The control system C-SYS may provide informational support to the user during navigation or other phases of the medical procedure. The control system C-SYS (sometimes referred to herein as a "controller") may use machine learning.
Before explaining the control system C-SYS in more detail, some exemplary medical procedures and the imaging device IA will first be described, to better illustrate the operation of the proposed control system C-SYS.
Turning first to the imaging apparatus IA: it is preferably of the X-ray type and is configured to acquire X-ray projection images. The image may be a single still image, as acquired in radiography, or may be part of a stream of images ("frames") forming a video feed, as in fluoroscopy. The still image or image stream may be displayed on the display device DD.
The imaging apparatus IA comprises an X-ray source XS and an X-ray sensitive detector D. The imaging apparatus IA preferably allows images to be acquired from different projection directions d. The imaging apparatus may comprise an optional gantry GT to which the X-ray source XS and/or the detector D are connected. By rotating the gantry (and with it the source XS and detector D) around the lesion ST or ROI, projection images can be acquired from different projection directions. Such gantry-based imaging devices include C-arm or O-arm systems, which are primarily contemplated herein, as are computed tomography (CT) scanners. Furthermore, non-gantry-based imaging solutions are also contemplated, such as mobile or portable imaging devices, in which there is no physical connection, or no permanent physical connection, between the detector D and the radiation source XS. The image I processed by the system SYS may be a projection image in the projection domain, as recorded by the detector D, or may comprise a reconstructed image in the image domain, obtained by a computed tomography reconstruction algorithm.
In more detail, during imaging the X-ray source XS emits an X-ray beam, which propagates in the projection direction d and interacts with patient tissue and/or the balloon catheter or other medical device BC, so that a modified X-ray beam emerges on the far side of the patient and is detected at the detector D. Data acquisition circuitry (not shown) of the detector D converts the received modified radiation into a set of numbers ("pixel values"), preferably stored in a matrix per frame/image, with respective rows and columns. The rows and columns define the size of the image/frame. The pixel values represent the detected intensities. The pixel values of each frame/image may be used by a visualization component VC to display the image on the display device DD during the intervention. Hereinafter, we will not distinguish between frames and still images, but simply use the term "image" to refer broadly to both.
The user may control the operation of the imaging device, in particular the image acquisition, via an operator console or control unit OC. The operator console may be arranged as a dedicated computing unit in communication with the imaging device. The operator console may be located in the same room as the imager IA or in an adjacent control room. In various embodiments, remote control of the imaging system IA is contemplated, with the control unit OC connected via a suitable communication network and possibly located remote from the imaging device. Autonomous imaging systems are also contemplated herein, in which the control unit OC may operate fully or semi-autonomously, with little or no user input.
The imaging control unit OC controls the imaging operation by setting some imaging parameters IP. The imaging parameters determine or at least influence the contrast mechanism and thus the image quality processed by the system SYS. The intensity values (and pixel values) captured in the recorded images may be affected by changing parameters, and this is often the case, as different imaging settings may be required for different phases of the medical procedure, which is typically described by a medical imaging protocol. In X-ray imaging, imaging parameters include, for example, settings of imaging geometry, etc. The imaging geometry describes the spatial relationship between the patient being imaged, or at least the region of interest ("ROI"), and the plane of the X-ray source and/or detector D. The imaging geometry thus determines, inter alia, the projection direction d. The imaging geometry determines the current FOV. During the interventional procedure, the imaging geometry and FOV may be dynamically changed as may be required by the user to set different imaging parameters. Other imaging parameters of the imaging geometry may include collimation parameters. The collimation parameters are related to a collimator (not shown) of the imaging device IA. The collimator may include a plurality of X-ray opaque leaves that are adjustable by an operator console OC to set collimation parameters to define an aperture of a desired shape and/or size. Controlling the shape/size of the aperture allows limiting or widening the X-ray beam, thus further determining the FOV.
The control unit OC may also allow the user to operate robots that can assist in moving or operating the interventional device on the guidewire, for example by means of a chain of actuators and end effectors. During some critical phases of the intervention, a very steady hand may be required. While trained surgeons are often very adept at this, robots can still provide assistance, especially when the caseload or the pressure to treat a large number of patients is high.
Imaging parameters can also affect the nature of the X-ray beam, thereby affecting the manner in which radiation interacts with patient tissue, and thus the contrast of the captured image. The nature of the X-rays can be determined in particular by the arrangement of the X-ray source XS. In an embodiment, the X-ray source XS is an X-ray tube (simply referred to as "tube") comprising a cathode and an anode. One such imaging parameter may be related to the amperage and/or voltage of the X-ray tube. Imaging parameters of tube operation are referred to herein collectively as (X-ray) source parameters. The X-ray source parameters may in particular dictate the energy of the X-ray beam. The source parameters may depend on the particular imaging scheme or mode. In embodiments, the primary application contemplated herein is fluoroscopic X-ray imaging. The C-arm or U-arm imaging system IA sketched in FIG. 1 may be used for fluoroscopy. In fluoroscopy, the envisaged beam energy is typically in the range of 50-60 keV. Other applications may require other energy ranges.
One type of (medical) intervention using medical equipment that at least sometimes resides within the field of view (FOV) of the imager is the treatment of cardiac stenosis by balloon angioplasty, also known as percutaneous transluminal angioplasty (PTA). The FOV is the portion of space being imaged by the imaging device. A stenosis can be treated by advancing the distal portion of the balloon catheter BC through the patient (from the entry point EP) to the lesion (e.g., the stenosis site ST, a diseased portion of the vessel). The distal portion is formed as an inflatable balloon. Once the target area (the stenosis site) is reached, the balloon is inflated to treat the stenosis. The force exerted on the vessel wall VL by the inflated balloon widens the stenosis. A guidewire GW may be used to navigate and push the distal treatment portion of the catheter to the site ST. A range of different balloon catheter types is contemplated herein, including over-the-wire (OTW) and rapid exchange (RX) catheters, among others.
Other medical procedures or interventions contemplated herein may require other medical devices, such as Transcatheter Aortic Valve Replacement (TAVR) or Transcatheter Aortic Valve Implantation (TAVI) interventions. In these or similar procedures, one or more tools/devices may at least sometimes reside in the field of view of the imager IA. Some tools/devices may be movable and/or capable of assuming different shapes/states.
As briefly mentioned above, (medical) procedures typically rely on interventional devices that are used at least during the part of the intervention. These interventional device IDs may reside at least partially and sometimes within the patient. These interventional medical device IDs may be introduced externally into the patient during an intervention, such as the mentioned balloon catheters, guidewires, guide catheters, microcatheters, introducer sheaths, pressure guidewires, robotic guide wires and instruments, sensing catheters, imaging systems (such as transesophageal echocardiography (TEE) probes or intravascular ultrasound (IVUS) or optical coherence tomography devices), any catheters with sensing capabilities (such as spectroscopic sensing), laser atherectomy devices, mechanical atherectomy devices, blood pressure devices and/or flow sensor devices, needles, ablation electrodes, cardiac electrophysiology mapping electrodes, balloons, endoscopes, stents, stent retrievers, aneurysm coils, and a number of other therapeutic or diagnostic tools, such as mitral valve clips, pacemakers, implants, other prosthetic heart valves, blood vessels, etc.
Together, imaging device IA and computer-implemented system C-SYS form an image-guided therapy or diagnostic system that supports therapeutic and/or diagnostic interventions by a user for a patient.
Examples of such interventions include the treatment or diagnosis of coronary artery disease, as described above. Other interventions related to the heart include the implantation of cardiac devices such as pacemakers. Other types of vascular interventions involve peripheral blood vessels, such as the treatment of thrombosis or varicose veins. Interventions in and around the circulatory system and/or vasculature do not, however, exclude other interventions, such as bronchoscopy for diagnosis and treatment of the lungs, interventions for treatment or diagnosis of the bile duct or urinary system, and others.
A medical intervention is a medical procedure. Medical procedures vary in type and may consist of a number of different phases, such as a navigation phase. Different phases may require different types of medical instruments or implements and the administration of different types of medication; managing the medical procedure involves arranging and coordinating these phases, for example by summoning appropriate personnel with the necessary medical expertise, allocating subsequent treatment or diagnostic resources, and the like.
The operation of the imaging device IA ("imager") may be controlled from the operator console OC by adjusting certain imaging parameters that affect the type and quality of the acquired images. For example, the imaging device may allow its imaging geometry to be adjusted so as to acquire, at different FOVs, the images best suited for a particular type and/or phase of the medical procedure. For example, at different phases of the medical procedure, the pose of the detector D and/or source XS in 3D space relative to at least a portion of the patient may be changed, so that images along different projection directions d may be acquired. The control system C-SYS may automatically request adjustment of the imaging geometry to adjust the FOV, or the adjustment may be controlled by a robot or a manual operator. Projection or tomographic X-ray imaging, such as CT or radiography, is not the only imaging modality contemplated herein. Other imaging modalities may include ultrasound, MRI, and nuclear imaging.
Image-guided support allows real-time images of the interior of the patient to be acquired, thereby monitoring the position of the device ID within the patient relative to the surrounding anatomy and the target site (e.g., a lesion or other specific interventional region). As mentioned, a stream of such images may be acquired to support navigation of the device ID to the lesion. Such images constitute interventional image-guidance support. While image-guided support is very useful for minimally invasive interventions (such as those mentioned above), it comes at a cost. Imaging devices are expensive and complex, and require maintenance during long-term use to absorb the effects of wear. For example, anode disks in the X-ray tube source XS ("tube") of an X-ray type imager are repeatedly subjected to extremely high temperatures and temperature gradients and therefore require replacement. Furthermore, some imaging modalities, such as X-ray and nuclear imaging, use ionizing radiation, which imposes a radiation-dose cost on the patient and staff that must be balanced against the benefits. Patients may sometimes need to be injected with contrast agent to enhance image contrast, which can negatively affect the patient's health; for example, contrast agents can damage the kidneys.
To reduce the dependency on the imaging apparatus IA, the computer-implemented support system SYS comprises the control system C-SYS, which can support navigation of the interventional tool in the patient without using the imaging apparatus or images. The location of the medical device ID can thus be known without acquiring an image. As will soon become apparent, the control system C-SYS may be used, instead of or in addition to navigation support, to control other types of systems or devices Dj, preferably during a medical procedure. Although in principle the proposed control system C-SYS could completely replace imaging, this is not envisaged in all embodiments herein. Rather, cooperation between the imager IA and the newly proposed control system C-SYS is contemplated, to help reduce the frequency of use of the imager IA and the reliance on images during medical procedures.
Broadly, the proposed control system C-SYS may collect non-image-based information about the anatomical location where the interventional device is currently located. This information may be rendered graphically and displayed to the user on a display device as an indication of the current location of the interventional device ID. Additionally or alternatively, based on the collected location, the system may control other aspects of the medical procedure, such as any one or more of the devices or systems Dj that may be needed for different phases of the medical procedure.
To enable such collection of non-image-based location information, the control system C-SYS may comprise, or at least cooperate with, a shape sensing system SSS. The distributed sensing system SSS is configured for shape sensing without using ionizing radiation. As schematically shown in fig. 1, the shape sensing system SSS preferably comprises an elongated device (which may be referred to herein as the "(shape sensing) arm A"). As will be explained in detail below, the arm A allows its own shape to be measured as it deforms during in-vivo navigation. The arm may be associated with an interventional device ID, so that the arm deforms when the interventional device is used or moved during the intervention. Specifically, in some embodiments, the shape sensing system measures deformation along the length of its elongated arm A to yield a 3D curve as an indication of the geometry of the current shape of the arm A. The 3D curve is represented by a shape measurement of the arm A resulting from its deformation. In a simple embodiment of the shape sensing system SSS, measurement of a 2D curve may suffice. The nature of the shape measurement (which may be represented herein by the italic "s") is explained in more detail below. It has been found that the spatial information captured in the shape measurement (which may be represented by a 3D curve) can be associated with the corresponding anatomical location where the arm A is currently located; the shape measurement itself may likewise be so associated. The anatomical location that can be deduced from the shape measurement s can be used to control the other devices Dj, including a display device DD on which an indication of the current anatomical location LAi can be represented graphically, textually, or both. Control of the devices Dj by the control system, based on the current shape measurement s, may be performed by means of a suitable control interface CIF.
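A hedged sketch of this control flow follows; the prediction callable, the device interface, and all names are assumptions standing in for the prediction logic, the devices Dj, and the control interface CIF:

```python
# Illustrative only: route the location predicted from a shape measurement s
# to the connected devices through a control interface. Names are assumptions.
def on_new_measurement(s, predict_location, devices):
    location = predict_location(s)            # e.g., "renal artery"
    for device in devices:                    # display DD, imager IA, robot, ...
        device.on_anatomical_location(location)
    return location
```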
As briefly mentioned before, during use the shape measurement collection arm A may be integrated into, or at least attached to, the elongated portion of the interventional device ID so as to extend therewith, such that both deform in the same way during an intervention. In response to the deformation, different shape measurements are acquired over time and analyzed by the newly proposed control system C-SYS to infer the current anatomical position of the arm A/interventional device ID, and based thereon one or more support devices/systems Dj may be controlled. As in most embodiments the arm A and the interventional device are coupled, reference to the shape or deformation of the arm A may also be construed as reference to the associated interventional device ID, and vice versa. The shape sensing arm A is preferably arranged to extend alongside the elongated interventional device. The shape sensing arm A, when used with an interventional device, makes the interventional device a shape sensing interventional device. The use of shape data as proposed herein may be "extended" to additional (e.g., non-shape-sensing) interventional devices by means of a suitable coupling device, such as a hub device as described in WO2017/055620 or WO 2017/055950. The two interventional devices may be registered with each other. Such registration may be performed, for example, by the hub device being configured to receive at least a portion of both interventional devices. In this way, shape data is imparted to two or more interventional devices. Thus, the shape sensing arm may be directly or indirectly coupled to one or more interventional devices. However, the use of shape sensing arm A in combination with one or more such interventional devices does not exclude other use scenarios herein; e.g., arm A may be used alone, without an associated interventional device ID, during exploration.
The shape measurement value s may be stored in a database DB, which will be described in detail later. The newly proposed control system C-SYS based on shape measurement values can advantageously be implemented by means of a machine learning module MLM.
Fig. 2 is a diagram of the operating principle of the proposed shape sensing control system C-SYS.
The common features of the above-described medical procedures are shown in the schematic diagram of fig. 2A. The above-described vascular, biliary or urinary interventions can be understood as the general task of probing a complex cavity system SC. In a simplified and highly schematic way, the system SC is shown in fig. 2 as a system formed by intersecting lines. As sketched in fig. 2A, the cavity system SC can be seen as a network formed by interconnected pipes with junction points. The task of the interventional therapist is to advance the interventional device ID (e.g., a catheter) from the entry point EP along a path (indicated by the dotted line type) to the target location TG. For ease of illustration, the path in fig. 2A is shown as extending alongside the system SC; it can be understood as the path of the arm A extending within the system SC. The entry point may be a natural orifice or may be formed via an incision in the body. During navigation, the interventional device ID traverses along the tubular structure. The tubular junctions in fig. 2A are represented as simple intersections of lines, which must be navigated successfully to reach a target location TG, such as a stenosis of a given artery or vein of a patient's heart. Not all intersections or branch points need be simple bifurcation points; they may be multi-furcation points, forking into multiple pipes, making navigation more difficult. However, for simplicity we refer to these bi- or multi-furcated points herein as intersections, with the understanding that there may be more than two tubes meeting at an intersection.
The interventional device will pass through a number of anatomical locations AL1-4 on its way from the entry point EP to the target TG. For ease of illustration, only four are shown, but there may be more or fewer. By turning "correctly" at each junction, the arm A/device ID reaches the target. Some or each such junction may represent a respective different anatomical location. By turning correctly during a given medical procedure, the interventional device is forced to assume certain shapes in order to conform to the curvature and course of the tubular structure and its intersections. Since the distributed shape sensing arm A is attached to or integrated with the interventional device, the arm A also experiences the same deformations, which will yield different shape readings at the different such positions AL1-3, at different time instances T1, T2, T3, during the passage towards the target TG. Thus, as shown in fig. 2A, each shape measurement value s_j can be regarded as being associated with the corresponding anatomical location AL_j.
FIG. 2B provides more details of the shape sensing system SSS contemplated herein in some embodiments. In some embodiments (but not necessarily all embodiments), local shape sensors LS_j are arranged at suitable increments ΔL along the length of the deformable arm A. The local sensors LS_j of arm A communicate, wired or wirelessly, with the data processing unit DPU of the shape sensing system SSS. The data processing unit or interrogator may include, or communicate with, a light sensor LSE and a light source LSO. The light source LSO is operable to transmit an optical signal to the local shape sensors LS_j, and the light sensor LSE captures the respective light contributions reflected by, or otherwise caused by, the local shape sensors LS_j. The data processor means DP, preferably comprising an A2D (analog-to-digital) conversion stage, processes the reflected light signals into numbers representing the shape measurement value s.
As known to those of ordinary skill in the art, the arm A may include one or more optical fibers with integrated fiber Bragg gratings (FBGs) that act as strain sensors for detecting shape information and providing shape sensing data indicative of the shape of the shape sensing device. For example, the shape sensing system SSS may be implemented using Fiber Optic RealShape ("FORS") technology, in which case the shape sensing data includes FORS data including, but not limited to, the 3D shape, curvature, and axial strain of the shape sensing device. As an alternative to fiber Bragg gratings, it is contemplated herein in embodiments that the inherent backscatter of conventional fibers may be utilized. One such method uses Rayleigh scattering in standard single-mode communication fibers. In alternative embodiments, the shape sensing system SSS may be implemented using shape sensing techniques other than optical shape sensing. For example, the shape sensing system SSS may include transducers (e.g., piezoelectric sensors), electrodes, and/or electromagnetic sensors disposed along at least a portion of the arm A, such that the sensing system SSS is able to determine the shape of the arm A. For example, if three or more electromagnetic sensors are attached to arm A, the shape may be determined from the three positions, providing the position and orientation of arm A, and thus of the device ID associated with arm A. Fig. 2B shows a sensing system SSS with a single arm A, but multi-arm systems with more than one such elongated arm are also contemplated, to capture more complex shapes. The shape measurement may then be represented as a curve with bifurcations or multiple furcations. It is also contemplated herein to replace the elongated (non-imaging) probe/arm with a shape sensing patch to capture shape measurements as a surface, rather than as a linear curve parameterizable by a 1D parameter.
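By way of illustration only, the following minimal sketch shows how a sequence of local curvature readings, such as those derived from FBG-type strain sensors, could be integrated into a curve. It is a simplified planar (2D) illustration with an assumed sensor spacing, not the reconstruction algorithm of any particular shape sensing product, which in practice works in 3D and also accounts for twist and axial strain:

```python
# A minimal 2D sketch: integrate curvature samples kappa(x), taken at
# increments dL along the arm, into a curve by accumulating the tangent
# angle. Sensor spacing and curvature values are illustrative assumptions.
import numpy as np

def curve_from_curvature(kappa, dL):
    """Integrate curvature samples into 2D points along the arm."""
    theta = np.concatenate([[0.0], np.cumsum(kappa) * dL])  # tangent angle
    x = np.cumsum(np.cos(theta)) * dL
    y = np.cumsum(np.sin(theta)) * dL
    return np.stack([x, y], axis=1)

dL = 0.005                        # assumed 5 mm sensor spacing
kappa = np.full(200, 2.0)         # constant curvature 2 1/m, i.e. radius 0.5 m
pts = curve_from_curvature(kappa, dL)   # points trace an arc of a circle
```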
It should be appreciated that while arm A is typically attached to, or integrated into, a corresponding interventional device ID, this need not be the case in all embodiments. In some alternative embodiments, arm A may be used alone and advanced through the patient, although this may be of less interest for most medical applications. For diagnostic purposes, however, purely shape-based exploration is still contemplated. Such shape exploration may be of interest in non-medical fields, for example in the exploration of cable/hose tubing systems of machinery, such as engines, or hydraulic systems of ships, airplanes or automobiles, etc. In these standalone embodiments, as well as in embodiments where the arm A is attached to the interventional device ID, the deformable elongated arm A may be made of a biocompatible material (e.g., a polymer of suitable stiffness) with FBG sensors or any other sensors embedded therein and arranged along the length of the arm A.
While various embodiments of the arm a/shape sensing system SSS have been described above, any shape sensing technique is also contemplated herein, so long as the technique allows the shape of the device ID/arm a to be determined by measuring a shape quantity (e.g., strain, curvature, bending, torsion, etc. of the device ID/arm a) or any other quantity representative of such (geometric) shape.
Referring now to FIG. 2C, a qualitative plot is shown illustrating the relationship between the various anatomical locations AL_j and the shape measurement readings s, which are displayed along the respective axes of the plot in fig. 2C.
When the shape measurement arm a is passed through a tubular system of interest (e.g. through the vasculature of a patient) with or without an interventional device, the corresponding anatomical location will be identified mainly by the characteristic curvature pattern recorded by the shape measurement arm a.
The interventional device ID and the arm A associated with it are moved through the tubular system of interest in certain increments, preferably small ones, as the advancement has to be handled carefully by the interventional robot or the user to avoid injury. Arm A also has a certain length, for example 10-30 cm, and, as can be seen in connection with fig. 2A, not all consecutive shape measurement readings are associated with different anatomical locations of interest. Some measurements, especially consecutively acquired ones, may relate to the same anatomical location AL_j, as illustrated by the plateaus P_j in the measurement plot of fig. 2C.
Once some characteristic of the shape measurement has changed sufficiently as the arm A snakes from one given anatomical location to the next, this may indicate that the next anatomical location along the arm's passage through the patient has been reached. This change occurs gradually. Thus, there will be transition regions T1-T3 between the plateaus P1-P5, in which the measured values do not (fully) correspond to either of the two anatomical locations concerned (the one the arm A has just left and the next one the arm A will reach). Although point locations are not excluded herein, it is thus understood that the term "anatomical location" as used herein does not generally denote a point location, but rather a range or region of locations that produces a characteristic shape measurement reading, such as the readings recorded by arm A as it passes through that location.
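The plateaus and transition regions just described could, for instance, be told apart by thresholding the change between consecutive shape measurement features. The following sketch illustrates this idea; the feature values and the tolerance are chosen purely for illustration and are not taken from the text:

```python
# A hedged sketch: consecutive readings whose feature distance stays below
# a tolerance are grouped into one plateau; a larger jump marks a transition.
import numpy as np

def segment_stream(features, tol):
    """Label each reading with a plateau id, or -1 inside a transition."""
    labels = [0]
    seg = 0
    for prev, cur in zip(features[:-1], features[1:]):
        if np.linalg.norm(cur - prev) < tol:
            labels.append(seg)      # reading still matches the current plateau
        else:
            seg += 1                # sufficiently large change: next region
            labels.append(-1)       # mark the jump itself as transitional
    return labels

feats = np.array([[0.00], [0.01], [0.50], [0.52], [0.51]])
print(segment_stream(feats, tol=0.1))   # -> [0, 0, -1, 1, 1]
```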
Considering fig. 2A-2C as a whole, it is apparent that there is an underlying relationship or mapping between the different anatomical locations on the one hand and the time series of shape measurements collected by arm A during use on the other. The proposed C-SYS system is configured to exploit this underlying relationship or correlation. This relationship between anatomical location and shape measurement is difficult to model accurately in an analytical manner. However, it has been found herein that machine learning schemes do not require such accurate analytical modeling, but are instead able to determine the underlying relationship with sufficient accuracy from training data.
The training data TD preferably comprises previous shape measurements collected from a suitably wide range of patients in corresponding interventions performed in the past. The training data may thus represent historical shape measurements collected for different patients with different biological characteristics. In some cases, it has been observed that historical measurements from a single patient, or from very few (e.g., 5 or fewer) patients, may constitute sufficient training data. The training data should represent shape measurements collected for the medical procedure of interest, but may also relate to different procedures involving the same anatomical region (e.g., heart vessels, etc.). In other embodiments, the previous shape measurements may relate to shape measurements collected for the same patient during an earlier medical procedure, or indeed to shape measurements collected at an earlier point in the current procedure. The latter embodiment is particularly useful in combination with a dynamic learning paradigm, as will be described more fully below. Furthermore, in any of the above embodiments, the previous measurements need not stem from live measurements made during a real medical procedure, such as the interventions described; they may instead be synthesized automatically, or by the user, through computer simulation, or obtained in laboratory in vitro (phantom) or in vivo (animal model) settings. This latter aspect of synthetic training data generation will be set forth in more detail below.
In machine learning ("ML"), the training data may be analyzed collectively to determine the underlying relationship between anatomical locations and shape measurements. By virtue of this analysis, ML is able to predict the anatomical location when given a new shape measurement, one not taken from the training data but newly measured with the shape sensing arm A in a given future or new intervention.
Referring now to FIG. 3A, a block diagram of a control system C-SYS contemplated herein is shown.
As mentioned, the system C-SYS is based on machine learning and comprises a trained machine learning module MLM. The module may employ a machine learning model M (described further below) that is adjusted during the training or learning process. Once sufficiently trained, the system C-SYS can be deployed and can receive new shape measurements collected during a given intervention, to predict anatomical locations during clinical use.
During deployment, the current (new) shape measurements are processed by the trained machine learning module, e.g., applied to the model, to produce a result such as a category c. In other words, the machine learning module may attempt to classify new shape measurements into one of multiple categories c_j. Each category c_j is preferably associated or correlated with a given corresponding anatomical location AL_j. The calculated category result c may be used directly, or may be converted or mapped into other relevant information items or results of interest, such as an indication of the type or stage of the procedure. The conversion by the converter CV may be performed in a non-ML manner, such as by a table or database query or a LUT. The category c, or its conversion result CV(c), can then be used for controlling the mentioned devices Dj, such as a monitor or display device DD, or an augmented reality headset, etc. The anatomical location, or the type or stage of the procedure, may be visualized in a graphical display shown on a monitor or in the headset. Other applications of the categories calculated by the machine learning module are discussed in further detail below.
Thus, the computerized system, and in particular the machine learning module, may be understood as being operable in at least two stages or phases: a learning/training phase and a deployment phase. Generally, the training phase involves computational processing of the training data in order to learn the above-mentioned underlying relationship, such as the correlation between anatomical locations and shape measurements.
If a sufficiently large and representative training dataset is used that exhibits enough variability to allow such an underlying relationship to be captured, then at the end of the training phase the machine learning model can be considered sufficiently trained and may then be deployed as shown in fig. 3A. A relatively large body of training data (e.g., N >> 100, depending on the application) covering a broad range of situations (e.g., patients with different anatomies, users operating different interventional device IDs, different interventional device IDs, shape sensing devices with different degrees of wear or calibration quality) will improve the performance of the trained model and will represent well the situations likely to be encountered during clinical use.
Preferably, learning is not a one-time operation (although this may still be the case in some embodiments), but is repeated one or more times, at intervals or at a certain rate, during deployment. Learning may be re-triggered if a new class of shape measurements is encountered during deployment that is not represented at all, or not adequately, by the original training dataset. This kind of dynamic learning allows the machine learning module MLM to continuously adjust to the new data it encounters during deployment.
The block diagram in fig. 3B is an illustration of the training phase, as distinct from the deployment phase in fig. 3A. The computerized training system TSS is used for computationally processing training data comprising, inter alia, the historical shape measurement values in the training database(s) TD. The training data may relate to the same patient in the current or an earlier medical procedure, or to one or more different patients in earlier medical procedures, as described above in connection with fig. 2. In dynamic learning embodiments in particular, the control system C-SYS used for deployment and the training system TSS may be integrated into a common system, implemented on a single computing system or on multiple computing systems, e.g., as part of a joint (federated) learning framework. In the latter case, these computing systems are preferably communicatively coupled in a communication network. Alternatively, the control system C-SYS and the training system may run on different computing systems, especially when training is envisaged as a one-off operation, or when little re-training is expected. The trained model M can then be ported, via network data transmission (or using portable storage devices, etc.), to the computing system of the medical facility where the control system will be deployed and run in clinical use.
The training data may be retrieved and accessed from a medical database, such as a PACS or other medical/patient record database MDB. The training data may be gathered manually. Alternatively, a computerized search may be formulated using a database query language or a scripting language.
It is not always possible to procure a sufficiently large and/or representative number of training data samples. In this case, it is contemplated herein to use a training data generator TDG that allows manually or synthetically generating training data samples that did not previously exist. This may be done fully automatically, or the user may take part through a suitable user interface UI, as will be described in further detail below.
The training system TSS applies training/learning algorithms to the training data to adjust the model M. More specifically, model adjustment based on training data may be referred to herein as training or learning.
Fig. 3C shows a diagram of the training data generator TDG. An existing training data set is supplemented by synthetically generated training data samples. In other words, artificially generated shape measurements are added to existing "real" historical shape measurements, or the training dataset TD is built up entirely from such synthetically generated measurements from scratch.
Once a sufficient number of such shape measurements, with appropriate variation, have been obtained, whether by artificial generation or by retrieval from existing historical shape measurements in a medical database, they can be passed as training data to the training system TSS. As mentioned above in connection with fig. 3B, the training system TSS processes the training data to adjust the model. Training, and the generation of training data, will be discussed in detail further below.
Reference is now first made to the block diagram in fig. 4, which shows the components of the proposed control system C-SYS in more detail.
During deployment, the deformable shape sensing arm A of the shape sensing system SSS deforms and generates the above-described signal flow emitted by the local sensors LS along the arm A. The signal stream may then be processed by the data processor DPS of the shape sensing system to produce a shape measurement s for each time instant. The shape measurement s is received at an input interface of the system C-SYS.
Operation per shape measurement instant is envisaged as one mode, as is an aggregated, per-block mode of operation in which the shape measurements collected over a period of time are buffered and then passed on as a block for processing by the control system C-SYS. No specific distinction is made between these two modes herein, unless stated otherwise. Thus, reference to a shape measurement s may be understood as a reference to a single measurement datum at a given time instant, or to a plurality of such measurements at multiple time instants.
The control system C-SYS comprises prediction logic PL. The prediction logic PL includes a trained machine learning module MLM that receives shape measurements s through an interface. The machine learning model M is applied to the shape measurements to produce as output a class c, which indicates the corresponding anatomical location, surgical stage or surgical type.
The prediction logic PL may not necessarily process the shape measurement s into a predicted category using ML-based logic alone. The system C-SYS may also include one or more auxiliary logics to process contextual data κ, especially to resolve ambiguities or uncertainties associated with the ML-predicted category. The prediction can thus be made more robust. Alternatively, both the shape measurement and the contextual data are processed by the ML-based logic PL into a predicted category.
The concept of the contextual data κ will be discussed in more detail below. One specific type of contextual data is time, in particular the temporal order of the categories predicted during an intervention (the "prediction history"). The order of previous predictions may be used to inform the next prediction during the current intervention. The machine learning model processes this temporal information, especially the order of earlier predictions, in conjunction with the current shape measurement to predict the next category. Alternatively, the mentioned auxiliary logic compares the order of predicted categories with an expected order stored as a look-up table. The operation of the auxiliary logic thus need not involve ML, but may be accomplished by way of a lookup operation, interpolation, or the like. Only if there is a conflict between the prediction of the ML-based prediction logic PL and the outcome expected by the auxiliary logic is remedial action taken, such as updating the training database (e.g., opening a new category), or re-learning the model M, etc., as will be described in more detail herein.
If desired, as briefly mentioned above, the conversion stage CV may convert the predicted category c into another result R = CV(c), for example into the applicable stage of the medical procedure, or even into an indication of the type of medical procedure. The converted result R, or the category c itself, may be suitably encoded, e.g., as numerical data, which is then forwarded to the control interface CIF. The control interface CIF may comprise middleware that allows control of the one or more devices or systems Dj = {D1-D6}, as exemplarily shown on the right-hand side of fig. 4. The middleware may map the predicted category c/result into the appropriate control commands. A lookup table or other structure may be used.
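A minimal sketch of such middleware is given below. The device names, category labels, and command payloads are entirely hypothetical and serve only to illustrate the lookup-table mapping from predicted category to control commands:

```python
# A hedged sketch of CIF-style middleware: a lookup table maps a predicted
# category c to commands for the controlled devices Dj. All names and
# payloads are illustrative assumptions, not from the patent.
COMMAND_LUT = {
    "iliac_transition": [("D1_display", {"highlight": "iliac transition"}),
                         ("D2_imager",  {"fov": "pelvis"})],
    "aortic_arch":      [("D1_display", {"highlight": "aortic arch"}),
                         ("D5_monitor", {"sampling_rate_hz": 100})],
}

def dispatch(category, send):
    """Forward the commands registered for a predicted category."""
    for device, command in COMMAND_LUT.get(category, []):
        send(device, command)

# Example use with a stand-in transport function:
dispatch("aortic_arch", send=lambda dev, cmd: print(dev, cmd))
```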
The devices Dj may include a visualization module D1, which may produce a graphical display/visualization of the result or category, graphically, textually, or both, on a display device (e.g., a monitor and/or a projection onto a screen of the operating room), or which may drive an augmented reality headset worn by the user during the intervention.
The visualization module D1 may generate a graphical display based on the prediction result, including the current anatomical location, the type of procedure, or a stage thereof. The interface CIF controls video circuitry to render the graphical display on the display device DD (or on a different display device). The graphical display so generated is dynamic, in keeping with the dynamic nature of the prediction logic: a stream of shape measurements is acquired as arm A is moved during the intervention, and the graphical display is adjusted based on the resulting stream of predicted categories. When the predicted category indicates a corresponding anatomical location, the position of a graphical component in the graphical display may be updated with each new prediction. For example, a cursor widget (e.g., a circle, cross-hair symbol, or other symbol) may be displayed on a graphical representation of the patient's anatomy (e.g., an anatomical atlas or the like) to indicate the anatomical location in its context. Purely textual information, or textual information in addition to graphics, is also contemplated, such as a text box containing a string such as "current anatomical location: lower aorta". The current shape measurement may also be presented graphically in a different view pane of the same or a different graphical display. The user may switch between such panes. In particular, as a visual inspection aid, the current shape measurement may be visualized against a graphical display of training data clusters, similar to fig. 5. The current shape measurement may be displayed in association with its predicted cluster, with the other cluster or clusters also displayed in other portions of the graphical display. The user can thus check very quickly and intuitively whether the predicted cluster is correct. The planned path of the procedure may also be displayed if desired. If the predicted category indicates a surgical type or stage rather than an anatomical location, the graphical display may include a timeline or time map with portions representing the stages. The graphical display is dynamically updated such that, in response to the prediction logic predicting categories over time, a graphical component visually advances or "moves" along the timeline/time map to indicate the current stage. In other visualization embodiments, the graphical display may comprise a virtual image of the reconstructed shape of the shape sensing device, optionally together with a virtual and/or image-based representation of the anatomical location. The virtual representation may include computer graphics elements, CAD elements, or other elements. Further, a virtual and/or image-based representation of the next anatomical location that the shape sensing device may pass or enter may be included in the graphical display. Optionally, the navigation path of the shape sensing device A, ID is represented in the graphical display. Optionally, and in addition (where possible), the current and/or next anatomical locations are represented as dynamic widgets associated with the path, e.g., superimposed on the path display, which widgets visibly move along or over the path as the logic PL updates its predictions. In another embodiment, combinable with any of the above embodiments, the graphical display includes a representation of the planned navigation path, which may be known a priori or derived by the logic PL from previous predictions.
In some embodiments, the graphical display includes an indication of a newly predicted anatomical location, which may be associated with a new relevant organ into which the shape sensing arm A enters. The next anatomical location, the newly entered location/organ, and the surgical stage or type may be obtained by the logic PL analyzing the temporal behaviour of the stream of earlier predictions and/or some characteristic variation of the calculated categories, for example within a cluster of shape measurements, as will be explained in more detail below.
Other devices contemplated to be controlled include an imaging system D2, such as the imaging apparatus IA, whose imaging parameters are changed in response to the calculated category and/or result. For example, the field of view (FOV) may be changed depending on the current stage of the procedure or on the current anatomical location.
The one or more controllable devices Dj may include a monitoring device D5. The monitoring device D5 may monitor vital signs of the patient. For example, the sampling rate or any other setting of the monitoring device D5 may be adjusted. The controllable devices may instead, or additionally, include a database module D4. The controllable devices may also comprise the interventional device D3 = ID itself. For example, the power provided to the interventional device ID may be controlled according to the category c predicted by the logic PL. The powered interventional device ID may be an atherectomy catheter, in which the laser power may be adjusted. Another example is the adjustment of imaging parameters of an imaging catheter, such as an IVUS (intravascular ultrasound) or ICE (intracardiac echo) catheter.
The controlled devices may include a management device D6 for guiding or managing the clinical workflow, such as a communication device that calls clinical personnel based on the predicted result or category c, or otherwise sends information to clinical personnel.
Although in some embodiments the logic PL processes the shape data s as raw data, this may prove inefficient. For example, the numbers may contain a noise component, or may be difficult to compare. To improve processing efficiency, a data preparation stage PS may be provided to pre-process the shape measurement values so as to facilitate efficient processing. For example, the preparation stage PS may be configured as a filter, such as a noise filter, or may perform a normalization operation to normalize the data s, as will be described in detail below.
In addition or instead, the dimensionality of the data s may be too high, in that multiple dimensions of the data may be redundant or may not contain task-relevant information. High-dimensional data may also cause the processing logic PL to consume excessive computing or memory resources. To increase computational efficiency and/or economize memory usage, a data transformation stage TS may be provided to convert shape measurements into a representation that facilitates more efficient processing. For example, the transformation stage may be used to reduce the dimensionality of the data s so that only the most relevant aspects of the data are used, thereby facilitating more efficient operation. For example, principal component analysis (PCA) or other factor analysis algorithms may be used in the transformation stage TS to simplify the representation of the shape measurements s. Shape measurements taken over time can be conceptualized as a plurality of discrete elements ("points") in a high-dimensional space, such as a vector space with coordinates relative to a basis formed by basis vectors. In projection-based dimensionality reduction ("DR") techniques (e.g., PCA or other factor analysis techniques), the points are projected onto a lower-dimensional subspace, which allows for a simpler representation, such as in terms of fewer coordinates relative to a new, smaller basis with fewer basis vectors. Other types of dimensionality reduction techniques are also contemplated herein, such as various PCA variants (kernel-based, graph-based, etc.), as well as other techniques, including statistical techniques such as LDA ("linear discriminant analysis"). LDA finds a linear combination of features that separates the underlying set of training data samples into multiple parts, likewise enabling a simplified representation and the attendant computational advantages. However, the "reduction by projection" paradigm underlying at least some of the DR techniques described above is not necessarily required in all embodiments contemplated herein. For example, DR techniques of the feature selection type are also contemplated, wherein some members of a collection, or specific attributes thereof ("features"), are designated as representatives, and further processing is then performed on those representatives. Such feature selection mechanisms may be driven by information-theoretic quantities, such as mutual information, Fisher information, or others. In embodiments, it is also contemplated herein to convert the data into a representation with less or more sparsity, which is another example of a different representation of the data.
It is worth noting that, as mentioned above, the change of representation implemented by the transformation stage TS need not be of the dimensionality-reduction type. For example, although processing of shape measurements in the time domain is primarily contemplated herein, the transformation stage TS may support conversion to the frequency domain by suitable Fourier or wavelet operations. The transformation stage TS will be described in more detail below. In general, the transformation stage TS may be implemented by any algorithm that transforms a set of shape measurements s into a representation that confers implementation advantages on the prediction logic PL, such as more efficient processing (e.g., faster, low-latency turnaround, etc.) and/or conservative memory usage.
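As one illustration of such a transformation stage, the following sketch applies PCA, as named in the text, to a matrix of flattened shape measurements and keeps only the first k principal components. The matrix sizes, the random stand-in data, and k = 3 are assumptions made purely for the example:

```python
# A hedged PCA sketch: each row is one flattened shape measurement s;
# fit_transform projects onto the first k principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
S = rng.normal(size=(500, 120))      # 500 measurements, 120 raw features (assumed)

pca = PCA(n_components=3)            # keep the first k = 3 principal components
S_reduced = pca.fit_transform(S)     # shape (500, 3)
print(pca.explained_variance_ratio_) # how much variance each component retains
```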
The machine learning module may operate as a multi-class classifier, as primarily contemplated herein, classifying shape measurements into a plurality of categories corresponding to the different anatomical locations of interest AL_j. Each category c_j represents a different given anatomical location of interest AL_j. Some machine learning models allow calculating an uncertainty or prediction error Δ associated with some or each of the calculated categories c. The system may thus comprise an uncertainty determiner UD. In some embodiments, the mapping between categories and anatomical locations is established through training, for example using appropriately labeled shape data in a supervised training scheme. For example, the aforementioned synthetically generated or sourced historical training data may be assigned labels by a human expert, each label indicating the corresponding anatomical location. Visualizing the shape measurements as 3D curves may allow an experienced user to infer the corresponding anatomical locations, such as the iliac-aortic transition or other portions of interest of the human or mammalian vasculature.
The calculation of the prediction error Δ by the uncertainty determiner UD is particularly useful in combination with the dynamic learning embodiments outlined above. In dynamic learning, the training phase is not completed as a one-time operation prior to deployment; rather, training continues during deployment, based on the new data the system encounters. In this alternative dynamic learning/training embodiment, the system C-SYS may also include a training stage TS that cooperates with the prediction logic PL. In one example, if the prediction error returned by the uncertainty determiner is high, e.g., above a certain threshold, a warning signal may be sent to the user. Alternatively, or in addition to issuing such a warning, the currently processed shape measurement s that gave rise to the high uncertainty may, if the prediction error exceeds a certain threshold, be set aside from the current prediction flow and passed to the training system TSS. Although reference is made herein mainly to uncertainty values Δ, where a larger value indicates higher uncertainty, this does not exclude an equivalent quantification of the prediction error in terms of confidence values. A low confidence value then corresponds to a high uncertainty, and it should be appreciated that if Δ represents a confidence value rather than an uncertainty, the thresholding of Δ described above needs to be reversed.
A high prediction error may indicate that a new class of data has been encountered which the training data used to train the model thus far does not represent. In this case, a new class can be opened and the current shape measurement value added to the training memory TD as a new training data sample for this new class. An automatic or user-operable annotation tool AT may be used to annotate the shape measurement values for which the logic PL calculated the high-uncertainty class. The newly opened category may represent a new anatomical location, or a known anatomical location with an anomalous shape, or indeed may represent merely a transition region between two anatomical locations of interest AL_j, AL_j+k (k ≥ 1), as described above in connection with fig. 2C. Such an "intermediate location" category is useful, and has the benefit of updating the system so that the logic PL can better localize where arm A is currently located within the patient's body, even though that region is not a priori of interest for the task at hand. A new class that localizes arm A as currently lying between the two locations AL_j, AL_j+k may still provide a navigation benefit to the user. By adding more and more such transition region categories, the spatial resolution of the prediction logic PL can be refined to any desired granularity. Uncertainty values may be used to assign measurement values to more than one category. An uncertainty value may thus be used to measure and quantify category membership.
Likewise, the training system TSS may optionally further comprise a pre-processing stage PS' and/or a transformation stage TS' corresponding to the pre-processing stage PS and transformation stage TS of the prediction logic PL. These operations are preferably identical.
Various machine learning techniques are contemplated herein, including neural networks and other techniques, which will be discussed in more detail below.
In some types of machine learning techniques, such as clustering techniques, a rebalancing component RB can be used to rebalance or change the size of the clusters encountered so far. This may ensure that the recorded clusters adequately represent the various locations encountered by the system. Having clusters of sufficient size, such that each or most clusters contain a sufficient number of different representatives, may make the performance of the machine learning module MLM more robust.
Before turning in more detail to the machine learning model implementations of the prediction logic, it may be useful to describe the features of the shape measurement data s more fully. As is apparent from fig. 2 above, the shape measurement s is by its very nature a spatio-temporal quantity. The shape measurement values can be written collectively as S = (s_j) = (s_j(x_j, t_j)).
In this notation, the index j denotes the patient from whom the measurement was taken, although this index may be omitted herein. The index x denotes the spatial increment along arm A; each local sensor LS_x is located at one such spatial increment, and each shape measurement is recorded at these increments. The index t denotes time and refers to the different shape readings taken over time as the user or robot advances the arm A through the patient. Indexing by location x is, however, not strictly necessary herein, as the shape may be described globally for the entire arm A at a given time t by a single number, such as an average (e.g., average strain or curvature) or another such global quantity.
The shape measurement S = (s_j) = s_j(x_j, t_j) may include any form of measurement obtained by arm A; for example, measurements may be obtained from the respective optical fibers of arm A. The measurements may include, but are not limited to, curvature, axial strain, torsion, or the 3D shape of the arm A. The 3D shape may be represented by a set of spatial 3D coordinates (X, Y, Z)_x, indexed by the position x along the arm A and expressed relative to some coordinate system. For present purposes it is immaterial whether the shape measurement (the s-value) is expressed as spatial coordinates, curvature, strain, or the like; any such semantic format is contemplated herein.
Reference will now be made in more detail to the operation of the prediction logic PL, which may be implemented by machine learning techniques primarily contemplated herein. Some such machine learning techniques use a model on which the predictions are based.
The model is a data structure, function or system of functions that allows shape measurement data to be applied thereto in order to calculate a prediction result, e.g. class c.
In the training phase, the model is adjusted or tuned given the training data. The training data includes synthetic or historical shape measurement data. The prediction result of the prediction logic PL is preferably one of a plurality of categories. That is, during deployment, newly observed shape measurements collected, for example, during an intervention, are categorized into one of a plurality of categories. Each category represents a respective anatomical location, type of procedure, or stage of procedure.
Given the current shape measurement measured by the arm A at a given time, the prediction logic PL can thus predict at which of a plurality of anatomical locations the shape measurement arm A is currently located. In most cases, and given the above explanation of the characteristics of spatially distributed shape measurement data (fig. 2), the term "position" will be understood to refer to the position of the shape arm A passing through the corresponding anatomical structure, as predicted by the prediction logic PL. If more detailed localization is required, the measurement values may be calibrated to respective reference points along the length of the arm A, e.g., to the middle or, preferably, the end portion of the shape measuring arm A and/or of the respective interventional device ID integrated or associated with the arm A.
In the training phase, a training or learning algorithm is applied to, or in respect of, the training data currently held in the memory TD. Once training has been completed, the prediction algorithm may use the model during the deployment phase by applying it to the measurement data of that phase. The prediction algorithm processes the adjusted model and the newly received shape measurements to produce a prediction result, i.e., a category indicating the anatomical location.
The training algorithm may be supervised or unsupervised. In supervised training, each training sample or shape measurement is associated with a previously known anatomical location. In unsupervised learning, no such (explicit) labeling is required. Both supervised and unsupervised learning variants are contemplated herein; examples are discussed further below and in the embodiments.
In general, two cases can be distinguished herein: explicit and implicit ML modeling. Common to both is a model M that implements the mapping s -> c_j, from shape measurements to categories representing anatomical locations. The model is typically a multi-class (multi-bin) classifier, but binary classifiers are not excluded herein, for example when the goal of the logic is simply to determine or "test" whether arm A is located at a predefined anatomical location.
In some embodiments, the model is a binary or multi-class classifier, where each class is represented by a class label c (in which case we use the same symbol for class and label). In other embodiments, however, the model may be one or more clusters c_j, wherein each category c_j corresponds to one cluster. Each cluster is a corresponding subset S'_j of the training data S'.
From now on, the capital letter S' will be used to denote the set of training measurement values s' ∈ S', and the prime symbol "'" will indicate a reference to the training data set S' or to the samples s' therein. The label "s" (without the prime symbol) is accordingly reserved for shape measurements received during deployment, i.e., measurements not drawn from the training data set.
In implicit modeling, what constitutes the model is the training data itself, optionally together with a descriptor function D() describing subsets S'_j of the samples s'; this may be written as M = (S', D()). As described above, the clustering algorithm contemplated herein in some embodiments is one example of implicit modeling.
In explicit modeling, the model is represented as one or more parameterized functions or a system of functions M = M_θ(), which forms a computational entity distinct from the training dataset S' itself. In some embodiments, some or all of the inputs and/or outputs of the functions M_θ() may be connected by function composition, as with the nodes in a neural-network-type model, where the output of one function is provided as an input to one or more other functions, etc. The functions M_θ() map from the training set (space) S' to the class space C = {c_j}. Other types of explicit modeling contemplated herein include decision trees, random forests, and statistical techniques such as mixture models, in particular mixture models with Gaussian components, known as Gaussian mixture models ("GMMs").
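As an illustration of such explicit statistical modeling, the sketch below fits a Gaussian mixture model to (already dimension-reduced) shape measurements. The component count and data shapes are assumptions for the example; the soft memberships returned per component correspond to the kind of class scores discussed herein:

```python
# A hedged GMM sketch: fit a mixture to reduced measurements, then obtain
# hard and soft class assignments for a new measurement s.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
S_reduced = rng.normal(size=(500, 3))        # e.g., PCA-reduced measurements (assumed)

gmm = GaussianMixture(n_components=5, random_state=0).fit(S_reduced)
s_new = rng.normal(size=(1, 3))              # new measurement during deployment
c = gmm.predict(s_new)                       # hard category assignment
p = gmm.predict_proba(s_new)                 # soft membership per category
```

Note that scikit-learn fits the mixture by expectation maximization, one of the solver techniques mentioned further below for GMM training.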
In explicit modeling, the training algorithm amounts to adjusting the parameters θ of the model function M_θ() based on the training data S'. In some such embodiments, the training algorithm may be expressed as an optimization process in which one or more objective functions F are used to guide the adjustment of M_θ(). Once a predetermined objective, as measured by the objective function F, has been achieved, the model M is considered fully trained. The objective function may be a cost function or a utility function. For example, when the optimization is expressed in terms of a cost function, the goal is to minimize the cost function, or at least to adjust the parameters such that the cost falls below a predetermined threshold:
θ* = argmin_θ Σ_{s′∈S′} F(M_θ(s′))    (1a)
The objective function F is some function that measures the objective or cost, such as a distance function. The summation represents the total cost over all of the training data S′, or over a subset thereof. In the context of supervised multi-class or binary classification, (1a) can be expressed more specifically as
argmin_θ F = Σ_k ||M_θ(s′_k), c_k||    (1b)
Here, ||·,·|| is some distance or similarity measure/function configured for the classification task, such as the mean squared error (L2 criterion) or the mean absolute error (L1 criterion), or any other. The parameters may be parameters of a probability distribution, in which case ||·,·|| is chosen as some statistical similarity measure. The symbol is used herein in a general sense and is therefore not limited to L_p criteria; rather, it includes more general similarity measures, which may be defined, for example, in terms of cross-entropy or Kullback-Leibler ("KL") divergence, or otherwise.
In explicit modeling, a model M_θ* trained according to (1a, b) can be used to formulate the prediction algorithm, i.e., c = M_θ*(s) for a new measurement s.
In implicit modeling, the training algorithm can be represented in terms of the training set S′ as a partitioning operation:

S′ ↦ Π = {S′_j}    (2a)

where Π is a partition of S′ into the subsets S′_j, each of which may be described by the above-mentioned descriptor function D(S′_j).
However, some learning embodiments in implicit modeling may involve merely storing one new class, e.g., as a new cluster, so that only one new set needs to be added: Π+ = Π ∪ S′_k, k ≠ j. The new category c_k represented by S′_k may be a singleton (unit set). It may be formed from the new measurement value(s) s encountered during deployment: S′_k = {s}.
Furthermore, in some embodiments the learning phase of implicit modeling may involve no more than storing the training data samples s′ ∈ S′, as in the k-nearest-neighbor ("k-NN") algorithm contemplated in some embodiments.
The prediction algorithm may be configured to assign each s to its cluster based on the following optimization:

j* = argmin_j F(s, S′_j)    (2b)

and

M(s) = c,    (2c)

where, according to (2b), c is the category associated with the cluster S′_j*.
Clustering machine learning algorithms have been found to be particularly useful for the classification of shape measurements. The clustering machine learning approach is an example of the implicit ML modeling mentioned above. Broadly, in the clustering algorithms contemplated herein, the training data set S′ is broken down into subsets or clusters, which may or may not overlap. This corresponds to the partitioning operation (2a). A cluster S′_j may be described by a descriptor function D(), e.g., by the center point or another feature of each cluster S′_j. The training data samples s′ in the training data set S′ are referred to as representatives or members of the cluster to which they belong, and the center point D(S′_j) as the respective prototype.
During deployment, the prediction logic PL implements the prediction algorithm to find, per (2b), the one or more clusters to which a given new measurement value s belongs. Each cluster constitutes a category c and thus indicates a respective anatomical location. In applying the prediction algorithm, hard or soft membership may be used in the objective function F in (2b) to calculate a score for the respective membership.
The previously mentioned k-nearest-neighbor algorithm can be implemented in a supervised manner, with each measurement value sample s′ carrying its own label c. During deployment, the prediction for a new measurement value s is computed by majority voting among the samples in S′, based on D() and some proximity function. For example, s can be assigned to the cluster S′_j whose center point D(S′_j) is nearest to s, or to the clusters whose center points lie within a certain neighborhood of s.
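A k-NN classifier in the sense just described amounts to storing the labeled training samples and voting among the nearest neighbors of a new measurement s. The following hedged sketch uses made-up data shapes and labels purely for illustration:

```python
# A hedged k-NN sketch: the "model" is just the stored labeled samples;
# prediction is a majority vote among the 5 nearest neighbors of s.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
S_train = rng.normal(size=(300, 3))            # stored training samples (assumed)
c_train = rng.integers(0, 4, size=300)         # anatomical-location labels (assumed)

knn = KNeighborsClassifier(n_neighbors=5).fit(S_train, c_train)
s_new = rng.normal(size=(1, 3))                # new measurement during deployment
c_pred = knn.predict(s_new)                    # majority vote among 5 neighbors
```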
In general, gradient-based techniques may be used for solving (1a-c) and (2a-c), wherein the gradient of the objective function F is calculated and parameter adjustments are made based on said gradient, the gradient defining a direction in the parameter space. The backpropagation algorithm for neural-network-type models is one example and is contemplated herein in embodiments. One or more forward passes may be used to calculate the variables needed for the backward propagation. The solving of (1a-c) depends on the numerical solver algorithm used and may result in an iterative update procedure in which the parameters are updated over one or more iteration cycles i := i+1 to obtain the solution θ*. The numerical solver algorithm may be based on any one or more of the following techniques: gradient descent, stochastic gradient descent, conjugate gradients, Nelder-Mead, expectation maximization (EM), maximum likelihood, or any other technique. An updater module UP may implement the update procedure. In the GMM example described above, an EM-type algorithm may be used as the training algorithm. Non-gradient-based methods are not precluded herein.
While the above optimizations (1a-c), (2a-c) have been described as minimization problems, no limitation to such is intended herein, as the corresponding dual formulations in terms of utility functions are also contemplated, the optimization of (1a-c), (2a-c) then being a maximization problem. Furthermore, while the above formulations (1a-c), (2a-c) are stated as optimizations, they should not be construed as an exclusive pursuit of absolute, strict minima or maxima. Indeed, this need not be envisaged herein; for most practical purposes it will suffice to run the iterations until the cost falls below a quality threshold ε in order to consider the minimization problems (1a-c), (2a-c) solved. Moreover, a minimum or maximum reached to within ε may be a local one, and this may be sufficient in some application scenarios.
As mentioned before, an uncertainty result Δ is preferably also calculated by the uncertainty determiner UD, alongside the classification result, to quantify the uncertainty of the calculated classification. If the uncertainty is above a set threshold, the currently processed shape measurement s may indicate a new class not yet accounted for in the current model. The uncertainty may also be presented to the user rather than the measurement being rejected automatically by the system. The uncertainty value Δ ("Δ-value") may be calculated by Bayesian analysis over some or all of the parameters of model M, as described elsewhere. In implicit modeling, for example, new shape measurement data identified by its Δ-value may be added to the existing training dataset S′, thereby updating the model. In some embodiments, it may be necessary to re-run the clustering algorithm (2b) on the newly added shape measurement(s) so as to define a new cluster or to assign the new shape measurement(s) to an existing cluster. In other embodiments, e.g., in k-NN, the training algorithm may simply add the new sample s (with its high uncertainty value Δ) to the training set S′. In explicit modeling (1a-c), such as neural-network-type models, support vector machines ("SVMs"), decision trees, etc., it may be necessary to re-adjust the parameters of the model based on the now-expanded training dataset (including the newly added shape measurements encountered during deployment).
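One simple way to operationalize the Δ-thresholding described above, under the assumption that the uncertainty is taken as the distance to the nearest cluster center point D(S′_j), is sketched below; the threshold value and the distance choice are illustrative assumptions, not prescribed by the text:

```python
# A hedged sketch of Δ-thresholding in implicit modeling: if the new
# measurement s is too far from every cluster center, flag it as a
# candidate for a new category rather than forcing a classification.
import numpy as np

def classify_or_flag(s, centroids, threshold):
    """Return (class index, delta); class -1 signals a candidate new category."""
    deltas = np.linalg.norm(centroids - s, axis=1)   # distance to each D(S'_j)
    j = int(np.argmin(deltas))
    delta = float(deltas[j])
    if delta > threshold:
        return -1, delta     # too uncertain: route s to the training system TSS
    return j, delta
```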
The usefulness of clustering algorithms for classifying shape measurements is illustrated in the set of plots of fig. 5, to which reference is now made. Fig. 5 shows exemplary shape measurement data collected from multiple patients. Each shape measurement sample for a given patient is represented in the spatial domain as a 3D curve in X-Y-Z coordinates. Other representations, in terms of curvature, axial strain, etc., may alternatively be used in the same way. As indicated previously, instead of the spatial-domain representation of fig. 5, representation and processing in the frequency domain are also contemplated herein in embodiments. Each of figs. 5A-5F represents measurements for a particular anatomical location, such as the mid-aorta in pane A. The measurements in pane B indicate the lower aortic segment, those in pane C the aortic arch, those in pane D the upper aortic segment, those in pane E a side branch, and those in pane F the iliac transition. Although the shape measurements are taken from a plurality of different patients, the shape measurements for each anatomical location exhibit significant structural similarity, which is exploited herein for clustering and, more generally, for any type of machine learning method capable of successfully exploiting such underlying structural similarity. The black portions in panes A-F represent clusters or "bundles" of shape measurement values s; they can be understood as the geometric envelopes of the curves that make up each cluster.
Referring now to fig. 6, a block diagram of a training apparatus for unsupervised learning is shown. In general, fig. 6 illustrates one embodiment of the clustering algorithm referred to above. The shape measurement data S′ is represented by differently shaded squares, one representing a spatially resolved measure of curvature and the other a spatially resolved measure of strain. The spatial resolution along the length of arm A is indexed by x. Spatial coordinates, such as those in fig. 5, may however be used instead or in combination.
To increase computational efficiency, e.g., to support parallel processing, the training data input s may be stored as a matrix (tensor) of two or more dimensions, as schematically indicated by the squares. The training system TSS and/or the prediction logic PL may be implemented on hardware that supports such parallel processing, e.g., a processor with a multi-core design, such as a graphics processing unit (GPU) or tensor processing unit (TPU) or similar. One dimension (rows or columns) may contain one shape measurement per increment x along arm A, and another dimension may be indexed by time t. A third dimension may index the patient j. A further dimension may index the contextual data κ obtained from appropriate labels or annotations provided by the annotator AT.
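The following sketch shows one possible layout of such a tensor, with assumed sizes; the axis order (patient j, time t, position x, measurement channel) is an illustrative choice, not mandated by the text:

```python
# A hedged layout sketch: axis 0 indexes patient j, axis 1 time t, axis 2
# position x along arm A, and axis 3 the channel (here curvature, strain).
import numpy as np

n_patients, n_times, n_positions = 8, 400, 64      # assumed sizes
S = np.zeros((n_patients, n_times, n_positions, 2), dtype=np.float32)

curvature_profile = S[3, 120, :, 0]   # curvature along arm A, patient 3, t=120
strain_series = S[3, :, 10, 1]        # strain at position x=10 over time
```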
The context-dependent data provided by the annotator AT as markers/annotations may comprise any event during surgery or clinically relevant patient parameters, for example blood pressure level, hemodynamics, sedation level, cardiac pacing needs, etc. The annotator may obtain this information from an intraoperative health record or EMR. The context-dependent data may also comprise any one or more settings or parameters of the (interventional) imaging system or of other devices D_j.
The historical or user-generated shape measurements S' saved in the training memory TD may be preprocessed by the aforementioned preprocessing stage PS' for the purpose of filtering and/or normalizing the raw shape measurements. Filtering (e.g., smoothing or denoising the signal) may help improve the prediction accuracy of model M. Normalization serves to make shape measurements of different patients more comparable. For example, different arms A may be used for different patients; although calibration may have been accounted for during the original measurement, small adjustments or normalization between arms A may still be helpful to model M. Normalization can be performed by statistical analysis, e.g. scaling the set S' by the standard deviation and/or shifting by the population mean, etc.
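A minimal sketch of such statistical normalization (z-scoring), assuming the measurements have been resampled to a common number of points per shape; the function name and array layout are illustrative only:

```python
import numpy as np

def normalize_shapes(S: np.ndarray) -> np.ndarray:
    """Z-score normalization across the patient population.

    S is assumed to have shape (n_samples, n_points), one row per
    raw (optionally smoothed/denoised) shape measurement s'.
    """
    mu = S.mean(axis=0, keepdims=True)    # population mean per point x
    sigma = S.std(axis=0, keepdims=True)  # population std per point x
    sigma[sigma == 0] = 1.0               # guard against constant columns
    return (S - mu) / sigma               # shift by mean, scale by std
```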
In the notation herein, no distinction is made between such preprocessed and raw shape measurements; each measurement is referred to as previously defined, with lowercase s' for an individual measurement or collectively S'.
The shape measurements S', whether or not preprocessed, may be arranged in a matrix, such as in rows and columns, although this is not necessary. The curvature and strain portions of the shape measurement may be processed separately, as shown in the upper portion of fig. 6, or in combination by merging the curvature and strain measurements into a single combined matrix as shown in the bottom portion of fig. 6.
The shape measurements, alone or in combination, are then passed through an optional transformation stage TS'. The transformation stage TS' transforms the shape measurement data S' into a representation that is easier to process. In embodiments, this includes processing by a dimension-reduction algorithm. Alternatively or additionally, such processing may include other types of processing, such as converting sparse representations into denser (less sparse) representations. Alternatively or additionally, Fourier or wavelet transforms may be used within stage TS' if processing in the frequency domain is desired.
In a dimension-reduction embodiment, the transformation stage TS' implements principal component analysis (PCA). As shown by items 1)-4) in the large rectangle in the central part of fig. 6, different modes of operation are contemplated herein. For example, the curvature data matrix and the strain data matrix may be processed separately into principal components, and the resulting principal components then combined into a single training data matrix, as shown in 1). Alternatively, as shown in 2) and 3), a concatenated matrix is first formed and then processed as a whole by principal component analysis. The first k principal components may be retained; in 2) and 3), the first 2 or the first 3 principal components, respectively, are retained. k greater than 3 is also contemplated.
The transformed training data matrix is then passed to a model builder MB, which implements a training algorithm to compute the machine learning model M. In an embodiment this may be accomplished by a k-means algorithm, in which the transformed data is divided into subsets, each subset representing a respective cluster, as described above at (2b) for implicit ML modeling. A descriptor D() for each cluster, such as the respective center point, may be calculated.
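By way of illustration, a sketch of operating mode 1) followed by k-means clustering, assuming scikit-learn is available; the component and cluster counts are hypothetical (six clusters loosely mirrors panes A-F of fig. 5):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def fit_cluster_model(curv: np.ndarray, strain: np.ndarray,
                      n_components: int = 3, n_clusters: int = 6):
    """Mode 1) of FIG. 6: reduce curvature and strain separately,
    then cluster the combined principal-component weights with k-means."""
    pca_c = PCA(n_components=n_components).fit(curv)
    pca_s = PCA(n_components=n_components).fit(strain)
    # Combine the per-modality principal-component weights into one matrix.
    W = np.hstack([pca_c.transform(curv), pca_s.transform(strain)])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(W)
    # km.cluster_centers_ serve as the cluster descriptors D().
    return pca_c, pca_s, km
```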
In alternative embodiments, the transformation stage TS' may be omitted and the data passed directly to the model building stage MB. This is indicated by item 4) in the large rectangle.
When the transformation stage TS' is not used, the curvature and strain portions of the shape measurements may be passed to the model builder MB separately, rather than concatenated as shown in 4). The preprocessing stage PS', such as noise filtering or normalization, may also be omitted, as processing of the raw data is likewise contemplated herein.
While the preprocessing stage PS' and the representation transformation TS' have been explained in the context of the training system TSS, the same implementations may be used during deployment upstream of the prediction logic PL, which implements the prediction algorithm. Preferably, if the stages PS', TS' have been used in the training system TSS, the corresponding stages PS, TS are used in conjunction with the prediction logic PL; otherwise, the stages PS, TS may be omitted.
Although a clustering scheme such as that of fig. 6 is unsupervised because the clusters are not known in advance, the number of clusters (typically a hyperparameter K) is predetermined and may be known from clinical knowledge. The clusters may be calculated by the model builder MB using any known clustering algorithm, such as a Voronoi-cell based tessellation algorithm (Lloyd's algorithm), the Hartigan-Wong method, or other algorithms. The assignment between clusters and anatomical locations may be done by a human expert, supported by a graphical representation of the calculated clusters like that of fig. 5. Instead of the curve envelopes as in fig. 5, the curves themselves may be displayed in clusters. The clusters may be displayed in respective different panes or sequences in the spatial domain on the display device DD. Upon visual inspection, the user may use the annotation tool AT to mark each cluster with an identifier of the anatomical location.
It has been found that the training system TSS of fig. 6 yields distinct curvature, strain and shape profiles. Using both axial strain and curvature data in the model results in better cluster prediction and separation. Running PCA on the curvature matrix and the strain matrix separately results in better performance than reducing the combined matrix.
Once a new shape (e.g., test data, or data obtained during a clinical procedure) is provided, its weight along each of the PCA dimensions is calculated, and the distance (or other similarity measure) between its weights and the center points of some or all of the previously determined clusters is computed. The new shape belongs to a specific cluster only if the distance is smaller than a preset threshold. Otherwise, the new sample may be an outlier or part of a completely new, unidentified cluster, of which the user is informed via dialog information on the user interface. Alternatively, the shape data may be considered a mixture of clusters. This is especially the case when arm A acquires measurement data while navigating from one anatomical location to another; the aforementioned transition regions may thus occur in the acquired data. To support this, a soft membership function may be used in place of hard membership in the clustering function. In the literature, this is called fuzzy clustering or soft k-means.
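A sketch of such a threshold-gated hard assignment with an accompanying soft membership function; the Gaussian-kernel softening (parameter beta) is one possible choice among several, and all names are illustrative:

```python
import numpy as np

def classify_new_shape(w_new: np.ndarray, centers: np.ndarray,
                       dist_threshold: float, beta: float = 1.0):
    """w_new: PCA weights of the new shape; centers: cluster center points.

    Returns (hard_label_or_None, soft_memberships). A None hard label
    flags a potential outlier or an as-yet-unknown cluster; the soft
    memberships implement the fuzzy / soft k-means view, useful for
    transition regions between anatomical locations."""
    d = np.linalg.norm(centers - w_new, axis=1)  # distance to each center
    soft = np.exp(-beta * d**2)
    soft /= soft.sum()                           # soft membership function
    hard = int(np.argmin(d)) if d.min() < dist_threshold else None
    return hard, soft
```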
The k-means algorithm (using hard or soft membership functions) is guaranteed to converge. However, k-means-type algorithms do not appear to scale well to highly unbalanced data, and become computationally expensive as the dimensionality of the data increases. To avoid the information loss incurred by dimension-reduction methods such as PCA, a neural-network-based deep learning method may be used instead. In this regard, two deep learning methods for shape data classification are proposed herein: a supervised approach and an unsupervised approach. Both methods will now be described in more detail.
In particular, reference is now made to FIG. 7, which illustrates a block diagram of a neural network type model, representing one embodiment of explicit ML modeling of a supervised learning type.
As described above, the neural network ("NN") architecture M may represent a multi-class, single-label supervised learning classifier that classifies a shape measurement s into one of a plurality of (K) known classes c_j, 1 ≤ j ≤ K.
The classifier may be arranged as a network g in a convolutional neural network ("CNN") architecture as schematically shown in the operator diagram in fig. 8A, which is one particular example of the more general NN diagram in fig. 7. Such a CNN architecture may be beneficial herein, as shape measurements may exhibit spatial correlation.
As with any NN, a CNN comprises computational nodes (functions) arranged in cascaded layers. These layers may include one or more intermediate ("hidden") layers L_j between the input layer IL and the output layer OL. The output layer OL may also be referred to as a task layer. Nodes in a given layer process outputs received from nodes in the preceding layer and generate outputs for nodes in the next layer, and so on. The output of each layer is also known as a feature map. The number of layers (i.e., the "depth" of the network) depends on the complexity of the data or task (e.g., the length of the shape, the number of clusters, the extent of the clinical site, etc.). In a CNN, some or each intermediate layer includes a convolution operation and a nonlinear activation function (e.g., ReLU, leaky ReLU, sigmoid, hyperbolic tangent, etc.) that processes the output of the convolution operation into feature map values. Optionally, each layer also has one or more additional operators, including any one or more of batch normalization, dropout, or spatial pooling.
The feature map generated by the last intermediate layer L_N is reshaped into a low-dimensional representation. Such reshaping may use global average pooling. The reshaped output may be processed by one or more task layers OL; here, the task layer is a classification layer. The classification layer may be implemented by a softmax activation function. The output generated by the output layer OL may be presented as a classification vector, wherein each entry represents the probability of the corresponding class and hence of the corresponding anatomical class/location. Instead of a CNN, an at least partially fully connected network may be used.
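A minimal sketch of such a classifier in Python (assuming PyTorch; the layer widths, kernel sizes, and class name are hypothetical choices, not the disclosed architecture). It treats the shape as a 1D signal along arm A with curvature and strain as input channels:

```python
import torch
import torch.nn as nn

class ShapeCNN(nn.Module):
    """1D CNN over the points along arm A; 2 input channels
    (curvature, strain), K output classes (anatomical locations)."""
    def __init__(self, n_classes: int, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global average pooling reshapes
        )                                     # the last feature map L_N
        self.task = nn.Linear(32, n_classes)  # classification (task) layer

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        z = self.features(s).squeeze(-1)      # (batch, 32)
        return self.task(z)                   # logits; softmax applied in loss
```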
The NN- or CNN-based classifier g may be trained by the training system TSS using a stochastic or mini-batch training method, on single or multiple training instances (s'_i, c_i) randomly selected from the training dataset S'. The training data may represent a large patient population N (e.g., N >> 100), reasonably covering the different types of medical procedures of interest (e.g., vascular procedures, etc.), surgical sites, anatomical structures, and interventional devices ID (e.g., types of FOS-enabled guidewires, etc.) that may be associated with arm A.
To expand the variation in the training data, data augmentation techniques may be applied, such as smoothing the shape data with variable window sizes, geometric deformation of the data, and noise addition.
The training data S' may be organized and processed during training as continuous data, e.g., multi-column/multi-row shape data instances consisting of values such as any one or more of curvature, strain, twist/torsion, normalized position and/or rotation, etc., as mentioned above. For example, a matrix (a_ij) may represent the values of curvature and/or strain at some or each point along arm A as the device is navigated. One index, i, iterates over the points along arm A, while the other index, e.g. j, iterates over time as the guidewire is navigated through the anatomy.
Optionally, the context-dependent data κ are combined with the (pure) shape measurement structure data to obtain enriched data H_i. Such enriched data may also be represented in a multi-index structure (e.g., a tensor), with an increased dimension to accommodate the context-dependent data κ. Combining may include concatenating and/or compressing the shape measurement and the context-dependent data into such a tensor/matrix structure. The use of such a multi-index structure may allow efficient joint processing of the shape measurement structure data and the context-dependent data, thereby improving learning efficiency and performance. Other ways of combining the context-dependent data with the shape measurements are also contemplated; handling the enriched data in training and/or deployment in separate data structures is not excluded herein and is contemplated in alternative embodiments. The context-dependent data κ may include any one or more of temperature, acquisition time, or other annotations/tags assigned by the annotator AT. An expert may label the training data H_i with one of multiple categories, in this case a category c_i; these categories serve as targets or ground truth values. The matrix or tensor representation described above may also be used during deployment, wherein the time-varying shape measurement structures are stored in such matrices or tensors.
In some embodiments, the context-dependent data includes one or more earlier predictions calculated in the given (current) procedure for the same patient. This may make training and performance more robust and reliable, as the temporal order of previous predictions may provide further clues for the next prediction. In an embodiment, only the immediately preceding prediction c_{t-1} is processed together with the current measurement s_t to calculate the current category c_t; alternatively, as in a recurrent neural network ("RNN"), one or more previous measurements are processed with the current measurement to calculate the current category. In a variant thereof, only a subset of one or more of the most recent predictions is used to calculate the current prediction, namely the subset of those predictions that correctly/reliably predicted the corresponding anatomical locations. This allows transition-region predictions or other unreliable or less relevant measurements/predictions to be excluded. Uncertainty-Δ- or similarity-based filtering can be used to decide which one or more earlier predictions to use. Processing only one or a few earlier predictions (rather than all or a large number of them) allows faster, low-latency processing and/or lower memory requirements.
The network g may be trained in an iterative process, in which the predicted labels ĉ_i (training outputs) are compared with the ground truth labels c_i using a cost function F (e.g., cross-entropy loss, log loss, etc.). Training continues over a number of iterations until convergence is reached, i.e. the predicted outcome matches the ground truth (within a predefined margin), where ĉ_i, c_i ∈ {1, ..., K} and K denotes the number of categories.
Instead of the raw measurements, the transformation stages TS', TS may be used and the transformed data fed as input to the network g. For example, PCA may be used, as explained above. It may be the case that the transformed features, when used instead of the original measurements, allow sufficient accuracy to be achieved. In this case, a lightweight model M can be trained and deployed on system components with lower computing power; for example, the number of intermediate layers may be reduced compared to the number that would be needed to process high-dimensional bulk raw measurements. Alternatively, the transformed data TS'(S'), such as PCA features, are concatenated with the raw measurement data for training.
It will be appreciated that the CNN itself acts as a dimension reducer, independently of and in addition to the optional upstream transformation stages TS', TS (if any). In other words, a generally high-dimensional measurement input H_i will be converted into a single classification vector of length K. This dimension-reducing effect of the CNN is represented by the tapering triangle operators in fig. 8A) and in the rest of figs. 8B)-C).
Classifiers other than NN/CNN types are also contemplated herein, such as any one or more of the following: support Vector Machines (SVMs), decision trees (random forests), multi-layer perceptrons (MLPs), or multiple regression (weighted linear or logistic regression).
Referring now in more detail to fig. 8A), a further embodiment of the transformation stages TS', TS is shown. As in the embodiment of fig. 8, the transformation stages TS', TS may themselves be implemented by machine learning. In embodiments, the transformation stages TS, TS' may be implemented by a neural-network-type architecture that allows advantageous feature representations to be learned. One such neural network is an autoencoder or a variational autoencoder or, more generally, an encoder-decoder type architecture. Generally, an encoder-decoder ("A/D") type architecture is of the CNN type, having a bottleneck structure as shown in fig. 8A).
In particular, such an "A/D" network is configured to reduce the dimension of the shape input data to a latent representation o_i in a contracting path (encoder g_e), and to reconstruct the input shape data back from the latent representation o_i in an expanding path (decoder g_d). The latent representation o_i may be a vector. An "A/D" network, such as an autoencoder or variational autoencoder network, is preferably trained on a large dataset S' of shape data using any of the training algorithms and settings described above, except that the training algorithm is now a regression algorithm rather than a classification algorithm. For example, a suitable distance function for the cost function F is the squared Euclidean distance or mean squared error (L2 norm); in a variational autoencoder, a KL-divergence term additionally allows the network to learn the distribution over the latent representations.
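A minimal sketch of such a bottleneck A/D network (assuming PyTorch; a plain autoencoder with fully connected layers is shown for brevity, and all sizes and names are hypothetical):

```python
import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    """Encoder g_e contracts the shape input to a latent vector o_i;
    decoder g_d reconstructs the shape from o_i (bottleneck A/D layout)."""
    def __init__(self, n_points: int = 128, latent_dim: int = 8):
        super().__init__()
        self.g_e = nn.Sequential(
            nn.Linear(n_points, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.g_d = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_points),
        )

    def forward(self, s: torch.Tensor):
        o = self.g_e(s)           # latent representation o_i
        return self.g_d(o), o     # reconstruction and latent code

# Regression-style training objective: mean squared reconstruction error.
model = ShapeAutoencoder()
loss_fn = nn.MSELoss()
```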
Fig. 8B) illustrates training of a model (e.g., a CNN) that classifies data into classes c_i using the representation learned by the autoencoder (as in fig. 8A)).
Fig. 8C) is a schematic diagram showing the trained CNN during deployment, where one or more "real-time" shape measurements s_t at time t are represented in matrix or tensor form.
Dimension-reduction algorithms of the "A/D" type, fed with complex multidimensional data, learn to extract the most relevant features, allowing them to reconstruct the input from the low-dimensional latent representation. The size of the latent representation (e.g., a vector) may be predetermined, but is preferably chosen in view of the complexity of the shape data. During training of such an A/D network, the size of the latent representation o_i may initially be small and then be incrementally increased until the difference between the input and the reconstruction g_d(o_i), evaluated on some validation data, is below a certain threshold. The threshold may be set by a user. If the dimension of the latent representation remains too large, additional non-ML dimension-reduction methods can be applied. Typical dimension-reduction methods known in the art are PCA, independent component analysis (ICA), t-distributed stochastic neighbor embedding (t-SNE), etc.
Alternatively, a meaningful representation of shape measurements can be learned by combining the shape measurements with acquisition or prediction times as context-dependent data and using the time dimension in training. The "A/D"-type network is then trained to predict future shape measurements from earlier shape measurements. That is, instead of learning a latent representation o_i from which the input shape can be reconstructed, a latent representation may be learned from the shape at time t that enables reconstruction of the shape at time t+1. The encoder-decoder network may also receive shape data within a time interval [t, t+n] and learn a latent representation from which shape data in a future time interval [t+n+1, t+m] can be reconstructed. It has been shown that learning more complex tasks (such as predicting the future) allows the network to learn more meaningful and separable latent representations. When such a separable latent representation is projected onto a lower dimension using, for example, t-SNE, it shows more separation between the clusters associated with different labels (e.g., anatomical locations). More separation means less overlap between clusters, which reduces errors or ambiguity in assigning data to clusters.
The main drawback of the above-mentioned k-means and supervised machine learning methods is that the number of clusters/classes K needs to be known in advance in order to label the training data. If the number of clusters is unknown, or labeled and/or training data is unavailable and cannot be generated, an unsupervised feature learning method may be used instead. One embodiment of such an unsupervised learning method is described below. Preferably, a trained "A/D"-type network of the kind described above is used as transformation stage TS, TS'. Broadly speaking, the trained "A/D"-type transformation stage TS' is used to transform the training set S' into a more advantageous representation. The transformation stage TS' uses only the encoder stage g_e of the trained "A/D"-type network; at this stage, the decoder may be excluded. In more detail, the pre-trained encoder network g_e is used to extract the latent representation (vector) o_i of the shape data given as input. This is applied to all, or a predetermined number of, the training data samples s'. The latent space thus obtained may then be clustered, and a prediction algorithm is used during deployment that identifies the relevant cluster for a shape measurement s based on distance measurements. Suitable clustering algorithms require no prior information about the number of clusters K and include any of hierarchical clustering (agglomerative, divisive), model-based clustering (Gaussian mixtures), or density-based spatial clustering of applications with noise (DBSCAN). The cluster definitions thus obtained, including their descriptors D() (if any), are stored in a memory MEM. Once a new shape s is provided during deployment, the distance (or other similarity measure) between the latent representation of the new sample, computed by the deployment transformer TS, and the descriptors (e.g., center points) of all previously identified clusters is computed. The new shape s belongs to a specific cluster only if the distance is smaller than a preset threshold. Otherwise, in a dynamic learning setting, the new sample s may be an outlier or part of a completely new, as-yet-unrecognized cluster (e.g., a transition region); for example, the new measurement s may represent the transition region described above. This fact may be communicated to the user by dialog information on the user interface. If such an outlier is detected, a retraining routine is triggered and the model M is adjusted anew, as will be discussed in more detail below in conjunction with dynamic learning in fig. 10.
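A sketch of this encoder-plus-clustering pipeline (assuming PyTorch and scikit-learn; DBSCAN is one of the named options that needs no prior K, and the eps/min_samples values are hypothetical):

```python
import numpy as np
import torch
from sklearn.cluster import DBSCAN

@torch.no_grad()
def cluster_latents(encoder, S_train: np.ndarray, eps: float = 0.5):
    """Encode all training shapes with the pre-trained encoder g_e
    (decoder discarded) and cluster the latent space with DBSCAN,
    which needs no prior cluster count K; label -1 marks outliers."""
    o = encoder(torch.from_numpy(S_train).float()).numpy()
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(o)
    # Centroids of the found clusters act as descriptors D().
    centers = {k: o[labels == k].mean(axis=0) for k in set(labels) if k != -1}
    return labels, centers
```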
As a refinement of any of the above, it is not necessary to explicitly cluster the learned representations using one of the methods specifically described above. For example, assuming the unsupervised learning approach can learn a well-separable representation, the problem reduces to identifying the clusters (e.g., how many clusters there are, the extent of each cluster, etc.) and labeling each cluster with a meaningful label (e.g., a specific anatomical location). This can be achieved by a one-shot or few-shot learning method.
In this method, it is assumed that at least one labeled dataset exists. This may require only a single, fine-grained labeling pass, in which each frame carries information about the anatomical location of the device. This labeled dataset, the "prototype set", is not used for training but for identifying clusters in the learned representation. As shown above in fig. 3B, this may be achieved by the functionality of the automatic labeler ALL.
In an embodiment, the automatic labeler ALL is operable to interface with the transformation stage TS' to obtain a latent representation of some or all of the labeled frames or frame intervals (windows) s of the prototype set. For example, the trained encoder g_e may be used, and the automatic labeler ALL computes a distance metric between some or all frames/windows with known labels and some or all encoder-generated representations of the unlabeled training data. Suitable distance metrics include cosine similarity, squared Euclidean distance (L2 norm), or others.
The automatic labeler ALL then takes, on a per-frame/window basis, the minimum of the recorded distance metric (or the maximum of a similarity metric) to find the training data encodings with the strongest response for a particular label. The automatic labeler ALL may then cycle through some or all of the labels to assign to each training data item the label that elicits the strongest response. In this way, the automatic labeler ALL allows labels to be propagated from the prototype set to the entire training set. The number of clusters is defined by the number of labels in the prototype set. The cluster centers can now be found, and the latent representation of any new unlabeled data is compared against the cluster centers to assign it to a specific cluster.
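A sketch of this nearest-prototype label propagation on the encoder outputs; the L2 distance is used here, though cosine similarity works analogously, and the function name is illustrative:

```python
import numpy as np

def propagate_labels(proto_latents: np.ndarray, proto_labels: list,
                     train_latents: np.ndarray) -> np.ndarray:
    """Assign to each unlabeled training encoding the label of the
    prototype frame/window producing the strongest response, i.e.
    the smallest L2 distance in latent space."""
    labels = []
    for o in train_latents:
        d = np.linalg.norm(proto_latents - o, axis=1)   # distance to prototypes
        labels.append(proto_labels[int(np.argmin(d))])  # strongest response
    return np.asarray(labels)
```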
Referring now to fig. 9, a block diagram of the training system TSS is shown. The training system is configured to execute one of the learning algorithms (formulas (1a-b), (2a-b)) applicable to the explicit and implicit modeling described above. The optimization process is guided by improving the mentioned objective function F. In explicit modeling, the function F may be improved by adjusting the training parameters θ of the model M_θ; in implicit modeling, it may be improved by storing and processing the training dataset (as in clustering-type algorithms). The system TSS may be coupled with the preprocessing stage PS' and/or the transformation stage TS' of any of the embodiments described above.
The training system may be used once, or repeatedly in dynamic learning whenever new training data is presented. Once a sufficient number of new training data samples s+ have accumulated, operation of the training system TSS may be triggered by a user or automatically.
For example, in the above-described architecture of the CNN-type model M or transformation stage TS, the training parameters may include the weights W and, optionally, the biases of all convolution/deconvolution filter kernels of the CNN model. The weights may differ from layer to layer, and each layer may comprise a plurality of convolution operators, some or each of which has a different set of kernels. In particular, these weights W_j are learned during a training phase, where the index j runs over the layers and their convolution operators. Once the training phase has ended, the fully learned weights, together with the architecture in which the nodes are arranged, may be stored in one or more data stores and made available for deployment. The training system TSS may be adapted for models of other neural-network types, or indeed for non-neural-network-type models M, such as SVMs, random forests, or any of the types mentioned above.
In explicit modeling of the supervised type, the training data comprises pairs of data (s'_k, c_k). For each pair k, the training data comprises training input data s'_k and an associated target c_k. The training data is thus organized in pairs k, particularly for the supervised learning schemes contemplated in embodiments; it should be noted, however, that unsupervised learning schemes are not excluded herein. s'_k denotes a historical shape measurement, and c_k denotes the associated category representing the corresponding anatomical location AL_j.
The training input data s'_k may be obtained from historical data, acquired in a catheter laboratory and stored in a medical database, such as a HIS (hospital information system) or a patient database system, etc. The targets c_k, or "ground truth", may represent class labels, in particular class labels of a classification-type model M.
These labels may be obtained from a medical data repository or may be assigned later by a medical professional. The corresponding label may be found in the header data, or may be inferred by examining medical records and notes of the historical data. For training a classification-type model, the label c_k may comprise any representation of the corresponding anatomical location.
If the training involves context-dependent data κ, then for a given pair k the target c_k generally does not contain corresponding context data. In other words, for learning with context-dependent data, the data pairs typically take the form ((s'_k, κ), c_k), where the context data κ is associated only with the training input s'_k and not with the target c_k.
In the training phase, the architecture of the machine learning model M (a CNN as shown in figs. 7 and 8) is pre-populated with an initial set of weights, which may be chosen randomly. The weights θ of the model M represent its parameterization M_θ, and it is the task of the training system TSS to optimize and update the parameters θ based on the training data pairs (s'_k, c_k), so as to improve the objective function F according to any one of formulas (1a-b), (2a-b).
For example, in explicit modeling for supervised learning, the cost function F measures the cumulative residual according to formula (1b), i.e. the difference between the data M_θ(s'_k) estimated by the model and the targets, over some or all of the training data pairs k:

F(θ) = Σ_k || M_θ(s'_k) − c_k ||²
In supervised training of NN-type models, the training input data s'_k of each training pair is propagated through the initialized network M. Specifically, the k-th training input s'_k is received at the input layer IL, passed through the model, and output at the output layer OL as training output M_θ(s'_k). The cost function F measures the difference between the output M_θ(s'_k) generated by model M and the desired target c_k, using a suitable similarity measure such as cross-entropy.
After one or more iterations of a first, inner loop, in which the updater UP updates the parameters θ of the model for the current data pair (s'_k, c_k), the training system TSS enters a second, outer loop in which the next training data pair (s'_{k+1}, c_{k+1}) is processed accordingly. The structure of the updater UP depends on the optimization scheme used. For example, the inner loop managed by the updater UP may be implemented by one or more forward and backward passes of a forward/backpropagation algorithm. In adjusting the parameters, the aggregated (e.g., summed) residuals of all training pairs up to and including the current one are considered, so as to improve the objective function. The aggregated residuals may be formed by configuring the objective function F as the sum of the squares of some or all of the per-pair residuals (as in formula (1b)); algebraic combinations other than a sum of squares are also contemplated. Instead of processing each training data item individually as described above, batch training may be performed, in which a set (batch) of training data items is iterated over; in the extreme case, all training data items are considered and processed together. Batch training is preferred herein.
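A sketch of such a mini-batch training loop (assuming PyTorch; the optimizer choice, learning rate, and epoch/batch counts are hypothetical, and c is assumed to hold integer class indices):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, S: torch.Tensor, c: torch.Tensor,
          n_epochs: int = 50, batch_size: int = 32):
    """Mini-batch training: the updater UP role is played by the
    optimizer step; forward pass, residual via cross-entropy F,
    backward pass, then parameter update of theta."""
    loader = DataLoader(TensorDataset(S, c), batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()      # applies softmax internally
    for _ in range(n_epochs):                  # outer loop over the dataset
        for s_k, c_k in loader:                # one batch of training pairs
            opt.zero_grad()
            loss = loss_fn(model(s_k), c_k)    # F(theta) on this batch
            loss.backward()                    # backward pass (backpropagation)
            opt.step()                         # inner-loop parameter update
    return model
```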
The training system shown in fig. 9 may be regarded as usable for all learning schemes, supervised or unsupervised, including clustering, wherein the objective function F is configured to measure some target related to partitioning the training dataset S' into subsets, as in clustering-type algorithms, formulas (2a-b). The number of clusters may or may not be known in advance.
In alternative embodiments, unsupervised learning schemes are also contemplated herein. In an unsupervised scheme there is no ground truth data; the updater UP may still update in one or more iteration cycles to improve the objective function F, which is then configured to measure some other objective related to the training data, such as how well the training data is reconstructed from a dimension-reduced representation of itself.
A GPU or TPU may be used to implement the training system TSS.
The trained machine learning module M may be stored in one or more memories MEM or databases and made available to the system C-SYS as a pre-trained machine learning model. The trained model M may be provided as a cloud service. Access may be provided free of charge, or usage rights may be granted under a license-fee or pay-per-use scheme.
With continued reference to the block diagram in fig. 10, this figure shows a mode of operation of the prediction logic PL, namely dynamic learning. The context-dependent data κ may be used in combination with shape measurements received from the distributed sensing system SSS; however, the use of such context-dependent data κ is optional.
As explained before, a given shape measurement s is processed by the machine learning module MLM. The trained model M is applied to the shape measurement s received during deployment to calculate the current classification result c. The result c may be a classification result obtained by a classifier according to any of the above embodiments, or a category representing a cluster obtained in a clustering arrangement. The category c_j is associated with a corresponding anatomical location AL_j. The prediction result c may be used to control one or more of the aforementioned devices/systems D_j via one or more control interfaces CIS.
However, it may happen that the logic PL encounters a shape measurement s+ during deployment that represents a new class not previously assigned by the training system TSS. If the logic PL encounters such a new sample, this may trigger re-entry into the training phase to adjust the model M to the new data. Such repeated re-entry into training, triggered by the quality of the received measurements s, may be referred to herein as dynamic learning. During deployment, the current training dataset S' is thus supplemented by data samples s+ representing the new class. As the prediction logic encounters new data s+, it thereby becomes more reliable over time and learns to generalize better. Re-entrant retraining in dynamic learning may be triggered on a per-measurement basis or in batches. More specifically, during each inference it is determined, for each received shape measurement, whether it constitutes a new shape measurement class c+. If so, process flow passes to the training system TSS and retraining is triggered. Alternatively, instead of triggering retraining per inference, a lower retraining rate may be chosen, e.g. retraining is only triggered once a certain number of shape measurements s+ representing new classes c+ have been received. In other words, the new-class shape measurements are stored in a buffer, and the training system TSS retrains the model once a critical number has been reached, which may be set by the user or otherwise predefined.
Retraining may include re-running the training algorithm previously used to train the model, but now taking the new data into account. For example, in explicit modeling, the parameters of a neural-network-type model or any other classifier may be adjusted by re-running a backpropagation training algorithm. Alternatively, if a clustering-type machine learning method is used, the training data S' is updated to now include the new measurements s+ representing the new class/cluster c+. Depending on the type of clustering algorithm used (2b), this may require a re-run. Alternatively, as in the k-NN scheme, it may be sufficient to update only the training set. In particular, the new training samples s+ may define a new class or cluster representing a new anatomical location. When the training algorithm is re-run, some of the samples s+ hitherto thought to be new may be found to in fact belong to one or more previously known classes, so that establishing a new class may not be necessary.
The triggering of retraining may be based on the aforementioned uncertainty value Δ, calculated by the uncertainty determiner UD during deployment. As mentioned before, the uncertainty value Δ may be computed by the uncertainty determiner UD based on Bayesian analysis and the current internal parameter set θ* of the trained model M. However, such a special, supplemental uncertainty calculation may not be necessary: the prediction result itself may be used to quantify uncertainty. If a classification result is provided, it is typically output as a vector with multiple entries, each representing the probability of a respective class, and the final classification output is typically the class with the highest probability. If no class has a probability above a certain critical threshold Δ_0 (e.g., 50%), this may itself indicate uncertainty, because the probability mass is then distributed approximately equally across most or all clusters/classes. In this embodiment, the uncertainty determiner UD functions as a thresholder: if none of the class probabilities is above the critical threshold Δ_0, retraining is triggered. The critical threshold Δ_0 can be adjusted by the user. The uncertainty determiner UD may be a component of the prediction logic PL. High-uncertainty shape measurement samples may be marked with a flag φ, which may trigger the release of a message; the message may call for the user's attention via visual or audio signals, or any other means. Similar uncertainty determinations may also be used for the clustering algorithm.
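A minimal sketch of the thresholder view of the uncertainty determiner UD (the function name and default Δ_0 are illustrative):

```python
import numpy as np

def needs_retraining(class_probs: np.ndarray, delta_0: float = 0.5) -> bool:
    """If no class probability exceeds the critical threshold Delta_0,
    the probability mass is spread across classes; the sample is then
    flagged (phi) as a potential new category, triggering retraining."""
    return float(class_probs.max()) < delta_0

# Example: probability mass spread roughly equally -> flag raised.
print(needs_retraining(np.array([0.28, 0.25, 0.24, 0.23])))  # True
```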
Another embodiment of triggering retraining based on new data is data-centric, as briefly mentioned above in connection with fig. 8B). In this or a similar method, a similarity measure is used to quantify the distance/similarity between a given newly received shape measurement s and the shape measurements already classified (e.g., clustered) in the training database TD. Retraining is triggered if the new shape measurement s lies farther than a predefined distance from every existing cluster.
In addition to dynamically updating the training dataset S', once the procedure begins and arm A is navigated through the anatomy, the logic PL processes the shape data s quasi-continuously at a reasonable frame rate, either preset or user-triggered. In some embodiments, for example in clustering, the prediction logic PL determines whether the currently measured shape s matches a cluster in the training dataset S'. Whether there is a match is determined based on a similarity measure or certainty score, formula (2c), between the current shape and the previous clusters.
The transformation stage TS' may facilitate computing this similarity, which is calculated on the transformed data. As explained before, the training data s' is projected into a dimension-reduced space constructed by principal component analysis (PCA). The training system TSS may then use the principal components or modes to cluster the data, e.g. by k-means.
Since each cluster contains similar shapes, the mode weights of the shapes in each cluster are expected to be similar as well. In addition to, or instead of, computing categories (e.g., clusters) based on the trained model, the logic PL may define and use statistical similarities. For this purpose, it can be assumed that the intra-cluster data should lie within ±cσ, where σ is the standard deviation, i.e. the square root of the eigenvalue of each mode, and c is a constant defining how much deviation from the mean is acceptable. For example, c = 2 covers 95.45% of the data that can be sampled from the distribution associated with the cluster. If the logic PL assigns a new shape to a cluster but the mode weights of the shape fall outside the allowed ±2σ range, this may indicate that a new cluster may be needed for the new shape. Such statistical or other forms of similarity metric may be used in addition to the trained model for predicting clusters. The similarity measure is thus another embodiment of the aforementioned uncertainty determiner UD.
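A sketch of this ±cσ check on the PCA mode weights (function and argument names are illustrative; the eigenvalues are those of the PCA modes associated with the cluster):

```python
import numpy as np

def within_cluster_variance(w: np.ndarray, cluster_mean: np.ndarray,
                            eigenvalues: np.ndarray, c: float = 2.0) -> bool:
    """Check whether the mode weights w of a newly assigned shape fall
    within +/- c*sigma of the cluster mean, with sigma the square root
    of each mode's eigenvalue; c = 2 covers ~95.45% of samples drawn
    from the cluster's distribution."""
    sigma = np.sqrt(eigenvalues)
    return bool(np.all(np.abs(w - cluster_mean) <= c * sigma))
```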
Furthermore, if a new shape is dissimilar to the shapes currently held in the database on which the model M was built, reconstructing the new shape using only the mode weights obtained by projecting it onto the PCA modes will result in a poor reconstruction. The difference between the original shape and the reconstructed shape may thus also indicate similarity or dissimilarity between the new shape and the shapes in the clusters, and may be used to indicate whether a new cluster needs to be created.
As an alternative to PCA, the transformation stages TS, TS' may be configured to use ML itself, such as the VAE mentioned earlier. A VAE may likewise be used to learn a low-dimensional representation of the shape data. A disadvantage of PCA is that it requires point-to-point correspondence between shapes to successfully explain the variance in the data. Such correspondence may not always be available, for example when the user operates the training data generator TDG (fig. 11) during pre-planning to draw shapes in an image. In this or a similar case, a VAE may be used instead, so that both representative corresponding features and a low-dimensional latent representation of the shape are learned automatically by the encoder g_e of the VAE. The VAE may also learn the distribution of the latent space of shapes. Hence, similarly to the above, an acceptable variance range in the distribution may be defined to accept or reject new shapes assigned to clusters.
Furthermore, the decoder g_d may use the latent representation of a shape to reconstruct the new shape from the low-dimensional representation of the input shape alone. If the training data does not represent the shape well, the difference between the original shape and the reconstructed shape will be large. As described above, this difference may also be used to indicate whether a new cluster is required for the new shape; the logic PL may apply a simple threshold to this difference to decide whether to establish a new category, such as a cluster.
If the similarity measure is below a threshold (i.e. not similar), then either the current shape is "between clusters" and thus represents a transition region, or the pre-built database TD does not capture all relevant shape data types for the current procedure.
A similarity observation over several frames may be performed to distinguish such transitions. If the current shape is merely between clusters, shapes acquired over only a few frames (time points) are expected to have low similarity to the previous clusters/categories. If the number of consecutive frames with low similarity is below a threshold, it may be concluded that the shape did indeed transition from one cluster to another. This transition-evidence threshold (transition threshold) may be learned retrospectively from past patient data. If, however, the number of frames with low similarity is above the threshold, this may indicate that the shape does represent a new, as-yet-undefined cluster.
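A sketch of this multi-frame disambiguation logic (plain Python; the function name, string outcomes, and threshold semantics are illustrative):

```python
def transition_or_new_cluster(similarities: list, sim_threshold: float,
                              transition_threshold: int) -> str:
    """similarities: per-frame similarity of recent shapes to the best
    existing cluster. A short run of low-similarity frames indicates a
    transition between clusters; a sustained run indicates a new cluster."""
    low_run = 0
    for s in reversed(similarities):      # count trailing low-similarity frames
        if s < sim_threshold:
            low_run += 1
        else:
            break
    if low_run == 0:
        return "in-cluster"
    return "transition" if low_run < transition_threshold else "new-cluster"
```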
If such a new cluster is found, the controller C-SYS may alert the system or the user. The current shape s may be stored in the database TD as a new cluster. Any new shape measurement s acquired after the current one that matches this shape better than any other cluster may be added as a representative to the new cluster. The creation of a new shape cluster may signal to the training system TSS that re-clustering/re-learning should be triggered. Retraining can be performed intraoperatively or postoperatively, as desired.
It may also be known before surgery that a new patient is atypical or has complex anatomy, increasing the likelihood of observing shapes for which a new cluster may need to be created on the fly. Such information can be obtained from preoperative 3D imaging. In this case, instead of forming new clusters on the fly, a different approach may be employed in which simulations are used to generate shape data for pre-constructed clusters. The simulation may be performed by segmenting the vasculature to be traversed and extracting a 3D mesh. Known mechanical properties of the device ID to be used (e.g., thickness, flexibility, tip shape, etc.) may be input into the training data generator to simulate a path from an entry point to a target location within the 3D mesh. The simulated shapes may be saved and clustered as described above; the steps of fig. 11 may be used for this purpose. Since the data is simulated, multiple variations can be generated (e.g., by adjusting the parameters that control the simulation of the device), yielding a larger amount of data. During surgery, shapes generated by the device may be assigned to the clusters constructed from the simulation data, thereby reducing the likelihood of having to open new clusters or categories on the fly, which may consume CPU resources.
Alternatively, the simulated shapes may be combined with shapes from past patient data (or data from multiple patients similar to the current patient), and clusters may then be formed from the combined data. In this case, some clusters may include both shapes from past patient data and simulated shapes, while some clusters (representing atypical anatomy) may contain only simulated shapes. The ratio of device shapes from the procedure that fall into typical clusters (with both real and simulated shapes) to those that fall into atypical clusters (containing only or mainly simulated data) can be used to quantify how atypical the patient is.
Not only in the dynamic learning method but in any of the embodiments described previously, it has been found useful to train the model not only on the shape measurements s but jointly on the shape measurements and the previously mentioned context-dependent data κ. The input for training and/or deployment then consists of enriched measurements (s, κ); in other words, a new dimension is added to the shape measurement data. This enriched data (s, κ) may be represented by a higher-dimensional matrix and may be processed by the training system as separate channels, so that the model is trained jointly on the actual shape measurement data s and the context-dependent data κ. The context-dependent data may indicate the acquisition time, a characteristic of the patient, or any other suitable type of contextual information describing or relating to the intervention or the patient. The context-dependent data is digitally encoded and may be represented as vectors or as one or more matrices or tensors. The context-dependent data κ may comprise data generated by any device D_j or system used or operated in the medical procedure. It may comprise 2D or 3D images of the patient, such as X-ray images; these may be intra-operative or inter-operative images. Other modalities are also contemplated, such as MRI, PET, US, etc. The context-dependent data can be effectively utilized to correctly classify outlier shape measurements that could not otherwise be assigned to existing categories.
The context-dependent data portion κ (which may not necessarily be numerical in nature) may exhibit undesirable sparsity. For example, an autoencoder-based embedding algorithm may be used to find a denser representation, and this denser representation of the context-dependent data κ is then used, processed together with the shape measurements, to train the model M.
The dynamic learning described above is on-the-fly training, in which an existing training dataset is supplemented and the model adapted accordingly. However, the additional data need not necessarily arise during deployment; it may instead be generated synthetically, either automatically or with user assistance. The generation of additional training data, on which the training system TSS can train the model, is implemented by the training data generator TDG.
The training data generator TDG may comprise a user interface UI, such as a touch screen or pointer tool (e.g., stylus, mouse, etc.), through which a user may provide input to assist in generating training data. Optionally, the shape measurements are generated using the context-dependent data κ. In an embodiment, the context-dependent data κ may comprise 2D or 3D pre-operative or inter-operative medical images. Such images may be used to generate synthetic measurements, including their respective categories, such as clusters or class labels.
An example of such user assistance is shown in fig. 11. Specifically, fig. 11 shows a 2D or 3D X-ray image, or an image from any other modality such as MRI, PET, or the like.
For example, 3D imaging data may be reconstructed in a manner that represents a particular aspect of the anatomy (e.g., an airway or a blood vessel). Upon identifying one or more target locations, such as a tumor or a portion of a vessel in need of repair, a 3D path from an origin to the target may be mapped. These 3D shapes or paths may be extracted from the imaging data. The entire path/shape, or individual sections of it, may then each define a single shape cluster. Instead of 3D image data, 2D projection data (e.g., from X-ray radiography) may be used.
In more detail, the vessel tree VT may be defined using a segmentation algorithm SEG, with which the user may interact through a user interface UI. For example, the user may use a freehand drawing tool to draw, in the vessel tree, a shape that represents an anatomical location of interest (e.g., a portion of a vessel). Alternatively, the user sets a discrete number of control points CP1-5 in the vessel tree through the interface, and a spline algorithm then computes a curve that passes through these control points and conforms in shape to the relevant portion of the segmented vessel tree. Curvature or strain/stress values may be calculated from the shape as desired, or alternatively 3D (X, Y, Z) or 2D (X, Y) coordinates may be used to define the shape s. The shape may also be generated automatically by the segmenter SEG using the centerline of the vessel tree.
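A sketch of such a spline-through-control-points step (assuming SciPy is available; the function name and sample count are illustrative, and curvature/strain extraction from the resulting curve is left as a further step):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def shape_from_control_points(cp: np.ndarray, n_samples: int = 128) -> np.ndarray:
    """Fit an interpolating spline through user-set control points
    CP1..CP5 (rows of cp, 2D or 3D) and sample it densely to obtain a
    synthetic shape s+ conforming to the segmented vessel portion."""
    tck, _ = splprep(cp.T, s=0)                # s=0: pass through all points
    u = np.linspace(0.0, 1.0, n_samples)
    return np.asarray(splev(u, tck)).T         # (n_samples, dim) curve
```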
Since the generated synthetic shape data s+ is drawn by a user who knows the anatomy represented by the underlying image, the classification is implicit. A user tool may be used to mark each drawn shape accordingly with an identifier of the corresponding anatomical location. Thus, by drawing shapes in a set of images or in different parts of an image, a large amount of classified training data can be generated synthetically and added to the existing training dataset S'. This allows, for example, a sparsely populated training database to be enriched, or local images representative of the patients coming to a given medical facility to be used to help set up a new system at a new site. It is even possible to build the entire training dataset S' from scratch, consisting entirely of such artificially generated samples.
If a clustering algorithm is used for training, the generated artificial shapes s+ and their corresponding categories may be grouped by category to define clusters, which can then be used immediately during deployment to assign new shape measurements to existing clusters. The user-generated labels may be analyzed using a language analyzer component to group them into clusters. Alternatively, the training algorithm is re-run to adjust the parameters of the neural network or other classifier model to account for the newly added synthetic training data samples s+. In addition to the newly generated data, the stored previous (old) training data may be used for retraining the model. The generation of synthetic shape measurements s+ need not depend on user support provided via the user interface UI as described above in fig. 11, but may be fully automated. In this case, the image is segmented as before and portions of the vessel tree are labeled with anatomical locations; a spline curve is then laid over the anatomy to obtain artificial shape measurements s+.
Another embodiment of generating training data may be based on biophysical modeling algorithms that define the most likely path an interventional device ID (such as a guidewire or catheter) will take, based on the shape of the vessel and the mechanical characteristics of the device. A path may be defined from such biophysically driven predictions, from which synthetically generated shape measurements may be derived. In addition or instead, in embodiments, past device ID data (such as that provided by a shape sensing system or other tracking technology) is used to learn the path the device is most likely to take during successful navigation to the target. A variational autoencoder may be used for this ML task. Optionally, uncertainty values are provided for each path, such as standard deviations or other deviations obtainable from the encoder stage of a variational autoencoder.
Thus, the imaging data may be used to define shapes/lines, which may then be used to define the corresponding clusters and their anatomical location labels/annotations. Images of the anatomical structure of interest may be acquired at the beginning of the procedure or during the procedure; the image may be a 2D X-ray image. The training data generator TDG proposed herein may be used for this purpose. The user may "draw" lines to create his/her own meaningful clusters, such as a path to a target branch vessel or the point where a stent graft should be deployed.
Once the user has drawn the desired shapes (e.g., left kidney, etc.) on the X-ray image, these drawn shapes can be used to identify clusters expected to be seen during the procedure. The training data generator TDG may comprise a visualization component; the shapes already belonging to these clusters can be used to show the distribution of possible shapes for each cluster.
New 2D or 3D shapes/paths/lines can be used to create clusters specific to the current patient from scratch. Alternatively, if the existing training dataset already contains fitting clusters, additional clusters may be created using the synthetically generated shapes.
Another embodiment of generating training data in a user-assisted manner is shown in fig. 12. This embodiment uses shape measurements registered onto images that are occasionally needed during the procedure. The continuous line shows a shape measurement registered to the structure in the current image. The user may define additional, manually generated shape measurement data s+, as indicated by the dashed and dotted lines ug1, ug2. The user may use a suitable user interface, such as a computer mouse, stylus or touch screen in combination with drawing software, to generate the artificial shape measurement data s+. Owing to the registration, the classification/labeling is implicit. For example, if arm A is located in a renal artery, the user may label the acquired shape as "renal artery". The 3D shape measurements returned by arm A are saved together with the user's "renal artery" label defined from, for example, a 2D X-ray image.
However, such user-assisted input is not required herein. Alternatively, the markers/annotations may be obtained automatically by registering a 3D vascular map onto the 3D pre-operative data. Some or each correspondingly marked location and shape measurement s is then stored as a corresponding cluster. If the user or system generates too many clusters, similar labels/clusters in the annotations may be grouped using natural language processing and/or shape analysis. The above-mentioned techniques for generating synthetic training data s+ may be used in place of, or in combination with, "real" historical data. Training data augmentation techniques may be used to increase the variability of the training data, thereby facilitating better training results.
The shape measurement data, in combination with contextual data associated with the image type (e.g., X-ray), provides a way to tag the categories found during learning. That is, more generally, the shape data provided by arm A may be used to automatically discover meaningful clusters or categories, while the contextually associated image data (e.g., X-ray imaging data) may be used to automatically infer the location, relative to the anatomy, at which the shape data in each category was acquired. By classifying these data into pre-existing categories, or by forming new clusters in dynamic learning as mentioned before, shape measurements without contextually associated image data can be used to add more data to the training set S'. In this way, anatomical location labels can be assigned to shape measurements even without ground truth from an X-ray image. Furthermore, shape data acquired around the time the X-ray image is acquired captures more natural variation in shape and may be used herein to more robustly estimate characteristics (e.g., the mean) of a cluster. This may help to detect clusters correctly, for example when the corresponding location is visited again later in the procedure.
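By way of illustration, the assignment of shape measurements acquired without contextually associated image data to pre-existing labeled clusters might be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions: a nearest-centroid rule with a distance threshold; all names, the toy centroids, and the threshold value are hypothetical, not a prescribed implementation:

    import numpy as np

    def assign_to_cluster(shape_vec, centroids, labels, max_dist=None):
        """Assign an unlabeled shape descriptor to the nearest labeled
        cluster centroid. Returns (label, distance); label is None when
        no centroid is close enough, so the sample can be set aside for
        retrospective expert review instead of being auto-labeled."""
        dists = np.linalg.norm(centroids - shape_vec, axis=1)
        i = int(np.argmin(dists))
        if max_dist is not None and dists[i] > max_dist:
            return None, dists[i]        # ambiguous: do not auto-label
        return labels[i], dists[i]

    # toy example: two centroids carrying anatomical location labels
    centroids = np.array([[0.0, 0.0, 1.0], [2.0, 2.0, 0.0]])
    labels = ["aorta", "renal artery"]
    print(assign_to_cluster(np.array([1.9, 2.1, 0.1]),
                            centroids, labels, max_dist=1.0))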
To associate anatomical location markers with the categories found by machine learning, the automatic labeler ALL may implement an image processing algorithm to generate labels for corresponding portions of the image. For example, if a kidney is identified in an image by the image processing algorithm and blood vessels connected to the identified kidney are then found, the automatic labeler ALL assigns the label "renal artery" to these vessels, as an example of a suitable anatomical label. If instead an aneurysm is identified, the aorta on both sides of the aneurysm may be labeled "aorta", or the like. Arm A may be registered with (i.e., brought into the same coordinate system as) the X-ray image in order to pick up the label from the processed image. Alternatively, a user interface may be provided that allows the user to label the data manually.
The prediction logic PL may use contextual data associated with the image type, together with the predicted category, to determine the stage of the procedure. For example, the fact that a stent is detectable in an intra-operative X-ray image, in combination with the currently predicted class, may provide sufficient clues as to the current stage of the procedure.
In another embodiment, the system C-SYS may utilize the currently predicted category information together with any current information, such as imaging parameters (e.g., C-arm position, table position, or otherwise) received from the imaging system, to facilitate the procedure. For example, if the user decides to request an image to be acquired at this instant, the combined information may be used to determine whether the current position of the C-arm will produce an optimal image of the device and anatomy (no foreshortening or poor viewing angle). The system C-SYS may then prompt the user to alter the imaging geometry before capturing the image, so as to obtain a better FOV.
In addition or instead, the system C-SYS may use the combined information to determine whether and when to re-register arm A to an imaging system, such as the X-ray imager IA. For example, it is preferable not to attempt re-registration while the user is deploying a stent graft, as a significant amount of movement is imminent. However, if the interventional device ID/arm A has come to rest in the upper aorta, this may indicate that the device will remain stable for a period of time, as it is unlikely to move quickly; this is thus a good opportunity for re-registration, improving the accuracy of the registration and hence the visualization of arm A with respect to the image. The combined information may thus be used herein to identify static phases in the procedure that are favorable for (re-)registration attempts.
Besides the image-type contextual data considered above, another type or component of contextual data is time. In this context, temporal data may also be effectively utilized for generating new shape measurements, for enhancing performance during training and/or annotation, or for more accurately predicting the current stage or type of the medical procedure. The time component may be a time stamp indicating when the shape sensing system SSS acquired a given shape measurement. It should thus be appreciated that the prediction logic operates in a dynamic manner, in that it computes a stream or sequence of predicted classes in response to the stream of shape measurements that the shape sensing system SSS provides as its arm A moves during the procedure. The conversion unit CV may use the time component in order to convert the shape measurements not only into anatomical locations, but alternatively or additionally into the type of medical procedure, or the stage of the medical procedure, in which arm A and the associated device ID are being used. The time component may also be a prediction time, i.e., the absolute time or the specific temporal order in which a given class is predicted by the logic PL in response to shape data s measured during the ongoing procedure. Thus the time, and in particular the point in time at which the logic PL identifies a given category during the procedure, may represent useful information for helping to automatically determine the current stage of the procedure. This information may then be used to control any one or more of a variety of systems or sub-processes in the procedure and/or associated with hospital operation.
Timing information emitted by the shape sensing system SSS, the imaging system IA, or any other system/device Dj may be used herein to facilitate the procedure. In one embodiment, the training system TSS uses the timing information in dynamic learning mode. For example, when running a clustering algorithm on the current training dataset S' that utilizes the shapes of arm A, the timing information may be used to refine the partition into clusters. For example, if the training dataset S' consists only of shapes of arm A, the clustering algorithm may classify similar shapes into a single cluster (or class, more generally). However, similar shapes may actually come from different points in time during the procedure, and may thus actually represent different phases of the procedure. Hence, using the timing information of when each shape was acquired, the training dataset S' may be divided into two or more classes (having similar shapes) rather than just a single class.
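A minimal sketch of this idea follows, assuming flattened shape descriptors and per-sample acquisition timestamps; the data, the time normalization and the time weighting are all hypothetical and serve only to show how a timestamp feature can split one shape cluster into two procedure phases:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical training set: identical shape descriptors ("shape aa")
    # occurring both early and late in the procedure.
    rng = np.random.default_rng(0)
    shapes = rng.normal(0.0, 0.1, (100, 8))          # 100 similar shapes
    t = np.concatenate([rng.uniform(0, 10, 50),      # early acquisitions
                        rng.uniform(40, 50, 50)])    # late acquisitions

    # Shape alone would collapse into one cluster; appending a weighted,
    # normalized timestamp as an extra feature splits it into two phases.
    time_weight = 3.0
    X = np.hstack([shapes, time_weight * (t / t.max()).reshape(-1, 1)])
    phases = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(phases))                       # roughly [50 50]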
For example, clusters based on the shape of arm A and the time stamp may be used to determine the stage of the procedure, which may in turn be used to control other systems Dj. For example, in a procedure to place a stent in the renal artery, the catheter must be navigated from the femoral artery into the aorta and then into the renal artery prior to deployment of the stent. The clusters encountered during such navigation can be represented as (see Fig. 5): A) (shape aa, 0-30 minutes), B) (shape bb, 30-35 minutes), C) (shape aa, 35-45 minutes), D) (shape cc, 45-50 minutes) and E) (shape bb, 50-75 minutes), in this particular order. If clusters A) and B) have already occurred, the system recognizes that cluster C) is next and can inform the other systems Dj that the next stage of the procedure is cluster C). Upon reaching cluster C), the imaging system may require a different imaging protocol to be set, or the hospital system may alert caregivers to prepare for the next patient because, based on the stage of the procedure, the current procedure is nearing its end.
In another embodiment, using the same example as above, where the clusters are predicted to be encountered in a specific order (in our example of Fig. 5 the order is A), B), C), D), then E)), if the procedure runs longer than predicted, e.g., the physician remains in cluster A) for 45 minutes instead of up to 30 minutes, the timing of the remaining clusters can be dynamically updated with new time predictions. The above time-order-based approach may be used with more general categories and is not limited to clustering embodiments.
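One possible realization of such a dynamic update is a simple proportional rescaling of the remaining stage durations, sketched below; the stage durations and the rescaling rule are illustrative assumptions, not the only contemplated scheme:

    def update_schedule(expected, elapsed_in_current, current_idx):
        """Rescale the expected durations of the current and remaining
        stages when the current stage overruns. `expected` is a list of
        per-stage durations in minutes (hypothetical values); the overrun
        ratio of the current stage is applied to all later stages."""
        ratio = max(1.0, elapsed_in_current / expected[current_idx])
        return [d if i < current_idx else d * ratio
                for i, d in enumerate(expected)]

    # stages A..E planned for 30, 5, 10, 5, 25 minutes; stage A took 45
    print(update_schedule([30, 5, 10, 5, 25],
                          elapsed_in_current=45, current_idx=0))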
Referring to the training data generator TDG with user input: when the user draws the shapes that are expected to be predicted in a given procedure, the drawn shapes can be used to identify not only the clusters that are expected to occur, but also "when" they are expected (the drawn shapes are expected to be predicted in that order during the procedure). Thus, with the added time dimension, the logic PL may use a deterministic "time cue", defined by the order labels assigned to the shape classes, as a priori knowledge when predicting the class for a shape measurement in deployment. In addition or instead, the logic PL may suggest categories to predict before the shapes are drawn. The suggestions may be refined or updated as the user draws more of the expected shapes.
If the categories predicted during the procedure do not occur in this order, the new sequence may define a different type of procedure. This new cluster sequence can then be saved back into the predefined database TD as data defining the new procedure type. In this way, the system can learn new procedures during clinical practice.
Deviations from the predicted class order may also indicate complications or difficulties in the procedure, which may affect the procedure time, or may require additional users with different qualifications to provide assistance.
The system C-SYS may use the class prediction order and/or the timing information to inform the control system IS and to suggest alternatives: using a procedure card, enabling specific software (e.g., a Vessel Navigator) or other tools, using alternative imaging protocols, or reminding the user of actions that may be attempted during the procedure.
The system C-SYS may also use the timing information to distinguish between different types of users. For example, novice users may spend more time completing steps than experienced users, or may struggle with specific device operations, resulting in new categories being formed, or in categories occurring out of sequence compared to those maintained in the predefined database TD.
If the user has completed the procedure correctly, and has therefore produced the predicted categories in the predicted order, but more slowly than predicted, the controller C-SYS may mark the current user as a "novice" user. Any subsequent time predictions, further into this procedure or in the next procedure, can then be extended by a novice margin for this user.
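A minimal sketch of such a user-level flag follows; the slack factor, the per-stage times, and the rule itself are hypothetical assumptions illustrating the idea rather than a prescribed criterion:

    def flag_user_level(observed, expected, slack=1.5):
        """Mark the user as 'novice' when every predicted category arrived
        in the expected order but the total time exceeded the expected
        total by more than a slack factor (hypothetical value)."""
        order_ok = [c for c, _ in observed] == [c for c, _ in expected]
        t_obs = sum(t for _, t in observed)
        t_exp = sum(t for _, t in expected)
        if order_ok and t_obs > slack * t_exp:
            return "novice"
        return "experienced" if order_ok else "unknown"

    expected = [("A", 30), ("B", 5), ("C", 10)]      # minutes per stage
    observed = [("A", 55), ("B", 9), ("C", 16)]
    print(flag_user_level(observed, expected))       # -> novice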
If a "novice" user is marked, the controller C-SYS may save the sequence of clusters and/or new clusters for review after surgery. This may help educate novice users in the area where they may be striving to perform surgery correctly or optimally.
The controller C-SYS may adjust the navigation plan or the visualization displayed for the user on the laboratory display device DD based on the time at which the corresponding category is predicted during the procedure. For example, a view of the renal artery is displayed rather than an iliac view. Thus, by taking the prediction time into account, the visual support scheme is synchronized in real time with the stream of predicted classes.
The prediction order may be used to flag or trigger useful events related to the shape sensing system SSS. For example, if the device remains in a single cluster/class for an extended (predefined) period of time, which may be recognized by the same class being predicted repeatedly, the logic PL may trigger a re-registration of the device ID with the imaging system (or suggest doing so). For example, if the device ID is "parked" in the aorta while the physician prepares another device, the "parked" device ID is stable and not in use, which is a suitable time to re-register with the imaging system. Occasional re-registration helps keep the accuracy of the device ID relative to the fluoroscopic image good.
Alternatively or in addition to any of the above, the contextual data κ may also include demographic/biometric characteristics of the patient, such as gender, weight, age, BMI, etc.
By utilizing demographic and/or procedural information of the patient, shape categories may be predicted that better fit the current patient. Using any of age, BMI, anatomical target, or procedure type to filter the existing training dataset S', so that only information from patients with similar characteristics is used when predicting categories, may improve the robustness and accuracy of the predicted categories. Thus, the training dataset S' may be stratified according to patient characteristics. The system C-SYS queries the user for the characteristics of the current patient and then uses only the subset of training data samples corresponding to those patient characteristics.
For example, during training, the training system TSS may run n-fold cross-validation (n > 1, where possible) to improve the generalization ability of the trained model. In cross-validation, the training data is divided into multiple groups, which are then used in different combinations for training and validation. The training data may be divided based on BMI or the like. Thus, using the above data, the training system may generate different models for particular types of patients rather than one generic model M (although the latter is still contemplated in some embodiments).
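One way such patient-characteristic-aware validation might be sketched is with stratified folds, so that every train/validation split sees all patient types; the data, the BMI bands, and the fold count below are hypothetical:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    # Hypothetical data: one row per shape sample; bmi_band records the
    # BMI band of the patient each sample came from and is used to
    # stratify the folds.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(90, 8))                    # shape descriptors
    bmi_band = np.repeat(["low", "mid", "high"], 30)

    skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    for fold, (tr, va) in enumerate(skf.split(X, bmi_band)):
        # a model would be trained on X[tr] and validated on X[va] here
        print(fold, np.unique(bmi_band[va], return_counts=True))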
During deployment, the logic PL obtains the patient characteristics of the new patient in order to dynamically access and use the model that best corresponds to the obtained patient characteristics.
Alternatively, the training system TSS partitions the raw data S' based on patient characteristics. The model is then still trained on all data, but the training may be adjusted based on the patient-characteristic-based partitioning. For a new patient who falls under certain patient characteristics, the logic PL dynamically accesses the model tuned to that patient subpopulation.
Alternatively, the 3D anatomy of the current patient may also be compared to that of previous patients. Such comparison may be based on image data, such as 3D image data from CBCT, CT or MRI. The similarity of the structures of interest for the procedure (e.g., vascular structures) can be quantified. The model may then be selected based on similar anatomical patterns.
In yet another embodiment, patient characteristics may be used to "normalize" the data s. For example, shapes from pediatric cases may be smaller, but should otherwise be similar to shapes from adult cases. Pediatric and adult cases may then be used together to train the model, for example in clustering, but with a corresponding "scaling" factor taken into account to normalize each shape. The scaling factor may also be associated with the time spent in a given category (the time during which the logic returns prediction results for the same category), as this may differ for different patient characteristics.
In addition or instead, the above-mentioned patient information/features may be used to correct data imbalance and/or to test for bias.
Referring back to the block diagram in Fig. 10, a newly detected class for the current shape measurement s may be marked by the control logic PL issuing a corresponding flag φ. The new-class alarm flag φ may be used to influence how the control logic PL controls one or more of the various devices or systems Dj.
For example, once the flag is raised, the prediction logic PL may instruct the control interface CIF to issue signals to affect or change certain imaging parameters of the imaging apparatus IA. For example, the imaging geometry may be changed by moving the gantry GT or the like. In addition or instead, the operation/settings of arm A itself, or of the interventional device, may be changed.
Referring now to Fig. 13, this figure shows a verification module WM comprising a user interface UI that may be configured to allow a user to verify a certain category c predicted by the prediction logic PL, in particular for high-uncertainty (or, equivalently, low-confidence) categories. The user interface UI is preferably, but not necessarily, arranged as a graphical user interface (as shown in Fig. 13) comprising a plurality of graphical widgets Wj configured for human-computer interaction.
Widget W1 may include an identifier for the category predicted for the received shape measurement s collected by arm A. The shape measurement s itself may be graphically rendered by the visualization module in the form of a 3D or 2D curve, e.g., g(s). The user can visually assess whether the proposed classification, i.e., the predicted anatomical location represented by the identifier widget, corresponds to the shape g(s) displayed on the display device DD or another display device DD'. In addition, one or more control widgets W2, W3 may be provided that allow the user to confirm (W2) or reject (W3) the prediction. Preferably, the user can provide a new identifier that, in their view, correctly represents the current location corresponding to the displayed current shape measurement. The correct identifier may be entered into a text box by keyboard or touch screen action; for example, actuating the reject button W3 may change widget W1 into such a text box. The user may then replace the erroneous identifier that the machine proposed for the anatomical location. Other input options for capturing user-provided correction information are also contemplated; the above is just one of many ways this functionality may be implemented. Moreover, while touch screen interaction as shown in Fig. 13 is preferred, this is not a requirement herein; other modes of interaction include the use of a pointer tool (such as a stylus or mouse), and keyboard-based interaction is not excluded herein. The new identifier then either labels a new category, or the measurement s is assigned to an existing category of the training set S'. A natural language analyzer may analyze the newly provided identifier and attempt to establish a correspondence with an existing category (if any). The need for such natural language (NL) processing can be avoided by using a controlled vocabulary: the input widget for the user to provide the correct identifier may take the form of a list or drop-down menu of suggested identifiers for the user to select from. The list of suggestions may be informed by the type of medical procedure (e.g., the type of intervention) in which the measurement to be verified is taken.
The deployed system C-SYS can associate shape clusters with anatomical labels and can display the predicted label on the screen DD. If the user accepts the label, the current shape is added to the labeled-cluster database along with the confirmed label. If the user rejects the label and provides the correct label, the shape is saved along with the corrected label. If, however, the user rejects the label and does not provide the correct label, the shape data is discarded or set aside for retrospective labeling. The embodiment of Fig. 11 and similar schemes may be used with pre-operative CT or real-time fluoroscopic images. In this application case, the module WM may automatically cross-check the predicted label against anatomical labels in pre- or intra-operative images (which may be available through automatic segmentation, etc.). If the predicted label and the image label repeatedly fail to match, the system triggers a proposal to recompute the registration.
However, the user may not always confirm or reject the label. In this case, acceptance or rejection of a label may be based on the system's confidence in the predicted label, which may be quantified by the uncertainty determiner computing an uncertainty value Δ, or by using the similarity metric described above. If the system has high confidence/high similarity for the current category, the label may be added to the database without confirmation from the user. If, however, the confidence/similarity is low, the shape data and the predicted label are not added to the database TD without confirmation from the user. Below a certain confidence threshold, the predicted label is not displayed to the user at all. For example, the shape measurement s may stem from current navigation through a highly tortuous iliac artery. Without imaging data, such anatomical context may not be available to the system. Since the degree of tortuosity of the artery may be unusual, the measured shape s may not be well represented in the labeled-cluster database. The prediction logic PL may nevertheless infer that arm A is located in the iliac artery based on the temporal context κ as described above: the logic may use the order of the clusters (or steps) that arm A/the device ID passed through before the currently ambiguous shape measurement s. In this case, the system's confidence in the label may be low. Thus, if the user does not accept the label predicted by the verification module (e.g., via the pop-up widget W1), the shape and label are not included in the current labeled-cluster database/memory TD. Ambiguous, unassigned measurements s may be added to a separate shape database/memory or an "unreliable" category, and may be labeled manually and retrospectively by an expert.
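The confidence-gated acceptance just described might be sketched as follows; the two thresholds, the stand-in confirmation call, and all names are hypothetical assumptions illustrating the decision flow:

    shape_db, pending = [], []

    def ask_user_to_confirm(label):
        # stand-in for the confirm/reject interaction of Fig. 13
        return True

    def handle_prediction(shape, label, confidence,
                          accept_thr=0.85, display_thr=0.5):
        """Confidence-gated handling of a predicted anatomical label
        (thresholds are hypothetical). High confidence: store without
        asking. Medium: display, store only on user confirmation. Low:
        set the shape aside, unlabeled, for retrospective review."""
        if confidence >= accept_thr:
            shape_db.append((shape, label))
        elif confidence >= display_thr and ask_user_to_confirm(label):
            shape_db.append((shape, label))
        else:
            pending.append(shape)      # label not shown / not confirmed

    handle_prediction("s_123", "iliac artery", confidence=0.42)
    print(len(shape_db), len(pending))   # -> 0 1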
Referring now to Fig. 14, the operation of the previously mentioned rebalancing module RB is illustrated. Especially in the context of dynamic learning, where the training database TD is updated by adding new data, a situation may arise, in particular for cluster-based machine learning algorithms, in which clusters of different sizes have formed by a certain time t1. The different sizes, i.e., the number of representatives in each cluster, are shown schematically in the upper part of Fig. 14 as ellipses of different sizes. For example, the three clusters shown are designated A, B and AB, respectively. It may be that some clusters contain only a small number of representatives, while others contain a large number. This situation may lead to inefficiency, or to loss of accuracy or consistency, in the operation of the prediction logic PL. It is desirable that the clusters each contain a significant number of representatives, or at least to ensure that no cluster falls below a critical number of only very few representatives, as this may destabilize the performance of the prediction logic PL. The rebalancer RB operates to increase the size of small clusters, possibly at the cost of the size of some large clusters. This is shown schematically in the lower part of Fig. 14, where at a later time t2 > t1 the formerly small cluster AB has grown into cluster AB', while the formerly larger cluster(s) A, B have now shrunk into smaller clusters A' and B' due to some leakage into the newly enlarged cluster AB'. Clusters constructed on the fly, in particular, may be unbalanced (i.e., contain very few data samples). To rebalance the clusters and/or accommodate non-clustered data, the shape data may be re-clustered (or the clustering model re-trained). Most shapes should be reassigned to similar clusters. For example, in Fig. 14, most of the data samples belonging to clusters A and B before re-clustering will still belong to clusters A' and B', respectively, after re-clustering. On the other hand, cluster AB', which was built on the fly to capture shapes between cluster A and cluster B, is now larger than cluster AB and should now contain the data samples near the tails of the data distributions of clusters A and B.
The new clusters may be assigned anatomical labels based on the labels associated with the majority (or a percentage) of the data samples in each cluster. If the majority share falls below a certain threshold percentage, the cluster must be reviewed by an expert before an anatomical label is assigned to it. The previous clusters may be retained until the new clusters are verified. If the accuracy of the system decreases after re-clustering, the system may revert to the old clusters. Alternatively, outlier data samples may be identified in each cluster (e.g., based on the similarity metric as explained above) and progressively removed from the clusters until the system performance matches or exceeds that of the previous clusters. Likewise, any new clusters generated from previously non-clustered data that show a large contribution to errors may be removed.
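A minimal sketch of re-clustering with majority-vote label carry-over follows; the clustering algorithm, the majority threshold, and all names are hypothetical choices illustrating the scheme:

    import numpy as np
    from collections import Counter
    from sklearn.cluster import KMeans

    def recluster_with_labels(X, old_labels, n_clusters, majority_thr=0.7):
        """Re-cluster the shape data and carry anatomical labels over to
        the new clusters by majority vote. Clusters whose majority share
        falls below majority_thr (hypothetical value) are returned for
        expert review instead of being auto-labeled."""
        assign = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(X)
        new_labels, review = {}, []
        for c in range(n_clusters):
            members = [old_labels[i] for i in np.where(assign == c)[0]]
            if not members:
                review.append(c)
                continue
            tag, count = Counter(members).most_common(1)[0]
            if count / len(members) >= majority_thr:
                new_labels[c] = tag
            else:
                review.append(c)     # mixed cluster: needs expert review
        return assign, new_labels, review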
The illustration in Fig. 14 is not only conceptual: in an embodiment, a similar graphical rendering of the current cluster situation, displayed on the display device DD or on a different display device DD', is also contemplated herein. In such a rendering, color or graphical/geometric shapes or symbols may represent the sizes of the clusters, as represented in Fig. 14 by ellipses. This provides the user with graphical feedback on the current state of the training dataset S'.
Referring now to Fig. 15, the operation of the annotator AT is shown. In general, annotations and the related annotation tool AT are used herein to facilitate the assignment of anatomical location labels to the respective categories. This allows clusters or classifications to be associated with corresponding anatomical locations, or with phases or types of medical procedures. While in the foregoing the categories predicted by the ML model are mapped to stages or types of medical procedures by a downstream converter CV, the ML model may instead be trained to predict these natively, end-to-end, without the need for the converter CV.
Broadly speaking, the annotator may operate to annotate the shape measurement data in the training database TD and/or newly acquired shape measurements s with tags, e.g., time tags t, or with time-derived contextual data κ, such as any one or more of anatomical annotations κa, metadata annotations κm, and/or system or workflow annotations κw. All three classes of contextual annotations κa, κm, κw are ultimately derived from time. Fig. 15 shows shape data sj, which may change or deform over time, with the time axis plotted horizontally (the spatial plane being parallel to the image plane).
The annotator AT may be integrated into the shape sensing/measurement system SSS and may be implemented as a time stamper assigning time stamps, each encoding the time at which the respective shape s(t) was measured by arm A. The annotator can be used in deployment or in training to annotate/tag instances of new shape measurement data or training data.
Turning now first to the anatomical annotations/context κa in more detail: as the procedure begins and proceeds, annotations may be made using the anatomical context traversed, or the temporal sequence of events encountered, as mentioned above in connection with the example cluster/category sequence a)-f) of Fig. 5. For example, assume that from time t0 to t1 arm A enters the vascular system through the introducer. Such discrete time intervals may be used to label and annotate clusters of 3D shape data s, time data, and 3D shape + time data, within one procedure and across multiple procedure datasets.
The evolution of shape classes based on intra- and inter-procedural anatomical context, where the inherent differences within clusters can be associated with temporal features, is discussed in further detail with reference to Fig. 16. As for the metadata annotations κm, intra-operative EMR records may be used to tag metadata associated with corresponding steps of the procedure, such as time clusters and shape data clusters. This allows critical steps during the procedure to be identified or annotated, and helps identify adverse events. Data from intra-operative telemetry (systems and devices) may be annotated with anatomical context.
Metadata annotations κm allow the development of algorithms that can provide context for intra-operative steps in patient management. For example, in procedures targeting the heart (e.g., mitral valve repair), it may be advisable to slow the heart rate/respiratory rate if intra-cardiac navigation proves challenging in some patient populations.
Metadata annotations κm may further allow specific clusters to be associated with known complications and other adverse events. For example, if the catheterization of a particular vessel is complex, the fact that the currently predicted shape measurement relates to that vessel may trigger a message alerting the user that the upcoming step may be more complex or associated with a higher likelihood of an adverse event. This in turn may trigger a suggestion drawing the user's attention, for example suggesting that the user slow down tool operation or, if the current user is identified as a novice, that a more experienced user take over, as explained before.
As for the system or workflow annotations κw, this involves annotating intra-operative telemetry (system and device) data with time-derived anatomical context. This may facilitate predictive optimization of system parameters based on how events develop during data clustering/classification. Furthermore, inter-procedural time-based event scenarios may facilitate the creation of presets based on patient populations.
For example, if a procedural step proves to be complex, imaging geometry settings, such as C-arm settings, may be provided based on the clusters. The current C-arm settings may be based on the C-arm settings (or an average or median of such settings) associated with the particular cluster/category currently predicted. A single category may also be associated with different C-arm settings. For example, different such settings can be associated with different prediction times, such as the temporal order of the predictions for that category. In more detail, the settings associated with an earlier prediction of a category may differ from the settings associated with a later prediction of the same category. The system may suggest these settings as the categories are predicted over time, reflecting the evolution of the categories.
The system C-SYS may provide user instructions to prepare for an upcoming class prediction at the appropriate time, given the current cluster prediction results. This may streamline the procedure. For example, if the procedure remains in the current category for 6 minutes on average and the shape evolution proceeds at the average rate, an alarm may be displayed at minute 4 to take a specific action. Such actions may include preparing tools or imaging devices (position, dose, etc.) that may be needed at the upcoming location, dimming the lights in the OR, etc. If, however, the shape evolves through the current class faster than average, the alarm may be displayed earlier (e.g., 2 minutes before the cluster is expected to end).
Contextual data κ associated with other patients or systems may be used to create markers or annotations that further help make the shape classification by the logic PL more robust.
In general, the prediction logic PL may take into account markers generated by the devices Dj (other than arm A) when computing a class prediction. For example, a certain device Dj, such as a contrast agent injector, generates a data marker which tags the shape data s with a "contrast-ON" marker. The prediction logic processes the tagged shape data. A shape may be ambiguous and could belong to two different categories (A or B). With the "contrast-ON" marker, however, cluster B is assigned, since a contrast injection would not typically occur at the point in the procedure at which, based on the a priori known temporal order of the locations LAj to be traversed, cluster A would be predicted.
Other types of markers that may be used to disambiguate the shape may be any one or more of the following: X-ray on/off, CBCT on/off, amount of contrast injected, amount of drug administered to the patient, heart rate of the patient, size of the shape, body size of the patient, etc. In general, there may be correlations between these markers, which may help to estimate markers that are missing during use of the system. For example, the size of the shape correlates with the body size of the patient. Thus, when new data is assigned a category based on the shape of the device and the size of the shape, the average patient size associated with that cluster may be inferred as the patient's size. The shape s can be transformed by a transformation stage TS into a new representation, and it is the nature of this representation that allows correlation with the patient's body size. Determining patient size is useful for many applications (e.g., adjusting keV settings for dual-energy X-ray imaging, the amount of contrast agent to be injected, etc.).
The data markers generated by devices Dj may also be useful after prediction. For example, assume that tagged data has been processed by the logic PL to predict class A, and that the controller C-SYS would normally initiate a specific control operation in response to predicting class A. In this case, however, the class-A result together with additional information, e.g., about how long the device ID has remained in class A, may generate a "complex" or "novice user" tag. If one of these flags is raised, the controller C-SYS may perform a different control operation. For example, if a "complex" flag is raised, the prediction logic PL may instruct the system interface CIF to perform one or more of the following: rotate the C-arm to a new angle for a better anatomical view; prime the contrast injector with a new dose, as the user will likely need to acquire a contrast image; or provide feedback to the user to select a different device. For the "novice user" tag, the system C-SYS may suggest an imaging view or protocol different from that typically used by "advanced users" in the given situation.
A "complex" mark or other mark may be used to create a new cluster. For example, if a "complex" tag is assigned, and based on the shape measurement s and, if possible, from device D j A new class "class a + complex" may be defined and then fed back into the training system TSS. Thus, existing categories may be refined to form new categories. When predicting a new category "cluster a+complex" in a new operation, complex flags can be found faster, thereby finding the type of cluster, and appropriate control operations can be performed faster.
The "complex" indicia may be assigned based on any one or more of the following: the amount of time spent in a cluster; contrast agent amount used during a specific time period. Other conditions may be applicable to other devices.
A contrast agent injection causes a change in the axial strain captured in the shape measurements. The logic PL may detect the contrast injection based on this change in the measurement characteristics. Thus, when and for how long contrast agent was injected can be derived from the measurement data. Alternatively or in addition, the contrast injector itself may provide information about the amount of contrast to the logic PL in the form of a data tag, as described above.
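A minimal sketch of such a detector follows, flagging samples where the axial strain deviates sharply from its recent baseline; the window length and z-score threshold are hypothetical and would need calibration against the actual shape sensing system:

    import numpy as np

    def detect_contrast_injection(axial_strain, window=20, z_thr=4.0):
        """Flag samples whose axial strain deviates from the mean of the
        preceding `window` samples by more than z_thr standard deviations
        (hypothetical thresholds); consecutive flags indicate when and
        for how long an injection occurred."""
        strain = np.asarray(axial_strain, dtype=float)
        flags = np.zeros(strain.shape, dtype=bool)
        for i in range(window, len(strain)):
            base = strain[i - window:i]
            sd = base.std() or 1e-9          # avoid division by zero
            flags[i] = abs(strain[i] - base.mean()) > z_thr * sd
        return flags

    signal = np.r_[np.zeros(30), np.full(10, 5.0), np.zeros(30)]
    print(detect_contrast_injection(signal).nonzero()[0])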
Another cue that may trigger complex markers is the number of X-ray images or CBCTs acquired.
Repeated movements may also warrant giving such a complex marker. Repeated movement of arm A may be defined by the amount of time spent in a given category, or by the number/frequency of back-and-forth or in-and-out movements of arm A at a given anatomical location. Such unstable motion behavior will be reflected in a correspondingly oscillating sequence of class predictions.
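An oscillating class-prediction sequence might be detected as sketched below; the window size and trigger count are hypothetical tuning parameters:

    from collections import deque

    class OscillationDetector:
        """Count class transitions (e.g., A->B->A->B) within a sliding
        window of recent predictions; frequent flips may warrant the
        'complex' marker. Window size and trigger count are
        hypothetical."""
        def __init__(self, window=50, trigger=6):
            self.history = deque(maxlen=window)
            self.trigger = trigger

        def update(self, predicted_class):
            self.history.append(predicted_class)
            h = list(self.history)
            flips = sum(1 for a, b in zip(h, h[1:]) if a != b)
            return flips >= self.trigger   # True -> raise 'complex' flag

    det = OscillationDetector(window=10, trigger=4)
    print([det.update(c) for c in "ABABABAB"][-1])   # -> True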
Another indication that may trigger the complex marker is the number or frequency of axial strain increases at the end of arm A, as this may indicate that the end of arm A is colliding with, for example, a vessel wall or an implant.
Another indication that may trigger the complex marker is the amount of drug administered to the patient; for example, in the event that deployment of an implant is unsuccessful in a structural heart repair procedure, drugs may be required to slow the heart rate.
Fig. 16 is a dynamic representation of shape data acquired over a period t1-tm, whose changing characteristics represent the aforementioned transition regions between anatomical locations; if the training data initially only includes the anatomical locations of interest LAj, these transition regions may give rise to new, different clusters/categories or classes. For example, at time t1, the shape measurement s1 is shown, taken as the shape sensing arm A is about to enter a new anatomical location. At a later time tn, the measurement sn of arm A now fully captures the anatomical location; at a still later time tm, arm A generates tail data as it continues to advance along its path. The tail data acquired at time tm indicates that arm A is leaving the current anatomical location. Thus, the shape measurements at times t1, tn and tm may be referred to as "entering the category", "within the category", and "leaving the category", respectively. This kind of implicit language may sometimes be used herein as shorthand.
For example, in the context of a cluster-based machine learning algorithm, the shape measurement dynamics through a transition region can be understood as a change of cluster. Conceptually, this can be pictured as a virtual movement of the shape measurement in cluster space: the shape data enters a cluster at t1, resides in the cluster at tn, and leaves the cluster at tm.
The lower part of Fig. 16 indicates the latent representations z_tj (j: 1→m), which can be derived by operating on the shape data s with a suitable transformation stage TS, TS' (a variational autoencoder or other, as explained above).
In particular, for time-stamped shape measurement data s(t), the transformation stage TS, TS' may use a probabilistic method to convert it into a suitable representation. More specifically, in some embodiments, the encoder stage of a variational autoencoder encodes the received shape measurement data into a representation in terms of probability distribution parameters. The probability distribution represents the distribution of shapes that may be encountered, and thus, more generally, also the distribution over clusters or categories.
Using such latent space representations z_tj obtained by a variational autoencoder embodiment of the transformation stage TS, TS' not only allows a compact and useful representation of the initial shape data in terms of probability distributions, but further allows sampling from these distributions in order to generate any number of synthetic shapes. Indeed, such variational-autoencoder-based embodiments are particularly envisaged for the training data generator TDG.
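A minimal sketch of such a variational autoencoder over flattened shape measurements follows (PyTorch; the layer widths, input dimensionality, and latent size are hypothetical). The encoder returns the mean and log-variance of the latent distribution, and synthetic shapes are generated by decoding samples drawn from the latent space:

    import torch
    import torch.nn as nn

    class ShapeVAE(nn.Module):
        """Toy VAE: encoder yields (mu, logvar) of the latent
        distribution; decoder reconstructs a flattened shape (here
        assumed to be 128 3D points) from a latent sample."""
        def __init__(self, n_in=3 * 128, n_latent=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU())
            self.mu = nn.Linear(256, n_latent)
            self.logvar = nn.Linear(256, n_latent)
            self.dec = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_in))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # reparameterization trick: sample z from N(mu, sigma^2)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z), mu, logvar

    def vae_loss(x, recon, mu, logvar):
        # reconstruction error plus KL divergence to the unit Gaussian
        recon_err = ((recon - x) ** 2).sum()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
        return recon_err + kl

    # after training, synthetic shapes are drawn by decoding latent samples
    model = ShapeVAE()
    with torch.no_grad():
        synthetic = model.dec(torch.randn(10, 16))   # 10 synthetic shapes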
The time-varying dynamics of the latent space representation z_tj can be used to distinguish novice users from expert users, and corresponding markers can be assigned to encourage more practice.
Thus, in an embodiment, a variational autoencoder (VAE) of the transformation stage TS' may be used herein to learn a probability distribution over latent representations of shapes, thereby learning an association between the latent representation z of each shape and the timestamp of that shape. For example, a shape recorded before a given cluster is predicted ("shape entering the cluster/category") is likely to differ from a shape measurement recorded after the logic predicts that cluster ("shape leaving the cluster/category") (as shown in the lower part of Fig. 16), and to be associated with a different latent variable z. These shapes can thus be said to be measured when entering a cluster, while within a cluster, and when leaving a cluster, respectively. In Fig. 16, z_t1 shows the latent-space configuration of shapes entering the cluster. The shaded stack shows the range of z values produced by shapes that have entered the cluster. Similarly, z_tm shows this information for shapes about to leave the cluster. The shaded areas show the distribution over the z configurations associated with shapes in a specific time-dependent state. In this way, the configurations of the latent space representation z are defined from data with known timestamps, and new data without a timestamp can be linked not only automatically to the appropriate category, but also to the location where the device currently is. These categories may also be subdivided into sub-categories. For example, if a cluster becomes large enough or shows large enough variation, the cluster may be divided into individual clusters.
The latent-space-based observations described above may be used to provide anatomical context for sub-grouping data based on shape/data statistics in, for example, challenging or tortuous anatomical structures.
It should be understood that the observations herein regarding shape dynamics in the latent space Z (as illustrated in Fig. 16 above) also apply to other ML classifiers, binary or multi-class, and are not unique to clustering algorithms.
The type of device Dj used during the procedure is another source of contextual data κ. Just as a time component can be included during data acquisition, the device type or device characteristics associated with arm A may also be included with the shape data s. If the device shapes differ significantly (e.g., a very flexible device versus a very stiff device) even when navigating through the same anatomy, these shapes may cause the logic PL to predict different categories. The labels associated with these categories may contain both the anatomical context and the device type or some other distinguishing device feature. Yet another source of contextual data is EHR/medication data used during the procedure.
Furthermore, as explained above with respect to time, differences in shape within a cluster may also be attributable to device features. For example, by observing differences in the configurations of the latent representations z generated by different device types/features within the same class, differences that may not be large enough to produce separate/new clusters or classes can still be told apart. New data assigned to a particular category may then also be associated with the device characteristics associated with the z configuration it produces. Furthermore, if new data is generated with a known device type, clusters associated with other device types may be filtered out of the clustering algorithm, so that the new data is assigned only to clusters associated with the appropriate device type. This may reduce clustering errors and errors in predicting which cluster is expected next. The same device-type distinction can also be used for classifier ML algorithms.
Referring now to Fig. 17, the operation of the sanity checker SC is shown. In general, the shape measurements s are generated as the deformable arm A dynamically attempts to conform to the surrounding anatomy, for example as the user or a robot advances arm A through the anatomy (e.g., the vascular system). Arm A is thus typically forced to assume the curvature of the vessel portion through which, and around which, it is being advanced.
However, as shown in Figs. 17A) and 17B), friction effects may occur as arm A is advanced, which may cause arm A to bend. A kink K may occur and result in false shape measurement readings, which in turn may cause the prediction logic to misclassify the current shape measurement. Indeed, such bending events may lead to situations where a correct classification cannot be made at all, the computed classification having a high uncertainty value Δ, with the probability mass not sufficiently concentrated in any one of the categories. The machine learning module MLL may be trained to identify such bending events as a specific, additional non-anatomical category.
Thus, in these embodiments, the classes or clusters not only represent anatomical locations, types of procedure, or stages thereof, but may additionally include one or more categories indicative of a bending event. In some such embodiments, once a bending event is classified, this fact may be flagged in the system by setting a flag φ. An alert message may be sent to the user via the interface CIF, e.g., to their cell phone, tablet, pager, notebook, desktop or other user terminal device. Alternatively or additionally, an alarm light may be lit in the operating room/cath lab, a message displayed on the display device DD, an alarm sounded, or any other suitable sensory output provided to alert the user.
In an embodiment, the sanity checker SC may be configured to attempt to correct false shape readings s distorted by a bending event K. As can be seen in Figs. 17A) and 17B), a tangent interpolation line l (shown dotted) can be fitted to eliminate the kinked portion. As shown in Fig. 17B), the kinked portion is "cut away" and replaced by the tangent interpolation line portion. Through this line, the local measurements of the proximal portion of arm A are connected with the measurements of the distal portion of arm A. The corrected/reconstructed shape measurement reading is then re-submitted by the sanity checker SC to the machine learning module MLM of the logic PL and re-classified to output a better class c. Such a reconstructed shape may be generated from the shape measurement s alone, or based on additional information from a segmented image, e.g., a 3D image (such as a CT image). The shape so reconstructed may then be labeled based on its closest matching anatomical cluster.
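A minimal sketch of kink excision follows, using a discrete bending-angle estimate to locate the kink and a straight interpolated bridge between the proximal and distal anchors (the simplest variant of the tangent-line idea; the angle threshold and all names are hypothetical):

    import numpy as np

    def excise_kink(points, angle_thr=1.5):
        """Cut a kinked stretch out of a 3D shape measurement (an (N, 3)
        array of points along the device) and bridge it with a straight
        interpolation between the last proximal and first distal
        samples. angle_thr (radians) is a hypothetical threshold on the
        angle between consecutive segments."""
        p = np.asarray(points, dtype=float)
        seg = np.diff(p, axis=0)
        seg /= np.linalg.norm(seg, axis=1, keepdims=True)
        # angle at each interior point between adjacent segments
        angles = np.arccos(np.clip((seg[:-1] * seg[1:]).sum(axis=1),
                                   -1.0, 1.0))
        kinked = np.where(angles > angle_thr)[0] + 1
        if kinked.size == 0:
            return p                       # nothing to correct
        a, b = kinked.min() - 1, kinked.max() + 1   # anchors around kink
        bridge = np.linspace(p[a], p[b], b - a + 1)
        return np.vstack([p[:a], bridge, p[b + 1:]])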
To avoid misclassification due to bending, the shape of the device in which bending is detected may be automatically marked in a number of ways.
Such "curved shapes" may be labeled as "non-anatomical" shapes such that they are not saved in the cluster database. The curved shape may be marked as its own cluster. If a shape is detected that is associated with curved arm A, the system may prompt the user to accept the proposed mark or to enter his own mark.
Turning now in more detail to the different ways in which the various devices mentioned before may be controlled via the control interface CIF using the predicted classification c and/or the flags, reference is now made to Fig. 18.
One such controllable device may include a registration module that may run on the same computing unit as the prediction logic or may run on a different computing unit. As described above, the registration module may be configured to predict when to perform (next) registration.
While the proposed system C-SYS may be used to replace imaging entirely, it is preferred herein that the prediction logic PL be used together with imaging, so that the imaging apparatus IA may be used at a lower rate, and timed for situations where imaging is advantageous for visualizing anatomical structures or device features (e.g., when the actual treatment is delivered). Since the classified shape measurements computed by the prediction logic PL already provide a sufficient indication of where arm A is located within the body, and hence where the associated interventional device ID is located, the system C-SYS may result in fewer images being acquired during the intervention.
The small number of images taken during the intervention may still be useful and may be used to provide additional information to the user. In such embodiments, where the prediction logic is used together with the imaging device, it is desirable to occasionally register the measured shape s with an acquired image. To this end, the registration module REG runs a registration algorithm. In general, a registration algorithm establishes correspondences between corresponding points in two data sets (in this case, the shape measurements and a 3D curve identified in the image). Elastic or rigid registration algorithms are contemplated herein in various embodiments.
With continued reference to Fig. 18, a block diagram is shown to illustrate how the registration component REG is controlled by the classification computed by the logic PL. It has been observed that registration is most useful when the shape measurement does not represent a transition but rather indicates that arm A is at an anatomical location of interest. Attempting to register the currently acquired image with shape measurements that merely represent transitions between anatomical locations may waste computational resources. Fig. 18 shows the synchronization between the registration operation and the classification result c. At some initial time t0, arm A captures shape measurement data s(t0), for example in the form of (X, Y, Z) coordinates of a 3D curve, while an X-ray image is acquired at substantially the same time. The shape data and the image are then registered by the registration module REG. The footprint of the device ID can be readily identified by segmentation, for a better-focused registration.
At a later time t > t0, arm A captures a new shape measurement s(t). The prediction logic PL then operates on the measurement s(t) to produce a prediction of the anatomical location in terms of a category c, as described above. The registration logic REG-L then checks whether the current location according to the current category is suitable for registration. For example, the registration logic REG-L may check whether the found category c indicates a transition period or an anatomical location of interest. The time-related contextual data described above may be used for this purpose. If the current class is not suitable for registration, because it happens to represent a transition, a wait loop is entered. Once the next image is acquired, the registration logic REG-L re-checks whether registration is now appropriate. If so, that is, if the measurement now indicates an anatomical location of interest rather than a transition, a synchronization delay loop is triggered and entered to ensure that the next captured image is registered with the current shape measurement.
The delay component exploits the fact that once the measurements indicate an anatomical location, rather than a transition region between such locations, a plateau event occurs, as described earlier in relation to Fig. 2. That is, as arm A moves slowly and incrementally through the anatomical location, many subsequent measurements may still relate to the same anatomical location, so that plateau events can be discerned in the time-versus-shape-measurement plot, as explained in Fig. 2c).
In other words, once it has been determined that the measurement represents an anatomical location of interest, it is a good strategy to wait until the next image has been acquired, and to register that next image with the current shape measurement s, or with the next shape measurement, which likely still indicates the same anatomical location.
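The gating of registration on the predicted category might be sketched as follows; `register` stands in for the registration module REG, and all names are hypothetical:

    def maybe_register(predicted_class, transition_classes,
                       latest_image, shape_measurement, register):
        """Skip registration while the class denotes a transition between
        anatomical locations; otherwise pair the newest image with the
        current shape measurement."""
        if predicted_class in transition_classes:
            return None            # transition: re-check on the next image
        if latest_image is None:
            return None            # no image yet: wait for the next one
        # plateau assumption: the next few measurements still indicate the
        # same anatomical location, so pairing with the newest image is safe
        return register(latest_image, shape_measurement)

    result = maybe_register("renal artery", {"transition"},
                            latest_image="xray_042", shape_measurement="s_t",
                            register=lambda img, s: (img, s))
    print(result)                  # -> ('xray_042', 's_t')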
Other control operations for different devices or systems Dj, based on the categories computed by the prediction logic, will now be described.
In an embodiment, it is arm A or the shape sensing system SSS that is controlled. For example, an automatic data saving routine may be initiated to save certain shape measurements s. The predicted shape class c can be used to control the shape sensing system SSS in a number of possible ways. With a frame rate of about 60 Hz and a procedure that may take several hours, saving all shape measurements s of the entire procedure may consume considerable resources, even though a single frame's shape measurement s is not a large amount of data; it may nevertheless be done if necessary. Instead, it is preferred herein that the shape sensing system SSS trigger the automatic saving of shape measurements when moving from one class to another, or when a significant change in shape within a given class has been detected. The data most worth the memory space is likely that representing the transition regions between categories, as compared to data acquired while the device idles in one place. In addition to automatically saving shape measurements, any other procedure-related data may be saved.
Another contemplated control operation is to control the imagers IA, D2. For example, the image acquisition parameters may be set according to the detected category c.
Any one or more of the image acquisition parameters may be set by the system controller C-SYS. For an over-the-wire device, these parameters include, but are not limited to, frame rate, field of view, imaging depth, pullback speed, X-ray source voltage or amperage, image angle, and the like. For external imaging systems, these parameters include, but are not limited to, collimation, windowing, X-ray dose, and the like. For catheter-based imaging devices such as ICE, IVUS or OCT catheters, the type of vessel in which the imaging device is positioned is very important for setting acquisition parameters that yield high-quality images. For example, the appropriate acquisition parameters for imaging depth or frame rate may be quite different if the class prediction returns the upper aorta versus the renal artery. Here, predefined sets of image acquisition parameters may be assigned to the different categories. When a category prediction result is sent to the control interface CIF, the image acquisition parameters appropriate for that category are retrieved and set on the imaging system IA.
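A minimal sketch of such a per-category parameter lookup follows; the category names, parameter values, and the imaging-system interface are all hypothetical placeholders:

    class ImagingSystem:
        def apply(self, params):          # stand-in for the imager interface
            print("applying", params)

    # Hypothetical mapping from predicted category to IVUS acquisition
    # parameters; values are illustrative only.
    ACQ_PARAMS = {
        "upper_aorta":  {"imaging_depth_mm": 30, "frame_rate_hz": 15},
        "renal_artery": {"imaging_depth_mm": 8,  "frame_rate_hz": 30},
    }

    def on_class_predicted(category, imaging_system):
        params = ACQ_PARAMS.get(category)
        if params is not None:            # unknown categories change nothing
            imaging_system.apply(params)

    on_class_predicted("renal_artery", ImagingSystem())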
Another contemplated control option is to trigger automatic image acquisition. For example, moving from one category to another may trigger the system controller C-SYS to automatically capture new images, or alert the doctor that they have entered a new category and should acquire new images.
IVUS catheters are capable of measuring blood flow, which is important at some stages of a procedure. The blood flow imaging mode may be turned on automatically by monitoring the predicted class, or the sequence of classes and/or class changes that the IVUS catheter has gone through. For example, blood flow should be measured after placement of a stent. To place the stent, however, arm A/the device ID undergoes a specific series of navigation steps. These navigation steps may be recognized from the categories identified throughout the procedure. Once the correct class sequence has been predicted, the system controller C-SYS can automatically turn on blood flow imaging.
A specific shape class, once predicted, may be used to cause the system IA to change its own parameters (e.g., a PID controller or a temperature compensation method) or other parameters.
Instead of or in addition to controlling the imaging device, a therapy delivery device may be controlled by setting therapy delivery parameters. These may include any one or more of power, therapy dwell time, or other parameters.
Another conceivable control operation is to change the mechanical properties of the interventional device ID, D3. Some device IDs offer the option of changing their mechanical properties (e.g., stiffness), such as certain types of colonoscopes. For example, a catheter may be floppy under normal conditions but become more rigid after activation. In this case, the control system C-SYS may set or trigger the mechanical properties of the device in response to the predicted class c. The stiffness may be reduced (set soft) when the device ID is within a specific category or categories, and increased within other categories. This type of control may be used to cross a CTO (chronic total occlusion) or to maintain a rigid rail over which an EVAR or FEVAR implant may be deployed.
Another contemplated control operation is robotic control. Robotic devices D3 are increasingly introduced into interventional procedures. If a robot is used during the procedure, any one or more of the following control options may be enabled for robotic control:
the control interface CIF may alter the maximum and minimum speeds/times, forces or curvatures of the robotic device D3 based on the current category and/or a transition to a new category. For example, if the distal end of the device ID approaches a category corresponding to a tortuous vessel, a lower maximum allowable advancement speed is required.
The control interface CIF may vary the force gain on the haptic user interface according to the predicted category, depending on the type of vessel in which arm A is currently located. For example, if arm A is in a larger vessel and there is less haptic feedback, the sensitivity may be set higher in the robotic system or haptic backbone. In smaller vessels, the sensitivity can then be set to a lower value.
When the device enters a specific category, the control interface CIF may change the user interface layout for robotic control, e.g. the joystick button assignment. In some embodiments, the device may be advanced automatically between categories that serve as waypoints.
The control interface CIF may select an automatic (robotic) navigation control scheme based on the predicted category: e.g., direct steering may be used to turn into a large vessel, whereas a higher-probability method may be used for navigating into a branch with a small diameter, for example repeating an advance-rotate-retract-advance motion until the branch is cannulated.
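The sketch below illustrates, under stated assumptions, how such per-class robot settings might be dispatched; the class names, limit values, and all robot API calls are hypothetical:

```python
# Per-class robot settings (illustrative values only).
ROBOT_LIMITS = {
    "large_vessel":    {"max_speed_mm_s": 5.0, "haptic_gain": 2.0, "scheme": "direct_steer"},
    "tortuous_vessel": {"max_speed_mm_s": 1.0, "haptic_gain": 1.0, "scheme": "advance_rotate_retract"},
    "small_branch":    {"max_speed_mm_s": 0.5, "haptic_gain": 0.5, "scheme": "advance_rotate_retract"},
}

def apply_robot_limits(predicted_class: str, robot) -> None:
    """Push the settings for the predicted class to the robot controller."""
    limits = ROBOT_LIMITS.get(predicted_class)
    if limits is None:
        return  # unknown class: keep current, conservative settings
    robot.set_max_speed(limits["max_speed_mm_s"])       # hypothetical API
    robot.set_haptic_gain(limits["haptic_gain"])        # hypothetical API
    robot.select_navigation_scheme(limits["scheme"])    # hypothetical API
```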
Another control operation contemplated herein is to store data in a database/memory. This can be used for automatic annotation and recording. The user typically annotates and records the procedure after it has ended. The predicted shape class may be used to help automatically annotate or record some aspect of the procedure and automatically save the appropriate data. For example, the stent placement region may be automatically labeled with information about the point in time during the procedure and the anatomical location identified by the category. This may be accomplished by first identifying the anatomical location, then identifying the point in time during the procedure (when in the predicted sequence it occurs, etc.), and further utilizing movement information from the guidewire or other device ID to determine when an over-the-wire device, for example a stent, is being deployed.
In another example, the number of vascular interactions in a region is counted (based on category). The percentage of procedure time spent there may be recorded automatically for reporting purposes.
Another contemplated recording operation is to associate contrast agent use with a certain region, and to predict the likelihood of reaching a contrast limit by the end of the procedure.
In some embodiments, controls related to patient-specific planning are also contemplated. For example, in unusual situations where the patient's anatomy is atypical (e.g., due to trauma or previous surgery), the shape database TD may be adapted to the patient's specifics using pre- and intra-operative 3D vessel images of the patient, as well as the simulation methods mentioned above. For example, the blood vessels are segmented and a Monte Carlo-like simulation of a "virtual" arm A traversing the patient's vasculature is used to generate a potentially large number of measurements s, which are then added to the database TD, making the training data s' more patient-specific. A sketch of such a simulation follows.
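One way such a Monte Carlo-style simulation might be realized is sketched below, assuming the segmented vasculature has already been reduced to a centerline graph; the graph representation, walk length, and sample count are assumptions, not a prescribed method:

```python
import random
import numpy as np

def sample_virtual_path(centerline_graph, start_node, n_steps=200):
    """Random walk over a centerline graph: dict node -> list of (neighbor, xyz)."""
    node, points = start_node, []
    for _ in range(n_steps):
        neighbors = centerline_graph.get(node, [])
        if not neighbors:
            break  # reached a terminal vessel segment
        node, xyz = random.choice(neighbors)
        points.append(xyz)
    return np.asarray(points)  # (N, 3) synthetic shape measurement s

# Many virtual traversals yield patient-specific training shapes:
# training_shapes = [sample_virtual_path(graph, root_node) for _ in range(10_000)]
```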
Another contemplated control operation is to provide user feedback. Ideally, the user is aware of many factors during the procedure and changes methods or protocols based on the information they are receiving. The predicted shape class may be utilized to provide such user feedback during the procedure. For example, based on the predicted category, a user message is issued indicating when the user should be careful in the current scenario, or when more freedom/aggressiveness is possible. More specifically, if the interventional device ID is at an anatomical location, or at a point in time during the procedure, where fine manipulation of the device is critical to safety or accuracy, a slow-down signal may be provided to the user. Conversely, the user may increase the speed at which the device is operated when in a low-risk area.
As another example, based on the predicted category, the system can detect whether the user has placed the device on the wrong side (i.e., "wrong side surgery").
Anatomical anomalies/outliers may be detected. A portion of the population has anomalies that may not be known in advance, making the procedure more challenging. Detecting such anomalies allows for better guidance to the user.
The system may automatically determine which user is operating it, based on the user's workflow and on the categories being predicted and their order/sequence. Personalization parameters may then be set on the system. Based on how long the user spends in different areas (according to the predicted categories), this also allows a better understanding of post-operative patient risk or of operator skill. It can be beneficial for surgical training: using the predicted categories, their order, and the time spent in each category, it can be assessed how far a trainee is from more experienced users. Situations can be identified where category overlap is good (no further training is needed) and where category overlap is poor (the trainee needs additional time or training).
Another control operation contemplated is control related to patient monitoring. For example, based on the predicted category, an alarm on the blood pressure monitor may be adjusted to be more or less sensitive based on the device location. For example, if the catheter ID is found to be located in the heart, the sensitivity of the alarm to blood pressure changes may be reduced, otherwise the alarm may be activated too frequently. The predicted category may also be used to determine when to make baseline blood pressure measurements.
Another control operation contemplated is to control a clinical workflow. Various examples of how the shape class may be used to improve clinical workflow are described below. One aspect is a surgical workflow. For example, given a target that the catheter must reach and a predicted sequence of categories, a next series of one or more categories is predicted and the user is notified accordingly, for example by displaying a flow chart showing the user what is ahead in the process of reaching the target. For example, if a specified goal cannot be reached without backtracking, an alert may be issued to the user by a signal or message.
In another example, an alert is issued to the user once a given point in the procedure has been reached. From the categories already covered, the user can thus estimate how much time remains, knows which stage the procedure is in and, optionally and automatically, how much time was spent on each category. The next patient may then be better prepared, and the medical team can be alerted so that time is allotted more effectively.
Most of the above examples utilize class information based on shape measurement s. However, categories may also be formed based on device speed, or time at a location, or any other reasonable parameter.
As mentioned, the uncertainty Δ of a category may be calculated. Shapes that do not fall into a particular category, or that have a high degree of uncertainty relative to a category, may be flagged and may result in different settings being selected on the shape sensing system SSS, the imaging system, the device, etc. Two possible examples of alternative settings applied to the device or shape sensing system SSS are: reducing the power of an atherectomy device when the category, and hence the anatomical location, is uncertain; and, in addition or instead, triggering a diagnostic check of the shape sensing system SSS to ensure that the device has not failed.
Any of the above-described class-dependent control options may be used alone, in any combination or sub-combination, including using all of the control options.
Reference is now made to the flowcharts in fig. 19 to 21. The flow chart shows the steps of a method that may be implemented by the control system C-SYS, the training system TSS and the training data generator TGS described above. However, it should be understood that the methods described below need not be tied to the architecture described above, and may also be understood as their own teachings.
Turning first to fig. 19, a flow chart of a computer-implemented method for controlling the operation of one or more devices/systems in a medical procedure to facilitate the procedure is shown.
Medical procedures are performed with respect to a patient. Medical procedures may include interventions, such as vascular interventions, in which interventional instruments, tools, instruments or devices are at least partially introduced into a patient.
It is contemplated herein that shape sensing is used in conjunction with the introduced device, or alone. The shape sensing system may comprise a deformable arm a as described above. Arm a is at least partially introduced into the patient and deformed by physical interaction with the anatomy surrounding the introduced portion of arm a. Arm a may be coupled with the interventional device ID so both are expected to deform in the same way. In response to such deformation or physical interaction of the arm a, the shape sensing system generates shape measurement data s during intra-operative use. It has been found herein that the shape measurement data s is associated with the shape of the anatomical environment into which the device is introduced, and thus may provide an indication of the anatomical position in which the device and/or arm a is currently located. Shape measurements are also relevant to the type of procedure or stage thereof, as the device is expected to access some anatomy at some stage of the procedure.
Preferably, machine learning is used herein to process the shape measurements to predict the anatomical location for given shape measurement data. The type of procedure, or its stage, may also be predicted. In the following, however, we will focus mainly on the prediction of anatomical location, it being understood that in embodiments the prediction of the procedure type and its stage is also envisaged. The prediction may be updated in real time, preferably as arm A moves within the patient towards the target. The prediction result takes the form of a category into which the shape measurement is classified. The category may be a class, as produced by a machine learning classifier, or a cluster, as in cluster-based machine learning or other modeling approaches. The categories are associated with corresponding anatomical locations, procedure types, or stages of interest. Thus, the computed category indicates the anatomical location of interest at which arm A is currently located, or the type or stage of the procedure currently in progress.
The predicted categories, or indications derivable therefrom, may be used to control one or more devices/systems used in a medical procedure.
In more detail, at step S1910, shape measurement data s acquired by a shape sensing system of the type described above in fig. 2 is received. Any type of shape sensing is contemplated herein. The shape sensing system may acquire shape measurements without the use of a contrast agent. Moreover, the shape sensing system acquires shape measurements without any ionizing or other radiation that could harm the patient and/or the staff operating the device ID or arm A. No image of any type (optical or otherwise), such as may be produced by exposing patient tissue to radiation, is required to acquire the shape measurement data. It will be appreciated that at least a portion of arm A of the shape sensing system is located within the patient, and that different shape measurements are generated in response to the deformation of arm A as it moves, advances, or undergoes any posture change effected by the user and/or by a robot that may perform or assist in the intervention.
As shown in fig. 5, the shape measurement data may be visualized as a 3D curve of spatial (X, Y, Z) coordinates, but may instead be represented by other quantities, such as stress, strain or curvature values/coordinates, preferably 3D. Shape measurements in 2D are also contemplated herein.
At step S1910, a stream of such shape measurements at different points in time may be received, or a single shape measurement may be received and processed.
At optional step S1920, the raw shape measurement data is subjected to a data preparation algorithm that filters, normalizes, or otherwise pre-processes the shape measurements to facilitate efficient downstream processing. The shape measurements may be reconstructed or in a raw format. A calibration matrix may be applied to the shape measurements, or uncalibrated shape data may be provided.
At optional step S1930, the raw shape measurement data s, or such data prepared at optional step S1920, is transformed into a more suitable representation to facilitate downstream processing, in particular to reduce processing time and/or downstream processing requirements. Specifically, a dimension reduction operation may be performed in this step. In step S1930, the shape measurements may be transformed into a lower-dimensional representation using principal component analysis or any other factor analysis method. Other types of transformation, which may or may not involve dimension reduction, are also contemplated herein. For example, if the shape measurement data is sparse, sparsity-reducing algorithms that produce denser representations are contemplated; alternatively, it may be desirable to transform into a sparser representation. Step S1930 may include transforming to the frequency domain by Fourier, wavelet, or other transform processing, in place of, or in addition to, any of the above; preferably, an FFT may be used. Step S1930 differs from the preprocessing at step S1920 in that the transformation generally produces a new, more advantageous representation, e.g., in a lower dimension or in a different space, which is not expected of the pure preprocessing step S1920. If both steps are applied, the order of steps S1920 and S1930 may also be reversed. A PCA-based sketch of this step follows.
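A minimal sketch of the dimension-reduction option using scikit-learn's PCA (a standard implementation); the curve length, component count, and the random stand-in data are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

shapes = np.random.rand(500, 100, 3)   # stand-in for 500 resampled (X, Y, Z) shape curves
X = shapes.reshape(len(shapes), -1)    # flatten each curve into one row: (500, 300)

pca = PCA(n_components=20)             # keep 20 principal components (assumed)
X_reduced = pca.fit_transform(X)       # (500, 20) compact representation

# At deployment, apply the *same* fitted transform to each incoming measurement:
# x_reduced = pca.transform(new_shape.reshape(1, -1))
```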
At step S1940, a machine learning based prediction algorithm is applied to the shape measurement data s (pre-processed and/or transformed, where applicable, according to one or both of steps S1920, S1930) to generate/predict as output the class c indicating the anatomical location within the patient at which arm A of the distributed sensing system is currently located.
Alternatively, the anatomical location indicated by category c may be converted into an indication of the type of medical procedure being performed and/or its stage. Alternatively, the machine learning prediction algorithm is configured to provide such indications of the procedure type or stage end-to-end. Since the order in which shape measurements are taken already holds clues as to what the next shape measurement may indicate, it may be advantageous to input into the prediction algorithm not only the last (current) shape measurement, but also one or more previously acquired measurements (which may be stored in a buffer). Thus, during deployment at step S1940, the input is not a single shape measurement, but a vector or block (s, s1, ..., sn), which includes the latest measurement s and n (n > 0), not necessarily consecutive, previous measurements sj, j = 1, ..., n. This "chunking" process may result in better performance; a buffer sketch is given below.
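A minimal sketch of such a measurement buffer, assuming a fixed block length n; the block size and stacking layout are illustrative:

```python
from collections import deque
from typing import Optional

import numpy as np

class ShapeBuffer:
    """Rolling buffer yielding blocks (s, s1, ..., sn) for the predictor."""

    def __init__(self, n_previous: int = 4):
        self.buf = deque(maxlen=n_previous + 1)

    def push(self, s: np.ndarray) -> None:
        self.buf.append(s)

    def block(self) -> Optional[np.ndarray]:
        # Return the stacked block only once enough history has accumulated.
        if len(self.buf) < self.buf.maxlen:
            return None
        return np.stack(list(self.buf))  # latest measurement is the last row
```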
Category c may be a classification result produced by a machine-learning based classifier model, e.g., a neural network, which is one example of explicit ML modeling. Such a neural network may include a classification layer as its output layer. The classification layer comprises classification bins that represent the different types of anatomical locations of interest expected to be encountered during the intervention. An additional classification bin/class may be provided to indicate erroneous readings caused by bending of the shape sensing arm A. Instead of a neural network, other models may be used, such as SVMs, or other models as described elsewhere herein. A sketch of such a classifier follows.
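A hedged PyTorch sketch of such a classifier: a small fully-connected network whose output layer has one bin per anatomical class plus one extra bin for bent-fiber/erroneous readings. The layer sizes, feature dimension, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

N_FEATURES = 20        # e.g. a PCA-reduced shape representation (assumed)
N_ANATOMY_CLASSES = 8  # anatomical locations of interest (assumed)
N_CLASSES = N_ANATOMY_CLASSES + 1  # +1 "fault" bin for bending artifacts

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),      # classification layer (logits)
)

def predict(x: torch.Tensor):
    """Return the predicted class index and a simple uncertainty proxy."""
    probs = torch.softmax(model(x), dim=-1)
    conf, cls = probs.max(dim=-1)
    return cls, 1.0 - conf         # low confidence -> high uncertainty
```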
Alternatively, the machine learning algorithm used at step S1940 is one of implicit modeling, such as a clustering algorithm, in which the currently received shape measurement s is assigned to one of a predefined plurality of clusters, each cluster representing a different type of anatomical location, medical procedure type, or stage thereof. Whichever type of machine learning algorithm is used, it may be based on training data, with the model adjusted in view of, or derived from, such previous training data. The training data may include previously acquired historical shape measurements and/or synthetically/manually generated shape measurement samples. The historical or artificially generated shape measurements may relate to the same patient undergoing the ongoing medical procedure, or to one or more other patients. In clustering or other implicit-modeling ML schemes, the training data itself forms part of the model, as in the sketch below.
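A minimal nearest-centroid sketch of the implicit-modeling variant, where each cluster is summarized by a centroid descriptor and the distance to the nearest centroid doubles as an uncertainty proxy; the Euclidean metric and the cluster names are assumptions:

```python
import numpy as np

def assign_cluster(s: np.ndarray, centroids: dict[str, np.ndarray]):
    """Assign measurement s to the nearest cluster centroid.

    centroids: mapping cluster name -> centroid descriptor D(c),
    e.g. {"aorta": ..., "renal_artery": ...} built from training data.
    """
    dists = {name: np.linalg.norm(s - c) for name, c in centroids.items()}
    best = min(dists, key=dists.get)
    return best, dists[best]   # (predicted category, uncertainty proxy)
```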
As already mentioned above, the prediction operation S1940 may be performed based on only these shape measurement data. No other data is needed here. However, this is not to say that other data is not used in all embodiments. In some embodiments, the contextual data may be used with shape measurement data. In fact, the context-dependent data may also comprise image data for better robustness. The image data may include data representing anatomical features, locations, stages of surgery, or types of surgery. A portion of the contextually-associated data can include user input that associates shape-sensing data with an anatomical location, feature, surgical stage, or surgical type. The user input may be provided as one or more indicia. Such user information may be generated as described below in fig. 20 or otherwise.
At step S1950, the calculated category c is used to control a medical device. Medical devices are used in surgery. The medical device may be an imaging apparatus, the shape sensing system itself or any of the interventional instrument device IDs mentioned previously. Step S1950 may include issuing an appropriate control signal based on the calculated category (based on the current category or in combination with one or more previously calculated categories).
It is also contemplated herein to control other devices/systems that support the intervention based on the computed class. These controlled systems/devices may include, in lieu of or in addition to the foregoing, any one or more of the following: clinical workflow management systems, registration modules, vital sign measurement devices, and others. A single one of the above devices/systems, any combination of several of them, or all of them, possibly together with further devices, may be so controlled. The predicted categories may be mapped to the desired control instructions using a LUT, associative array, or other mechanism, which may then be used to perform one or more control operations.
For example, the apparatus controlled at step S1950 is an imaging device (IA). The control signal may be used to change the image acquisition settings such as collimation, tube voltage/amperage, or any other setting described above.
Additionally or alternatively, the controlled device may comprise a Display Device (DD). The control signals are used to cause a graphical display to be rendered on the display device. The graphical display may include graphical or textual indications of one or more of the following: a) the predicted result; b) the currently entered shape measurement; c) one or more categories of prior shape measurement data. This helps the user navigate the interventional device/shape sensing arm A.
Additionally or alternatively, the controlled device may include the interventional device ID itself, or any other instrument used in the procedure. The control signal may be configured to adjust the operating mode of the interventional device, such as an energy setting or other settings affecting operation.
Additionally or alternatively, the controlled system may include the shape sensing system. The control signals caused by the predicted categories are operable to adjust the operating mode of the Shape Sensing System (SSS), such as light intensity, sampling rate, etc.
Additionally or alternatively, the controlled system may include a robot configured to facilitate at least a portion of the procedure. The control signals are used to control, for example, the speed of an end effector of the robot, e.g. a robot operating the interventional device.
Additionally or alternatively, the controlled system may include a workflow management system. The control signals are used to control the operation of the workflow management system, e.g. to schedule subsequent interventions, appointments of treatment or imaging sessions, etc.
Additionally or alternatively, the controlled system may include a database or memory system. For example, the control signal is used to control whether the currently entered shape measurement data is stored.
At optional step S1940A, an uncertainty value is calculated in relation to the predicted result C.
Broadly speaking, the uncertainty value measures how well the measurement s newly received at step S1910 matches or corresponds to features of the previous (training) data. These features may include the set of current categories (classes or clusters), if any, predefined for such data in an earlier training phase prior to the current deployment phase of the ML module used in step S1940.
This uncertainty value may be used at step S1940B to trigger further processing steps S1940C-F, depending on its magnitude. Step S1940B may thus be implemented as decision logic. In an embodiment, a threshold is used to trigger one or more of the following steps: if the uncertainty value is within the critical threshold Δ0, no step is triggered; if the value violates the critical threshold Δ0, any of the following processing steps S1940C-F may be triggered. Alternatively, the calculated uncertainty value is used only for recording purposes, or the Δ value is merely displayed to the user on a display device for information. Uncertainty values therefore need not be embedded in the decision logic. In general, steps S1940B-F may coordinate interactions with the training system to trigger retraining of the model and/or to store new training data, so as to build a larger training data set of shape measurements, tagged where possible with the previously mentioned context-dependent data. A sketch of this decision logic follows.
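A minimal sketch of the step-S1940B decision logic, assuming the follow-up steps are supplied as callbacks; the threshold value and the handler names are illustrative assumptions:

```python
DELTA_0 = 0.3   # critical uncertainty threshold Δ0 (assumed value)

def on_prediction(s, predicted_class, uncertainty, handlers):
    """Dispatch follow-up steps S1940C-F when the uncertainty violates Δ0."""
    if uncertainty <= DELTA_0:
        return predicted_class                  # confident: no extra step triggered
    # Threshold violated: trigger any of the contemplated processing steps.
    handlers["add_to_training_set"](s)          # step S1940C
    handlers["request_annotation"](s)           # step S1940D
    handlers["maybe_retrain"]()                 # step S1940E
    handlers["check_for_fiber_fault"](s)        # step S1940F
    return None
```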
It should further be noted that the following additional processing steps S1940C-F may be triggered by other events, for example by receiving a user request. For example, in response to a user decision to trigger any of the following processing steps by using an input interface (such as a GUI or other), the control operation at step S1950 may include displaying a visual representation of the current measurement S in a 3D or 2D curve as described above. The verification module previously described in fig. 13 is one embodiment by which any of the following processing steps S1940C-F may be triggered. In addition to the magnitude of the uncertainty value, automatic triggering based on other conditions is also contemplated.
Turning now to the process steps S1940C-F in more detail, some of these steps are largely implementations of the dynamic learning paradigm mentioned previously.
For example, in one embodiment, at step S1940C, a higher uncertainty value Δ (or equivalently a low confidence value) of the predicted result C may be used as an indication that the current shape measurement is related to a class of heretofore unknown anatomical locations.
In one embodiment of step S1940C, the current shape measurement S may be added to the current existing training data set to expand and update the training data set. A new class may be developed for the current shape measurement s with a higher uncertainty value (as opposed to the current class or classes defined based on the most recently completed learning phase).
For example, a higher uncertainty value may indicate that the measurement s relates to a transition region Ti, in which arm A is considered to be currently located between two known anatomical locations ALj and ALj+k (k ≥ 1), the indices referring to the order in which arm A is expected to traverse the anatomical locations in the given procedure. If there is only one known anatomical location, or if the anatomical location is the first or last such location on the path of arm A, the transition region may relate to a location of arm A before it reaches the known single or first anatomical location, or to a location beyond the single or last such location through which arm A passes on its path. As used herein, a "known anatomical location" is one for which a category has previously been defined during the training phase.
If the ML setup used is one of clustering, the current measurement is recorded as a representative of the new class. If the ML setup is one of classification and the prediction algorithm is based on a classifier model, the current measurement with higher uncertainty is labeled with a new label. Once a sufficient number of shape measurements for the new class have been recorded, the model may be retrained. A clustering-style sketch follows.
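A minimal sketch of opening a new class in the clustering setting, assuming centroid distance is compared against the threshold Δ0; the distance metric, threshold value, and naming scheme are illustrative (the placeholder label follows the "NEWCAT_k" convention described at step S1940D below):

```python
from itertools import count

import numpy as np

_new_ids = count(1)

def update_clusters(s, clusters, delta0=0.3):
    """clusters: dict mapping class name -> list of member measurements."""
    for name, members in clusters.items():
        if np.linalg.norm(s - np.mean(members, axis=0)) <= delta0:
            members.append(s)                 # fits an existing class
            return name
    new_name = f"NEWCAT_{next(_new_ids)}"     # generic placeholder label
    clusters[new_name] = [s]                  # seed of the newly opened class
    return new_name
```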
Since the order of anatomical locations to be traversed in a given procedure is known, the transition regions of two adjacent anatomical locations are well defined and can be automatically established and included in the label or cluster semantics.
In general, the exact semantics of the category newly opened at step S1940C may be defined by the user. To this end, the method may further comprise an additional step S1940D in which a marker or annotation is received. The user may assign the marker using a suitable user interface (e.g., a GUI). The annotation or marker is assigned to the current shape measurement. The tag value may represent the acquisition time or any other contextual data described above. The tag may indicate the semantics of the newly opened category. The marker may include an identifier of a new anatomical location or transition region, or may otherwise specify the semantics or purpose of the new category. The user may also, at least initially, assign a merely generic placeholder tag (e.g., a string "NEWCAT_k" or the like, where "k" is a counter index). In the current or a subsequent medical procedure, once more representatives have accumulated under the category, the user may revisit the newly opened category. The user is then in a better position to give this (now no longer new) category a more fitting name.
At step S1940E, the ML training algorithm is re-run to re-train the machine learning module used in step S1940A, but now based on the expanded training dataset to which the current measurements have been added at step S1940C.
For example, in a clustering machine learning algorithm, a new cluster may be defined by the newly added shape measurement, which may then be considered in future predictions. That is, in a cluster-based or similar implicit-training ML setup, the training may actually amount to simply storing the new measurement as a new cluster. It may be necessary to initialize a new descriptor D(s), such as a center point, which is trivial if the new measurement is the only representative; however, the descriptor D may need to be recalculated as more representatives are added in the future.
If the modeling is explicit, it may be necessary to re-run the training algorithm by re-adjusting parameters of the model (e.g., weights of the neural network) based on the new training data set (now including new measurements). As mentioned briefly above, it may be useful to trigger this step only when a certain number of new measurements have been added. Joint learning or reinforcement learning may be used. Before model redeployment, one or more tests may need to be run with test data applied to the retrained model. During this time, the user may continue to use the previously trained model.
In optional step S1940E, the sizes of one or more existing categories (e.g., clusters) may be rebalanced. This may involve changing the size of one or more clusters to increase the efficiency of the prediction algorithm.
At step S1940F, a shape measurement with a higher uncertainty value may be related to a bending event of arm A or another failure of the shape sensing system. A new class may be opened for such fault events, and the training algorithm re-run in a new instance of the training phase to train the model to correctly identify such faults in the future. The shape measurement may also be corrected to remove the bending contribution from the shape measurement data, and the corrected shape measurement data then resubmitted for prediction at step S1940.
Thus, it can be seen that the uncertainty value assessment can be used effectively for dynamic learning. The model is updated by deciding to open a new class with new semantics and/or by additional runs of the learning algorithm to take the new class into account, so that the model improves over time. Thus, over time, the model can classify over a wider range of categories than before. The number of categories is accordingly expected to grow dynamically, comprising not only the initial categories for the a priori anatomical locations of interest, but also categories for any one or more of: transition regions between anatomical locations, failure events (e.g., bending), and others.
It should be noted that updating the training data set by adding new data at step S1940C may include adding new synthetically generated training data samples/instances, and the opening of one or more new categories may relate to these newly added training data. The generation of new data is explained in more detail in fig. 20 below. The newly added training data may relate to new anatomical locations that have later proven to be of interest, or for which there are too few training data samples, or none at all. At step S1940D, categories for any other medical procedure may also be added and the model improved.
Referring now to FIG. 20, a flowchart of a computer-implemented method of synthetically generating training data is shown. These data allow for expanding the existing training data set of shape measurements, thereby improving the performance of the system C-SYS. If such existing historical measurement data is lacking in general or for a particular anatomy or patient type, it may be useful to generate the data synthetically rather than to obtain the existing data of the same or other patient generated in a previous procedure from a medical data store.
In one such embodiment of data generation, the method is image-based. It may include displaying a current or previous image of the anatomy of interest of the given patient at step S2010.
At step S2020, a user input is received defining a geometric path drawn on the image.
At step S2030, the geometric path is converted into shape measurement data, for example by calculating a series of local curvature values. Alternatively, stress or strain values can be calculated, given a specification of the material properties of the deformable arm to be used in future interventions. However, the conversion step S2030 is optional, or may be an identity operation, since the shape measurements may be provided directly in the form of spatial coordinates, preferably 3D coordinates (X, Y, Z), as specified by the user in step S2020.
In step S2020, the specification of the path may be provided by setting a discrete number of control points in a portion of the vessel tree represented in the image, or in other portions of the image representing the body part(s)/anatomy of interest. A spline algorithm may be used to pass a curve through the control points to obtain the shape data, as in the sketch below. The underlying image may be acquired from an X-ray imaging apparatus; images from a biplane X-ray imaging apparatus may be used.
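A minimal sketch of this spline-based conversion using SciPy's standard spline routines; the control points and sampling density are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# User-placed control points along a vessel branch (illustrative 3D coordinates).
control_points = np.array([[0, 0, 0], [10, 5, 2], [20, 5, 8], [30, 12, 10]], float)

tck, _ = splprep(control_points.T, s=0, k=3)   # cubic spline through the points
u = np.linspace(0, 1, 100)                     # resample at 100 positions
x, y, z = splev(u, tck)
synthetic_shape = np.column_stack([x, y, z])   # (100, 3) synthetic training sample
```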
The tree structure represented in the image need not relate to the vascular system; it may relate to other lumen systems, for example the biliary tree or intra-pulmonary branching networks. The tree structure (e.g., a vessel tree) may be obtained by segmenting the image and displaying the segmented image at step S2010.
Reference is now made to the flowchart in fig. 21, which illustrates the steps of a computer-implemented method of training a machine learning model for use in the above-described embodiments. The model will be trained to be able to predict from the shape measurements the class representing the location of the anatomical structure from which the shape measurements were taken.
At step S2110, training data of a current training data set is received. The training data includes shape measurements that are generated as per fig. 20 or otherwise, or from early interventions where shape sensing data has been collected and stored.
At step S2120, a training algorithm is applied based on the training data. In an embodiment, parameters of the model are adjusted based on the training data, for example in explicit modeling. In other embodiments, such as in implicit modeling, a data-centric model is used, as in clustering, in which the training data is stored and, optionally, descriptors representing features of the training data clusters are computed, such as a center point describing each cluster. The shape measurement s' of the training data for a given acquisition time may be applied in training on its own, or as a block (s', s'1, ..., s'n), n ≥ 1, of the current shape measurement and one or more earlier measurements, as described above. Since the temporal context 1...n includes cues about the correct category, training may thereby be improved.
At step S2130, the adjusted model is available for deployment.
In step S2140, it is checked whether new training data has been received, and if so, the training phase may be restarted at step S2120 to take into account the newly received training data.
The training algorithm at step S2120 may itself be iterative, the parameters of the model being adjusted for a given training data sample/instance in one or more iteration steps. This is the case, for example, in backpropagation-type algorithms or, more generally, in other gradient-based algorithms. Such iterative training algorithms may be used for models of the neural network type contemplated herein. Instead of instance-by-instance training, a set of training instances may be processed at once and training may proceed group by group; such batch learning is preferred herein.
However, any other training algorithm may be used based on the corresponding model, such as a support vector machine, decision tree, random forest, etc. Any other type of training for classification purposes (i.e., clustering or classification) is also contemplated herein.
The training scheme may also be used for ML implementations of the transformation stage, such as encoder-decoder type transformation stages, in particular of the variational auto-encoder type, or other auto-encoder implementations, among others.
The components of system C-SYS may be implemented as one or more software modules running on one or more general-purpose processing units PU, such as a workstation associated with imager IA, or on a server computer associated with a set of imagers.
Alternatively, some or all of the components of the system C-SYS may be arranged in hardware, such as a suitably programmed microcontroller or microprocessor, such as an FPGA (field programmable gate array), or a hardwired IC chip, an Application Specific Integrated Circuit (ASIC), integrated into the imaging system IA. In another embodiment, either of the systems TS, SYS may be implemented both partially in software and partially in hardware.
Any one or more components of the system C-SYS may be implemented on a single data processing unit PU. Alternatively, some or all components may be implemented on different processing units PU, which may be arranged remotely in a distributed architecture and connected by a suitable communication network, such as in a cloud setting or a client-server setting, etc.
One or more features described herein may be configured or implemented as or with circuitry encoded in a computer-readable medium, and/or a combination thereof. The circuitry may include discrete and/or integrated circuits, system on a chip (SOC) and combinations thereof, machines, computer systems, processors and memories, computer programs.
In a further exemplary embodiment of the invention, a computer program or a computer program element is provided, characterized in that the computer program or the computer program element is adapted to execute the method steps of the method according to one of the preceding embodiments on a suitable system.
Thus, the computer program element may be stored on a computing unit, which may also be part of an embodiment of the invention. The computing unit may be adapted to perform or induce the performance of the steps of the method described above. Furthermore, it may be adapted to operate the components of the above-described apparatus. The computing unit may be adapted to run automatically and/or to execute user instructions. The computer program may be loaded into a working memory of a data processor. In this way, the data processor may perform the method of the present invention.
This exemplary embodiment of the present invention includes both a computer program that uses the present invention from the beginning and a computer program that converts an existing program into a program that uses the present invention by updating.
Furthermore, the computer program element can provide all necessary steps to complete the processes of the exemplary embodiments of the method as described above.
According to another exemplary embodiment of the invention, a computer-readable medium, such as a CD-ROM, is provided, wherein the computer-readable medium has stored thereon a computer program element, which is described by the previous section.
A computer program may be stored and/or distributed on a suitable medium, especially but not necessarily a non-transitory medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems.
However, the computer program may also be provided over a network, such as the world wide web, and may be downloaded into the working memory of a data processor. According to another exemplary embodiment of the invention, a medium for making available for downloading a computer program element is provided, which computer program element is arranged for executing the method according to one of the previously described embodiments of the invention.
It should be noted that embodiments of the present invention are described with reference to different subject matter. In particular, some embodiments are described with reference to method-type claims, while other embodiments are described with reference to device-type claims. However, one skilled in the art will recognize from the foregoing and following description that, unless otherwise indicated, any combination of features relating to different subject matter is also considered to be disclosed by this application, in addition to any combination of features belonging to one type of subject matter. Moreover, all features may be combined to provide synergistic effects that are greater than the simple sum of the features.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in view of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims shall not be construed as limiting the scope.
Claims (17)
1. A system (C-SYS) for controlling the operation of at least one device (D1-6) in a medical procedure that can be performed for a Patient (PAT), comprising:
an input Interface (IN) for receiving input data, the input data comprising input shape measurement data of a deformable elongated shape sensing device (a) located at least partially within the patient;
logic (PL) configured with anatomical-based features to output a result from the input shape measurement data, the result comprising an indication of any one or more of: i) Anatomical location within the patient, ii) stage of the procedure, or iii) type of the procedure; and
-a Control Interface (CI) for issuing control signals for said at least one device (D1-6) based on said output.
2. A system (C-SYS) for controlling the operation of at least one device (D1-6) in a medical procedure that can be performed for a Patient (PAT), comprising:
an input Interface (IN) for receiving input data, the input data comprising input shape measurement data generated by a Shape Sensing System (SSS) comprising a deformable elongated shape sensing device (a) located at least partially within the patient;
Prediction Logic (PL) configured to predict a result based on i) the input shape measurement data and ii) previous shape measurement data, the result comprising an indication of any one or more of: i) Anatomical location within the patient, ii) stage of the procedure, or iii) type of the procedure; and
-a Control Interface (CI) for issuing control signals for the at least one device (D1-6) based on the predicted result.
3. The system of claim 1 or 2, wherein the logic (PL) is configured to categorize the received input shape measurement data into categories associated with the outcome, i.e., i) categories of anatomical locations within the patient, ii) categories of phases of the procedure, or iii) categories of types of the procedure.
4. The system of any one of the preceding claims, wherein, in case the logic (PL) outputs with an uncertainty above a threshold, the system establishes a new class different from the previous class or classes and assigns the input shape measurement data to the new class.
5. The system of any one of the two preceding claims, wherein the system comprises a Rebalancer (RB) configured to rebalance respective sizes of categories among a plurality of such categories.
6. The system of any of the preceding claims, wherein the logic (PL) is configured to output the result based on a machine learning model (M) trained on the previous shape measurement data.
7. The system of claim 6, wherein the training data comprises any one or more of: i) Historical shape measurements collected during previous such medical procedures; ii) user-generated data; iii) Image-based computer-generated data.
8. A system according to claim 6 or 7, wherein the input shape measurements and the results are stored in memory for retraining the model, and the logic and the system are arranged to effect such retraining.
9. The system according to any one of the preceding claims, wherein deformation is derivable by the shape sensing device (A) conforming at least partially to a surrounding anatomical structure, the logic (PL) being further configured to predict whether the received shape measurement is a result of the shape sensing device (A) conforming to the surrounding anatomical structure, and, if not, the logic (PL) being further configured to correct the shape measurement data and to base the result on the corrected shape measurement data.
10. The system of any of the preceding claims, wherein the input data comprises additional data (κ) other than the shape measurement data, the additional data being contextually associated with the shape measurement data, and the result predicted by the Prediction Logic (PL) is further based on such additional data, wherein the contextually associated data (κ) comprises any one or more of: i) Time data, including shape measurement acquisition time, or data derived from such time data; ii) Chronological data of previous shape measurements; iii) Metadata, including patient data; iv) Data relating to, or generated by, any one or more of the devices (D1-6) used or to be used during the procedure; v) One or more previous predictions; vi) Related 2D or 3D image data.
11. The system of claim 10, wherein the contextually relevant data is obtained by operation of an Annotator (AT) operable to tag the input shape measurement data or the previous shape measurement data, the tag indicating the contextually relevant data.
12. The system according to any of the preceding claims, wherein the at least one device (Dj) is or includes any one or more of the following:
i) An imaging device (IA), and the control signal is used to effect a change in the image acquisition setting;
ii) a Display Device (DD), and the control signal is for enabling display of a graphical display on the display device, the graphical display comprising a graphical or textual indication of any one or more of: a) the predicted result, b) the current input shape measurement, c) one or more categories of the previous shape measurement data; d) A virtual image of the reconstructed shape of the shape sensing device (a), and a virtual and/or imaged representation of the anatomical location; e) A virtual and/or imaged representation of a next anatomical location that the shape sensing device (a) may traverse in a navigation path of the shape sensing device (a); f) A representation of a planned navigation path based on the results; g) An indication of a new predicted anatomical location in relation to a new relevant organ entered by the shape sensing device (a);
iii) -the Intervention Device (ID) or an Intervention Device (ID) which can be used for the procedure, and-the control signal is used to adjust an operation mode or parameter of the intervention device;
iv) the shape sensing system, and the control signal is used to adjust an operating mode or parameter of the Shape Sensing System (SSS);
v) a robot configured to drive the shape sensing device (a), and the control signal is used to control the robot;
vi) a workflow management system integrating, using or processing the predicted results with other data, and the control signals are used to control the operation of the workflow management system;
vii) a database or memory system, and said control signal is used to control whether the current input shape measurement data is stored, and
viii) a registration module (REG) configured for registering an Interventional Device (ID) and/or a shape sensing device (a) with an image, and the control signals are used to control whether the Interventional Device (ID) and/or the shape sensing device (a) are registered or re-registered with a given image.
13. The system of any of the preceding claims, wherein the logic (PL) is capable of predicting the result without using the acquired image data of the patient.
14. A system (TDG) for generating training data instances to facilitate predicting a result from shape measurements collected by a shape sensing device located at least partially within a patient, the result comprising an indication of any one or more of: i) Anatomical location in the patient, ii) stage of medical procedure, iii) type of medical procedure.
15. The system of claim 14, wherein the system comprises: a User Interface (UI) configured to allow a user to define a path in a medical image of a patient; and a Converter (CV) capable of converting the path into shape measurements, thereby obtaining one or more instances of parameters or training data for the logic.
16. A computer program element adapted to cause at least one processing unit to perform one of the following methods when executed by the processing unit:
a computer-implemented method for training (S2110-S2130) a machine learning model based on training data, the machine learning model being capable of predicting a result from shape measurements that can be collected by a shape sensing device (a) located at least partially within the patient, the result comprising an indication of any one or more of: i) An anatomical location within the patient; ii) stage of medical surgery; iii) type of medical procedure;
a computer-implemented method for generating (S2010-S2030) a training data instance to facilitate predicting a result from shape measurements collected by a shape sensing device (a) located at least partially within the patient, the result comprising an indication of any one or more of: i) An anatomical location within the patient; ii) stage of medical surgery; iii) The type of medical procedure;
A method for controlling operation of at least one device (D1-6) in a medical procedure capable of being performed for a Patient (PAT), comprising:
receiving (S1910) input data comprising input shape measurement data generated by a Shape Sensing System (SSS) comprising a deformable shape sensing device (a) located at least partially within the patient;
predicting (S1940) a result based on i) the input shape measurement data and ii) previous shape measurement data, the result comprising an indication of any one or more of: i) Anatomical location within the patient, ii) stage of the procedure, or iii) type of the procedure; and
-issuing a control signal (S1950) for said at least one device (D1-6) based on said predicted result;
a method for controlling operation of at least one device (D1-6) in a medical procedure capable of being performed for a Patient (PAT), comprising:
receiving (S1910) input data comprising input shape measurement data of a deformable elongated shape sensing device (a) located at least partially within the patient;
outputting (S1940) a result from a model (PL) configured with anatomical structure based features, the result comprising an indication of any one or more of: i) Anatomical location within the patient, ii) stage of the procedure, or iii) type of the procedure; and
-issuing (S1950) a control signal for said at least one device (D1-6) based on said predicted result.
17. At least one computer-readable medium, on which a computer program element according to claim 16 is stored.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US202163219507P | 2021-07-08 | 2021-07-08 |
US63/219,507 | 2021-07-08 | |
EP21199460.3 | 2021-09-28 | |
PCT/EP2022/068361 (WO2023280732A1) | 2021-07-08 | 2022-07-04 | System and device control using shape clustering
Publications (1)
Publication Number | Publication Date
---|---
CN117615705A | 2024-02-27
Family
ID=89958413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202280048502.4A (CN117615705A, pending) | System and device control using shape clustering | | 2022-07-04
Country Status (1)
Country | Link
---|---
CN (1) | CN117615705A (en)
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |