WO2022220779A1 - A device and method used for scoring the conformity of ultrasonography image marked/scored with deep learning, machine learning, artificial intelligence techniques - Google Patents
- Publication number: WO2022220779A1 (application PCT/TR2022/050324)
- Authority: WIPO (PCT)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
- A61B8/085—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
Abstract
The invention relates to a system used for marking/labeling the image created by anatomical ultrasonographic and/or elastographic methods with deep learning, machine learning, and artificial intelligence techniques, and for scoring the created image, and to a method for the operation of this system. More specifically, the system and method of the invention aim to create the ultrasonography image in situations where anatomical structures are to be localized by the ultrasonography method, such as the peripheral nerve block procedure, and to score the created image.
Description
A SYSTEM AND METHOD USED FOR SCORING THE CONFORMITY OF ANATOMICAL ULTRASONOGRAPHY IMAGE MARKED/SCORED WITH DEEP LEARNING, MACHINE LEARNING, ARTIFICIAL INTELLIGENCE TECHNIQUES
Technical Field
The invention relates to a system used for marking/labeling the image created by anatomical ultrasonographic and/or elastographic methods with deep learning, machine learning, and artificial intelligence techniques and for scoring the created image, and a method for the operation of this system.
More specifically, the system and method of the invention aim to score how closely a snapshot matches a reference layout, namely the scene layout that the anatomical structures are expected to form on the ultrasonography image.
The State of The Art
The use of ultrasonic image acquisition devices for anatomical imaging is increasing steadily, and so are the application areas of devices that operate ultrasonically and/or can acquire anatomical images. One of the most important of these areas is regional anesthesia, and more specifically the ultrasonography-assisted peripheral nerve block procedure.
Today, regional anesthesia (RA) and analgesia techniques are considered more reliable than general anesthesia due to their many advantages. The significant advantages of regional anesthesia over general anesthesia are that tracheal intubation is not necessary, airway reflexes are preserved, analgesic and antiemetic consumption is lower, hemodynamic stability is maintained, no additional time is required for awakening and extubation, the duration of stay in the recovery room, post-anesthesia care unit (PACU) and hospital is shorter, sufficient intraoperative muscle relaxation is provided, intraoperative and postoperative analgesia is provided, blood flow in the extremity increases with sympathetic blockade, and it contributes positively to postoperative wound healing.
The main reasons it is not preferred are the time-consuming application of regional anesthesia, the late onset of its effect, and the need for practitioner experience.
However, the development of ultrasonography (USG), one of the essential imaging methods of modern medicine, and in particular the increasing use of higher-frequency ultrasonography probes in peripheral nerve blocks, which allow clearer visualization of superficial tissues, has made it possible to apply safer, faster and more comfortable blocks.
Direct observation of the spread of local anesthetic around the nerve with the use of ultrasonography increases the success of the block and shortens the block application time. One of the most important advantages of ultrasonography in the application of regional block is that it reduces the local anesthetic dose, the risk of local anesthetic toxicity and complications.
One of the most important technical problems during this procedure is determining the position of the nerve so that the application can be performed correctly. Today, one of the most widely used tools for this purpose is ultrasonography.
Although the use of USG provides significant advantages for the regional anesthesia process, the success of the application still depends on the knowledge and skills of the practitioner.
Like the ultrasonography-assisted peripheral nerve block procedure, different imaging techniques such as computed tomography (CT), magnetic resonance imaging (MR), elastography, etc., are used today in diagnosis, treatment and other medical procedures performed with medical imaging.
With the developing technology, it is possible to automatically detect and mark anatomical structures by using deep-learning/machine learning/artificial intelligence techniques in order to create these mentioned images.
As a result, even users without extensive experience can use such systems easily.
Patent applications numbered EP3738100, WO2019241659, WO2018187005 and US 10504227 describe methods and devices that can be used for this purpose and allow the creation of anatomical images with deep learning, machine learning and/or artificial intelligence techniques.
Patent application numbered EP3482346 describes a system and method for automatic detection, localization and semantic segmentation of at least one anatomical object in a parameter space of an image created by an imaging system.
It is understood that the said method aims to produce the image via the imaging system and to provide the image of the anatomical object and the tissue surrounding it to a processor.
It is understood that the said method includes the development and training of a parameter space deep learning network, which includes convolutional neural networks (CNN) to automatically detect the anatomical object and the tissue surrounding the parameter space of the image. The said method also includes automatic localization and segmentation of the anatomical object and peripheral tissue of the parameter space of the image by using additional convolutional neural networks.
In addition, the method includes automatic labeling of the anatomical object defined on the image and the surrounding tissue. In this way, this method also aims to display the labeled image to a user in real time.
By using the method and system described in the patent application numbered EP3482346, the anatomical object intended to be displayed and the tissue surrounding the parameter space of the image are labeled and presented to the user directly through a display unit.
In this way, the user performing the application can perform the necessary application without the need to see and interpret both the anatomical object and the tissue surrounding the parameter space of the image. In case the application is the ultrasonography-assisted peripheral nerve block process used in regional anesthesiology, the user will be able to automatically perform more accurate regional anesthesia (RA) applications by using the provided data.
However, the ultrasonography-assisted peripheral nerve block procedures used in regional anesthesiology require continuous movement of the ultrasound probe over the application area. In this case, the scene displayed to the user performing the application changes constantly.
Even if anatomical structures are detected on the USG image with these techniques, the user is given no measure of the extent to which the detected anatomical structures form the correct image, regardless of whether or not the structures are marked.
Detection and marking of anatomical structures on the USG image with artificial intelligence and similar techniques is, in effect, a prediction made by the model for the live image within each scene. This prediction facilitates the application by informing the user of where she/he is. However, it remains a significant technical barrier that no information is provided on how suitable that point in the live image is for the procedure.
In fact, with existing systems the practitioner can learn where she/he is, but cannot learn how accurate the layout formed by the anatomical structures on the image is with respect to the layout accepted as the reference for the procedure.
The Problems to be Solved by the Invention
The object of the invention is to create a system and method used to score, at any time, the similarity of the ultrasonography image to a reference layout, taking as that reference the layout expected to be formed by the anatomical structures appearing on the ultrasonography image of any part of the human body.
With the use of the system and method of the invention, the user will be able to detect anatomical structures on the ultrasonography image with these techniques, and will also obtain a measure of the extent to which the detected anatomical structures form the correct image, regardless of whether or not the structures are marked.
In this way, the user will be guided by scores that are created with deep learning, machine learning and artificial intelligence techniques and displayed to the user, informing them about the ongoing imaging process and how suitable the points shown in the live image are for the procedure.
According to the preferred embodiment of the invention, the scoring method and system of the invention is used to score the similarity of the image to the expected layout in cases where anatomical structures are desired to be seen in a specific layout on the ultrasonography image, such as in ultrasonography-assisted peripheral nerve block processes used in regional anesthesiology. However, different embodiments of the invention can be used to obtain anatomical ultrasonography images.
More generally, different embodiments of the invention can be used for scoring images created by elastography.
Description of the Figures
Figure 1. View of the layout evaluation graph used to determine the angles between anatomical structures
Description of the Invention
The embodiment of the invention relates to an imaging system that can take an image of at least one anatomical structure and the surrounding tissue. The system comprises: at least one application that includes convolutional neural networks to automatically detect the anatomical object and the tissue surrounding the parameter space of the image, that provides automatic localization and segmentation of the anatomical object and the peripheral tissue by using convolutional neural networks, and that can automatically mark the anatomical object and peripheral tissue in the created image and present it to the user as an image; a processor on which said application can run, allowing more than one operation to be performed; a display unit to present the created image to the user; and an imaging device, which can be one of an ultrasound imaging system (USG) and an elastography system, that produces the image of the anatomical object and the surrounding tissue and transfers this image to the said processor.
In the most basic form of the invention, the said system includes at least one scoring module that scores the extent to which the detected structures can create the correct image for application performance with one or more of the deep learning/machine learning/artificial intelligence techniques by using the anatomical structure importance factor, anatomical structure visibility, and geometric angle with other anatomical structures, regardless of whether the said anatomical structures have been marked before or not, and that can display on the image of the scene formed by the anatomical structures in the display unit.
The invention also relates to a method that can be used for automatic detection, localization and segmentation of at least one anatomical object in a parameter space of an image generated by an imaging system, which may be one of an ultrasound imaging system (USG) and an elastography system. The method includes the process steps of: providing an image of the anatomical object and surrounding tissue to a processor; developing and training a deep learning network comprising one or more convolutional neural networks to automatically detect the anatomical object and the tissue surrounding the parameter space of the image; automatically localizing and segmenting the anatomical object and the surrounding tissue through an additional convolutional neural network; automatically labeling the anatomical object and surrounding tissue in the image; and displaying the labeled image to a user.
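The process steps listed above can be sketched as a minimal pipeline. This is an illustrative sketch only: the `detect_structures` stub stands in for the trained convolutional neural networks of the invention, and the structure names, bounding boxes and coordinates are invented examples, not values from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str                          # name of the anatomical structure
    bbox: Tuple[int, int, int, int]     # (x, y, width, height) localization
    outline: List[Tuple[int, int]]      # coarse segmentation outline

def detect_structures(frame) -> List[Detection]:
    # Stand-in for the CNN detector described in the method; returns a
    # fixed, invented detection purely for illustration.
    return [Detection("nerve", (100, 60, 40, 30), [(100, 60), (140, 90)])]

def label_frame(frame) -> List[str]:
    # Detect, localize/segment, and label the structures in one frame,
    # producing the text labels that would be drawn on the display unit.
    labels = []
    for det in detect_structures(frame):
        x, y, w, h = det.bbox
        labels.append(f"{det.label} @ ({x + w // 2}, {y + h // 2})")
    return labels

print(label_frame(None))  # prints ['nerve @ (120, 75)']
```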
In the most basic form of the invention, the said method can score the extent to which the detected structures can create the correct image for application performance with one or more of the deep learning/machine learning/artificial intelligence techniques by using the anatomical structure importance factor, anatomical structure visibility, and geometric angle with other anatomical structures, regardless of whether the said anatomical structures have been marked before or not, and can display the relevant scores on the image of the scene formed by the anatomical structures in the display unit.
According to Figure 1, where the preferred embodiment of the invention is shown, the angles between the anatomical structures are determined by drawing a line between their centers of gravity; the angle this line makes with the horizontal axis (X axis) is taken as a basis.
Scoring is calculated by the approximation of the positional angles detected by one or more of the deep learning/machine learning/artificial intelligence techniques among the anatomical structures to the specified reference angle values.
The presence or absence of each anatomical structure in the displayed scene is also used as a coefficient in the scoring calculation.
Different embodiments of the invention can operate with other reference points of this type, such as the center of a circle covering the anatomical structure, the diameter of that circle, the center of a bounding quadrangle, the left and right edges of that quadrangle, or the rightmost, lowest, uppermost and leftmost coordinates of the anatomical structure.
Also, according to different embodiments of the invention, evaluation can be made by using the angles that these created reference points make not only with the horizontal axis, but also with other reference axes, such as the vertical axis (Y Axis), diagonal axis, etc.
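The angle-based scoring described above can be sketched as follows. This is a hypothetical illustration: the structure names, importance factors, reference angles and tolerance are invented examples, and since the patent does not give a concrete formula, the linear mapping from angular deviation to score used here is only one plausible reading.

```python
import math
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Structure:
    name: str
    centroid: Optional[Tuple[float, float]]  # None when not visible in the scene
    importance: float                        # importance factor of the structure

def centroid_angle(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    # Angle (degrees) between the line joining two centers of gravity
    # and the horizontal (X) axis, as in Figure 1.
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))

def conformity_score(structures: List[Structure],
                     reference_angles: Dict[Tuple[str, str], float],
                     tolerance: float = 90.0) -> float:
    # Score how closely the detected layout matches the reference layout.
    # Visibility acts as a coefficient: a structure missing from the scene
    # contributes nothing for the pairs it participates in.
    by_name = {s.name: s for s in structures}
    total, weight_sum = 0.0, 0.0
    for (n1, n2), ref in reference_angles.items():
        s1, s2 = by_name[n1], by_name[n2]
        weight = s1.importance * s2.importance
        weight_sum += weight
        if s1.centroid is None or s2.centroid is None:
            continue  # visibility coefficient of 0 for this pair
        deviation = min(abs(centroid_angle(s1.centroid, s2.centroid) - ref),
                        tolerance)
        total += weight * (1.0 - deviation / tolerance)
    return 100.0 * total / weight_sum if weight_sum else 0.0

# Invented example scene: two visible structures, one not visible.
scene = [
    Structure("nerve",  (120.0, 80.0), 1.0),
    Structure("artery", (180.0, 95.0), 0.8),
    Structure("muscle", None,          0.5),
]
refs = {("nerve", "artery"): 10.0, ("nerve", "muscle"): -30.0}
print(round(conformity_score(scene, refs), 1))  # prints 58.8
```

A real implementation would also have to handle angle wrap-around and the alternative reference points (circle center, bounding-quadrangle edges, extreme coordinates) mentioned in the description.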
Claims
1. An ultrasound imaging system that can take an image of at least one anatomical structure and surrounding tissue, that includes convolutional neural networks to automatically detect the anatomical object and the tissue surrounding the parameter space of the image, that comprises at least one application that provides automatic localization and segmentation of the anatomical structures appearing on the image and the peripheral tissue of the parameter space of the image by using convolutional neural networks, that can automatically mark the anatomical structure and peripheral tissue in the created image with color and text labels and present it to the user as an image; a processor on which said application can run, allowing more than one operation to be performed; the display unit to present the created image to the user; that enables the image of the anatomical object and the surrounding tissue to be produced and transfers this image to the said processor, characterized in that it includes at least one scoring module that scores the extent to which the detected anatomical structures can create the correct image for application performance with one or more of the deep learning/machine learning/artificial intelligence techniques by using the importance factor of each anatomical structure, anatomical structure visibility, and geometric angle with other anatomical structures, regardless of whether the said anatomical structures have been marked or not, and that can display on the image of the scene formed by the anatomical structures in the display unit.
2. A method that can be used for automatic detection, localization and segmentation of at least one anatomical object in a parameter space of an image generated by an ultrasound imaging system; including the process steps of providing an image of the anatomical object and surrounding tissue to a processor; developing and training a parameter space deep learning network comprising one or more convolutional neural networks to automatically detect the anatomical object and the surrounding tissue of the parameter space of the image;
automatically positioning and segmenting the anatomical object and the tissue surrounding the parameter space of the image through an additional convolutional neural network; automatically labeling the anatomical object and surrounding tissue in the image; displaying the labeled image to a user, characterized in that it includes the process steps of scoring, with one or more of the deep learning/machine learning/artificial intelligence techniques, the extent to which the detected structures can create the correct image for application performance, using the anatomical structure importance factor, anatomical structure visibility, and the geometric angle with other anatomical structures, regardless of whether said anatomical structures have been marked before or not, and displaying the relevant scores on the scene image created by the anatomical structures on the display unit.
3. A method for scoring the image according to Claim 1, characterized in that a line is drawn between the centers of gravity of the anatomical structures and the angles between the anatomical structures are determined based on the angle this line makes with the horizontal axis.
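As an illustration only, the centers of gravity named in Claim 3 can be computed from binary segmentation masks, and the inter-structure angle taken from the line joining them. The functions below are a hypothetical sketch assuming image coordinates with y increasing downward; no implementation is given in the claims:

```python
import numpy as np

def center_of_gravity(mask):
    """Centroid (x, y) of a binary segmentation mask of one structure."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

def inter_structure_angle(mask_a, mask_b):
    """Angle (degrees, measured from the horizontal axis) of the line
    joining the centroids of two segmented anatomical structures."""
    (xa, ya), (xb, yb) = center_of_gravity(mask_a), center_of_gravity(mask_b)
    return float(np.degrees(np.arctan2(yb - ya, xb - xa)))
```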
4. A method for scoring the image according to Claim 1, characterized in that it includes the process steps of
• calculating a score from how closely the positional angles detected among the anatomical structures by one or more of the deep learning/machine learning/artificial intelligence techniques approximate the specified reference angle values and
• using the presence or absence of each anatomical structure in the displayed scene as a coefficient in the scoring calculation.
5. A method for scoring the image according to Claim 1, characterized in that a line is drawn between reference points of the anatomical structures, such as the center of a circle covering the anatomical structure, the diameter of that circle, the center of a quadrangle, the left and right edges of this quadrangle, or the rightmost, lowest, uppermost and leftmost coordinates of the anatomical structure, and the angles between the anatomical structures are determined based on the angle that this line makes with the horizontal axis.
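For illustration, the alternative reference points listed in Claim 5 could be extracted from a segmentation mask as below. The enclosing circle is approximated here by the bounding box (its center and diagonal), which is an assumption of this sketch rather than anything specified in the claim:

```python
import numpy as np

def reference_points(mask):
    """Candidate reference points of one segmented structure: extreme
    coordinates, bounding-box (quadrangle) edges and center, and an
    enclosing circle approximated from the bounding box."""
    ys, xs = np.nonzero(mask)
    left, right = xs.min(), xs.max()
    top, bottom = ys.min(), ys.max()  # y grows downward in image coords
    box_center = ((left + right) / 2.0, (top + bottom) / 2.0)
    return {
        'leftmost':  (left, ys[xs.argmin()]),
        'rightmost': (right, ys[xs.argmax()]),
        'uppermost': (xs[ys.argmin()], top),
        'lowest':    (xs[ys.argmax()], bottom),
        'left_edge': left,
        'right_edge': right,
        'box_center': box_center,
        'circle_center': box_center,  # circle approximated by the box
        'circle_diameter': float(np.hypot(right - left, bottom - top)),
    }
```

Any pair of these points, taken from two different structures, can serve as the endpoints of the line whose angle with the horizontal axis is evaluated.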
6. A method for scoring the image according to Claims 3 or 4, characterized in that the angles between the anatomical structures are determined based on the vertical axis or other reference axes.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TR2021/006462 TR2021006462A2 (en) | 2021-04-12 | A DEVICE AND METHOD USED FOR SCORING THE CONVENIENCE OF ULTRASONOGRAPHIC IMAGE MARKED/SCORED BY DEEP LEARNING, MACHINE LEARNING, ARTIFICIAL INTELLIGENCE TECHNIQUES | |
TR2021/006462A TR202106462A2 (en) | 2021-04-12 | 2021-04-12 | A DEVICE AND METHOD USED FOR SCORING THE CONVENIENCE OF ULTRASONOGRAPHY IMAGE MARKED/SCORED BY DEEP LEARNING, MACHINE LEARNING, ARTIFICIAL INTELLIGENCE TECHNIQUES |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022220779A1 (en) | 2022-10-20 |
Family
ID=83366389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/TR2022/050324 WO2022220779A1 (en) | 2021-04-12 | 2022-04-12 | A device and method used for scoring the conformity of ultrasonography image marked/scored with deep learning, machine learning, artificial intelligence techniques |
Country Status (2)
Country | Link |
---|---|
TR (1) | TR202106462A2 (en) |
WO (1) | WO2022220779A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3482346A1 (en) * | 2016-07-08 | 2019-05-15 | Avent, Inc. | System and method for automatic detection, localization, and semantic segmentation of anatomical objects |
US20210045716A1 (en) * | 2019-08-13 | 2021-02-18 | GE Precision Healthcare LLC | Method and system for providing interaction with a visual artificial intelligence ultrasound image segmentation module |
WO2021050976A1 (en) * | 2019-09-12 | 2021-03-18 | EchoNous, Inc. | Systems and methods for automated ultrasound image labeling and quality grading |
Also Published As
Publication number | Publication date |
---|---|
TR202106462A2 (en) | 2021-07-26 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22788578 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |