WO2016159726A1 - Device for automatically detecting the location of a lesion from a medical image and associated method - Google Patents

Device for automatically detecting the location of a lesion from a medical image and associated method

Info

Publication number
WO2016159726A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
threshold
threshold value
contour
setting
Prior art date
Application number
PCT/KR2016/003432
Other languages
English (en)
Korean (ko)
Inventor
김황남
손수연
이석규
Original Assignee
고려대학교 산학협력단
Priority date
Filing date
Publication date
Application filed by 고려대학교 산학협력단
Publication of WO2016159726A1 publication Critical patent/WO2016159726A1/fr

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00 Subject matter not provided for in other main groups of this subclass

Definitions

  • the present invention relates to an apparatus and method for automatically detecting the position of a lesion from a medical image, and more particularly, to an apparatus and a method for automatically detecting a lesion position using a brightness difference in an image obtained through an endoscope.
  • images obtained through various medical devices are used to examine lesions inside the body.
  • images include endoscope images, magnetic resonance imaging (MRI), and computed tomography (CT) images.
  • An endoscope is a medical device for directly observing internal organs or body cavities; it is designed to be inserted into the body to observe organs whose lesions cannot be identified without incision.
  • Depending on the target organ, there are esophagoscopes, gastroscopes, duodenoscopes, and the like.
  • Depending on the construction, there are fiberscopes, lens-system endoscopes, gastric cameras, and the like.
  • Magnetic resonance imaging (MRI) places the human body inside a large cylindrical magnet that generates a magnetic field, applies radio-frequency waves to make the hydrogen nuclei in the body resonate, measures the differences between the signals emitted by each tissue, and reconstructs them into an image.
  • Computed tomography (CT) places the human body inside a large circular machine equipped with an X-ray generator and photographs cross-sections of the body.
  • Compared with plain X-ray imaging, computed tomography has the advantage that structures overlap less, so structures and lesions can be seen more clearly; for most organs and diseases, it is the basic examination method used whenever a lesion is suspected and needs to be examined.
  • Computed tomography and magnetic resonance imaging have in common that cross-sectional images are obtained, but they differ in that computed tomography (CT) acquires images using X-rays, whereas magnetic resonance imaging (MRI) acquires images by applying high-frequency waves within a magnetic field.
  • an object of the present invention is to provide an apparatus and method for automatically detecting a lesion location by detecting a tissue suspected to be a lesion by using brightness differences in a medical image, so as to accurately detect even minute changes that are difficult to visually identify.
  • Another object of the present invention is to provide an apparatus and method for automatically detecting the position of a lesion so as to increase the cure rate by diagnosing the lesion at an early stage.
  • A further object of the present invention is to provide an apparatus and method for automatically detecting a lesion location that offers a criterion for determining whether tissue can be diagnosed as abnormal, and can therefore be used as a learning tool for trainees who are not yet familiar with lesion diagnosis.
  • An apparatus for automatically detecting a lesion location according to the present invention includes: an input unit configured to receive an original image collected through a medical device; a threshold setting unit configured to convert the original image into a grayscale image and then set at least one threshold based on the brightness values of the pixels constituting the grayscale image; an image conversion unit configured to binarize the grayscale image into a black and white image based on the at least one threshold; and a contour processor configured to search the binarized image for components suspected of being lesions, extract contours, and display the contours on the original image.
  • the input unit may receive an original image from the endoscope.
  • the threshold setting unit sets an average value of brightness values for all the pixels constituting the grayscale image as a first threshold value, and then adds at least one additional threshold value based on the first threshold value. Can be set.
  • the threshold setting unit may set an average value of brightness values for pixels having a brightness value greater than the first threshold value among pixels constituting the grayscale image as a second threshold value.
  • The threshold setting unit may set, as a third threshold value, an average value of the brightness values of pixels whose brightness value is smaller than the first threshold value among the pixels constituting the grayscale image.
  • the image conversion unit binarizes the grayscale image into a black and white image for each of the at least one threshold, and as a result, may generate the same number of black and white images as the number of the thresholds.
  • the contour processing unit extracts the contour for each of the black and white images, and then displays the contours in one original image.
  • the contour processor may store in advance a size range of the contour to be displayed on the original image, and display only the contour within the size range.
  • the setting of the threshold value may include: setting an average value of brightness values for all pixels constituting the grayscale image as a first threshold value; And setting at least one additional threshold value based on the first threshold value.
  • the setting of the additional threshold value may set an average value of brightness values for pixels whose brightness value is greater than the first threshold value among pixels constituting the grayscale image as a second threshold value.
  • The setting of the additional threshold may further include setting, as a third threshold value, an average value of the brightness values of pixels whose brightness value is smaller than the first threshold value among the pixels constituting the grayscale image.
  • said image converting step may binarize said grayscale image into a black and white image for each of said at least one threshold, resulting in the same number of black and white images as said number of thresholds.
  • said contour processing step comprises extracting said contour for each of said monochrome images; And displaying the extracted contour lines in the one original image.
  • the contour processing step may display only the contour line within the size range of the preset contour line.
  • the present invention has an advantage of detecting the boundary lines for the tissues suspected to be lesions by using the brightness difference in the image during the endoscopy, thereby accurately detecting minute changes that are difficult to be examined by the naked eye.
  • The present invention also has the advantage of automatically detecting the position of a lesion while reducing binarization errors by using multiple threshold values, thereby obtaining accurate results.
  • The present invention can increase the cure rate of a disease by diagnosing the lesion at an early stage, and by providing a criterion for whether tissue can be diagnosed as abnormal, it can be used as a learning tool for trainees who are not yet familiar with lesion diagnosis.
  • The present invention can also be applied to a capsule endoscope controlled from outside the body during telemedicine, to guide the examining doctor to tissue considered problematic or to move the capsule to a suspected lesion site; there is the further advantage that the movement of the capsule endoscope can be automated.
  • FIG. 1 is a schematic block diagram of an automatic lesion position detection apparatus according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a method for automatically detecting a lesion position according to an embodiment of the present invention.
  • FIG. 3 is a schematic process flowchart of the multi-threshold setting process of FIG. 2.
  • FIG. 4 is a schematic process flowchart of the image conversion process of FIG. 2.
  • FIG. 5 is a diagram illustrating images generated for each processing step according to an embodiment of the present invention.
  • FIGS. 6 and 7 are diagrams for explaining the effects of the present invention.
  • first, second, A, and B may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • The apparatus 100 for automatically detecting a lesion position may include an input unit 110, a threshold setting unit 120, an image conversion unit 130, and a contour processing unit 140.
  • The input unit 110 inputs an original image to the automatic lesion position detecting apparatus 100.
  • That is, the original image collected through a medical device (e.g., an endoscope) is received by the input unit 110 and transmitted to the threshold setting unit 120.
  • the threshold setting unit 120 sets at least one threshold by analyzing the original image transmitted through the input unit 110. To this end, the threshold setting unit 120 converts the original image, which is a color tone image, to a grayscale image, and then derives brightness values of all pixels constituting the grayscale image. At least one threshold is set based on the brightness values of all the pixels.
  • the threshold setting unit 120 derives an average value of brightness values for all the pixels constituting the grayscale image based on Equation 1, and sets the value as a first threshold.
  • [Equation 1]  Threshold = Σ Value_gray(i) / Number of Pixels
  • Σ Value_gray(i): the sum of the brightness values of all pixels constituting the grayscale image
  • Number of Pixels: the total number of pixels
  • The first threshold obtained by Equation 1 serves as the reference for normal tissue in the corresponding frame.
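  • For illustration only (not part of the patent disclosure), the following is a minimal sketch of how Equation 1 could be computed, assuming Python with OpenCV and NumPy; the function name first_threshold and the BGR input format are assumptions.

```python
import cv2
import numpy as np

def first_threshold(original_bgr: np.ndarray) -> tuple[np.ndarray, float]:
    """Convert a color frame to grayscale and compute Equation 1:
    the sum of all pixel brightness values divided by the number of pixels."""
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    threshold_1 = float(gray.sum()) / gray.size  # equivalent to gray.mean()
    return gray, threshold_1
```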
  • the threshold setting unit 120 may further set additional thresholds based on the first threshold. This is to reduce an error that may occur when the corresponding image is binarized with only one first threshold. For example, if the illuminated area within a frame is concentrated in one place, an error may occur when the difference between abnormal tissue and normal tissue is determined by one reference value.
  • Accordingly, in addition to the first threshold value, the threshold setting unit 120 may set an average brightness value for the illuminated (bright) area as a second threshold value and an average brightness value for the non-illuminated (dark) area as a third threshold value.
  • the second threshold may be derived from Equation 2
  • the third threshold may be derived from Equation 3.
  • [Equation 2]  Threshold_bright = Σ Value_bright(i) / Number of bright Pixels
  • Threshold_bright: the average brightness value of the illuminated (bright) area
  • Σ Value_bright(i): the sum of the brightness values of pixels whose brightness value is greater than the first threshold value
  • Number of bright Pixels: the number of pixels whose brightness value is greater than the first threshold value
  • [Equation 3]  Threshold_dark = Σ Value_dark(i) / Number of dark Pixels
  • Threshold_dark: the average brightness value of the non-illuminated (dark) area
  • Σ Value_dark(i): the sum of the brightness values of pixels whose brightness value is smaller than the first threshold value
  • Number of dark Pixels: the number of pixels whose brightness value is smaller than the first threshold value
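  • A companion sketch for Equations 2 and 3, under the same assumptions (Python/NumPy, hypothetical names): the bright and dark averages are taken over the pixels above and below the first threshold, respectively.

```python
import numpy as np

def bright_dark_thresholds(gray: np.ndarray, threshold_1: float) -> tuple[float, float]:
    """Equation 2: Threshold_bright, the mean brightness of pixels brighter than
    the first threshold. Equation 3: Threshold_dark, the mean brightness of
    pixels darker than the first threshold."""
    bright = gray[gray > threshold_1]
    dark = gray[gray < threshold_1]
    # Fall back to the first threshold if either side happens to be empty.
    threshold_bright = float(bright.mean()) if bright.size else threshold_1
    threshold_dark = float(dark.mean()) if dark.size else threshold_1
    return threshold_bright, threshold_dark
```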
  • the image converting unit 130 binarizes the grayscale image into a black and white image based on at least one threshold set by the threshold setting unit 120.
  • Specifically, the image conversion unit 130 compares the brightness values of all pixels constituting the grayscale image with the first threshold value: the brightness value of a pixel whose brightness value is greater than the first threshold value is set to 10, and the brightness value of a pixel whose brightness value is less than the first threshold value is set to 0, thereby converting the grayscale image into a black and white image.
  • the image conversion unit 130 generates the black and white image by performing the binarization process on each of the one or more threshold values generated by the threshold setting unit 120. Therefore, the number of monochrome images also varies according to the number of thresholds.
  • When the threshold setting unit 120 sets the first to third threshold values as in the example above, the image conversion unit 130 generates a black and white image for each of the first to third threshold values, so three black and white images are generated in total.
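  • The per-threshold binarization can be sketched as follows (illustrative only; the description above assigns 10 to pixels above the threshold, while 255 is used here purely so the result displays as white):

```python
import numpy as np

def binarize_per_threshold(gray: np.ndarray, thresholds: list[float]) -> list[np.ndarray]:
    """Produce one black-and-white image per threshold: pixels brighter than the
    threshold become white, all others become black (0)."""
    return [np.where(gray > t, 255, 0).astype(np.uint8) for t in thresholds]
```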
  • the contour processor 140 searches for components suspected of lesions from the black-and-white image (that is, the binarized image) generated by the image conversion unit 130, extracts the contour, and displays the contour on the original image.
  • the component refers to a white interior region at the points where black and white abut on the binarized image
  • the contour processor 140 extracts a contour corresponding to the edge of the white interior region.
  • It is preferable that the contour processing unit 140 extracts the contours from each of the one or more black and white images and then integrates and displays the contours on the single original image.
  • As the contour extraction method, a method may be used that sequentially scans the brightness values of the pixels constituting the image, marks the pixels whose brightness values differ from those of the surrounding pixels, and connects them to express the boundary line.
  • However, this is only one embodiment, and the contour extraction method of the present invention is not limited to the above-described method; various known techniques may be applied to extract the contours of the components.
  • the contour processing unit 140 preferably stores in advance the size range of the contour line to be displayed on the original image, and displays only the contour line within the size range.
  • the contour processing unit 140 may determine the length of the longest straight line that passes through the center of the contour of the closed curve and meets the closed curve as the size of the corresponding contour.
  • The contour processor 140 may display a contour on the original image only when its size is within the pre-stored size range. This is because components that are too large are unlikely to be lesions, and components that are too small are usually areas where the mucosal surface reflects light and are likewise unlikely to be lesions.
  • The size range of the contours is preferably set in advance by a person skilled in the art, and the examining doctor may also directly adjust the size of the lesions to be detected.
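  • As an illustration of the contour stage (not the patent's own implementation), the sketch below uses OpenCV 4's findContours in place of the brightness-difference scan described above, approximates the longest-chord size criterion with the larger side of the bounding rectangle, and uses hypothetical min_size/max_size limits:

```python
import cv2
import numpy as np

def draw_suspect_contours(original_bgr: np.ndarray,
                          bw_images: list[np.ndarray],
                          min_size: int = 10,
                          max_size: int = 200) -> np.ndarray:
    """Extract contours from every binarized image, keep only those whose size
    falls inside a preset range, and draw all of them on one copy of the
    original image."""
    annotated = original_bgr.copy()
    for bw in bw_images:
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            # Approximate the "longest straight line through the contour"
            # with the larger side of the bounding rectangle.
            _, _, w, h = cv2.boundingRect(contour)
            if min_size <= max(w, h) <= max_size:
                cv2.drawContours(annotated, [contour], -1, (0, 255, 0), 2)
    return annotated
```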
  • FIG. 2 is a flowchart illustrating a method for automatically detecting a lesion position according to an embodiment of the present invention. Referring to FIGS. 1 and 2, the method for automatically detecting a lesion position according to an embodiment of the present invention is as follows.
  • In step S100, it is determined whether an original image (i.e., a medical image) collected from a medical device has been input to the input unit 110. For example, it is determined whether an image acquired through the endoscope has been input to the input unit 110.
  • FIG. 5 is a diagram illustrating images generated by processing steps according to an embodiment of the present invention.
  • FIG. 5A illustrates the original image (i.e., a color image) input in step S100.
  • FIG. 5B shows the image converted into a grayscale image in step S200. The regions indicated in white in FIG. 5B are the components.
  • In step S300, the threshold setting unit 120 sets at least one threshold value (i.e., multiple threshold values) based on the brightness values of the pixels constituting the grayscale image. Since a more specific example of this process is illustrated in FIG. 3, it will be described with reference to FIG. 3.
  • In step S400, the image conversion unit 130 binarizes the grayscale image into a black and white image based on the at least one threshold value set in step S300.
  • The binarization process is repeated as many times as the number of thresholds set in step S300, and the number of black and white images generated as a result is equal to the number of thresholds.
  • FIGS. 5C and 5D illustrate the results of binarizing the grayscale image illustrated in FIG. 5B with different threshold values.
  • FIG. 5C shows a result of binarization by the first threshold
  • FIG. 5D shows a result of binarization by the second or third threshold.
  • Comparing FIGS. 5C and 5D, it can be seen that the binarization result of FIG. 5D is more accurate than that of FIG. 5C. Therefore, it can be seen that as the number of thresholds increases, the errors occurring in the binarization process can be reduced. Meanwhile, since a more specific example of this process is illustrated in FIG. 4, it will be described with reference to FIG. 4.
  • In step S500, the contour processing unit 140 searches the binarized images for components suspected of being lesions and extracts contours. The contour extraction process is repeated as many times as the number of black and white images generated in step S400; that is, a contour extraction process is performed on each of the black and white images generated in step S400.
  • In step S600, the contour processing unit 140 displays the contours extracted in step S500 on the original image.
  • Since the contours are extracted from one or more black and white images in step S500, the respective contours are integrated and displayed together on the single original image.
  • At this time, only contours satisfying a predetermined condition (e.g., a preset size range) may be displayed.
  • FIG. 5E shows an example of the contour extracted from the black and white image illustrated in FIG. 5D
  • FIG. 5F shows an example in which the contour illustrated in FIG. 5E is displayed on the original image illustrated in FIG. 5A.
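  • Chaining the hypothetical helpers sketched above gives an illustrative end-to-end flow of steps S100 to S600 (a sketch under the same Python/OpenCV assumptions, not the patented implementation):

```python
import numpy as np

def detect_lesion_candidates(frame_bgr: np.ndarray) -> np.ndarray:
    """Illustrative flow: grayscale conversion and first threshold (S200-S300),
    additional thresholds (S300), per-threshold binarization (S400),
    contour extraction and display (S500-S600)."""
    gray, t1 = first_threshold(frame_bgr)
    t_bright, t_dark = bright_dark_thresholds(gray, t1)
    bw_images = binarize_per_threshold(gray, [t1, t_bright, t_dark])
    return draw_suspect_contours(frame_bgr, bw_images)
```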
  • FIG. 3 is a schematic process flowchart of the multi-threshold setting process of FIG. 2. Referring to FIGS. 1 and 3, the multi-threshold setting process S300 of FIG. 2 is as follows.
  • In step S310, the threshold setting unit 120 extracts the brightness values of all pixels constituting the grayscale image, as illustrated in FIG. 5B.
  • the threshold setting unit 120 sets the average value of the brightness values for all the pixels as the first threshold value.
  • As described above with reference to FIG. 1, the threshold setting unit 120 preferably derives the first to third threshold values based on Equations 1 to 3.
  • The example of FIG. 3 shows a process for deriving the first to third threshold values, but the present invention is not limited to deriving the first to third threshold values. For example, only two thresholds (the first and second) may be derived, and the subsequent processing may be performed based on those values. However, when more than three thresholds are used, computation speed and accuracy become inefficient, so it is preferable to limit the number of thresholds to three.
  • FIG. 4 is a schematic process flowchart of the image conversion process of FIG. 2. Referring to FIGS. 1 and 4, the image conversion process S400 of FIG. 2 is as follows.
  • In step S410, the image conversion unit 130 performs binarization based on the first threshold value. As a result, the grayscale image generated by the threshold setting unit 120 is converted into a first black and white image.
  • In step S420, the image conversion unit 130 performs binarization based on the second threshold value. As a result, the grayscale image generated by the threshold setting unit 120 is converted into a second black and white image.
  • In step S430, the image conversion unit 130 performs binarization based on the third threshold value. As a result, the grayscale image generated by the threshold setting unit 120 is converted into a third black and white image.
  • the image conversion process S400 binarizes the grayscale image into a black and white image for each of the at least one threshold, and as a result, generates the same number of black and white images as the number of the thresholds.
  • FIGS. 6 and 7 are diagrams for explaining the effects of the present invention, showing the results of a performance evaluation in which the present invention was applied step by step.
  • FIG. 6A illustrates an example in which a specialist examiner marks a lesion area A on an endoscope image.
  • FIG. 6B illustrates an example in which the contours derived after performing binarization based on the first threshold (Threshold) obtained by Equation 1 are displayed on the original image.
  • FIG. 6C illustrates an example in which the contours derived after additionally performing binarization based on the second threshold (Threshold_bright) are integrated with the previously derived contours and displayed on the original image.
  • In FIG. 6B, the lesion area A indicated by the specialist in FIG. 6A is not marked by any contour. This is an error caused by the camera light being concentrated in the lower right corner of the original image, which distorts the initial threshold set to the average brightness value.
  • In FIG. 6C, a contour B similar to the lesion area A indicated by the specialist in FIG. 6A is displayed. This indicates that a more accurate result can be obtained by performing one more binarization based on the second threshold (Threshold_bright) derived by Equation 2 and then adding the contours derived from it.
  • An example that expresses this effect numerically is illustrated in FIG. 7.
  • In FIG. 7, the first stage (1st stage) applies only the average threshold (i.e., the first threshold), and the second stage (2nd stage) additionally applies the second threshold (Threshold_bright). If necessary, a third threshold (Threshold_dark) can be added to proceed to a third stage (3rd stage); however, the evaluation shown in FIG. 7 proceeds only up to the second stage (2nd stage), in which the second threshold (Threshold_bright) is added.
  • The result generated in the first stage (1st stage) of FIG. 7 is shown in FIG. 6B. Referring to FIGS. 6B and 7 together, the boundary lines shown in FIG. 6B mark areas suspected of being lesions because of their brightness difference from the surroundings even though no lesion is present; such cases are 'false positives', in which a negative (normal) area is judged to be positive (a lesion). On the other hand, since the actual lesion site was not detected as a lesion, that part is a 'false negative', in which a positive (lesion) is judged to be negative (normal).
  • The present invention applies multiple thresholds to detect and display all parts suspected of being lesions, thereby reducing the probability that an inexperienced doctor or a trainee under instruction misses a lesion during the examination. That is, removing 'false negatives', in which a lesion is not recognized as a lesion, is the top priority of the present invention, and referring to FIGS. 6 and 7, it can be confirmed that this goal is achieved through the setting of multiple thresholds.
  • the above-described embodiments of the present invention can be written as a program that can be executed in a computer, and can be implemented in a general-purpose digital computer that operates the program using a computer-readable recording medium.
  • the computer-readable recording medium may include a magnetic storage medium (eg, a ROM, a floppy disk, a hard disk, etc.) and an optical read medium (eg, a CD-ROM, DVD, etc.).

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Biophysics (AREA)
  • Optics & Photonics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)

Abstract

The present invention relates to an automatic lesion location detection device for automatically detecting a lesion location from a medical image. The automatic lesion location detection device of the present invention comprises: an input unit for receiving an original image collected through a medical instrument; a threshold value setting unit for converting the original image into a grayscale image and then setting at least one threshold value on the basis of the brightness values of the pixels constituting the grayscale image; an image conversion unit for binarizing the grayscale image into a black and white image on the basis of the at least one threshold value; and a contour processing unit for extracting contours by searching the binarized image for components suspected of being lesions, and then displaying the contours on the original image.
PCT/KR2016/003432 2015-04-01 2016-04-01 Device for automatically detecting the location of a lesion from a medical image and associated method WO2016159726A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2015-0046279 2015-04-01
KR1020150046279A KR20160118037A (ko) Apparatus for automatically detecting the location of a lesion from a medical image and method therefor

Publications (1)

Publication Number Publication Date
WO2016159726A1 true WO2016159726A1 (fr) 2016-10-06

Family

ID=57004716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/003432 WO2016159726A1 (fr) 2015-04-01 2016-04-01 Device for automatically detecting the location of a lesion from a medical image and associated method

Country Status (2)

Country Link
KR (1) KR20160118037A (fr)
WO (1) WO2016159726A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109346159A (zh) * 2018-11-13 2019-02-15 平安科技(深圳)有限公司 病例图像分类方法、装置、计算机设备及存储介质
CN113723417A (zh) * 2021-08-31 2021-11-30 平安国际智慧城市科技股份有限公司 基于单视图的影像匹配方法、装置、设备及存储介质
CN114903408A (zh) * 2022-04-22 2022-08-16 华伦医疗用品(深圳)有限公司 一种具有诊断成像的内窥镜成像系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102149369B1 (ko) * 2018-04-06 2020-08-31 주식회사 뷰노 의료 영상을 시각화하는 방법 및 이를 이용한 장치
KR102102255B1 (ko) 2019-05-14 2020-04-20 주식회사 뷰노 의료 영상에서 병변의 시각화를 보조하는 방법 및 이를 이용한 장치
KR102613718B1 (ko) 2022-09-06 2023-12-14 주식회사 에어스메디컬 의료 영상 기반의 객체 추적 방법, 프로그램 및 장치
KR20240034428A (ko) 2022-09-07 2024-03-14 (주)임팩티브에이아이 인공지능을 기반으로 하는 신제품의 개발 방법, 프로그램 및 장치

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004133736A (ja) * 2002-10-11 2004-04-30 Hitachi Medical Corp 医用画像表示方法及びその装置
KR20120113861A (ko) * 2011-04-06 2012-10-16 인하대학교 산학협력단 췌장 상피내 종양 진단 시스템
KR20120126679A (ko) * 2011-05-12 2012-11-21 주식회사 이턴 수술 상황 판단 및 대응을 위한 수술 로봇 시스템의 제어 방법과 이를 기록한 기록매체 및 수술 로봇 시스템
KR20140033332A (ko) * 2010-12-17 2014-03-18 오르후스 우니베르시테트 조직 병변의 묘사방법
KR20140134563A (ko) * 2013-05-14 2014-11-24 사회복지법인 삼성생명공익재단 정도관리를 위한 초음파 진단 장치 및 초음파 진단 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004133736A (ja) * 2002-10-11 2004-04-30 Hitachi Medical Corp 医用画像表示方法及びその装置
KR20140033332A (ko) * 2010-12-17 2014-03-18 오르후스 우니베르시테트 조직 병변의 묘사방법
KR20120113861A (ko) * 2011-04-06 2012-10-16 인하대학교 산학협력단 췌장 상피내 종양 진단 시스템
KR20120126679A (ko) * 2011-05-12 2012-11-21 주식회사 이턴 수술 상황 판단 및 대응을 위한 수술 로봇 시스템의 제어 방법과 이를 기록한 기록매체 및 수술 로봇 시스템
KR20140134563A (ko) * 2013-05-14 2014-11-24 사회복지법인 삼성생명공익재단 정도관리를 위한 초음파 진단 장치 및 초음파 진단 방법

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109346159A (zh) * 2018-11-13 2019-02-15 平安科技(深圳)有限公司 病例图像分类方法、装置、计算机设备及存储介质
CN109346159B (zh) * 2018-11-13 2024-02-13 平安科技(深圳)有限公司 病例图像分类方法、装置、计算机设备及存储介质
CN113723417A (zh) * 2021-08-31 2021-11-30 平安国际智慧城市科技股份有限公司 基于单视图的影像匹配方法、装置、设备及存储介质
CN113723417B (zh) * 2021-08-31 2024-04-12 深圳平安智慧医健科技有限公司 基于单视图的影像匹配方法、装置、设备及存储介质
CN114903408A (zh) * 2022-04-22 2022-08-16 华伦医疗用品(深圳)有限公司 一种具有诊断成像的内窥镜成像系统

Also Published As

Publication number Publication date
KR20160118037A (ko) 2016-10-11

Similar Documents

Publication Publication Date Title
WO2016159726A1 (fr) Device for automatically detecting the location of a lesion from a medical image and associated method
WO2017192020A1 (fr) Dental three-dimensional data processing device and associated method
WO2014208971A1 (fr) Method and apparatus for displaying ultrasound images
US7907775B2 (en) Image processing apparatus, image processing method and image processing program
WO2019172621A1 (fr) Disease prediction method and disease prediction device using the same
WO2020242019A1 (fr) Method and device for processing medical images using machine learning
WO2015041451A1 (fr) Diagnostic imaging device for photographing a breast by matching a tactile image and a near-infrared image, and method for acquiring an image of breast tissue
CN101404923A Medical image processing apparatus and medical image processing method
WO2013095032A1 (fr) Method for automatically detecting a mid-sagittal plane by means of an ultrasound image, and associated apparatus
WO2013105815A1 (fr) Fetus modeling method and image processing apparatus
WO2022059982A1 (fr) Ultrasound diagnosis system
Pan et al. A novel algorithm for color similarity measurement and the application for bleeding detection in WCE
WO2020180135A1 (fr) Apparatus and method for predicting brain disease, and learning apparatus for predicting brain disease
WO2017065358A1 (fr) Method and device for diagnosing infectious disease through a reagent kit image
WO2014157796A1 (fr) Endoscope system for diagnosis support and method for controlling the same
WO2017150894A1 (fr) Method and device for blood vessel analysis using an angiographic image
WO2022139068A1 (fr) Deep-learning-based lung disease diagnosis support system and deep-learning-based lung disease diagnosis support method
WO2020222555A1 (fr) Image analysis device and method
WO2021187700A2 (fr) Method for ultrasound diagnosis of the carotid artery
CN113768452A Intelligent timing method and device for an electronic endoscope
WO2023075303A1 (fr) Artificial-intelligence-based endoscopic diagnosis support system and method for controlling the same
WO2017010612A1 (fr) System and method for predicting pathological diagnosis based on medical image analysis
WO2021112436A1 (fr) Device and method for automatically calculating bowel cleanliness
WO2020246676A1 (fr) Automatic cervical cancer diagnosis system
WO2022211402A1 (fr) Method and device for analyzing atherosclerotic plaque tissue using a multimodal fusion image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16773507

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16773507

Country of ref document: EP

Kind code of ref document: A1