WO2019143179A1 - Method for automatically detecting same regions of interest between images of the same object taken at a time interval, and apparatus using said method - Google Patents

Method for automatically detecting same regions of interest between images of the same object taken at a time interval, and apparatus using said method

Info

Publication number
WO2019143179A1
WO2019143179A1 (application PCT/KR2019/000761)
Authority
WO
WIPO (PCT)
Prior art keywords
interest
image
same
region
computing device
Prior art date
Application number
PCT/KR2019/000761
Other languages
English (en)
Korean (ko)
Inventor
정규환
Original Assignee
주식회사 뷰노
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 뷰노 filed Critical 주식회사 뷰노
Publication of WO2019143179A1 publication Critical patent/WO2019143179A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • The present invention relates to a method for detecting the same region of interest between images photographed at a time interval of the same subject, and to an apparatus using the method. Specifically, according to the method of the present invention, when a first image and a second image photographed at a time interval after the first image are acquired and a region of interest is set on the first image, a computing device detects in the second image, as the same region of interest, the region of interest that is the same as the designated one, or supports another interworking device in detecting it.
  • CT computed tomography
  • Lesions are progressive in nature; their shape and size change over time. It is therefore difficult to tell, by simple matching and comparison alone, whether two findings are the same lesion, which makes the task hard to automate.
  • Accordingly, the present invention proposes a method of training a deep learning model on pairs of lesions photographed at different time points, measuring the similarity of newly given lesion pairs, and determining as the same lesion any pair whose similarity is at or above a predetermined level, together with a device using the method.
  • Patent Document 1: US 7,660,448 B
  • Non-Patent Document 1: Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks"
  • Non-Patent Document 2: Dmitriy Serdyuk et al., "Twin Networks: Matching the Future for Sequence Generation", arXiv preprint arXiv:1708.06742v2
  • The present invention aims to enable detection of the same region of interest, in particular the same lesion, between different images.
  • The present invention also aims to enable measurement and follow-up observation of morphological and quantitative changes of a region of interest through quantification of the same region of interest.
  • According to one aspect of the present invention, there is provided a method for detecting the same region of interest between images photographed of the same subject, in which, when a first image and a second image photographed at a time interval after the first image are obtained and a region of interest is set on the first image, a computing device detects in the second image, as the same region of interest, the region of interest that is the same as the designated one, or supports another interworking device in detecting it.
  • The method comprises: (a) the computing device acquiring, or supporting another device in acquiring, the first image and the second image; (b) the computing device setting, or supporting the setting of, a search target zone in the second image corresponding to the region of interest designated on the acquired first image; and (c) the computing device searching, or supporting the search, within the set search target zone for at least one candidate image that is the same as or similar to the region of interest, and thereby detecting, as the same region of interest, the candidate selected from among the at least one candidate image according to a predetermined criterion.
  • According to another aspect of the present invention, there is provided a computer program stored on a machine-readable non-transitory medium, comprising instructions embodied to perform the method of detecting the same region of interest according to the present invention.
  • According to yet another aspect of the present invention, there is provided a computing device for detecting the same region of interest between images photographed of the same subject, comprising: a communication unit for acquiring a first image and a second image photographed at a time interval after the first image; and a processor which, when the first image and the second image are acquired and a region of interest is set on the first image, detects in the second image, as the same region of interest, the region of interest that is the same as the designated one, or supports another interworking device in detecting it.
  • Specifically, the processor sets, or supports the other device in setting, a search target zone in the second image corresponding to the region of interest designated on the acquired first image, searches, or supports the other device in searching, within the set search target zone for at least one candidate image that is the same as or similar to the region of interest, and thereby detects, or supports the detection of, the candidate selected according to a predetermined criterion as the same region of interest.
  • According to the present invention, it is possible, for each of one or more regions of interest (in particular, lesions) in one image, to determine the same region of interest in another image.
  • The present invention can be applied to medical images used in hospitals, for example three-dimensionally acquired ultrasound images, MRI images and the like, and the method of the present invention is of course not dependent on any particular image type or platform.
  • FIG. 1 is a conceptual diagram schematically showing an exemplary configuration of a computing device that performs the method according to the present invention for detecting the same region of interest between images photographed of the same subject.
  • FIG. 2 is an exemplary block diagram illustrating the hardware or software components of a computing device that performs the method of detecting a region of interest in accordance with the present invention.
  • Figure 3 is a flow diagram illustrating an exemplary method of detecting a region of interest in accordance with the present invention.
  • FIG. 4 is a conceptual diagram illustrating a calculation method of the degree of similarity used in the method of detecting the same area of interest according to the present invention.
  • Figure 5 is an exemplary illustration of regions of interest detected in accordance with the present invention.
  • Throughout the detailed description and claims of the present invention, the term 'image' refers to multidimensional data composed of discrete image elements (e.g., pixels in a two-dimensional image and voxels in a three-dimensional image).
  • For example, an 'image' may be a medical image of a subject acquired by (cone-beam) computed tomography, magnetic resonance imaging (MRI), ultrasound, or any other medical imaging system known in the art.
  • The image may also come from a non-medical context, for example a remote sensing system, electron microscopy, and the like.
  • The term 'image' covers both an image that is viewable (e.g., displayed on a video screen) and a digital representation of an image (e.g., a file corresponding to the pixel output of a CT or MRI detector).
  • For convenience of description, cone-beam computed tomography (CBCT) image data is sometimes shown in the drawings as an exemplary imaging modality.
  • However, those skilled in the art will understand that the imaging modalities used in various embodiments of the present invention include, without limitation, X-ray imaging, MRI, CT, positron emission tomography (PET), PET-CT, SPECT, SPECT-CT, MR-PET and the like, and that the present invention is not limited to three-dimensional images and the slice images derived from them.
  • DICOM Digital Imaging and Communications in Medicine
  • ACR American College of Radiology
  • NEMA National Electrical Manufacturers Association
  • Throughout the detailed description and claims of the present invention, 'PACS (Picture Archiving and Communication System)' refers to a system that stores, processes and transmits medical images in accordance with the DICOM standard; medical images obtained with imaging equipment such as X-ray, CT and MRI can be stored in DICOM format, transmitted over a network to terminals inside or outside the hospital, and supplemented there with reading results and medical records.
  • Throughout the detailed description and claims of the present invention, 'learning' or 'training' refers to performing machine learning through procedural computing; it will be understood by those of ordinary skill in the art that it is not intended to refer to mental processes such as human educational activity.
  • FIG. 1 is a conceptual diagram schematically illustrating an exemplary configuration of a computing device that performs a method of detecting a region of interest according to the present invention.
  • a computing device 100 includes a communication unit 110 and a processor 120.
  • The communication unit 110 can communicate with external computing devices (not shown).
  • Specifically, the computing device 100 may achieve the desired system performance through a combination of typical computer hardware (e.g., computer processors, memory, storage, input and output devices, other components of conventional computing devices, electronic communication devices, and electronic information storage systems such as network-attached storage (NAS) and storage area networks (SAN)) and computer software (i.e., instructions that enable the computing device to function in a particular manner).
  • NAS network-attached storage
  • SAN storage area networks
  • the communication unit 110 of the computing device can send and receive requests and responses to and from other interworking computing devices.
  • As an example, such requests and responses may be carried over the same transmission control protocol (TCP) session, but they are not limited thereto; for example, they could also be sent and received as user datagram protocol (UDP) datagrams.
  • TCP transmission control protocol
  • UDP user datagram protocol
  • The communication unit 110 may also include external input devices such as a keyboard and a mouse for receiving commands or instructions, and external output devices such as a printer and a display.
  • The processor 120 of the computing device may include hardware components such as a micro processing unit (MPU), a central processing unit (CPU), a graphics processing unit (GPU) or a tensor processing unit (TPU), a cache memory, and a data bus. It may further include the software configuration of an operating system and of applications that serve specific purposes.
  • MPU micro processing unit
  • CPU central processing unit
  • GPU graphics processing unit
  • TPU tensor processing unit
  • FIG. 2 is an exemplary block diagram illustrating the hardware or software components of a computing device that performs the method of detecting a region of interest in accordance with the present invention.
  • the computing device 100 may include an image acquisition module 210 as a component thereof.
  • The image acquisition module 210 is configured to acquire the first image and the second image to which the method according to the present invention is applied; each of these images may be two-dimensional or three-dimensional. If the images are three-dimensional, the region of interest may also be referred to as a volume of interest; if they are two-dimensional, it may be referred to as an area of interest.
  • The first image and the second image may be medical images acquired, for example, from an imaging apparatus or from an external image storage system linked through the communication unit 110, such as a picture archiving and communication system (PACS), but they are not limited thereto.
  • these images may be captured by the imaging device and transmitted to the PACS in accordance with the DICOM standard, and then acquired by the image acquisition module 210 of the computing device 100.
  • PACS picture archiving and communication system
  • Next, the acquired medical images may be passed to the search target zone setting module 220, which sets, in the second image, a search target zone corresponding to the region of interest designated on the first image.
  • The search target zone may be set larger than the size of the region of interest to allow for the positional error e between the first image and the second image, as in the sketch below.
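  • As a non-limiting illustration only, the following minimal sketch shows how such an enlarged search target zone could be cropped from the second image; it assumes a NumPy volume, an axis-aligned region of interest given by its center and size in voxels, and a margin standing in for the positional error e. All names are illustrative, not terminology of the present invention.

```python
import numpy as np

def crop_search_zone(volume_t2, roi_center, roi_size, margin):
    """Crop from the follow-up image a search target zone centred on the position of the
    ROI designated in the first image, enlarged by `margin` voxels per axis to absorb
    the positional error e between the two acquisitions."""
    half_sizes = [s // 2 + margin for s in roi_size]
    lower = [max(0, c - h) for c, h in zip(roi_center, half_sizes)]
    upper = [min(dim, c + h) for c, h, dim in zip(roi_center, half_sizes, volume_t2.shape)]
    index = tuple(slice(lo, hi) for lo, hi in zip(lower, upper))
    return volume_t2[index], lower  # cropped zone and its offset within the second image

# Example: a 16 x 32 x 32 ROI searched with a 10-voxel margin in a CT volume.
zone, offset = crop_search_zone(np.zeros((200, 512, 512)), (100, 240, 260), (16, 32, 32), 10)
```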
  • When the search target zone has been set, the candidate image search module 230 is configured to retrieve, within that zone, at least one candidate image that is the same as or similar to the region of interest.
  • For example, the candidate image search module 230 may be implemented using a recent deep neural network architecture such as a fully convolutional neural network (CNN) or a generative adversarial network (GAN).
  • GAN generative adversarial network
  • An exemplary generative adversarial network configuration is described in Non-Patent Document 1 (Goodfellow et al. (2014), "Generative Adversarial Networks").
  • For example, the neural network constituting the candidate image search module 230 may be a generative adversarial network comprising a generator network 232 (not shown) and a discriminator network 234 (not shown).
  • However, the technique used in the candidate image search module 230 is not limited to generative adversarial networks; various other techniques can be used.
  • Once learning or training is complete, the candidate image search module 230 may consist only of the generator network 232, since the discriminator network 234 serves the learning, as described later.
  • When at least one candidate image has been retrieved, the same-region-of-interest detection module 240 is configured to detect, as the same region of interest, the candidate selected according to a predetermined criterion.
  • the predetermined criterion is related to the similarity to be described later.
  • Information about the detected pair of same regions of interest can be passed to the storage and transmission module 250 (not shown), which may store the information or provide it to an external entity.
  • the external entity includes a user of the computing device 100, a manager, a medical professional in charge of the subject, and the like.
  • For example, when a second slice image calculated from the first slice image is required, the external entity may be an external AI device that includes separate AI hardware and/or software modules utilizing that second slice image.
  • The word 'external' in 'external entity' is not used to exclude embodiments in which the AI hardware and/or software modules utilizing at least one of the first slice image and the second slice image are integrated into the computing device 100, nor to exclude the possibility that the second slice image, i.e. the result of the hardware and/or software modules performing the method of the present invention, is used as input data of other methods. That is, the external entity may be the computing device 100 itself.
  • When providing the information, the storage and transmission module 250 may do so through a predetermined display device or the like; in that case, each image and the corresponding pair of same regions of interest can be displayed on the display device. Storage of the information representing the pair of same regions of interest may also be performed by another device associated with the computing device 100, such as a PACS.
  • Although the components shown in FIG. 2 are illustrated as being realized in one computing device for convenience of explanation, it will be understood that a plurality of computing devices 100 performing the method of the present invention may also be configured to interwork with one another.
  • Figure 3 is a flow diagram illustrating an exemplary method of detecting a region of interest in accordance with the present invention.
  • Referring to FIG. 3, when the first image and a second image photographed at a time interval after the first image have been obtained and a region of interest has been set on the first image, the computing device 100 detects in the second image, as the same region of interest, the region of interest that is the same as the designated one, or supports another device interworking with the computing device in detecting it.
  • Specifically, the method of detecting the same region of interest according to the present invention includes a step S100 in which the image acquisition module 210, implemented by the communication unit 110 of the computing device 100, acquires the first image and the second image, or supports another device in acquiring them.
  • The method further includes a step S200 in which the search target zone setting module 220, implemented by the computing device 100, sets in the second image, or supports the setting of, a search target zone corresponding to each of at least one region of interest designated on the acquired first image.
  • Before step S200, a step of designating the region of interest on the first image may be performed; this may be done by a specialist (e.g., a doctor) who can identify the particular region of interest, or by a lesion detection method (for example, a detection method such as that of U.S. Patent No. 7,660,448).
  • The at least one region of interest on the first image may be all of the regions of interest identified on the first image, or only those selected by a predetermined operation or according to a predetermined criterion.
  • The predetermined operation may be, for example, an operation of selecting an item from a list of regions of interest provided through a user interface (for example, a click), an operation of drawing a box around part of the image through the user interface, and so on.
  • The predetermined criterion may be a numerical value indicating that the region of interest is meaningful, for example the requirement that a confidence value, i.e. the probability that a finding identified in the medical image is an actual lesion, be at or above a certain level.
  • For convenience, let the photographing time point of the first image be t and the photographing time point of the second image be t'.
  • The list of regions of interest designated on the first image at time t may then be called t_I, and the list of regions of interest designated on the second image at time t' may be called t'_I.
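  • By way of a non-limiting sketch, the lists t_I and t'_I could be represented as below, each entry carrying the region-of-interest image patch and its spatial location as described later for the training data; the field names and the confidence attribute are assumptions of this sketch, not terminology of the present invention.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class RegionOfInterest:
    patch: np.ndarray             # 2-D patch or 3-D voxel block cut around the finding
    position: Tuple[float, ...]   # spatial location in the image coordinate system
    confidence: float = 1.0       # e.g. probability that the finding is an actual lesion

@dataclass
class RoiList:
    time_point: str               # "t" for the first image, "t_prime" for the second
    rois: List[RegionOfInterest] = field(default_factory=list)

t_I = RoiList(time_point="t")              # regions of interest designated on the first image
t_prime_I = RoiList(time_point="t_prime")  # regions of interest designated on the second image
```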
  • Then, in step S200, an individual search target zone is set for each individual region of interest included in t_I.
  • Each individual search target zone has a predetermined physical extent, for example the space within a predetermined distance d of the position corresponding to the individual region of interest of the first image.
  • The search target zone is used to narrow the search by exploiting the position of the region of interest relative to the images of the subject, so that detection of the same region of interest can be performed efficiently.
  • However, it will be understood that setting a restricted search target zone is not mandatory; when sufficient computing resources are available, the search target zone may be the entire second image. That is, the search target zone corresponding to the region of interest designated on the first image may be set to be the whole of the second image.
  • Next, the method of detecting the same region of interest includes a step S300 in which the candidate image search module 230 of the computing device 100 searches, or supports the search, within the individual search target zone, for at least one candidate image that is the same as or similar to the individual region of interest, and thereby detects, or supports the detection of, the candidate selected from among the at least one candidate image according to a predetermined criterion as the same region of interest.
  • Step S300 may include, for example, a step S310 in which the candidate image search module 230 searches for, or supports the retrieval of, the at least one candidate image, and a step S320 in which the candidate image of the second image having the highest similarity to the individual region of interest is detected, or its detection is supported, as the same region of interest.
  • Searching for candidate images in step S310 does not mean that regions of interest are detected in the second image at that moment; rather, it means that, among the regions of interest already present in the second image, those lying within the search target zone are selected, as sketched below. That is, the detection of regions of interest in the second image may be performed before step S310, for example in a preceding step S150.
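  • The selection described above could look like the following sketch, which keeps as candidates only those second-image regions of interest lying within the predetermined distance d of the first-image region of interest; the Euclidean-distance criterion and the attribute names (reusing the RegionOfInterest sketch above) are assumptions.

```python
import numpy as np

def select_candidates(second_image_rois, first_roi, distance_d):
    """Return the regions of interest of the second image that fall inside the search
    target zone, i.e. within distance d of the first-image ROI position."""
    candidates = []
    for roi in second_image_rois:
        displacement = np.asarray(roi.position) - np.asarray(first_roi.position)
        if np.linalg.norm(displacement) <= distance_d:
            candidates.append(roi)
    return candidates
```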
  • Similarity is a measure of how alike two images are. It can be computed roughly from characteristics such as color, contrast and saturation using conventional image processing; in carrying out the method of the present invention, however, a similarity computed by a deep learning model, in which image characteristics at various levels are reflected, is preferred.
  • In step S320, the similarity between the individual region of interest of the first image and a candidate image of the second image (i.e., an image selected from among the individual regions of interest of the second image) may be determined by a deep learning model of the following kind.
  • FIG. 4 is a conceptual diagram illustrating the calculation of the similarity used in the method of detecting the same region of interest according to the present invention.
  • FIG. 4 illustrates a pair of twin networks.
  • The two networks may be convolutional neural networks that share weights, or they may be separate networks that do not share weights.
  • The input to each network may be a two-dimensional patch or a three-dimensional voxel block.
  • The twin networks are trained to output, for example, 1 for a pair showing the same region of interest (e.g., the same lesion) and 0 for a non-identical pair. If the final output layer of the network is a softmax layer, the resulting value always lies between 0 and 1; thus, when any pair of images is given as input, a value between 0 and 1 is produced, and this value can be used as the similarity between the two region-of-interest images, as in the sketch below.
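  • A minimal PyTorch sketch of such a twin network follows, assuming single-channel three-dimensional voxel patches and a weight-shared branch; the layer sizes are illustrative, and the 'same' probability is read from a two-class softmax so the output lies between 0 and 1 as described.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinBranch(nn.Module):
    """One branch of the twin network: a small 3-D CNN embedding a voxel patch."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class TwinSimilarityNet(nn.Module):
    """Both ROI patches pass through the same (weight-shared) branch; the concatenated
    embeddings feed a 2-class softmax whose 'same' probability serves as the similarity."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.branch = TwinBranch(embed_dim)      # use two separate branches for the non-shared variant
        self.head = nn.Linear(2 * embed_dim, 2)

    def forward(self, patch_a, patch_b):
        za, zb = self.branch(patch_a), self.branch(patch_b)
        probs = F.softmax(self.head(torch.cat([za, zb], dim=1)), dim=1)
        return probs[:, 1]                       # probability of "same region of interest", in [0, 1]

# Example: similarity of two batches of 32^3 patches.
sim = TwinSimilarityNet()(torch.randn(4, 1, 32, 32, 32), torch.randn(4, 1, 32, 32, 32))
```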
  • The deep learning model according to the present invention can be trained in advance using training data that contain image pairs photographed of the same subject at a time interval, in which pairs of identical and non-identical regions of interest are labeled.
  • Specifically, the training data include pairs of images acquired at different times t and t' for the same subject, together with a list of the regions of interest detected in each image.
  • Each list includes information such as the region-of-interest image (e.g., the image patch corresponding to a lesion) and the spatial location of that region of interest.
  • From such lists, pairs of regions of interest that are not the same can also be constructed; these are defined as non-identical pairs. Since data on identical pairs and non-identical pairs can be collected for various subjects and various time points, the deep learning model of the present invention can be trained to output a value of 1 for the input of an identical pair and a value of 0 for the input of a non-identical pair, for example with a training loop like the sketch below.
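  • A sketch of how such labeled pairs could drive the training follows, reusing the TwinSimilarityNet above; the binary cross-entropy loss, the Adam optimizer and the data-loader format (patch_t, patch_t_prime, label with 1 for an identical pair and 0 otherwise) are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def train_on_pairs(model, pair_loader, epochs=10, lr=1e-4, device="cpu"):
    """Train the twin network on (patch_t, patch_t_prime, label) triples, where the
    label is 1 for an identical pair of regions of interest and 0 for a non-identical pair."""
    model = model.to(device)
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for patch_a, patch_b, label in pair_loader:
            similarity = model(patch_a.to(device), patch_b.to(device))   # values in [0, 1]
            loss = F.binary_cross_entropy(similarity, label.float().to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```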
  • In step S320, if the similarity of every candidate image is lower than a predetermined threshold S, it can be determined that the same region of interest has not been detected. On the other hand, if at least one candidate image has a similarity equal to or greater than the threshold S, the candidate image with the highest similarity may be detected as the same region of interest; one way to implement this selection is sketched below.
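  • One possible implementation of this selection, under the same assumptions as the sketches above (a trained TwinSimilarityNet and single-channel candidate patches as tensors), is:

```python
import torch

def detect_same_roi(model, first_patch, candidate_patches, threshold_s=0.5):
    """Score every candidate against the first-image ROI patch; return the index and
    similarity of the best candidate, or (None, None) if no similarity reaches S."""
    model.eval()
    with torch.no_grad():
        repeated = first_patch.unsqueeze(0).expand(len(candidate_patches), -1, -1, -1, -1)
        similarities = model(repeated, torch.stack(candidate_patches))
    best = int(torch.argmax(similarities))
    if float(similarities[best]) < threshold_s:
        return None, None
    return best, float(similarities[best])
```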
  • FIG. 5 is an exemplary illustration of the same regions of interest detected in accordance with the present invention; referring to FIG. 5, nodule lesions detected as the same regions of interest in chest CT images are illustrated. Those of ordinary skill in the art will appreciate that the method of the present invention is generally applicable to a wide variety of images containing regions of interest that change over time.
  • Finally, the method of detecting the same region of interest according to the present invention may include a step S400 in which the storage and transmission module 250 (not shown), implemented by the computing device 100, stores or provides, or supports the storage or provision of, information about the detected same regions of interest, such as the similarity between the regions of interest, their type, an identification number, and the name of the lesion. By including quantitative and morphological indicators, such information can be used to facilitate follow-up and observation of the regions of interest.
  • The generated information about the same regions of interest may be stored through the storage and transmission module 250, provided to an external entity via a predetermined display device, and/or provided to another device interfaced with the computing device 100, such as a PACS; one possible record format is sketched below.
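  • Purely as a non-limiting illustration of the kind of record that step S400 could store or hand over (similarity, type, identification number, lesion name), assuming the RegionOfInterest sketch above; the JSON layout and key names are not prescribed by the present invention.

```python
import json

def same_roi_record(pair_id, first_roi, second_roi, similarity, lesion_name="nodule"):
    """Build a serializable record describing one detected pair of same regions of interest."""
    return json.dumps({
        "pair_id": pair_id,
        "lesion_name": lesion_name,
        "similarity": round(float(similarity), 4),
        "position_t": list(first_roi.position),
        "position_t_prime": list(second_roi.position),
    })
```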
  • Through all of the embodiments and variations described above, the present invention makes it possible to determine and track, for each of at least one region of interest in one image, the same region of interest in another image, and to construct a system that does so. This allows physicians to diagnose progressive lesions more accurately, which ultimately improves the quality of care and, with the aid of AI, improves the workflow in the medical field.
  • The hardware used to implement the processes described above may be a general-purpose computer and/or a dedicated computing device, or special features or components of a particular computing device.
  • The processes may be realized by one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices having internal and/or external memory. Additionally or alternatively, the processes may be embodied in an application-specific integrated circuit (ASIC), a programmable gate array, programmable array logic (PAL), or any other device or combination of devices that can be configured to process electronic signals.
  • ASICs application specific integrated circuits
  • PAL programmable array logic
  • The objects of the technical solution of the present invention, or the portions thereof that contribute over the prior art, may be implemented in the form of program instructions executable through various computer components and recorded on a machine-readable recording medium.
  • the machine-readable recording medium may include program commands, data files, data structures, and the like, alone or in combination.
  • the program instructions recorded on the machine-readable recording medium may be those specially designed and constructed for the present invention or may be those known to those of ordinary skill in the computer software arts.
  • Examples of machine-readable recording media include magnetic media such as hard disks, floppy disks and magnetic tape; optical recording media such as CD-ROM, DVD and Blu-ray; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM and flash memory.
  • Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code executable by a computer using an interpreter or the like. Such instructions may be written in a structured programming language such as C, an object-oriented programming language such as C++, or a high- or low-level programming language (assembly languages, hardware description languages, and database programming languages and techniques), and may run on any of the above devices as well as on heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software.
  • Accordingly, the methods described above and combinations thereof may be implemented as executable code that performs each of their steps.
  • The methods may also be implemented as systems that perform those steps, and they may be distributed across devices in various ways, or all of the functionality may be integrated into a single dedicated, stand-alone device or other hardware.
  • The means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such sequential arrangements and combinations are intended to fall within the scope of this disclosure.
  • the hardware device may be configured to operate as one or more software modules to perform processing in accordance with the present invention, and vice versa.
  • For example, the hardware device may include a processor, such as an MPU, CPU, GPU or TPU, coupled to memory such as ROM/RAM for storing program instructions and configured to execute the instructions stored in that memory, and may include a communication unit capable of sending and receiving data to and from external devices.
  • the hardware device may include a keyboard, a mouse, and other external input devices for receiving commands generated by the developers.
  • Such equal or equivalent modifications include, for example, logically equivalent methods capable of producing the same results as the method according to the present invention; the spirit and scope of the present invention should not be limited by the examples described above, but should be understood in the broadest sense permitted by law.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to a method for detecting the same regions of interest in images of the same object taken at a time interval, and to an apparatus using this method. In particular, the method according to the present invention comprises acquiring a first image and a second image taken at a time interval after the moment at which the first image was taken and, when a region of interest is set in the first image, a computing device detects in the second image, as the same region of interest, a region of interest that is the same as said region of interest, or supports the detection being performed by an apparatus connected to the computing device.
PCT/KR2019/000761 2018-01-18 2019-01-18 Procédé de détection automatique de mêmes régions d'intérêt entre des images du même objet prises à un intervalle de temps, et appareil ayant recours à ce procédé WO2019143179A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180006503A KR101919847B1 (ko) 2018-01-18 2018-01-18 동일 피사체에 대하여 시간 간격을 두고 촬영된 영상 간에 동일 관심구역을 자동으로 검출하는 방법 및 이를 이용한 장치
KR10-2018-0006503 2018-01-18

Publications (1)

Publication Number Publication Date
WO2019143179A1 true WO2019143179A1 (fr) 2019-07-25

Family

ID=64561973

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/000761 WO2019143179A1 (fr) 2018-01-18 2019-01-18 Procédé de détection automatique de mêmes régions d'intérêt entre des images du même objet prises à un intervalle de temps, et appareil ayant recours à ce procédé

Country Status (2)

Country Link
KR (1) KR101919847B1 (fr)
WO (1) WO2019143179A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220076419A1 (en) * 2020-09-08 2022-03-10 Canon Medical Systems Corporation Ultrasonic diagnostic device and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102683362B1 (ko) * 2018-12-07 2024-07-10 삼성전자주식회사 이미지를 처리하기 위한 방법 및 그 전자 장치
KR102173942B1 (ko) * 2020-03-31 2020-11-04 주식회사 딥노이드 다른 시점의 영상을 이용한 객체 검출을 위한 장치 및 이를 위한 방법
KR102161853B1 (ko) * 2020-05-28 2020-10-05 주식회사 에프앤디파트너스 의료 영상 처리 방법, 의료 영상 검색 방법 및 이를 이용한 장치
KR102343348B1 (ko) * 2021-07-29 2021-12-24 국방과학연구소 적외선 영상 데이터 획득 방법, 적외선 영상 데이터 획득 장치 및 상기 방법을 실행시키기 위하여 기록매체에 저장된 컴퓨터 프로그램

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013192624A (ja) * 2012-03-16 2013-09-30 Hitachi Ltd 医用画像診断支援装置、医用画像診断支援方法ならびにコンピュータプログラム
KR20140093376A (ko) * 2013-01-16 2014-07-28 삼성전자주식회사 의료 영상을 이용하여 대상체에 악성 종양이 존재하는지 여부를 예측하는 장치 및 방법
WO2015191414A2 (fr) * 2014-06-09 2015-12-17 Siemens Corporation Détection de points de repère avec des contraintes spatiales et temporelles en imagerie médicale
KR20160032586A (ko) * 2014-09-16 2016-03-24 삼성전자주식회사 관심영역 크기 전이 모델 기반의 컴퓨터 보조 진단 장치 및 방법
KR20160047516A (ko) * 2013-08-27 2016-05-02 하트플로우, 인크. 관동맥 병변들의 위치, 발병, 및/또는 변화를 예측하기 위한 시스템들 및 방법들

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013192624A (ja) * 2012-03-16 2013-09-30 Hitachi Ltd 医用画像診断支援装置、医用画像診断支援方法ならびにコンピュータプログラム
KR20140093376A (ko) * 2013-01-16 2014-07-28 삼성전자주식회사 의료 영상을 이용하여 대상체에 악성 종양이 존재하는지 여부를 예측하는 장치 및 방법
KR20160047516A (ko) * 2013-08-27 2016-05-02 하트플로우, 인크. 관동맥 병변들의 위치, 발병, 및/또는 변화를 예측하기 위한 시스템들 및 방법들
WO2015191414A2 (fr) * 2014-06-09 2015-12-17 Siemens Corporation Détection de points de repère avec des contraintes spatiales et temporelles en imagerie médicale
KR20160032586A (ko) * 2014-09-16 2016-03-24 삼성전자주식회사 관심영역 크기 전이 모델 기반의 컴퓨터 보조 진단 장치 및 방법

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220076419A1 (en) * 2020-09-08 2022-03-10 Canon Medical Systems Corporation Ultrasonic diagnostic device and storage medium
US12062171B2 (en) * 2020-09-08 2024-08-13 Canon Medical Systems Corporation Ultrasonic diagnostic device and storage medium

Also Published As

Publication number Publication date
KR101919847B1 (ko) 2018-11-19

Similar Documents

Publication Publication Date Title
WO2019143179A1 (fr) Procédé de détection automatique de mêmes régions d'intérêt entre des images du même objet prises à un intervalle de temps, et appareil ayant recours à ce procédé
WO2019143177A1 (fr) Procédé de reconstruction de série d'images de tranche et appareil utilisant celui-ci
KR101943011B1 (ko) 피검체의 의료 영상 판독을 지원하는 방법 및 이를 이용한 장치
WO2019103440A1 (fr) Procédé permettant de prendre en charge la lecture d'une image médicale d'un sujet et dispositif utilisant ce dernier
KR101919866B1 (ko) 뼈 스캔 영상에서 암 전이 여부의 판정을 지원하는 방법 및 이를 이용한 장치
WO2019143021A1 (fr) Procédé de prise en charge de visualisation d'images et appareil l'utilisant
JP2019153250A (ja) 医療文書作成支援装置、方法およびプログラム
KR20190103937A (ko) 뉴럴 네트워크를 이용하여 캡슐 내시경 영상으로부터 병변 판독 방법 및 장치
WO2021034138A1 (fr) Procédé d'évaluation de la démence et appareil utilisant un tel procédé
CN111989710A (zh) 医学成像中的自动切片选择
WO2017051944A1 (fr) Procédé pour augmenter l'efficacité de la lecture en utilisant des informations de regard d'utilisateur dans un processus de lecture d'image médicale et appareil associé
JP2019082881A (ja) 画像検索装置、方法およびプログラム
WO2016125978A1 (fr) Procédé et appareil d'affichage d'image médical
WO2019124836A1 (fr) Procédé de mappage d'une région d'intérêt d'une première image médicale sur une seconde image médicale, et dispositif l'utilisant
KR101923962B1 (ko) 의료 영상의 열람을 지원하는 방법 및 이를 이용한 장치
WO2022131642A1 (fr) Appareil et procédé pour déterminer la gravité d'une maladie sur la base d'images médicales
WO2022145841A1 (fr) Procédé d'interprétation de lésion et appareil associé
Chen et al. Domain adaptive and fully automated carotid artery atherosclerotic lesion detection using an artificial intelligence approach (LATTE) on 3D MRI
Wang et al. Automatic creation of annotations for chest radiographs based on the positional information extracted from radiographic image reports
US11923069B2 (en) Medical document creation support apparatus, method and program, learned model, and learning apparatus, method and program
KR20200116278A (ko) 치과 영상으로부터 피검체의 성별 및 연령을 판정하는 방법 및 이를 이용한 장치
EP3467770B1 (fr) Procédé d'analyse d'un ensemble de données d'imagerie médicale, système d'analyse d'un ensemble de données d'imagerie médicale, produit-programme d'ordinateur et support lisible par ordinateur
US11416994B2 (en) Method and system for detecting chest x-ray thoracic diseases utilizing multi-view multi-scale learning
KR102222816B1 (ko) 진행성 병변의 미래 영상을 생성하는 방법 및 이를 이용한 장치
WO2023013959A1 (fr) Appareil et procédé de prédiction de l'accumulation de bêta-amyloïdes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19741258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19741258

Country of ref document: EP

Kind code of ref document: A1