WO2022054541A1 - Dispositif, procédé et programme de traitement d'image - Google Patents

Dispositif, procédé et programme de traitement d'image

Info

Publication number
WO2022054541A1
WO2022054541A1 (PCT/JP2021/030594)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lesion
standard
organ
schema
Prior art date
Application number
PCT/JP2021/030594
Other languages
English (en)
Japanese (ja)
Inventor
拓矢 湯澤
Original Assignee
富士フイルム株式会社
Priority date
Filing date
Publication date
Application filed by 富士フイルム株式会社
Publication of WO2022054541A1 publication Critical patent/WO2022054541A1/fr

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Definitions

  • This disclosure relates to image processing devices, methods and programs.
  • CT: Computed Tomography
  • MRI: Magnetic Resonance Imaging
  • CAD: Computer-Aided Diagnosis
  • A schema is a diagram schematically representing the structure of the human body. For example, if a lesion is detected in the upper lobe of the right lung as a result of analyzing a medical image, an abnormality can be highlighted in the upper-lobe region of a schema that schematically shows the lung. With such a display method, however, although the schema shows that there is an abnormal finding in the upper lobe, it is impossible to read where within the upper lobe the lesion lies.
  • A method has therefore been proposed in which a lesion is identified in a three-dimensional image acquired by a CT device or the like, a virtual patient image created in advance is aligned with the three-dimensional image, the position of the lesion is specified in the virtual patient image, and the virtual patient image with the specified lesion position is displayed (see Patent Document 1).
  • The virtual patient image in Patent Document 1 is generated in advance according to the patient's physique (age, adult or child, sex, weight, height, and so on) so as to resemble an image actually taken by X-ray of a human body having a standard physique.
  • Because the virtual patient image is not generated by photographing an actual patient with X-rays, it represents the structure of the human body only schematically, much like a schema. It is difficult to accurately align such a schematic image with a medical image acquired by actual imaging. Consequently, with the method described in Patent Document 1, it is difficult to accurately reflect the position of a lesion contained in the medical image in the schema.
  • the present disclosure has been made in view of the above circumstances, and an object thereof is to enable the position of the lesion included in the medical image to be accurately reflected in the schema.
  • the image processing apparatus comprises at least one processor.
  • The processor derives a target organ image including a lesion by extracting the target organ containing the lesion from a medical image; identifies the position of the lesion in a standard organ image, derived in advance by normalizing a plurality of target organ images, by aligning the standard organ image with the derived target organ image; and identifies the position of the lesion in a schema schematically representing the target organ based on the position of the lesion identified in the standard organ image.
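As a loose illustration, the three claimed steps can be sketched as follows. Every function body here is a hypothetical toy stand-in (threshold extraction, per-axis scaling in place of true registration), not the actual algorithms of the disclosure:

```python
import numpy as np

def extract_target_organ(medical_image, threshold=-500.0):
    """Step 1 (stand-in): derive a target-organ mask by thresholding."""
    return medical_image > threshold

def map_lesion_to_standard(lesion_pos, organ_shape, standard_shape):
    """Step 2 (stand-in): after alignment, a lesion position carries over
    to the standard organ image; approximated here by per-axis scaling."""
    return tuple(int(round(p * s / o))
                 for p, s, o in zip(lesion_pos, standard_shape, organ_shape))

def map_lesion_to_schema(std_pos, std_centroid, schema_centroid, alpha):
    """Step 3: reflect the lesion's offset from the standard image's
    centroid onto the schema's centroid, scaled by alpha."""
    offset = np.subtract(std_pos, std_centroid)
    return tuple(float(v) for v in np.add(schema_centroid, alpha * offset))

volume = np.full((8, 8, 8), -1000.0)
volume[2:6, 2:6, 2:6] = -300.0                    # toy "organ" voxels
mask = extract_target_organ(volume)
std_pos = map_lesion_to_standard((4, 4, 4), mask.shape, (16, 16, 16))
schema_pos = map_lesion_to_schema(std_pos[1:], (8, 8), (20, 20), 0.5)
print(std_pos, schema_pos)  # (8, 8, 8) (20.0, 20.0)
```

The point of the sketch is the staged mapping: organ image → standard organ image → schema, never organ image → schema directly.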
  • the standard organ image may be derived by normalizing the size, shape and density of a plurality of target organ images.
  • the medical image and the standard organ image are three-dimensional images.
  • the schema may be a two-dimensional image.
  • the processor derives a two-dimensional standard organ image in which the position of the lesion is specified by projecting the standard organ image in two dimensions.
  • the location of the lesion in the schema may be specified based on the location of the lesion in the two-dimensional standard organ image.
  • the processor derives the positional relationship between the center of gravity of the two-dimensional standard organ image and the position of the lesion in the two-dimensional standard organ image.
  • the position of the lesion in the schema may be specified by reflecting the positional relationship with respect to the center of gravity of the schema.
  • the processor derives the positional relationship between the center of gravity of the anatomical region containing the location of the lesion and the location of the lesion in the anatomical region in a two-dimensional standard organ image.
  • the location of the lesion in the schema may be specified by reflecting the positional relationship with respect to the center of gravity of the anatomical region of the schema corresponding to the anatomical region including the location of the lesion.
  • the processor may display a schema in which the position of the lesion is specified.
  • the processor may detect a lesion by analyzing a medical image.
  • The image processing method according to the present disclosure derives a target organ image including a lesion by extracting the target organ containing the lesion from a medical image; identifies the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by aligning the standard organ image with the derived target organ image; and identifies the position of the lesion in a schema schematically representing the target organ based on the position identified in the standard organ image.
  • The image processing method according to the present disclosure may also be provided as a program that causes a computer to execute the method.
  • According to the present disclosure, the position of the lesion included in the medical image can be accurately reflected in the schema.
  • FIG. 1 A diagram showing the schematic configuration of a medical information system to which the image processing apparatus according to an embodiment of the present disclosure is applied
  • FIG. 3 Functional configuration diagram of the image processing apparatus according to the embodiment
  • FIG. 4 A diagram showing a lung image
  • FIG. 5 A diagram showing the detection result of a lesion
  • FIG. 6 A diagram showing a standard lung image
  • FIG. 7 A diagram showing a standard lung image in which the position of a lesion is identified
  • FIG. 8 A diagram showing a schema
  • FIG. 9 A diagram showing a two-dimensional standard lung image
  • FIG. 10 A diagram for explaining the relationship between the center of gravity and the position of the lesion in the upper lobe of the right lung in the two-dimensional standard lung image
  • FIG. 11 A diagram for explaining the position of the lesion based on the center of gravity of the upper lobe of the right lung of the schema
  • FIG. 12 A diagram showing the display screen of the schema
  • FIG. 13 A flowchart showing the processing performed in the embodiment
  • FIG. 1 is a diagram showing a schematic configuration of a medical information system.
  • In this system, a computer 1 including the image processing device according to the present embodiment, an imaging device 2, and an image storage server 3 are connected in a communicable state via a network 4.
  • the computer 1 includes an image processing device according to the present embodiment, and an image processing program according to the present embodiment is installed.
  • the computer 1 may be a workstation or a personal computer directly operated by a doctor who interprets a medical image or makes a diagnosis using the medical image, or may be a server computer connected to them via a network.
  • the image processing program is stored in a storage device of a server computer connected to a network or in a network storage in a state of being accessible from the outside, and is downloaded and installed in a computer 1 used by a doctor upon request. Alternatively, it is recorded and distributed on a recording medium such as a DVD (Digital Versatile Disc) or a CD-ROM (Compact Disc Read Only Memory), and is installed on the computer 1 from the recording medium.
  • The imaging device 2 is a device that generates a three-dimensional image representing a site to be diagnosed by photographing that site of the subject; specifically, it is a CT device, an MRI device, a PET (Positron Emission Tomography) device, or the like.
  • The three-dimensional image, composed of a plurality of slice images generated by the imaging device 2, is transmitted to and stored in the image storage server 3.
  • a three-dimensional image of the chest obtained by photographing the chest of the subject with a CT device is used as a medical image.
  • the image storage server 3 is a computer that stores and manages various data, and is equipped with a large-capacity external storage device and database management software.
  • the image storage server 3 communicates with other devices via a wired or wireless network 4 to send and receive image data and the like.
  • Various data including the image data of the three-dimensional images generated by the imaging device 2 are acquired via the network and stored in a recording medium such as the large-capacity external storage device for management.
  • The storage format of the image data and the communication between the devices via the network 4 are based on a protocol such as DICOM (Digital Imaging and Communications in Medicine).
  • a standard organ image which is a standard image of an organ, is also stored in the image storage server 3.
  • Next, the hardware configuration of the image processing apparatus according to the present embodiment will be described with reference to FIG. 2.
  • the image processing device 20 includes a CPU (Central Processing Unit) 11, a non-volatile storage 13, and a memory 16 as a temporary storage area.
  • the image processing device 20 includes a display 14 such as a liquid crystal display, an input device 15 such as a keyboard and a mouse, and a network I / F (InterFace) 17 connected to the network 4.
  • the CPU 11, the storage 13, the display 14, the input device 15, the memory 16, and the network I / F 17 are connected to the bus 18.
  • the CPU 11 is an example of a processor.
  • the storage 13 is realized by an HDD (Hard Disk Drive), an SSD (Solid State Drive), a flash memory, or the like.
  • the image processing program 12 is stored in the storage 13 as a storage medium.
  • the CPU 11 reads the image processing program 12 from the storage 13, expands it into the memory 16, and executes the expanded image processing program 12.
  • FIG. 3 is a diagram showing a functional configuration of the image processing apparatus according to the present embodiment.
  • the image processing device 20 includes an information acquisition unit 21, an organ extraction unit 22, an analysis unit 23, an alignment unit 24, a position identification unit 25, and a display control unit 26.
  • the CPU 11 executes the image processing program 12
  • the CPU 11 functions as an information acquisition unit 21, an organ extraction unit 22, an analysis unit 23, an alignment unit 24, a position identification unit 25, and a display control unit 26.
  • the information acquisition unit 21 acquires the three-dimensional image G0 from the image storage server 3 via the network I / F 17 in response to an instruction from the input device 15 by the operator.
  • The three-dimensional image G0 includes a target organ that is the subject of diagnosis and is of interest to the user, such as a doctor. If the three-dimensional image G0 is already stored in the storage 13, the information acquisition unit 21 may acquire it from the storage 13. In the present embodiment, it is assumed that the target organ to be interpreted by the user is the lung. Further, the information acquisition unit 21 acquires a standard organ image from the image storage server 3 via the network I/F 17. The standard organ image will be described later.
  • the organ extraction unit 22 derives the target organ image by extracting the target organ from the three-dimensional image G0.
  • In the present embodiment, since the target organ is the lung, the organ extraction unit 22 derives the lung image GL0 by extracting the lung from the three-dimensional image G0.
  • the lung image GL0 is an example of the target organ image.
  • Specifically, the organ extraction unit 22 derives a histogram of the signal values (CT values) of the three-dimensional image G0 and extracts the region having the signal values of the lung by threshold processing on the histogram.
  • As the extraction method, any method can be used, such as region growing based on a seed point representing the lung.
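The region-growing option mentioned here can be sketched generically as follows. This is a textbook 6-connected flood fill over a toy CT volume, not FUJIFILM's implementation, and the value ranges are illustrative:

```python
import numpy as np
from collections import deque

def region_growing(volume, seed, low, high):
    """Grow a region from `seed`, adding 6-connected voxels whose
    values fall within [low, high]."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        p = queue.popleft()
        if mask[p] or not (low <= volume[p] <= high):
            continue
        mask[p] = True
        z, y, x = p
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = (z + dz, y + dy, x + dx)
            if all(0 <= q[i] < volume.shape[i] for i in range(3)):
                queue.append(q)
    return mask

# Toy CT volume: soft tissue (0 HU) everywhere, a lung-like strip at -800 HU.
vol = np.zeros((3, 3, 3))
vol[1, 1, :] = -800.0
lung = region_growing(vol, (1, 1, 1), -950.0, -500.0)
print(int(lung.sum()))  # 3
```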
  • FIG. 4 is a diagram showing an example of a lung image.
  • the analysis unit 23 detects the lesion contained in the lung by analyzing the three-dimensional image G0.
  • Specifically, the analysis unit 23 detects the shadows of a plurality of types of diseases as lesions from the three-dimensional image G0 by using a known computer-aided diagnosis (CAD) algorithm.
  • Note that the analysis unit 23 may instead detect the lesion by analyzing the lung image GL0.
  • the types of diseases include lung diseases such as pleural effusion, mesothelioma, nodules and calcification.
  • the analysis unit 23 has a learning model 23A in which machine learning is performed so as to detect the shadows of a plurality of types of diseases as lesions from the three-dimensional image G0 or the lung image GL0.
  • a plurality of learning models 23A are prepared according to the type of disease.
  • The learning model 23A consists of a convolutional neural network (CNN) on which deep learning has been performed using teacher data so as to determine whether or not each pixel (voxel) in the three-dimensional image G0 or the lung image GL0 represents a lesion.
  • The learning model 23A is constructed by training the CNN using, for example, teacher data consisting of a teacher image including a lesion and correct answer data representing the region of the lesion in the teacher image, and teacher data consisting of a teacher image not including a lesion.
  • The learning model 23A derives a certainty (likelihood) indicating that each pixel in the medical image is a lesion, and detects a region consisting of pixels whose certainty is equal to or higher than a predetermined first threshold value as a lesion region.
  • the certainty is a value of 0 or more and 1 or less.
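The thresholding of the certainty map can be illustrated as follows; the certainty values are toy numbers, not the output of a trained CNN:

```python
import numpy as np

def detect_lesion_region(certainty_map, first_threshold=0.5):
    """Keep the pixels whose lesion certainty (a value in [0, 1]) is
    equal to or higher than the first threshold, as described above."""
    return certainty_map >= first_threshold

certainty = np.array([[0.10, 0.72, 0.65],
                      [0.40, 0.50, 0.30],
                      [0.05, 0.20, 0.10]])
lesion_mask = detect_lesion_region(certainty, first_threshold=0.5)
print(int(lesion_mask.sum()))  # 3 pixels: 0.72, 0.65 and 0.50
```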
  • The learning model 23A may detect the lesion from the three-dimensional image G0 or the lung image GL0 as a whole, or may detect the lesion from each of the plurality of tomographic images constituting the three-dimensional image G0 or the lung image GL0. Further, as the learning model 23A, any learning model, such as a support vector machine (SVM), can be used in addition to the convolutional neural network.
  • FIG. 5 is a diagram showing the detection result of the lesion. As shown in FIG. 5, a lesion 32 surrounded by a rectangle 31 is detected in the upper part of the right lung of the lung image GL0.
  • the alignment unit 24 aligns the standard organ image and the lung image GL0.
  • a standard organ image will be described.
  • the target organ is the lung
  • the standard organ image is a standard lung image.
  • SL0 will be used as a reference code for the standard lung image.
  • the standard lung image SL0 is derived by normalizing a plurality of lung images prepared for deriving the standard lung image SL0.
  • the standard lung image SL0 is derived by normalizing the size, shape and density of the lung image.
  • Size normalization obtains the average size of the plurality of lung images; as the size, the vertical size and the horizontal size of the lung in the human body can be used. Shape normalization obtains the average shape of the size-normalized lung images; for example, the shape may be normalized by obtaining the average value of the distance from the center of gravity of the lung region to the lung surface in each lung image. At this time, the boundaries of the anatomical regions in the lung are also normalized. Here, the right lung is divided into the anatomical regions of the upper, middle, and lower lobes, and the left lung is divided into the anatomical regions of the upper and lower lobes.
  • the center of gravity of the anatomical region in the lung image may be obtained, and the average value of the distances from the center of gravity to the boundary of the anatomical region may be obtained.
  • The density can be normalized by obtaining a representative value of the signal values (voxel values) of the plurality of lung images.
  • the representative value for example, an average value, a median value, a maximum value, a minimum value, and the like can be used.
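A crude sketch of the density-normalization step, using the mean as the representative value. Size and shape normalization are skipped by assuming the example images already share one size, and all names are hypothetical:

```python
import numpy as np

def normalize_density(lung_images):
    """Average equally sized lung images voxel-by-voxel: the mean voxel
    value serves as the representative (density) value mentioned above."""
    return np.stack(lung_images, axis=0).mean(axis=0)

imgs = [np.full((2, 2), -820.0),
        np.full((2, 2), -780.0),
        np.full((2, 2), -800.0)]
standard = normalize_density(imgs)
print(float(standard[0, 0]))  # -800.0
```

Swapping `mean` for `median`, `max`, or `min` gives the other representative values the text allows.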
  • FIG. 6 is a diagram showing a standard lung image. As shown in FIG. 6, the standard lung image SL0 is divided into the anatomical regions of the upper lobe 41 of the right lung, the middle lobe 42 of the right lung, the lower lobe 43 of the right lung, the upper lobe 44 of the left lung, and the lower lobe 45 of the left lung.
  • The size of the lungs varies depending on the physique of the subject, such as age, sex, height, and weight. Therefore, standard lung images SL0 of sizes corresponding to physiques (age, sex, height, weight, and so on) are prepared and stored in the image storage server 3.
  • In the present embodiment, the information acquisition unit 21 acquires, from the image storage server 3, a standard lung image SL0 having a size corresponding to the physique, based on the age, sex, height, weight, and the like of the subject input from the input device 15.
  • the alignment unit 24 aligns the lung image GL0 and the standard lung image SL0 so that the lung image GL0 matches the standard lung image SL0.
  • the alignment method it is preferable to use non-rigid body alignment, but rigid body alignment may be used.
  • As the non-rigid alignment, for example, a function such as a B-spline or thin-plate spline is used to non-linearly convert each pixel position in the lung image GL0 into the corresponding pixel position in the standard lung image SL0, but the method is not limited to this.
  • the aligned lung image GL0 has the same size and shape as the standard lung image SL0.
  • the position of the lesion included in the lung image GL0 can be specified in the standard lung image SL0.
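As a much simpler stand-in for the B-spline registration preferred above, the effect described here — carrying a lesion coordinate from the patient's lung into the standard lung — can be approximated with a per-axis bounding-box alignment. This is illustrative only; a real system would use the non-rigid methods named in the text:

```python
import numpy as np

def align_lesion_to_standard(lesion_pos, organ_mask, standard_mask):
    """Map a lesion coordinate from the patient's organ mask into the
    standard mask via a per-axis bounding-box scale and shift."""
    pos = np.asarray(lesion_pos, dtype=float)
    out = np.empty_like(pos)
    for ax in range(pos.size):
        other = tuple(i for i in range(organ_mask.ndim) if i != ax)
        src = np.where(organ_mask.any(axis=other))[0]       # occupied rows/cols
        dst = np.where(standard_mask.any(axis=other))[0]
        scale = (dst[-1] - dst[0]) / max(src[-1] - src[0], 1)
        out[ax] = dst[0] + (pos[ax] - src[0]) * scale
    return out

organ = np.zeros((10, 10), dtype=bool);    organ[2:6, 2:6] = True     # 4x4 organ
standard = np.zeros((10, 10), dtype=bool); standard[1:9, 1:9] = True  # 8x8 standard
res = align_lesion_to_standard((4, 4), organ, standard)
print(res)  # ≈ (5.67, 5.67) on both axes
```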
  • FIG. 7 is a diagram showing a standard lung image in which the location of a lesion is identified. As shown in FIG. 7, a lesion 32 included in the lung image GL0 has been identified in the upper lobe 41 of the right lung of the standard lung image SL0.
  • the position specifying unit 25 identifies the position of the lesion in the schema that schematically represents the target organ, based on the position of the lesion specified in the standard lung image SL0.
  • FIG. 8 is a diagram showing a lung schema.
  • The schema 50 is a diagram schematically representing the lung as an illustration; the right lung is divided into the anatomical regions of the upper lobe 51 of the right lung, the middle lobe 52 of the right lung, and the lower lobe 53 of the right lung, and the left lung is divided into the anatomical regions of the upper lobe 54 of the left lung and the lower lobe 55 of the left lung.
  • the position specifying unit 25 projects the standard lung image SL0 in which the position of the lesion is specified in two dimensions to derive the two-dimensional standard lung image SL1.
  • the projection direction in this case is the same direction as the line-of-sight direction of the schema 50, that is, the depth direction when the human body is viewed from the front.
  • FIG. 9 is a diagram showing a two-dimensional standard lung image. As shown in FIG. 9, the two-dimensional standard lung image SL1 is a two-dimensional image in which the right lung is divided into the anatomical regions of the upper lobe 61 of the right lung, the middle lobe 62 of the right lung, and the lower lobe 63 of the right lung, and the left lung is likewise divided into upper and lower lobe regions.
  • the lesion 32 is included in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1.
  • The position of the lesion in the upper lobe 61 of the right lung is a two-dimensional projection of the position of the lesion 32 in the upper lobe 41 of the right lung in the three-dimensional standard lung image SL0.
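The projection step can be sketched as a maximum-intensity projection along the viewing direction; the 3D mask below is a toy example:

```python
import numpy as np

def project_to_2d(volume, axis=0):
    """Project a 3D mask onto a 2D image along `axis` (the depth
    direction when the body is viewed from the front)."""
    return volume.max(axis=axis)

# 3D lesion mask: a single voxel at depth 2, row 1, column 3.
mask3d = np.zeros((4, 4, 4), dtype=bool)
mask3d[2, 1, 3] = True
mask2d = project_to_2d(mask3d, axis=0)   # collapse the depth axis
print(np.argwhere(mask2d))  # [[1 3]]
```

The depth coordinate is discarded, which is exactly why the lesion position afterward lives in the 2D standard lung image SL1.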
  • the position specifying unit 25 first matches the sizes of the schema 50 and the two-dimensional standard lung image SL1. For example, the size of the schema 50 and the two-dimensional standard lung image SL1 are matched only in the y direction, or the sizes are matched only in the x direction. At this time, the position specifying unit 25 temporarily stores the enlargement ratio ⁇ for matching the sizes of the schema 50 and the two-dimensional standard lung image SL1 in the memory 16.
  • Next, the position specifying unit 25 derives the center of gravity g1 of the upper lobe 61 of the right lung including the lesion 32 in the two-dimensional standard lung image SL1.
  • Alternatively, the position specifying unit 25 may derive the center of gravity of the upper lobe of the right lung in the standard lung image SL0 before the two-dimensional projection, and use the position at which the derived center of gravity is projected in two dimensions as the position of the center of gravity in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1.
  • Next, as shown in FIG. 10, the position specifying unit 25 derives the relative position of the position p1 of the lesion 32 with respect to the center of gravity g1 in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1.
  • the center of gravity of the lesion 32 can be used as the position p1 of the lesion 32.
  • Let (x1, y1) be the coordinates of the relative position of the position p1 of the lesion 32 with respect to the center of gravity g1.
  • Then, the position specifying unit 25 derives the center of gravity g2 of the upper lobe 51 of the right lung in the schema 50, multiplies the coordinates (x1, y1) of the position p1 by the enlargement factor α, and specifies the position p2 at the coordinates (αx1, αy1) relative to the center of gravity g2 as the position of the lesion in the schema 50.
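The arithmetic just described reduces to a few lines; the coordinates below are made-up toy numbers:

```python
def lesion_position_in_schema_region(p1, g1, g2, alpha):
    """Take the lesion's offset (x1, y1) from the centroid g1 of the
    region in the 2D standard image, scale it by the enlargement
    factor alpha, and apply it to the centroid g2 of the matching
    schema region."""
    x1, y1 = p1[0] - g1[0], p1[1] - g1[1]
    return (g2[0] + alpha * x1, g2[1] + alpha * y1)

# Toy numbers: lesion 8 px right and 4 px above the region centroid,
# schema drawn at 0.5x the size of the 2D standard lung image.
p2 = lesion_position_in_schema_region((58, 36), (50, 40), (30, 25), 0.5)
print(p2)  # (34.0, 23.0)
```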
  • FIG. 12 is a diagram showing a display screen of the schema.
  • the display screen 70 includes an image display area 71 and a character display area 72.
  • the three-dimensional image G0 and the schema 50 are displayed in the image display area 71.
  • the displayed three-dimensional image G0 is a tomographic image included in the three-dimensional image G0.
  • the user can switch the displayed tomographic image by using the input device 15.
  • the position of the lesion is emphasized by giving a rectangular mark 73 representing the lesion to the upper lobe 51 of the right lung.
  • a mark having another shape such as an arrow may be added.
  • Alternatively, the lesion may be emphasized by making the color of the anatomical region including the lesion in the schema 50 (the upper lobe of the right lung in FIG. 12) different from the colors of the other anatomical regions. An annotation 74 indicating that the upper lobe of the right lung contains a nodule, which is the lesion, is also displayed in the image display area 71. The user can therefore display, for example, a tomographic image including the upper lobe of the right lung in the image display area 71 and perform detailed image interpretation.
  • the interpretation result of the three-dimensional image G0 by the user is input as a finding.
  • the finding of "nodule is seen in the upper lobe of the right lung" obtained as a result of interpreting the upper lobe of the right lung is input.
  • By pressing the confirmation button 75, the user creates an interpretation report including the input findings.
  • the created interpretation report is transmitted from the network I / F 17 to a report server (not shown) and saved.
  • FIG. 13 is a flowchart showing the processing performed in the present embodiment. It is assumed that the standard lung image SL0 is acquired from the image storage server 3 and stored in the storage 13. First, the information acquisition unit 21 acquires the three-dimensional image G0 (step ST1). Next, the organ extraction unit 22 derives the lung image GL0 by extracting the lung from the three-dimensional image G0 (step ST2). Then, the analysis unit 23 detects the lesion in the lung included in the three-dimensional image G0 (step ST3).
  • the alignment unit 24 identifies the position of the lesion in the standard lung image SL0 by aligning the standard lung image SL0 and the lung image GL0 (step ST4).
  • the position specifying unit 25 identifies the position of the lesion in the schema 50 based on the position of the lesion specified in the standard lung image SL0 (step ST5).
  • the display control unit 26 displays the schema 50 in which the position of the lesion is specified (step ST6), and ends the process.
  • As described above, in the present embodiment, the position corresponding to the lesion in the standard organ image is identified by aligning the target organ image with the standard organ image derived in advance by normalizing a plurality of target organ images, and the position corresponding to the lesion in the schema schematically representing the target organ is identified based on the position identified in the standard organ image.
  • In this way, the standard organ image is interposed between the target organ image and the schema, and the target organ image and the schema are aligned stepwise. The target organ image and the schema can therefore be aligned more accurately than when they are aligned directly, and as a result, the position of the lesion contained in the medical image can be accurately reflected in the schema.
  • Further, since the standard organ image is derived by normalizing a plurality of target organ images, it has the average size, shape, and density of the target organ. Both the alignment with the target organ image and the alignment with the schema 50 can therefore be performed with high accuracy, and as a result, the target organ image and the schema 50 can be accurately aligned. Hence, according to the present embodiment, the position of the lesion included in the medical image can be accurately reflected in the schema.
  • In the above embodiment, the position of the lesion in the schema 50 is specified by deriving the positional relationship between the center of gravity of the anatomical region of the two-dimensional standard lung image SL1 and the position of the lesion, and reflecting the derived relationship with respect to the center of gravity of the corresponding anatomical region in the schema 50. The position of the lesion in the schema 50 can thereby be specified by a simple calculation.
  • Alternatively, the positional relationship may be reflected with respect to the center of gravity of the corresponding anatomical region, that is, the upper lobe of the right lung, whereby the position p2 (αx2, αy2) of the lesion in the right lung of the schema 50 may be specified.
  • the image processing device 20 includes an analysis unit 23 and detects a lesion from the three-dimensional image G0, but the present invention is not limited to this.
  • the lesion may be detected from the three-dimensional image G0 in a separate device connected to the image processing device 20 via the network 4.
  • Alternatively, a three-dimensional image G0 in which a lesion has already been detected may be acquired from the image storage server 3, and the image processing device 20 may use that three-dimensional image G0 to specify the position of the lesion in the schema 50.
  • the 3D image G0 may be displayed on the display 14, and the user may specify the position of the lesion by interpreting the 3D image G0. In these cases, the analysis unit 23 is unnecessary in the image processing device 20.
  • the target organ is the lung, but the present invention is not limited to this.
  • Any part of the human body, such as the brain, heart, liver, blood vessels, and limbs, can be used as the target organ to be diagnosed.
  • In the above embodiment, as the hardware structure of the processing units that execute various processes, such as the information acquisition unit 21, the organ extraction unit 22, the analysis unit 23, the alignment unit 24, the position identification unit 25, and the display control unit 26, the following various processors can be used.
  • The various processors include, in addition to the CPU, which is a general-purpose processor that executes software (a program) to function as the various processing units: a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), whose circuit configuration can be changed after manufacture; and a dedicated electric circuit, such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing.
  • One processing unit may be composed of one of these various processors, or of a combination of two or more processors of the same or different types (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by one processor.
  • As an example of configuring a plurality of processing units with one processor, there is a form in which one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units. There is also a form, typified by a system on chip (SoC), of using a processor that realizes the functions of an entire system including the plurality of processing units with a single IC chip.
  • In this way, the various processing units are configured using one or more of the above-mentioned various processors as a hardware structure.
  • More specifically, as the hardware structure of these various processors, an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined can be used.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

(Translated from French) The invention relates to an image processing device, method, and program by which the position of a lesion included in a medical image is accurately reflected in a schema. In the present invention, a processor obtains a target organ image by extracting, from a medical image, a target organ including a lesion; identifies the position of the lesion in a standard organ image, obtained in advance by normalizing a plurality of target organ images, by aligning the standard organ image with the obtained target organ image; and, on the basis of the identified position of the lesion in the standard organ image, identifies the position of the lesion in a schema on which the target organ is schematically represented.
PCT/JP2021/030594 2020-09-11 2021-08-20 Image processing device, method, and program WO2022054541A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020153151A JP2023178525A (ja) 2020-09-11 2020-09-11 Image processing device, method, and program
JP2020-153151 2020-09-11

Publications (1)

Publication Number Publication Date
WO2022054541A1 true WO2022054541A1 (fr) 2022-03-17

Family

ID=80631533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/030594 WO2022054541A1 (fr) 2020-09-11 2021-08-20 Image processing device, method, and program

Country Status (2)

Country Link
JP (1) JP2023178525A (fr)
WO (1) WO2022054541A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006181146A (ja) * 2004-12-28 2006-07-13 Fuji Photo Film Co Ltd Diagnosis support device, diagnosis support method, and program therefor
JP2009247535A (ja) * 2008-04-04 2009-10-29 Dainippon Printing Co Ltd Medical image processing system

Also Published As

Publication number Publication date
JP2023178525A (ja) 2023-12-18

Similar Documents

Publication Publication Date Title
US10980493B2 (en) Medical image display device, method, and program
US11941812B2 (en) Diagnosis support apparatus and X-ray CT apparatus
US20170042495A1 (en) Medical image information system, medical image information processing method, and program
EP2189942A2 (fr) Procédé et système d'enregistrement d'une image médicale
US9336457B2 (en) Adaptive anatomical region prediction
JP2019082881A (ja) Image search device, method, and program
JP2019169049A (ja) Medical image specifying device, method, and program
US10628963B2 (en) Automatic detection of an artifact in patient image
US11468659B2 (en) Learning support device, learning support method, learning support program, region-of-interest discrimination device, region-of-interest discrimination method, region-of-interest discrimination program, and learned model
US11669960B2 (en) Learning system, method, and program
JP7237089B2 (ja) Medical document creation support device, method, and program
US11205269B2 (en) Learning data creation support apparatus, learning data creation support method, and learning data creation support program
US20230005601A1 (en) Document creation support apparatus, method, and program
WO2022153702A1 (fr) Medical image display device, method, and program
WO2022054541A1 (fr) Image processing device, method, and program
JP2021175454A (ja) Medical image processing device, method, and program
JPWO2019150717A1 (ja) Interlobar membrane display device, method, and program
US20230197253A1 (en) Medical image processing apparatus, method, and program
JP7376715B2 (ja) Progress prediction device, operation method of progress prediction device, and progress prediction program
US20230225681A1 (en) Image display apparatus, method, and program
EP4343781A1 (fr) Information processing apparatus, method, and program
US20240037739A1 (en) Image processing apparatus, image processing method, and image processing program
US12033366B2 (en) Matching apparatus, matching method, and matching program
WO2020241857A1 (fr) Medical document creation device, method, and program; learning device, method, and program; and trained model
US20240037738A1 (en) Image processing apparatus, image processing method, and image processing program

Legal Events

  • 121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21866501; Country of ref document: EP; Kind code of ref document: A1)
  • NENP: Non-entry into the national phase (Ref country code: DE)
  • 122 Ep: pct application non-entry in european phase (Ref document number: 21866501; Country of ref document: EP; Kind code of ref document: A1)
  • NENP: Non-entry into the national phase (Ref country code: JP)