WO2022054541A1 - Image processing device, method, and program - Google Patents

Image processing device, method, and program

Info

Publication number
WO2022054541A1
WO2022054541A1 (PCT/JP2021/030594)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lesion
standard
organ
schema
Prior art date
Application number
PCT/JP2021/030594
Other languages
French (fr)
Japanese (ja)
Inventor
Takuya Yuzawa
Original Assignee
FUJIFILM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation
Publication of WO2022054541A1 publication Critical patent/WO2022054541A1/en

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 — Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 — Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Definitions

  • This disclosure relates to image processing devices, methods and programs.
  • CT: Computed Tomography
  • MRI: Magnetic Resonance Imaging
  • CAD: Computer-Aided Diagnosis
  • A schema is a schematic diagram representing the structure of the human body. For example, if a lesion is detected in the upper lobe of the right lung as a result of analyzing a medical image, the region of the upper lobe can be highlighted in a schema that schematically shows the lung. With such a display method, however, while the schema makes it clear that there is an abnormal finding in the upper lobe of the right lung, it is not possible to read where within the upper lobe the lesion lies.
  • A method has been proposed in which a lesion is identified in a three-dimensional image acquired by a CT device or the like, a virtual patient image created in advance is aligned with the three-dimensional image, the position of the lesion is specified in the virtual patient image, and the virtual patient image with the lesion position specified is displayed (see Patent Document 1).
  • The virtual patient image in Patent Document 1 is generated in advance, according to the physique of the patient such as age (adult or child), gender, weight, and height, as an image of a human body of standard physique as it would appear if actually imaged by X-ray. The virtual patient image is therefore not generated by imaging an actual patient with X-rays; like the schema, it schematically represents the structure of the human body. It is difficult to accurately align such a schematic image with a medical image acquired by actual imaging. Consequently, with the method described in Patent Document 1, it is difficult to accurately reflect the position of a lesion included in the medical image in the schema.
  • The present disclosure has been made in view of the above circumstances, and an object thereof is to enable the position of a lesion included in a medical image to be accurately reflected in a schema.
  • The image processing apparatus according to the present disclosure comprises at least one processor.
  • The processor derives a target organ image including a lesion by extracting the target organ containing the lesion from a medical image; identifies the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by aligning the standard organ image with the derived target organ image; and identifies the position of the lesion in a schema schematically representing the target organ based on the position of the lesion identified in the standard organ image.
  • the standard organ image may be derived by normalizing the size, shape and density of a plurality of target organ images.
  • the medical image and the standard organ image are three-dimensional images.
  • the schema may be a two-dimensional image.
  • the processor may derive a two-dimensional standard organ image in which the position of the lesion is specified by projecting the standard organ image into two dimensions.
  • the location of the lesion in the schema may be specified based on the location of the lesion in the two-dimensional standard organ image.
  • the processor derives the positional relationship between the center of gravity of the two-dimensional standard organ image and the position of the lesion in the two-dimensional standard organ image.
  • the position of the lesion in the schema may be specified by reflecting the positional relationship with respect to the center of gravity of the schema.
  • the processor derives the positional relationship between the center of gravity of the anatomical region containing the location of the lesion and the location of the lesion in the anatomical region in a two-dimensional standard organ image.
  • the location of the lesion in the schema may be specified by reflecting the positional relationship with respect to the center of gravity of the anatomical region of the schema corresponding to the anatomical region including the location of the lesion.
  • the processor may display a schema in which the position of the lesion is specified.
  • the processor may detect a lesion by analyzing a medical image.
  • The image processing method according to the present disclosure derives a target organ image including a lesion by extracting the target organ containing the lesion from a medical image, identifies the position of the lesion in a standard organ image by aligning the standard organ image with the derived target organ image, and identifies the position of the lesion in a schema schematically representing the target organ based on the position identified in the standard organ image.
  • The image processing method according to the present disclosure may also be provided as a program for causing a computer to execute it.
  • According to the present disclosure, the position of a lesion included in a medical image can be accurately reflected in a schema.
  • FIG. 1 A diagram showing the schematic configuration of a medical information system to which the image processing device according to an embodiment of the present disclosure is applied
  • Functional configuration diagram of the image processing device according to this embodiment
  • Diagram showing a lung image
  • Diagram showing the detection result of a lesion
  • Diagram showing a standard lung image
  • Diagram showing a standard lung image in which the lesion is located
  • Diagram showing a schema
  • Diagram showing a two-dimensional standard lung image
  • Diagram explaining the relationship between the center of gravity and the position of the lesion in the upper lobe of the left lung in a two-dimensional standard lung image
  • Diagram explaining the position of the lesion based on the center of gravity of the upper lobe of the left lung of the schema
  • Diagram showing the display screen of the schema
  • Flowchart showing the processing performed in this embodiment
  • FIG. 1 is a diagram showing a schematic configuration of a medical information system.
  • In the medical information system, a computer 1 including the image processing device according to the present embodiment, an imaging device 2, and an image storage server 3 are connected in a communicable state via a network 4.
  • the computer 1 includes an image processing device according to the present embodiment, and an image processing program according to the present embodiment is installed.
  • the computer 1 may be a workstation or a personal computer directly operated by a doctor who interprets a medical image or makes a diagnosis using the medical image, or may be a server computer connected to them via a network.
  • The image processing program is stored in a storage device of a server computer connected to the network, or in network storage, in a state accessible from the outside, and is downloaded and installed on the computer 1 used by the doctor upon request. Alternatively, it is recorded and distributed on a recording medium such as a DVD (Digital Versatile Disc) or a CD-ROM (Compact Disc Read Only Memory) and installed on the computer 1 from the recording medium.
  • The imaging device 2 is a device that generates a three-dimensional image representing a site to be diagnosed by imaging that site of the subject; specifically, it is a CT device, an MRI device, a PET (Positron Emission Tomography) device, or the like.
  • The three-dimensional image, composed of a plurality of slice images generated by the imaging device 2, is transmitted to and stored in the image storage server 3.
  • In the present embodiment, a three-dimensional image of the chest, obtained by imaging the chest of the subject with a CT device, is used as the medical image.
  • the image storage server 3 is a computer that stores and manages various data, and is equipped with a large-capacity external storage device and database management software.
  • the image storage server 3 communicates with other devices via a wired or wireless network 4 to send and receive image data and the like.
  • Various data, including the image data of the three-dimensional images generated by the imaging device 2, are acquired via the network and stored and managed in a recording medium such as a large-capacity external storage device.
  • the storage format of the image data and the communication between the devices via the network 4 are based on a protocol such as DICOM (Digital Imaging and Communication in Medicine).
  • a standard organ image which is a standard image of an organ, is also stored in the image storage server 3.
  • The hardware configuration of the image processing apparatus according to the present embodiment will be described with reference to FIG. 2.
  • the image processing device 20 includes a CPU (Central Processing Unit) 11, a non-volatile storage 13, and a memory 16 as a temporary storage area.
  • the image processing device 20 includes a display 14 such as a liquid crystal display, an input device 15 such as a keyboard and a mouse, and a network I / F (InterFace) 17 connected to the network 4.
  • the CPU 11, the storage 13, the display 14, the input device 15, the memory 16, and the network I / F 17 are connected to the bus 18.
  • the CPU 11 is an example of a processor.
  • the storage 13 is realized by an HDD (Hard Disk Drive), an SSD (Solid State Drive), a flash memory, or the like.
  • the image processing program 12 is stored in the storage 13 as a storage medium.
  • the CPU 11 reads the image processing program 12 from the storage 13, expands it into the memory 16, and executes the expanded image processing program 12.
  • FIG. 3 is a diagram showing a functional configuration of the image processing apparatus according to the present embodiment.
  • the image processing device 20 includes an information acquisition unit 21, an organ extraction unit 22, an analysis unit 23, an alignment unit 24, a position identification unit 25, and a display control unit 26.
  • When the CPU 11 executes the image processing program 12, it functions as the information acquisition unit 21, the organ extraction unit 22, the analysis unit 23, the alignment unit 24, the position identification unit 25, and the display control unit 26.
  • the information acquisition unit 21 acquires the three-dimensional image G0 from the image storage server 3 via the network I / F 17 in response to an instruction from the input device 15 by the operator.
  • The three-dimensional image G0 includes the organ that is the subject of diagnosis and of interest to the user, that is, a doctor. If the three-dimensional image G0 is already stored in the storage 13, the information acquisition unit 21 may acquire it from the storage 13. In the present embodiment, it is assumed that the target organ to be interpreted by the user is the lung. Further, the information acquisition unit 21 acquires a standard organ image from the image storage server 3 via the network I/F 17. The standard organ image will be described later.
  • the organ extraction unit 22 derives the target organ image by extracting the target organ from the three-dimensional image G0.
  • In the present embodiment, since the target organ is the lung, the organ extraction unit 22 derives the lung image GL0 by extracting the lung from the three-dimensional image G0.
  • the lung image GL0 is an example of the target organ image.
  • Specifically, the organ extraction unit 22 extracts the region having the CT values (signal values) of the lung by performing threshold processing on the histogram of signal values of the three-dimensional image G0.
  • Alternatively, any method can be used for the extraction, such as region growing based on seed points representing the lung.
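The histogram-thresholding step above can be sketched as follows; this is a toy example in Python, where the HU window (−950 to −500), the synthetic volume, and the function name are illustrative assumptions, not values stated in the disclosure:

```python
import numpy as np

# Toy CT volume in Hounsfield-like units: soft tissue (40), two lung-like
# blocks (-800), and air outside the body (-1000). All values and shapes
# are illustrative assumptions.
volume = np.full((40, 40, 40), 40, dtype=np.int16)   # soft-tissue background
volume[5:35, 5:18, 5:35] = -800                      # one lung-like region
volume[5:35, 22:35, 5:35] = -800                     # the other lung-like region
volume[:2] = -1000                                   # air above the body

def extract_lung_mask(ct, lo=-950, hi=-500):
    """Keep voxels whose CT value (signal value) falls in the lung range,
    i.e. threshold processing on the CT-value histogram."""
    return (ct >= lo) & (ct <= hi)

mask = extract_lung_mask(volume)
print(mask.sum())  # → 23400 voxels classified as lung
```

A real implementation would follow this with morphological cleanup or region growing from seed points inside the lung, as the text notes.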
  • FIG. 4 is a diagram showing an example of a lung image.
  • the analysis unit 23 detects the lesion contained in the lung by analyzing the three-dimensional image G0.
  • The analysis unit 23 detects the shadows of a plurality of types of diseases as lesions from the three-dimensional image G0 by using a known computer-aided diagnosis (CAD) algorithm.
  • The analysis unit 23 may instead detect the lesion by analyzing the lung image GL0.
  • the types of diseases include lung diseases such as pleural effusion, mesothelioma, nodules and calcification.
  • the analysis unit 23 has a learning model 23A in which machine learning is performed so as to detect the shadows of a plurality of types of diseases as lesions from the three-dimensional image G0 or the lung image GL0.
  • a plurality of learning models 23A are prepared according to the type of disease.
  • The learning model 23A consists of a convolutional neural network (CNN) on which deep learning has been performed using teacher data so as to determine whether or not each pixel (voxel) in the three-dimensional image G0 or the lung image GL0 represents a lesion.
  • The learning model 23A is constructed by training the CNN using, for example, teacher data consisting of teacher images including a lesion together with correct answer data representing the region of the lesion in each teacher image, and teacher data consisting of teacher images not including a lesion.
  • The learning model 23A derives a certainty (likelihood) indicating that each pixel in the medical image is a lesion, and detects a region consisting of pixels whose certainty is equal to or higher than a predetermined first threshold value as a lesion region.
  • The certainty is a value of 0 or more and 1 or less.
  • The learning model 23A may detect the lesion from the three-dimensional image G0 or the lung image GL0 as a whole, or may detect the lesion from each of the plurality of tomographic images constituting the three-dimensional image G0 or the lung image GL0. Further, as the learning model 23A, any learning model, such as a support vector machine (SVM), can be used in addition to the convolutional neural network.
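The certainty-thresholding behavior of the learning model 23A can be sketched as below; the certainty map, the 0.5 value standing in for the "first threshold," and the helper name are illustrative assumptions:

```python
import numpy as np

# Toy per-voxel certainty (likelihood) map in [0, 1], as the learning
# model might output; the high-certainty blob plays the role of lesion 32.
certainty = np.zeros((16, 16, 16))
certainty[4:7, 5:9, 6:8] = 0.9

def detect_lesion(cert_map, first_threshold=0.5):
    """Return the lesion mask (certainty >= threshold) and the bounding
    box of the detected voxels, analogous to rectangle 31 in Fig. 5."""
    lesion = cert_map >= first_threshold
    if not lesion.any():
        return lesion, None
    idx = np.argwhere(lesion)
    return lesion, (idx.min(axis=0), idx.max(axis=0))

lesion_mask, bbox = detect_lesion(certainty)
print(lesion_mask.sum())  # → 24 voxels in the lesion region
```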
  • FIG. 5 is a diagram showing the detection result of the lesion. As shown in FIG. 5, a lesion 32 surrounded by a rectangle 31 is detected in the upper part of the right lung of the lung image GL0.
  • the alignment unit 24 aligns the standard organ image and the lung image GL0.
  • a standard organ image will be described.
  • In the present embodiment, since the target organ is the lung, the standard organ image is a standard lung image.
  • SL0 will be used as a reference code for the standard lung image.
  • the standard lung image SL0 is derived by normalizing a plurality of lung images prepared for deriving the standard lung image SL0.
  • the standard lung image SL0 is derived by normalizing the size, shape and density of the lung image.
  • Size normalization is to find the average size of a plurality of lung images; as the size, the vertical and horizontal dimensions of the lung in the human body can be used. Shape normalization is to find the average shape of the size-normalized lung images; for example, the shape may be normalized by obtaining the average value of the distance from the center of gravity of the lung region to the lung surface in each lung image. At this time, the boundaries of the anatomical regions in the lung are also normalized. Here, the right lung is divided into the anatomical regions of the upper, middle, and lower lobes, and the left lung is divided into the anatomical regions of the upper and lower lobes.
  • For the boundaries, the center of gravity of each anatomical region in the lung image may be obtained, and the average value of the distances from that center of gravity to the boundary of the anatomical region may be used.
  • Density can be normalized by obtaining a representative value of the signal values (voxel values) of the plurality of lung images; as the representative value, for example, an average value, a median value, a maximum value, or a minimum value can be used.
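The averaging that the normalization performs can be sketched numerically; the sizes and densities below are hypothetical, and real normalization operates on full 3-D volumes rather than these scalar summaries:

```python
import numpy as np

# Hypothetical per-image summaries for three lung images: vertical and
# horizontal sizes (mm) and mean densities (signal values).
vertical = np.array([280.0, 300.0, 320.0])
horizontal = np.array([120.0, 130.0, 140.0])
mean_density = np.array([-830.0, -810.0, -790.0])

standard_size = (vertical.mean(), horizontal.mean())   # size normalization
standard_density = np.median(mean_density)             # median as the representative value

print(standard_size, standard_density)  # → (300.0, 130.0) -810.0
```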
  • FIG. 6 is a diagram showing a standard lung image. As shown in FIG. 6, the standard lung image SL0 is divided into the anatomical regions of the upper lobe 41 of the right lung, the middle lobe 42 of the right lung, the lower lobe 43 of the right lung, the upper lobe 44 of the left lung, and the lower lobe 45 of the left lung.
  • The size of the lungs varies depending on the physique of the subject, such as age, gender, height, and weight. Therefore, standard lung images SL0 of sizes corresponding to different physiques (age, gender, height, weight) are stored in the image storage server 3.
  • The information acquisition unit 21 acquires from the image storage server 3 a standard lung image SL0 of the size corresponding to the physique, based on the age, gender, height, weight, and the like of the subject input from the input device 15.
  • the alignment unit 24 aligns the lung image GL0 and the standard lung image SL0 so that the lung image GL0 matches the standard lung image SL0.
  • the alignment method it is preferable to use non-rigid body alignment, but rigid body alignment may be used.
  • In non-rigid alignment, for example, each pixel position in the lung image GL0 is non-linearly converted to the corresponding pixel position in the standard lung image SL0 using a function such as a B-spline or a thin plate spline, but the method is not limited to this.
  • the aligned lung image GL0 has the same size and shape as the standard lung image SL0.
  • the position of the lesion included in the lung image GL0 can be specified in the standard lung image SL0.
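The effect of the alignment, transferring the lesion coordinate from the lung image GL0 into the standard lung image SL0, can be illustrated with a deliberately simplified rigid (per-axis scale and shift) stand-in; the disclosure prefers non-rigid alignment such as B-splines, and the 2-D masks here are toy data:

```python
import numpy as np

def mask_stats(mask):
    """Centroid and per-axis extent of a binary mask."""
    idx = np.argwhere(mask)
    return idx.mean(axis=0), idx.max(axis=0) - idx.min(axis=0)

def map_point(point, moving_mask, fixed_mask):
    """Map a point in the moving image (GL0) to the fixed image (SL0)
    by matching centroids and scaling by the per-axis extent ratio."""
    c_m, e_m = mask_stats(moving_mask)
    c_f, e_f = mask_stats(fixed_mask)
    return c_f + (np.asarray(point) - c_m) * (e_f / e_m)

gl0 = np.zeros((20, 20), bool); gl0[2:10, 2:10] = True   # small "patient" lung
sl0 = np.zeros((20, 20), bool); sl0[2:18, 2:18] = True   # larger "standard" lung
print(map_point((4, 4), gl0, sl0))  # lesion coordinate transferred into SL0
```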
  • FIG. 7 is a diagram showing a standard lung image in which the location of a lesion is identified. As shown in FIG. 7, a lesion 32 included in the lung image GL0 has been identified in the upper lobe 41 of the right lung of the standard lung image SL0.
  • the position specifying unit 25 identifies the position of the lesion in the schema that schematically represents the target organ, based on the position of the lesion specified in the standard lung image SL0.
  • FIG. 8 is a diagram showing a lung schema.
  • The schema 50 is a diagram schematically representing the lung as an illustration; the right lung is divided into the anatomical regions of the upper lobe 51, the middle lobe 52, and the lower lobe 53 of the right lung, and the left lung is divided into the anatomical regions of the upper lobe 54 and the lower lobe 55 of the left lung.
  • The position specifying unit 25 projects the standard lung image SL0, in which the position of the lesion has been specified, into two dimensions to derive a two-dimensional standard lung image SL1.
  • the projection direction in this case is the same direction as the line-of-sight direction of the schema 50, that is, the depth direction when the human body is viewed from the front.
  • FIG. 9 is a diagram showing a two-dimensional standard lung image. As shown in FIG. 9, the two-dimensional standard lung image SL1 is a two-dimensional image in which the right lung is divided into the anatomical regions of the upper lobe 61, the middle lobe 62, and the lower lobe 63 of the right lung, and the left lung is likewise divided into the anatomical regions of its upper and lower lobes.
  • the lesion 32 is included in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1.
  • The position of the lesion in the upper lobe 61 of the right lung is a two-dimensional projection of the position of the lesion 32 in the upper lobe 41 of the right lung of the three-dimensional standard lung image SL0.
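The projection step can be sketched as a reduction along the depth axis; the axis ordering (depth, y, x) and the toy lesion mask are assumptions:

```python
import numpy as np

# 3-D lesion mask inside the standard lung image SL0, axes (depth, y, x).
lesion_3d = np.zeros((8, 8, 8), dtype=bool)
lesion_3d[2:5, 1:3, 6:8] = True

# Project along the schema's line-of-sight direction (the depth axis when
# viewing the body from the front) to get the footprint in the 2-D image SL1.
lesion_2d = lesion_3d.any(axis=0)
print(lesion_2d.sum())  # → 4 pixels in the projected lesion
```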
  • the position specifying unit 25 first matches the sizes of the schema 50 and the two-dimensional standard lung image SL1. For example, the size of the schema 50 and the two-dimensional standard lung image SL1 are matched only in the y direction, or the sizes are matched only in the x direction. At this time, the position specifying unit 25 temporarily stores the enlargement ratio ⁇ for matching the sizes of the schema 50 and the two-dimensional standard lung image SL1 in the memory 16.
  • Next, the position specifying unit 25 derives the center of gravity g1 of the upper lobe 61 of the right lung, which includes the lesion 32, in the two-dimensional standard lung image SL1.
  • Alternatively, the position specifying unit 25 may derive the center of gravity of the upper lobe of the right lung in the standard lung image SL0 before projection, and use the two-dimensional projection of that center of gravity as the position of the center of gravity of the upper lobe 61 of the right lung in the two-dimensional standard lung image SL1.
  • the position specifying unit 25 derives the relative position of the position p1 of the lesion 32 with respect to the center of gravity g1 in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1 as shown in FIG.
  • the center of gravity of the lesion 32 can be used as the position p1 of the lesion 32.
  • Let (x1, y1) be the coordinates of the relative position of the position p1 of the lesion 32 with respect to the center of gravity g1.
  • The position specifying unit 25 then derives the center of gravity g2 of the upper lobe 51 of the right lung in the schema 50, multiplies the coordinates (x1, y1) of the position p1 by the enlargement ratio α, and specifies the position p2 at the coordinates (αx1, αy1) with respect to the center of gravity g2 as the position of the lesion in the schema 50.
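The centroid-relative mapping just described reduces to a few vector operations; the coordinates and the enlargement ratio α below are made-up numbers, not values from the figures:

```python
import numpy as np

g1 = np.array([30.0, 40.0])   # centroid of the lobe in the 2-D standard image SL1
p1 = np.array([34.0, 46.0])   # lesion position in SL1
g2 = np.array([55.0, 70.0])   # centroid of the corresponding lobe in schema 50
alpha = 1.5                   # enlargement ratio matching SL1 to the schema

x1y1 = p1 - g1                # relative position (x1, y1) of the lesion
p2 = g2 + alpha * x1y1        # lesion position in the schema: g2 + (αx1, αy1)
print(p2)  # → [61. 79.]
```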
  • FIG. 12 is a diagram showing a display screen of the schema.
  • the display screen 70 includes an image display area 71 and a character display area 72.
  • the three-dimensional image G0 and the schema 50 are displayed in the image display area 71.
  • the displayed three-dimensional image G0 is a tomographic image included in the three-dimensional image G0.
  • the user can switch the displayed tomographic image by using the input device 15.
  • the position of the lesion is emphasized by giving a rectangular mark 73 representing the lesion to the upper lobe 51 of the right lung.
  • a mark having another shape such as an arrow may be added.
  • Alternatively, the lesion may be emphasized by making the color of the anatomical region including the lesion in the schema 50 (the upper lobe of the right lung in FIG. 12) different from the color of the other anatomical regions. An annotation 74 indicating that the upper lobe of the right lung contains a nodule, which is the lesion, is also displayed in the image display area 71. The user can therefore display, for example, a tomographic image including the upper lobe of the right lung in the image display area 71 and perform detailed image interpretation.
  • the interpretation result of the three-dimensional image G0 by the user is input as a finding.
  • For example, the finding "A nodule is seen in the upper lobe of the right lung," obtained as a result of interpreting the upper lobe of the right lung, is input.
  • By selecting the confirmation button 75, the user creates an interpretation report including the input findings.
  • the created interpretation report is transmitted from the network I / F 17 to a report server (not shown) and saved.
  • FIG. 13 is a flowchart showing the processing performed in the present embodiment. It is assumed that the standard lung image SL0 is acquired from the image storage server 3 and stored in the storage 13. First, the information acquisition unit 21 acquires the three-dimensional image G0 (step ST1). Next, the organ extraction unit 22 derives the lung image GL0 by extracting the lung from the three-dimensional image G0 (step ST2). Then, the analysis unit 23 detects the lesion in the lung included in the three-dimensional image G0 (step ST3).
  • the alignment unit 24 identifies the position of the lesion in the standard lung image SL0 by aligning the standard lung image SL0 and the lung image GL0 (step ST4).
  • the position specifying unit 25 identifies the position of the lesion in the schema 50 based on the position of the lesion specified in the standard lung image SL0 (step ST5).
  • the display control unit 26 displays the schema 50 in which the position of the lesion is specified (step ST6), and ends the process.
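The flow of steps ST1 through ST5 can be wired together in one short sketch; every helper here is a toy stand-in for the corresponding unit of the image processing device 20, with invented data and parameters:

```python
import numpy as np

def extract_lung(ct):                           # organ extraction unit 22 (ST2)
    return (ct >= -950) & (ct <= -500)

def detect_lesion(certainty):                   # analysis unit 23 (ST3)
    return np.argwhere(certainty >= 0.5).mean(axis=0)

def to_standard(p, scale, shift):               # alignment unit 24 (ST4)
    return p * scale + shift

def to_schema(p_std, g_std, g_schema, alpha):   # position specifying unit 25 (ST5)
    return g_schema + alpha * (p_std - g_std)

ct = np.full((8, 8), 40.0); ct[2:6, 2:6] = -800.0   # toy slice acquired in ST1
certainty = np.zeros((8, 8)); certainty[3, 4] = 0.9

lung_mask = extract_lung(ct)
p_lesion = detect_lesion(certainty)                        # lesion at (3, 4)
p_std = to_standard(p_lesion, 2.0, np.array([1.0, 1.0]))   # mapped into SL0
p_schema = to_schema(p_std, np.array([5.0, 5.0]),
                     np.array([10.0, 10.0]), alpha=1.0)
print(lung_mask.sum(), p_schema)  # → 16 [12. 14.]
```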
  • As described above, in the present embodiment, the position corresponding to the lesion in the standard organ image is specified by aligning the target organ image with the standard organ image derived in advance by normalizing a plurality of target organ images, and the position corresponding to the lesion in the schema schematically representing the target organ is specified based on the position identified in the standard organ image.
  • In this way, the standard organ image is interposed between the target organ image and the schema, and the target organ image and the schema are aligned stepwise. The target organ image and the schema can therefore be aligned more accurately than when they are aligned directly, and as a result, the position of the lesion contained in the medical image can be accurately reflected in the schema.
  • Since the standard organ image is derived by normalizing a plurality of target organ images, it has the average size, shape, and density of the target organ. Both the alignment with the target organ image and the alignment with the schema 50 can therefore be performed with high accuracy. According to the present embodiment, the position of the lesion included in the medical image can thus be accurately reflected in the schema.
  • Further, in the present embodiment, the positional relationship between the center of gravity of the anatomical region of the two-dimensional standard lung image SL1 and the position of the lesion is derived, and the derived relationship is reflected with respect to the center of gravity of the corresponding anatomical region in the schema 50. The position of the lesion in the schema 50 can therefore be specified by a simple calculation.
  • By reflecting the positional relationship with respect to the center of gravity of the corresponding anatomical region, that is, the upper lobe of the right lung, the position p2 (αx2, αy2) of the lesion in the right lung of the schema 50 may be specified.
  • In the above embodiment, the image processing device 20 includes the analysis unit 23 and detects a lesion from the three-dimensional image G0; however, the present disclosure is not limited to this.
  • the lesion may be detected from the three-dimensional image G0 in a separate device connected to the image processing device 20 via the network 4.
  • Alternatively, a three-dimensional image G0 in which the lesion has already been detected may be acquired from the image storage server 3, and the image processing device 20 may specify the position of the lesion in the schema 50 using that three-dimensional image G0.
  • the 3D image G0 may be displayed on the display 14, and the user may specify the position of the lesion by interpreting the 3D image G0. In these cases, the analysis unit 23 is unnecessary in the image processing device 20.
  • In the above embodiment, the target organ is the lung, but the present disclosure is not limited to this; any part of the human body, such as the brain, heart, liver, blood vessels, and limbs, can be the target organ to be diagnosed.
  • In the above embodiment, as the hardware structure of the processing units that execute various processes, such as the information acquisition unit 21, the organ extraction unit 22, the analysis unit 23, the alignment unit 24, the position identification unit 25, and the display control unit 26, the following various processors can be used.
  • The various processors include, in addition to the CPU, which is a general-purpose processor that executes software (programs) to function as the various processing units, a programmable logic device (PLD), which is a processor whose circuit configuration can be changed after manufacture, such as an FPGA (Field Programmable Gate Array), and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
  • One processing unit may be composed of one of these various processors, or of a combination of two or more processors of the same or different types (for example, a combination of a plurality of FPGAs or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by one processor.
  • As an example of configuring a plurality of processing units with one processor, there is a form in which one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units. As another example, there is a form of using a processor, as typified by an SoC (System On Chip), that realizes the functions of an entire system including the plurality of processing units with a single IC chip.
  • In this way, the various processing units are configured using one or more of the above-mentioned various processors as a hardware structure. Furthermore, as the hardware structure of these various processors, more specifically, an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined can be used.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

Provided are an image processing device, method, and program whereby the position of a lesion included in a medical image is accurately reflected in a schema. In the present invention, a processor derives a target organ image by extracting, from a medical image, a target organ including a lesion, identifies the position of the lesion in a standard organ image by positioning the standard organ image, derived in advance by normalizing a plurality of target organ images, and the derived target organ image, and on the basis of the identified position of the lesion in the standard organ image, identifies the position of the lesion in a schema in which the target organ is schematically represented.

Description

Image processing device, method, and program

The present disclosure relates to an image processing device, method, and program.
In recent years, advances in medical apparatuses such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) apparatuses have made it possible to perform image diagnosis using higher-quality, higher-resolution medical images. In particular, because the region of a lesion can be accurately identified by image diagnosis using CT images, MRI images, and the like, appropriate treatment is increasingly performed on the basis of the identified results. In addition, medical images are analyzed by CAD (Computer-Aided Diagnosis) using trained machine-learning models in order to detect lesions contained in the medical images.
In addition, in order to present the analysis result of a medical image objectively, the analysis result may be displayed on a schema prepared in advance. A schema is a schematic diagram that schematically represents the structure of the human body. For example, if a lesion is detected in the right upper lobe of the lung as a result of analyzing a medical image, an abnormality in the right-upper-lobe region can be highlighted in a schema that schematically shows the lungs. With such a display method, however, although the schema indicates that there is an abnormal finding in the right upper lobe, it is impossible to read where in the right upper lobe the lesion is located.
For this reason, a method has been proposed in which a lesion is identified in a three-dimensional image acquired by a CT apparatus or the like, a virtual patient image created in advance is registered with the three-dimensional image, the position of the lesion is identified in the virtual patient image, and the virtual patient image with the identified lesion position is displayed (see Patent Document 1). The virtual patient image in Patent Document 1 is generated in advance, according to the patient's physique (age, adult or child, sex, weight, height, and the like), as if it were an image actually captured by X-ray.
Patent Document 1: Japanese Unexamined Patent Publication No. 2017-204041
However, in the method described in Patent Document 1, the virtual patient image is generated as an X-ray-like image of a human body having a standard physique. The virtual patient image is therefore not generated by actually imaging a patient with X-rays; like a schema, it only schematically represents the structure of the human body. It is difficult to accurately register such a schematic, schema-like image with a medical image acquired by actual imaging. Consequently, with the method described in Patent Document 1, it is difficult to accurately reflect the position of a lesion included in a medical image in the schema.
The present disclosure has been made in view of the above circumstances, and an object thereof is to make it possible to accurately reflect the position of a lesion included in a medical image in a schema.
The image processing device according to the present disclosure comprises at least one processor, wherein the processor is configured to:
derive a target organ image including a lesion by extracting the target organ containing the lesion from a medical image;
identify the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by registering the standard organ image with the derived target organ image; and
identify, on the basis of the position of the lesion identified in the standard organ image, the position of the lesion in a schema that schematically represents the target organ.
In the image processing device according to the present disclosure, the standard organ image may be derived by normalizing the size, shape, and density of a plurality of target organ images.
In the image processing device according to the present disclosure, the medical image and the standard organ image may be three-dimensional images, and the schema may be a two-dimensional image.
In the image processing device according to the present disclosure, the processor may derive a two-dimensional standard organ image, in which the position of the lesion is identified, by projecting the standard organ image in two dimensions, and may identify the position of the lesion in the schema on the basis of the position of the lesion in the two-dimensional standard organ image.
In the image processing device according to the present disclosure, the processor may derive the positional relationship between the center of gravity of the two-dimensional standard organ image and the position of the lesion in the two-dimensional standard organ image, and may identify the position of the lesion in the schema by applying this positional relationship to the center of gravity of the schema.
In the image processing device according to the present disclosure, in a case where the standard organ image is divided into a plurality of anatomical regions and the schema is divided into a plurality of anatomical regions corresponding to those of the standard organ image, the processor may derive, in the two-dimensional standard organ image, the positional relationship between the center of gravity of the anatomical region containing the lesion and the position of the lesion within that region, and may identify the position of the lesion in the schema by applying this positional relationship to the center of gravity of the anatomical region of the schema that corresponds to the region containing the lesion.
In the image processing device according to the present disclosure, the processor may display the schema in which the position of the lesion is identified.
In the image processing device according to the present disclosure, the processor may detect the lesion by analyzing the medical image.
The image processing method according to the present disclosure comprises:
deriving a target organ image including a lesion by extracting the target organ containing the lesion from a medical image;
identifying the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by registering the standard organ image with the derived target organ image; and
identifying, on the basis of the position of the lesion identified in the standard organ image, the position of the lesion in a schema that schematically represents the target organ.
The image processing method according to the present disclosure may also be provided as a program for causing a computer to execute the method.
According to the present disclosure, the position of a lesion included in a medical image can be accurately reflected in a schema.
Brief description of the drawings:

  • Fig. 1: Schematic configuration of a medical information system to which an image processing device according to an embodiment of the present disclosure is applied
  • Fig. 2: Schematic configuration of the image processing device according to the embodiment
  • Fig. 3: Functional configuration of the image processing device according to the embodiment
  • Fig. 4: Diagram showing a lung image
  • Fig. 5: Diagram showing a lesion detection result
  • Fig. 6: Diagram showing a standard lung image
  • Fig. 7: Diagram showing a standard lung image in which the position of a lesion has been identified
  • Fig. 8: Diagram showing a schema
  • Fig. 9: Diagram showing a two-dimensional standard lung image
  • Fig. 10: Diagram for explaining the relationship between the center of gravity of the left upper lobe in the two-dimensional standard lung image and the position of a lesion
  • Fig. 11: Diagram for explaining identification of the position of a lesion based on the center of gravity of the left upper lobe of the schema
  • Fig. 12: Diagram showing the display screen of the schema
  • Fig. 13: Flowchart showing the processing performed in the embodiment
  • Fig. 14: Diagram for explaining the relationship between the center of gravity of the two-dimensional standard lung image and the position of a lesion
  • Fig. 15: Diagram for explaining identification of the position of a lesion based on the center of gravity of the schema
Hereinafter, an embodiment of the present disclosure will be described with reference to the drawings. First, the configuration of a medical information system to which the image processing device according to the present embodiment is applied will be described. FIG. 1 is a diagram showing the schematic configuration of the medical information system. In the medical information system shown in FIG. 1, a computer 1 containing the image processing device according to the present embodiment, an imaging apparatus 2, and an image storage server 3 are connected in a communicable state via a network 4.
The computer 1 contains the image processing device according to the present embodiment, and the image processing program according to the present embodiment is installed on it. The computer 1 may be a workstation or personal computer directly operated by a doctor who interprets medical images or makes diagnoses using them, or a server computer connected to such machines via a network. The image processing program is stored, in an externally accessible state, in a storage device of a server computer connected to the network or in network storage, and is downloaded and installed on the computer 1 used by the doctor upon request. Alternatively, it is distributed recorded on a recording medium such as a DVD (Digital Versatile Disc) or CD-ROM (Compact Disc Read Only Memory) and installed on the computer 1 from that recording medium.
The imaging apparatus 2 is an apparatus that generates a three-dimensional image representing a site of a subject to be diagnosed by imaging that site; specifically, it is a CT apparatus, an MRI apparatus, a PET (Positron Emission Tomography) apparatus, or the like. The three-dimensional image, composed of a plurality of slice images, generated by the imaging apparatus 2 is transmitted to and stored in the image storage server 3. In the present embodiment, a three-dimensional image of the chest acquired by imaging the chest of a subject with a CT apparatus is used as the medical image.
The image storage server 3 is a computer that stores and manages various data, and is equipped with a large-capacity external storage device and database management software. The image storage server 3 communicates with other apparatuses via the wired or wireless network 4 to send and receive image data and the like. Specifically, it acquires various data, including the image data of three-dimensional images generated by the imaging apparatus 2, via the network, and stores and manages them on a recording medium such as the large-capacity external storage device. The storage format of the image data and the communication between apparatuses via the network 4 are based on a protocol such as DICOM (Digital Imaging and Communication in Medicine). The image storage server 3 also stores standard organ images, which are standard images of organs.
Next, the image processing device according to the present embodiment will be described. FIG. 2 illustrates the hardware configuration of the image processing device according to the present embodiment. As shown in FIG. 2, the image processing device 20 includes a CPU (Central Processing Unit) 11, a non-volatile storage 13, and a memory 16 as a temporary storage area. The image processing device 20 further includes a display 14 such as a liquid crystal display, an input device 15 such as a keyboard and mouse, and a network I/F (InterFace) 17 connected to the network 4. The CPU 11, the storage 13, the display 14, the input device 15, the memory 16, and the network I/F 17 are connected to a bus 18. The CPU 11 is an example of the processor.
The storage 13 is realized by an HDD (Hard Disk Drive), an SSD (Solid State Drive), a flash memory, or the like. The image processing program 12 is stored in the storage 13 as a storage medium. The CPU 11 reads the image processing program 12 from the storage 13, expands it into the memory 16, and executes the expanded image processing program 12.
Next, the functional configuration of the image processing device according to the present embodiment will be described. FIG. 3 is a diagram showing the functional configuration of the image processing device according to the present embodiment. As shown in FIG. 3, the image processing device 20 includes an information acquisition unit 21, an organ extraction unit 22, an analysis unit 23, an alignment unit 24, a position specifying unit 25, and a display control unit 26. By executing the image processing program 12, the CPU 11 functions as the information acquisition unit 21, the organ extraction unit 22, the analysis unit 23, the alignment unit 24, the position specifying unit 25, and the display control unit 26.
The information acquisition unit 21 acquires a three-dimensional image G0 from the image storage server 3 via the network I/F 17 in response to an instruction given by the operator through the input device 15. The three-dimensional image G0 is composed of a plurality of tomographic images Dj (j = 1 to n, where n is the number of tomographic images). The three-dimensional image G0 includes an organ of interest to the doctor who is the user, for example one that is the subject of diagnosis. If the three-dimensional image G0 is already stored in the storage 13, the information acquisition unit 21 may acquire it from the storage 13. In the present embodiment, the target organ to be interpreted by the user is assumed to be the lungs. The information acquisition unit 21 also acquires a standard organ image from the image storage server 3 via the network I/F 17. The standard organ image will be described later.
The organ extraction unit 22 derives a target organ image by extracting the target organ from the three-dimensional image G0. In the present embodiment, since the target organ is the lungs, the organ extraction unit 22 derives a lung image GL0 by extracting the lungs from the three-dimensional image G0. The lung image GL0 is an example of the target organ image. As a method for extracting the lungs, a method can be used in which the signal values (CT values) of the individual pixels of the three-dimensional image G0 are histogrammed and a region having the signal values of the lungs is extracted by thresholding the histogram. Alternatively, any other method, such as region growing based on seed points representing the lungs, can be used. A discriminator trained by machine learning to extract the lungs from the three-dimensional image G0 may also be used. Since the lung image GL0 is derived from the three-dimensional image G0, it is a three-dimensional image. FIG. 4 shows an example of a lung image.
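The CT-value thresholding approach described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the HU range, the use of `scipy.ndimage` connected-component labelling, and keeping the two largest components (one per lung) are all assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def extract_lung_mask(volume_hu, lo=-950, hi=-300, keep=2):
    """Rough lung extraction by CT-value (HU) thresholding.

    volume_hu: 3-D numpy array of CT values in Hounsfield units.
    Voxels in the air-like range [lo, hi] are candidates; the `keep`
    largest connected components are retained as the lung region.
    """
    candidates = (volume_hu >= lo) & (volume_hu <= hi)
    labels, n = ndimage.label(candidates)
    if n == 0:
        return np.zeros_like(candidates)
    # Rank components by voxel count (label 0 is background).
    sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
    largest = np.argsort(sizes)[::-1][:keep] + 1
    return np.isin(labels, largest)
```

In practice the threshold range would be tuned, and a trained segmentation model could replace this step entirely, as the text notes.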
The analysis unit 23 detects lesions contained in the lungs by analyzing the three-dimensional image G0. The analysis unit 23 uses a known computer-aided diagnosis (CAD) algorithm to detect the shadows of a plurality of types of diseases in the three-dimensional image G0 as lesions. In the present embodiment, since the lung image GL0 has been derived from the three-dimensional image G0 by the organ extraction unit 22, the analysis unit 23 may instead detect lesions by analyzing the lung image GL0. Examples of the types of diseases include lung diseases such as pleural effusion, mesothelioma, nodules, and calcification.
For lesion detection, the analysis unit 23 has a learning model 23A trained by machine learning to detect the shadows of a plurality of types of diseases in the three-dimensional image G0 or the lung image GL0 as lesions. A plurality of learning models 23A are prepared, one for each type of disease. The learning model 23A consists of a convolutional neural network (CNN) on which deep learning has been performed using teacher data so as to determine whether or not each pixel (voxel) in the three-dimensional image G0 or the lung image GL0 represents a lesion.
The learning model 23A is constructed by training the CNN with a large amount of teacher data, for example teacher data consisting of teacher images that contain a lesion together with ground-truth data representing the lesion region in each image, and teacher data consisting of teacher images that contain no lesion. The learning model 23A derives, for each pixel of a medical image, a degree of certainty (likelihood) that the pixel represents a lesion, and detects as a lesion region a region consisting of pixels whose certainty is equal to or greater than a predetermined first threshold value. The certainty takes a value between 0 and 1.
The learning model 23A may detect lesions from the three-dimensional image G0 or the lung image GL0 directly, or may detect lesions from each of the plurality of tomographic images constituting the three-dimensional image G0 or the lung image GL0. Moreover, as the learning model 23A, any learning model other than a convolutional neural network, for example a support vector machine (SVM), can be used.
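The certainty-map thresholding described above can be sketched as follows: the model's per-voxel likelihood map is binarized at the first threshold and split into connected lesion regions. The threshold value of 0.5 and the use of `scipy.ndimage` are assumptions for illustration; the patented system leaves the threshold unspecified.

```python
import numpy as np
from scipy import ndimage

def detect_lesions(confidence, threshold=0.5):
    """Turn a per-voxel lesion confidence map (values in [0, 1]) into
    discrete lesion regions by thresholding followed by
    connected-component labelling.

    Returns a list of (label_id, voxel_count, centroid) tuples.
    """
    binary = confidence >= threshold
    labels, n = ndimage.label(binary)
    regions = []
    for i in range(1, n + 1):
        voxels = np.argwhere(labels == i)
        regions.append((i, len(voxels), voxels.mean(axis=0)))
    return regions
```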
FIG. 5 shows a lesion detection result. As shown in FIG. 5, a lesion 32 surrounded by a rectangle 31 has been detected in the upper part of the right lung in the lung image GL0.
The alignment unit 24 registers the standard organ image with the lung image GL0. Here, the standard organ image will be described. In the present embodiment, since the target organ is the lungs, the standard organ image is a standard lung image, denoted below by the reference symbol SL0. The standard lung image SL0 is derived by normalizing a plurality of lung images prepared for this purpose; specifically, it is derived by normalizing the size, shape, and density of the lung images.
Size normalization means obtaining the average size of the plurality of lung images; the vertical and horizontal sizes of the lung images with respect to the human body can be used. Shape normalization means obtaining the average shape of the size-normalized lung images; the shape may be normalized, for example, by obtaining the average distance from the center of gravity of the lung region to the lung surface in each lung image. At this time, the boundaries of the anatomical regions of the lungs are also normalized. Here, the right lung is divided into the anatomical regions of the upper, middle, and lower lobes, and the left lung is divided into the anatomical regions of the upper and lower lobes. The boundaries of the anatomical regions may be normalized, for example, by obtaining the center of gravity of each anatomical region in the lung images and averaging the distances from that center of gravity to the region boundary. Density normalization may be performed, for example, by obtaining a representative value of the signal values (voxel values) of the plurality of lung images; as the representative value, the mean, median, maximum, or minimum value, for example, can be used.
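A minimal sketch of two of the normalization targets just described, the average per-axis size and a representative (mean) density, is given below. Shape normalization via centroid-to-surface distances is omitted for brevity, and the function and variable names are hypothetical.

```python
import numpy as np

def normalization_targets(lung_volumes, lung_masks):
    """Compute averages used to build a standard lung image:
    - mean bounding-box extent per axis across cases (size), and
    - mean signal value inside the lung mask (density).

    lung_volumes: list of 3-D arrays of signal values.
    lung_masks:   list of matching boolean lung masks.
    """
    extents, densities = [], []
    for vol, mask in zip(lung_volumes, lung_masks):
        idx = np.argwhere(mask)
        extents.append(idx.max(axis=0) - idx.min(axis=0) + 1)
        densities.append(vol[mask].mean())
    return np.mean(extents, axis=0), float(np.mean(densities))
```

Each individual lung image would then be resampled to the mean size and shifted toward the representative density before averaging into the standard image.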
FIG. 6 shows a standard lung image. As shown in FIG. 6, the standard lung image SL0 is divided into the anatomical regions of the right upper lobe 41, right middle lobe 42, right lower lobe 43, left upper lobe 44, and left lower lobe 45. The size of the lungs varies with the subject's physique, such as age, sex, height, and weight. For this reason, standard lung images SL0 of sizes corresponding to different physiques in terms of age, sex, height, weight, and so on are stored in the image storage server 3. In the present embodiment, the information acquisition unit 21 acquires from the image storage server 3 a standard lung image SL0 of a size matching the subject's physique, based on the subject's age, sex, height, weight, and the like entered through the input device 15.
The alignment unit 24 registers the lung image GL0 with the standard lung image SL0 so as to make the lung image GL0 coincide with the standard lung image SL0. As the registration method, non-rigid registration is preferably used, but rigid registration may also be used. As the non-rigid registration, for example, a method can be used that non-linearly transforms each pixel position in the lung image GL0 into the corresponding pixel position in the standard lung image SL0 using functions such as B-splines or thin-plate splines, although the method is not limited to this. To improve the registration accuracy, it is preferable to transform the density of the lung image GL0 so that its density distribution matches that of the standard lung image SL0.
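A full non-rigid (B-spline) registration is best delegated to a library such as SimpleITK or elastix. As a hedged stand-in, the sketch below aligns the patient lung to the standard lung with simple centroid-and-extent matching, which is enough to show how a coordinate in the patient image is carried into the standard image's space; it is not the registration algorithm of the disclosure.

```python
import numpy as np

def align_to_standard(moving_mask, fixed_mask):
    """Simplified stand-in for registration: map coordinates of the
    moving (patient) lung onto the fixed (standard) lung by matching
    centroids and per-axis extents. A real implementation would use
    non-rigid (e.g. B-spline) registration instead.

    Returns a function mapping a moving-image voxel coordinate to the
    corresponding fixed-image coordinate.
    """
    def stats(mask):
        idx = np.argwhere(mask)
        return idx.mean(axis=0), idx.max(axis=0) - idx.min(axis=0) + 1

    c_mov, e_mov = stats(moving_mask)
    c_fix, e_fix = stats(fixed_mask)
    scale = e_fix / e_mov  # per-axis magnification
    return lambda p: c_fix + (np.asarray(p, float) - c_mov) * scale
```

Applying the returned mapping to the lesion coordinates detected in GL0 yields the lesion position in SL0, which is exactly how the lesion position is transferred in the next step.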
After registration, the lung image GL0 has the same size and shape as the standard lung image SL0. The registration also makes it possible to identify, in the standard lung image SL0, the position of the lesion included in the lung image GL0. FIG. 7 shows a standard lung image in which the position of the lesion has been identified. As shown in FIG. 7, the lesion 32 included in the lung image GL0 has been located in the right upper lobe 41 of the standard lung image SL0.
The position specifying unit 25 identifies the position of the lesion in a schema that schematically represents the target organ, based on the position of the lesion identified in the standard lung image SL0. FIG. 8 shows a lung schema. As shown in FIG. 8, the schema 50 is a diagram that schematically represents the lungs as an illustration; the right lung is divided into the anatomical regions of the right upper lobe 51, right middle lobe 52, and right lower lobe 53, and the left lung is divided into the anatomical regions of the left upper lobe 54 and left lower lobe 55.
To identify the position of the lesion in the schema 50, the position specifying unit 25 projects the standard lung image SL0, in which the position of the lesion has been identified, in two dimensions to derive a two-dimensional standard lung image SL1. The projection direction in this case is the same as the viewing direction of the schema 50, that is, the depth direction when the human body is viewed from the front. FIG. 9 shows a two-dimensional standard lung image. As shown in FIG. 9, the two-dimensional standard lung image SL1 is a two-dimensional image; the right lung is divided into the anatomical regions of the right upper lobe 61, right middle lobe 62, and right lower lobe 63, and the left lung is divided into the anatomical regions of the left upper lobe 64 and left lower lobe 65. As shown in FIG. 9, the lesion 32 is contained in the right upper lobe 61 of the two-dimensional standard lung image SL1. The position of the lesion in the right upper lobe 61 is the two-dimensional projection of the position of the lesion 32 in the right upper lobe 41 of the three-dimensional standard lung image SL0.
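The projection step can be sketched as collapsing the 3-D standard lung mask, and the lesion region located in it, along the schema's viewing direction. Which array axis corresponds to the depth direction is an assumption of this example.

```python
import numpy as np

def project_to_2d(standard_mask, lesion_mask, depth_axis=1):
    """Project a 3-D standard lung mask and its lesion mask along the
    viewing (depth) direction of the schema, yielding the 2-D standard
    lung image silhouette and the projected lesion position.
    """
    lung_2d = standard_mask.any(axis=depth_axis)
    lesion_2d = lesion_mask.any(axis=depth_axis)
    return lung_2d, lesion_2d
```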
The position specifying unit 25 first makes the sizes of the schema 50 and the two-dimensional standard lung image SL1 coincide, for example by matching their sizes in the y direction only, or in the x direction only. At this time, the position specifying unit 25 temporarily stores in the memory 16 the magnification α used to match the sizes of the schema 50 and the two-dimensional standard lung image SL1.
 Then, the position specifying unit 25 derives the center of gravity g1 of the upper lobe 61 of the right lung, which contains the lesion 32, in the two-dimensional standard lung image SL1. Alternatively, the position specifying unit 25 may derive the center of gravity of the upper lobe 51 of the right lung in the standard lung image SL0 before projection, and use the two-dimensional projection of that center of gravity as the position of the center of gravity in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1.
 Next, as shown in FIG. 10, the position specifying unit 25 derives the position p1 of the lesion 32 relative to the center of gravity g1 in the upper lobe 61 of the right lung of the two-dimensional standard lung image SL1. The center of gravity of the lesion 32 can be used as its position p1. Let (x1, y1) be the coordinates of the position p1 of the lesion 32 relative to the center of gravity g1.
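 The center-of-gravity computation and the relative coordinates (x1, y1) can be sketched as follows; the lobe mask and the lesion point are toy values chosen only for illustration.

```python
import numpy as np

def region_centroid(mask: np.ndarray) -> np.ndarray:
    """Center of gravity (x, y) of a 2D boolean region mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

# Toy mask of the upper lobe of the right lung in the 2D standard image.
lobe_mask = np.zeros((100, 100), dtype=bool)
lobe_mask[20:60, 30:70] = True

g1 = region_centroid(lobe_mask)   # center of gravity g1
p1 = np.array([50.0, 40.0])       # lesion position p1 (e.g. lesion centroid)
rel = p1 - g1                     # relative position (x1, y1)
```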
 Then, as shown in FIG. 11, the position specifying unit 25 derives the center of gravity g2 of the upper lobe 51 of the right lung in the schema 50, multiplies the coordinates (x1, y1) of the position p1 by the enlargement ratio α, and identifies the position p2 at the coordinates (αx1, αy1) relative to the center of gravity g2 as the position of the lesion in the schema 50.
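 Putting the pieces together, the mapping p2 = g2 + α·(x1, y1) reduces to a single line; the numerical values below are illustrative only.

```python
import numpy as np

def map_to_schema(rel_xy, alpha, schema_centroid_xy):
    """Scale the lesion's region-relative coordinates (x1, y1) by the
    stored enlargement ratio alpha and anchor them at the center of
    gravity g2 of the corresponding region in the schema."""
    return np.asarray(schema_centroid_xy) + alpha * np.asarray(rel_xy)

# Illustrative values: relative position (x1, y1), alpha, and g2.
p2 = map_to_schema([10.0, -4.0], 1.5, [120.0, 80.0])
```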
 The display control unit 26 displays the schema in which the position of the lesion has been identified on the display 14. FIG. 12 is a diagram showing the display screen of the schema. As shown in FIG. 12, the display screen 70 includes an image display area 71 and a text display area 72. The three-dimensional image G0 and the schema 50 are displayed in the image display area 71. The displayed three-dimensional image G0 is a tomographic image included in the three-dimensional image G0, and the user can switch the displayed tomographic image by using the input device 15. In the schema 50, the position of the lesion is emphasized by a rectangular mark 73 placed on the upper lobe 51 of the right lung. A mark of another shape, such as an arrow, may be used instead of the rectangular mark. The lesion may also be emphasized by giving the anatomical region containing the lesion in the schema 50 (the upper lobe of the right lung in FIG. 12) a color different from that of the other anatomical regions. An annotation 74 indicating that the upper lobe of the right lung contains a nodule, which is a lesion, is also displayed in the image display area 71. The user can therefore display, for example, a tomographic image including the upper lobe of the right lung in the image display area 71 and perform detailed interpretation.
 In the text display area 72, the result of the user's interpretation of the three-dimensional image G0 is input as findings. For example, in FIG. 12, the finding "A nodule is seen in the upper lobe of the right lung." obtained by interpreting the upper lobe of the right lung has been input. When the user selects the confirm button 75, an interpretation report containing the input findings is created. The created interpretation report is transmitted from the network I/F 17 to a report server (not shown) and stored.
 Next, the processing performed in the present embodiment will be described. FIG. 13 is a flowchart showing the processing performed in the present embodiment. It is assumed that the standard lung image SL0 has been acquired from the image storage server 3 and stored in the storage 13. First, the information acquisition unit 21 acquires the three-dimensional image G0 (step ST1). Next, the organ extraction unit 22 derives the lung image GL0 by extracting the lungs from the three-dimensional image G0 (step ST2). Then, the analysis unit 23 detects a lesion in the lungs included in the three-dimensional image G0 (step ST3).
 Subsequently, the alignment unit 24 identifies the position of the lesion in the standard lung image SL0 by aligning the standard lung image SL0 with the lung image GL0 (step ST4). The position specifying unit 25 then identifies the position of the lesion in the schema 50 based on the position of the lesion identified in the standard lung image SL0 (step ST5). Finally, the display control unit 26 displays the schema 50 in which the position of the lesion has been identified (step ST6), and the processing ends.
 As described above, in the present embodiment, the position corresponding to the lesion in a standard organ image, derived in advance by normalizing a plurality of target organ images, is identified by aligning the standard organ image with the target organ image, and the position corresponding to the lesion in a schema schematically representing the target organ is then identified based on the position identified in the standard organ image. The standard organ image is thus interposed between the target organ image and the schema, so that the target organ image and the schema can be aligned in stages. Compared with aligning the target organ image and the schema directly, the target organ image and the schema can therefore be aligned more accurately, and as a result the position of the lesion contained in the medical image can be accurately reflected in the schema.
 In particular, because the standard organ image is derived by normalizing a plurality of target organ images, it has the average size, shape, and density of the target organ. Both the alignment with the target organ image and the alignment with the schema 50 can therefore be performed with high accuracy, and as a result the target organ image and the schema 50 can be accurately aligned. According to the present embodiment, the position of a lesion contained in a medical image can thus be accurately reflected in the schema.
 Further, by deriving the positional relationship between the center of gravity of an anatomical region of the two-dimensional standard lung image SL1 and the position of the lesion, and applying the derived relationship to the center of gravity of the corresponding anatomical region in the schema 50, the position of the lesion in the schema 50 can be identified by a simple calculation.
 In the above embodiment, when identifying the position of the lesion in the schema 50, the positional relationship with respect to the centers of gravity of the corresponding anatomical region (that is, the upper lobe of the right lung) in the two-dimensional standard lung image SL1 and the schema 50 is used, but the method is not limited to this. When a lesion is identified in the upper lobe of the right lung, as shown in FIG. 14, the positional relationship between the center of gravity g11 of the right lung of the two-dimensional standard lung image SL1 and the position p1 (x2, y2) of the lesion identified in the two-dimensional standard lung image SL1 may be derived, and, as shown in FIG. 15, the derived relationship may be applied to the center of gravity g12 of the right lung in the schema 50 to identify the position p2 (αx2, αy2) of the lesion in the right lung of the schema 50.
 In the above embodiment, the image processing device 20 includes the analysis unit 23 and detects the lesion from the three-dimensional image G0, but the configuration is not limited to this. The lesion may be detected from the three-dimensional image G0 by a separate device connected to the image processing device 20 via the network 4. Alternatively, a three-dimensional image G0 in which a lesion has already been detected may be acquired from the image storage server 3, and the image processing device 20 may identify the position of the lesion in the schema 50 using that three-dimensional image G0. The three-dimensional image G0 may also be displayed on the display 14 so that the user identifies the position of the lesion by interpreting the image. In these cases, the analysis unit 23 is unnecessary in the image processing device 20.
 In the above embodiment, the target organ is the lung, but the target is not limited to this. Besides the lungs, any part of the human body, such as the brain, heart, liver, blood vessels, and limbs, can be the diagnosis target.
 In the above embodiment, the hardware structure of the processing units that execute various processes, such as the information acquisition unit 21, the organ extraction unit 22, the analysis unit 23, the alignment unit 24, the position specifying unit 25, and the display control unit 26, can be any of the following processors. The various processors include, in addition to the CPU, which is a general-purpose processor that executes software (a program) to function as the various processing units, a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture, and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing.
 One processing unit may be configured by one of these various processors, or by a combination of two or more processors of the same or different types (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by one processor.
 As examples of configuring a plurality of processing units with one processor, first, as represented by computers such as clients and servers, one processor may be configured by a combination of one or more CPUs and software, and this processor may function as the plurality of processing units. Second, as represented by a system on chip (SoC), a processor that realizes the functions of an entire system including a plurality of processing units with a single IC (Integrated Circuit) chip may be used. In this way, the various processing units are configured using one or more of the above various processors as a hardware structure.
 Furthermore, as the hardware structure of these various processors, more specifically, an electric circuit (circuitry) combining circuit elements such as semiconductor elements can be used.
   1  Computer
   2  Imaging apparatus
   3  Image storage server
   4  Network
   11  CPU
   12  Image processing program
   13  Storage
   14  Display
   15  Input device
   16  Memory
   17  Network I/F
   18  Bus
   20  Image processing device
   21  Information acquisition unit
   22  Organ extraction unit
   23  Analysis unit
   23A  Trained model
   24  Alignment unit
   25  Position specifying unit
   26  Display control unit
   31  Rectangle
   32  Lesion
   41, 51, 61  Upper lobe of right lung
   42, 52, 62  Middle lobe of right lung
   43, 53, 63  Lower lobe of right lung
   44, 54, 64  Upper lobe of left lung
   45, 55, 65  Lower lobe of left lung
   50  Schema
   70  Display screen
   71  Image display area
   72  Text display area
   73  Mark
   74  Annotation
   g1, g2, g11, g12  Center of gravity
   p1, p2  Lesion position
   GL0  Lung image
   SL0  Standard lung image
   SL1  Two-dimensional standard lung image

Claims (10)

  1.  An image processing device comprising at least one processor, wherein the processor:
     derives a target organ image containing a lesion by extracting the target organ containing the lesion from a medical image;
     identifies the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by aligning the standard organ image with the derived target organ image; and
     identifies the position of the lesion in a schema schematically representing the target organ, based on the position of the lesion identified in the standard organ image.
  2.  The image processing device according to claim 1, wherein the standard organ image is derived by normalizing the sizes, shapes, and densities of the plurality of target organ images.
  3.  The image processing device according to claim 1 or 2, wherein the medical image and the standard organ image are three-dimensional images, and the schema is a two-dimensional image.
  4.  The image processing device according to claim 3, wherein the processor derives a two-dimensional standard organ image in which the position of the lesion is identified by projecting the standard organ image into two dimensions, and identifies the position of the lesion in the schema based on the position of the lesion in the two-dimensional standard organ image.
  5.  The image processing device according to claim 4, wherein the processor derives the positional relationship between the center of gravity of the two-dimensional standard organ image and the position of the lesion in the two-dimensional standard organ image, and identifies the position of the lesion in the schema by applying the positional relationship to the center of gravity of the schema.
  6.  The image processing device according to claim 4, wherein, when the standard organ image is divided into a plurality of anatomical regions and the schema is divided into a plurality of anatomical regions corresponding to the standard organ image, the processor derives the positional relationship between the center of gravity of the anatomical region containing the position of the lesion and the position of the lesion in that anatomical region in the two-dimensional standard organ image, and identifies the position of the lesion in the schema by applying the positional relationship to the center of gravity of the anatomical region of the schema corresponding to the anatomical region containing the position of the lesion.
  7.  The image processing device according to any one of claims 1 to 6, wherein the processor displays the schema in which the position of the lesion has been identified.
  8.  The image processing device according to any one of claims 1 to 7, wherein the processor detects the lesion by analyzing the medical image.
  9.  An image processing method comprising:
     deriving a target organ image containing a lesion by extracting the target organ containing the lesion from a medical image;
     identifying the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by aligning the standard organ image with the derived target organ image; and
     identifying the position of the lesion in a schema schematically representing the target organ, based on the position of the lesion identified in the standard organ image.
  10.  An image processing program causing a computer to execute:
     a procedure of deriving a target organ image containing a lesion by extracting the target organ containing the lesion from a medical image;
     a procedure of identifying the position of the lesion in a standard organ image, derived by normalizing a plurality of target organ images, by aligning the standard organ image with the derived target organ image; and
     a procedure of identifying the position of the lesion in a schema schematically representing the target organ, based on the position of the lesion identified in the standard organ image.
PCT/JP2021/030594 2020-09-11 2021-08-20 Image processing device, method, and program WO2022054541A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020153151A JP2023178525A (en) 2020-09-11 2020-09-11 Image processing device, method, and program
JP2020-153151 2020-09-11

Publications (1)

Publication Number Publication Date
WO2022054541A1 true WO2022054541A1 (en) 2022-03-17

Family

ID=80631533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/030594 WO2022054541A1 (en) 2020-09-11 2021-08-20 Image processing device, method, and program

Country Status (2)

Country Link
JP (1) JP2023178525A (en)
WO (1) WO2022054541A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006181146A (en) * 2004-12-28 2006-07-13 Fuji Photo Film Co Ltd Diagnosis assisting device, diagnosis assisting method and its program
JP2009247535A (en) * 2008-04-04 2009-10-29 Dainippon Printing Co Ltd Medical image processing system


Also Published As

Publication number Publication date
JP2023178525A (en) 2023-12-18

Similar Documents

Publication Publication Date Title
US10980493B2 (en) Medical image display device, method, and program
US11941812B2 (en) Diagnosis support apparatus and X-ray CT apparatus
US20170042495A1 (en) Medical image information system, medical image information processing method, and program
EP2189942A2 (en) Method and system for registering a medical image
US9336457B2 (en) Adaptive anatomical region prediction
JP2019082881A (en) Image retrieval device, method and program
JP2019169049A (en) Medical image specification device, method, and program
US10628963B2 (en) Automatic detection of an artifact in patient image
US11468659B2 (en) Learning support device, learning support method, learning support program, region-of-interest discrimination device, region-of-interest discrimination method, region-of-interest discrimination program, and learned model
US11669960B2 (en) Learning system, method, and program
JP7237089B2 (en) MEDICAL DOCUMENT SUPPORT DEVICE, METHOD AND PROGRAM
US11205269B2 (en) Learning data creation support apparatus, learning data creation support method, and learning data creation support program
US20230005601A1 (en) Document creation support apparatus, method, and program
WO2022153702A1 (en) Medical image display device, method, and program
WO2022054541A1 (en) Image processing device, method, and program
JP2021175454A (en) Medical image processing apparatus, method and program
JPWO2019150717A1 (en) Mesenteric display device, method and program
US20230197253A1 (en) Medical image processing apparatus, method, and program
JP7376715B2 (en) Progress prediction device, method of operating the progress prediction device, and progress prediction program
US20230225681A1 (en) Image display apparatus, method, and program
EP4343781A1 (en) Information processing apparatus, information processing method, and information processing program
US20240037739A1 (en) Image processing apparatus, image processing method, and image processing program
US12033366B2 (en) Matching apparatus, matching method, and matching program
WO2020241857A1 (en) Medical document creation device, method, and program, learning device, method, and program, and learned model
US20240037738A1 (en) Image processing apparatus, image processing method, and image processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21866501

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21866501

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP