WO2021060461A1 - Image processing device, method, and program - Google Patents

Image processing device, method, and program

Info

Publication number
WO2021060461A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
difference
feature vector
feature
unit
Prior art date
Application number
PCT/JP2020/036254
Other languages
English (en)
Japanese (ja)
Inventor
尚人 升澤
Original Assignee
富士フイルム株式会社 (FUJIFILM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2021060461A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present disclosure relates to an image processing apparatus, method, and program for deriving the difference between two images.
  • Japanese Patent Application Laid-Open No. 2009-522004 proposes a method of observing a patient by aligning two medical images and then generating a difference image of the two medical images.
  • In such a difference image, the difference between the two images is represented as a difference in brightness; a region whose brightness differs can therefore be identified as a part in which the two images differ.
  • A method has also been proposed in which a convolutional neural network (CNN) generates a feature map consisting of feature quantities of an image, and whether or not the objects contained in two images are the same is determined by deriving the difference between their feature maps (see Japanese Patent Laid-Open No. 2018-506788).
  • A method of detecting an abnormality of an object contained in an image based on the difference between the feature maps of two images has also been proposed (see Japanese Patent No. 6216024).
  • However, the method described in Japanese Patent Application Laid-Open No. 2009-522004 can recognize the difference between two images only on the basis of their difference in brightness. Further, the methods described in Japanese Patent Application Laid-Open No. 2018-506788 and Japanese Patent No. 6216024 can only detect whether the two images differ or whether an abnormality is present; they cannot recognize what kind of difference exists in which part of the two images.
  • The present disclosure has been made in view of the above circumstances, and an object of the present disclosure is to make it possible to recognize the difference between two images on the basis of various features.
  • The image processing apparatus according to the present disclosure comprises: a feature vector derivation unit that derives a first feature vector including at least one element representing a feature for each pixel of a first image and a second feature vector including at least one element representing a feature for each pixel of a second image; a difference derivation unit that derives a difference between at least one corresponding element of the first feature vector and the second feature vector; and a display control unit that displays a difference image representing the difference on a display unit.
  • The display control unit may display, on the display unit, a difference image in which the difference is superimposed on at least one of the first image and the second image.
  • The feature vector derivation unit may derive a first feature vector and a second feature vector each including a plurality of elements.
  • In that case, the difference derivation unit may derive the difference between each of the plurality of corresponding elements of the first feature vector and the second feature vector.
  • The display control unit may be capable of switching, for each element, the difference represented by the displayed difference image.
  • The first image and the second image may be medical images including the same subject.
  • The feature vector derivation unit may have a learning model that, when an image is input, outputs a feature vector including at least one element representing a feature at each pixel of the image.
  • The image processing method according to the present disclosure comprises: deriving a first feature vector including at least one element representing a feature for each pixel of a first image and a second feature vector including at least one element representing a feature for each pixel of a second image; deriving the difference between at least one corresponding element of the first feature vector and the second feature vector; and displaying a difference image representing the difference on a display unit.
  • Another image processing device according to the present disclosure comprises a memory that stores instructions to be executed by a computer and a processor configured to execute the stored instructions, wherein the processor executes the processes of deriving a first feature vector including at least one element representing a feature for each pixel of a first image and a second feature vector including at least one element representing a feature for each pixel of a second image, deriving the difference between at least one corresponding element of the first feature vector and the second feature vector, and displaying a difference image representing the difference on a display unit.
  • According to the present disclosure, the difference between the first image and the second image can be recognized on the basis of various features other than brightness that are represented by the elements of the feature vectors.
  • Brief description of the drawings: a diagram explaining derivation of the feature vector using the learning model; a conceptual diagram of derivation of the feature vector; a diagram schematically showing the feature map; diagrams showing the display screen of the difference image; and a flowchart showing the processing performed in the present embodiment.
  • FIG. 1 is a hardware configuration diagram showing an outline of a diagnostic support system to which the image processing apparatus according to the embodiment of the present disclosure is applied.
  • The image processing device 1, the modality 2, and the image storage server 3 according to the present embodiment are connected in a communicable state via a network 4.
  • The modality 2 is a device that generates a medical image representing a diagnosis target part of a subject by imaging that part, such as a three-dimensional imaging apparatus or a radiographic imaging apparatus.
  • The three-dimensional imaging apparatus is, for example, a CT apparatus, an MRI apparatus, or a PET (Positron Emission Tomography) apparatus.
  • The three-dimensional image generated by the modality 2 is transmitted to and stored in the image storage server 3.
  • In the present embodiment, the modality 2 is a CT apparatus, and it generates, as a three-dimensional image, a CT image including the part of the subject to be diagnosed.
  • The three-dimensional image consists of a plurality of tomographic images.
  • The image storage server 3 is a computer that stores and manages various data, and is equipped with a large-capacity external storage device and database management software.
  • The image storage server 3 communicates with other devices via the wired or wireless network 4 to send and receive image data and the like.
  • Various data, including image data of medical images generated by the modality 2, are acquired via the network and stored and managed on a recording medium such as the large-capacity external storage device.
  • The storage format of the image data and the communication between the devices via the network 4 are based on a protocol such as DICOM (Digital Imaging and Communication in Medicine).
  • The image storage server 3 stores and manages a plurality of medical images of the same patient taken at different dates and times.
  • The image processing device 1 of the present embodiment is a computer on which the image processing program of the present embodiment is installed.
  • The computer may be a workstation or a personal computer operated directly by the diagnosing doctor, or a server computer connected to them via a network.
  • The image processing program is stored in a storage device of a server computer connected to the network, or in network storage, in a state accessible from the outside, and is downloaded and installed on the computer used by the doctor upon request. Alternatively, it is recorded and distributed on a recording medium such as a DVD (Digital Versatile Disc) or a CD-ROM (Compact Disc Read Only Memory), and installed on the computer from that recording medium.
  • FIG. 2 is a diagram showing a schematic configuration of an image processing device realized by installing an image processing program on a computer.
  • The image processing device 1 includes a CPU (Central Processing Unit) 11, a memory 12, and a storage 13, as in a standard workstation configuration. Further, the image processing device 1 is connected to a display unit 14 such as a liquid crystal display and to an input unit 15 such as a keyboard and a mouse.
  • The storage 13 is composed of a hard disk drive or the like, and stores the first image G1 and the second image G2 to be processed, acquired from the image storage server 3 via the network 4, as well as various information including information necessary for processing.
  • The image processing program is stored in the memory 12.
  • The image processing program defines, as processes to be executed by the CPU 11: an image acquisition process for acquiring the first image G1 and the second image G2 to be processed; a normalization process for normalizing the first image G1 and the second image G2; a feature vector derivation process for deriving the first feature vector V1 for the first image G1 and the second feature vector V2 for the second image G2; a difference derivation process for deriving the difference between the corresponding elements of the first feature vector V1 and the second feature vector V2; and a display control process for displaying a difference image representing the difference on the display unit 14.
  • By executing these processes, the computer functions as an image acquisition unit 21, a normalization unit 22, a feature vector derivation unit 23, a difference derivation unit 24, and a display control unit 25.
  • The image acquisition unit 21 acquires the first image G1 and the second image G2 to be compared from the image storage server 3 via an interface (not shown) connected to the network.
  • In the present embodiment, the first image G1 and the second image G2 are CT images, and both are assumed to include the region from the cervical spine to the thoracic spine of the same patient as the subject.
  • The acquired first and second images G1 and G2 are stored in the storage 13.
  • The normalization unit 22 normalizes the first image G1 and the second image G2.
  • The normalization in the present embodiment includes an alignment process in which the first image G1 and the second image G2 are registered so that their spatial positions match, a process of matching the voxel values (CT values) of the first image G1 and the second image G2, a smoothing process for removing fine structural differences and noise, and the like.
  • Examples of the process of matching the voxel values include matching the average voxel value of the first image G1 with the average voxel value of the second image G2.
  • Examples of the smoothing process include a filtering process using a smoothing filter.
  • The feature vector derivation unit 23 derives a first feature vector V1 including at least one element representing a feature for each pixel of the first image G1 and a second feature vector V2 including at least one element representing a feature for each pixel of the second image G2.
  • The first feature vector V1 and the second feature vector V2 are one-dimensional vectors including a plurality of elements corresponding to each other.
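The normalization performed by the normalization unit 22 (matching average voxel values, then applying a smoothing filter) can be sketched as follows. This is a minimal illustration: the function names, the additive mean shift, and the 3x3 box filter are assumptions made for the example, and the alignment (registration) step is omitted.

```python
import numpy as np

def box3(img):
    """3x3 box smoothing with edge padding (an illustrative smoothing filter)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def normalize_pair(g1, g2):
    """Match the average voxel value of g2 to that of g1, then smooth both.

    The additive mean shift and the box filter are illustrative choices;
    the disclosure only states that average voxel values are matched and
    that a smoothing filter is applied.
    """
    g2 = g2 + (g1.mean() - g2.mean())  # match average voxel (CT) values
    return box3(g1), box3(g2)
```

After this step, brightness offsets caused by acquisition conditions and fine noise are reduced, so the differences derived later are more likely to reflect genuine changes between the two examinations.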
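The derivation of per-pixel feature vectors and of their per-element differences can likewise be sketched as follows. Here two hand-crafted elements (smoothed intensity and gradient magnitude) stand in for the output of the learning model described above; all function names and the choice of elements are assumptions for illustration, not the patent's exact method.

```python
import numpy as np

def feature_vectors(img):
    """Derive a toy two-element feature vector for every pixel.

    Element 0: 3x3 box-smoothed intensity; element 1: gradient magnitude.
    A hand-crafted stand-in for the learned feature extractor.
    """
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    smooth = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    gy, gx = np.gradient(img)                 # central differences per axis
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return np.stack([smooth, grad], axis=-1)  # shape (h, w, 2)

def element_differences(g1, g2):
    """Difference between corresponding elements of the two feature vectors."""
    return feature_vectors(g1) - feature_vectors(g2)  # shape (h, w, 2)
```

Each channel of the returned array is one candidate difference image; a display controller could let the user switch between channels, in the spirit of the element-by-element switching described above.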

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

A feature vector derivation unit derives a first feature vector containing at least one element that represents a feature of each pixel in a first image, and a second feature vector containing at least one element that represents a feature of each pixel in a second image. A difference derivation unit derives the difference between the corresponding element(s) of the first feature vector and of the second feature vector. A display control unit causes a display unit to display a difference image representing the difference.
PCT/JP2020/036254 2019-09-25 2020-09-25 Image processing device, method, and program WO2021060461A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-173793 2019-09-25
JP2019173793 2019-09-25

Publications (1)

Publication Number Publication Date
WO2021060461A1 true WO2021060461A1 (fr) 2021-04-01

Family

ID=75166169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/036254 WO2021060461A1 (fr) 2019-09-25 2020-09-25 Image processing device, method, and program

Country Status (1)

Country Link
WO (1) WO2021060461A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007020829A (ja) * 2005-07-15 2007-02-01 Hitachi Ltd Image data analysis method and system
WO2015029135A1 (fr) * 2013-08-27 2015-03-05 株式会社日立製作所 Incidence rate evaluation device, incidence rate evaluation method, and incidence rate evaluation program
JP2018175227A (ja) * 2017-04-10 2018-11-15 富士フイルム株式会社 Medical image display device, method, and program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20867729; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20867729; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)