US9679376B2 - Medical image processing apparatus, method, and recording medium - Google Patents


Info

Publication number
US9679376B2
Authority
US
United States
Prior art keywords
medical image
vertebra
image
subject
label
Prior art date
Legal status
Active
Application number
US14/842,169
Other languages
English (en)
Other versions
US20160086327A1 (en)
Inventor
Yuanzhong Li
Current Assignee
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Assigned to FUJIFILM CORPORATION. Assignment of assignors interest (see document for details). Assignors: LI, YUANZHONG
Publication of US20160086327A1 publication Critical patent/US20160086327A1/en
Application granted granted Critical
Publication of US9679376B2 publication Critical patent/US9679376B2/en

Classifications

    • G06T 7/0012 Image analysis; biomedical image inspection
    • A61B 5/0073 Measuring for diagnostic purposes using light, by tomography, i.e. reconstruction of 3D images from 2D projections
    • G06F 18/24 Pattern recognition; classification techniques
    • G06K 9/6267
    • G06T 11/60 2D image generation; editing figures and text; combining figures or text
    • G06V 10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • H04N 13/02
    • H04N 13/20 Stereoscopic or multi-view video systems; image signal generators
    • A61B 2576/02 Medical imaging apparatus involving image processing or analysis, specially adapted for a particular organ or body part
    • A61B 5/0013 Remote monitoring of patients; medical image data
    • G06T 2207/10072 Image acquisition modality: tomographic images
    • G06T 2207/10116 Image acquisition modality: X-ray image
    • G06T 2207/30008 Subject of image: bone
    • G06T 2207/30012 Subject of image: spine; backbone
    • G06T 2210/41 Indexing scheme for image generation or computer graphics: medical
    • G16H 30/40 ICT specially adapted for processing medical images, e.g. editing
    • H04N 2013/0092 Image segmentation from stereoscopic image signals

Definitions

  • the present disclosure relates to a medical image processing apparatus, method, and program for recognizing a vertebra included in a medical image.
  • The spinal cord is a very important region, as it plays the role of conveying messages between the brain and each body region. For this reason, the spinal cord is protected by a plurality of vertebrae (a spine).
  • Japanese Unexamined Patent Publication No. 2011-131040 proposes a method that, with a three-dimensional image obtained from tomographic images of a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, or the like as the target, generates tomographic images of planes intersecting and parallel to the central axis of each vertebra, calculates a characteristic amount representing the sharpness of the cross-sectional shape in each tomographic image and a characteristic amount representing the regularity of the array of vertebrae, identifies the area of each vertebra by identifying, based on these characteristic amounts, the position of the intervertebral disc located between adjacent vertebrae, and further labels the identified area of each vertebra.
  • However, in a case where the medical image includes only a portion of the vertebrae, the positions of the vertebrae cannot be recognized by the use of the method described in Japanese Unexamined Patent Publication No. 2011-131040 or in M. Lootus et al., "Vertebrae Detection and Labelling in Lumbar MR Images", MICCAI Workshop: Computational Methods and Clinical Applications for Spine Imaging, 2013, and as a result the vertebrae cannot be labeled.
  • the present disclosure has been developed in view of the circumstances described above.
  • The present disclosure allows, in a medical image processing apparatus, method, and program, a vertebra included in a medical image to be labeled even when the image includes only a portion of the vertebrae.
  • a medical image processing apparatus of the present disclosure includes:
  • a determination means that makes a determination as to whether or not at least either one of at least a portion of an upper end vertebra and at least a portion of a lower end vertebra is included in a first medical image of a subject;
  • an image obtaining means that obtains, if the determination is negative, a second medical image that allows recognition of a label of the vertebra of the subject;
  • and a labeling means that aligns the first medical image with the second medical image and labels the vertebra included in the first medical image.
  • an upper end vertebra refers to a portion of a plurality of vertebrae constituting a spine that can be recognized as the upper end vertebra. Therefore, it may be the entire upper end vertebra or a portion thereof.
  • a lower end vertebra refers to a portion of a plurality of vertebrae constituting a spine that can be recognized as the lower end vertebra. Therefore, it may be the entire lower end vertebra or a portion thereof.
  • the “label” may be any information as long as it allows recognition of the anatomical position of the vertebra.
  • The label may be an anatomical symbol, a number, or a combination of a symbol and a number specific to each vertebra.
  • Alternatively, the label may be a symbol, a number, a combination of a symbol and a number, or the like, defined with reference to a specific vertebra.
  • the labeling means may be a means that labels, when the determination is positive, the vertebra included in the first medical image based on either one of information of at least a portion of the upper end vertebra and at least a portion of the lower end vertebra included in the first medical image.
  • the image obtaining means may be a means that obtains a medical image that includes at least either one of at least a portion of the upper end vertebra and at least a portion of the lower end vertebra of the subject as the second medical image.
  • the image obtaining means may be a means that obtains an image in which a label of at least one vertebra of the subject is known as the second medical image.
  • the upper end vertebra may be at least either one of a first cervical vertebra and a second cervical vertebra.
  • the lower end vertebra may be at least either one of a fifth lumbar vertebra and a sacrum.
  • the image obtaining means may be a means that obtains, if a plurality of medical images that allow recognition of a label of the vertebra of the subject is present, a medical image captured by the same imaging method as that of the first medical image as the second medical image.
  • the image obtaining means may be a means that obtains, if a plurality of medical images that allow recognition of a label of the vertebra of the subject is present, a medical image which is close in imaging time to the first medical image as the second medical image.
  • A medical image which is closest in imaging time is preferably used, but this is not limiting; a medical image captured within a predetermined number of days from the imaging time of the first medical image may be used.
  • A medical image processing method of the present disclosure includes the steps of: making a determination as to whether or not at least either one of at least a portion of an upper end vertebra and at least a portion of a lower end vertebra is included in a first medical image of a subject; obtaining, if the determination is negative, a second medical image that allows recognition of a label of the vertebra of the subject; and aligning the first medical image with the second medical image and labeling the vertebra included in the first medical image.
  • The medical image processing method may be provided as a program to be executed by a computer.
  • FIG. 1 is a hardware configuration diagram of a diagnosis support system to which a medical image processing apparatus according to an embodiment of the present disclosure is applied, illustrating an overview thereof.
  • FIG. 2 shows a medical image processing apparatus realized by installing a medical image processing program on a computer, illustrating a schematic configuration thereof.
  • FIG. 3 schematically illustrates a sagittal image representing an array of vertebrae.
  • FIG. 4 shows an example of a first three-dimensional image.
  • FIG. 5 shows an example of a second three-dimensional image.
  • FIG. 6 shows another example of a second three-dimensional image.
  • FIG. 7 is a drawing for explaining alignment.
  • FIG. 8 is a flowchart illustrating processing performed in the present embodiment.
  • FIG. 1 is a hardware configuration diagram of a diagnosis support system to which a medical image processing apparatus according to an embodiment of the present disclosure is applied, illustrating an overview thereof.
  • the system includes a medical image processing apparatus 1 according to the present embodiment, a three-dimensional image capturing apparatus 2 , and an image storage server 3 which are communicatively connected through a network 4 .
  • The three-dimensional image capturing apparatus 2 is an apparatus that captures a diagnostic target region of a subject and generates a three-dimensional image representing the region. More specifically, the apparatus 2 is CT equipment, MRI equipment, positron emission tomography (PET) equipment, or the like. The three-dimensional image generated by the three-dimensional image capturing apparatus 2 is transmitted to the image storage server 3 and stored therein. It is assumed in the present embodiment that the diagnostic target region of the subject is a vertebra, the three-dimensional image capturing apparatus 2 is MRI equipment, and the three-dimensional image is an MRI image.
  • the image storage server 3 is a computer that stores and manages various types of data, and includes a large capacity external storage device and database management software.
  • The image storage server 3 communicates with the other apparatuses via the wired or wireless network 4 to send and receive image data and the like. More specifically, the image storage server 3 obtains image data of a three-dimensional image generated by the three-dimensional image capturing apparatus 2 and similar data via the network and manages them by storing them in a storage medium, such as the large capacity external storage device.
  • The image data storage format and the communication between the apparatuses are based on a protocol such as Digital Imaging and Communications in Medicine (DICOM). Further, a DICOM standard tag is attached to the three-dimensional image.
  • the tag includes various types of information, including a patient name, information representing the imaging apparatus, date and time of imaging, imaged region, and the like.
  • the medical image processing apparatus 1 is one computer on which a medical image processing program of the present disclosure is installed.
  • the computer may be a workstation or a personal computer directly operated by the doctor who performs diagnosis, or a server computer connected thereto via the network.
  • The medical image processing program is distributed by being recorded on a recording medium, such as a DVD or a CD-ROM, and installed on the computer from the recording medium. Alternatively, the program is stored in a storage device of a server computer connected to the network, or in a network storage, in a state accessible from the outside, and is downloaded and installed on the computer used by the doctor upon request.
  • FIG. 2 shows a medical image processing apparatus realized by installing the medical image processing program on a computer, illustrating a schematic configuration thereof.
  • the medical image processing apparatus 1 includes a CPU 11 , a memory 12 , and a storage 13 as a configuration of a standard workstation.
  • a display 14 and an input device 15 are connected to the medical image processing apparatus 1 .
  • The storage 13 stores various types of information, including a three-dimensional image obtained from the image storage server 3 via the network 4 and an image generated through the processing performed in the medical image processing apparatus 1.
  • the memory 12 stores the medical image processing program.
  • The medical image processing program defines, as the processes executed by the CPU 11, a first image obtaining process for obtaining a first three-dimensional image V1 which includes a diagnostic target vertebra of a subject, a determination process for determining whether or not the first three-dimensional image V1 includes at least either one of at least a portion of an upper end vertebra and at least a portion of a lower end vertebra, a second image obtaining process for obtaining, if the determination is negative, a second three-dimensional image V2 that allows recognition of a label of a vertebra of the subject, and a labeling process for aligning the first three-dimensional image V1 with the second three-dimensional image V2 and labeling the vertebra included in the first three-dimensional image V1.
  • the medical image processing apparatus 1 may include a plurality of CPUs for performing the first and the second image obtaining processes, the determination process, and the labeling process respectively.
  • The CPU 11 executes these processes, whereby the computer functions as an image obtaining unit 21, a determination unit 22, and a labeling unit 23. The image obtaining unit 21 obtains the first and the second three-dimensional images V1, V2 from the image storage server 3. In a case where the three-dimensional images V1, V2 have already been stored in the storage 13, the image obtaining unit 21 may obtain the images from the storage 13.
  • The determination unit 22 determines whether or not the first three-dimensional image V1 includes at least either one of at least a portion of an upper end vertebra and at least a portion of a lower end vertebra, and outputs the result of the determination to the image obtaining unit 21 and the labeling unit 23.
  • FIG. 3 schematically illustrates a sagittal image representing an array of vertebrae. As shown in FIG. 3 , each vertebra is anatomically labeled.
  • The spine is composed of four portions: the cervical spine, the thoracic spine, the lumbar spine, and the sacrum.
  • The cervical spine is composed of seven vertebrae, anatomically labeled C1 to C7.
  • The thoracic spine is composed of twelve vertebrae, anatomically labeled Th1 to Th12.
  • The lumbar spine is composed of five vertebrae, anatomically labeled L1 to L5.
  • The sacrum is composed of only one bone, anatomically labeled S1.
  • the present embodiment uses these anatomical labels as the labels to be applied to the vertebrae.
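The anatomical labeling scheme described above (C1 to C7, Th1 to Th12, L1 to L5, and S1) can be enumerated in a few lines. The following Python sketch is illustrative only and not part of the patent.

```python
# Enumerate the anatomical vertebra labels described above:
# 7 cervical, 12 thoracic, and 5 lumbar vertebrae, plus the sacrum.
def vertebra_labels():
    labels = [f"C{i}" for i in range(1, 8)]     # C1-C7
    labels += [f"Th{i}" for i in range(1, 13)]  # Th1-Th12
    labels += [f"L{i}" for i in range(1, 6)]    # L1-L5
    labels.append("S1")                         # sacrum
    return labels
```

Calling `vertebra_labels()` yields the 25 labels in anatomical order from the upper end (C1) down to the sacrum (S1).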
  • the determination unit 22 of the present embodiment uses, as the target for making a determination, at least either one of the first cervical vertebra and the second cervical vertebra as the upper end vertebra, and at least either one of the fifth lumbar vertebra and the sacrum as the lower end vertebra.
  • the determination unit 22 performs at least either one of a first determination which is a determination as to whether or not the first three-dimensional image V 1 includes at least a portion of the upper end vertebra, i.e., at least an area in which the upper end vertebra can be recognized and a second determination which is a determination as to whether or not the first three-dimensional image V 1 includes at least a portion of the lower end vertebra, i.e., at least an area in which the lower end vertebra can be recognized.
  • Only the first determination or only the second determination may be made, or both the first and the second determinations may be made.
  • the first determination may be made first and if the first determination is negative, the second determination may be made, while if the first determination is positive, the second determination may not be made. Contrary to this, the second determination may be made first and if the second determination is negative, the first determination may be made, while if the second determination is positive, the first determination may not be made. In this case, if both the first and second determinations are negative, the determination as to whether or not the first three-dimensional image V 1 includes at least either one of at least a portion of the upper end vertebra and at least a portion of the lower end vertebra is negative, and otherwise it is positive.
  • For the first determination, the determination unit 22 is provided with a template having a pattern representing a three-dimensional shape of at least one of the first and the second cervical vertebrae. For the second determination, the determination unit 22 is provided with a template having a pattern representing a three-dimensional shape of at least one of the fifth lumbar vertebra and the sacrum. The determination unit 22 performs matching between the first three-dimensional image and the template to search for an area having the same shape as the pattern of the template. If such an area is found, the corresponding determination is positive.
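The patent does not specify a particular matching measure for the template search; the following sketch assumes normalized cross-correlation over a sliding window as one plausible way to decide whether a template-shaped area is present in the volume. The function name and the 0.8 threshold are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def template_found(volume, template, threshold=0.8):
    """Decide whether an area shaped like `template` appears anywhere in
    `volume`, using normalized cross-correlation over all positions."""
    # All template-sized windows of the volume: shape (i, j, k, *template.shape).
    windows = sliding_window_view(volume, template.shape)
    w = windows.reshape(-1, template.size).astype(float)
    t = template.ravel().astype(float)
    # Zero-mean both sides so the correlation ignores brightness offsets.
    w = w - w.mean(axis=1, keepdims=True)
    t = t - t.mean()
    denom = np.linalg.norm(w, axis=1) * np.linalg.norm(t)
    denom[denom == 0] = np.inf  # flat windows cannot match
    ncc = (w @ t) / denom
    return bool(ncc.max() >= threshold)
```

An exact embedded copy of the template scores 1.0, so it is found regardless of the threshold chosen; a real implementation would likely restrict the search region and use a coarse-to-fine scheme for speed.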
  • In the case of a first three-dimensional image that includes neither at least a portion of the upper end vertebra nor at least a portion of the lower end vertebra, as in the example shown in FIG. 4, the determination made by the determination unit 22 is negative. In the case of an image that includes at least a portion of the upper end vertebra or the lower end vertebra, as in the examples shown in FIGS. 5 and 6, the determination made by the determination unit 22 is positive. Note that in each of FIGS. 4 to 6, a sagittal image passing through the center of the vertebrae, generated from a three-dimensional image, is shown for the purpose of explanation.
  • The labeling unit 23 performs a first labeling process if the determination made by the determination unit 22 is negative, and performs a second labeling process if it is positive.
  • The first labeling process will be described first. If the determination made by the determination unit 22 is negative, the image obtaining unit 21 obtains, from the image storage server 3, a second three-dimensional image V2 that allows recognition of a label of a vertebra of the subject of the first three-dimensional image V1.
  • a DICOM standard tag is attached to a three-dimensional image stored in the image storage server 3 .
  • the tag includes a patient name, information representing the imaging apparatus, date and time of imaging, imaged region, and the like.
  • The image obtaining unit 21 obtains, as the second three-dimensional image V2, a three-dimensional image whose tag includes the same patient name as that of the first three-dimensional image V1 and which allows recognition of a label of a vertebra.
  • As the three-dimensional image that allows recognition of a label of a vertebra, an image in which the vertebrae have already been labeled and whose tag includes information to that effect may be used.
  • Alternatively, an image with a tag that includes information indicating that at least a portion of the upper end vertebra or at least a portion of the lower end vertebra is included in the image may be used.
  • Preferably, a three-dimensional image obtained by the same imaging method as that of the first three-dimensional image V1 is obtained as the second three-dimensional image V2. For example, if the first three-dimensional image V1 is an MRI image, an MRI image is obtained as the second three-dimensional image V2.
  • Alternatively, a three-dimensional image closest in imaging time to the first three-dimensional image V1 may be obtained as the second three-dimensional image V2.
  • Further, a determination may be made as to whether or not a three-dimensional image obtained by the same imaging method as that of the first three-dimensional image V1 is stored in the image storage server 3, and if that determination is negative, a three-dimensional image closest in imaging time to the first three-dimensional image V1 may be obtained as the second three-dimensional image V2.
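The selection policy described above (same patient, label recognizable, same imaging method preferred, otherwise closest in imaging time) could be sketched as follows. The dictionary keys are hypothetical stand-ins for DICOM tag values, not actual tag names.

```python
def select_second_image(first, candidates):
    """Pick a second image per the policy above. Images are plain dicts
    with hypothetical keys: 'patient', 'method' (imaging method),
    'time' (imaging time as a number), 'label_known' (bool)."""
    # Restrict to the same patient, with a recognizable vertebra label.
    same_patient = [c for c in candidates
                    if c["patient"] == first["patient"] and c["label_known"]]
    if not same_patient:
        return None  # no usable second image exists
    # Prefer the same imaging method; otherwise fall back to any method.
    same_method = [c for c in same_patient if c["method"] == first["method"]]
    pool = same_method or same_patient
    # Among the remaining candidates, take the one closest in imaging time.
    return min(pool, key=lambda c: abs(c["time"] - first["time"]))
```

A production version would read these values from the DICOM tags of each stored image rather than from dictionaries.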
  • The labeling unit 23 labels the vertebra included in the first three-dimensional image V1 using the obtained second three-dimensional image V2.
  • First, the labeling unit 23 recognizes the position of each vertebra included in the second three-dimensional image V2 and labels each vertebra. The processing of recognizing the position of each vertebra and labeling each vertebra is identical to the second labeling process, to be described later, and a detailed description is therefore omitted here.
  • The labeling unit 23 then performs alignment between the first three-dimensional image V1 and the second three-dimensional image V2.
  • For the alignment, matching may be performed between the first three-dimensional image V1 and the second three-dimensional image V2 to search for an area that includes a vertebra having the same shape as a vertebra included in the first three-dimensional image V1.
  • If the imaging method differs between the first three-dimensional image V1 and the second three-dimensional image V2, for example, if the first three-dimensional image V1 is an MRI image while the second three-dimensional image V2 is a CT image, the alignment may be performed using the method described in W. M. Wells III, et al., "Multi-modal volume registration by maximization of mutual information", Medical Image Analysis, Vol. 1, No. 1, pp. 35-51, 1996 (Reference Literature 1).
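Mutual information, the similarity measure maximized in the Wells et al. registration method cited above, can be computed from a joint intensity histogram. The sketch below evaluates the measure for two equally sized images; an actual registration would search over spatial transformations to maximize it.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between the intensity distributions
    of two equally sized images, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = pxy > 0                         # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Identical images share all their intensity structure and score near the entropy of one image, while unrelated images score near zero, which is why maximizing this measure drives multi-modal alignment.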
  • This allows the first three-dimensional image V1 and the second three-dimensional image V2 to be aligned, as illustrated in FIG. 7. As the labels of the vertebrae are already known in the second three-dimensional image V2, the vertebrae included in the first three-dimensional image V1 can be labeled, in the example of FIG. 7, as Th2, Th3, and Th4.
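Once the two images are aligned, the known labels of the second image can be transferred to the vertebrae detected in the first image, for example by a nearest-center rule. The patent describes only the idea of labeling via the aligned image; the rule below is an illustrative assumption.

```python
import math

def transfer_labels(labeled_centers, detected_centers):
    """Assign each vertebra detected in the first image the label of the
    nearest labeled vertebra center from the aligned second image.
    labeled_centers: {label: (x, y, z)}; detected_centers: [(x, y, z)]."""
    return [min(labeled_centers,
                key=lambda lab: math.dist(labeled_centers[lab], c))
            for c in detected_centers]
```

With the FIG. 7 example, three detected centers near the known Th2, Th3, and Th4 positions would receive those labels in order.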
  • The second labeling process labels the vertebrae included in the first three-dimensional image V1 based on information of at least either one of at least a portion of the upper end vertebra and at least a portion of the lower end vertebra included in the first three-dimensional image V1.
  • The labeling unit 23 performs the second labeling process using, for example, the method described in Japanese Unexamined Patent Publication No. 2011-131040. That is, the labeling unit 23 detects the central axis of each vertebra from the first three-dimensional image V1 and generates tomographic images of planes intersecting and parallel to the detected central axis of each vertebra.
  • The labeling unit 23 then recognizes the position of each vertebra based on a characteristic amount representing the sharpness of the cross-sectional shape in each tomographic image and a characteristic amount representing the regularity of the array of the vertebrae.
  • If at least a portion of the upper end vertebra is included in the first three-dimensional image V1, the labeling unit 23 labels the recognized vertebrae in order from the upper end; if at least a portion of the lower end vertebra is included, the labeling unit 23 labels the recognized vertebrae in order from the lower end.
  • the labeling unit 23 may recognize the sacrum using the method described in M. Lootus et al., “Vertebrae Detection and Labelling in Lumbar MR Images”, MICCAI Workshop: Computational Methods and Clinical Applications for Spine Imaging, 2013, and may label the vertebrae with reference to the sacrum.
  • the method of labeling vertebrae is not limited to those described above and any method may be used.
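The ordering step of the second labeling process, labeling from the upper end or from the lower end, can be sketched as follows, assuming the detected vertebrae are consecutive and include the recognized end vertebra.

```python
# Full anatomical order from the upper end (C1) to the lower end (S1).
ALL_LABELS = ([f"C{i}" for i in range(1, 8)]
              + [f"Th{i}" for i in range(1, 13)]
              + [f"L{i}" for i in range(1, 6)]
              + ["S1"])

def label_sequence(n_detected, from_upper_end):
    """When the upper end vertebra is found, label the detected vertebrae
    top-down starting at C1; when the lower end (sacrum side) is found,
    label them bottom-up ending at S1."""
    if from_upper_end:
        return ALL_LABELS[:n_detected]
    return ALL_LABELS[-n_detected:]
```

A more general version would start the sequence from whatever anchor label was recognized (e.g. the sacrum detected by the Lootus et al. method) rather than assuming the sequence reaches C1 or S1.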
  • FIG. 8 is a flowchart illustrating processing performed in the present embodiment.
  • The image obtaining unit 21 obtains a diagnostic target first three-dimensional image V1 from the image storage server 3 (step ST1), and the determination unit 22 performs a determination as to whether or not the first three-dimensional image V1 includes at least a portion of the upper end vertebra, i.e., at least an area in which the upper end vertebra can be recognized (determination process, step ST2). If step ST2 is negative, the labeling unit 23 performs the first labeling process (step ST3). On the other hand, if step ST2 is positive, the labeling unit 23 performs the second labeling process (step ST4). Then the labeling unit 23 stores the first three-dimensional image V1 in which the vertebrae are labeled (step ST5), and the processing is terminated.
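The branching in the flowchart of FIG. 8 can be summarized as a small driver. Each step is injected as a callable; all parameter names are hypothetical stand-ins for the units described above.

```python
def process(first_image, end_vertebra_included, first_labeling,
            second_labeling, obtain_second_image):
    """Control flow of FIG. 8: determine (step ST2), then branch into the
    first labeling process (step ST3, via a second image) or the second
    labeling process (step ST4). Returns the labeling result."""
    if end_vertebra_included(first_image):       # step ST2 positive
        return second_labeling(first_image)      # step ST4
    second = obtain_second_image(first_image)    # obtain V2
    return first_labeling(first_image, second)   # step ST3
```

Storing the labeled image (step ST5) is left to the caller, mirroring the flowchart's final step.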
  • As described above, in the present embodiment, the first three-dimensional image V1 and the second three-dimensional image V2 are aligned, and a vertebra included in the first three-dimensional image V1 is labeled using the second three-dimensional image V2. Therefore, even in a case where the first three-dimensional image V1 includes only a portion of a vertebra, the vertebra included in the first three-dimensional image V1 may be labeled.
  • Further, a three-dimensional image captured by the same imaging method as that of the first three-dimensional image V1 is obtained as the second three-dimensional image V2. This allows the vertebra included in the first three-dimensional image V1 to be labeled accurately.
  • Similarly, a three-dimensional image closest in imaging time to the first three-dimensional image V1 is obtained as the second three-dimensional image V2. This allows the vertebra included in the first three-dimensional image V1 to be labeled accurately.
  • If a second three-dimensional image V2 that allows recognition of a label of a vertebra cannot be obtained, the vertebra included in the first three-dimensional image V1 cannot be labeled, and the labeling unit 23 may display a message to that effect on the display 14.
  • Alternatively, a speech message may be outputted.
  • In the embodiment described above, a vertebra is labeled using three-dimensional images as the first and the second medical images of the present disclosure, but the labeling of a vertebra may also be performed on a two-dimensional X-ray image obtained by X-ray imaging, or on a tomographic image of a sagittal cross-section passing through the center of the vertebrae.
  • Further, different types of images may be used for the first and the second medical images; for example, the first medical image may be a three-dimensional image while the second medical image is an X-ray image.
  • In this case, a pseudo two-dimensional X-ray image may be generated by projecting the three-dimensional image of the first medical image, and the labeling of a vertebra may be performed on the pseudo X-ray image.
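A pseudo two-dimensional X-ray image of the kind mentioned above can be produced by projecting the volume along one axis. The mean-intensity projection below is a simple illustrative choice; a clinical digitally reconstructed radiograph would model X-ray attenuation instead.

```python
import numpy as np

def pseudo_xray(volume, axis=1):
    """Collapse a 3D volume to a 2D pseudo X-ray image by averaging the
    voxel intensities along the chosen projection axis."""
    return volume.mean(axis=axis)
```

Projecting along the anteroposterior axis of a spine volume yields a frontal view on which the same vertebra labeling could then be applied.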

US14/842,169 2014-09-22 2015-09-01 Medical image processing apparatus, method, and recording medium Active US9679376B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014192152A JP6145892B2 (ja) 2014-09-22 2014-09-22 Medical image processing apparatus, method, and program
JP2014-192152 2014-09-22

Publications (2)

Publication Number Publication Date
US20160086327A1 US20160086327A1 (en) 2016-03-24
US9679376B2 true US9679376B2 (en) 2017-06-13

Family

ID=55444882

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/842,169 Active US9679376B2 (en) 2014-09-22 2015-09-01 Medical image processing apparatus, method, and recording medium

Country Status (4)

Country Link
US (1) US9679376B2 (en)
JP (1) JP6145892B2 (ja)
CN (1) CN105433972B (zh)
DE (1) DE102015114513A1 (de)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2017359466B2 (en) * 2016-11-11 2023-05-04 Boston Scientific Scimed, Inc. Guidance systems and associated methods
JP7120560B2 (ja) * 2017-07-03 2022-08-17 株式会社リコー Diagnosis support system, diagnosis support method, and diagnosis support program
JP7121191B2 (ja) * 2019-04-11 2022-08-17 FUJIFILM Corporation Structure separation apparatus, method, and program; learning apparatus, method, and program; and trained model
CN112349392B (zh) * 2020-11-25 2021-08-03 Peking University Third Hospital (Peking University Third Clinical Medical College) Human cervical vertebra medical image processing system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090234217A1 (en) * 2003-01-30 2009-09-17 Surgical Navigation Technologies, Inc. Method And Apparatus For Preplanning A Surgical Procedure
JP2009240617A (ja) 2008-03-31 2009-10-22 Fujifilm Corp Bone number recognition apparatus and program therefor
JP2009254600A (ja) 2008-04-17 2009-11-05 Fujifilm Corp Image display apparatus, image display control method, and program
US20110130653A1 (en) * 2009-11-27 2011-06-02 Fujifilm Corporation Vertebra segmentation apparatus, vertebra segmentation method, and recording medium with program for vertebra segmentation
US8014575B2 (en) * 2004-03-11 2011-09-06 Weiss Kenneth L Automated neuroaxis (brain and spine) imaging with iterative scan prescriptions, analysis, reconstructions, labeling, surface localization and guided intervention
US20120172700A1 (en) * 2010-05-21 2012-07-05 Siemens Medical Solutions Usa, Inc. Systems and Methods for Viewing and Analyzing Anatomical Structures
US20130060146A1 (en) * 2010-04-28 2013-03-07 Ryerson University System and methods for intraoperative guidance feedback

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5483960A (en) * 1994-01-03 1996-01-16 Hologic, Inc. Morphometric X-ray absorptiometry (MXA)
US7014633B2 (en) * 2000-02-16 2006-03-21 Trans1, Inc. Methods of performing procedures in the spine
GB0503236D0 (en) * 2005-02-16 2005-03-23 Ccbr As Vertebral fracture quantification
JP5388496B2 (ja) * 2008-07-22 2014-01-15 Canon Inc. Image display control apparatus, image display control method, and program
US9042618B2 (en) * 2009-09-17 2015-05-26 Siemens Aktiengesellschaft Method and system for detection 3D spinal geometry using iterated marginal space learning
US8160357B2 (en) * 2010-07-30 2012-04-17 Kabushiki Kaisha Toshiba Image segmentation
JP5777973B2 (ja) * 2011-08-11 2015-09-16 Hitachi Medical Corporation Magnetic resonance imaging apparatus
EP2690596B1 (en) * 2012-07-24 2018-08-15 Agfa Healthcare Method, apparatus and system for automated spine labeling

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090234217A1 (en) * 2003-01-30 2009-09-17 Surgical Navigation Technologies, Inc. Method And Apparatus For Preplanning A Surgical Procedure
US8014575B2 (en) * 2004-03-11 2011-09-06 Weiss Kenneth L Automated neuroaxis (brain and spine) imaging with iterative scan prescriptions, analysis, reconstructions, labeling, surface localization and guided intervention
US20130287276A1 (en) * 2004-03-11 2013-10-31 Kenneth L. Weiss Image creation, analysis, presentation, and localization technology
JP2009240617A (ja) 2008-03-31 2009-10-22 Fujifilm Corp Bone number recognition apparatus and program therefor
JP2009254600A (ja) 2008-04-17 2009-11-05 Fujifilm Corp Image display apparatus, image display control method, and program
US20110130653A1 (en) * 2009-11-27 2011-06-02 Fujifilm Corporation Vertebra segmentation apparatus, vertebra segmentation method, and recording medium with program for vertebra segmentation
JP2011131040A (ja) 2009-11-27 2011-07-07 Fujifilm Corp Vertebra segmentation apparatus, method, and program
US20130060146A1 (en) * 2010-04-28 2013-03-07 Ryerson University System and methods for intraoperative guidance feedback
US20120172700A1 (en) * 2010-05-21 2012-07-05 Siemens Medical Solutions Usa, Inc. Systems and Methods for Viewing and Analyzing Anatomical Structures

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Communication dated Jun. 2, 2016, from the German Patent Office in counterpart German application No. 10 2015 114 513.8.
Communication dated Nov. 22, 2016 from the Japanese Patent Office in corresponding Application No. 2014-192152.
Meelis Lootus et al., "Vertebrae Detection and Labelling in Lumbar MR Images," MICCAI Workshop: Computational Methods and Clinical Applications for Spine Imaging, 2013.
William M. Wells III et al., "Multi-modal volume registration by maximization of mutual information," Medical Image Analysis, 1996, pp. 35-51, vol. 1, No. 1.

Also Published As

Publication number Publication date
JP6145892B2 (ja) 2017-06-14
JP2016059732A (ja) 2016-04-25
CN105433972B (zh) 2020-06-05
CN105433972A (zh) 2016-03-30
US20160086327A1 (en) 2016-03-24
DE102015114513A1 (de) 2016-03-24

Similar Documents

Publication Publication Date Title
US20210158531A1 (en) Patient Management Based On Anatomic Measurements
JP6401083B2 (ja) Medical image processing apparatus, method, and program
US10390886B2 (en) Image-based pedicle screw positioning
US10580159B2 (en) Coarse orientation detection in image data
JP6184926B2 (ja) Vertebra segmentation apparatus, method, and program
US11074688B2 (en) Determination of a degree of deformity of at least one vertebral bone
US20200058098A1 (en) Image processing apparatus, image processing method, and image processing program
US9336457B2 (en) Adaptive anatomical region prediction
US9679376B2 (en) Medical image processing apparatus, method, and recording medium
JP2017527015A (ja) Device, system, and method for segmenting an image of a subject
JP7150605B2 (ja) Apparatus, system, and method for verifying image-related information of a medical image
CN111276221A (zh) Processing method, display method, and storage medium for vertebra image information
JP5363962B2 (ja) Diagnosis support system, diagnosis support program, and diagnosis support method
US10896501B2 (en) Rib developed image generation apparatus using a core line, method, and program
JP2020006150A (ja) Validity of a reference frame
US20140228676A1 (en) Determination of a physically-varying anatomical structure
JP6559927B2 (ja) Medical information management apparatus and medical information management system
JP6869086B2 (ja) Registration apparatus, registration method, and registration program
CN111210897B (zh) Processing medical images
US11983870B2 (en) Structure separating apparatus, structure separating method, and structure separating program, learning device, learning method, and learning program, and learned model
JP2010220902A (ja) Recognition result determination apparatus, method, and program
US20230121783A1 (en) Medical image processing apparatus, method, and program
US20230206662A1 (en) Image processing apparatus, method, and program
US20230102745A1 (en) Medical image display apparatus, method, and program
CN114494147A (zh) Method and apparatus for constructing an animal brain template

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJIFILM CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, YUANZHONG;REEL/FRAME:036469/0447

Effective date: 20150625

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4