WO2017072916A1 - Image processing device and image processing method, and computer program - Google Patents

Image processing device and image processing method, and computer program

Info

Publication number
WO2017072916A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
image portion
dimensional image
specifying
Prior art date
Application number
PCT/JP2015/080576
Other languages
French (fr)
Japanese (ja)
Inventor
宏美 武居
達也 織茂
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to PCT/JP2015/080576
Publication of WO2017072916A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves, involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging

Definitions

  • The present invention relates to the technical field of image processing apparatuses and image processing methods that perform image processing on a two-dimensional image and a three-dimensional image obtained by imaging the same object, and to computer programs therefor.
  • Patent Document 1 discloses a surgery support apparatus that superimposes an observation image of a human body, photographed by a video camera or the like during surgery, on a three-dimensional image of the human body photographed by an MRI apparatus or the like before surgery.
  • In recent years, techniques have been proposed for analyzing a spectral image captured by a hyperspectral camera or the like to identify the image portion showing a tumor (hereinafter referred to as the "tumor image portion") in the spectral image.
  • Based on the identified tumor image portion, the inventors of the present application are studying techniques that display the image portion corresponding to the tumor in a three-dimensional image of the human body in a manner distinguishable from other image portions, thereby supporting recognition of the tumor by a surgeon.
  • However, since the spectral image is a two-dimensional image (that is, a planar image), the tumor image portion is also a two-dimensional image. The spectral image therefore identifies only the outer surface of the tumor (that is, the portion of the tumor observable from the outside), and it is difficult to determine from the spectral image how the tumor is distributed three-dimensionally. Consequently, if the surgery support apparatus of Patent Document 1 is simply used to superimpose the tumor image portion on the three-dimensional image, only a three-dimensional image representing the outer surface of the tumor is displayed.
  • This technical problem is not limited to the case where a tumor image is superimposed on a three-dimensional image of a human body; it can arise equally when an arbitrary two-dimensional image is superimposed on an arbitrary three-dimensional image.
  • The present invention has been made in view of, for example, the above problems. It is an object of the present invention to provide an image processing apparatus, an image processing method, and a computer program that can specify, using a two-dimensional image and a three-dimensional image obtained by photographing the same object, how the object included in the two-dimensional image is distributed three-dimensionally within the three-dimensional image.
  • An image processing apparatus for solving the above problem comprises: acquiring means for acquiring a two-dimensional image and a three-dimensional image generated by photographing the same object; first specifying means for specifying a first image portion representing the object in the two-dimensional image; and second specifying means for specifying a third image portion in the three-dimensional image having the features of a second image portion in the three-dimensional image corresponding to the first image portion.
  • An image processing method for solving the above problem comprises: an acquisition step of acquiring a two-dimensional image and a three-dimensional image generated by imaging the same object; a first specifying step of specifying a first image portion representing the object in the two-dimensional image; and a second specifying step of specifying a third image portion in the three-dimensional image having the features of the second image portion corresponding to the first image portion.
  • A computer program for solving the above problem causes a computer to execute the above-described image processing method.
  • The image processing apparatus of the present embodiment includes: acquiring means for acquiring a two-dimensional image and a three-dimensional image generated by photographing the same object; first specifying means for specifying a first image portion representing the object in the two-dimensional image; and second specifying means for specifying a third image portion in the three-dimensional image having the features of a second image portion in the three-dimensional image corresponding to the first image portion.
  • According to the image processing apparatus of this embodiment, the first specifying means first specifies the first image portion representing the object in the two-dimensional image. Since the first image portion is at least a part of the two-dimensional image, it represents at least a part of the object planarly (in other words, two-dimensionally). That is, the first image portion represents at least a part of the outer surface of the object.
  • Next, based on the first image portion, the second specifying means identifies, within the three-dimensional image, a third image portion having the same features as the second image portion in the three-dimensional image corresponding to the first image portion. Since the first image portion represents at least a part of the outer surface of the object, the second image portion corresponding to it also represents at least a part of the outer surface of the object (that is, the part of the object visible from the outside). Furthermore, since the second image portion is at least a part of the three-dimensional image, it represents at least a part of the outer surface of the object stereoscopically (in other words, three-dimensionally).
  • Because the features of the second and third image portions are the same, the third image portion can be presumed, like the second image portion, to represent at least a part of the object. Moreover, rather than being identified directly from the first image portion, which represents only at least a part of the outer surface of the object, the third image portion is identified by searching the three-dimensional image for image portions having the same features as the second image portion. The third image portion can therefore be presumed to represent at least the inside of the object (that is, the part of the object that cannot be seen from the outside). And since the third image portion is at least a part of the three-dimensional image, it represents the inside of the object stereoscopically.
  • In this way, the image processing apparatus of this embodiment can specify not only the second image portion, which stereoscopically represents at least a part of the outer surface of the object, but also the third image portion, which stereoscopically represents the inside of the object. The apparatus can therefore specify how the object included in the two-dimensional image is distributed three-dimensionally within the three-dimensional image. As a result, any display device can support a user's recognition of the object by displaying the second and third image portions corresponding to the object in the three-dimensional image in a manner distinguishable from other image portions.
  • In one aspect of the image processing apparatus of this embodiment, the third image portion represents the inside of the object. The third image portion may also include the second image portion, which represents the outer surface of the object; that is, an image portion obtained by combining an image portion representing the inside of the object with an image portion representing its outer surface may be used as the third image portion.
  • In another aspect, the second image portion represents at least a part of the outer surface of the object in the three-dimensional image, and the third image portion represents at least the inside of the object in the three-dimensional image. According to this aspect, the second specifying means can specify a third image portion that represents at least the inside of the object and has the same features as the second image portion, which represents at least a part of the outer surface of the object.
  • In another aspect, the image processing apparatus further includes third specifying means for specifying the second image portion, the two-dimensional image being generated by photographing a photographing target that includes the object. The third specifying means (i) associates with the two-dimensional image a fourth image portion of the three-dimensional image that lies on the paths of the light reaching the pixels of the image sensor of the imaging device that generates the two-dimensional image by photographing the photographing target, and (ii) specifies the second image portion based on the result of this association. According to this aspect, the third specifying means can suitably specify the second image portion. Moreover, the third specifying means can specify the second image portion without specifying the position, within the two-dimensional image, of any marker that may be placed on the photographing target in order to indicate the position of the photographing target or the object.
  • In another aspect, the image processing apparatus further includes control means for controlling a display device to display another three-dimensional image that represents the second image portion and the third image portion in a manner distinguishable from other image portions. According to this aspect, the display device can display the second and third image portions corresponding to the object in the three-dimensional image in a manner distinguishable from the other image portions.
  • In another aspect, the object includes at least one of a predetermined part of a subject, an affected area of the subject, and a tumor of the subject. According to this aspect, the image processing apparatus can specify a third image portion that represents the inside of the predetermined part, affected area, or tumor of the subject and has the same features as the second image portion, which represents its outer surface.
  • In another aspect, the three-dimensional image is an MRI (Magnetic Resonance Imaging) image that shows the outer surface and the inside of the subject three-dimensionally, and the two-dimensional image is a spectral image that shows the outer surface of the subject planarly. According to this aspect, based on the MRI image and the spectral image, the image processing apparatus can specify not only the second image portion, which represents the outer surface of the subject including the outer surface of the object, but also the third image portion, which represents the inside of the subject including the inside of the object.
  • The image processing method of this embodiment includes: an acquisition step of acquiring a two-dimensional image and a three-dimensional image generated by imaging the same object; a first specifying step of specifying a first image portion representing the object in the two-dimensional image; and a second specifying step of specifying a third image portion in the three-dimensional image having the features of the second image portion corresponding to the first image portion.
  • The image processing method of this embodiment may adopt various aspects.
  • The computer program of this embodiment causes a computer to execute the image processing method of this embodiment described above.
  • The computer program of this embodiment may likewise adopt various aspects.
  • The computer program may be recorded on a recording medium.
  • As explained above, the image processing apparatus of this embodiment includes acquiring means, first specifying means, and second specifying means. The image processing method of this embodiment includes an acquisition step, a first specifying step, and a second specifying step. The computer program of this embodiment causes a computer to execute the image processing method of this embodiment. Each of them can therefore specify, using a two-dimensional image and a three-dimensional image obtained by photographing the same object, how the object included in the two-dimensional image is distributed three-dimensionally within the three-dimensional image.
  • In the embodiment described below, the image processing apparatus, the image processing method, and the computer program use an MRI image obtained by imaging a patient with an MRI (Magnetic Resonance Imaging) apparatus (that is, a three-dimensional image that represents the outer surface and internal structure of the patient three-dimensionally) and a tumor image portion obtained by analyzing a spectral image captured by a hyperspectral camera (that is, a two-dimensional image that planarly represents the outer surface of the patient's tumor) to specify how the tumor represented by the tumor image portion is distributed three-dimensionally in the MRI image. However, the present invention may be applied to any apparatus that uses a two-dimensional image and a three-dimensional image obtained by photographing the same object to specify how the object included in the two-dimensional image is distributed three-dimensionally within the three-dimensional image.
  • FIG. 1 is a block diagram showing the configuration of the surgery support system 1 of the present embodiment.
  • The surgery support system 1 includes an MRI apparatus 11, a hyperspectral camera 12, a pointer 13, a position measuring device 14, an image processing device 15, and a display device 16.
  • The MRI apparatus 11 images the outer surface and internal structure of the patient to generate an MRI image: a three-dimensional image that represents the outer surface and internal structure of the patient three-dimensionally.
  • An MRI image is an aggregate of tomographic images of the patient (in other words, two-dimensional slice images).
  • However, the MRI image may be any image capable of representing the outer surface and internal structure of the patient three-dimensionally; for example, it may be a three-dimensional model image or three-dimensional volume data instead of an aggregate of tomographic images.
  • The MRI image generated by the MRI apparatus 11 is input to the image processing device 15.
  • The patient is the operation target, and a part of the patient's body becomes the surgical site. In this embodiment, the surgical site is a part of the patient's head, and the operation is performed on a tumor present at that surgical site.
  • The patient and the surgical site are specific examples of the "photographing target", and the tumor is a specific example of the "object".
  • The surgery support system 1 of this embodiment may also be used when performing an arbitrary operation different from an operation on a tumor of the patient's head (for example, an operation performed on an arbitrary part of the patient).
  • The patient may be a human or any living body other than a human (for example, an animal).
  • At least three markers m1 are installed around the surgical site.
  • The marker m1 is a marker (for example, a fiducial marker) that can be imaged by the MRI apparatus 11.
  • The hyperspectral camera 12 photographs the patient to generate a spectral image of the patient (that is, a collection of images, one for each light spectrum (wavelength)). The spectral image generated by the hyperspectral camera 12 is input to the image processing device 15.
  • The hyperspectral camera 12 is provided with at least three markers m2 (three markers m2 in the example shown in FIG. 1). The markers m2 are used by the position measuring device 14 to measure the position of the hyperspectral camera 12 (specifically, its three-dimensional position and orientation in the real space coordinate system, which defines three-dimensional positions in the real space where the patient actually exists).
  • The pointer 13 is a surgical tool for pointing at the markers m1 installed around the surgical site. The surgeon operates the pointer 13 so that its tip points at a marker m1.
  • The pointer 13 is provided with at least three markers m3 (three markers m3 in the example shown in FIG. 1). The markers m3 are used by the position measuring device 14 to measure the position of the pointer 13 (specifically, its three-dimensional position and orientation in the real space coordinate system).
  • The position measuring device 14 measures the positions of the markers m2 and m3 (specifically, their three-dimensional positions in the real space coordinate system). The position measuring device 14 includes a stereo camera (a two-lens camera) and an LED (Light Emitting Diode). The LED emits measurement light (for example, infrared light) toward each of the markers m2 and m3, and the stereo camera detects the measurement light reflected by each marker. The detection result of the measurement light by the position measuring device 14 (that is, the measurement result) is input to the image processing device 15.
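  • The patent does not disclose the internals of the position measuring device 14, but the triangulation that such a stereo camera performs can be sketched as follows: a minimal Python sketch, assuming calibrated projection matrices for the two lenses; the matrices and the detected pixel coordinates below are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices of the two lenses of the stereo
# camera, obtained beforehand by stereo calibration (P = K [R | t]).
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])                   # left lens at the origin
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 10 cm baseline

# Hypothetical (already undistorted) pixel coordinates at which the
# reflected measurement light of one marker is detected in each image.
pt_left = np.array([[320.0], [241.5]])
pt_right = np.array([[289.7], [241.5]])

# Triangulate; cv2.triangulatePoints returns homogeneous 4x1 coordinates.
X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
marker_pos = (X_h[:3] / X_h[3]).ravel()  # 3D marker position in camera coordinates
```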
  • The image processing device 15 specifies the position of the hyperspectral camera 12 and the position of the pointer 13 (and thereby, as described in detail later, the positions of the markers m1) based on the measurement results of the position measuring device 14. The image processing device 15 also performs image processing on the MRI image and the spectral image.
  • The image processing device 15 includes a CPU (Central Processing Unit) 151 and a memory 152. The memory 152 stores a computer program for causing the image processing device 15 to perform the image processing, and executing this program forms logical processing blocks for the image processing inside the CPU 151. The computer program need not be recorded in the memory 152; for example, the CPU 151 may execute a computer program downloaded via a network.
  • As the logical processing blocks formed in the CPU 151, the image processing device 15 includes an MRI image acquisition unit 1511 (a specific example of the "acquiring means"), a marker position specifying unit 1512, a spectral image acquisition unit 1513, a tumor specifying unit 1514, a marker position specifying unit 1515, an association processing unit 1516, a three-dimensional model generation unit 1517, and a three-dimensional viewer processing unit 1518.
  • The MRI image acquisition unit 1511 acquires the MRI image generated by the MRI apparatus 11.
  • The marker position specifying unit 1512 analyzes the MRI image acquired by the MRI image acquisition unit 1511 to specify the positions of the markers m1 included in the MRI image (specifically, their three-dimensional positions in the MRI coordinate system, which defines three-dimensional positions within the MRI image).
  • The spectral image acquisition unit 1513 acquires the spectral image generated by the hyperspectral camera 12.
  • The tumor specifying unit 1514 analyzes the spectral image acquired by the spectral image acquisition unit 1513 to specify the position of the tumor (specifically, its two-dimensional position in the spectral coordinate system, which defines two-dimensional positions within the spectral image). That is, the tumor specifying unit 1514 specifies the tumor image portion, the image portion representing the tumor in the spectral image. The tumor image portion is a specific example of the "first image portion".
  • The marker position specifying unit 1515 specifies the positions of the markers m2 based on the measurement results of the position measuring device 14. In addition, the marker position specifying unit 1515 specifies the positions of the markers m1 (specifically, their three-dimensional positions in the real space coordinate system) based on the positions of the markers m3 provided on the pointer 13 pointing at each marker m1.
  • The association processing unit 1516 mainly performs the three association processes described below.
  • As the first association process, the association processing unit 1516 associates the positions of the markers m1 in the MRI coordinate system, specified by the marker position specifying unit 1512, with the positions of the markers m1 in the real space coordinate system, specified by the marker position specifying unit 1515.
  • As the second association process, the association processing unit 1516 associates the MRI image with the spectral image (in particular, the tumor image portion). More specifically, based on the tumor image portion, the association processing unit 1516 identifies the image portion of the MRI image corresponding to the tumor image portion (that is, the image portion corresponding to the outer surface of the tumor). A specific example of this process is described in detail later, so its description is omitted here.
  • Since the spectral image is a two-dimensional image, the tumor image portion is also a two-dimensional image. The tumor image portion is therefore an image portion that represents the outer surface (in other words, the outer shell) of the tumor two-dimensionally, as a plane. Here, the outer surface of the tumor means the part of the tumor visible from the outside. Accordingly, the image portion of the MRI image corresponding to the tumor image portion likewise represents the outer surface of the tumor; however, because it is part of a three-dimensional image, it represents the outer surface of the tumor three-dimensionally (in other words, stereoscopically), as a curved surface.
  • Hereinafter, the image portion of the MRI image corresponding to the tumor image portion is referred to as the "outer surface image portion". The outer surface image portion is a specific example of the "second image portion". The outer surface image portion represents at least a part of the outer surface of the patient's tumor (typically, the part of the tumor's outer surface included in the imaging region of the hyperspectral camera 12).
  • As the third association process, the association processing unit 1516 identifies, based on the outer surface image portion, the image portion corresponding to the inside of the tumor (that is, the image portion representing the structure inside the tumor). Here, the inside of the tumor means the part of the tumor that cannot be seen from the outside. A specific example of this process is described in detail later, so its description is omitted here.
  • The image portion of the MRI image corresponding to the inside of the tumor represents the inside of the tumor three-dimensionally (in other words, stereoscopically), as a three-dimensional structure. Hereinafter, this image portion is referred to as the "internal image portion". The internal image portion is a specific example of the "third image portion".
  • The three-dimensional model generation unit 1517 generates a three-dimensional model of the patient that represents the tumor in a manner distinguishable from other parts, based on the processing results of the association processing unit 1516 (that is, the results of specifying the outer surface image portion and the internal image portion) and the MRI image acquired by the MRI image acquisition unit 1511.
  • The three-dimensional viewer processing unit 1518 generates the observation image that would be observed when the surgeon views the three-dimensional model generated by the three-dimensional model generation unit 1517 from a desired viewpoint.
  • The display device 16 displays the observation image generated by the three-dimensional viewer processing unit 1518, so that the surgeon can recognize how the tumor is distributed three-dimensionally inside the patient. The display device 16 may also display other arbitrary information (for example, information related to surgery support).
  • FIG. 3 is a flowchart showing an operation flow of the surgery support system 1.
  • First, the MRI image acquisition unit 1511 acquires an MRI image (step S11). Specifically, the markers m1 are first installed around the patient's surgical site, after which the patient lies on the bed provided in the MRI apparatus 11, and the MRI apparatus 11 sequentially takes tomographic images of the patient (see FIG. 4). That is, the MRI apparatus 11 generates an MRI image, which the MRI image acquisition unit 1511 acquires.
  • Thereafter, the patient is transferred from the bed provided in the MRI apparatus 11 to the operating table. The patient is transported so that the positions where the markers m1 are installed do not change (that is, so that they remain fixed). Alternatively, the bed provided in the MRI apparatus 11 may itself be used as the operating table. The positions where the markers m1 are installed do not change (remain fixed) while the series of operations shown in FIG. 3 is performed.
  • So that the installed positions of the markers m1 do not change, the markers m1 may be driven directly into the bone of the patient's head before the MRI image is acquired; for example, a marker m1 may be driven into the skull exposed by incising and folding back the scalp of the patient lying on the bed.
  • Thereafter, the marker position specifying unit 1512 analyzes the MRI image acquired in step S11 to specify the positions of the markers m1 included in it (step S12). That is, the marker position specifying unit 1512 specifies the three-dimensional positions of the markers m1 in the MRI coordinate system. For example, the marker position specifying unit 1512 identifies the image portion corresponding to each marker m1 in the MRI image using a matching process or the like, and then specifies the position of that image portion.
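  • The matching process itself is not specified in the patent; as one hedged sketch, a marker could be located by cross-correlating the MRI volume with a small template of the marker's appearance (the template and volume below are hypothetical stand-ins):

```python
import numpy as np
from scipy import ndimage

def find_marker(mri: np.ndarray, template: np.ndarray):
    """Return the voxel index (MRI coordinate system) at which a small
    template of the marker m1 correlates most strongly with the volume."""
    score = ndimage.correlate(mri, template - template.mean(), mode="constant")
    return np.unravel_index(np.argmax(score), score.shape)

mri = np.random.rand(64, 64, 64)                         # stand-in MRI volume
template = np.zeros((5, 5, 5)); template[2, 2, 2] = 1.0  # hypothetical marker signature
pos = find_marker(mri, template)                          # 3D position of a marker m1
```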
  • The marker position specifying unit 1515 then specifies the positions of the markers m1 based on the measurement results of the position measuring device 14 (step S13). That is, the marker position specifying unit 1515 specifies the three-dimensional positions of the markers m1 in the real space coordinate system. Specifically, the surgeon operates the pointer 13 so that its tip points at a marker m1. With the marker m1 pointed at by the tip of the pointer 13, the position measuring device 14 detects the measurement light reflected by the markers m3, and the marker position specifying unit 1515 specifies the positions of the markers m3 from that measurement result.
  • Once the positions of the markers m3 are specified, the position of the pointer 13 (where "position" here includes the orientation of the pointer 13) is also specified, and the position of the pointer 13 substantially corresponds to the position of the marker m1 it points at. This is because the shape of the pointer 13 (the positional relationship between the at least three markers m3 and the tip) is known. The marker position specifying unit 1515 can therefore specify the position of each marker m1 in the real space coordinate system by specifying the positions of the markers m3 (that is, by specifying the position of the pointer 13). This operation is performed for all the markers m1.
  • Thereafter, the association processing unit 1516 associates the positions of the markers m1 in the MRI coordinate system, specified in step S12, with the positions of the markers m1 in the real space coordinate system, specified in step S13 (step S14). As a result of this association, the association processing unit 1516 can calculate a transformation matrix that converts positions in the MRI coordinate system into positions in the real space coordinate system, or vice versa. That is, the association processing unit 1516 can specify the position in the real space coordinate system corresponding to an arbitrary position in the MRI coordinate system, and conversely, the position in the MRI coordinate system corresponding to an arbitrary position in the real space coordinate system.
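  • The patent does not name the algorithm used in step S14; a standard choice for computing such a transformation from paired marker positions is least-squares rigid registration (the Kabsch method). A minimal sketch under that assumption, with hypothetical marker coordinates:

```python
import numpy as np

def rigid_transform(p_mri: np.ndarray, p_real: np.ndarray):
    """Least-squares rotation R and translation t with p_real ~= R @ p_mri + t.

    p_mri, p_real: (N, 3) positions of the same markers m1 in the MRI
    coordinate system and in the real space coordinate system (N >= 3,
    not collinear).
    """
    c_mri, c_real = p_mri.mean(axis=0), p_real.mean(axis=0)
    H = (p_mri - c_mri).T @ (p_real - c_real)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_real - R @ c_mri
    return R, t

# Hypothetical positions of three markers m1 in both coordinate systems.
mri_pts = np.array([[10.0, 0.0, 0.0], [0.0, 12.0, 0.0], [0.0, 0.0, 9.0]])
real_pts = np.array([[1.2, 0.4, 0.1], [0.3, 1.5, 0.2], [0.2, 0.3, 1.0]])
R, t = rigid_transform(mri_pts, real_pts)

# Any MRI-coordinate position can now be mapped into real space, and back
# again with the inverse transform (R.T, -R.T @ t).
p_real = R @ np.array([5.0, 5.0, 5.0]) + t
```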
  • The spectral image acquisition unit 1513 acquires a spectral image (step S21). Specifically, the hyperspectral camera 12 photographs the patient positioned on the operating table; that is, the hyperspectral camera 12 generates a spectral image, which the spectral image acquisition unit 1513 acquires.
  • Thereafter, the tumor specifying unit 1514 analyzes the spectral image acquired in step S21 to specify the position of the tumor (step S22). That is, as shown in FIG. 5, the tumor specifying unit 1514 specifies the two-dimensional position of the tumor in the spectral coordinate system, which defines two-dimensional positions within the spectral image. In other words, the tumor specifying unit 1514 specifies the tumor image portion, the image portion representing the tumor in the spectral image (step S22).
  • A known operation may be used as the operation for specifying the position of the tumor from the spectral image.
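  • As a hedged illustration only (the patent leaves the operation open), a per-pixel classifier over the spectral bands could flag candidate tumor pixels; here a simple two-band reflectance-ratio threshold, whose band indices and threshold are hypothetical values that would in practice come from training data:

```python
import numpy as np

def tumor_mask(cube: np.ndarray, band_a: int, band_b: int, thresh: float):
    """Flag pixels whose reflectance ratio between two wavelength bands
    exceeds a threshold; the returned (H, W) boolean mask plays the role
    of the tumor image portion."""
    ratio = cube[:, :, band_a] / (cube[:, :, band_b] + 1e-9)
    return ratio > thresh

cube = np.random.rand(480, 640, 100)   # stand-in hyperspectral cube, 100 bands
mask = tumor_mask(cube, band_a=42, band_b=17, thresh=1.3)
```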
  • Thereafter, the image processing device 15 performs an operation of associating the tumor image portion specified from the spectral image with the MRI image, thereby specifying the outer surface image portion corresponding to the outer surface of the tumor in the MRI image and the internal image portion corresponding to the inside of the tumor in the MRI image (steps S31 to S34).
  • Specifically, the marker position specifying unit 1515 first specifies the three-dimensional position of the hyperspectral camera 12 in the real space coordinate system based on the measurement results of the position measuring device 14 (step S31). The position measuring device 14 detects the measurement light reflected by the markers m2 installed on the hyperspectral camera 12, and the marker position specifying unit 1515 specifies the positions of the markers m2 from that measurement result. Once the positions of the markers m2 are specified, the position of the hyperspectral camera 12 (where "position" here includes the orientation of the hyperspectral camera 12) is also specified.
  • The position of the hyperspectral camera 12 specified in step S31 is expressed by Equation 1. The 3-row by 3-column matrix composed of rc11, rc12, rc13, rc21, rc22, rc23, rc31, rc32, and rc33 in Equation 1 represents the rotation amount of the hyperspectral camera 12 in the real space coordinate system (that is, the rotation amount in the yaw direction (in other words, the tilt amount; the same applies hereinafter), the rotation amount in the roll direction, and the rotation amount in the pitch direction). Tcx, Tcy, and Tcz in Equation 1 represent the translation amounts of the hyperspectral camera 12 from the origin of the real space coordinate system along the X, Y, and Z axes, respectively.
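  • Equation 1 itself did not survive extraction. From the prose above, it is presumably the standard rigid camera pose, which can be plausibly reconstructed as:

```latex
% Plausible reconstruction of Equation 1 (the original equation was lost):
% the pose of the hyperspectral camera 12 in the real space coordinate
% system, a 3x3 rotation block plus a translation vector.
\[
C =
\begin{pmatrix}
r_{c11} & r_{c12} & r_{c13} & T_{cx} \\
r_{c21} & r_{c22} & r_{c23} & T_{cy} \\
r_{c31} & r_{c32} & r_{c33} & T_{cz} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]
```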
  • Thereafter, the association processing unit 1516 identifies the outer surface image portion corresponding to the tumor image portion in the MRI image (step S32). That is, based on the tumor image portion, which is a two-dimensional image, the association processing unit 1516 identifies in the MRI image, which is a three-dimensional image, the outer surface image portion that shows the outer surface of the tumor three-dimensionally as a curved (or uneven) surface.
  • The association processing unit 1516 may use any operation for specifying, based on the tumor image portion (a two-dimensional image), the outer surface image portion of the MRI image (a three-dimensional image) corresponding to it; a known operation for specifying the image portion of a three-dimensional image corresponding to a two-dimensional image may be used. An example of such an operation is described below.
  • First, the association processing unit 1516 specifies internal parameters indicating the state of the optical system (for example, the lens group) of the hyperspectral camera 12. The internal parameters include the focal length fcx of the optical system with respect to the X axis of the spectral coordinate system (that is, the focal length corresponding to the photographing magnification along the X axis) and the focal length fcy with respect to the Y axis (that is, the focal length corresponding to the photographing magnification along the Y axis). The internal parameters also include the deviation of the center of the image sensor of the hyperspectral camera 12 (that is, the center of the spectral image) from the optical axis of the optical system; this deviation has a component ccx along the X axis and a component ccy along the Y axis of the spectral coordinate system. The association processing unit 1516 can specify the internal parameters using a calibration result of the hyperspectral camera 12 based on imaging of a checker pattern.
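  • The patent does not name a calibration method or library; as one sketch, the checker-pattern calibration could be done with OpenCV's calibrateCamera, whose output matrix contains the focal lengths and principal point that correspond to the parameters named above (board size and image files are hypothetical):

```python
import glob
import numpy as np
import cv2

pattern = (9, 6)  # hypothetical checkerboard with 9x6 inner corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("checker_*.png"):  # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]; fx, fy correspond to fcx, fcy,
# and the deviations ccx, ccy in the text correspond to the offset of the
# principal point (cx, cy) from the image center.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
```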
  • The association processing unit 1516 further specifies an external parameter indicating the installation state of the hyperspectral camera 12 in the real space coordinate system. The external parameter is the position of the hyperspectral camera 12 in the real space coordinate system expressed by Equation 1.
  • Based on the internal and external parameters, the association processing unit 1516 can specify, in the real space coordinate system, the path of the light reaching each pixel of the image sensor of the hyperspectral camera 12. Furthermore, since positions in the real space coordinate system can be converted into positions in the MRI coordinate system, the association processing unit 1516 can also specify the path of the light reaching each pixel in the MRI coordinate system. As a result, the association processing unit 1516 can specify, on the outer surface of the patient represented by the MRI image, the portion located on the light path reaching each pixel. For example, the association processing unit 1516 can specify the position (X1(3), Y1(3), Z1(3)), in the MRI coordinate system, of the point on the patient's outer surface that the light path reaching a certain pixel first intersects in the MRI image.
  • The portion of the patient's outer surface located on the light path reaching each pixel is the portion captured by the hyperspectral camera 12, and it is also a portion imaged by the MRI apparatus 11. Accordingly, the image portion of the MRI image corresponding to the patient's outer surface located on the light paths reaching the pixels corresponds to the spectral image; that is, the image portion located on the light paths in the MRI image corresponds to the spectral image.
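  • A minimal sketch of this per-pixel operation, assuming the MRI volume has been thresholded into a boolean occupancy grid in MRI coordinates: march along each pixel's back-projected ray and keep the first patient voxel it hits (step sizes and shapes are hypothetical):

```python
import numpy as np

def first_surface_hit(origin, direction, volume, voxel_size=1.0,
                      step=0.5, max_t=500.0):
    """March along the ray origin + t * direction (both already expressed
    in the MRI coordinate system) and return the first point at which the
    boolean occupancy volume is True, i.e. the point on the patient's
    outer surface intersected by the light path reaching this pixel, or
    None if the ray misses the patient."""
    direction = direction / np.linalg.norm(direction)
    for t in np.arange(0.0, max_t, step):
        p = origin + t * direction
        idx = tuple((p / voxel_size).astype(int))
        if all(0 <= i < n for i, n in zip(idx, volume.shape)) and volume[idx]:
            return p  # plays the role of (X1(3), Y1(3), Z1(3)) for this pixel
    return None

# Hypothetical usage: a 256^3 occupancy grid with 1 mm voxels, the camera
# center and one pixel's ray direction already converted into MRI coordinates.
volume = np.zeros((256, 256, 256), dtype=bool)
volume[100:150, 100:150, 100:150] = True
hit = first_surface_hit(np.array([128.0, 128.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]), volume)
```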
  • The association processing unit 1516 therefore associates with the spectral image the image portion located on the light paths in the MRI image (that is, the image portion whose positions (X1(3), Y1(3), Z1(3)) have been specified). Specifically, as described above, the position (X1(3), Y1(3), Z1(3)) in the MRI coordinate system can be converted into the position (X1(3)', Y1(3)', Z1(3)') in the real space coordinate system. Furthermore, each pixel that the light reaches corresponds one-to-one to specific coordinates in the spectral coordinate system, which specifies two-dimensional positions within the spectral image. Based on the internal and external parameters, the association processing unit 1516 can therefore associate the position (X1(3)', Y1(3)', Z1(3)') in the real space coordinate system with the position (X1(2), Y1(2)) in the spectral coordinate system. That is, the association processing unit 1516 can associate the position (X1(3), Y1(3), Z1(3)) in the MRI coordinate system with the position (X1(2), Y1(2)) in the spectral coordinate system. In this embodiment, the association processing unit 1516 performs this association using Equation 2, in which s1 represents a predetermined coefficient or a predetermined conversion matrix.
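  • Equation 2 also did not survive extraction. Given the internal parameters fcx, fcy, ccx, ccy and the camera pose of Equation 1, it is presumably the standard pinhole projection, plausibly reconstructible as:

```latex
% Plausible reconstruction of Equation 2 (the original was lost): the
% projection that associates a real-space point with spectral-image
% coordinates. (R  T) denotes the world-to-camera transform obtained by
% inverting the camera pose of Equation 1; s1 is the coefficient named
% in the text.
\[
s_1
\begin{pmatrix} X1^{(2)} \\ Y1^{(2)} \\ 1 \end{pmatrix}
=
\begin{pmatrix}
f_{cx} & 0 & c_{cx} \\
0 & f_{cy} & c_{cy} \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} R & T \end{pmatrix}
\begin{pmatrix} X1^{(3)\prime} \\ Y1^{(3)\prime} \\ Z1^{(3)\prime} \\ 1 \end{pmatrix}
\]
```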
  • In this way, the association processing unit 1516 can associate the MRI image (in particular, the image portion of the MRI image corresponding to the patient's outer surface located on the light paths reaching the pixels) with the spectral image. That is, the association processing unit 1516 can specify the image portion of the spectral image corresponding to a given image portion of the MRI image and, conversely, the image portion of the MRI image corresponding to a given image portion of the spectral image. Since the position of the tumor image portion in the spectral image has already been specified, once the MRI image and the spectral image are associated, the association processing unit 1516 can specify the outer surface image portion corresponding to the tumor image portion in the MRI image.
  • Notably, the association processing unit 1516 can associate the MRI image with the spectral image without specifying the position, in the spectral coordinate system, of the markers m1 included in the spectral image. This is because the association operation performed by the association processing unit 1516 is premised on the fact that the outer surface image portion corresponding to the patient's outer surface in the MRI image corresponds to the spectral image generated by photographing that same outer surface: the spectral image represents the patient's outer surface two-dimensionally as a plane, while the MRI image represents the same outer surface of the same patient three-dimensionally as a curved surface. In other words, using the fact that the spectral image (and hence the tumor image portion) and the outer surface image portion each represent the same outer surface of the same patient, the association processing unit 1516 specifies, based on the tumor image portion, which represents the patient's outer surface two-dimensionally as a plane, the outer surface image portion, which represents that same outer surface three-dimensionally as a curved surface.
  • Thereafter, the association processing unit 1516 specifies the features of the outer surface image portion specified in step S32 (step S33). The "features" specified in step S33 may be any features unique to or related to the image; examples include luminance, hue, saturation, and brightness.
  • Since the features specified in step S33 are features of the outer surface image portion showing the outer surface of the tumor, they can be presumed to be features of the image portions corresponding to the tumor in the MRI image. The association processing unit 1516 therefore specifies, as the internal image portion representing the inside of the tumor, the image portions in the MRI image having the same features as those specified in step S33 (step S34). As a result, as shown in FIG. 7, not only the outer surface image portion representing the outer surface of the tumor but also the internal image portion representing the inside of the tumor is specified on the MRI image.
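  • As a hedged sketch of steps S33 and S34, assuming voxel intensity as the "feature" (the patent names luminance, hue, saturation, and brightness as candidates) and a hypothetical tolerance: take a statistic of the outer surface image portion, select the MRI voxels that match it, and keep the connected components reachable from the surface so that unrelated look-alike tissue elsewhere is not picked up:

```python
import numpy as np
from scipy import ndimage

def internal_image_portion(mri: np.ndarray, surface_mask: np.ndarray,
                           tol: float = 0.1):
    """Select MRI voxels whose values match the feature of the outer
    surface image portion (here its mean intensity, within +/- tol) and
    keep only the connected components touching the surface mask."""
    feature = mri[surface_mask].mean()        # step S33: feature of the surface portion
    candidate = np.abs(mri - feature) <= tol  # step S34: same-feature voxels
    labels, _ = ndimage.label(candidate)
    keep = np.unique(labels[surface_mask])    # components reached from the surface
    keep = keep[keep != 0]
    return np.isin(labels, keep)              # outer surface + inside of the tumor

# Hypothetical usage with a stand-in volume and a small surface mask.
mri = np.random.rand(64, 64, 64)
surface = np.zeros(mri.shape, dtype=bool)
surface[32, 30:34, 30:34] = True
tumor_3d = internal_image_portion(mri, surface, tol=0.05)
```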
  • Thereafter, the three-dimensional model generation unit 1517 generates a three-dimensional model of the patient that represents the tumor in a manner distinguishable from other parts (step S41). That is, the three-dimensional model generation unit 1517 generates a three-dimensional model of the patient that represents the outer surface image portion and the internal image portion in a manner distinguishable from other image portions (step S41). For example, the three-dimensional model generation unit 1517 may generate the three-dimensional model by modifying the MRI image so that it represents the outer surface image portion and the internal image portion in a manner distinguishable from other image portions, or by newly generating such an MRI image. One example of a distinguishable display manner is a display method that highlights the tumor.
  • Thereafter, the three-dimensional viewer processing unit 1518 generates the observation image that would be observed when the surgeon views the three-dimensional model generated in step S41 from a desired viewpoint (step S42), and outputs the generated observation image to the display device 16.
  • The display device 16 displays the observation image generated in step S42 (step S43). By viewing the observation image, the surgeon can recognize how the tumor is distributed three-dimensionally inside the patient.
  • Thereafter, the MRI image acquisition unit 1511 determines whether it is time to acquire an MRI image again (step S51). For example, the MRI image acquisition unit 1511 may determine that it is time to acquire an MRI image again when a predetermined time has elapsed since the last MRI image was acquired, or when the user requests that an MRI image be acquired again.
  • When it is determined that it is time to acquire an MRI image again (step S51: Yes), the MRI image acquisition unit 1511 acquires an MRI image again (step S11), and the operations from step S12 onward are performed again.
  • When it is determined that it is not yet time to acquire an MRI image again (step S51: No), the three-dimensional model generation unit 1517 adds an additional model, corresponding to an additional image, to the three-dimensional model generated in step S41 (step S52). An example of the additional image is an image of a surgical instrument (for example, a knife) handled by the surgeon.
  • To add the additional model, the image processing device 15 performs the following operation. A marker is placed on the surgical instrument, and the position measuring device 14 detects the measurement light reflected by that marker. The marker position specifying unit 1515 specifies the position of the surgical instrument in the real space coordinate system based on the measurement result of the position measuring device 14, and the association processing unit 1516 converts that position into the position of the surgical instrument in the MRI coordinate system. The position of the surgical instrument in the MRI coordinate system corresponds to the position where the additional model should be placed in the three-dimensional model. The three-dimensional model generation unit 1517 can thus add the additional model corresponding to the surgical instrument to the three-dimensional model of the patient so that the positional relationship between the patient's three-dimensional model and the additional model matches the positional relationship between the actual patient and the actual surgical instrument.
  • Thereafter, it is determined whether or not the surgery is completed (step S53). When it is determined that the surgery has not been completed (step S53: No), the operations from step S51 onward are repeated. When it is determined that the surgery is completed (step S53: Yes), the surgery support system 1 ends the operation shown in FIG. 3.
  • As described above, the image processing device 15 of this embodiment can specify not only the outer surface image portion, which represents the outer surface of the tumor three-dimensionally, but also the internal image portion, which represents the inside of the tumor three-dimensionally. That is, the image processing device 15 can specify how the tumor identified from the spectral image, a two-dimensional image, is distributed three-dimensionally in the MRI image, a three-dimensional image. As a result, the display device 16 can display a three-dimensional model of the patient that represents the tumor in a manner distinguishable from other parts, and the surgeon can appropriately recognize the tumor.
  • The surgery support system 1 may include, in addition to or instead of the MRI apparatus 11, any apparatus that can generate an image representing the outer surface and internal structure of the patient three-dimensionally by imaging the patient. Examples of such an apparatus include a CT (Computed Tomography) apparatus and a PET (Positron Emission Tomography) apparatus.
  • After specifying the outer surface image portion representing the outer surface of the tumor and the internal image portion representing the inside of the tumor, the image processing device 15 may perform non-rigid registration processing on at least one of the spectral image and the MRI image (or on the outer surface image portion and the internal image portion). In this case, the image processing device 15 can generate a three-dimensional model that represents a tumor whose shape has changed in a manner distinguishable from other parts.
  • The internal image portion may include the outer surface image portion as a part of itself, because the outer shell of the internal image portion substantially corresponds to the outer surface image portion. For this reason, the internal image portion may represent not only the inside of the tumor but also its outer surface.
  • The present invention can be changed as appropriate without departing from the gist or concept of the invention that can be read from the claims and the entire specification, and image processing apparatuses, image processing methods, and computer programs involving such changes are also included within the technical idea of the present invention.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Through the present invention, using a two-dimensional image and a three-dimensional image obtained by capturing images of the same object, the three-dimensional distribution, within the three-dimensional image, of an object included in the two-dimensional image is specified. An image processing device (15) is provided with: an acquiring means for acquiring a two-dimensional image and a three-dimensional image generated by capturing images of the same object; a first specifying means (1514) for specifying a first image portion representing the object from within the two-dimensional image; and a second specifying means (1516) for specifying a third image portion in the three-dimensional image having a feature of the second image portion in the three-dimensional image corresponding to the first image portion.

Description

Image processing apparatus, image processing method, and computer program
 The present invention relates to the technical field of image processing apparatuses and image processing methods that perform image processing on a two-dimensional image and a three-dimensional image obtained by imaging the same object, and to computer programs therefor.
 There is known a surgery support apparatus that superimposes an observation image of a human body, photographed by a video camera or the like during surgery, on a three-dimensional image of the human body photographed by an MRI apparatus or the like before surgery (for example, Patent Document 1).
JP 2010-274044 A
 In recent years, techniques have been proposed for analyzing a spectral image captured by a hyperspectral camera or the like to identify the image portion showing a tumor (hereinafter referred to as the "tumor image portion") in the spectral image. Based on the identified tumor image portion, the inventors of the present application are studying the development of a technique that supports recognition of a tumor by a surgeon or the like by displaying the image portion corresponding to the tumor in a three-dimensional image of the human body in a manner distinguishable from other image portions.
 One conceivable technique would be to divert the technique described in Patent Document 1 and superimpose the identified tumor image portion on the image portion of the three-dimensional image corresponding to the tumor. However, since the spectral image is a two-dimensional image (that is, a planar image), the tumor image portion is also a two-dimensional image. The spectral image therefore identifies only the outer surface of the tumor (that is, the portion of the tumor observable from the outside), and it is difficult to determine from the spectral image how the tumor is distributed three-dimensionally. Consequently, if the surgery support apparatus of Patent Document 1 is simply used to superimpose the tumor image portion on the three-dimensional image, only a three-dimensional image representing the outer surface of the tumor is displayed. In other words, even if a tumor image portion that merely represents the outer surface of the tumor is superimposed on a three-dimensional image capable of showing the outer surface and the inside of the human body three-dimensionally, a technical problem arises in that no three-dimensional image can be displayed that clearly shows how the tumor is distributed three-dimensionally inside the human body.
 Note that the above-described technical problem is not limited to the case where a tumor image is superimposed on a three-dimensional image of a human body; it can arise equally when an arbitrary two-dimensional image is superimposed on an arbitrary three-dimensional image.
 The present invention has been made in view of, for example, the above problems. It is an object of the present invention to provide an image processing apparatus, an image processing method, and a computer program that can specify, using a two-dimensional image and a three-dimensional image obtained by photographing the same object, how the object included in the two-dimensional image is distributed three-dimensionally within the three-dimensional image.
 An image processing apparatus for solving the above problem comprises: acquiring means for acquiring a two-dimensional image and a three-dimensional image generated by photographing the same object; first specifying means for specifying a first image portion representing the object in the two-dimensional image; and second specifying means for specifying a third image portion in the three-dimensional image having the features of a second image portion in the three-dimensional image corresponding to the first image portion.
 An image processing method for solving the above problem comprises: an acquisition step of acquiring a two-dimensional image and a three-dimensional image generated by imaging the same object; a first specifying step of specifying a first image portion representing the object in the two-dimensional image; and a second specifying step of specifying a third image portion in the three-dimensional image having the features of the second image portion in the three-dimensional image corresponding to the first image portion.
 A computer program for solving the above problem causes a computer to execute the above-described image processing method.
FIG. 1 is a block diagram showing the configuration of the surgery support system. FIG. 2 is a block diagram showing the configuration of the image processing apparatus. FIG. 3 is a flowchart showing the flow of operation of the surgery support system. FIG. 4 is a schematic diagram showing an MRI image. FIG. 5 is a plan view showing a spectral image in which the tumor image portion has been specified. FIG. 6 is a schematic diagram showing the concept of the operation of specifying the outer surface image portion. FIG. 7 is a schematic diagram showing the concept of the operation of specifying the internal image portion.
 Hereinafter, embodiments of an image processing apparatus, an image processing method, and a computer program will be described in order as modes for carrying out the invention.
 (Embodiment of the Image Processing Apparatus)
 <1>
 The image processing apparatus of this embodiment includes: acquiring means for acquiring a two-dimensional image and a three-dimensional image generated by photographing the same object; first specifying means for specifying a first image portion representing the object in the two-dimensional image; and second specifying means for specifying a third image portion in the three-dimensional image having the features of a second image portion in the three-dimensional image corresponding to the first image portion.
 According to the image processing device of this embodiment, the first specifying means first specifies, in the two-dimensional image, the first image portion representing the object. Since the first image portion is at least a part of the two-dimensional image, it represents at least a part of the object planarly (in other words, two-dimensionally). That is, the first image portion represents at least a part of the outer surface of the object.
 Thereafter, on the basis of the first image portion, the second specifying means specifies, in the three-dimensional image, a third image portion having the same features as the second image portion in the three-dimensional image corresponding to the first image portion. Since the first image portion represents at least a part of the outer surface of the object, the second image portion corresponding to it also represents at least a part of the outer surface of the object (that is, the part of the object visible from the outside). Furthermore, since the second image portion is at least a part of the three-dimensional image, it represents at least a part of the outer surface of the object stereoscopically (in other words, three-dimensionally).
 Here, since the features of the second image portion and of the third image portion are the same, it can be inferred that the third image portion, like the second image portion, also represents at least a part of the object. Furthermore, rather than being specified directly from the first image portion, which represents only at least a part of the object's outer surface, the third image portion is specified by searching the three-dimensional image for an image portion having the same features as the second image portion. The third image portion can therefore be inferred to represent at least the interior of the object (that is, the part of the object that cannot be seen from the outside). Moreover, since the third image portion is at least a part of the three-dimensional image, it represents the interior of the object stereoscopically.
 In this way, the image processing device of this embodiment can specify not only the second image portion, which stereoscopically represents at least a part of the object's outer surface, but also the third image portion, which stereoscopically represents the object's interior. The image processing device of this embodiment can therefore specify how the object included in the two-dimensional image is three-dimensionally distributed within the three-dimensional image. As a result, an arbitrary display device can support the user's recognition of the object by displaying the second and third image portions corresponding to the object within the three-dimensional image in a manner distinguishable from the other image portions.
 In the above description, the third image portion represents the interior of the object. However, the third image portion may also include the second image portion, which represents the object's outer surface; that is, the combination of the image portion representing the object's interior and the image portion representing its outer surface may be taken as the third image portion, because the outer shell of the third image portion substantially corresponds to the second image portion.
 <2>
 In another aspect of the image processing device of this embodiment, the second image portion represents at least a part of the outer surface of the object in the three-dimensional image, and the third image portion represents at least the interior of the object in the three-dimensional image.
 According to this aspect, the second specifying means can specify the third image portion, which represents at least the interior of the object and has the same features as the second image portion representing at least a part of the object's outer surface.
 <3>
 In another aspect of the image processing device of this embodiment, the device further comprises third specifying means for specifying the second image portion, and the two-dimensional image is generated by photographing an imaging target that includes the object. The third specifying means (i) specifies, in the three-dimensional image, a fourth image portion corresponding to the outer surface of the imaging target located on the paths of light reaching the pixels of the image sensor of the imaging device that generates the two-dimensional image by photographing the imaging target, (ii) associates the fourth image portion with the two-dimensional image on the basis of a first parameter indicating the state of the optical system of the imaging device and a second parameter indicating the installation environment of the imaging device, and (iii) specifies the second image portion on the basis of the result of associating the fourth image portion with the two-dimensional image.
 According to this aspect, the third specifying means can suitably specify the second image portion. In particular, as described later, the third specifying means can specify the second image portion without specifying, within the two-dimensional image, the positions of any markers that may be placed on the imaging target in order to locate the imaging target or the object.
 <4>
 In another aspect of the image processing device of this embodiment, the device further comprises control means for controlling a display device to display another three-dimensional image that represents the second image portion and the third image portion in a manner distinguishable from the other image portions.
 According to this aspect, under the control of the control means, the display device can display the second and third image portions corresponding to the object within the three-dimensional image in a manner distinguishable from the other image portions.
 <5>
 In another aspect of the image processing device of this embodiment, the object includes at least one of a predetermined part of a subject, an affected part of the subject, and a tumor site of the subject.
 According to this aspect, the image processing device can specify a third image portion that represents the interior of the subject's predetermined part, affected part, or tumor site and has the same features as the second image portion, which represents the outer surface of that predetermined part, affected part, or tumor site.
 <6>
 In another aspect of the image processing device of this embodiment, the three-dimensional image is an MRI (Magnetic Resonance Imaging) image that stereoscopically shows the outer surface and interior of the subject, and the two-dimensional image is a spectral image that planarly shows the outer surface of the subject.
 According to this aspect, on the basis of the MRI image and the spectral image, the image processing device can specify not only the second image portion, which represents the subject's outer surface including the object's outer surface, but also the third image portion, which represents the subject's interior including the object's interior.
 (Embodiment of the Image Processing Method)
 <7>
 The image processing method of this embodiment comprises: an acquiring step of acquiring a two-dimensional image and a three-dimensional image generated by imaging the same object; a first specifying step of specifying, in the two-dimensional image, a first image portion representing the object; and a second specifying step of specifying a third image portion in the three-dimensional image that has the features of a second image portion in the three-dimensional image corresponding to the first image portion.
 According to the image processing method of this embodiment, the various effects enjoyed by the image processing device of this embodiment described above can likewise be suitably enjoyed.
 Corresponding to the various aspects that the image processing device of this embodiment may adopt, the image processing method of this embodiment may also adopt various aspects.
 (Embodiment of the Computer Program)
 <8>
 The computer program of this embodiment causes a computer to execute the image processing method of this embodiment described above.
 According to the computer program of this embodiment, the various effects enjoyed by the image processing device of this embodiment described above can likewise be suitably enjoyed.
 Corresponding to the various aspects that the image processing device of this embodiment may adopt, the computer program of this embodiment may also adopt various aspects. The computer program may be recorded on a recording medium.
 These operations and other advantages of the present embodiment will become apparent from the examples described below.
 As described above, the image processing device of this embodiment comprises acquisition means, first specifying means, and second specifying means. The image processing method of this embodiment comprises an acquiring step, a first specifying step, and a second specifying step. The computer program of this embodiment causes a computer to execute the image processing method of this embodiment described above. Accordingly, using a two-dimensional image and a three-dimensional image obtained by photographing the same object, it is possible to specify how the object included in the two-dimensional image is three-dimensionally distributed within the three-dimensional image.
 Hereinafter, examples of the image processing device, the image processing method, and the computer program will be described with reference to the drawings. The following description proceeds through an example in which the image processing device, the image processing method, and the computer program are applied to a surgery support system for supporting a surgeon operating on a patient. In this case, the surgery support system to which they are applied specifies how the tumor represented by a tumor image portion is three-dimensionally distributed within an MRI image, using (i) the MRI image obtained by photographing the patient with an MRI (Magnetic Resonance Imaging) apparatus (that is, a three-dimensional image stereoscopically representing the patient's outer surface, internal structure, and the like) and (ii) the tumor image portion obtained by analyzing a spectral image captured by a hyperspectral camera (that is, a two-dimensional image planarly representing the outer surface of the patient's tumor). However, the image processing device, the image processing method, and the computer program may be applied to any device that uses a two-dimensional image and a three-dimensional image obtained by photographing the same object to specify how the object included in the two-dimensional image is three-dimensionally distributed within the three-dimensional image.
 (1) Configuration of the Surgery Support System 1
 First, the configuration of the surgery support system 1 of this example will be described with reference to FIG. 1. FIG. 1 is a block diagram showing the configuration of the surgery support system 1 of this example.
 As shown in FIG. 1, the surgery support system 1 comprises an MRI apparatus 11, a hyperspectral camera 12, a pointer 13, a position measurement device 14, an image processing device 15, and a display device 16.
 The MRI apparatus 11 photographs the patient's outer surface, internal structure, and the like, thereby generating an MRI image: a three-dimensional image that stereoscopically represents them. The MRI image is a collection of tomographic images of the patient (in other words, two-dimensional slice images). However, the MRI image may be any image as long as it can stereoscopically represent the patient's outer surface, internal structure, and the like; for example, it may be a three-dimensional model image or three-dimensional volume data rather than a collection of tomographic images. The MRI image generated by the MRI apparatus 11 is input to the image processing device 15.
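 For illustration only, such a collection of slices can be handled as a single three-dimensional array. The following minimal sketch (in Python with NumPy, using hypothetical slice data; none of this is prescribed by the description above) stacks two-dimensional slices into a volume indexed as (z, y, x).

```python
import numpy as np

def stack_slices(slices):
    """Stack equally sized 2-D tomographic slices into one 3-D volume.

    slices: iterable of 2-D arrays, one per slice position along the body axis.
    Returns an array indexed as volume[z, y, x].
    """
    return np.stack([np.asarray(s, dtype=np.float32) for s in slices], axis=0)

# Hypothetical example: 180 slices of 256 x 256 pixels each.
volume = stack_slices(np.random.rand(180, 256, 256))
print(volume.shape)  # (180, 256, 256)
```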
 In this example, the patient is the subject of surgery, and a part of the patient's body is the surgical site. Further, in this example, the surgical site is a part of the patient's head, and the surgery is performed on a tumor in the patient's head; a tumor is therefore present at the surgical site. The patient and the surgical site are specific examples of the "imaging target", and the tumor is a specific example of the "object". However, the surgery support system 1 of this example may also be used when performing any surgery other than surgery on a tumor in the patient's head (for example, surgery performed on any part of the patient). Furthermore, the patient may be a human or any living body other than a human (for example, an animal).
 At least three markers m1 (four markers m1 in the example shown in FIG. 1) are placed around the surgical site. Each marker m1 is a marker that can be imaged by the MRI apparatus 11 (for example, a fiducial marker).
 The hyperspectral camera 12 photographs the patient, thereby generating a spectral image of the patient (that is, a collection of images, one per light spectrum (wavelength)). The spectral image generated by the hyperspectral camera 12 is input to the image processing device 15.
 At least three markers m2 (three markers m2 in the example shown in FIG. 1) are attached to the hyperspectral camera 12. The markers m2 are used by the position measurement device 14 to measure the position of the hyperspectral camera 12 (in particular, its three-dimensional position and orientation in the real-space coordinate system, which defines three dimensions in the real space where the patient actually exists).
 The pointer 13 is a surgical tool that points at the markers m1 placed around the surgical site. The surgeon operates the pointer 13 so that its tip points at a marker m1. At least three markers m3 (three markers m3 in the example shown in FIG. 1) are attached to the pointer 13. The markers m3 are used by the position measurement device 14 to measure the position of the pointer 13 (in particular, its three-dimensional position and orientation in the real-space coordinate system).
 The position measurement device 14 measures the positions of the markers m2 and m3 (in particular, their three-dimensional positions in the real-space coordinate system). For example, the position measurement device 14 comprises a stereo camera (a two-lens camera) and LEDs (Light Emitting Diodes). The LEDs emit measurement light (for example, infrared light) toward each of the markers m2 and m3, and the stereo camera detects the measurement light reflected by each marker. The detection result of the measurement light by the position measurement device 14 (that is, its measurement result) is input to the image processing device 15. On the basis of this measurement result, the image processing device 15 specifies the position of the hyperspectral camera 12 and the position of the pointer 13 (more specifically, as described in detail later, the positions of the markers m1).
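 The description leaves the stereo measurement itself to the position measurement device 14. As one common approach (an assumption on our part, not a detail given here), a calibrated stereo pair can recover a marker's three-dimensional position by linear (DLT) triangulation, sketched below with hypothetical 3 × 4 projection matrices P1 and P2.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker seen by two calibrated cameras.

    P1, P2   : 3x4 projection matrices of the stereo pair (assumed known).
    uv1, uv2 : (u, v) pixel coordinates of the same marker in each image.
    Returns the marker's 3-D position in the measurement coordinate system.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)          # least-squares solution of A @ X = 0
    X = vt[-1]
    return X[:3] / X[3]                  # dehomogenize
```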
 The image processing device 15 performs image processing on the MRI image and the spectral image. The image processing device 15 comprises a CPU (Central Processing Unit) 151 and a memory 152. The memory 152 stores a computer program for causing the image processing device 15 to perform the image processing. When the CPU 151 executes this computer program, logical processing blocks for performing the image processing are formed inside the CPU 151. However, the computer program need not be recorded in the memory 152; in that case, the CPU 151 may execute a computer program downloaded via a network.
 Specifically, as shown in FIG. 2, the image processing device 15 comprises, as logical processing blocks formed inside the CPU 151: an MRI image acquisition unit 1511, which is a specific example of the "acquisition means"; a marker position specifying unit 1512; a spectral image acquisition unit 1513, which is a specific example of the "acquisition means"; a tumor specifying unit 1514, which is a specific example of the "first specifying means"; a marker position specifying unit 1515; an association processing unit 1516, which is a specific example of the "second specifying means" and the "third specifying means"; a three-dimensional model generation unit 1517, which is a specific example of the "control means"; and a three-dimensional viewer processing unit 1518, which is also a specific example of the "control means".
 The MRI image acquisition unit 1511 acquires the MRI image generated by the MRI apparatus 11.
 The marker position specifying unit 1512 analyzes the MRI image acquired by the MRI image acquisition unit 1511, thereby specifying the positions of the markers m1 contained in the MRI image (in particular, their three-dimensional positions in the MRI coordinate system, which defines three-dimensional positions within the MRI image).
 The spectral image acquisition unit 1513 acquires the spectral image generated by the hyperspectral camera 12.
 The tumor specifying unit 1514 analyzes the spectral image acquired by the spectral image acquisition unit 1513, thereby specifying the position of the tumor (in particular, its two-dimensional position in the spectral coordinate system, which defines two-dimensional positions within the spectral image). That is, the tumor specifying unit 1514 specifies the tumor image portion, the image portion of the spectral image that represents the tumor. The tumor image portion is a specific example of the "first image portion".
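 As noted later in step S22, any known spectral-analysis operation may serve here. Purely as one hypothetical illustration, the sketch below labels pixels whose spectrum is close, in spectral angle, to a reference tumor spectrum; the reference spectrum and the angle threshold are assumptions, not part of this description.

```python
import numpy as np

def tumor_mask(cube, reference, max_angle_rad=0.08):
    """Label pixels whose spectrum resembles a reference tumor spectrum.

    cube      : hyperspectral image of shape (H, W, B) with B spectral bands.
    reference : 1-D array of length B (assumed known tumor spectrum).
    Computes the spectral angle between each pixel and the reference;
    small angles mean similar spectra. Returns a boolean (H, W) mask.
    """
    flat = cube.reshape(-1, cube.shape[-1])
    cos = flat @ reference / (
        np.linalg.norm(flat, axis=1) * np.linalg.norm(reference) + 1e-12)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return (angles < max_angle_rad).reshape(cube.shape[:2])
```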
 The marker position specifying unit 1515 specifies the positions of the markers m2 on the basis of the measurement results of the position measurement device 14. In addition, the marker position specifying unit 1515 specifies the positions of the markers m1 (in particular, their three-dimensional positions in the real-space coordinate system) on the basis of the positions of the markers m3 attached to the pointer 13 pointing at each marker m1.
 The association processing unit 1516 mainly performs the three association processes described below.
 Specifically, as the first association process, the association processing unit 1516 associates the positions of the markers m1 in the MRI coordinate system, specified by the marker position specifying unit 1512, with the positions of the markers m1 in the real-space coordinate system, specified by the marker position specifying unit 1515.
 As the second association process, the association processing unit 1516 associates the MRI image with the spectral image (in particular, the tumor image portion). More specifically, on the basis of the tumor image portion, the association processing unit 1516 specifies the image portion of the MRI image that corresponds to the tumor image portion (that is, the image portion corresponding to the tumor's outer surface). A specific example of this process will be described in detail later, so its description is omitted here.
 Here, since the spectral image is a two-dimensional image, the tumor image portion is also a two-dimensional image. The tumor image portion is therefore an image portion that two-dimensionally represents the tumor's outer surface (in other words, its outer shell) as a plane. The tumor's outer surface means the part of the tumor visible from the outside. Accordingly, the image portion of the MRI image corresponding to the tumor image portion is also an image portion representing the tumor's outer surface; however, since the MRI image is a three-dimensional image, that image portion represents the tumor's outer surface three-dimensionally (in other words, stereoscopically) as a curved surface. In the following description, the image portion of the MRI image corresponding to the tumor image portion is referred to as the "outer surface image portion". The outer surface image portion is a specific example of the "second image portion". It represents at least a part of the outer surface of the patient's tumor (typically, the outer surface of the tumor portion included in the imaging region of the hyperspectral camera 12).
 As the third association process, the association processing unit 1516 specifies, on the basis of the outer surface image portion, the image portion of the MRI image that corresponds to the tumor's interior (that is, the image portion representing the tumor's internal structure). The tumor's interior means the part of the tumor that cannot be seen from the outside. A specific example of this process will also be described in detail later, so its description is omitted here.
 The image portion of the MRI image corresponding to the tumor's interior is an image portion that represents the tumor's interior three-dimensionally (in other words, stereoscopically) as a three-dimensional structure. In the following description, this image portion is referred to as the "internal image portion". The internal image portion is a specific example of the "third image portion".
 The three-dimensional model generation unit 1517 generates a three-dimensional model of the patient that represents the tumor in a manner distinguishable from the other parts, on the basis of the processing results of the association processing unit 1516 (that is, the specified outer surface image portion and internal image portion) and the MRI image acquired by the MRI image acquisition unit 1511.
 The three-dimensional viewer processing unit 1518 generates the observation image that would be observed when the surgeon views the three-dimensional model generated by the three-dimensional model generation unit 1517 from a desired viewpoint.
 Referring again to FIG. 1, the display device 16 displays the observation image generated by the three-dimensional viewer processing unit 1518. As a result, the surgeon can recognize how the tumor is three-dimensionally distributed inside the patient. The display device 16 may also display any other information (for example, information related to surgery support).
 (2) Operation of the Surgery Support System 1
 Next, the operation of the surgery support system 1 (mainly the operation of the image processing device 15) will be described with reference to FIG. 3. FIG. 3 is a flowchart showing the flow of operations of the surgery support system 1.
 As shown in FIG. 3, the MRI image acquisition unit 1511 first acquires the MRI image (step S11). Specifically, the markers m1 are first placed around the patient's surgical site. The patient then lies on the bed of the MRI apparatus 11, and the MRI apparatus 11 sequentially captures tomographic images of the patient (see FIG. 4); that is, the MRI apparatus 11 generates the MRI image. As a result, the MRI image acquisition unit 1511 acquires the MRI image.
 Thereafter, the patient is transferred from the bed of the MRI apparatus 11 to the operating table. The patient is transported so that the positions of the markers m1 do not change (that is, the markers m1 remain fixed in place). Alternatively, the bed of the MRI apparatus 11 may itself be used as the operating table. The positions of the markers m1 do not change (remain fixed) while the series of operations shown in FIG. 3 is performed.
 So that their positions do not change, the markers m1 may be driven directly into the bone of the patient's head before the MRI image is acquired. In this case, the markers m1 may be driven into the skull exposed by opening the head of the patient lying on the bed and folding back the scalp.
 Thereafter, the marker position specifying unit 1512 analyzes the MRI image acquired in step S11, thereby specifying the positions of the markers m1 contained in the MRI image (step S12). That is, the marker position specifying unit 1512 specifies the three-dimensional position of each marker m1 in the MRI coordinate system. For example, the marker position specifying unit 1512 uses matching processing or the like to find the image portion of the MRI image corresponding to each marker m1, and then specifies the position of that image portion.
 Before, after, or in parallel with the operation of step S12, the marker position specifying unit 1515 specifies the positions of the markers m1 on the basis of the measurement results of the position measurement device 14 (step S13). That is, the marker position specifying unit 1515 specifies the three-dimensional position of each marker m1 in the real-space coordinate system. Specifically, the surgeon operates the pointer 13 so that its tip points at a marker m1. While the marker m1 is pointed at by the tip of the pointer 13, the position measurement device 14 detects the measurement light reflected by the markers m3, and the marker position specifying unit 1515 specifies the positions of the markers m3 from the measurement results. Once the positions of the markers m3 are specified, the position of the pointer 13 (where "position" here also includes the pointer's orientation) is likewise specified. While the tip of the pointer 13 points at the marker m1, the position of the pointer 13 substantially corresponds to the position of the marker m1. Furthermore, the shape of the pointer 13 (the positional relationship between the at least three markers m3 and the tip) is known. The marker position specifying unit 1515 can therefore specify the position of each marker m1 in the real-space coordinate system by specifying the positions of the markers m3 (that is, by specifying the position of the pointer 13). This operation is performed for all the markers m1.
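 A minimal sketch of exploiting the pointer's known shape: assuming (hypothetically) that the tip offset is given in a local frame spanned by the three markers m3, the tip position, and hence the pointed-at marker m1, follows directly from the three measured marker positions.

```python
import numpy as np

def pointer_tip(p1, p2, p3, tip_local):
    """Locate the pointer tip (i.e. the pointed-at marker m1) in real space.

    p1, p2, p3 : measured 3-D positions of the pointer's three markers m3.
    tip_local  : known tip coordinates in a frame built from those markers,
                 a constant determined once from the pointer's fixed shape.
    """
    e1 = (p2 - p1) / np.linalg.norm(p2 - p1)   # first in-plane axis
    n = np.cross(e1, p3 - p1)
    e3 = n / np.linalg.norm(n)                 # normal to the marker plane
    e2 = np.cross(e3, e1)                      # completes the orthonormal frame
    R = np.column_stack([e1, e2, e3])          # local-frame -> world rotation
    return p1 + R @ np.asarray(tip_local, dtype=float)
```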
 Thereafter, the association processing unit 1516 associates the positions of the markers m1 in the MRI coordinate system specified in step S12 with the positions of the markers m1 in the real-space coordinate system specified in step S13 (step S14). As a result, the association processing unit 1516 can calculate a transformation matrix for converting positions in the MRI coordinate system into positions in the real-space coordinate system, or positions in the real-space coordinate system into positions in the MRI coordinate system. That is, the association processing unit 1516 can specify the position in the real-space coordinate system corresponding to any position in the MRI coordinate system, and likewise the position in the MRI coordinate system corresponding to any position in the real-space coordinate system.
 Specifically, the association processing unit 1516 associates the position (Xm1, Ym1, Zm1) of the first marker m1 in the MRI coordinate system with the position (Xa1, Ya1, Za1) of the same first marker m1 in the real-space coordinate system. As a result of this association, the association processing unit 1516 generates the equation (Xm1, Ym1, Zm1) = transformation matrix × (Xa1, Ya1, Za1). The association processing unit 1516 performs this operation for all the markers m1. Once the number of equations needed to derive the transformation matrix has been generated, the transformation matrix is determined. As a result, the association processing unit 1516 can associate the MRI image with the real space; that is, it can associate the patient in the MRI image with the patient located in the real space.
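 The description states only that enough marker equations determine the transformation matrix. One standard way of solving the resulting over-determined system (an assumed choice, not mandated here) is least-squares rigid registration, sketched below; the returned matrix maps real-space coordinates to MRI coordinates, and its inverse gives the opposite direction.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform with dst ≈ R @ src + t (Kabsch method).

    src : Nx3 marker m1 positions in the real-space coordinate system.
    dst : Nx3 positions of the same markers in the MRI coordinate system.
    Returns a 4x4 homogeneous matrix mapping real-space to MRI coordinates.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src - src.mean(axis=0)             # center both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)  # SVD of the 3x3 covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```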
 Thereafter, the spectral image acquisition unit 1513 acquires the spectral image (step S21). Specifically, with the patient positioned on the operating table, the hyperspectral camera 12 photographs the patient; that is, the hyperspectral camera 12 generates the spectral image. As a result, the spectral image acquisition unit 1513 acquires the spectral image.
 Thereafter, the tumor specifying unit 1514 analyzes the spectral image acquired in step S21, thereby specifying the position of the tumor (step S22). That is, as shown in FIG. 5, the tumor specifying unit 1514 specifies the two-dimensional position of the tumor in the spectral coordinate system, which defines two-dimensional positions within the spectral image. In other words, the tumor specifying unit 1514 specifies the tumor image portion, the image portion of the spectral image that represents the tumor (step S22). Any operation may be used to specify the position of the tumor (the tumor image portion) from the spectral image; for example, a known operation may be used.
 Thereafter, the image processing device 15 associates the tumor image portion specified from the spectral image with the MRI image, thereby specifying the outer surface image portion of the MRI image corresponding to the tumor's outer surface and the internal image portion of the MRI image corresponding to the tumor's interior (steps S31 to S34).
 Specifically, the marker position specifying unit 1515 first specifies the three-dimensional position of the hyperspectral camera 12 in the real-space coordinate system on the basis of the measurement results of the position measurement device 14 (step S31). Specifically, the position measurement device 14 detects the measurement light reflected by the markers m2 attached to the hyperspectral camera 12, and the marker position specifying unit 1515 specifies the positions of the markers m2 from the measurement results. Once the positions of the markers m2 are specified, the position of the hyperspectral camera 12 (where the position here also includes the camera's orientation) is specified. In the following description, for convenience, the position of the hyperspectral camera 12 specified in step S31 is assumed to be given by Equation 1. In Equation 1, the 3 × 3 matrix consisting of rc11, rc12, rc13, rc21, rc22, rc23, rc31, rc32, and rc33 indicates the rotation of the hyperspectral camera 12 in the real-space coordinate system (that is, its rotation in the yaw direction (in other words, its tilt; the same applies hereinafter), the roll direction, and the pitch direction). The terms tcx, tcy, and tcz indicate the translation of the hyperspectral camera 12 from the origin of the real-space coordinate system along the X, Y, and Z axes, respectively.
(Equation 1)

$$\begin{pmatrix} r_{c11} & r_{c12} & r_{c13} & t_{cx} \\ r_{c21} & r_{c22} & r_{c23} & t_{cy} \\ r_{c31} & r_{c32} & r_{c33} & t_{cz} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
 Thereafter, the association processing unit 1516 specifies the outer surface image portion of the MRI image corresponding to the tumor image portion (step S32). That is, on the basis of the tumor image portion, which is a two-dimensional image, the association processing unit 1516 specifies the outer surface image portion, which stereoscopically shows the tumor's outer surface as a curved (or uneven) surface within the MRI image, a three-dimensional image. Here, the association processing unit 1516 may use any operation to specify, on the basis of the tumor image portion (a two-dimensional image), the outer surface image portion of the MRI image (a three-dimensional image) corresponding to it; for example, it may use a known operation for specifying, on the basis of a two-dimensional image, the image portion of a three-dimensional image corresponding to that two-dimensional image.
 An example of the operation of specifying the outer surface image portion of the MRI image corresponding to the tumor image portion is described below.
 The association processing unit 1516 specifies internal parameters indicating the state of the optical system (for example, the lens group) of the hyperspectral camera 12. Examples of the internal parameters include the focal length fcx of the optical system with respect to the X axis of the spectral coordinate system (that is, the focal length corresponding to the imaging magnification of the subject along the X axis) and the focal length fcy with respect to the Y axis (that is, the focal length corresponding to the imaging magnification along the Y axis). Another example is the displacement of the center of the image sensor of the hyperspectral camera 12 (that is, the center of the spectral image) relative to the center of the optical axis of the optical system; this displacement consists of the displacement ccx along the X axis of the spectral coordinate system and the displacement ccy along its Y axis. The association processing unit 1516 can specify the internal parameters using the result of calibrating the hyperspectral camera 12 against photographs of a checkerboard pattern.
 The association processing unit 1516 further specifies external parameters indicating the installation state of the hyperspectral camera 12 in the real-space coordinate system. In this example, the external parameters are the position of the hyperspectral camera 12 in the real-space coordinate system given by Equation 1.
 On the basis of the internal and external parameters, the association processing unit 1516 can specify, in the real-space coordinate system, the path of the light reaching each pixel of the image sensor of the hyperspectral camera 12, as shown in FIG. 6. Furthermore, since positions in the real-space coordinate system can be converted into positions in the MRI coordinate system, the association processing unit 1516 can specify the light path to each pixel in the MRI coordinate system. As a result, the association processing unit 1516 can specify the part of the patient's outer surface, as represented in the MRI image, that lies on the light path to each pixel. For example, the association processing unit 1516 can specify the position (X1(3), Y1(3), Z1(3)), in the MRI coordinate system, of the patient's outer surface that is reached first when proceeding from each pixel toward the patient along the light path within the MRI image.
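 A minimal sketch of this per-pixel search, under two assumptions not given by the description: the MRI volume has been reduced to a boolean occupancy grid marking the patient's body, and the ray origin and direction are already expressed in MRI voxel coordinates.

```python
import numpy as np

def first_surface_hit(occupancy, origin, direction, step=0.5, max_steps=4000):
    """March one pixel's viewing ray through the MRI volume.

    occupancy : boolean 3-D array, True where the patient's body is present.
    origin    : ray start (camera position) in MRI voxel coordinates.
    direction : unit ray direction in MRI voxel coordinates.
    Returns the first occupied position reached, i.e. the outer-surface
    point (X1(3), Y1(3), Z1(3)) for this pixel, or None if there is no hit.
    """
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    for _ in range(max_steps):
        pos = pos + step * d
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(occupancy.shape)):
            return None                  # ray left the volume without a hit
        if occupancy[tuple(idx)]:
            return pos                   # first patient-surface voxel reached
    return None
```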
 Here, as can also be seen from FIG. 6, the part of the patient's outer surface lying on the light path to each pixel is the part of the patient's outer surface photographed by the hyperspectral camera 12. Likewise, that same part is also photographed by the MRI apparatus 11. Consequently, the image portion of the MRI image corresponding to the patient's outer surface lying on the light paths to the pixels corresponds to the spectral image; that is, the image portion of the MRI image lying on the light paths corresponds to the spectral image.
 The association processing unit 1516 therefore associates the image portion of the MRI image lying on the light paths (that is, the image portion whose positions (X1(3), Y1(3), Z1(3)) have been specified) with the spectral image. Specifically, as described above, the position (X1(3), Y1(3), Z1(3)) in the MRI coordinate system can be converted into the position (X1(3)', Y1(3)', Z1(3)') in the real-space coordinate system. Furthermore, each pixel to which the light travels corresponds one-to-one with a specific coordinate in the spectral coordinate system, which specifies two-dimensional positions within the spectral image. Consequently, on the basis of the internal and external parameters, the association processing unit 1516 can associate the position (X1(3)', Y1(3)', Z1(3)') in the real-space coordinate system with the position (X1(2), Y1(2)) in the spectral coordinate system; that is, it can associate the position (X1(3), Y1(3), Z1(3)) in the MRI coordinate system with the position (X1(2), Y1(2)) in the spectral coordinate system. In this example, the association processing unit 1516 performs this association using Equation 2, where s1 denotes a predetermined coefficient or a predetermined transformation matrix.
(Equation 2)

$$ s_1 \begin{pmatrix} X1^{(2)} \\ Y1^{(2)} \\ 1 \end{pmatrix} = \begin{pmatrix} f_{cx} & 0 & c_{cx} \\ 0 & f_{cy} & c_{cy} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{c11} & r_{c12} & r_{c13} & t_{cx} \\ r_{c21} & r_{c22} & r_{c23} & t_{cy} \\ r_{c31} & r_{c32} & r_{c33} & t_{cz} \end{pmatrix} \begin{pmatrix} X1^{(3)\prime} \\ Y1^{(3)\prime} \\ Z1^{(3)\prime} \\ 1 \end{pmatrix} $$
 As a result, the association processing unit 1516 can associate the MRI image (in particular, the image portion of the MRI image corresponding to the patient's outer surface lying on the light paths to the pixels) with the spectral image. That is, the association processing unit 1516 can specify the image portion of the spectral image corresponding to a given image portion of the MRI image, and likewise the image portion of the MRI image corresponding to a given image portion of the spectral image. Since the position of the tumor image portion within the spectral image has already been specified, the association processing unit 1516 can specify the outer surface image portion of the MRI image corresponding to the tumor image portion as soon as the MRI image and the spectral image have been associated.
 In this way, the association processing unit 1516 can associate the MRI image with the spectral image without specifying the positions, in the spectral coordinate system, of the markers m1 contained in the spectral image. The reason those positions need not be specified is that the operation the association processing unit 1516 performs to associate the two images presupposes that the image portion of the MRI image corresponding to the patient's outer surface corresponds to the spectral image, which is likewise generated by photographing the patient's outer surface. In other words, the operation presupposes that the spectral image represents the patient's outer surface two-dimensionally as a plane while the MRI image represents the same outer surface of the same patient three-dimensionally as a curved surface. That is, the association processing unit 1516 exploits the fact that the spectral image (and hence the tumor image portion) and the outer surface image portion each represent the same outer surface of the same patient: on the basis of the tumor image portion, which represents the patient's outer surface two-dimensionally as a plane, it specifies the outer surface image portion, which represents the same outer surface of the same patient three-dimensionally as a curved surface.
 Thereafter, the association processing unit 1516 specifies the features of the outer surface image portion specified in step S32 (step S33). The "features" specified in step S33 may be any features that are specific to, or associated with, an image; examples include luminance, hue, saturation, and brightness.
 Since the features specified in step S33 are those of the outer surface image portion, which shows the tumor's outer surface, they can also be presumed to be the features of the image portion of the MRI image corresponding to the tumor. The association processing unit 1516 therefore specifies the image portion of the MRI image having the same features as those specified in step S33 as the internal image portion representing the tumor's interior (step S34). As a result, as shown in FIG. 7, not only the outer surface image portion representing the tumor's outer surface but also the internal image portion representing the tumor's interior is specified on the MRI image.
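 A minimal sketch of step S34 under the assumption that the feature is voxel intensity: the intensity statistics of the outer surface image portion define an acceptance range, and only the connected regions touching that surface are kept, yielding the tumor's surface together with its interior. The tolerance k is a hypothetical parameter.

```python
import numpy as np
from scipy import ndimage

def internal_image_portion(volume, surface_mask, k=2.0):
    """Find voxels sharing the outer surface image portion's intensity feature.

    volume       : 3-D MRI intensity array.
    surface_mask : boolean array marking the outer surface image portion (S32).
    Voxels whose intensity lies within mean +/- k*std of the surface voxels
    are candidates; only connected components touching the surface are kept,
    so the result covers the tumor's outer surface together with its interior.
    """
    vals = volume[surface_mask]
    lo, hi = vals.mean() - k * vals.std(), vals.mean() + k * vals.std()
    candidates = (volume >= lo) & (volume <= hi)
    labels, _ = ndimage.label(candidates)          # connected components
    touching = np.unique(labels[surface_mask])
    touching = touching[touching != 0]             # drop the background label
    return np.isin(labels, touching)
```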
 Thereafter, the three-dimensional model generation unit 1517 generates a three-dimensional model of the patient that represents the tumor in a manner distinguishable from the other parts (step S41). That is, the three-dimensional model generation unit 1517 generates a three-dimensional model of the patient that represents the outer surface image portion and the internal image portion in a manner distinguishable from the other image portions (step S41). For example, the three-dimensional model generation unit 1517 may generate the three-dimensional model by modifying the MRI image so that it represents the outer surface image portion and the internal image portion in a manner distinguishable from the other image portions, or by newly generating an MRI image that does so. One example of displaying the tumor in a manner distinguishable from the other parts is a display method that highlights the tumor.
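 One simple realization of the distinguishable display (an illustrative choice, not prescribed here) is a label overlay: the MRI intensities stay in one channel and tumor membership goes in another, which a viewer can tint when rendering.

```python
import numpy as np

def build_display_volume(volume, tumor_mask):
    """Combine the MRI volume with the tumor labeling from steps S32 and S34.

    Returns a (D, H, W, 2) array: channel 0 holds the normalized MRI
    intensities, channel 1 is 1.0 where the voxel belongs to the tumor,
    so a renderer can tint those voxels to highlight the tumor.
    """
    v = volume.astype(np.float32)
    v = (v - v.min()) / (np.ptp(v) + 1e-12)        # normalize to [0, 1]
    return np.stack([v, tumor_mask.astype(np.float32)], axis=-1)
```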
Thereafter, the three-dimensional viewer processing unit 1518 generates an observation image that would be observed when the operator views the three-dimensional model generated in step S41 from a desired viewpoint (step S42). The three-dimensional viewer processing unit 1518 outputs the generated observation image to the display device 16.
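How the observation image is rendered in step S42 is likewise left open; one possibility, sketched under that caveat, is a maximum-intensity projection after rotating the volume toward the desired viewpoint:

```python
import numpy as np
from scipy import ndimage

def observation_image(rgb_volume, azimuth_deg=0.0):
    """Render a simple observation image of the 3-D model from a viewpoint
    rotated azimuth_deg about the vertical axis (maximum-intensity
    projection along the depth axis)."""
    rotated = ndimage.rotate(rgb_volume, azimuth_deg, axes=(0, 2),
                             reshape=False, order=1)
    return rotated.max(axis=0)                    # (H, W, 3) image
```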
Thereafter, the display device 16 displays the observation image generated in step S42 (step S43). As a result, by viewing the observation image, the operator can recognize how the tumor is three-dimensionally distributed inside the patient.
Thereafter, the MRI image acquisition unit 1511 determines whether or not it is time to acquire an MRI image again (step S51). For example, the MRI image acquisition unit 1511 may determine that it is time to acquire an MRI image again when a fixed time has elapsed since the previous MRI image was acquired, or when the user has requested that an MRI image be acquired again.
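The determination of step S51 reduces to a simple predicate over the two example criteria above; a short sketch, with all names assumed:

```python
import time

def should_reacquire(last_acquired_at, interval_s, user_requested):
    """Return True when it is time to acquire an MRI image again (step S51):
    either a fixed interval has elapsed or the user explicitly asked."""
    return user_requested or (time.monotonic() - last_acquired_at) >= interval_s
```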
If it is determined in step S51 that it is time to acquire an MRI image again (step S51: Yes), the MRI image acquisition unit 1511 acquires an MRI image again (step S1). Thereafter, the operations from step S2 onward are performed again.
On the other hand, if it is determined in step S51 that it is not yet time to acquire an MRI image again (step S51: No), the three-dimensional model generation unit 1517 adds, to the three-dimensional model generated in step S41, an additional model corresponding to an additional image (step S52). An example of such an additional image is an image of a surgical instrument handled by the operator (for example, a scalpel).
In order to add the image of the surgical instrument to the three-dimensional model as an additional image, the image processing device 15 performs the following operations. First, as a premise, a marker is attached to the surgical instrument, as with the hyperspectral camera 12 and the pointer 13. The position measurement device 14 detects the measurement light reflected by the marker attached to the surgical instrument. The marker position detection unit 1515 specifies the position of the surgical instrument in the real-space coordinate system based on the measurement result of the position measurement device 14. The association processing unit 1516 then converts the position of the surgical instrument in the real-space coordinate system into a position in the MRI coordinate system. The position of the surgical instrument in the MRI coordinate system corresponds to the position at which the surgical instrument should be placed in the three-dimensional model. The three-dimensional model generation unit 1517 can therefore add the additional model corresponding to the surgical instrument to the three-dimensional model of the patient such that the positional relationship between the two matches the positional relationship between the actual patient and the actual surgical instrument.
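The real-space-to-MRI conversion described here is, in the usual formulation, the application of a rigid homogeneous transform. The sketch below assumes that transform is available as a 4x4 matrix T_real_to_mri obtained from the marker-based registration; the matrix and function names are assumptions for illustration.

```python
import numpy as np

def tool_position_in_mri(tool_pos_real, T_real_to_mri):
    """Convert the surgical instrument's position from the real-space
    coordinate system to the MRI coordinate system, i.e. to where the
    additional model should be placed in the 3-D model (step S52).

    tool_pos_real : (3,) array, marker position measured in real space.
    T_real_to_mri : (4, 4) homogeneous transform, assumed to come from the
                    marker-based registration described earlier.
    """
    p = np.append(np.asarray(tool_pos_real, dtype=float), 1.0)
    return (T_real_to_mri @ p)[:3]
```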
Thereafter, the image processing device 15 determines whether or not the surgery has ended (step S53). If it is determined in step S53 that the surgery has not ended (step S53: No), the operations from step S51 onward are repeated. On the other hand, if it is determined in step S53 that the surgery has ended (step S53: Yes), the surgery support system 1 ends the operations shown in FIG. 3.
As described above, the image processing device 15 of this embodiment can specify not only the outer surface image portion that three-dimensionally represents the outer surface of the tumor but also the internal image portion that three-dimensionally represents the inside of the tumor. That is, the image processing device 15 of this embodiment can specify how the tumor identified from the spectral image, which is a two-dimensional image, is three-dimensionally distributed within the MRI image, which is a three-dimensional image. As a result, the display device 16 can display a three-dimensional model of the patient that represents the tumor in a manner distinguishable from other portions, and the operator can suitably recognize the tumor.
The configuration and operation of the surgery support system 1 described above are merely examples. Accordingly, at least part of the configuration and operation of the surgery support system 1 may be modified as appropriate. Some such modifications are described below.
In addition to or instead of the MRI apparatus 11, the surgery support system 1 may include any apparatus capable of generating an image that three-dimensionally represents the outer surface and internal structure of the patient by imaging the patient. Examples of such an apparatus include a CT (Computed Tomography) apparatus and a PET (Positron Emission Tomography) apparatus.
The shape of the surgical site, the shape of the tumor, and the like may change during surgery. In this case, after specifying the outer surface image portion representing the outer surface of the tumor and the internal image portion representing the inside of the tumor, the image processing device 15 may perform non-rigid registration processing on the spectral image (or on at least one of the outer surface image portion and the internal image portion). As a result, the image processing device 15 can generate a three-dimensional model that represents the tumor whose shape has changed in a manner distinguishable from other portions.
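Non-rigid registration itself is outside the scope of a short sketch; assuming some standard algorithm supplies a dense displacement field, applying its result to the specified tumor mask could look as follows (the field and all names are assumptions):

```python
import numpy as np
from scipy import ndimage

def warp_tumor_mask(tumor_mask, displacement):
    """Apply the result of a non-rigid registration to the tumor mask.

    tumor_mask   : (D, H, W) bool mask (outer surface + internal portions).
    displacement : (3, D, H, W) array; per-voxel pull-back displacement,
                   assumed to be produced by a separate non-rigid
                   registration step.
    """
    grid = np.indices(tumor_mask.shape).astype(float)
    coords = grid + displacement          # where each output voxel samples from
    warped = ndimage.map_coordinates(tumor_mask.astype(float), coords, order=1)
    return warped > 0.5
```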
The internal image portion may include the outer surface image portion as a part thereof, because the outer shell of the internal image portion substantially corresponds to the outer surface image portion. Accordingly, the internal image portion may represent the outer surface of the tumor in addition to its inside.
The present invention can be modified as appropriate without departing from the gist or concept of the invention that can be read from the claims and the entire specification, and an image processing device, an image processing method, and a computer program involving such modifications are also included in the technical idea of the present invention.
DESCRIPTION OF SYMBOLS
1 Surgery support system
11 MRI apparatus
12 Hyperspectral camera
13 Pointer
14 Position measurement device
15 Image processing device
151 CPU
1511 MRI image acquisition unit
1512 Marker position specifying unit
1513 Spectral image acquisition unit
1514 Tumor specifying unit
1515 Marker position specifying unit
1516 Association processing unit
1517 Three-dimensional model generation unit
1518 Three-dimensional viewer processing unit
152 Memory
16 Display device

Claims (8)

1. An image processing device comprising:
   an acquisition means for acquiring a two-dimensional image and a three-dimensional image generated by photographing the same object;
   a first specifying means for specifying, in the two-dimensional image, a first image portion representing the object; and
   a second specifying means for specifying a third image portion in the three-dimensional image, the third image portion having a feature of a second image portion in the three-dimensional image corresponding to the first image portion.
2. The image processing device according to claim 1, wherein
   the second image portion represents at least a part of the outer surface of the object in the three-dimensional image, and
   the third image portion represents at least the inside of the object in the three-dimensional image.
3. The image processing device according to claim 1 or 2, further comprising a third specifying means for specifying the second image portion, wherein
   the two-dimensional image is generated by photographing an imaging target including the object, and
   the third specifying means (i) specifies, in the three-dimensional image, a fourth image portion corresponding to the outer surface of the imaging target lying on the path of light that reaches each pixel of an image sensor of the imaging device that generates the two-dimensional image by photographing the imaging target, (ii) associates the fourth image portion with the two-dimensional image based on a first parameter indicating the state of an optical system of the imaging device and a second parameter indicating the installation environment of the imaging device, and (iii) specifies the second image portion based on the result of the association between the fourth image portion and the two-dimensional image.
4. The image processing device according to any one of claims 1 to 3, further comprising a control means for controlling a display device to display another three-dimensional image that represents the second image portion and the third image portion in a manner distinguishable from other image portions.
5. The image processing device according to any one of claims 1 to 4, wherein the object includes at least one of a predetermined part of a subject, an affected part of the subject, and a tumor site of the subject.
6. The image processing device according to claim 5, wherein
   the three-dimensional image is an MRI (Magnetic Resonance Imaging) image that three-dimensionally represents the outer surface and the inside of the subject, and
   the two-dimensional image is a spectral image that two-dimensionally represents the outer surface of the subject.
7. An image processing method comprising:
   an acquisition step of acquiring a two-dimensional image and a three-dimensional image generated by imaging the same object;
   a first specifying step of specifying, in the two-dimensional image, a first image portion representing the object; and
   a second specifying step of specifying a third image portion in the three-dimensional image, the third image portion having a feature of a second image portion in the three-dimensional image corresponding to the first image portion.
8. A computer program for causing a computer to execute the image processing method according to claim 7.