JP6461257B2 - Image processing apparatus and method - Google Patents

Image processing apparatus and method


Publication number
JP6461257B2
Authority
JP
Japan
Prior art keywords
image
dimensional image
object
mri
subject
Prior art date
Legal status
Active
Application number
JP2017142840A
Other languages
Japanese (ja)
Other versions
JP2017221684A
Inventor
和大 宮狭
卓郎 宮里
Original Assignee
キヤノン株式会社
Priority date
Filing date
Publication date
Application filed by キヤノン株式会社
Priority to JP2017142840A
Publication of JP2017221684A
Application granted
Publication of JP6461257B2
Legal status: Active
Anticipated expiration

Description

  The present invention relates to image processing of medical images captured by various medical image collection devices (modalities).

  A photoacoustic tomography apparatus (PAT) is a device that irradiates a subject with light pulses to excite an absorbing substance in the subject, detects the photoacoustic signal generated by the thermoelastic expansion of the absorbing substance, and images properties of the subject related to light absorption. That is, the PAT images the light energy deposition amount distribution (light energy absorption density distribution) in the subject with respect to the irradiation light. Based on this, the light absorption coefficient distribution of the subject with respect to the irradiation wavelength is imaged. Furthermore, it is also possible to image the state of a substance constituting the subject (for example, the oxygen saturation of hemoglobin) based on the light absorption coefficient distributions for a plurality of wavelengths.

  These images are expected to visualize information about new blood vessels that form inside and outside malignant tumors such as cancer. Hereinafter, these images are collectively referred to as “photoacoustic tomographic images (PAT images)”.

  Since PAT irradiates near-infrared light pulses of low energy, it is more difficult to image deep parts of the human body than with X-rays. Patent Document 1 describes, as one form of PAT that takes a breast as the measurement target, holding the breast between two flat plates (hereinafter referred to as holding plates) and imaging it while it is thinned by compression. Therefore, when a combination diagnosis with another modality such as a magnetic resonance imaging apparatus (MRI) is performed, deformation registration that takes into account the compression deformation due to the holding (deformation of one image so as to match the other image) enables efficient diagnosis by a doctor.

  As a method for aligning a PAT image and an MRI image, a method based on image matching can be mentioned. For example, Non-Patent Document 1 describes an alignment technique between an X-ray mammogram (MMG) obtained by imaging a flat-plate-compressed breast and an MRI image of the breast. Specifically, a deformed MRI image is generated by performing a physical deformation simulation of flat-plate compression on the MRI image, a pseudo MMG image is generated from the deformed MRI image, and alignment is performed by image matching between the pseudo MMG image and the actually captured MMG.

  Non-Patent Document 2 discloses a technique for evaluating the shape of a deformed breast, obtained by performing a physical deformation simulation of flat-plate compression on an MRI image, on the basis of the two-dimensional breast shape extracted from an MMG image.

  In addition, as diagnostic support based on alignment of multiple modalities different from the above, there are attempts to generate (cut out) and present, from a reference image that is three-dimensional image data such as a CT image or an MRI image, an image of the cross section corresponding to the imaging cross section of an ultrasound image (hereinafter referred to as the corresponding cross section). For example, Patent Document 2 discloses a technique in which a CT image or MRI image serving as the reference image is aligned with the subject, the position and orientation of the ultrasound probe relative to the subject are measured, and the ultrasound image and the reference image are thereby aligned.

Patent Document 1: JP 2010-088627 A
Patent Document 2: Japanese Patent No. 3871747

Non-Patent Document 1: Angela Lee et al., "Breast X-ray and MR image fusion using finite element modeling", Proc. Workshop on Breast Image Analysis in conjunction with MICCAI 2011, pp. 129-136, 2011
Non-Patent Document 2: C. Tanner et al., "Breast Shapes on Real and Simulated Mammograms", Proc. Int. Workshop on Digital Mammography 2010 (IWDM 2010), LNCS 6136, pp. 540-547, 2010

  However, since the features imaged by PAT differ from those imaged by MRI, the structures shown in the MRI image do not necessarily match all the structures in the PAT image. It is therefore difficult to perform highly accurate alignment by image matching alone, and it becomes necessary to manually input a plurality of corresponding points between the two coordinate systems.

  An object of the present invention is to align a three-dimensional image of an object with the object itself or another image obtained by imaging the object with high accuracy.

  The present invention has the following configuration as one means for achieving the above object.

An image processing apparatus according to the present invention comprises:
image acquisition means for acquiring a three-dimensional image generated by imaging an object with a modality and a two-dimensional image generated by imaging the object with an optical camera;
generating means for generating projection information by projecting the three-dimensional image, with reference to the viewpoint of the optical camera, so as to emphasize a region near the surface of the object; and
alignment means for performing alignment processing of the three-dimensional image and the two-dimensional image using the projection information and the two-dimensional image.

  According to the present invention, a three-dimensional image of an object can be aligned with the object itself or another image obtained by imaging the object with high accuracy.

FIG. 1 is a block diagram illustrating a configuration example of a modality system including the image processing apparatus according to the first embodiment.
FIG. 2 illustrates MRI images of the subject held in the medical image DB.
FIG. 3 illustrates imaging of the subject by the PAT.
FIG. 4 shows an example of a PAT image captured by the PAT.
FIG. 5 shows an example of the captured image I′ CAM1 of the front infrared camera in the non-holding state.
FIG. 6 is a flowchart for explaining the operation and processing of each unit of the image processing apparatus according to the first embodiment.
FIG. 7 illustrates the surface shape acquisition processing.
FIG. 8 shows a display example of a deformed MRI image and a PAT image.
FIG. 9 is a flowchart explaining the details of the alignment in the non-holding state.
FIG. 10 illustrates the processing in which the virtual projection image generation unit obtains the partial surface region.
FIG. 11 shows a MIP image using the body surface vicinity information of the subject in the MRI image.
FIG. 12 is a flowchart explaining the details of the estimation of compression deformation.
FIG. 13 is a schematic diagram showing a method of generating the mesh M.
FIG. 14 illustrates the compression deformation simulation by the holding plates.
FIG. 15 is a schematic diagram showing the deformed MRI image I D_MRIonP .
FIG. 16 is a block diagram illustrating a configuration example of a modality system including the image processing apparatus according to the second embodiment.
FIG. 17 is a flowchart for explaining the operation and processing of each unit of the image processing apparatus according to the second embodiment.
FIGS. 18 and 19 are flowcharts explaining the details of the estimation of position and orientation and compression deformation.

  Hereinafter, image processing according to an embodiment of the present invention will be described in detail with reference to the drawings. However, the scope of the invention is not limited to the illustrated example.

  The image processing apparatus of the first embodiment performs deformation alignment of a PAT image and an MRI image by comparing the MRI image with the PAT image and with images from infrared cameras mounted on the photoacoustic tomography apparatus (PAT), taking a breast as the subject (the object to be examined). That is, when the position and shape of the subject at the time of MRI imaging are called the “first state” and the position and shape of the subject at the time of PAT imaging are called the “second state”, the MRI image representing the subject in the first state is deformed and aligned to the subject in the second state.

  Specifically, a two-dimensional image of the subject in the non-holding state before PAT imaging (hereinafter, the initial stage of the second state) is first acquired with an infrared camera mounted on the PAT, and registration between the MRI image and this two-dimensional image is performed. That is, a rigid transformation between the first state and the second state (more precisely, the initial stage of the second state) is estimated as an alignment parameter of the subject. Then, with this rigid transformation as the initial value, the deformation parameter of the compression deformation is estimated as the alignment parameter of the subject between the first state and the time of PAT imaging (that is, the second state). In the first embodiment, the alignment parameter between the PAT image and the MRI image is derived by these two stages of processing.

[Device configuration]
The block diagram of FIG. 1 shows a configuration example of a modality system including the image processing apparatus 10 of the first embodiment. The image processing apparatus 10 is connected to a medical image database (DB) 11 and a photoacoustic tomography apparatus (PAT) 12. The medical image DB 11 holds three-dimensional image data obtained by imaging a breast as a subject by MRI. The PAT 12 is a device that captures a PAT image, and holds a PAT image and an infrared camera image of the subject.

● Medical Image DB
An MRI image of the subject held in the medical image DB 11 will be described with reference to FIG. 2. The MRI image 300 of the subject shown in FIG. 2(a) is a set of two-dimensional images (three-dimensional image data) sliced along cross sections (axial cross sections) perpendicular to the head-to-foot direction of the human body, one of which includes the nipple 304. The positions of the pixels constituting the MRI image 300 are defined in the MRI image coordinate system C MRI . The MRI image 300 also includes the imaging results of the extracorporeal region 302 and the intracorporeal region 303 of the subject.

The MRI image 301 shown in FIG. 2(b) is a set of two-dimensional images (three-dimensional image data) sliced along cross sections (sagittal cross sections) perpendicular to the left-right direction of the human body. Like the MRI image 300, the MRI image 301 includes the imaging results of the extracorporeal region 302 and the intracorporeal region 303 of the subject. In this embodiment, the MRI image coordinate system C MRI is defined such that the direction from the right hand side to the left hand side of the patient is the positive x-axis direction, the direction from the chest side to the back side is the positive y-axis direction, and the direction from the foot side to the head side is the positive z-axis direction.

● PAT
Imaging of the subject by the PAT 12 will be described with reference to FIG. 3. The subject 500 lies in the prone position on the bed on the upper surface of the PAT 12, and one breast 501, which is the subject of imaging, is inserted into the opening 502 in the upper surface of the PAT 12. At this time, so that the irradiation light reaches the inside of the breast, the breast 501 is held in a compressed state by two transparent holding plates (a foot-side fixed holding plate 503 and a head-side movable holding plate 504), and imaging is performed with the breast 501 thinned. The holding is performed by moving the movable holding plate 504 in the foot direction (toward the fixed holding plate 503).

  Both the fixed holding plate 503 and the movable holding plate 504 are flat plates, and their contact surfaces with the breast 501 (hereinafter referred to as holding surfaces) are planes. The distance between the fixed holding plate 503 and the movable holding plate 504 during holding (hereinafter referred to as the holding thickness) is measured by the PAT 12, and the holding thickness is stored in the header portion of the image as additional information of the PAT image.

  A near-infrared light pulse, as the irradiation light, is emitted from a light source (not shown) in a direction orthogonal to the plane of the holding plates. The photoacoustic signal generated in the subject is received by an ultrasonic probe (not shown) arranged orthogonally to the plane of the holding plates.

A PAT device coordinate system C DEV is defined for the PAT 12: the xy plane is a plane parallel to the planes of the fixed holding plate 503 and the movable holding plate 504, and the z axis is the thickness direction of the held breast 501. For example, as in the MRI image coordinate system C MRI , the direction from the right hand side to the left hand side of the subject 500 is defined as the positive x-axis direction, the direction from the chest side (bottom) to the back side (top) as the positive y-axis direction, and the direction from the foot side to the head side as the positive z-axis direction. The origin of the PAT device coordinate system C DEV is set, for example, at the lower end position on the right hand side of the fixed holding plate 503. Hereinafter, the relationships between the PAT 12 and other coordinate systems are handled with reference to this coordinate system.

An example of a PAT image captured by the PAT 12 is shown in FIG. 4. The PAT image 600 is a set of two-dimensional images (three-dimensional image data) of axial cross sections, as in FIG. 2(a). In this example, as in the MRI image coordinate system C MRI , the PAT image coordinate system C PAT is defined such that the direction from the right hand side to the left hand side of the subject 500 is the positive x-axis direction, the direction from the chest side to the back side is the positive y-axis direction, and the direction from the foot side to the head side is the positive z-axis direction.

A coordinate transformation matrix that transforms from the PAT image coordinate system C PAT to the PAT device coordinate system C DEV is defined as T PtoD . The coordinate transformation matrices that appear hereafter, including T PtoD , are all 4 × 4 matrices representing the translation and rotation of a coordinate system. The PAT image coordinate system C PAT is parallel to the PAT device coordinate system C DEV , and the origin position of C PAT changes according to the imaging range of the subject 501. That is, the coordinate transformation matrix T PtoD has no rotation component and can be uniquely calculated based on the imaging range. The coordinate transformation matrix T PtoD is stored in the header portion of the image as additional information of the PAT image.
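
The following is a minimal sketch, not part of the embodiment, of how such a 4 × 4 homogeneous coordinate transformation matrix can be represented and applied; the numeric offset for T PtoD is an assumed illustrative value, not one taken from the patent.

```python
import numpy as np

def make_rigid_transform(rotation=np.eye(3), translation=np.zeros(3)):
    """Build a 4x4 homogeneous matrix from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# T_PtoD has no rotation component: it only shifts the PAT image origin
# (an assumed offset in millimetres) into the PAT device coordinate system.
T_PtoD = make_rigid_transform(translation=np.array([40.0, 25.0, 10.0]))

# A point given in the PAT image coordinate system C_PAT (homogeneous coordinates) ...
p_PAT = np.array([12.0, 30.0, 5.0, 1.0])
# ... is expressed in the device coordinate system C_DEV by left-multiplication.
p_DEV = T_PtoD @ p_PAT
print(p_DEV[:3])
```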

  As shown in FIG. 3, three infrared cameras (a front infrared camera 505, a rear infrared camera 506, and a side infrared camera 507) are installed on the PAT 12 to image the appearance of the subject 501 and the state of the blood vessels near the body surface. The front infrared camera 505 is installed at a position where it can image the appearance of the subject 501 from the head side through the movable holding plate 504. The rear infrared camera 506 is installed at a position where it can image the appearance of the subject 501 from the foot side through the fixed holding plate 503. The side infrared camera 507 is installed at a position where it can image the appearance of the subject 501 from the side.

The PAT 12 has a function of saving images of the subject 501 captured by the infrared cameras 505 to 507 both in the state where the subject is not held (hereinafter, the non-holding state) and in the state where it is held (hereinafter, the holding state). Hereinafter, in the holding state, the image captured by the front infrared camera 505 is denoted I CAM1 , the image captured by the rear infrared camera 506 is denoted I CAM2 , and the image captured by the side infrared camera 507 is denoted I CAM3 . In the non-holding state, the image captured by the front infrared camera 505 is denoted I′ CAM1 , the image captured by the rear infrared camera 506 is denoted I′ CAM2 , and the image captured by the side infrared camera 507 is denoted I′ CAM3 .

The z axis of the coordinate system of the front infrared camera 505 (front camera coordinate system) C CAM1 , which indicates the negative direction of the visual axis, faces substantially the same direction as the z axis of the PAT device coordinate system C DEV . Similarly, the z axis of the coordinate system of the rear infrared camera 506 (rear camera coordinate system) C CAM2 faces substantially the opposite direction to the z axis of the PAT device coordinate system C DEV . The z axis of the coordinate system of the side infrared camera 507 (side camera coordinate system) C CAM3 faces the −x axis direction of the PAT device coordinate system C DEV .

The coordinate transformation matrices from the camera coordinate systems C CAM1 , C CAM2 , and C CAM3 to the PAT device coordinate system C DEV are defined as T C1toD , T C2toD , and T C3toD , respectively. The infrared cameras 505 to 507 have been calibrated in the PAT device coordinate system C DEV (in other words, their positional relationship with the PAT 12 is known). The above coordinate transformation matrices and the internal parameters of the infrared cameras 505 to 507 are held in the image processing apparatus 10 as known information.

FIG. 5 shows an example of the captured image I′ CAM1 of the front infrared camera 505 in the non-holding state. An infrared camera is a device that captures an image visualizing near-infrared intensity information, and can visualize the venous blood vessels (superficial blood vessels) directly under the skin, which is the surface layer of the subject, because of the following properties of near-infrared light.
・Near-infrared light penetrates the skin to some extent.
・Near-infrared light is absorbed by the hemoglobin in blood, so blood vessels appear darker than their surroundings.

  The infrared camera image 700 can be treated as a morphological image because the shape of the superficial blood vessels under the skin is clearly depicted. In FIG. 5, the infrared camera image 700 shows the breast contour shape 701, the nipple image 702, and the superficial blood vessel image 703.

A coordinate on the two-dimensional coordinate system C IMG1 of the captured image of the front infrared camera 505 has a one-to-one relationship with a straight line in the three-dimensional camera coordinate system C CAM1 that passes through the focal point, which is the origin, and through the corresponding point on the projection plane of the camera in three-dimensional space, that is, with the line of sight. Since the transformation between the coordinate system C IMG1 of the captured image and the camera coordinate system C CAM1 of the front infrared camera 505 uses the general coordinate transformation process between a captured image and three-dimensional space, its description is omitted. The captured images of the rear infrared camera 506 and the side infrared camera 507 are the same as that of the front infrared camera 505 except that the viewpoint positions differ, and their detailed description is therefore also omitted.

● Image processing apparatus
The operation and processing of each unit of the image processing apparatus 10 according to the first embodiment will be described with reference to the flowchart of FIG. 6.

  The medical image acquisition unit 101 acquires the MRI image of the subject held in the medical image DB 11 and outputs the MRI image to the three-dimensional shape acquisition unit 102, the rigid transformation unit 106, and the deformed image generation unit 110 (S201).

  The three-dimensional shape acquisition unit 102 performs image processing on the input MRI image, detects the positions (surface positions) of the pixels corresponding to the surface of the subject, and acquires information indicating the surface shape of the subject (S202). It then acquires the position of a feature point in the MRI image based on the three-dimensional curvature of the shape obtained from the detected surface positions (S203). When the subject is a breast, the feature point in the MRI image is the nipple, and the description below assumes that the three-dimensional shape acquisition unit 102 functions as an information acquisition unit that acquires information indicating the nipple position. The three-dimensional shape acquisition unit 102 outputs the acquired surface shape and nipple position to the rigid transformation unit 106 and the deformation estimation unit 109. In the present embodiment, the surface shape of the subject acquired from the MRI image serves as the shape model of the subject in the non-holding state.

  The surface shape acquisition processing will be described with reference to FIG. 7. FIGS. 7(a) and 7(b) show surface detection images 400 and 401 obtained by detecting the boundary 304 (surface position) between the extracorporeal region 302 and the intracorporeal region 303 of the subject from the MRI images 300 and 301 shown in FIGS. 2(a) and 2(b). The surface detection images 400 and 401 may be, for example, binary images in which the surface of the subject can be distinguished from everything else.

In this embodiment, N S points P Sk (1 ≤ k ≤ N S ) are acquired as the surface shape of the subject, and the three-dimensional positions of these points are recorded as position coordinate vectors v Sk_MRI in the MRI image coordinate system C MRI .
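
As a rough illustration of step S202, the sketch below extracts a surface point cloud from a binary in-body mask of the MRI volume; the segmentation of the body region itself is assumed to exist already, and the function name and spacing values are hypothetical.

```python
import numpy as np
from scipy import ndimage

def extract_surface_points(body_mask, voxel_spacing):
    """Return coordinates (in the MRI image coordinate system) of the voxels lying
    on the surface of the subject, given a binary in-body mask of the MRI volume."""
    body_mask = body_mask.astype(bool)
    eroded = ndimage.binary_erosion(body_mask)
    surface = body_mask & ~eroded                 # boundary voxels only
    idx = np.argwhere(surface)                    # (N_S, 3) voxel indices, array-axis order
    v_Sk_MRI = idx * np.asarray(voxel_spacing)    # scale indices to millimetres
    return v_Sk_MRI

# e.g. a 1 mm isotropic MRI volume whose in-body region was segmented beforehand:
# points = extract_surface_points(mask, (1.0, 1.0, 1.0))
```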

The PAT image acquisition unit 103 acquires the PAT image obtained by imaging the subject with the PAT 12 and outputs the PAT image to the deformed image evaluation unit 111 and the image display unit 112 (S204). Additional information included in the header portion of the PAT image, for example the coordinate transformation matrices T PtoD , T C1toD , T C2toD , and T C3toD , is also output to the deformed image evaluation unit 111. The PAT image acquired by the PAT image acquisition unit 103 is a three-dimensional image obtained by imaging the light energy deposition amount distribution in the subject for a predetermined wavelength.

The camera image acquisition unit 104 acquires the infrared camera images of the subject in the non-holding state and in the holding state captured by the infrared cameras 505 to 507 of the PAT 12, and outputs them to the two-dimensional shape acquisition unit 105 and the virtual projection image evaluation unit 108 (S205). The infrared camera images acquired here are I CAMi and I′ CAMi (i = 1, 2, 3).

  The PAT image acquisition unit 103 and the camera image acquisition unit 104 may acquire the images directly from the PAT 12 in synchronization with imaging by the PAT 12, or may acquire images captured and recorded in the past from a medical image recording apparatus (not shown).

  The two-dimensional shape acquisition unit 105 performs image processing on each input infrared camera image to acquire the breast contour shape (701 in FIG. 5) and the position of the nipple image (702 in FIG. 5), and outputs the nipple position to the rigid transformation unit 106 and the deformed image evaluation unit 111 (S206). For example, a general edge detection method can be used to detect the breast contour shape, and the nipple position can be detected based on the curvature of the curve representing the boundary of the breast region. The detection of the breast contour shape and the nipple position is not limited to these methods.

The rigid transformation unit 106, the virtual projection image generation unit 107, and the virtual projection image evaluation unit 108 align the MRI image with the subject using the MRI image and the information on the superficial blood vessels captured in the infrared camera images in the non-holding state (S207). The non-holding state is the initial stage of the second state. Specifically, for each assumed candidate value of the alignment parameter, a virtual image obtained by virtually observing the MRI image with the infrared camera is generated, and the alignment parameter is estimated by comparing the virtual image with the infrared camera image. Details of this alignment (alignment in the non-holding state) will be described later; by this processing, the transformation matrix T MtoC1 representing the rigid transformation from the MRI image coordinate system C MRI to the front camera coordinate system C CAM1 is obtained as the alignment parameter.

The rigid transformation unit 106 calculates, based on the transformation matrix T MtoC1 , a transformation matrix T MtoD representing the rigid transformation from the MRI image coordinate system C MRI to the PAT device coordinate system C DEV (S208). That is, the transformation matrix T MtoD is calculated by multiplying the transformation matrix T MtoC1 by the transformation matrix T C1toD from the front camera coordinate system C CAM1 to the PAT device coordinate system C DEV , which is held by the image processing apparatus 10.

The deformation estimation unit 109, the deformed image generation unit 110, and the deformed image evaluation unit 111 perform alignment between the subject in the holding state and the MRI image (hereinafter, compression deformation estimation) based on the alignment result in the non-holding state (S209). Although details will be described later, the compression deformation of the MRI image is estimated using a physical deformation simulation. That is, the physical deformation simulation is performed while variously changing the deformation parameter, a predetermined evaluation value representing the appropriateness of the deformation is obtained by comparison with the PAT image, and the deformation parameter that maximizes the evaluation value is estimated as the alignment parameter. Then, the deformed MRI image I D_MRIonP (deformed three-dimensional image) is generated as the estimation result using the estimated deformation parameter.

  The image display unit 112 displays the generated deformed three-dimensional image (deformed MRI image) and the PAT image acquired in step S204 side by side on a monitor (not shown) (S210). FIG. 8 shows a display example of the deformed MRI image and the PAT image. The example of FIG. 8 shows the deformed MRI image 1400 and the PAT image 600 of the same axial cross section juxtaposed in the vertical direction. The broken-line rectangle 1500 superimposed on the deformed MRI image 1400 is display information indicating to the user the area corresponding to the display area of the PAT image 600.

  The three-dimensional shape acquisition unit 102, the two-dimensional shape acquisition unit 105, the rigid transformation unit 106, the virtual projection image generation unit 107, the virtual projection image evaluation unit 108, the deformation estimation unit 109, the deformed image generation unit 110, and the deformed image evaluation unit 111 constitute an alignment unit 113.

● Alignment in the non-holding state
In the alignment in the non-holding state, the rigid transformation from the MRI image coordinate system C MRI to the front camera coordinate system C CAM1 is estimated. Details of the alignment in the non-holding state (S207) will be described with reference to the flowchart of FIG. 9.

The rigid transformation unit 106 calculates the parameters for translating the MRI image into the infrared camera coordinate system (front camera coordinate system) (S801). First, based on the principle of triangulation, the three-dimensional nipple position in the front camera coordinate system C CAM1 is calculated from the two-dimensional nipple positions obtained from the infrared camera images in the non-holding state. Then, a transformation matrix T1 MtoC1 representing a translation from the MRI image coordinate system C MRI to the front camera coordinate system C CAM1 is calculated so that the nipple position in the MRI image matches the three-dimensional nipple position in the non-holding state obtained from the infrared camera images.
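
As a minimal sketch of step S801 (the triangulation itself is omitted), the translation matrix T1 MtoC1 can be built from the two nipple positions as follows; the numeric coordinates are assumed example values only.

```python
import numpy as np

def translation_to_match_nipple(nipple_MRI, nipple_CAM1):
    """4x4 translation T1_MtoC1 that moves the nipple position in C_MRI onto the
    nipple position triangulated in the front camera coordinate system C_CAM1."""
    T1 = np.eye(4)
    T1[:3, 3] = np.asarray(nipple_CAM1, dtype=float) - np.asarray(nipple_MRI, dtype=float)
    return T1

# assumed example positions (mm): nipple in C_MRI and in C_CAM1
T1_MtoC1 = translation_to_match_nipple([102.0, 55.0, 80.0], [-3.0, 12.0, 260.0])
```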

  Next, the rigid transformation unit 106 sets a plurality of (N θ sets of) rotation parameter candidate values (hypotheses) θi = {θx, θy, θz} (1 ≤ i ≤ N θ ) by combining the values that each component (the rotation angle around each of the three axes) for rotating the subject in the MRI image can take (S802). In other words, candidate values of the rigid transformation parameter are set by combining the rotation parameter candidate values θi of this step with the translation parameters calculated in step S801. In addition, since the relationship between the PAT image coordinate system and the front camera coordinate system is known, this is also equivalent to setting candidate values of the rigid transformation from the MRI image (the subject in the first state) to the PAT image (the subject in the second state). For example, for the rotation angle θx around the x axis and the rotation angle θz around the z axis, the following five angles are set in 5-degree increments within a range of −10 degrees to +10 degrees.
θx = {-10, -5, 0, +5, +10}
θz = {-10, -5, 0, +5, +10}

For the rotation angle θy around the y axis, the following angles are set in 5-degree increments within the range of −180 degrees to +180 degrees (since +180 degrees represents the same rotation as −180 degrees, this gives 72 distinct angles).
θy = {-180, -175,…, -5, 0, +5,…, +175, +180}

At this time, the number of values that the rotation parameter θi can take (the total number of candidate values (hypotheses)) N θ is 1800 (that is, 1 ≦ i ≦ 1800).
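The following sketch simply enumerates these candidate values; it treats +180 degrees as identical to −180 degrees, so the count matches the N θ = 1800 stated above.

```python
from itertools import product

# candidate rotation angles in degrees (S802)
theta_x = range(-10, 11, 5)        # {-10, -5, 0, +5, +10}
theta_z = range(-10, 11, 5)        # {-10, -5, 0, +5, +10}
theta_y = range(-180, 180, 5)      # 72 distinct angles; +180 coincides with -180

hypotheses = [(tx, ty, tz) for tx, ty, tz in product(theta_x, theta_y, theta_z)]
print(len(hypotheses))             # 5 * 72 * 5 = 1800 = N_theta
```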

Next, the rigid transformation unit 106 performs initialization (S803). That is, the loop variable i is set to 1, the maximum value S MAX of the similarity Si described later is set to 0, and the angle θ MAX described later is set to θ1.

Next, the rigid transformation unit 106 outputs to the virtual projection image generation unit 107 the MRI image I MRIonC1 i and the position coordinate vectors v Sk_MRIonC1 i obtained by rotating the translated MRI image about the nipple position by the rotation parameter θi (S804). That is, a transformation matrix T2i representing the rotation by the rotation parameter θi in the front camera coordinate system C CAM1 is calculated. Subsequently, a transformation matrix T MtoC1 i representing a rigid transformation is derived by multiplying the transformation matrix T1 MtoC1 for the translation derived in step S801 by the transformation matrix T2i. Subsequently, the MRI image and the position coordinate vectors v Sk_MRI of the surface shape of the subject are coordinate-transformed with each transformation matrix T MtoC1 i to generate the MRI image I MRIonC1 i and the position coordinate vectors v Sk_MRIonC1 i .

The virtual projection image generation unit 107 obtains, as a partial surface region, the surface shape of the subject that enters the field of view when the rigidly transformed MRI image is observed from the viewpoint of the front infrared camera 505 (S805). In other words, based on the surface shape, which is the information indicating the surface positions, and the rigid transformation parameter (the transformation matrix T MtoC1 i representing the rigid transformation), which is a candidate value of the alignment parameter, a virtual image of the rigidly transformed MRI image viewed from the viewpoint of the front infrared camera 505 is generated.

The processing (S805) in which the virtual projection image generation unit 107 obtains the partial surface region will be described with reference to FIG. 10. FIG. 10 shows the state of the subject in the MRI image observed from the viewpoint P of the front infrared camera 505; the MRI breast region 900 has been rigidly transformed so that the nipple position 901 coincides with the nipple position in the infrared camera image. In step S805, the virtual projection image generation unit 107 performs perspective projection based on the position and orientation of the viewpoint P and obtains the partial surface region 903 observed from the viewpoint P. That is, first, projection lines 904 are extended from the viewpoint P, and the observation range 902 in which they intersect the position coordinate vectors v Sk_MRIonC1 i representing the body surface shape of the MRI breast region 900 is specified. Then, within the observation range 902, the body surface point 905 at which each projection line 904 starting from the viewpoint P first intersects the body surface is determined from the position coordinate vectors v Sk_MRIonC1 i . When the body surface point 905 has been obtained for all projection lines 904 included in the observation range 902, the partial surface region 903 is determined.
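
A rough sketch of this visibility computation is given below. Instead of casting explicit projection lines, it projects every surface point with a simple pinhole model and keeps, per pixel, the point nearest to the viewpoint P (a z-buffer); the intrinsics, the image size, and the convention that the scene lies at positive z are assumptions, not the camera model of the embodiment.

```python
import numpy as np

def visible_surface_points(points_cam, focal_px, image_size):
    """Approximate the partial surface region 903: project each surface point
    (already expressed in the front camera coordinate system, scene at z > 0)
    with a pinhole model and keep, per pixel, the point closest to the viewpoint P."""
    h, w = image_size
    depth = np.full((h, w), np.inf)
    nearest = np.full((h, w), -1, dtype=int)
    cx, cy = w / 2.0, h / 2.0
    for i, (x, y, z) in enumerate(points_cam):
        if z <= 0:
            continue                              # behind the camera
        u = int(round(focal_px * x / z + cx))
        v = int(round(focal_px * y / z + cy))
        if 0 <= u < w and 0 <= v < h and z < depth[v, u]:
            depth[v, u] = z
            nearest[v, u] = i
    visible = np.unique(nearest[nearest >= 0])    # indices of body surface points 905
    return visible, depth
```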

Next, the virtual projection image generation unit 107 generates a MIP image using the neighborhood information of the partial surface region 903 in the rigidly transformed MRI image, and outputs the generated MIP image to the virtual projection image evaluation unit 108 (S806). MIP is an abbreviation for “maximum intensity projection”, and hereinafter the MIP image generated in step S806 is referred to as the “body surface vicinity MIP image I MIPonC1 i”.

In step S806, the virtual projection image generation unit 107 sets, for each projection line 904, a body surface vicinity section 906 of a predetermined length (for example, 5 mm) extending in the direction away from the viewpoint P, starting from the body surface point 905 at which the projection line 904 first intersects the body surface. Setting the body surface vicinity section 906 for all projection lines 904 included in the observation range 902 defines the body surface vicinity region 907. Subsequently, perspective projection is performed based on the position and orientation of the viewpoint P to generate the body surface vicinity MIP image I MIPonC1 i, which is a MIP image limited to the portion of the MRI image I MRIonC1 i included in the body surface vicinity region 907. As a result, a MIP image is generated in which, of the blood vessel region 908 in the MRI image, only the information on the superficial blood vessels 909 existing near the body surface on the front infrared camera 505 side is visualized. Note that the skin of the subject may be excluded from the region when the body surface vicinity region 907 is generated. Specifically, a region obtained by giving the partial surface region 903 a predetermined thickness corresponding to the skin thickness may be derived as a skin region and excluded from the body surface vicinity region 907 obtained by the above processing. Since the skin of the subject has high luminance values in the MRI image, this exclusion processing allows the superficial blood vessels to be depicted more clearly in the generated MIP image.

In this way, by visualizing only the information in the region near the body surface that is observed from the viewpoint P of the front infrared camera 505, visualization of the blood vessel regions deep inside the breast and of the blood vessel regions existing near the body surface on the far side as seen from the front infrared camera 505 can be prevented. That is, the body surface vicinity MIP image I MIPonC1 i, which is a MIP image closer to the actual infrared camera image, can be generated.

Note that the method of generating the body surface vicinity MIP image I MIPonC1 i is not limited to the above method, as long as only the region near the body surface is visualized or the region near the body surface is emphasized. As another method, there is a method of generating the MIP image while reducing the weight on the luminance value as the distance from the body surface point 905 increases in the direction away from the viewpoint P along the projection line 904. According to this method, a MIP image is generated in which the luminance values of regions inside the breast closer to the body surface are emphasized, and thus a MIP image in which the superficial blood vessels are emphasized is obtained. Of course, also when the MIP image is generated by this processing, the superficial blood vessels can be depicted more clearly by excluding the skin region from the projection target.
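
A minimal per-projection-line sketch of the depth-limited projection of step S806 is shown below; the sampling of the MRI intensities along each projection line is assumed to be done elsewhere, and the section length and skin thickness are the example values mentioned above.

```python
import numpy as np

def surface_near_mip(sample_depths, sample_values, surface_depth,
                     section=5.0, skin=0.0):
    """MIP restricted to the body surface vicinity, for one projection line 904.
    sample_depths / sample_values: MRI intensities sampled along the projection
    line, ordered away from the viewpoint P (depths in mm).
    surface_depth: depth of the body surface point 905 on this line.
    section: length of the body surface vicinity section 906 (mm).
    skin: assumed skin thickness excluded from the projection (mm)."""
    lo = surface_depth + skin
    hi = surface_depth + section
    inside = (sample_depths >= lo) & (sample_depths <= hi)
    if not np.any(inside):
        return 0.0
    return float(np.max(sample_values[inside]))
```

Applying this to every pixel of the virtual camera image yields a body-surface-vicinity MIP image of the kind described above.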

FIG. 11 shows the MIP image 1003 using the body surface vicinity information of the subject in the MRI image. In FIG. 11, the body surface vicinity MIP image I MIPonC1 i is represented as a two-dimensional image on the front camera image coordinate system C IMG1 . This is because the coordinate transformation performed when generating the two-dimensional MIP image by perspective projection based on the viewpoint P (S806) is geometrically equal to the coordinate transformation from the three-dimensional front camera coordinate system C CAM1 to the two-dimensional front camera image coordinate system C IMG1 (the camera imaging process).

In addition, when the MIP image is generated, the region outside the body surface is not included in the processing target, so there are no significant luminance values outside the breast contour shape 1000. Further, since the nipple position 901 in the MRI image has been rigidly transformed so as to coincide with the nipple position 702 in the captured image I′ CAM1 of the front infrared camera 505 (FIG. 5), the nipple 1001 in the MIP image matches the nipple 702 in the captured image I′ CAM1 . In the MIP image, the superficial blood vessels 1002 are visualized as particularly bright regions.

Next, the virtual projection image evaluation unit 108 calculates the similarity Si between the body surface vicinity MIP image I MIPonC1 i and the non-holding infrared camera image I′ CAM1 based on the luminance information of the superficial blood vessels visualized in both images (S807).

In step S807, the virtual projection image evaluation unit 108 excludes the region outside the breast from the similarity calculation region for both the body surface vicinity MIP image I MIPonC1 i and the non-holding infrared camera image I′ CAM1 , and limits the calculation region to the region inside the breast.

The superficial blood vessels 1002 in the body surface vicinity MIP image I MIPonC1 i are visualized with higher luminance than the surrounding breast region, whereas the superficial blood vessels 703 in the infrared camera image I′ CAM1 are visualized with lower luminance than the surrounding breast region. Therefore, the virtual projection image evaluation unit 108 inverts the luminance values of the body surface vicinity MIP image I MIPonC1 i so that the luminance information of the two images can be compared directly. Then, the similarity Si (0 ≤ Si ≤ 1) is calculated between the body surface vicinity MIP image with inverted luminance values and the infrared camera image I′ CAM1 . If the superficial blood vessels 1002 (FIG. 11) of the body surface vicinity MIP image I MIPonC1 i and the superficial blood vessels 703 (FIG. 5) of the infrared camera image I′ CAM1 are similar, the value of the similarity Si increases (approaches 1).

  In the present embodiment, the mutual information between images is applied as the evaluation measure of the similarity Si, but the evaluation measure is not limited to this, and a known technique such as the cross-correlation coefficient or SSD (sum of squared differences) may be used. The evaluation measure need not be based directly on luminance values; for example, a measure that detects image features such as edges from both images and calculates their degree of similarity or coincidence may be used.
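
A hedged sketch of a histogram-based mutual information measure is shown below; the bin count and breast mask are assumptions, and the raw value is not normalized to the range [0, 1] used for Si in the embodiment.

```python
import numpy as np

def mutual_information(img_a, img_b, mask, bins=32):
    """Histogram-based mutual information between two images, evaluated only
    inside the breast region given by `mask` (cf. S807)."""
    a = img_a[mask]
    b = img_b[mask]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Luminance inversion of the MIP image before comparison, since superficial vessels
# are bright in the MIP image but dark in the infrared camera image:
# S_i = mutual_information(mip.max() - mip, ir_image, breast_mask)
```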

Next, the virtual projection image evaluation unit 108 compares the similarity Si with the maximum similarity S MAX (S808). If the similarity Si exceeds the maximum value S MAX (Si > S MAX ), S MAX is updated (S MAX = Si) and the angle θ MAX corresponding to S MAX is updated (θ MAX = θi) (S809). If the similarity Si does not exceed the maximum value S MAX (Si ≤ S MAX ), no update is performed.

Next, the virtual projection image evaluation unit 108 increments the loop variable i (S810) and compares the loop variable i with the total number of hypotheses N θ (S811). If the loop variable i is less than or equal to the total number of hypotheses N θ (i ≤ N θ ), the process returns to step S804; if the loop variable i exceeds the total number of hypotheses N θ (i > N θ ), the process proceeds to step S812. That is, the processing of steps S804 to S811 is repeated for the total number of hypotheses N θ .

When the processing for the total number of hypotheses N θ is completed, the rigid transformation unit 106 sets the transformation matrix T MtoC1 MAX corresponding to the angle θ MAX as the final transformation matrix T MtoC1 representing the rigid transformation from the MRI image coordinate system C MRI to the front camera coordinate system C CAM1 (S812). In other words, from the plurality of rotation parameters, the rotation parameter θ MAX corresponding to the maximum similarity S MAX is selected.

This completes the alignment in the non-holding state (S207) performed by the rigid transformation unit 106, the virtual projection image generation unit 107, and the virtual projection image evaluation unit 108. By this processing, among the body surface vicinity MIP images I MIPonC1 i generated based on the various rigid transformation parameter hypotheses (that is, the rotation parameter hypotheses θi), the angle θ MAX whose MIP image is most similar to the infrared camera image I′ CAM1 is found, and T MtoC1 is obtained based on it.
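
The exhaustive search of steps S803 to S812 can be summarized by the following sketch, in which the two callables are placeholders for the MIP generation (S804 to S806) and the similarity computation (S807); they are assumptions, not components defined in the embodiment.

```python
def estimate_rigid_alignment(hypotheses, generate_surface_near_mip, similarity,
                             ir_image):
    """Exhaustive search over rotation hypotheses (S803-S812): keep the angle whose
    body surface vicinity MIP image is most similar to the non-holding infrared
    camera image."""
    s_max, theta_max = 0.0, hypotheses[0]          # initialization (S803)
    for theta_i in hypotheses:                     # 1 <= i <= N_theta
        mip_i = generate_surface_near_mip(theta_i) # S804-S806
        s_i = similarity(mip_i, ir_image)          # S807
        if s_i > s_max:                            # S808-S809
            s_max, theta_max = s_i, theta_i
    return theta_max, s_max                        # theta_MAX used to form T_MtoC1 (S812)
```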

In the above, an example has been described in which the body surface vicinity MIP image I MIPonC1 i converted into the front camera image coordinate system C IMG1 is generated and the similarity Si with the image I′ CAM1 of the front infrared camera 505 is evaluated. However, the evaluation target of the similarity Si is not limited to the front infrared camera 505.

For example, the body surface vicinity MIP image I MIPonC1 i converted into the rear camera image coordinate system C IMG2 and the side camera image coordinate system C IMG3 is generated. Then, the similarity between the body surface vicinity MIP image I MIPonC1 i and the image I ′ CAM2 of the rear infrared camera 506 or the image I ′ CAM3 of the side infrared camera 507 may be evaluated. In that case, the viewpoint P for performing perspective projection in steps S805 and S806 with respect to the transformation matrix candidate value T MtoC1 i obtained in step S804 may be replaced with the camera viewpoints of the rear or side infrared cameras 506 and 507, respectively.

Here, the MRI image I MRIonC1 i and the position coordinate vectors v Sk_MRIonC1 i have been converted into the front camera coordinate system C CAM1 in step S804. Therefore, the position and orientation expressed in the front camera coordinate system C CAM1 may be used as the position and orientation of the camera viewpoint used for the perspective projection.

As described above, the positional relationship between the infrared cameras (front surface, side surface, and rear surface) is associated with the PAT device coordinate system C DEV as a reference. Therefore, a transformation matrix from the front camera coordinate system C CAM1 to the rear camera coordinate system C CAM2 or the side camera coordinate system C CAM3 can be derived. That is, the position and orientation of the camera viewpoint of the rear or side infrared camera in the front camera coordinate system C CAM1 can be derived.

Note that the similarity may be evaluated by calculating a total similarity based on the similarities obtained from the front, rear, and side infrared cameras. The total similarity is, for example, a weighted average value, maximum value, minimum value, or median value of these three similarities. In the above description, the rotation parameter θi is set with reference to the front camera coordinate system C CAM1 , and the coordinate transformation for perspectively projecting the MRI image onto each camera coordinate system is performed on that basis. However, the candidate values may instead be set with reference to the PAT device coordinate system C DEV , using the rigid transformation T MtoD between C MRI and C DEV as the alignment parameter. In this case, after performing the rigid transformation from C MRI to C DEV and the viewing transformation to each infrared camera, the body surface vicinity MIP image is generated by the same processing as step S806, and the alignment parameter may be estimated based on the similarity evaluation with each infrared camera image.

● Estimation of compression deformation
Details of the compression deformation estimation (S209) will be described with reference to the flowchart of FIG. 12.

The deformation estimation unit 109 generates a three-dimensional mesh representing the shape of the subject (hereinafter, mesh M) using the surface shape of the subject acquired in step S202 and the coordinate transformation matrix T MtoD acquired in step S208 (S901). In other words, the coordinate transformation by the coordinate transformation matrix T MtoD is applied to the surface shape v Sk_MRI of the subject, and the position coordinate vectors v Si_MRIonD (1 ≤ i ≤ N S ) of the subject surface point group in the PAT device coordinate system C DEV are calculated. Then, the internal region of the subject is determined based on the surface shape represented by the position coordinate vectors v Si_MRIonD , and the mesh M is arranged in the internal region.

  A method for generating the mesh M is shown in the schematic diagram of FIG. FIG. 13 (a) shows a sagittal cross section of the processing target region 1200 of the subject, and shows the surface position 1201 of the subject in the sagittal cross section and the corresponding internal region 1202. As shown in FIG. 13 (b), the mesh M is generated by disposing an element 1203 having a three-dimensional structure such as a hexahedron or a tetrahedron in the internal region 1202 of the subject. The mesh M is described by the position of the vertex (node) 1204 of these elements and the connection information.

Hereinafter, the number of nodes of the mesh M arranged in step S901 is expressed as Nm, and the position of each node is expressed as s L (1 ≦ L ≦ Nm). The displacement field in the element can be expressed by the displacement of each node, and based on this, the displacement of an arbitrary point in the subject can be obtained.

Next, the deformation estimation unit 109 generates a plurality of (Np sets of) deformation parameter hypotheses p k (1 ≤ k ≤ Np) by combining the values that each component of the deformation parameter (such as the Young's modulus and Poisson's ratio of the subject) can take (S902). For example, the range of values that each component can take is divided at an appropriate interval, and all combinations are taken to generate the deformation parameters p k . For example, the Young's modulus ratio p y and the Poisson's ratio p p are used as the components of the deformation parameter p k , and the values that the Young's modulus ratio p y and the Poisson's ratio p p can take are as follows.
p y = {1, 2, 3, 4, 5}
p p = {0.0, 0.2, 0.4, 0.45, 0.499}

Then, deformation parameters combining the Young's modulus ratio p y and the Poisson's ratio p p are generated; in the above example, Np = 25. The Young's modulus ratio p y is a parameter for dealing with the anisotropy of the hardness of the breast, and represents the ratio of the Young's modulus in the front-rear direction of the human body (y-axis direction) to the Young's modulus in the coronal plane (xz plane) of the human body.
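
The sketch below simply enumerates these combinations; the dictionary keys are hypothetical names, not identifiers from the embodiment.

```python
from itertools import product

p_y = [1, 2, 3, 4, 5]                    # Young's modulus ratio candidates
p_p = [0.0, 0.2, 0.4, 0.45, 0.499]       # Poisson's ratio candidates

deformation_hypotheses = [{"young_ratio": y, "poisson": p}
                          for y, p in product(p_y, p_p)]
print(len(deformation_hypotheses))       # Np = 25
```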

Next, the deformation estimation unit 109 performs initialization (S903). That is, the loop variable k is set to 1, the maximum value E MAX of the evaluation value described later is set to 0, and the deformation parameter p MAX described later is set to p 1 .

Next, the deformation estimation unit 109 performs a physical deformation simulation based on the finite element method on the mesh M using the deformation parameter p k , and generates the deformed mesh DMk (S904). The deformation function Fk(x, y, z) at this time is defined by the displacement vectors dk L (1 ≤ L ≤ Nm) that give the displacement of each node from the mesh M to the deformed mesh DMk.

  The compression deformation simulation by the holding plates, which is the physical deformation simulation of step S904, will be described with reference to FIG. 14. In the compression deformation by the holding plates, when the two holding plates are moved toward the center of the subject, the surface regions of the subject in contact with the moved holding plates stick to the holding plates.

As shown in FIG. 14(a), it is assumed that the two holding plates are moved by distances Δd1 and Δd2, respectively. Among the nodes of the mesh M, the outer surface nodes 1300 and 1301 in contact with the holding surfaces P Ud1 and P Ld2 are extracted from the surface nodes representing the body surface, and the displacement amounts by which the outer surface nodes 1300 and 1301 stick to the respective holding surfaces are obtained. Then, the finite element method is calculated using these displacement amounts as the boundary condition C of the deformation simulation, and a deformed mesh is generated for the case where the two holding plates move by the distances Δd1 and Δd2, respectively.

In this embodiment, the movement of the two holding plates to the final holding positions P U and P L shown in FIG. 14(b) is divided into a plurality of (N) deformation simulations, each corresponding to a change in the boundary condition. That is, FIG. 14(b) shows the deformed mesh DMk obtained as a result of repeating the deformation simulation N times. As shown in FIG. 14(b), the physical deformation simulation shows that the deformed mesh of the subject is compressed in the z-axis direction between the holding positions P U and P L and expanded in the y-axis direction.

The deformed image generation unit 110 applies the deformation corresponding to the deformation parameter p k to the MRI image, generates a deformed MRI image, and outputs the deformed MRI image to the deformed image evaluation unit 111 (S905). That is, the MRI image is coordinate-transformed into the PAT device coordinate system C DEV using the coordinate transformation matrix T MtoD and deformed using the deformation function Fk calculated in step S904. Then, the deformed MRI image I D_MRIonP k in the PAT image coordinate system C PAT is generated by the coordinate transformation using the inverse matrix of the coordinate transformation matrix T PtoD .
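
A hedged sketch of this resampling chain is shown below. It is implemented as backward mapping (each output voxel is carried back to the original MRI image and sampled there), assumes voxel indices coincide with millimetre coordinates in every coordinate system, and uses a hypothetical callable F_k_inverse that inverts the simulated deformation; none of these choices are specified by the embodiment itself.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_mri_to_pat(mri, T_MtoD, F_k_inverse, T_PtoD, out_shape):
    """Generate the deformed MRI image on the PAT image grid (cf. S905) by backward
    mapping: every output voxel in C_PAT is carried into C_DEV by T_PtoD, through the
    inverse of the deformation F_k, and back into C_MRI by the inverse of T_MtoD,
    where the original MRI image (axis order z, y, x) is sampled."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in out_shape], indexing="ij")
    vox = np.stack([xx, yy, zz, np.ones_like(xx)], axis=-1).reshape(-1, 4).astype(float)
    dev = (T_PtoD @ vox.T).T[:, :3]                   # C_PAT -> C_DEV (deformed space)
    undeformed = F_k_inverse(dev)                     # undo the simulated deformation
    h = np.c_[undeformed, np.ones(len(undeformed))]
    mri_xyz = (np.linalg.inv(T_MtoD) @ h.T).T[:, :3]  # C_DEV -> C_MRI
    coords = mri_xyz[:, ::-1].T                       # (z, y, x) order for sampling
    return map_coordinates(mri, coords, order=1).reshape(out_shape)
```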

The schematic diagram of FIG. 15 shows the deformed MRI image I D_MRIonP . FIG. 15 shows the deformed MRI image 1400, the deformed breast region 1401, and the breast shape 1402 before deformation. FIG. 15(a) is a two-dimensional image obtained by slicing the deformed MRI image I D_MRIonP along an axial cross section, and FIG. 15(b) is a two-dimensional image obtained by slicing it along a sagittal cross section. Comparing the deformed breast region 1401 with the breast shape 1402 before deformation shows that, due to the compression in the z-axis direction of the PAT image coordinate system C PAT , the breast region is expanded in the xy plane and compressed in the z-axis direction.

The deformed image evaluation unit 111 calculates the evaluation value Ek of the appropriateness of the deformation of the deformed MRI image I D_MRIonP k using the PAT image acquired in step S204 and the holding-state infrared camera images acquired in step S205, and outputs the evaluation value to the deformation estimation unit 109 (S906). That is, the deformed image evaluation unit 111 calculates the similarity S MRI k (0 ≤ S MRI k ≤ 1) between the deformed MRI image and the PAT image and the residual Rk between the breast shape of the deformed MRI image and that in the holding-state infrared camera images, and calculates the evaluation value Ek based on them.

  It is assumed that the higher the evaluation value Ek, the more appropriate the deformation. The mutual information between images is applied as the evaluation measure of the similarity S MRI k, as in step S807. Note that the evaluation measure is not limited to this, and any known method such as the cross-correlation coefficient, SSD, or the degree of coincidence of the positions of feature points such as blood vessel bifurcations may be used.

FIG. 8 shows the same axial cross section of the deformed MRI image ( ID_MRIonPk ) 1400 and the PAT image 600, and a corresponding area 1500 indicated by a broken-line rectangle is an area on the deformed MRI image 1400 corresponding to the PAT image 600. The similarity S MRI k is calculated between the PAT image 600 and the corresponding area 1500. If the visualized blood vessel region 1501 of the deformed MRI image 1400 and the blood vessel region 1502 of the PAT image 600 are similar between the PAT image 600 and the corresponding region 1500, the value of the similarity S MRI k increases.

The residual Rk is calculated as the difference between the contour (silhouette) shape of the subject shown in the infrared camera image and the contour shape of the deformed mesh DMk projected onto the infrared camera image. For example, when the deformed mesh DMk is projected onto the holding-state infrared camera image I CAM1 , the deformed mesh DMk is transformed into the front camera coordinate system C CAM1 using the inverse matrix of the coordinate transformation matrix T C1toD and projected onto the front camera image coordinate system C IMG1 . The deformed mesh DMk can likewise be projected onto the rear camera image I CAM2 or the side camera image I CAM3 by the same method.

  The residual Rk is, for example, a total residual obtained by integrating the residuals between the contours of the mesh obtained by projecting the deformed mesh DMk onto each of the three infrared camera images and the subject contours in those three infrared camera images (for example, a weighted average of the three residuals). However, the total residual is not limited to the weighted average value, and the maximum value, minimum value, median value, or the like of the three residuals may be used.

  The residual Rk may be calculated as, for example, the residual between the nipple position of the subject shown in the infrared camera image and the nipple position of the deformed mesh DMk projected on the infrared camera image. Of course, a value (for example, a weighted sum) obtained by integrating both the breast shape residual and the nipple position residual may be used as the residual Rk. Although the example of acquiring the residual based on the breast shape and the nipple position from the infrared camera image has been described, it may be acquired from a general camera image obtained by imaging the appearance of the breast.

The evaluation value Ek is expressed by the following equation as a weighted sum based on the similarity S MRI k and the residual Rk, for example.
Ek = aS MRI k + b {1 / (1 + Rk)}… (1)
Here, a and b are weighting factors (a + b = 1).

The reciprocal of (1 + Rk) is used in the second term of equation (1) for the following reasons:
・The residual Rk, contrary to the evaluation value Ek, is an index that becomes smaller as the deformation becomes more appropriate.
・This keeps its range of possible values between 0 and 1, like the similarity S MRI k.
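
A direct sketch of equation (1) follows; the equal weights and the example similarity and residual values are assumptions used only for illustration.

```python
def evaluation_value(s_mri_k, r_k, a=0.5, b=0.5):
    """Evaluation value of equation (1): Ek = a*S_MRI^k + b*(1/(1+Rk)),
    with weighting factors a + b = 1. Both terms lie between 0 and 1 and
    grow as the deformation becomes more appropriate."""
    return a * s_mri_k + b * (1.0 / (1.0 + r_k))

# e.g. a deformation hypothesis with similarity 0.62 and shape residual 4.0
print(evaluation_value(0.62, 4.0))   # 0.5*0.62 + 0.5*0.2 = 0.41
```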

The deformation estimation unit 109 compares the input evaluation value Ek with the maximum evaluation value E MAX (S907). If the evaluation value Ek exceeds the maximum evaluation value E MAX (Ek > E MAX ), E MAX is updated (E MAX = Ek) and the deformation parameter p MAX corresponding to E MAX is updated (p MAX = p k ) (S908). If the evaluation value Ek does not exceed the maximum evaluation value E MAX (Ek ≤ E MAX ), no update is performed.

  Next, the deformation estimation unit 109 increments the loop variable k (S909), and compares the loop variable k with the total number Np of hypotheses (S910). If the loop variable k is less than or equal to the total number of hypotheses Np (k ≦ Np), the process returns to step S904. If the loop variable k exceeds the total number of hypotheses Np (k> Np), the process proceeds to step S911. That is, the processes of steps S904 to S910 are repeated for the total number of hypotheses Np.

When the processing for the total number Np of hypotheses is completed, the deformation estimation unit 109 outputs the deformation parameter p_MAX to the deformed image generation unit 110 (S911). In other words, the deformation parameter p_MAX corresponding to the maximum evaluation value E_MAX is selected from the plurality of deformation parameters. The deformed image generation unit 110 sets the deformed MRI image I_D_MRIonP^MAX corresponding to the deformation parameter p_MAX as the compression deformation estimation result (I_D_MRIonP), and outputs the deformed MRI image I_D_MRIonP to the image display unit 112 (S912).

Thus, the compression deformation estimation (S209) by the deformation estimation unit 109, the deformed image generation unit 110, and the deformed image evaluation unit 111 ends. In this processing, the deformation simulation is executed under various deformation parameter hypotheses p_k, and the deformed MRI image I_D_MRIonP is generated from the deformation parameter p_MAX for which the evaluation value Ek of the appropriateness of deformation among those simulation results is maximized.
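
  A minimal sketch of this hypothesize-and-test loop (steps S904 to S912) is given below. The helper callables for the deformation simulation, image deformation, similarity, and silhouette residual are assumptions passed in as arguments; they are not part of the embodiment.

def estimate_compression_deformation(hypotheses, simulate, deform, similarity, residual,
                                     a=0.5, b=0.5):
    # Select p_MAX maximizing Ek = a*S_MRIk + b/(1 + Rk) over all Np hypotheses.
    e_max, p_max = -1.0, None
    for p_k in hypotheses:
        dm_k = simulate(p_k)              # deformed mesh DMk (e.g. FEM simulation)
        deformed_mri = deform(dm_k)       # deformed MRI image for this hypothesis
        e_k = a * similarity(deformed_mri) + b / (1.0 + residual(dm_k))  # equation (1)
        if e_k > e_max:                   # keep the hypothesis with the best score
            e_max, p_max = e_k, p_k
    return p_max, e_max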

  As described above, the accuracy of alignment between the PAT image and the MRI image can be improved by comparing the images of the infrared cameras mounted on the PAT 12 and the PAT image with the MRI image, using the breast as the subject. Specifically, the two-dimensional MIP image emphasizing the superficial blood vessels generated from the MRI image is compared with the non-holding-state infrared camera image to estimate the position and orientation of the subject in the MRI image with respect to the infrared camera, which realizes high-precision rigid alignment between the MRI image and the infrared camera image. Furthermore, by converting the coordinates of the MRI image from the coordinate system of the infrared camera to the coordinate system of the PAT 12, the result of the rigid alignment is used as the initial state of the deformation alignment process between the MRI image and the PAT image. In other words, when performing alignment by comparing the MRI image and the PAT image, only the compression deformation from the non-holding state to the holding state needs to be estimated. Then, by deforming the MRI image so as to match both the breast shape shown in the infrared camera image and the internal structure shown in the PAT image, the compression deformation can be estimated with high accuracy.

In this embodiment, a physical deformation simulation based on the finite element method is used as the method for the deformation alignment of the MRI image. However, the method is not limited to this. For example, a general deformation method such as FFD (Free Form Deformation) may be used. In deformation processing using FFD, grid-like control points are first arranged so as to surround the subject in the image with a rectangular parallelepiped. Then, by moving these control points, the image region inside the rectangular parallelepiped can be deformed. Here, the set of displacement amounts at the control points is defined as a deformation parameter candidate value (hypothesis) p_k (1 ≤ k ≤ Np). Deformation alignment that meets the above-described purpose can then be realized by varying the value of the deformation parameter p_k and finding the deformation parameter p_k that maximizes the evaluation value Ek in the above-described deformed image evaluation unit 111.
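
  A minimal FFD-style sketch is shown below. It assumes a regular control lattice and trilinear interpolation of the control-point displacements; the interpolation kernel and lattice layout are simplifying assumptions, not details prescribed by the embodiment.

import numpy as np

def ffd_displace(points, lattice_origin, lattice_spacing, ctrl_disp):
    # points: (N, 3) coordinates; ctrl_disp: (nx, ny, nz, 3) control-point displacements
    # (the deformation parameter p_k). Returns the displaced points.
    g = (points - lattice_origin) / lattice_spacing     # continuous lattice coordinates
    i0 = np.floor(g).astype(int)
    nx, ny, nz, _ = ctrl_disp.shape
    i0 = np.clip(i0, 0, [nx - 2, ny - 2, nz - 2])       # stay inside the lattice
    f = g - i0                                          # fractional position in each cell
    disp = np.zeros((points.shape[0], 3))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0]) *
                     np.where(dy, f[:, 1], 1 - f[:, 1]) *
                     np.where(dz, f[:, 2], 1 - f[:, 2]))
                disp += w[:, None] * ctrl_disp[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return points + disp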

  In this embodiment, a human breast is used as the subject. However, the present invention is not limited to this, and any subject may be used as long as it is a part of a living body having superficial blood vessels. Further, although an MRI image is used as the image registered in the medical image database 11, an image from any modality may be used as long as it is a three-dimensional image obtained by imaging a living body.

[Modification 1]
In the above, the evaluation value is calculated by exhaustively varying the rotation angle used to obtain the position and orientation of the subject in the MRI image with respect to the infrared camera, and the rotation angle that gives the optimum evaluation value is acquired (S207). However, another method can be used to acquire the position and orientation of the subject in the MRI image: the rotation angle at which the evaluation value is optimal may be estimated using a general optimization algorithm. As an example, a method using the steepest descent method, which is one kind of optimization algorithm, will be described.

Let x be a three-dimensional vector whose elements (θx, θy, θz) are the parameters representing the rotation angle of the subject in the MRI image with respect to the infrared camera. When the vector x representing the rotation angle is given, the similarity, computed as in step S807, between the infrared camera image and the body surface vicinity MIP image is defined as S_MIP(x). In the steepest descent method, the function f(x) to be minimized is set to the reciprocal 1 / S_MIP(x) of the similarity S_MIP(x). The reason f(x) is the reciprocal of S_MIP(x) is to obtain the parameters representing the rotation angle at which the similarity S_MIP(x) is maximized. The parameters set in this way are updated using the following equation until convergence, to calculate the parameter x that minimizes f(x) (that is, maximizes S_MIP(x)).
x^(k+1) = x^(k) − α·grad f(x^(k))   … (2)
where α is a parameter (usually a small positive constant) that determines the step size of each update,
k is the number of updates, and
grad f(x^(k)) is the gradient vector of the function f(x) at the k-th update (the direction in which the rate of change of the function f(x) is greatest).

The gradient vector grad f(x^(k)) is obtained by the following method. Let the vector at the k-th update be x^(k) = (θx^(k), θy^(k), θz^(k))^T. The function value f(x^(k) + Δx) is calculated when a minute change Δx = (Δθx, Δθy, Δθz) is applied to the elements of the vector x^(k).

The values f(x^(k) + Δx) corresponding to each Δx are calculated while the change Δx is varied so that its direction samples the parameter space uniformly. From the set of calculated values, the Δx (denoted Δx_MAX) that maximizes f(x^(k) + Δx) − f(x^(k)) is obtained. This Δx_MAX is the direction in the parameter space along which the rate of change of f(x^(k)) is greatest, and it corresponds to the direction of grad f(x^(k)).
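
  A sketch of this update loop with a finite-difference gradient is shown below; the step size alpha, the perturbation delta, and the stopping criterion are assumptions chosen for illustration.

import numpy as np

def steepest_descent(f, x0, alpha=0.01, delta=1e-3, max_iter=200, tol=1e-6):
    # Minimize f(x) (here f(x) = 1 / S_MIP(x)) over the rotation angles (theta_x, theta_y, theta_z).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        grad = np.zeros_like(x)
        for i in range(x.size):                       # finite-difference gradient estimate
            e = np.zeros_like(x)
            e[i] = delta
            grad[i] = (f(x + e) - f(x - e)) / (2.0 * delta)
        x_new = x - alpha * grad                      # update of equation (2)
        if np.linalg.norm(x_new - x) < tol:           # converged
            return x_new
        x = x_new
    return x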

  Further, any known method such as Newton's method may be used as the optimization algorithm. As a result, the rotation angle that gives the optimum evaluation value can be estimated with a smaller number of repetitions, and the processing speed can be increased.

[Modification 2]
In the above, the similarity with the PAT image and the shape error with respect to the contour (silhouette) shape of the subject shown in the infrared camera image are used as the evaluation value for evaluating the validity of the compression deformation of the MRI image (S209). However, the evaluation value may be obtained by another method.

  For example, a deformed MRI image may be generated based on the deformed mesh, the deformed MRI image projected from the infrared camera viewpoint to generate a body surface vicinity deformed MIP image, and the similarity between that body surface vicinity deformed MIP image and the infrared camera image calculated; this similarity may then be added to the evaluation value.

That is, in step S905, a deformation function Fk representing the displacement of each node from the mesh M to the deformed mesh DMk is generated. The MRI image is then subjected to a rigid transformation by the transformation matrix T_MtoC1 from the MRI image coordinate system C_MRI to the front camera coordinate system C_CAM1, and is further transformed by the deformation function Fk to generate the deformed MRI image I_D_MRIonC1^k.

Next, a body surface vicinity deformed MIP image I_D_MIPonC1^k is generated from the deformed MRI image I_D_MRIonC1^k using the vicinity information of the surface partial region, by the same method as in steps S805 and S806. The similarity S_MIPk between the holding-state infrared camera image I'_CAM1 and the body surface vicinity deformed MIP image I_D_MIPonC1^k is then calculated. Finally, the evaluation value Ek is calculated as a weighted sum of the similarity S_MRIk and the residual Rk of step S906 and the calculated similarity S_MIPk by the following equation:
Ek = a·S_MRIk + b·{1 / (1 + Rk)} + c·S_MIPk   … (3)
where a, b, and c are weighting factors (a + b + c = 1).

  As a result, deformation parameters can be obtained by further using information on superficial blood vessels in the vicinity of the body surface, and high-accuracy alignment is possible.

[Modification 3]
In the above, an example in which an infrared camera image using near-infrared light is used to visualize superficial blood vessels has been described. However, an image in which superficial blood vessels are visualized by the polarization component of light internally reflected within the body can also be used. For example, light such as halogen light irradiating the body surface is separated into a surface reflection component reflected at the surface of the body and an internal reflection component that once enters the body, is absorbed and scattered there, and then returns to the body surface. In this way, an image in which information inside the body is visualized can be obtained. In that image, as in the infrared camera image, hemoglobin absorption sites inside blood vessels are depicted darker than their surroundings in the internal reflection component.
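
  One common way to obtain such an internal-reflection image is polarization gating: surface-reflected light keeps the polarization of the illumination, while light scattered inside the body is largely depolarized. The following sketch illustrates that separation; the simple subtraction model and the image names are assumptions for illustration and are not prescribed by the embodiment.

import numpy as np

def separate_reflection_components(co_polarized, cross_polarized):
    # co_polarized:    image taken with the analyzer parallel to the illumination polarization
    # cross_polarized: image taken with the analyzer perpendicular to it
    internal = 2.0 * cross_polarized                              # depolarized (internal-reflection) part
    surface = np.clip(co_polarized - cross_polarized, 0, None)    # surface-reflection part
    return surface, internal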

  In this way, a three-dimensional image obtained by imaging the object by the first imaging device and a two-dimensional image obtained by imaging the surface layer portion of the object by the second imaging device are acquired, and the surface of the object is obtained from the three-dimensional image. Information indicating the position is acquired. Then, based on the information indicating the surface position, a projection image obtained by viewing the three-dimensional image from the viewpoint of the second imaging device is generated, and the three-dimensional image and the two-dimensional image regarding the target object are generated using the projection image and the two-dimensional image. Registration with the image is performed.

  The image processing according to the second embodiment of the present invention will be described below. Note that the same reference numerals in the second embodiment denote the same parts as in the first embodiment, and a detailed description thereof will be omitted.

  In Example 1, a rigid transformation parameter is obtained between an image of the subject in the non-holding state and the MRI image (alignment in the non-holding state), and the deformation parameter between the image of the subject in the holding state and the MRI image is then estimated (estimation of the compression deformation). This method may include errors that occur when the shape of the subject in the non-holding state differs from that of the subject in the MRI image, or errors caused by slight movement of the subject during the transition from the non-holding state to the holding state.

  In the second embodiment, a method will be described for simultaneously estimating the rigid transformation parameter and the deformation parameter between the image of the subject in the holding state (the subject in the second state) and the MRI image (the subject in the first state), without using measurement information on the subject in the non-holding state.

  The block diagram of FIG. 16 shows a configuration example of a modality system including the image processing apparatus 10 of the second embodiment. The difference from the configuration of the first embodiment shown in FIG. 1 is that an evaluation unit 114 is provided instead of the virtual projection image evaluation unit 108 and the deformed image evaluation unit 111 of the first embodiment. In the second embodiment, the information flow in the alignment unit 113 also differs; this will be described in detail in the description of the operation and processing of each unit.

  The operation and processing of each part of the image processing apparatus 10 according to the second embodiment will be described with reference to the flowchart of FIG. The operations and processes in steps S201 to S206 and S210 are the same as those in the first embodiment, and detailed description thereof is omitted.

  In step S220, based on the blood vessel information of the infrared camera image, PAT image, and MRI image, the position / posture of the subject and the compression deformation in the MRI image are estimated. This processing is performed by the rigid body conversion unit 106, the virtual projection image generation unit 107, the deformation estimation unit 109, the deformation image generation unit 110, and the evaluation unit 114.

  In the second embodiment, the position and orientation of the subject in the MRI image with respect to the infrared camera and the deformation parameter representing the compression deformation of the subject are used as the alignment parameters. A deformed MRI image based on an assumed alignment parameter candidate value is then generated by transforming the MRI image into the infrared camera coordinate system (front camera coordinate system) based on that candidate value and applying the compression deformation.

  Next, an MIP image is generated by perspective projection of the generated deformed MRI image with reference to the viewpoint of the infrared camera. At that time, on each projection line from the viewpoint of the infrared camera, only the luminance information in the vicinity of the three-dimensional surface of the breast is visualized, or image processing for enhanced visualization is applied, so that an MIP image emphasizing the superficial blood vessels of the breast is generated.
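
  A sketch of such a depth-limited projection is shown below, assuming the camera rays have already been expressed in the voxel coordinate system of the deformed MRI volume and that the depth of the body surface along each ray is known from the partial surface region; these inputs and all names are illustrative assumptions.

import numpy as np
from scipy.ndimage import map_coordinates

def near_surface_mip(volume, ray_origins, ray_dirs, surface_depths, band=10.0, step=0.5):
    # One ray per image pixel; keep the maximum intensity found between the body
    # surface and a depth of `band` voxels behind it (emphasizes superficial vessels).
    mip = np.zeros(ray_origins.shape[0])
    for s in np.arange(0.0, band, step):
        depth = surface_depths + s                      # current depth along each ray
        pts = ray_origins + depth[:, None] * ray_dirs   # sample positions, shape (N, 3)
        vals = map_coordinates(volume, pts.T, order=1, mode="constant", cval=0.0)
        mip = np.maximum(mip, vals)
    return mip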

  Next, the transformed MRI image is coordinate-converted from the infrared camera coordinate system (front camera coordinate system) to the PAT image coordinate system based on the position and orientation of the infrared camera calibrated in advance with respect to PAT12.

  Next, the image similarity between the infrared camera image and the MIP image in which the blood vessel information is visualized, and between the PAT image and the coordinate-transformed deformed MRI image, is calculated, and an evaluation value that combines these similarities is computed. The assumed alignment parameter is then varied to select the alignment parameter that maximizes the evaluation value. That is, alignment including compression deformation is performed between the MRI image and the PAT image using that alignment parameter.

  Details of position / posture and compression deformation estimation (S220) will be described with reference to the flowcharts of FIGS.

  The deformation estimation unit 109 generates a mesh M representing the shape of the subject based on the surface shape of the subject acquired in step S202 (S1101). This process is substantially the same as step S901 in the first embodiment, and detailed description thereof is omitted.

  The rigid transformation unit 106 translates the MRI image into the infrared camera coordinate system (S1102). The process of step S1102 is performed based on the nipple position in the MRI image acquired in step S203 and the nipple position on the infrared camera image in the holding state acquired in step S206. This process is substantially the same as the process in step S801 of the first embodiment, and detailed description thereof is omitted. However, the processing in step S801 is performed based on the infrared camera image in the non-holding state, but the processing in step S1102 is different in that it is performed based on the infrared camera image in the holding state.

Next, the rigid transformation unit 106 and the deformation estimation unit 109 combine the possible values of the rigid transformation parameter with the possible values of each component of the deformation parameter, and set a plurality (Nt sets) of alignment parameter (conversion parameter) hypotheses t_i (1 ≤ i ≤ Nt) (S1103).

For example, the rigid transformation unit 106 sets a plurality (N_θ sets) of rotation parameters θj (1 ≤ j ≤ N_θ) as possible values of the rigid transformation parameter, as in step S802 of the first embodiment. The deformation estimation unit 109 sets a plurality (Np sets) of deformation parameters p_k (1 ≤ k ≤ Np) as possible values of the deformation parameter, as in step S902 of the first embodiment. A plurality (Nt = N_θ × Np sets) of conversion parameters t_i (1 ≤ i ≤ Nt) is then set by combining the rotation parameters θj and the deformation parameters p_k. Since the relationship between the PAT image coordinate system and the front camera coordinate system is known, this is equivalent to setting candidate values of the alignment parameter from the MRI image (the subject in the first state) to the PAT image (the subject in the second state). The conversion parameters t_i are assumed to be shared between the rigid transformation unit 106 and the deformation estimation unit 109.
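
  For example, the combined hypothesis set could be enumerated as the Cartesian product of the rotation hypotheses and the deformation hypotheses, as in the following sketch (names are illustrative).

from itertools import product

def build_transform_hypotheses(rotation_params, deformation_params):
    # Nt = N_theta * Np conversion-parameter hypotheses t_i = (theta_j, p_k).
    return list(product(rotation_params, deformation_params))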

The evaluation unit 114 performs initialization (S1104). That is, 1 is set to the loop variable i, 0 is set to the maximum value E MAX of the evaluation value described later, and t 1 is set to the conversion parameter t MAX described later.

The rigid body conversion unit 106 generates an MRI image I MRIonC1 i obtained by rotating the translated MRI image by the conversion parameter t i (that is, θj ) based on the nipple position. Then, the MRI image I MRIonC1 i and the coordinate transformation matrix T MtoC1 i are output to the deformation estimation unit 109 and the deformation image generation unit 110 (S1105). This process is substantially the same as step S804 of the first embodiment, and detailed description thereof is omitted.

The deformation estimation unit 109 generates a deformed mesh DMi by performing a physical deformation simulation based on the finite element method on the mesh M using the conversion parameter t_i (that is, θj and p_k), and outputs the deformation function Fi to the deformed image generation unit 110 (S1106). That is, using the coordinate transformation matrix T_MtoC1^i derived in step S1105, a mesh Mi is generated by applying the rigid transformation corresponding to the rotation parameter θj to the mesh M. A deformed mesh DMi is then generated by performing the physical deformation simulation on the mesh Mi. The deformation function Fi(x, y, z) at this time is defined by the displacement vectors di_L (1 ≤ L ≤ Nm) that give the displacement of each node from the mesh M to the deformed mesh DMi. This processing is the same as step S904 in the first embodiment.

The deformed image generation unit 110 generates a deformed MRI image by applying the conversion corresponding to the conversion parameter t_i (that is, θj and p_k) to the MRI image, and outputs the deformed MRI image to the virtual projection image generation unit 107 (S1107). That is, a rigid transformation using the coordinate transformation matrix T_MtoC1^i is applied to the MRI image, so that the MRI image is coordinate-transformed into the front camera coordinate system C_CAM1 in accordance with the rotation parameter θj. Deformation processing using the deformation function Fi is then applied to the coordinate-transformed MRI image to generate the deformed MRI image I_D_MRIonC1^i.

The virtual projection image generation unit 107 obtains the surface shape (partial surface region) of the subject that enters the field of view when the deformed MRI image I_D_MRIonC1^i is observed from the viewpoint of the front infrared camera 505 in the front camera coordinate system C_CAM1 (S1108). This processing is the same as step S805 of the first embodiment except that the rigid-transformed MRI image is replaced with the deformed MRI image I_D_MRIonC1^i, and detailed description thereof is omitted.

Next, the virtual projection image generation unit 107 generates a body surface vicinity deformed MIP image I_D_MIPonC1^i using the vicinity information of the surface partial region in the deformed MRI image I_D_MRIonC1^i, and outputs the body surface vicinity deformed MIP image to the evaluation unit 114 (S1109). This processing is the same as step S806 of Example 1 except that the rigid-transformed MRI image is replaced with the deformed MRI image I_D_MRIonC1^i, and detailed description thereof is omitted.

The deformed image generation unit 110 generates a deformed MRI image I_D_MRIonP^i by coordinate-transforming the deformed MRI image I_D_MRIonC1^i generated in step S1107 from the front camera coordinate system to the PAT image coordinate system, and outputs the deformed MRI image to the evaluation unit 114 (S1110). That is, the deformed MRI image I_D_MRIonC1^i is coordinate-transformed into the PAT apparatus coordinate system C_DEV using the coordinate transformation matrix T_C1toD. Furthermore, the deformed MRI image I_D_MRIonP^i in the PAT image coordinate system C_PAT is generated by a coordinate transformation using the inverse matrix of the coordinate transformation matrix T_PtoD.
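
  In homogeneous coordinates this chain of coordinate transformations can be composed as in the sketch below (matrix names follow the text; resampling of the volume itself is omitted).

import numpy as np

def camera_to_pat_transform(T_C1toD, T_PtoD):
    # C_CAM1 -> C_DEV by T_C1toD, then C_DEV -> C_PAT by the inverse of T_PtoD.
    return np.linalg.inv(T_PtoD) @ T_C1toD

def transform_points(T, points):
    # Apply a 4x4 homogeneous transform to an (N, 3) array of points.
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ T.T)[:, :3]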

The evaluation unit 114 calculates the similarity S_MIPi (0 ≤ S_MIPi ≤ 1) between the body surface vicinity deformed MIP image I_D_MIPonC1^i and the holding-state infrared camera image, and the similarity S_MRIi (0 ≤ S_MRIi ≤ 1) between the deformed MRI image I_D_MRIonP^i and the PAT image. Furthermore, the breast shape residual Ri between the deformed MRI image I_D_MRIonP^i and the holding-state infrared camera image is calculated. An evaluation value Ei that combines these similarities and the residual is then calculated (S1111).

The similarity S_MIPi is calculated in the same way as in step S807 of the first embodiment, with the body surface vicinity MIP image I_MIPonC1^j and the non-holding infrared camera image replaced by the body surface vicinity deformed MIP image I_D_MIPonC1^i and the holding-state infrared camera image. Likewise, the similarity S_MRIi is calculated by replacing the deformed MRI image I_D_MRI^k in step S906 of the first embodiment with the deformed MRI image I_D_MRIonP^i. The residual Ri is calculated as the difference between the contour (silhouette) shape of the subject shown in the infrared camera image and the outline shape of the deformed mesh DMi projected onto the infrared camera image; the method for obtaining the residual Ri is the same as in step S906.

The evaluation value Ei is expressed, for example, as a weighted sum of the similarities S_MIPi and S_MRIi and the residual Ri by the following equation:
Ei = a·S_MIPi + b·S_MRIi + c·{1 / (1 + Ri)}   … (4)
where a, b, and c are weighting factors (a + b + c = 1).

  The reason the reciprocal of (1 + Ri) is used in the third term of equation (4) is the same as in step S906.

Next, the evaluation unit 114 compares the evaluation value Ei with the maximum evaluation value E_MAX (S1112). If the evaluation value Ei exceeds the maximum evaluation value E_MAX (Ei > E_MAX), E_MAX is updated (E_MAX = Ei) and the conversion parameter t_MAX corresponding to E_MAX is updated (t_MAX = t_i) (S1113). If the evaluation value Ei is equal to or less than the maximum evaluation value E_MAX (Ei ≤ E_MAX), no update is performed.

  Next, the evaluation unit 114 increments the loop variable i (S1114), and compares the loop variable i with the total number Nt of hypotheses (S1115). If loop variable i is less than or equal to the total number of hypotheses Nt (i ≦ Nt), the process returns to step S1105. If loop variable i exceeds the total number of hypotheses Nt (i> Nt), the process proceeds to step S1116. That is, the processes of steps S1105 to S1115 are repeated for the total number of hypotheses Nt.

When the processing for the total number Nt of hypotheses is completed, the evaluation unit 114 outputs the conversion parameter t_MAX (that is, θ_MAX and p_MAX) corresponding to the maximum evaluation value E_MAX to the deformed image generation unit 110 (S1116). In other words, the conversion parameter t_MAX corresponding to the maximum evaluation value E_MAX is selected from the plurality of conversion parameters. The deformed image generation unit 110 outputs the deformed MRI image I_D_MRIonP corresponding to the conversion parameter t_MAX to the image display unit 112 (S1117).

The position/posture and compression deformation estimation (S220) by the rigid transformation unit 106, the virtual projection image generation unit 107, the deformation estimation unit 109, the deformed image generation unit 110, and the evaluation unit 114 is thus completed. In this processing, among the results of the transformations including compression deformation assumed under the conversion parameters t_i representing various rotations and deformations, the conversion parameter t_MAX that maximizes the evaluation value Ei of the appropriateness of deformation is selected, and the deformed MRI image I_D_MRIonP is generated from it.

  In this way, the MIP image emphasizing the superficial blood vessels, generated by compressing and deforming the MRI image, is compared with the holding-state infrared camera image and the blood vessel information of the PAT image, and the position and orientation of the subject in the MRI image and the compression deformation are estimated. As a result, high-precision deformation alignment can be realized even when an ideal non-holding-state infrared camera image is not captured at the time of PAT imaging, or when the position/posture of the breast changes during the transition from the non-holding state to the holding state.

[Modification]
In the above, the example in which the blood vessel information of the PAT image and that of the infrared camera image are incorporated into the same evaluation value Ei has been described. However, it is not necessary to use these pieces of information at the same time. For example, the blood vessel information of the PAT image may be used first, and the blood vessel information of the infrared camera image may then be used to perform the deformation alignment.

For example, the conversion parameter t_i is varied, the similarity between the deformed MRI image and the PAT image is evaluated, and the conversion parameter t_MAX that maximizes the evaluation value is obtained. Next, the conversion parameter t_i is varied only in the vicinity of the conversion parameter t_MAX, the similarity between the body surface vicinity deformed MIP image I_D_MIPonC1^i and the infrared camera image is evaluated, and the conversion parameter t_MAX2 with the maximum evaluation value is found. The deformed MRI image corresponding to the conversion parameter t_MAX2 is then used as the compression deformation alignment result. In this way, after rough deformation alignment of the MRI image is performed using the PAT image, in which coarse blood vessel information is rendered, precise deformation alignment can be performed using the infrared camera image, in which more detailed blood vessel information is rendered.
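
  A sketch of this two-stage search is given below; the neighborhood generator and the two scoring functions are assumptions passed in as callables.

def two_stage_alignment(hypotheses, score_pat, score_camera, neighborhood):
    # Stage 1: coarse alignment, pick t_MAX by the deformed-MRI / PAT-image similarity.
    t_max = max(hypotheses, key=score_pat)
    # Stage 2: vary t_i only near t_MAX and pick t_MAX2 by the MIP / infrared-camera similarity.
    t_max2 = max(neighborhood(t_max), key=score_camera)
    return t_max2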

  As in Example 1, a configuration may also be adopted in which, after the rigid transformation parameter in the non-holding state is estimated, the setting range of the hypotheses for θj is limited to the vicinity of that rigid transformation parameter, and the rigid transformation parameter and the deformation parameter in the holding state are then estimated. This reduces the total number of hypotheses and increases the processing speed. Of course, as in the first modification of the first embodiment, the conversion parameter may be obtained using a general optimization algorithm instead of the brute-force evaluation of hypotheses; in this case, the rigid transformation parameter estimated in the non-holding state can be used as the initial value of θj.

  Hereinafter, image processing according to the third embodiment of the present invention will be described. Note that the same reference numerals in the third embodiment denote the same parts as in the first and second embodiments, and a detailed description thereof will be omitted.

  In Examples 1 and 2, examples were described in which the MRI image is aligned with the PAT image based on the images of the breast captured by the infrared cameras mounted on the PAT 12 and the MIP image emphasizing the superficial blood vessels of the breast generated from the MRI image. However, when a three-dimensional image of the subject captured in advance, such as an MRI image, and information on the superficial blood vessels of the subject captured by an infrared camera are used, the target image to be aligned with the three-dimensional image is not limited to a PAT image.

  For example, there is a case where, for the purpose of diagnosis support, an image of a cross section corresponding to the imaging cross section of an ultrasonic image (hereinafter referred to as a corresponding cross section) is generated (cut out) from a CT image or an MRI image, which is a three-dimensional image, and presented.

  In Example 3, an example will be described in which alignment from the subject at the time of MRI imaging to the subject at the time of ultrasonic imaging is performed based on the ultrasonic image, an image of the subject captured by an infrared camera whose positional relationship with the ultrasonic image is associated in advance, and the MIP image emphasizing the superficial blood vessels of the subject generated from the MRI image. In the following, with the breast as the subject, the subject at the time of MRI imaging may be called the "subject in the first state" and the subject at the time of ultrasonic imaging the "subject in the second state".

  In Example 3, the MRI image captured in advance is an image of the breast in the prone position as the subject in the first state, whereas the ultrasonic image is an image of the breast in the supine position as the subject in the second state. Therefore, in order to align the MRI image with the ultrasonic image, not only the rigid transformation between the two images but also the gravity-induced deformation must be estimated. The process flow of the third embodiment is described below.

  An ultrasonic image is captured by bringing the ultrasonic probe mounted on the ultrasonic imaging apparatus into contact with the subject in the supine position. The ultrasonic probe is equipped with a sensor (magnetic, optical, or the like) that measures its position and posture, and the position and posture of the ultrasonic probe are measured during ultrasonic imaging. That is, the imaging region of the ultrasonic image is measured in the coordinate system that serves as the reference of the sensor (hereinafter referred to as the sensor coordinate system).

  In addition, the position and orientation of an infrared camera installed to image the subject are calibrated in the sensor coordinate system. Accordingly, the positions and orientations of the ultrasonic image and the infrared camera image can be associated with each other via the sensor coordinate system. An infrared camera image in which the superficial blood vessels of the breast in the supine position are captured by the infrared camera is then acquired.
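
  For example, with the probe pose T_UStoS (ultrasonic image to sensor coordinates) measured by the sensor and the calibrated camera pose T_CtoS (camera to sensor coordinates), the association could be computed as below; the matrix names are assumptions introduced for this sketch.

import numpy as np

def ultrasound_to_camera(T_UStoS, T_CtoS, points_us):
    # Map points from the ultrasonic image coordinate system to the infrared camera
    # coordinate system via the sensor coordinate system (all transforms are 4x4).
    T = np.linalg.inv(T_CtoS) @ T_UStoS
    homog = np.hstack([points_us, np.ones((points_us.shape[0], 1))])
    return (homog @ T.T)[:, :3]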

  Next, as in Examples 1 and 2, the three-dimensional surface shape of the breast is extracted from the MRI image of the subject. Candidate values are then assumed for the alignment parameters representing the various positions and orientations of the subject in the MRI image with respect to the infrared camera and the gravity deformation, and a deformed MRI image subjected to gravity deformation alignment is generated for each candidate value of the alignment parameter.

  Next, an MIP image in which superficial blood vessels on the infrared camera side are emphasized is generated for each deformed MRI image using information in the vicinity of the body surface in the image. Subsequently, the position / posture and gravity deformation of the subject on the MRI image with respect to the infrared camera, which maximize the similarity between each MIP image and the infrared camera image, are estimated.

  Next, based on the position/orientation of the infrared camera image with respect to the previously associated ultrasonic image, a deformed MRI image is generated by deforming and aligning the MRI image to the coordinate system of the ultrasonic image, so that deformation alignment between the ultrasonic image and the MRI image is performed.

  In this way, by comparing the MIP image, obtained by gravity-deforming the MRI image and emphasizing the superficial blood vessels on the infrared camera side, with the infrared camera image in the supine position, the position/posture and gravity deformation of the subject in the MRI image relative to the infrared camera can be estimated. Therefore, highly accurate deformation alignment between the infrared camera and the MRI image can be realized, and highly accurate deformation alignment between the ultrasonic image and the MRI image can be performed using the positional relationship between the ultrasonic image and the infrared camera.

[Modification 1]
In Example 3, in order to correct the difference in the deformation state of the breast between the MRI image captured in the prone position and the ultrasonic image captured in the supine position, the gravity deformation was estimated in addition to the position/posture of the subject in the MRI image with respect to the infrared camera.

  However, if, for example, the three-dimensional image such as a CT image or an MRI image to be aligned with the ultrasonic image is captured in the same supine position as the ultrasonic image, there is no need to correct for gravity deformation between the two images. The process in this case will be described only with respect to the differences from the process of the third embodiment (alignment of the subject in the first state in the MRI image with respect to the subject in the second state seen by the infrared camera).

  First, candidate values are assumed for alignment parameters that vary only the position and orientation of the subject in the MRI image with respect to the infrared camera, and for each assumed position and orientation an MIP image emphasizing the superficial blood vessels is generated using the information in the vicinity of the body surface in the MRI image.

  Next, the similarity between each generated MIP image and the infrared camera image is evaluated, and the position/posture of the subject in the MRI image with respect to the infrared camera that gives the maximum evaluation value is estimated. Thereby, when the subject can be regarded as a rigid body, rigid alignment can be performed with high accuracy between the ultrasonic image and the three-dimensional image that is the target of alignment.

[Modification 2]
In Example 3, an example was described in which alignment between a three-dimensional image of the subject, such as an MRI image, and a two-dimensional ultrasonic image acquired by an ultrasonic probe measured by a position/posture sensor is performed using an infrared camera image.

  However, the target to be aligned with the three-dimensional image of the subject may also be, for example, a two-dimensional PAT image acquired by an ultrasonic probe measured by a position/posture sensor, in the same way as the ultrasonic image. A two-dimensional PAT image is acquired with a configuration similar to that of an ultrasonic image, except that a near-infrared light source is provided on the ultrasonic probe so that the human body is irradiated with near-infrared light instead of ultrasound. Therefore, as in the third embodiment, the position/posture and gravity deformation of the subject in the MRI image with respect to the infrared camera are estimated, and highly accurate deformation alignment between the two-dimensional PAT image and the MRI image can then be performed using the positional relationship between the two-dimensional PAT image and the infrared camera, which is known from the position/orientation sensor.

  Hereinafter, image processing according to the fourth embodiment of the present invention will be described. Note that the same reference numerals in the fourth embodiment denote the same parts as in the first to third embodiments, and a detailed description thereof will be omitted.

  In Examples 1 to 3, as the first process, the position/posture and deformation state, with respect to an infrared camera, of a three-dimensional image of the subject captured in advance, such as an MRI image, were estimated. Then, as the second process, the positional relationship between the target modality and the infrared camera was used to align the three-dimensional image with the image of that modality.

  However, only the first process may be performed. That is, only the process of estimating the position/posture and deformation of the three-dimensional image of the subject in the first state, captured in advance, with respect to the subject in the second state imaged by the infrared camera, using the superficial blood vessel information in both the infrared camera image and the three-dimensional image, may be carried out.

  By estimating the position/posture and deformation, a three-dimensional image of the subject captured in advance, such as an MRI image or a CT image, can be aligned with the infrared camera that captures the subject, for example during surgery on the subject. This makes it possible to refer to a lesion site or the like shown in the three-dimensional image in association with the subject during surgery.

  In this way, using the information on the internal structure near the surface of the subject acquired from the two-dimensional image captured by the infrared camera and from a three-dimensional image such as an MRI image or a CT image, a mechanism can be provided for automatically aligning the three-dimensional image with the subject with high accuracy. Therefore, highly accurate alignment between the PAT image and a three-dimensional image such as an MRI image or a CT image becomes possible.

[Other Examples]
The present invention can also be realized by executing the following processing: software (a program) that realizes the functions of the above-described embodiments is supplied to a system or apparatus via a network or various recording media, and a computer (CPU, MPU, or the like) of the system or apparatus reads and executes the program.

DESCRIPTION OF SYMBOLS 10: Image processing apparatus, 11: Medical image database, 12: Photoacoustic tomography apparatus, 101: Medical image acquisition part, 102: Three-dimensional shape acquisition part, 103: PAT image acquisition part, 104: Camera image acquisition part, 105 : Two-dimensional shape acquisition unit, 106: rigid body conversion unit, 107: virtual projection image generation unit, 108: virtual projection image evaluation unit, 109: deformation estimation unit, 110: deformation image generation unit, 111: deformation image evaluation unit, 112 : Image display

Claims (21)

  1. Image acquisition means for acquiring a three-dimensional image generated by imaging an object by a modality and a two-dimensional image generated by imaging the object by an optical camera ;
    Generating means for generating projection information by projecting the three-dimensional image so as to emphasize a region near the surface of the object on the basis of the viewpoint of the optical camera ;
    An image processing apparatus comprising: an alignment unit that performs an alignment process between the three-dimensional image and the two-dimensional image using the projection information and the two-dimensional image.
  2.   The image processing apparatus according to claim 1, wherein the generation unit generates the projection information by projecting the three-dimensional image in a region having a predetermined distance from the surface of the object in a direction away from the viewpoint of the optical camera.
  3.   The image processing apparatus according to claim 1, wherein the generation unit generates the projection information by projecting the three-dimensional image in a region obtained by removing a skin region from a region having a predetermined distance from the surface of the object in a direction away from the viewpoint of the optical camera.
  4.   The image processing apparatus according to claim 2 or 3, wherein the generation unit determines the region having the predetermined distance from the surface of the object included in the field of view of the optical camera in a direction away from the viewpoint of the optical camera.
  5.   The image processing apparatus according to any one of claims 1 to 4, wherein the generation unit generates the projection information by projecting the three-dimensional image with a weight that decreases as the distance from the surface of the object increases in a direction away from the viewpoint of the optical camera.
  6.   The image processing apparatus according to any one of claims 1 to 5, wherein the generation unit generates a plurality of pieces of projection information based on a plurality of candidate values of an alignment parameter between the two-dimensional image and the three-dimensional image, and
    the alignment means selects one of the plurality of candidate values as the alignment parameter using the plurality of pieces of projection information and the two-dimensional image.
  7.   The image processing apparatus according to any one of claims 1 to 6, wherein the generation means obtains a partial region of the surface position of the object observed by the optical camera, and generates the projection information using information of the partial region.
  8.   The image processing apparatus according to any one of claims 1 to 7, wherein the generation means generates, as the projection information, a projection image obtained by projecting the three-dimensional image with reference to the viewpoint of the optical camera.
  9.   The image processing apparatus according to any one of claims 1 to 8, further comprising first evaluation means for evaluating the degree of similarity between the projection information and the two-dimensional image,
    wherein the alignment means performs the alignment processing between the two-dimensional image and the three-dimensional image based on the similarity.
  10.   The image processing apparatus according to any one of claims 1 to 9, wherein the generation unit generates a deformed three-dimensional image by deforming the three-dimensional image, and generates the projection information by projecting the deformed three-dimensional image with reference to the viewpoint of the optical camera.
  11.   The image processing apparatus according to claim 9, wherein the image acquisition means acquires the three-dimensional image generated by imaging the object in a first state by the modality, the two-dimensional image generated by imaging the object in a second state by the optical camera, and a three-dimensional image generated by imaging the object in the second state by a further modality different from the modality,
    the image processing apparatus further comprising a second evaluation unit that evaluates the degree of similarity between the three-dimensional image in the first state and the three-dimensional image in the second state,
    wherein the alignment unit performs the alignment processing based on the evaluation results of the first and second evaluation means.
  12.   The image processing apparatus according to any one of claims 1 to 11, wherein the image acquisition means acquires the three-dimensional image generated by imaging the object in a first state by the modality, the two-dimensional image generated by imaging the object in a second state by the optical camera, and a three-dimensional image generated by imaging the object in the second state by a further modality different from the modality.
  13. The image processing apparatus according to claim 11 or 12 , wherein a coordinate system in the optical camera corresponds to a coordinate system in the further modality .
  14.   The image processing apparatus according to any one of claims 11 to 13, further comprising means for displaying, on a display means, the three-dimensional image on which the alignment processing has been performed and the three-dimensional image of the second state.
  15.   The image processing apparatus according to claim 11, wherein the modality includes an MRI apparatus, and the further modality includes a PAT apparatus.
  16.   The image processing apparatus according to any one of claims 1 to 14, wherein the modality includes an MRI apparatus or a PAT apparatus, and the optical camera includes an infrared camera.
  17.   The image processing apparatus according to claim 1, wherein the alignment means:
      calculates information representing the surface position of the object based on the two-dimensional image;
      calculates information representing the surface position of the object based on the projection information; and
      performs the alignment processing between the three-dimensional image and the two-dimensional image using the information representing the surface position of the object calculated based on the two-dimensional image and the information representing the surface position of the object calculated based on the projection information.
  18.   The image processing apparatus according to claim 17, wherein the generation means:
      obtains information representing the surface position of the object based on the three-dimensional image;
      calculates, based on the information representing the surface position of the object, the intersection of a projection line based on the viewpoint of the optical camera with the surface position of the object; and
      generates information representing the surface position of the object including the intersection as information representing the surface position of the object included in the field of view of the optical camera.
  19. Obtaining a three-dimensional image generated by imaging an object with a modality and a two-dimensional image generated by imaging the object with an optical camera ;
    The projection information is generated by projecting the three-dimensional image so as to emphasize a region near the surface of the object with respect to the viewpoint of the optical camera .
    An image processing method for performing alignment processing between the three-dimensional image and the two-dimensional image using the projection information and the two-dimensional image.
  20.   A program for causing a computer to function as each means of the image processing apparatus according to any one of claims 1 to 18.
  21.   A computer-readable recording medium on which the program according to claim 20 is recorded.

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017142840A JP6461257B2 (en) 2017-07-24 2017-07-24 Image processing apparatus and method


Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2013214140 Division 2013-10-11

Publications (2)

Publication Number Publication Date
JP2017221684A JP2017221684A (en) 2017-12-21
JP6461257B2 true JP6461257B2 (en) 2019-01-30







Legal Events

Date / Code / Title
2018-05-16  A977  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
2018-06-29  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2018-08-28  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
            TRDD  Decision of grant or rejection written
2018-11-26  A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
2018-12-25  A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
            R151  Written notification of patent or utility model registration (Ref document number: 6461257; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R151)