WO2017130263A1 - Image processing apparatus, image processing method, image processing system, and program


Info

Publication number
WO2017130263A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
transformation
information
corresponding point
deformation
Prior art date
Application number
PCT/JP2016/005253
Other languages
English (en)
Inventor
Ryo Ishikawa
Takaaki Endo
Kazuhiro Miyasa
Kenji Morita
Original Assignee
Canon Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2016224383A (granted as JP6821403B2)
Application filed by Canon Kabushiki Kaisha
Priority to US16/073,156 (granted as US10762648B2)
Publication of WO2017130263A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing

Definitions

  • the disclosure of the present specification relates to an image processing apparatus, an image processing method, an image processing system, and a program.
  • a technology for registering and displaying a plurality of images and analyzing the images is used, for example, for image diagnosis in a medical field.
  • information of corresponding points that are points corresponding between the images on which the registration is performed may be obtained and used for the registration in some cases.
  • a technology for supporting the obtainment of the information of the corresponding points has been proposed.
  • PTL 1 and PTL 2 describe that, with regard to images obtained by imaging different shapes of the same object (subject), a coordinate transformation based on a particular structure of the object is applied to each of the images so that the corresponding points can easily be identified by visual observation, and that the information of the corresponding points between the coordinate-transformed images is obtained.
  • the shape of the object drawn in the deformed image may differ significantly from the shape of the object drawn in the original image. Therefore, while applying the coordinate transformation makes it easier to identify the corresponding points, it may become difficult to perform the registration between the coordinate-transformed (deformed) images.
  • an image processing apparatus includes: a coordinate transformation obtaining unit configured to obtain information related to coordinate transformations for respectively transforming a first image and a second image obtained by imaging an object into a first transformed image and a second transformed image; an information obtaining unit configured to obtain correspondence information, that is, information of a corresponding point included in each of the first transformed image and the second transformed image; a transformation unit configured to transform a position of the corresponding point included in the first transformed image to a position in the first image, and a position of the corresponding point included in the second transformed image to a position in the second image, on the basis of the correspondence information obtained by the information obtaining unit and the coordinate transformations obtained by the coordinate transformation obtaining unit; and a deformation transformation obtaining unit configured to obtain information related to a deformation transformation for performing registration of the first image and the second image on the basis of the information of the corresponding points respectively transformed to the positions in the first image and the second image by the transformation unit.
  • since the information of the corresponding points, obtained between the images that have been coordinate-transformed so as to make the corresponding points easy to identify, is transformed back into information of corresponding points in the original images, it is possible to obtain the deformation transformation for the registration in accordance with the features of the original images.
  • since the registration can be performed on the basis of the original images even in a case where it is difficult to perform the registration between the images deformed to make the corresponding points easy to identify, it is possible to provide a system that performs highly accurate registration.
  • Fig. 1 illustrates an example of a function configuration of an image processing apparatus according to first to third exemplary embodiments.
  • Fig. 2 illustrates an example of a hardware configuration of the image processing apparatus according to the first to third exemplary embodiments.
  • Fig. 3 is a flow chart illustrating an example of processing performed by the image processing apparatus according to the first to third exemplary embodiments.
  • Fig. 4A illustrates an example of the processing performed by the image processing apparatus according to the first to third exemplary embodiments.
  • Fig. 4B illustrates an example of the processing performed by the image processing apparatus according to the first to third exemplary embodiments.
  • Fig. 5 is a flow chart illustrating an example of processing performed by the image processing apparatus according to the first to third exemplary embodiments.
  • Fig. 6 illustrates an example of processing performed by the image processing apparatus according to an exemplary embodiment of the present invention.
  • Fig. 7 illustrates an example of a function configuration of the image processing apparatus according to a fourth exemplary embodiment.
  • Fig. 8 is a flow chart illustrating an example of processing performed by the image processing apparatus according to the fourth exemplary embodiment.
  • Fig. 9 is a flow chart illustrating an example of processing performed by the image processing apparatus according to the fourth exemplary embodiment.
  • a plurality of images including images obtained by a plurality of image acquisition apparatuses (modalities), on different dates and times, at different body postures, or in different image acquisition modes may be used for image diagnosis in a medical field, for example.
  • registration between the plurality of images may be performed in some cases.
  • an operation is performed to associate structures and the like drawn in the respective images with one another as corresponding points by visual observation.
  • using as inputs a first image and a second image obtained by imaging an object in mutually different deformed states, an image processing apparatus 100 according to a first exemplary embodiment obtains a deformation transformation for performing registration between the first image and the second image. Subsequently, the image processing apparatus 100 generates a deformed image by deforming one of the first image and the second image to be matched with the other image. To obtain the corresponding points used for the registration, that is, the corresponding positions between the input images, the first image and the second image are projected onto a space in which the images can easily be compared and observed, for example, a normalized space.
  • the image processing apparatus 100 obtains the corresponding positions by an operation input by a user or the like on the images projected onto the normalized space. Furthermore, the image processing apparatus 100 projects the positions corresponding to the obtained positions onto a space of the original image, that is, a real space, which is a space defined by the coordinate system of the modality, and performs registration on the basis of regularization information for taking into account an appropriateness of physical deformation of the object (applying regularization to the deformation) in this real space.
  • the real space refers to the space defined by the image coordinate systems of the first image and the second image, that is, the coordinate systems of the images obtained through the image acquisition by the modalities.
  • the normalized space will be described below in detail.
  • Fig. 1 is a block diagram illustrating an example of a function configuration of the image processing apparatus 100.
  • the image processing apparatus 100 is connected to a data server 170, an operation unit 180, and a display unit 190.
  • the data server 170 is connected to the image processing apparatus 100 via a network and holds the first image and the second image which will be described below.
  • the first image and the second image held by the data server 170 are medical images obtained by imaging the object in advance in respectively different conditions.
  • the conditions include types of the modalities used for the image acquisition, image acquisition modes, dates and times, body postures of the object, and the like.
  • the medical images include, for example, three-dimensional tomographic images acquired by modalities such as a magnetic resonance imaging (MRI) apparatus, an X-ray computed tomography (CT) apparatus, an ultrasonic imaging apparatus, a photoacoustic tomography (PAT) apparatus, a positron emission tomography (PET)/single photon emission computed tomography (SPECT) apparatus, and an optical coherence tomography (OCT) apparatus.
  • the first image and the second image are two medical images obtained by imaging the same object in respectively different deformation states.
  • the first image and the second image may also be, for example, images acquired by different modalities or in different image acquisition modes around the same time or images obtained by imaging the same patient by the same modality in the same body posture at different dates and times for follow-up observations.
  • the first image and the second image are input to the image processing apparatus 100 via a data obtaining unit 102.
  • the operation unit 180 is, for example, a mouse or a keyboard.
  • the operation unit 180 accepts an operation by a user. Positions of corresponding points between the first image and the second image which are specified by the user through an operation input via the operation unit 180 are input to the image processing apparatus 100 via a corresponding point obtaining unit 106.
  • the display unit 190 performs display on the basis of an output from the image processing apparatus 100. For example, a display image generated by the image processing apparatus 100 is displayed, and a graphical user interface (GUI) for obtaining an instruction from the user is provided.
  • the image processing apparatus 100 includes the data obtaining unit 102, a normalization transformation obtaining unit 103, a normalized image generation unit 104, a display control unit 105, the corresponding point obtaining unit 106, a corresponding point transformation unit 107, a deformation transformation obtaining unit (deformation estimation unit) 108, and the deformed image generation unit 109.
  • the data obtaining unit 102 obtains the first image and the second image which are the input images from the data server 170.
  • the normalization transformation obtaining unit 103 obtains coordinate transformations for deforming each of the first image and the second image to an image in which the corresponding points are easy to identify.
  • the normalization transformation obtaining unit 103 obtains normalization transformations with regard to the first image and the second image.
  • the normalization transformation obtaining unit 103 is an example of a coordinate transformation obtaining unit.
  • the normalized image generation unit 104 generates a first normalized image and a second normalized image corresponding to normalized images obtained by performing the normalization transformation of each of the first image and the second image.
  • the normalized image generation unit 104 is an example of a transformed image generation unit.
  • the display control unit 105 displays the first image, the second image, the first normalized image, the second normalized image, and a deformed image which will be described below on the display unit 190 in accordance with the operation input by the user or control based on a predetermined program.
  • the display control unit 105 also functions as an output unit configured to output data of the image that can be displayed to an external apparatus such as the display unit 190, for example.
  • the corresponding point obtaining unit 106 obtains information related to the positions of the corresponding points on the first normalized image and the second normalized image in accordance with the operation input on the operation unit 180 by the user.
  • the corresponding point obtaining unit 106 is an example of an information obtaining unit.
  • the corresponding point transformation unit 107 transforms corresponding point positions from the normalized space to the real space corresponding to the space of the respective original images on the basis of the normalization transformations.
  • the corresponding point transformation unit 107 is an example of a transformation unit.
  • the deformation transformation obtaining unit (deformation estimation unit) 108 performs deformation estimation between the first image and the second image on the basis of the corresponding point positions in the transformed real space and obtains the deformation transformation between the images.
  • the deformation transformation obtaining unit 108 is an example of a deformation transformation obtaining unit.
  • the deformed image generation unit 109 deforms at least one of the first image and the second image on the basis of the deformation transformation between the images to generate a registered first deformed image or a registered second deformed image.
  • the position of a single point, or of each of a plurality of points, specified on one image as a corresponding point, that is, a point indicating the position corresponding to a position on the other image, will be referred to as a corresponding point position.
  • Fig. 2 illustrates an example of a hardware configuration of the image processing apparatus 100.
  • the image processing apparatus 100 is a computer, for example.
  • the image processing apparatus 100 includes a central processing unit (CPU) 201 functioning as a main control unit, a random access memory (RAM) 202 functioning as a memory unit, a read only memory (ROM) 203, a solid state drive (SSD) 204, a communication circuit 205 functioning as a communication unit, a universal serial bus (USB) 206 functioning as a connection unit, a High Definition Multimedia Interface (HDMI) (registered trademark) 208, and a graphics processing unit (GPU) 207 functioning as a graphics control unit, and these components are connected via an internal bus so as to be communicable with one another.
  • the CPU 201 is a control circuit configured to control the image processing apparatus 100 and the respective units connected to the image processing apparatus 100 in an integrated manner.
  • the RAM 202 is a memory that stores a program for executing processing in the image processing apparatus 100 and the respective units connected to the image processing apparatus 100 and various parameters used for image processing.
  • the RAM 202 stores a control program executed by the CPU 201 and temporarily stores various pieces of data used when the CPU 201 executes various controls.
  • the ROM 203 stores programs and data that define the control procedures executed by the CPU 201.
  • the SSD 204 stores a program or data.
  • the communication circuit 205 performs a communication with the data server 170 via a network.
  • the USB 206 is connected to the operation unit 180.
  • the GPU 207 is an image processing unit and executes image processing in accordance with the control from the CPU 201.
  • An image obtained as a result of the image processing is output to the display unit 190 via the HDMI (registered trademark) 208 to be displayed.
  • the operation unit 180 and the display unit 190 may be integrated with each other as a touch panel monitor.
  • the functions of the respective configurations illustrated in Fig. 1 are realized while the CPU 201 executes the programs stored in the RAM 202, the ROM 203, and the SSD 204.
  • the functions of the respective configurations illustrated in Fig. 1 are realized by the GPU 207 and a circuit such as a field-programmable gate array (FPGA) (not illustrated) in addition to the CPU 201 described above.
  • at least one of the following kinds of processing is controlled by the CPU 201: input to the GPU 207 and the FPGA (not illustrated), setting of parameters related to the image processing, and reception of output from the GPU 207 and the FPGA (not illustrated).
  • the function of the display control unit 105 is realized by the CPU 201 and the GPU 207.
  • the CPU 201 instructs the GPU 207 to display predetermined information on the display unit 190 and generates display data used for the GPU 207 to perform the display on the display unit 190 to output the display data to the display unit 190.
  • the communication circuit 205 is realized by a single network interface card (NIC) or a plurality of NICs when appropriate.
  • a plurality of CPUs 201 may be included in the apparatus.
  • Fig. 3 is a flow chart illustrating an example of the entirety of the processing performed by the image processing apparatus 100.
  • Figs. 4A and 4B are explanatory diagrams for describing an example of processing in step S301.
  • Fig. 5 is a flow chart illustrating an example of the processing in step S301.
  • Fig. 6 illustrates an example of the entirety of the processing performed by the image processing apparatus 100.
  • the processing performed by the image processing apparatus 100 will be described by appropriately referring to Fig. 3 to Fig. 6.
<Step S300: Obtainment of the image data>
  • in step S300 illustrated in Fig. 3, the data obtaining unit 102 obtains the first image and the second image from the data server 170.
  • the first image is denoted by I1
  • the second image is denoted by I2.
  • the first image I1 and the second image I2 are obtained by imaging the breast of the same object in different deformed states.
  • the shape of the breast drawn varies between the first image I1 and the second image I2, since the object takes different body postures at the time of the MRI imaging and the time of the PAT imaging, or since the MRI apparatus and the PAT apparatus come into contact with the breast.
  • the first image I1 and the second image I2 are respectively three-dimensional images, and the image coordinate system is defined in each of the images.
  • a space defined by the image coordinate systems of the first image I1 and the second image I2 will be referred to as a real space.
  • the processing in step S300 is processing of obtaining a first image I1 (600) and a second image I2 (601) illustrated in Fig. 6.
<Step S301: Obtainment of the normalization transformations>
  • in step S301 illustrated in Fig. 3, the normalization transformation obtaining unit 103 obtains a first normalization transformation Tn1 and a second normalization transformation Tn2 for respectively transforming the first image I1 and the second image I2 to the normalized images. That is, processing of obtaining transformations for projecting the first image I1 and the second image I2, whose coordinate systems are defined on the real space, onto the normalized space is executed.
  • the processing in step S301 is processing of obtaining a coordinate transformation Tn1 (602) and a coordinate transformation Tn2 (603) illustrated in Fig. 6.
  • the normalization according to the first exemplary embodiment refers to coordinate transformations for respectively transforming the first image and the second image to a first transformed image and a second transformed image on the basis of a contour of a target site of the object and a reference point on the contour. More specifically, the normalization according to the first exemplary embodiment refers to coordinate transformations for projecting positions and shapes of anatomical features in the first image and the second image corresponding to the images of the object obtained in a plurality of different deformed states onto a common space (normalized space) in which those anatomical features have substantially the same positions and shapes.
  • the normalization transformation according to the first exemplary embodiment is a coordinate transformation into the normalized space in which the anatomical features of the respective images come to have substantially the same positions and shapes, obtained on the basis of anatomical features, such as the nipple and the body surface, commonly drawn in the first image I1 and the second image I2 acquired in the different deformed states.
  • the first transformed image obtained by performing the normalization transformation of the first image will be referred to as the first normalized image.
  • the second transformed image obtained by performing the normalization transformation of the second image will be referred to as the second normalized image.
  • an example of a method of obtaining the normalization transformation will be described.
  • Fig. 4A is a schematic diagram of the first image I1.
  • the first image according to the first exemplary embodiment is a three-dimensional MRI image
  • a cross-sectional image 400 illustrated in Fig. 4A is a schematic diagram illustrating a certain cross-sectional image of the first image I1.
  • the normalization transformation obtaining unit 103 obtains a position coordinate P1 of a nipple 401 of the object and shape information S1 of a body surface 402 from the cross-sectional image 400 of the first image I1. These pieces of information are obtained as position information in the coordinate system of the first image I1, that is, position information in the real space.
  • Fig. 4B is a schematic diagram illustrating a certain cross-sectional image of a first normalized image I1' generated in step S302 which will be described below.
  • the normalization transformation obtaining unit 103 obtains the coordinate transformation for deforming the first image I1 represented by the cross-sectional image 400 of Fig. 4A to the first normalized image I1' represented by a cross-sectional image 410 of Fig. 4B.
  • a body surface 412 is represented by a shape S1' obtained by transforming the body surface 402 of Fig. 4A by the normalization transformation.
  • the body surface 412 is a shape located on a predetermined plane in the first normalized image I1'.
  • the geodesic distance along the body surface from each position on the body surface 412 to the nipple 411 is set to remain substantially the same before and after the normalization transformation. That is, the body surface 402 of Fig. 4A is transformed to the body surface 412 of Fig. 4B without expanding or contracting about the nipple 401, or only within a small expansion and contraction range.
  • the processing executed by the normalization transformation obtaining unit 103 in step S301 to obtain the above-described normalization transformation will now be described in detail with reference to the flow chart of Fig. 5.
<Step S3010: Obtainment of the body surface shape>
  • in step S3010 illustrated in Fig. 5, the normalization transformation obtaining unit 103 obtains the body surface shape of the object in the first image I1.
  • image processing such as edge detection can be performed on the first image I1 to obtain the body surface shape automatically (a sketch follows at the end of this step).
  • the first image I1 may be displayed on the display unit 190 via the display control unit 105 so that the user can observe the first image I1, and the body surface shape may be obtained on the basis of the operation input by the user or the like.
  • the body surface shape obtained at this time is denoted by S1.
  • S1 is constituted by a point group representing the body surface shape, and the point group is obtained as position information in the coordinate system of the first image I1, that is, the position information in the real space.
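As a concrete illustration of the automatic obtainment mentioned above, the following is a minimal Python sketch of extracting a body-surface point group from a thresholded volume. The array name, threshold value, and axis order are assumptions for illustration and are not taken from this disclosure.

```python
# Hypothetical sketch: extract the body-surface point group S1 from a
# breast MRI volume `img` (a 3-D NumPy array) in which tissue is brighter
# than the surrounding air. The threshold is an assumed placeholder.
import numpy as np
from scipy import ndimage

def extract_body_surface(img, threshold=100.0):
    """Return an (N, 3) point group lying on the air/tissue boundary."""
    mask = img > threshold                        # crude tissue segmentation
    mask = ndimage.binary_fill_holes(mask)        # close interior holes
    shell = mask & ~ndimage.binary_erosion(mask)  # one-voxel boundary shell
    return np.argwhere(shell).astype(float)       # voxel coordinates of S1
```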
<Step S3011: Obtainment of the nipple position>
  • in step S3011, the normalization transformation obtaining unit 103 obtains the nipple position of the object in the first image I1.
  • the nipple position can be obtained by performing the image processing such as the edge detection on the first image.
  • alternatively, the nipple position can be obtained as an inflection point of the curvature of the shape, a peak position, or the like, on the basis of the body surface shape S1 obtained in step S3010.
  • the nipple position may be obtained on the basis of an input operation by the user or the like.
  • the obtained nipple position is denoted by P1.
  • the nipple position P1 is obtained as the position information in the coordinate system of the first image, that is, the position information in the real space.
<Step S3012: Obtainment of the nipple position and the body surface shape in the normalized space>
  • in step S3012, the normalization transformation obtaining unit 103 obtains the nipple position and the body surface shape in the normalized space, that is, in the coordinate system of the first normalized image I1'.
  • the nipple position in the normalized space is denoted by P1'.
  • the body surface shape in the normalized space is denoted by S1'.
  • S1' is constituted by a point group in the normalized space.
  • the x coordinate values and the z coordinate values of the respective points constituting S1' are set such that the change in the geodesic distance along the body surface with respect to the nipple before and after the normalization becomes small.
  • the position of S1' in the normalized space is set such that the mutual distances of the point group constituting S1 are maintained.
  • the position of S1' can be determined, for example, by a related-art technique such as a low-strain dimensionality reduction method, e.g., the multi-dimensional scaling method.
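One concrete way to realize such a low-strain placement is classical multi-dimensional scaling applied to the pairwise geodesic distances of the surface point group. The sketch below assumes a precomputed distance matrix and is illustrative rather than the method fixed by this disclosure.

```python
# Sketch: place the body-surface point group S1' in the normalized space by
# multi-dimensional scaling (MDS) so that on-surface (geodesic) distances,
# including those to the nipple, change as little as possible.
# `geodesic_dists` is an assumed precomputed (N, N) distance matrix.
import numpy as np
from sklearn.manifold import MDS

def normalize_surface(geodesic_dists, nipple_index):
    mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
    xz = mds.fit_transform(geodesic_dists)  # flatten the surface to a plane
    xz -= xz[nipple_index]                  # place the nipple P1' at the origin
    # S1' lies on the plane y = 0 of the normalized space
    return np.column_stack([xz[:, 0], np.zeros(len(xz)), xz[:, 1]])
```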
  • the normalization transformation obtaining unit 103 obtains the first normalization transformation Tn1 for normalizing the first image I1 on the basis of the positional relationships between P1 and P1' and between S1 and S1'.
  • the normalization transformation can be obtained by using a deformation interpolation method or a related-art deformation obtaining technique such as the free form deformation (FFD) method or the thin plate spline (TPS) method.
  • the normalization transformation is preferably obtained on the basis of a condition for suppressing distortion of the space, abrupt change, extreme expansion or contraction, or the like caused by the transformation.
  • the normalization transformation may also be obtained by adding a restraint for suppressing a fluctuation of the distance from the body surface caused by the normalization transformation at a position inside the breast of the object. That is, the normalization transformation refers to nonlinear transformation to the normalized space deformed such that the position and the shape of the body surface of the object are substantially matched in the first normalized image I1' and a second normalized image I2'.
  • the first normalization transformation Tn1 is obtained.
  • Tn1 is a transformation function for transforming a position coordinate x in the image coordinate system of the first image I1 to a position coordinate x' in the coordinate system of the first normalized image I1' which will be described below.
  • an action of the first normalization transformation Tn1 is represented by Expression 1.
  • x' = Tn1(x) ... (1)
<Step S3013: Obtainment of the inverse transformation>
  • in step S3013, the normalization transformation obtaining unit 103 obtains the inverse transformation Tn1_inv of the first normalization transformation Tn1 obtained in the above-described processing.
  • an action of Tn1_inv can be represented by Expression 2.
  • x = Tn1_inv(x') ... (2)
  • Tn1_inv is a transformation function for transforming the position coordinate x' in the image coordinate system of the first normalized image I1' to the position coordinate x in the image coordinate system of the first image I1.
  • Tn1_inv is such a transformation that the error of the inverse transformation, that is, the residual ||x - Tn1_inv(Tn1(x))||, becomes small.
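The disclosure does not fix a specific inversion method; a common numerical choice, shown below as an assumed sketch, is fixed-point iteration, which drives the round-trip residual toward zero for smooth transformations close to the identity.

```python
# Sketch: numerically invert Tn1 when it is available as a callable
# x' = tn1(x) operating on (N, 3) coordinate arrays. Fixed-point iteration
# is an assumed choice, not a method quoted from the disclosure.
import numpy as np

def invert_transform(tn1, x_prime, n_iters=50):
    """Approximate x such that tn1(x) is close to x_prime."""
    x = np.array(x_prime, dtype=float)   # initial guess: identity
    for _ in range(n_iters):
        x += x_prime - tn1(x)            # shrink the round-trip residual
    return x
```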
  • in steps S3010 to S3013 described above, the case where the first normalization transformation Tn1 for the first image I1 and its inverse transformation Tn1_inv are obtained has been described. Furthermore, in the processing in step S301, the normalization transformation obtaining unit 103 according to the first exemplary embodiment executes processing similar to steps S3010 to S3013 on the second image I2. That is, a body surface shape S2 and a nipple position P2 are detected from the second image I2, and a second normalization transformation Tn2 for normalizing the second image I2 and its inverse transformation Tn2_inv are obtained on the basis of these pieces of information.
  • through step S301, the first normalization transformation Tn1 (602) and its inverse transformation Tn1_inv (604), and the second normalization transformation Tn2 (603) and its inverse transformation Tn2_inv (605), illustrated in Fig. 6, are obtained.
<Step S302: Generation of the normalized images>
  • in step S302, the normalized image generation unit 104 generates the first normalized image I1' by deforming the first image I1 obtained in step S300 on the basis of the first normalization transformation Tn1 obtained in step S301. Similarly, the normalized image generation unit 104 generates the second normalized image I2' by deforming the second image I2 on the basis of the second normalization transformation Tn2. At this time, the processing of generating a deformed image from a transformation function may be executed by using any related-art technique.
  • the inverse transformations Tn1_inv and Tn2_inv of the respective normalization transformations may be used in the image transformation. That is, in step S302, a first normalized image I1' (606) and a second normalized image I2' (607) illustrated in Fig. 6 are obtained.
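The remark above that the inverse transformations are used in the image transformation reflects ordinary backward warping: each voxel x' of the normalized image samples the original image at Tn1_inv(x'). A minimal sketch, assuming Tn1_inv is a callable on (N, 3) coordinate arrays, follows.

```python
# Sketch: generate the first normalized image I1' by backward warping.
# `tn1_inv` maps normalized-space coordinates to real-space coordinates.
import numpy as np
from scipy.ndimage import map_coordinates

def generate_normalized_image(img1, tn1_inv, out_shape):
    grid = np.indices(out_shape).reshape(3, -1).T.astype(float)  # all x'
    src = tn1_inv(grid)                     # Tn1_inv(x') in the real space
    vals = map_coordinates(img1, src.T, order=1, mode='nearest')
    return vals.reshape(out_shape)          # I1'(x') = I1(Tn1_inv(x'))
```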
<Step S303: Obtainment of the corresponding point positions in the normalized space>
  • in step S303, the corresponding point obtaining unit 106 obtains the positions of a plurality of corresponding points between the first normalized image I1' and the second normalized image I2'.
  • this position will be referred to as a corresponding point position in the normalized space.
  • a point used as the corresponding point according to the first exemplary embodiment is, for example, a feature point representing a distinctive structure of the object such as a vascular bifurcation point. That is, the corresponding point obtaining unit 106 obtains the position of the same feature point of the object such as the vascular bifurcation point commonly drawn in the first normalized image I1' and the second normalized image I2' as the corresponding point position in the normalized space.
  • the corresponding point position in the normalized space is a group of pairs of position information in the respective coordinate systems of the first normalized image I1' and the second normalized image I2' (normalized image coordinate systems).
  • i-th corresponding point positions in the normalized space are denoted by C1_i' and C2_i'.
  • a suffix i represents an index of the corresponding point.
  • the number of corresponding points obtained in step S303 is denoted by Nc; that is, 1 ≤ i ≤ Nc.
  • the corresponding point positions assigned with the same index represent mutually corresponding positions.
  • an example of the method of obtaining the corresponding point positions in the normalized space is to obtain them through operation inputs by the user. That is, the display control unit 105 displays the first normalized image I1' and the second normalized image I2' output from the normalized image generation unit 104, and cross-sectional images thereof, on the display unit 190 so that the user can observe the respective images. The user who has observed the respective images can then find the corresponding positions between the images. In particular, when an XZ cross section of the normalized image is displayed, a cross section cutting out the region at an equal distance from the body surface is displayed.
  • a Y coordinate on the XZ cross section represents a distance from the body surface, that is, a depth, and the user can operate the Y coordinate on the XZ cross section to be displayed.
  • with this configuration, since the region at an equal distance from the body surface is drawn in the same plane, an advantage is attained that it is easier to grasp an overall picture of a vessel or the like that runs along the body surface.
  • this information can be used as reference information when the corresponding points are identified between the images. For example, by comparing the XZ cross sections in the vicinity of the same Y coordinate with each other, it is possible to efficiently perform the identification of the corresponding points.
  • the corresponding point obtaining unit 106 can obtain the information of the corresponding points.
  • the operation unit 180 is, for example, an input device such as a mouse or a keyboard.
  • the corresponding point obtaining unit 106 can obtain the corresponding point positions in the normalized space found by the user.
  • the corresponding point obtaining unit 106 transmits the corresponding point positions in the normalized space obtained up to this time to the corresponding point transformation unit 107. Then, the flow proceeds to the processing in step S304.
  • in the normalized space, the common anatomical features of the object drawn in both images are at spatially close positions. For this reason, an advantage is attained that it is easier for the user to find the corresponding point positions than in a case where the first image I1 and the second image I2 are visually observed directly.
  • the method of obtaining the corresponding point positions in the normalized space is not limited to the above-described method, and the corresponding point positions may be automatically obtained by performing the image processing on the images including the first normalized image I1' and the second normalized image I2'.
  • the corresponding point obtaining unit 106 detects a plurality of feature points from the respective images and performs association on the basis of position information related to the plurality of feature points, luminance patterns of the images, or the like, so that the corresponding point positions in the normalized space can be obtained.
  • a specific method for the processing may be executed by using any of techniques in the related art.
  • through step S303, the correspondence information (608), that is, the information of the corresponding points in the normalized space illustrated in Fig. 6, is obtained.
<Step S304: Transformation of the corresponding point positions to the real space>
  • in step S304, the corresponding point transformation unit 107 transforms the corresponding point positions in the normalized space obtained in step S303 on the basis of the inverse transformations Tn1_inv and Tn2_inv of the normalization transformations obtained in step S301, thereby obtaining the corresponding point positions in the real space.
  • specifically, the corresponding point positions in the real space are obtained by the calculations represented by the following expressions.
  • C1_i = Tn1_inv(C1_i') (1 ≤ i ≤ Nc) ... (3)
  • C2_i = Tn2_inv(C2_i') (1 ≤ i ≤ Nc) ... (4)
  • C1_i (1 ≤ i ≤ Nc) denotes the corresponding point position on the first image I1 corresponding to the i-th corresponding point position C1_i' on the first normalized image I1'.
  • C2_i (1 ≤ i ≤ Nc) denotes the corresponding point position on the second image I2 corresponding to the i-th corresponding point position C2_i' on the second normalized image I2'.
  • C1_i and C2_i (1 ≤ i ≤ Nc) described above are collectively referred to as the corresponding point positions in the real space.
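Expressed as code, step S304 is a direct application of Expressions 3 and 4 to the point lists. The sketch below assumes the inverse transformations are callables on single 3-D points.

```python
# Sketch of step S304: map the corresponding point pairs picked in the
# normalized space back into the real space of each original image.
import numpy as np

def to_real_space(c1_norm, c2_norm, tn1_inv, tn2_inv):
    """c1_norm, c2_norm: (Nc, 3) arrays of C1_i' and C2_i'."""
    c1_real = np.array([tn1_inv(p) for p in c1_norm])  # C1_i = Tn1_inv(C1_i')
    c2_real = np.array([tn2_inv(p) for p in c2_norm])  # C2_i = Tn2_inv(C2_i')
    return c1_real, c2_real
```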
  • through step S304, the correspondence information (609), that is, the information of the corresponding points in the real space illustrated in Fig. 6, is obtained.
<Step S305: Obtainment of the deformation transformation>
  • in step S305, the deformation transformation obtaining unit (deformation estimation unit) 108 executes processing of obtaining the deformation transformation between the first image I1 and the second image I2 on the basis of the corresponding point positions in the real space obtained in step S304.
  • the deformation transformation is obtained on the basis of information for regularizing the deformation of the object in the real space.
  • the processing of obtaining the deformation transformation may also be regarded as deformation estimation processing of estimating deformation between the images or deformation registration processing of performing deformation registration between the images.
  • any related-art technique may be used for the deformation processing between the images based on the corresponding point positions in the real space; the deformation processing can be executed by using the TPS method, for example.
  • with this method, it is possible to obtain a deformation transformation with which C1_i and C2_i (1 ≤ i ≤ Nc), the corresponding position coordinate values on the first image I1 and the second image I2, are matched with each other while the bending energy caused by the deformation is minimized.
  • that is, each C1_i (1 ≤ i ≤ Nc) is transformed to C2_i, and the spatial distortion in the real space caused by the transformation is defined as energy.
  • the energy of the distortion can be defined as a value calculated by a spatial secondary differentiation of the deformation transformation, for example.
  • the deformation transformation with which the total amount of the distortion energy is minimized is obtained. That is, the deformation transformation obtaining unit (deformation estimation unit) 108 obtains the deformation transformation on the basis of information for regularizing the deformation so that it becomes a physically appropriate deformation in the real space of the object.
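As an illustration of the TPS-based estimation described above, SciPy's thin-plate-spline interpolator fits exactly this kind of minimum-bending-energy deformation. The sketch below is an assumed stand-in for the disclosure's implementation, with `smoothing` trading point-matching error against distortion.

```python
# Sketch of step S305 with the TPS method: fit the deformation T1_2 that
# carries each C1_i to C2_i while keeping the bending energy small.
import numpy as np
from scipy.interpolate import RBFInterpolator

def estimate_tps(c1_real, c2_real, smoothing=0.0):
    """Return T1_2 as a callable mapping (N, 3) points of I1 into I2."""
    return RBFInterpolator(c1_real, c2_real,
                           kernel='thin_plate_spline', smoothing=smoothing)

# usage, cf. Expression 5: t1_2 = estimate_tps(c1, c2); x2 = t1_2(x1)
# where x1 is an (N, 3) array of real-space positions in I1
```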
  • the obtainment of the deformation transformation based on the information of the regularization is not limited to the case based on the value calculated by the spatial secondary differentiation of the above-described deformation transformation.
  • the regularization can also be defined on the basis of the magnitude of the spatially high-frequency component of the displacement amount based on the deformation transformation, the magnitude of a local volume change rate based on the deformation transformation, or the like.
  • the obtainment of the deformation transformation can be executed on the basis of the information of the nipple position and the body surface shape in the first image I1 and the second image I2.
  • the deformation transformation can be obtained by further adding a restraint for matching the nipple position P1 in the first image I1 and the nipple position P2 in the second image I2 with each other.
  • the deformation transformation can be obtained by further adding a restraint for matching the body surface shapes S1 and S2 with each other.
  • the association of the point groups on the body surface may be performed between the point groups representing the two shapes by using the iterative closest point (ICP) method or the like. Subsequently, with the information of the associated point group on the body surface added, the deformation transformation may be obtained by the TPS method again. Furthermore, the obtainment of the deformation transformation and the update of the association may be repeated until convergence is realized; a sketch of one association pass follows.
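In the minimal sketch below, each point of S1, moved by the current deformation estimate, is paired with its nearest neighbor on S2, and the resulting pairs can be appended to the corresponding points before the TPS fit is rerun. The nearest-neighbor search shown is one standard ICP building block, assumed here for illustration.

```python
# Sketch: one ICP-style association pass between the body-surface point
# groups S1 and S2 (real-space (N, 3) and (M, 3) arrays).
import numpy as np
from scipy.spatial import cKDTree

def associate_surfaces(s1, s2, t1_2):
    moved = t1_2(s1)                    # current deformation applied to S1
    _, idx = cKDTree(s2).query(moved)   # nearest point of S2 for each
    return s1, s2[idx]                  # extra point pairs for the next fit
```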
  • the deformation transformation is denoted by T1_2.
  • an action of T1_2 can be represented by Expression 5.
  • x2 = T1_2(x1) ... (5)
  • the deformation transformation T1_2 provides the transformation from x1 to x2.
  • the deformation transformation obtaining unit (deformation estimation unit) 108 may further obtain an inverse transformation of the above-described deformation transformation T1_2, that is, the deformation transformation from the second image I2 to the first image I1. Since the obtainment of the inverse transformation is similar to the obtainment of the inverse transformation of the normalization transformation described in step S3013, descriptions thereof will be omitted.
  • the inverse transformation of T1_2 is denoted by T2_1.
  • an action of T2_1 can be represented by Expression 6.
  • x1 = T2_1(x2) ... (6)
  • the deformation transformation between the first image I1 and the second image I2 is obtained.
  • through step S305, the deformation transformation (610) from the first image I1 to the second image I2 illustrated in Fig. 6 is obtained. Even if an attempt were made to perform the registration by deforming at least one of the first normalized image I1' and the second normalized image I2' directly, it would be difficult to use regularization information reflecting the physical validity of the deformation in the real space.
  • in contrast, since the corresponding point positions in the normalized space, which is deformed so as to make the corresponding points easy to identify, are transformed to corresponding point positions in the real space, it is possible to obtain a deformation transformation for registration that reflects the physical validity of the deformation in the real space.
<Step S306: Generation of the deformed images>
  • in step S306, the deformed image generation unit 109 generates a first deformed image I1_d by deforming the first image I1 so as to be substantially matched with the second image I2 on the basis of the deformation transformation T1_2 obtained in step S305.
  • similarly, the deformed image generation unit 109 may generate a second deformed image I2_d by deforming the second image I2 so as to be substantially matched with the first image I1 on the basis of the deformation transformation T2_1 obtained in step S305. It should be noted, however, that methods of generating a deformed image by using a deformation transformation have been proposed in the related art, and detailed descriptions will therefore be omitted.
<Step S307: Display of the deformed image>
  • in step S307, the display control unit 105 performs control to display the first deformed image I1_d and the second deformed image I2_d generated in step S306 on the display unit 190.
  • the display control unit 105 may display the first image I1 and the second deformed image I2_d in a manner that the images can be compared with each other on the display unit 190 in accordance with an input operation or the like by the user.
  • the first deformed image I1_d and the second image I2 may be displayed in a manner that the images can be compared with each other on the display unit 190.
  • in the above-described manner, the processing of the image processing apparatus is executed.
  • according to the first exemplary embodiment, it is possible to provide a system in which a normalized image that makes it easy for the observing user to identify the corresponding point positions is displayed, and the image is deformed on the basis of the corresponding point positions specified on that image.
  • since the corresponding points input by the user on the normalized images are transformed to the corresponding positions on the original input images, the deformation between the images can be obtained. Therefore, it is possible to perform a transformation that reflects physical restraints in the real space, such as the minimization of the bending energy of the deformation in the original input images, that is, the regularization information.
  • in the first exemplary embodiment, the normalization transformation for transforming the points at an equal distance from the body surface to the same plane is performed.
  • the method for the coordinate transformation to facilitate the identification of the corresponding point is not limited to the above.
  • the normalization transformation for transforming the points at an equal distance from the nipple position in the image to the same plane may be obtained.
  • the coordinate transformation based on the chest wall shape of the object may be used.
  • the above-described coordinate transformation can be used in a case where the MRI images or the CT images capturing a range wider than the surrounding of the breast corresponding to the object are used as the first image and the second image.
  • the chest wall shape is also drawn in both the first image and the second image in addition to the nipple position and the body surface shape.
  • in the processing of obtaining the normalization transformation executed in step S301 by the normalization transformation obtaining unit 103, it is possible to obtain the normalization transformation on the basis of the nipple position, the body surface shape, and the chest wall shape of the object drawn in the image.
  • the normalization transformation for normalizing each of the first image and the second image can be obtained.
  • with this method, since the normalization transformation projects the deep part of the breast corresponding to the inspection target region of the object, that is, the chest wall located on the inner side of the body surface, to a predetermined position, an advantage is attained that the comparison between the images at that position is facilitated.
  • the case has been described as an example where the image of the breast is the target of the registration, but the target is not limited to the above.
  • an image of another organ such as heart or liver may also be the target of the registration.
  • various normalization techniques in the related art for transforming the image of the target organ to the normalized space can be used.
  • the normalization based on the distance from the surface of the organ may also be used.
  • the normalization based on the distance from the particular landmark position may also be used.
  • the case has been described as an example where, in step S303, the corresponding point position in the normalized space is obtained, and in step S304, the processing of transforming the corresponding point position in the normalized space to the corresponding point position in the real space is executed.
  • the implementation of the present invention is not limited to the above.
  • the corresponding point positions obtained in the processing in step S304 may also include corresponding point positions directly obtained from the first image and the second image, that is, the images in the real space, in addition to the positions obtained by transforming the corresponding point positions in the normalized space to the real space.
  • the display control unit 105 performs control to display the first image and the second image on the display unit 190.
  • the corresponding point obtaining unit 106 can obtain an anatomically characteristic position or the like commonly drawn in the respective images through operation inputs by the user via the operation unit 180. The obtained positions may then be added to the corresponding point positions in the above-described real space. As a result, it is possible to additionally obtain corresponding point positions that are difficult to identify in the normalized images but can be identified in the original input images.
  • the corresponding point positions may be obtained in the plurality of transformed images obtained by the various coordinate transformations illustrated in the modified example 1-1 and respectively transformed to the corresponding point positions in the real space.
  • for example, a transformed image coordinate-transformed so that corresponding points in the vicinity of the body surface of the object are easy to identify and a transformed image coordinate-transformed so that corresponding points inside the object are easy to identify may be used in combination. As a result, the corresponding points can be obtained more accurately.
  • since step S305 and the subsequent steps can be performed on the basis of still more corresponding point positions, an advantage is attained that a highly accurate deformation transformation can be obtained, and the deformed image can be displayed.
  • the following processing may be further executed. That is, the information related to the corresponding point position in the real space obtained in step S304 and the nipple positions P1 and P2 and the body surface shapes S1 and S2 obtained in the processing in step S302 may be saved in the data server 170. In this case, the information is preferably saved while being associated with the first image I1 and the second image I2 obtained in step S300 by the data obtaining unit 102. In the processing in step S300, in addition to the above-described processing, the following processing may be further executed.
  • the processing of reading out and obtaining these pieces of information may be performed. Subsequently, in a case where these pieces of information are read out and obtained, in the processing in step S301, the processing in steps S3010 and S3011 can be omitted, and the processing in steps S3012 and S3013 can be performed on the basis of the read information. In addition, the processing in steps S303 and S304 may be omitted.
  • the corresponding point position in the real space obtained by executing the processing in steps S303 and S304 may be integrated with the corresponding point position in the real space obtained by the readout from the data server 170, and the subsequent processing may be executed.
  • the processing for the transformation to the normalized space may be applied to the information such as the corresponding point position in the real space read from the data server 170 to be displayed on the display unit 190 together with the first normalized image I1' and the second normalized image I2'.
  • the similar processing may be performed on not only the information read out from the data server 170 but also the information obtained in step S303 according to the first exemplary embodiment.
  • with this configuration, the obtainment of corresponding point positions using user operations can be executed in multiple stages, and it is possible to provide a system in which information such as corresponding point positions can be flexibly added.
  • the case has been described as an example where the deformation transformation between the images is obtained by the TPS method, but the implementation of the present invention is not limited to the above.
  • a case will be described where the deformation transformation between the images is obtained by using the free form deformation (FFD) method.
  • a function configuration according to the second exemplary embodiment is similar to Fig. 1 in which the function configuration according to the first exemplary embodiment is exemplified. Here, the descriptions thereof will be omitted.
  • the entire processing procedure according to the second exemplary embodiment can be described with reference to Fig. 3, in which the processing procedure according to the first exemplary embodiment is exemplified.
  • a detail of the processing according to the second exemplary embodiment will be described.
  • in steps S300 to S304, processing similar to that of the first exemplary embodiment is performed, and descriptions thereof will therefore be omitted.
  • in step S305, the deformation transformation obtaining unit (deformation estimation unit) 108 executes the processing of obtaining the deformation transformation between the first image I1 and the second image I2 on the basis of the corresponding point positions C1_i and C2_i in the real space obtained by the transformation processing in step S304.
  • in the second exemplary embodiment, the deformation transformation for substantially matching the corresponding point positions C1_i and C2_i in the real space with each other is obtained by using the FFD method.
  • the FFD method is a deformation representation method of arranging control points in a grid manner and obtaining the displacement amount in the space by using a control amount at each control point as a deformation parameter.
  • the deformation representation obtainment based on the FFD method is a technique in the related art, and therefore detailed descriptions will be omitted.
  • here, the method of obtaining the control amounts that determine the characteristics of the deformation transformation in the FFD method, that is, the deformation parameters, will be described.
  • the deformation transformation obtaining unit (deformation estimation unit) 108 obtains a deformation transformation Φ that minimizes the cost function represented by Expression 7.
  • E(φ) = E_distance(φ) + λ_smoothness E_smoothness(φ) + λ_compression E_compression(φ) ... (7)
  • E_distance(φ) denotes the data term in the cost function. Specifically, the distance error of the corresponding point positions caused by the deformation transformation is calculated by the calculation represented by Expression 8.
  • φ(x) represents a function for obtaining the position coordinate that results from transforming the position coordinate x by the deformation transformation φ.
  • E_smoothness(φ) in Expression 7 is a regularization term (information of the regularization) for obtaining a smooth deformation and can be calculated, for example, by the bending energy of the deformation transformation φ obtained by the calculation represented by Expression 9.
  • E_compression(φ) is a regularization term (information of the regularization) related to a local volume change in the spatial transformation caused by the deformation transformation φ and can be calculated, for example, by the calculation represented by Expression 10.
  • J(φ) represents a function that returns the Jacobian matrix corresponding to the spatial differentiation of the deformation transformation φ; according to the second exemplary embodiment, where a three-dimensional deformation is set as the target, it returns a matrix of three rows and three columns.
  • det() is a function for obtaining the determinant of the matrix given as its argument.
  • Expression 10 represents a calculation that evaluates the magnitude of the local volume change in the spatial transformation caused by the deformation transformation φ and spatially integrates that magnitude.
  • λ_smoothness and λ_compression respectively denote the regularization parameter related to the smoothness of the deformation and the regularization parameter related to the volume change, and they control the respective intensities of the influences of the regularizations.
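  • The exact forms of Expressions 8 to 10 are not reproduced above, so the following sketch only illustrates one plausible discretization of the cost of Expression 7 on a sampled displacement field: a squared corresponding-point distance for E_distance, a finite-difference bending energy for E_smoothness, and a (det J(φ) − 1)² penalty for E_compression. The penalty form and all names are assumptions for illustration.

```python
import numpy as np

def ffd_cost(phi_points, c2, disp_field, spacing,
             lam_smooth=1e-3, lam_comp=1e-2):
    """One possible discretization of the cost of Expression (7).

    phi_points : transformed corresponding points phi(C1_i), shape (N, 3)
    c2         : corresponding points in the second image, shape (N, 3)
    disp_field : displacement field sampled on a grid, shape (Z, Y, X, 3)
    """
    # Data term: squared distance between transformed points and targets.
    e_dist = np.sum((phi_points - c2) ** 2)

    # Smoothness term: squared second derivatives of the displacement
    # (pure derivatives only; mixed terms are dropped for brevity).
    e_smooth = sum(
        np.sum((np.diff(disp_field, n=2, axis=a) / spacing ** 2) ** 2)
        for a in range(3)
    )

    # Volume term: J(phi) = I + grad(disp); penalize det(J) deviating
    # from 1, i.e. any local volume change, summed over the grid.
    grads = np.stack(np.gradient(disp_field, spacing, axis=(0, 1, 2)), axis=-1)
    jac = grads + np.eye(3)
    e_comp = np.sum((np.linalg.det(jac) - 1.0) ** 2)

    return e_dist + lam_smooth * e_smooth + lam_comp * e_comp
```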
  • the deformation transformation φ obtained by the above-described processing is denoted by T1_2.
  • the deformation transformation T1_2 is the transformation function for transforming a position coordinate in the image coordinate system of the first image I1 to the corresponding position coordinate in the image coordinate system of the second image I2.
  • In step S306 and step S307, processing similar to that of the first exemplary embodiment is executed. Here, the descriptions thereof will be omitted.
  • In the above-described manner, the processing of the image processing apparatus is executed.
  • According to the second exemplary embodiment, it is possible to obtain a deformation transformation in which the deformation accompanied by a local volume change of the object is suppressed, as compared with the first exemplary embodiment.
  • since the deformation transformation between the images is obtained in the real space, it is possible to obtain a deformation transformation in which the local volume change with regard to the deformation in the original input image is suppressed. That is, with regard to the deformation of the object, it is possible to obtain a more highly accurate deformation transformation based on the information of the regularization in the real space.
  • According to the second exemplary embodiment, the case has been described as an example where, when the deformation transformation between the first image I1 and the second image I2 is obtained, regularization with regard to the deformation of the object in the real space is applied.
  • the image processing apparatus according to a third exemplary embodiment further uses the regularization in the normalized space in combination.
  • In step S305, the deformation transformation obtaining unit (deformation estimation unit) 108 obtains the deformation transformation between the first image and the second image on the basis of the corresponding point positions in the real space obtained in step S304.
  • As the cost function for obtaining the deformation transformation, the deformation transformation obtaining unit (deformation estimation unit) 108 uses a function obtained by adding, to the cost function defined by Expression 7 according to the second exemplary embodiment, a cost term (regularization term) for performing the regularization in the normalized space. That is, the deformation transformation obtained in step S305 includes the restraint of the regularization with regard to the deformation in the normalized space. This is the only difference from the processing performed in step S305 according to the second exemplary embodiment by the deformation transformation obtaining unit (deformation estimation unit) 108.
  • the deformation transformation obtaining unit (deformation estimation unit) 108 obtains the deformation transformation T1_2' between the first normalized image I1' and the second normalized image I2' on the basis of a candidate value T1_2 of the deformation transformation between the first image I1 and the second image I2.
  • the deformation transformation T1_2' between the first normalized image I1' and the second normalized image I2' is a deformation function that acts in the following manner.
  • x2' = T1_2'(x1') ... (11)
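  • Expression 11 suggests that the normalized-space deformation can be evaluated by composing the real-space candidate with the two normalization transformations, T1_2' = Tn2 ∘ T1_2 ∘ Tn1⁻¹; a minimal sketch with the transformations as callables (the function names are illustrative assumptions) follows.

```python
def make_t12_normalized(tn1_inv, t1_2, tn2):
    """Build T1_2' from a candidate T1_2 and the normalizations."""
    def t12_prime(x1_prime):
        x1 = tn1_inv(x1_prime)  # normalized space -> real space of image 1
        x2 = t1_2(x1)           # real space of image 1 -> real space of image 2
        return tn2(x2)          # real space of image 2 -> normalized space
    return t12_prime
```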
  • the case has been described as an example where the information related to the corresponding point position is obtained on the normalized image, but the present invention is not limited to the above.
  • According to the fourth exemplary embodiment, a case will be described where the information related to the corresponding point position is obtained by further using the first deformed image obtained by the registration of the first image with respect to the second image.
  • Fig. 7 is a block diagram illustrating an example of a function configuration of an image processing apparatus 700 according to the fourth exemplary embodiment. It should be noted however that configurations similar to those of Fig. 1 are assigned with the same reference symbols, and detailed descriptions thereof will be omitted here by citing the above-described explanations.
  • the image processing apparatus 700 includes the data obtaining unit 102, the normalization transformation obtaining unit 103, the normalized image generation unit 104, a display control unit 705, a corresponding point obtaining unit 706, a corresponding point transformation unit 707, a deformation transformation obtaining unit (deformation estimation unit) 708, and the deformed image generation unit 109.
  • the display control unit 705 displays the first image, the second image, the first normalized image, the second normalized image, the first deformed image, and a first normalized deformed image which will be described below on the display unit 190 in accordance with the operation input by the user or the control based on the predetermined program.
  • the first normalized deformed image is an image obtained by normalizing the first deformed image.
  • the corresponding point obtaining unit 706 obtains the information related to the positions of the corresponding points on the first normalized image and the second normalized image in accordance with the operation input on the operation unit 180 by the user. In addition, the corresponding point obtaining unit 706 obtains the information related to the positions of the corresponding points on the first image and the second image. Moreover, the corresponding point obtaining unit 706 obtains the information related to the positions of the corresponding points on the first deformed image and the second image. Furthermore, the corresponding point obtaining unit 706 obtains the information related to the positions of the corresponding points on the first normalized deformed image and the second normalized image.
  • the corresponding point transformation unit 707 transforms the corresponding point position from the normalized space to the real space corresponding to the space of each of the original images on the basis of the normalization transformation obtained by the normalization transformation obtaining unit 103.
  • the corresponding point transformation unit 707 transforms the corresponding point position on the first normalized deformed image to the position on the first deformed image on the basis of the normalization transformation.
  • the corresponding point transformation unit 707 transforms the corresponding point position on the first deformed image to the position on the first image on the basis of the deformation transformation obtained by the deformation transformation obtaining unit (deformation estimation unit) 708. That is, the corresponding point position on the first deformed image is transformed to the position in the real space corresponding to the space of the original image.
  • the corresponding point transformation unit 707 transforms the positions of the corresponding points in the first image, the second image, and the first deformed image to the positions of each of the first normalized image, the second normalized image, and the first normalized deformed image on the basis of the normalization transformation.
  • the corresponding point transformation unit 707 also transforms the corresponding point position in the first image to the position in the first deformed image on the basis of the deformation transformation.
  • the deformation transformation obtaining unit (deformation estimation unit) 708 performs the deformation estimation between the first image and the second image on the basis of the corresponding point positions obtained in the real space and the corresponding point positions that were obtained on the normalized image, the deformed image, and the normalized deformed image and then transformed to the real space, and obtains the deformation transformation between the images.
  • a hardware configuration of the image processing apparatus 700 is similar to the example illustrated in Fig. 2. Detailed descriptions thereof will be omitted here by citing the above-described explanations.
  • Fig. 8 is a flow chart illustrating an example of the entirety of the processing performed by the image processing apparatus 700.
  • the processing in step S800 to step S802 is similar to the processing in step S300 to step S302 illustrated in Fig. 3.
  • the processing in step S804 and step S805 is similar to the processing in step S303 and step S304 illustrated in Fig. 3 except that the flow proceeds to step S815 after step S805.
  • <Step S803: Determination on whether or not the corresponding point position on the normalized image is obtained>
  • the corresponding point obtaining unit 706 determines whether or not the corresponding point position on the normalized image is obtained.
  • the corresponding point obtaining unit 706 obtains the information of the instruction input by the user via the operation unit 180 and performs the above-described determination in accordance with the content of the instruction.
  • the instruction is performed by the operation input for specifying the corresponding point on the normalized image displayed on the display unit 190.
  • the instruction is performed by the operation input with respect to the user interface where it is possible to select the image for inputting the corresponding point which is displayed on the display unit 190.
  • in a case where the corresponding point position is to be obtained on the normalized image, the flow proceeds to step S804.
  • in a case where it is not, the flow proceeds to step S806.
  • <Step S806: Determination on whether or not the corresponding point position is obtained on the real image>
  • the corresponding point obtaining unit 706 determines whether or not the corresponding point position is obtained on the real image.
  • the real image includes the first image I1 and the second image I2.
  • the corresponding point obtaining unit 706 obtains the information of the instruction input by the user via the operation unit 180 and performs the above-described determination in accordance with the content of the instruction.
  • the instruction is performed by the operation input for specifying the corresponding point on the real image displayed on the display unit 190.
  • the instruction is performed by the operation input with respect to the user interface where it is possible to select the image for inputting the corresponding point which is displayed on the display unit 190.
  • in a case where the corresponding point position is to be obtained on the real image, the flow proceeds to step S807.
  • in a case where it is not, the flow proceeds to step S808.
  • <Step S807: Obtainment of the corresponding point on the real image>
  • In step S807, the corresponding point obtaining unit 706 obtains the corresponding point position between the first image I1 and the second image I2.
  • the corresponding point obtaining unit 706 obtains the corresponding point position on the basis of the operation input with respect to each of the first image I1 and the second image I2 displayed on the display unit 190.
  • the flow proceeds to step S815.
  • <Step S808: Determination on whether or not the corresponding point position is obtained on the deformed image>
  • In step S808, the image processing apparatus 700 determines whether or not the corresponding point position is obtained on the deformed image or the normalized deformed image.
  • the deformed image is the first deformed image I1_d obtained in step S814.
  • the normalized deformed image is a first normalized deformed image I1_d' obtained in step S814.
  • the corresponding point obtaining unit 706 obtains the information of the instruction input by the user via the operation unit 180 and performs the above-described determination in accordance with the content of the instruction.
  • the instruction is performed by the operation input for specifying the corresponding point on the deformed image displayed on the display unit 190.
  • the instruction is performed by the operation input with respect to the user interface where it is possible to select the image for inputting the corresponding point which is displayed on the display unit 190.
  • in a case where the corresponding point position is to be obtained on the deformed image or the normalized deformed image, the flow proceeds to step S809.
  • in a case where it is not, the flow proceeds to step S811.
  • <Step S809: Determination on whether or not the deformation transformation has already been obtained>
  • In step S809, the deformation transformation obtaining unit 708 determines whether or not the deformation transformation has already been obtained. In a case where the deformation transformation T1_2 with regard to the first image I1 and the second image I2 has already been obtained by the deformation transformation obtaining unit 708, the flow proceeds to step S810. In a case where the deformation transformation T1_2 has not been obtained yet, the flow proceeds to step S811.
  • <Step S810: Obtainment of the corresponding point position on the deformed image>
  • In step S810, the corresponding point obtaining unit 706 obtains the corresponding point on the deformed image.
  • the corresponding point obtaining unit 706 obtains the corresponding point position between the first deformed image I1_d and the second image I2.
  • the obtained corresponding point position is transformed to a position in the real space by the deformation transformation T2_1.
  • the corresponding point obtaining unit 706 obtains the corresponding point position between the first normalized deformed image I1_d' and the second normalized image I2'.
  • the obtained corresponding point position is transformed to a position in the real space by the inverse transformation of the normalization transformation Tn1_d and the deformation transformation T2_1.
  • the corresponding point obtaining unit 706 obtains information of the transformation of the corresponding point position obtained in the deformed image or the normalized deformed image to the corresponding point position in the real space.
  • the flow proceeds to step S815.
  • Fig. 9 is a flow chart for describing an example of the processing in step S810.
  • <Step S8100: Determination on whether or not the corresponding point position is obtained on the normalized deformed image>
  • the corresponding point obtaining unit 706 determines whether or not the corresponding point position is obtained on the normalized deformed image.
  • the normalized deformed image refers to an image obtained by further normalizing the deformed image generated by the deformation transformation.
  • the corresponding point obtaining unit 706 obtains the information of the instruction input by the user via the operation unit 180 and performs the above-described determination in accordance with the content of the instruction.
  • the instruction is performed, for example, by the operation input for specifying the corresponding point on the normalized deformed image displayed on the display unit 190. Alternatively, the instruction is performed by the operation input with respect to the user interface where it is possible to select the image for inputting the corresponding point which is displayed on the display unit 190.
  • <Step S8101: Obtainment of the corresponding point position on the deformed image>
  • In step S8101, the corresponding point obtaining unit 706 obtains the corresponding point position between the first deformed image I1_d and the second image I2.
  • the corresponding point obtaining unit 706 obtains the corresponding point position between the first deformed image I1_d and the second image I2 in accordance with the operation input by the user via the operation unit 180.
  • <Step S8102: Obtainment of the corresponding point position on the normalized deformed image>
  • In step S8102, the corresponding point obtaining unit 706 obtains the corresponding point position between the first normalized deformed image I1_d' and the second normalized image I2'.
  • the corresponding point obtaining unit 706 obtains the corresponding point position between the first normalized deformed image I1_d' and the second normalized image I2' in accordance with the operation input by the user via the operation unit 180.
  • <Step S8103: Transformation of the corresponding point position onto the real image>
  • In step S8103, the corresponding point transformation unit 707 transforms the corresponding point positions obtained in the processing in step S8101 and step S8102 to the corresponding point positions in the real space.
  • the corresponding point transformation unit 707 transforms the corresponding point position on the first deformed image I1_d obtained in step S8101 on the basis of the deformation transformation T2_1 and obtains the corresponding point position on the first image I1, that is, the corresponding point position on the real image.
  • the processing of transforming the corresponding point position in the first deformed image I1_d to the corresponding point position in the first image I1 is represented, for example, by Expression 13.
  • C1 = T2_1(C1_d) ... (13)
  • C1_d denotes the corresponding point specified on the first deformed image I1_d.
  • the corresponding point transformation unit 707 transforms the corresponding point position on the first normalized deformed image I1_d' obtained in step S8102 on the basis of the inverse of the normalization transformation Tn1_d and the deformation transformation T2_1, and obtains the corresponding point position on the first image I1, that is, the corresponding point position on the real image.
  • the processing of transforming the corresponding point position in the first normalized deformed image I1_d' to the corresponding point position in the first image I1 is represented, for example, by Expression 14.
  • C1 = T2_1(Tn1_d_inv(C1_d')) ... (14)
  • C1_d' denotes the corresponding point specified on the first normalized deformed image I1_d'.
  • Tn1_d_inv denotes the inverse transformation of Tn1_d.
  • the inverse transformation is obtained by a related-art method.
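  • Expressed as code, Expressions 13 and 14 are simple compositions; a minimal sketch, assuming the transformations are available as callables:

```python
def deformed_to_real(c1_d, t2_1):
    """Expression (13): point on the first deformed image -> first image."""
    return t2_1(c1_d)

def normalized_deformed_to_real(c1_d_prime, tn1_d_inv, t2_1):
    """Expression (14): undo the normalization of the deformed image,
    then undo the deformation."""
    return t2_1(tn1_d_inv(c1_d_prime))
```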
  • the processing of obtaining the corresponding point position on the deformed image is executed, and the flow proceeds to step S815.
  • <Step S811: Determination on whether or not the deformation transformation is obtained>
  • In step S811, the deformation transformation obtaining unit 708 determines whether or not the deformation transformation is to be obtained.
  • the deformation transformation obtaining unit 708 obtains the determination instruction by the operation input by the user via the operation unit 180. The instruction is performed by the operation input with respect to the user interface for instructing the obtainment of the deformation transformation.
  • in a case where it is determined that the deformation transformation is to be obtained, the flow proceeds to step S812.
  • in a case where it is not, the flow proceeds to step S815.
  • <Step S812: Determination on whether or not the number of corresponding points is sufficient>
  • In step S812, the deformation transformation obtaining unit 708 determines whether or not the number of corresponding points is sufficient for the obtainment of the deformation transformation. For example, the deformation transformation obtaining unit 708 determines that the number of corresponding points is sufficient in a case where the number is larger than a predetermined number. In a case where the number of corresponding points is sufficient, the flow proceeds to step S813. In a case where it is not sufficient, the flow proceeds to step S815.
  • <Step S813: Obtainment of the deformation transformation>
  • In step S813, the deformation transformation obtaining unit (deformation estimation unit) 708 obtains the deformation transformation T1_2 between the first image I1 and the second image I2 on the basis of the corresponding point positions in the real space obtained up to this point. Since the processing of obtaining the deformation transformation T1_2 is similar to the processing in step S305, detailed descriptions thereof will be omitted here by citing the above-described explanations.
  • <Step S814: Generation of the deformed image>
  • In step S814, the deformed image generation unit 109 generates the first deformed image I1_d obtained by such a deformation that the first image I1 is substantially matched with the second image I2, on the basis of the deformation transformation T1_2 obtained in step S813.
  • In addition, the deformed image generation unit 109 may generate the second deformed image I2_d obtained by such a deformation that the second image I2 is substantially matched with the first image I1, on the basis of the deformation transformation T2_1 obtained in step S813. It should be noted however that the method of generating the deformed image by using the deformation transformation has been proposed in the related art, and therefore detailed descriptions will be omitted.
  • the normalization transformation obtaining unit 103 obtains the normalization transformation Tn1_d for transforming the first deformed image I1_d to the normalized deformed image.
  • the normalization transformation Tn1_d is obtained by a method similar to that in step S801, that is, the processing exemplified in Fig. 5. It should be noted that the normalization transformation obtaining unit 103 obtains the body surface shape or the nipple position in the first deformed image I1_d by applying the deformation transformation T1_2 to the body surface shape or the nipple position of the first image I1 obtained in step S801.
  • the normalized image generation unit 104 transforms the first deformed image I1_d on the basis of the normalization transformation Tn1_d and generates the first normalized deformed image I1_d'. It should be noted however that, since the first deformed image I1_d is the image generated by deforming the first image I1 so as to be substantially matched with the second image I2, the first deformed image I1_d may be transformed on the basis of the second normalization transformation Tn2 obtained in step S801 to generate the first normalized deformed image I1_d'.
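  • Deformed-image generation itself is related art; as one illustrative sketch, resampling by backward warping (mapping each output voxel back into the input image) could look as follows, with all names assumed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, transform_inv, shape_out):
    """Resample `image` under a coordinate transformation.

    `transform_inv` maps an output voxel coordinate back to the input
    image coordinate (backward warping, the usual form for resampling).
    """
    grid = np.indices(shape_out).reshape(len(shape_out), -1).T.astype(float)
    src = np.array([transform_inv(p) for p in grid])  # explicit, not fast
    values = map_coordinates(image, src.T, order=1, mode="nearest")
    return values.reshape(shape_out)
```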
  • <Step S815: Display of the image and the corresponding point>
  • In step S815, the display control unit 705 displays, on the display unit 190, the information indicating the corresponding point positions obtained up to this point.
  • the display control unit 705 displays the deformed image generated in step S814 on the display unit 190.
  • the display control unit 705 may further display other images generated up to step S814 on the display unit 190.
  • the display control unit 705 may arrange and display, in a tiling manner on the display unit 190, the images generated up to step S814 among the first image, the second image, the first normalized image, the second normalized image, the first deformed image, and the first normalized deformed image.
  • the display control unit 705 may display the corresponding point position in a manner that the image where the corresponding point is input can be distinguished from the others. For example, the display control unit 705 displays graphic symbols representing the corresponding point positions in different display modes (for example, different colors or shapes) in accordance with types of the images where the corresponding point is input.
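  • As an illustration only (the mapping of image types to colors and markers below is an assumption), such per-source display modes could be realized as:

```python
import matplotlib.pyplot as plt

# Assumed mapping from the image a point was input on to its symbol.
STYLE = {
    "real": ("tab:blue", "o"),
    "normalized": ("tab:orange", "s"),
    "deformed": ("tab:green", "^"),
    "normalized_deformed": ("tab:red", "D"),
}

def draw_corresponding_points(ax, points_by_source):
    """Draw points grouped by the image type they were input on."""
    for source, points in points_by_source.items():
        color, marker = STYLE[source]
        xs, ys = zip(*points)
        ax.scatter(xs, ys, c=color, marker=marker, label=source)
    ax.legend()
```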
  • since the deformation transformations T1_2 and T2_1 are noninvertible, the corresponding point positions that are obtained on the first deformed image or the first normalized deformed image and are not transformed do not match the positions obtained by transforming those positions and then inversely transforming them.
  • <Step S816: Determination on the end of the entirety of the processing>
  • In step S816, the image processing apparatus 700 determines whether or not the entirety of the processing is to be ended. For example, the image processing apparatus 700 obtains an end instruction in response to the operation input on the operation unit 180 by the user. The end instruction is performed by the operation input with respect to the user interface displayed on the display unit 190.
  • In step S812, when it is determined that the number of corresponding points is insufficient, the display control unit 705 may display, on the display unit 190, a screen for notifying the user that the number of corresponding points is insufficient.
  • in a case where the end instruction is obtained, the processing illustrated in Fig. 8 is ended.
  • in a case where it is not, the processing returns to step S803.
  • In the above-described manner, the processing of the image processing apparatus is executed.
  • According to the fourth exemplary embodiment, it is possible to provide a system in which the images are displayed with their deformation states matched with each other (by performing the deformation registration of one image with respect to the other image) so that it is facilitated for the user who observes the images to identify the corresponding point positions, and in which the image is deformed on the basis of the corresponding point positions specified on the images.
  • According to the fourth exemplary embodiment, after the corresponding points input on the first normalized image and those input on the first deformed image are transformed to the positions on the first image, they can be managed uniformly together with the corresponding points input on the first image.
  • since each corresponding point is managed at its position on the first image, even after the deformation transformation is updated, the corresponding point position is transformed to the appropriate position on the first deformed image, as the following sketch illustrates.
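  • A sketch of this management scheme (the helper names are assumptions): each point is stored in first-image coordinates and re-projected onto whichever image is displayed, so an updated deformation transformation automatically moves the displayed symbols.

```python
def project_for_display(c1, view, tn1, t1_2, tn1_d):
    """Map a point stored on the first image onto the displayed image."""
    if view == "first":
        return c1
    if view == "first_normalized":
        return tn1(c1)                 # normalize
    if view == "first_deformed":
        return t1_2(c1)                # deform toward the second image
    if view == "first_normalized_deformed":
        return tn1_d(t1_2(c1))         # deform, then normalize
    raise ValueError(f"unknown view: {view}")
```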
  • the graphic symbols representing the corresponding point positions that are obtained on the first deformed image and are not transformed are not displayed on the first deformed image.
  • the implementation of the present invention is not limited to the above, and the graphic symbol representing the corresponding point position obtained on the first deformed image and transformed and inversely transformed and the graphic symbol representing the corresponding point position obtained on the first image and transformed and inversely transformed may be displayed in different display modes.
  • the present invention is not limited to the above, and the information related to the corresponding point position between the second deformed image and the first image may be obtained. Alternatively, the information related to the corresponding point position between the deformed images obtained by deforming the first image and the second image may be obtained.
  • According to the fourth exemplary embodiment, the case has been described where the corresponding point is input on the original image, the deformed image, the normalized image, and the normalized deformed image.
  • the implementation of the present invention is not limited to the above, and a configuration may be adopted in which the normalized image and/or the normalized deformed image is not generated, and the corresponding point is not input thereon.
  • the processing performed by the image processing apparatus 700 according to the fourth exemplary embodiment is exemplified in Fig. 8 and Fig. 9, but the order of the processing is not limited to the above.
  • the processing constituted by steps S803, S804, and S805, the processing constituted by steps S806 and S807, and the processing constituted by steps S808, S809, and S810 may be performed in an order different from the order exemplified in Fig. 8.
  • the example has been illustrated in which the flow proceeds to step S815 when each of steps S805, S807, and S810 is ended, but the order of the processing is not limited to the above.
  • the flow may proceed to each of next steps (S806 in the case of S805, S808 in the case of S807, and S811 in the case of S810).
  • the image processing apparatus 700 may stand by in a mode for standing by for an instruction of the user, and the flow may proceed to a specific step on the basis of the operation input for selecting the image where the corresponding point is input.
  • the example has been described in which the pieces of information of the corresponding point positions input by using various images are respectively transformed to the corresponding point positions in the real space.
  • the image processing apparatus 700 may attach the information of the corresponding point positions in the real space to the real image, that is, at least one of the first image and the second image.
  • the processing of respectively transforming the corresponding point positions to the real space is performed; this processing may be performed at a predetermined timing, or may be performed each time the corresponding point is input.
  • the information of the corresponding point positions input by using various images may be displayed on the display unit 190 at a predetermined timing (for example, in step S815), or the corresponding point positions input thus far may be displayed on the display unit 190 after being transformed to the positions in the image displayed on the display unit 190.
  • the deformation transformation between the images is represented by the TPS method or the FFD method in the above-described examples, but the target is not limited to the above as long as the method is a deformation representation method in which the optimization of the deformation parameter can be performed by taking the regularization in the real space into account.
  • a radial basis function different from the TPS may be used.
  • a large deformation diffeomorphic metric mapping (LDDMM) method of representing deformation transformation by an integral of a velocity field or the like may be used.
  • the example has been described in which the first transformed image and the second transformed image are obtained by applying the coordinate transformation to each of the first image and the second image, but the present invention is not limited to the above.
  • the coordinate transformation for making the shape of the object drawn in the first image close to the shape of the object drawn in the second image may be applied to obtain only the first transformed image.
  • in that case, the corresponding point transformation unit 107 transforms, among the information of the corresponding points included in each of the first transformed image and the second image, the corresponding point position on the first transformed image to the position in the first image.
  • the corresponding point transformation unit 107 outputs the corresponding point positions in the first image and the second image to the deformation transformation obtaining unit (deformation estimation unit) 108.
  • the function described according to the first exemplary embodiment to the fourth exemplary embodiment may be realized, for example, by a system constituted by a server apparatus (not illustrated) and a client apparatus (not illustrated).
  • the server apparatus (not illustrated) includes the deformation transformation obtaining unit (deformation estimation unit) 108 and the deformed image generation unit 109 illustrated in Fig. 1.
  • the client apparatus (not illustrated) includes the normalization transformation obtaining unit 103, the normalized image generation unit 104, the display control unit 105, the corresponding point obtaining unit 106, and the corresponding point transformation unit 107 and is connected to the operation unit 180 and the display unit 190.
  • the client apparatus (not illustrated) applies the coordinate transformation to the image obtained from the data server 170, obtains the corresponding point position in the space where the coordinate transformation has been performed, and transforms it to the corresponding point position in the space of the original image. Subsequently, the client apparatus (not illustrated) outputs the plurality of images desired to be subjected to the registration and the corresponding point positions to the server apparatus (not illustrated).
  • the server apparatus obtains the deformation transformation on the basis of the plurality of images and the information of the corresponding point positions input from the client apparatus (not illustrated) and outputs the deformation transformation to the client apparatus (not illustrated).
  • the client apparatus (not illustrated) generates the registered image on the basis of the deformation transformation input from the server apparatus (not illustrated) to be displayed on the display unit 190.
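  • A minimal sketch of the client side of this exchange, assuming a JSON-over-HTTP interface (the endpoint and payload format are illustrative assumptions, not part of the disclosure):

```python
import json
import urllib.request

def request_deformation(server_url, image_ids, corresponding_points):
    """Send image references and real-space corresponding points to the
    server; receive the estimated deformation transformation."""
    payload = json.dumps({
        "images": image_ids,
        "points": corresponding_points,  # e.g. [[x1, y1, z1, x2, y2, z2], ...]
    }).encode("utf-8")
    request = urllib.request.Request(
        server_url + "/deformation",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())  # e.g. an FFD control-point grid
```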
  • the server apparatus (not illustrated) may further include the normalization transformation obtaining unit 103.
  • the present invention can also be realized by processing in which a program for realizing one or more of the functions according to the above-described exemplary embodiments is supplied to a system or an apparatus via a network or a storage medium, and one or more processors in a computer of the system or the apparatus reads out and executes the program.
  • the present invention can also be realized by a circuit (for example, an application specific integrated circuit (ASIC)) that realizes one or more of the functions.
  • the image processing apparatus may be realized as a standalone apparatus or may adopt a mode in which a plurality of apparatuses are combined with each other so as to be mutually communicable to execute the above-described processing, and both of the configurations are included in the exemplary embodiments of the present invention.
  • a common server apparatus or a server group may execute the above-described processing. It suffices if the plurality of apparatuses constituting the image processing apparatus and the image processing system can perform the communication at a predetermined communication rate; the plurality of constituent apparatuses do not necessarily need to exist in the same facility or even in the same country.
  • the exemplary embodiments of the present invention include a mode in which a program of software for realizing the functions of the above-described exemplary embodiments is supplied to the system or the apparatus, and the computer of the system or the apparatus reads out and executes a code of the supplied program.
  • the program code itself installed into the computer is also one of the exemplary embodiments of the present invention.
  • an operating system (OS) or the like running on the computer performs part or all of the actual processing on the basis of an instruction included in the program read by the computer, and the functions of the above-described exemplary embodiments may also be realized by the processing.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD) TM ), a flash memory device, a memory card, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image processing apparatus that makes it easy to identify a corresponding target point between images and to perform physically appropriate registration by means of that identification. The image processing apparatus obtains a coordinate transformation for respectively transforming a first image and a second image obtained by imaging an object into a first transformed image and a second transformed image, obtains correspondence information serving as information of a corresponding point included in each of the first and second transformed images, respectively transforms the correspondence information into positions in the first and second images, and obtains a deformation transformation for performing the registration of the first and second images on the basis of the information of the corresponding points in the first and second images.
PCT/JP2016/005253 2016-01-29 2016-12-28 Appareil de traitement d'images, procédé de traitement d'images, système de traitement d'images, et programme WO2017130263A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/073,156 US10762648B2 (en) 2016-01-29 2016-12-28 Image processing apparatus, image processing method, image processing system, and program

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2016016446 2016-01-29
JP2016-016446 2016-01-29
JP2016224383A JP6821403B2 (ja) 2016-01-29 2016-11-17 画像処理装置、画像処理方法、画像処理システム、及びプログラム。
JP2016-224383 2016-11-17

Publications (1)

Publication Number Publication Date
WO2017130263A1 true WO2017130263A1 (fr) 2017-08-03

Family

ID=57906941

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/005253 WO2017130263A1 (fr) 2016-01-29 2016-12-28 Appareil de traitement d'images, procédé de traitement d'images, système de traitement d'images, et programme

Country Status (1)

Country Link
WO (1) WO2017130263A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1669928A2 (fr) * 2004-12-10 2006-06-14 Radiological Imaging Technology, Inc. Optimisation de l'alignement d'une image
WO2011052515A1 (fr) * 2009-10-27 2011-05-05 Canon Kabushiki Kaisha Appareil de traitement des informations, procédé de traitement des informations, et programme
WO2015044838A1 (fr) * 2013-09-30 2015-04-02 Koninklijke Philips N.V. Procédé et système destinés à l'enregistrement déformable automatique
US20150235369A1 (en) 2014-02-14 2015-08-20 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20160314587A1 (en) 2014-01-10 2016-10-27 Canon Kabushiki Kaisha Processing apparatus, processing method, and non-transitory computer-readable storage medium

Similar Documents

Publication Publication Date Title
US10762648B2 (en) Image processing apparatus, image processing method, image processing system, and program
US9119599B2 (en) Medical image display apparatus, medical image display method and non-transitory computer-readable recording medium having stored therein medical image display program
US20110262015A1 (en) Image processing apparatus, image processing method, and storage medium
US9965857B2 (en) Medical image processing
JP5706389B2 (ja) 画像処理装置および画像処理方法、並びに、画像処理プログラム
US10867423B2 (en) Deformation field calculation apparatus, method, and computer readable storage medium
US10395380B2 (en) Image processing apparatus, image processing method, and storage medium
US10891787B2 (en) Apparatus and method for creating biological model
CN110546685B (zh) 图像分割和分割预测
JP5955199B2 (ja) 画像処理装置および画像処理方法、並びに、画像処理プログラム
US10810717B2 (en) Image processing apparatus, image processing method, and image processing system
Queirós et al. Fast left ventricle tracking using localized anatomical affine optical flow
Mulder et al. Multiframe registration of real-time three-dimensional echocardiography time series
US10297035B2 (en) Image processing apparatus, image processing method, and program
US11138736B2 (en) Information processing apparatus and information processing method
JP2018153455A (ja) 画像処理装置、画像処理装置の制御方法およびプログラム
WO2017130263A1 (fr) Appareil de traitement d'images, procédé de traitement d'images, système de traitement d'images, et programme
Puyol-Anton et al. Towards a multimodal cardiac motion atlas
Morais et al. Dense motion field estimation from myocardial boundary displacements
US11928828B2 (en) Deformity-weighted registration of medical images
JP2017111561A (ja) 情報処理装置、情報処理方法およびプログラム
Zakkaroff et al. Investigation into diagnostic accuracy of common strategies for automated perfusion motion correction
Barbosa et al. Improving the robustness of interventional 4D ultrasound segmentation through the use of personalized prior shape models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16831584

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16831584

Country of ref document: EP

Kind code of ref document: A1