WO2016027840A1 - Image processing device, method, and program - Google Patents

Image processing device, method, and program

Info

Publication number
WO2016027840A1
Authority
WO
WIPO (PCT)
Prior art keywords
shape
input image
subject
eigenspace
specific object
Prior art date
Application number
PCT/JP2015/073277
Other languages
French (fr)
Japanese (ja)
Inventor
清水 昭伸 (Akinobu Shimizu)
斉藤 篤 (Atsushi Saito)
Original Assignee
国立大学法人東京農工大学 (Tokyo University of Agriculture and Technology)
Priority date
Filing date
Publication date
Application filed by 国立大学法人東京農工大学 (Tokyo University of Agriculture and Technology)
Priority to JP2016544238A (JP6661196B2)
Publication of WO2016027840A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis

Definitions

  • the present invention relates to an image processing apparatus, method, and program, and more particularly, to an image processing apparatus, method, and program for extracting a region of a subject.
  • The process of recognizing a figure in a digital image using a computer is called segmentation.
  • When the SN ratio of the target figure is low and its shape fluctuates statistically, accurate segmentation becomes very difficult.
  • the first method uses a single shape template.
  • For example, as general shape templates, an elliptical template (e.g., Slabaugh, G., Unal, G., "Graph cuts segmentation using an elliptical shape prior.", IEEE International Conference on Image Processing, 2005, Vol. 2, pp. II-1222-5.) and block-shaped templates (e.g., Funka-Lea, G., Boykov, Y., Florin, C., Jolly, M.-P., Moreau-Gobard, R., Ramaraj, R., Rinck, D., "Automatic heart isolation for CT coronary visualization using graph-cuts.", Biomedical Imaging: Nano to Macro, 3rd IEEE International Symposium, 2006.) are known.
  • the second method uses a plurality of (several) shape templates or probabilistic representations of shapes.
  • Methods using multiple shapes selected from statistical models (e.g., Nakagomi, K., Shimizu, A., Kobatake, H., Yakami, M., Fujimoto, K., Togashi, K., "Multi-shape graph cuts with neighbor prior constraints and its application to lung segmentation from a chest CT volume.", Medical Image Analysis 17(1), 2013, pp.62-77.) and probabilistic representations of shapes (e.g., Linguraru, M.G., Pura, J.A., Pamulapati, V., Summers, R.M., "Statistical 4D graphs for multi-organ abdominal segmentation from multiphase CT.", Medical Image Analysis 16(4), 2012, pp.904-914.) are known.
  • In the third method, a large number (several tens or more) of shape templates is used.
  • Methods specialized for specific problems are also known (e.g., Kohli, P., Rihan, J., Bray, M., Torr, P.H., "Simultaneous segmentation and pose estimation of humans using dynamic graph cuts.", International Journal of Computer Vision 79(3), 2008, pp.285-298.).
  • A method using an arbitrarily large number of figure shapes (for example, see US Pat. No. 8,249,349) is also known.
  • The conventional techniques described above become increasingly capable of handling statistically fluctuating figures in order from the first method to the third method.
  • The most advanced method is the third method, which uses an arbitrarily large number of figure shapes (the above-mentioned US Pat. No. 8,249,349).
  • However, all methods including this one must prepare a shape set by selecting shapes in advance. Therefore, unless the optimal shape is included in the pre-selected shape set, the optimality of the segmentation is not guaranteed and performance deteriorates.
  • Since a digital image is a set of a finite number of pixels, a method of preparing all possible shape patterns is conceivable in principle, in which case optimality is guaranteed.
  • However, the number of shapes that must be prepared for this purpose is enormous, and the approach is unrealistic in terms of the memory and calculation time required for the entire process, including preprocessing.
  • The algorithm of the above-mentioned US Pat. No. 8,249,349, which has been the best performing to date, is limited to handling about 10^7 shape templates in a two-dimensional image.
  • The number of shape patterns that a statistical model can generate is in many cases 10^9 or more, which is very difficult to handle with the conventional algorithm in terms of computational cost.
  • Moreover, 3D images are the mainstream of medical images; in that case the required memory increases by more than two orders of magnitude, and it was impossible to execute the processing in a realistic time and memory size with a conventional algorithm.
  • An image processing apparatus that extracts the region of a subject, which is a specific object, from an input image representing the subject, and that includes an accepting unit which accepts the input image. The eigenspace is based on eigenvectors calculated in advance from a plurality of learning images representing the subject, in each of which the subject region has been obtained in advance.
  • The image processing method is an image processing method in an image processing apparatus that includes an accepting unit and a segmentation unit and extracts the region of the subject from an input image representing the subject that is a specific object.
  • The segmentation means of the third aspect estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function based on the input image in the eigenspace, and can extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  • The segmentation means of the fourth aspect estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function, according to a search algorithm that repeatedly divides convex polytopes on the eigenspace, representing shape sets that include the shape of the specific object, to search for the convex polytope containing the point indicating the optimal shape parameter. The region of the subject can then be extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  • The segmentation means of the fifth aspect calculates, in the search algorithm, the lower bound of the objective function for the shape set contained in a convex polytope by examining only the vertices of that polytope, searches for the convex polytope on the eigenspace containing the point indicating the optimal shape parameter, and estimates the shape parameter representing the shape of the subject represented by the input image. The region of the subject can then be extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  • The segmentation means of the sixth aspect repeatedly divides the convex polytope on the eigenspace so that the volumes of the two polytopes after division are substantially equal, searches for the convex polytope on the eigenspace containing the point indicating the optimal shape parameter, and estimates the shape parameter representing the shape of the subject represented by the input image. The region of the subject having the shape of the specific object represented by the estimated shape parameter can then be extracted from the input image.
  • The segmentation means of the seventh aspect sets sampling points on the eigenspace, divides the convex polytope using a hyperplane determined from those sampling points, estimates the shape parameter representing the shape of the subject represented by the input image, and can extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  • The segmentation means of the eighth aspect can set the sampling points on the eigenspace arbitrarily.
  • The objective function of the ninth aspect can include, as the likelihood of the shape of the specific object, a monotone function that changes monotonically with respect to the value of the shape label at each pixel for the shape represented by the shape parameter.
  • the search algorithm of the tenth aspect can use the Branch-and-bound method and the graph cut method.
  • the program according to the eleventh aspect is a program for causing a computer to function as each means of the image processing apparatus.
  • The eigenspace is based on eigenvectors calculated in advance from a plurality of previously obtained images in which the subject region is known.
  • the point on the eigenspace indicates the shape parameter of the statistical shape model representing the shape of the specific object
  • A shape parameter representing the shape of the subject represented by the input image is estimated so as to optimize a predetermined objective function that represents values corresponding to the likelihood of the shape and to pixel value differences between adjacent pixels in the image.
  • the area of the subject is extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  • In the following, the statistical shape model generation device 10, which generates a statistical model of figure shape representing the statistical variation of a specific object (hereinafter referred to as a statistical shape model), and the image processing apparatus 100, which extracts the subject region from an input image representing that specific object as the subject, will be described.
  • a case where a three-dimensional abdominal CT image is used as an input image and a pancreas region is extracted from the input image will be described as an example.
  • the algorithm is not limited to the combination of the Branch and bound method and the graph cut method, and any algorithm based on optimization theory can be applied.
  • FIG. 19 shows an outline of processing in the prior art.
  • FIG. 19 shows processing when the method of the above-mentioned US Pat. No. 8,249,349 is applied to pancreas segmentation using a statistical shape model.
  • The conventional method requires generation processing of a shape template set T (⊂ S) and clustering processing for the set T.
  • This eliminates the preprocessing required in the conventional method, which uses a large number of shape templates.
  • While the conventional method (the above-mentioned US Pat. No. 8,249,349) treated two-dimensional images and at most 10^7 shapes, the algorithm used in this embodiment can consider three-dimensional shapes numbering 10^9 or more.
  • In this embodiment, a level set distribution model (LSDM; sometimes called a signed distance model) is used as the statistical shape model.
  • An optimal segmentation algorithm is presented in which the image processing device 100 estimates the shape parameters of the statistical shape model and segments based on the statistical shape model with the estimated shape parameters.
  • The statistical shape model generation apparatus 10 according to the first embodiment can be represented by the functional blocks shown in FIG. 1, and these functional blocks can be realized by the hardware configuration of the computer shown in FIG. 2. The configuration of the computer will be described with reference to FIG. 2.
  • The statistical shape model generation apparatus 10 includes a CPU (Central Processing Unit) 21 that performs the processing of the present embodiment based on a program, a RAM (Random Access Memory) 22 used as a work area when the CPU 21 executes various programs, and a ROM (Read Only Memory) 23, a recording medium in which various control programs, parameters, and the like are stored in advance.
  • It further includes a hard disk 24 (described as "HDD" in the figure) used for storing various types of information, an input device 25 such as a keyboard and mouse, a display device 26 composed of a display or the like, a communication device 27 that communicates over a LAN (Local Area Network) with the externally connected image information providing device 30, and an input/output interface unit 28 (described as "external IF" in the figure) for exchanging various kinds of information with external devices; these are connected to one another via a system bus 29.
  • The CPU 21 can access the RAM 22, the ROM 23, and the hard disk 24, acquire various information via the input device 25, display various information on the display device 26, communicate various information using the communication device 27, and input information from external devices, including the image information providing device 30, connected to the input/output interface unit 28.
  • the CPU 21 reads the program for controlling the processing according to the present embodiment stored in the hard disk 24 into the RAM 22 and executes it.
  • the statistical shape model generation apparatus 10 is configured by such a computer configuration.
  • FIG. 1 shows a configuration as a functional block
  • FIG. 2 shows a connection state of devices and the like.
  • FIG. 1 will be described in detail together with FIG. 2.
  • The statistical shape model generation apparatus 10 has, as functions configured using the computer hardware shown in FIG. 2 and software including a control program, a learning accepting unit 12, a learning unit 14, and a statistical shape model database 16.
  • the learning accepting unit 12 accepts, as learning data, a plurality of images representing a subject that is a specific object and in which a subject area is obtained in advance. Since the specific object that is the subject in the present embodiment is the pancreas, the learning accepting unit 12 accepts a plurality of images in which the pancreas region is obtained in advance.
  • the learning unit 14 calculates eigenvectors and eigenvalues based on a plurality of images in which the pancreas region received by the learning receiving unit 12 is obtained in advance. For example, the learning unit 14 calculates eigenvectors and eigenvalues by principal component analysis based on a plurality of images in which pancreatic regions are obtained in advance. By calculating the eigenvector, an eigenspace based on the eigenvector is generated. The learning unit 14 stores the calculated eigenvector and eigenvalue in the statistical shape model database 16.
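  • The learning procedure above (principal component analysis over the training images to obtain eigenvectors, eigenvalues, and thereby the eigenspace) can be sketched as follows in Python/NumPy. This is a minimal illustration, assuming the training shapes arrive as flattened signed-distance (level set) images; the function name `learn_lsdm` and the data layout are assumptions of this sketch, not the patent's.

```python
import numpy as np

def learn_lsdm(level_sets, d):
    """Learn the eigenspace of a level set distribution model by PCA.

    level_sets : (n_samples, n_pixels) array; one flattened signed-distance
                 image per training shape (region obtained in advance).
    d          : number of leading eigenvectors (shape parameters) to keep.
    Returns the mean level set phi_bar, the eigenvectors U as columns
    (n_pixels, d), and the corresponding eigenvalues.
    """
    phi_bar = level_sets.mean(axis=0)
    centered = level_sets - phi_bar
    # PCA via SVD of the centered data matrix; rows of vt are eigenvectors.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = (s ** 2) / (level_sets.shape[0] - 1)
    return phi_bar, vt[:d].T, eigenvalues[:d]
```

The eigenvectors and eigenvalues returned here correspond to what the learning unit 14 would store in the statistical shape model database 16.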
  • FIG. 3 shows an example of the shape of the liver (note that the example shown in FIG. 3 is the liver, but the essence of the problem is the same as in the case of the pancreas).
  • The shape of the liver varies widely. Therefore, in the embodiment, this variation in shape is expressed by a small number of shape parameters using a statistical shape model representing the shape of the specific object.
  • FIG. 4 shows the eigenvector of the liver shape corresponding to the first principal component obtained by principal component analysis. As shown in FIG. 4, the expressed liver shape changes according to the value of the shape parameter α corresponding to the eigenvector. Note that α is a parameter representing statistical variation.
  • Fig. 5 shows an example of a statistical shape model.
  • For the LSDM, see, for example, Cremers, D., Rousson, M., Deriche, R., "A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape.", International Journal of Computer Vision 72(2), 2007, pp.195-215.
  • The LSDM is a representative statistical shape model: a signed distance image is created for each figure label of the learning data, linear statistical analysis (for example, PCA or ICA) is performed on the signed distance images, and an arbitrary shape is expressed by the following equation (1).
  • The above equation (1) is an equation for each pixel p (∈ P): φ_p(α) = φ̄_p + Σ_{i=1}^{d} α_i u_i^p, where:
  • P represents the pixel set;
  • φ_p(α) represents the level set function value at pixel p for an arbitrary shape;
  • φ̄_p represents the average level set function value at pixel p;
  • u_i^p represents the component for pixel p of the i-th of the d eigenvectors, and α_i is the corresponding shape parameter;
  • α = [α_1, ..., α_d]^T is a point in the eigenspace, restricted to the domain R_α ⊂ R^d;
  • w represents a positive constant (determining the extent of R_α).
  • The shape in the digital image is obtained by mapping the parameter α with the function g shown in the following equation (2). In the embodiment, the dimension d of the eigenspace is fixed in advance.
  • L = {0, 1} is the label set; 0 indicates the background and 1 indicates the figure.
  • y represents one shape in the image.
  • The function g that maps the parameter α to a shape uses the Heaviside function H(·).
  • The function g is a mapping from the eigenspace into the shape set S of the digital image; the set of shapes the model can express is the image of g.
  • The set of shape parameters corresponding to one shape label forms a convex polygon in the eigenspace.
  • Each shape y ∈ S in the image of g has a preimage in the eigenspace.
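  • Equations (1) and (2) together map a shape parameter vector α to a binary label image: the level set is evaluated linearly and thresholded with the Heaviside function H(·). A hedged sketch, assuming the convention that the level set is non-negative inside the figure; the function name `shape_from_alpha` is an assumption of this sketch.

```python
import numpy as np

def shape_from_alpha(alpha, phi_bar, U):
    """Map shape parameters to a binary shape label image (eqs. (1)-(2)).

    alpha   : (d,) shape parameter vector, a point in the eigenspace
    phi_bar : (n_pixels,) mean level set function
    U       : (n_pixels, d) matrix whose columns are the d eigenvectors
    """
    phi = phi_bar + U @ alpha            # level set at every pixel, eq. (1)
    return (phi >= 0).astype(np.uint8)   # Heaviside H(phi): 1 = figure, 0 = background
```

Varying α sweeps through the shape set S expressible by the model, which is what the search in the eigenspace exploits.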
  • the statistical shape model database 16 stores eigenvectors and eigenvalues calculated by the learning unit 14.
  • The image processing apparatus 100 can be represented by the functional blocks shown in the figure, and these functional blocks can be realized by the hardware configuration of the computer shown in FIG. 2.
  • The image processing apparatus 100 includes, as functions configured using software together with the computer hardware and control program illustrated in FIG. 2, a reception unit 102, a calculation unit 104, and an output unit 120.
  • The accepting unit 102 accepts a three-phase three-dimensional abdominal CT image as the input image.
  • Specifically, the receiving unit 102 receives an early phase image, a portal phase image, and a late phase image as the three-phase three-dimensional abdominal CT image.
  • the calculation unit 104 extracts a pancreas region from the three-dimensional abdominal CT image received by the receiving unit 102.
  • the calculation unit 104 includes a statistical shape model database 106 and an image processing unit 108.
  • the statistical shape model database 106 stores the same eigenvectors and eigenvalues as the statistical shape model database 16.
  • the image processing unit 108 includes an inter-image registration unit 110, a spatial standardization unit 112, and a segmentation unit 114.
  • The inter-image registration unit 110 performs registration among the early phase image, the portal phase image, and the late phase image received by the receiving unit 102 (for example, Shimizu, A., Kimoto, T., Kobatake, H., Nawano, S., Shinozaki, K., "Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography.", International Journal of Computer Assisted Radiology and Surgery 5(1), 2010, pp.85-98.) and generates a registration image.
  • The spatial standardization unit 112 generates a spatially standardized image from the registration image generated by the inter-image registration unit 110, using, for example, a predetermined nonlinear function so as to match the registration image to a standard image (for the nonlinear function, see, e.g., the above-cited Shimizu et al., 2010).
  • The segmentation unit 114 estimates, in the eigenspace based on the pre-calculated eigenvectors stored in the statistical shape model database 106, the shape parameter representing the shape of the subject so as to optimize the objective function based on the input image received by the receiving unit 102, and obtains the region of the subject represented by the spatially standardized image generated by the spatial standardization unit 112.
  • the segmentation unit 114 estimates a shape parameter and simultaneously extracts a subject area.
  • the points on the eigenspace based on the eigenvectors indicate the shape parameters of the statistical shape model.
  • The objective function is determined in advance so as to represent values corresponding to the likelihood of the shape of the specific object represented by the shape parameter indicated by a point on the eigenspace, and to the differences in pixel value between adjacent pixels in the input image.
  • the algorithm according to the present embodiment is different from the conventional method in that the optimization of the objective function is executed in the eigenspace, and this makes it possible to efficiently handle a huge number of shapes.
  • The segmentation unit 114 estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function of the following equation (4), according to a search algorithm that repeatedly divides a convex polygon on the eigenspace to search for the convex polygon containing the point indicating the optimal shape parameter.
  • the convex polygon on the eigenspace to be searched represents the shape of a specific object represented by the point indicating the optimum shape parameter.
  • In the search algorithm, the segmentation unit 114 calculates the lower bound of the objective function for the shape set contained in a convex polygon by examining only the vertices of the polygon, and thereby narrows the search down to the convex polygon containing the point indicating the optimal shape parameter.
  • Strictly speaking, the segmentation unit 114 searches for a convex polygon when the eigenspace is two-dimensional; when the eigenspace is three-dimensional or higher, it searches for a convex polytope, and the straight line that divides the eigenspace during the search becomes a hyperplane.
  • An important idea in the present embodiment is to use the fact that the shape set on the digital image corresponds to the convex polygon on the eigenspace, as shown in FIG.
  • Based on the fact that shape sets on a digital image correspond to convex polygons on the eigenspace, the segmentation unit 114 searches for the convex polygon on the eigenspace corresponding to the shape that minimizes the preset objective function. The segmentation unit 114 then finds, from the shape set S, the shape that minimizes the objective function based on the convex polygon found by the search, and extracts the subject region using the found shape as prior knowledge.
  • N is the set of adjacent pixel pairs.
  • x_p represents the label of pixel p, taking the value 0 for the background and 1 for the figure.
  • I_p denotes the pixel value of pixel p.
  • y_p represents the value of the estimated shape at pixel p, likewise taking the value 0 for the background and 1 for the figure.
  • F_p(I_p, y_p) and B_p(I_p, y_p) in the above equation (4) are defined by the following equations (5) and (6).
  • Equation (5) represents the cost of assigning pixel p to the figure, and equation (6) the cost of assigning pixel p to the background.
  • I_p and I_q represent the pixel values of pixels p and q, respectively; λ_1 and λ_2 are predetermined positive constants.
  • P_pq(I_p, I_q) in the above equation (4) is defined by the following equation (7) and evaluates the difference in pixel value between the adjacent pixels p and q.
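  • The structure of equation (4), a per-pixel data term built from F_p and B_p plus the contrast-sensitive pairwise term P_pq, can be illustrated as follows. The concrete forms of the three terms below are illustrative stand-ins, not the patent's equations (5)-(7); only the data-plus-smoothness structure follows the text.

```python
import numpy as np

def energy(x, I, y, neighbors, lam1=1.0, lam2=1.0):
    """Evaluate an eq. (4)-style segmentation objective for a labeling x.

    x         : (n,) binary labels (1 = figure, 0 = background)
    I         : (n,) pixel values
    y         : (n,) estimated shape labels (the shape prior)
    neighbors : list of adjacent pixel index pairs (p, q), the set N
    The data terms F/B penalise labeling a pixel against the prior shape,
    and the pairwise term penalises label changes between similar
    neighbouring pixels; both are illustrative stand-ins for eqs. (5)-(7).
    """
    F = lam1 * (1 - y)          # cost of labeling p as figure
    B = lam1 * y                # cost of labeling p as background
    data = np.sum(x * F + (1 - x) * B)
    pair = 0.0
    for p, q in neighbors:
        if x[p] != x[q]:        # contrast-sensitive smoothness, eq. (7)-style
            pair += lam2 * np.exp(-((I[p] - I[q]) ** 2))
    return data + pair
```

In the patent, this kind of objective is minimized over both x and the shape parameter; for a fixed shape prior, the minimization over x is what the graph cut method handles.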
  • A branch-and-bound search algorithm, one of the general optimization algorithms, is used as the optimization method for the objective function shown in the above equation (4).
  • A method is proposed for efficiently calculating the lower bound of the objective function over a shape set. The key idea is that examining only the vertices of a convex polygon suffices to find a lower bound of the objective function over all shapes contained in that polygon.
  • The parent node H_0 is divided into the child nodes H_1 and H_2, as shown in the following equations (10) and (11), according to the sign of φ_k(α) in equation (1).
  • the pixel k represents a pixel selected from the set Q by sampling.
  • V_0 represents the vertex set of the convex polygon represented by the node H_0; it can be obtained by analytically solving the simultaneous equations of φ_e(α) corresponding to the edges of H_0.
  • Equation (13) above is Jensen's inequality for the minimization problem; the change between "max" and "min" in equation (14) is due to the minus sign in equation (5).
  • The maximum and minimum values of φ(α) over α ∈ H_i are obtained from the vertex set V_i of H_i, based on the basic theory of linear programming, and can be expressed as the following equations (15) and (16).
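  • Equations (15) and (16) rest on the fact that φ_p(α) is affine in α, so its maximum and minimum over a convex polytope are attained at vertices. A minimal sketch of that per-pixel bound; the function name is an assumption of this sketch.

```python
import numpy as np

def phi_bounds_over_polytope(vertices, phi_bar_p, u_p):
    """Extrema of the affine level set phi_p(alpha) over a convex polytope.

    Because phi_p(alpha) = phi_bar_p + u_p . alpha is affine, its extrema
    over the polytope are attained at vertices (basic linear programming),
    so checking the vertex set V_i suffices, as in eqs. (15)-(16).

    vertices  : (n_vertices, d) array of polytope vertices
    phi_bar_p : scalar mean level set value at pixel p
    u_p       : (d,) eigenvector components at pixel p
    """
    values = phi_bar_p + vertices @ u_p
    return values.min(), values.max()
```

These per-pixel extrema are what allow a lower bound of the whole objective to be assembled for every shape inside the polytope without enumerating the shapes.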
  • The segmentation unit 114 repeatedly divides the convex polygon on the eigenspace so that the areas of the two resulting polygons are substantially equal, and thereby searches for the convex polygon on the eigenspace containing the point indicating the optimal shape parameter.
  • Table 1 shows the pseudo code of the segmentation algorithm by the segmentation unit 114.
  • The divided child nodes H_1 and H_2 are placed in the queue Queue, whose nodes are kept sorted in ascending order of their lower-bound values.
  • The segmentation unit 114 selects the node with the lowest lower bound from Queue as the new parent node H_0 and repeats the above processing until Q becomes an empty set. The optimal shape parameter, and hence the optimal shape, is obtained from the result at the end of the iteration; the segmentation unit 114 then obtains the optimal segmentation result using that shape.
  • H_0 represents a convex polygon in the eigenspace; its initial value corresponds to R_α. H_1 and H_2 represent the two convex polygons formed by dividing H_0. Q is the set of pixels on the digital image by which H_0 can be divided. Queue represents the queue of convex-polygon nodes used in the branch-and-bound search algorithm. Select(Q) represents the function that selects the dividing pixel k from the image.
  • In this embodiment, a division method was proposed in which the pixel is selected so that the convex polygons corresponding to the divided child nodes H_1 and H_2 have almost equal areas, and its efficiency was verified with actual images.
  • FIG. 8 shows a conceptual diagram of the function in equation (17). As shown in FIG. 8, the operation of equation (17) intuitively selects a pixel p that divides H_0 into two regions (H_1, H_2) having equal areas.
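  • The Select(Q) operation of equation (17) can be illustrated as follows. The patent's exact equal-area criterion is not reproduced here; this sketch balances the number of polytope vertices on each side of the candidate hyperplane φ_k(α) = 0 as a cheap proxy for equal areas, and the function name and that proxy are assumptions.

```python
import numpy as np

def select_split_pixel(candidates, vertices, phi_bar, U):
    """Pick the pixel whose hyperplane phi_k(alpha) = 0 splits the current
    polytope H_0 most evenly (the intuition behind Select(Q) / eq. (17)).

    candidates : iterable of pixel indices k in the set Q
    vertices   : (n_vertices, d) vertex set V_0 of the node H_0
    phi_bar    : (n_pixels,) mean level set values
    U          : (n_pixels, d) eigenvector components per pixel
    """
    best_k, best_gap = None, float("inf")
    for k in candidates:
        signs = phi_bar[k] + vertices @ U[k]       # phi_k at each vertex
        positive = int(np.count_nonzero(signs >= 0))
        gap = abs(2 * positive - len(signs))       # imbalance between the two sides
        if gap < best_gap:
            best_k, best_gap = k, gap
    return best_k
```

A balanced split keeps the branch-and-bound tree shallow, which is the point of the equal-area heuristic.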
  • The segmentation unit 114 uses the shape represented by the optimal shape parameter found by the search as prior knowledge, and extracts the subject region having the shape of the specific object from the spatially standardized image generated by the spatial standardization unit 112 so as to optimize the objective function.
  • the output unit 120 outputs the object region extracted by the segmentation unit 114 as a result.
  • The processing of the statistical shape model generation apparatus 10 shown in FIG. 9 is performed by the CPU 21 of FIG. 2 based on a program stored in the HDD 24 or the like; the learning process routine shown in FIG. 9 is executed.
  • In step S100, the learning accepting unit 12 accepts a plurality of images in which the pancreas region has been obtained in advance.
  • In step S102, the learning unit 14 calculates eigenvectors and eigenvalues by principal component analysis based on the plurality of images accepted in step S100.
  • In step S104, the learning unit 14 stores the eigenvectors and eigenvalues calculated in step S102 in the statistical shape model database 16 and ends the learning process routine.
  • The processing of the image processing apparatus 100 shown in FIG. 10 is performed by the CPU 21 of FIG. 2 based on a program stored in the HDD 24 or the like. First, when the eigenvectors and eigenvalues stored in the statistical shape model database 16 of the statistical shape model generation apparatus 10 are input to the image processing apparatus 100, they are stored in the statistical shape model database 106. Then, when a three-phase three-dimensional abdominal CT image is input to the image processing apparatus 100 as the input image, the image processing routine shown in FIG. 10 is executed.
  • In step S200, the receiving unit 102 receives a three-phase three-dimensional abdominal CT image as the input image.
  • In step S202, the inter-image registration unit 110 performs registration among the early phase, portal phase, and late phase images received in step S200 to generate a registration image.
  • In step S204, the spatial standardization unit 112 generates a spatially standardized image from the registration image generated in step S202, using, for example, a nonlinear function.
  • In step S206, the segmentation unit 114 estimates, in the eigenspace based on the pre-calculated eigenvectors stored in the statistical shape model database 106, the shape parameter representing the shape of the subject so as to optimize the objective function based on the input image received in step S200, and obtains the shape of the subject represented by the spatially standardized image generated in step S204.
  • Step S206 is realized by the segmentation processing routine shown in FIG.
  • step S300 the segmentation unit 114 sets the entire eigenspace R ⁇ as the parent node H 0 .
  • step S302 the segmentation unit 114, as shown in the equation (12), the set of splittable pixel k parent node H 0, is set as the set Q.
  • step S304 the segmentation unit 114 initializes Queue.
  • step S306 the segmentation unit 114 includes a parent node H 0 set in the step S300, the lower bound L corresponding to the parent node H 0 is calculated according to the equation (14); a combination of (H 0 I) And stored in the Queue initialized in step S304.
  • step S308 the segmentation unit 114 selects the pixel k from the set Q set in step S302 or the set Q updated in step S316 described later according to the above equation (17).
  • In step S310, the segmentation unit 114 divides the parent node H0 set in step S300, or the parent node H0 updated in step S314 described later, according to equations (10) and (11) using the pixel k selected in step S308, and sets the child nodes H1 and H2.
  • In step S312, the segmentation unit 114 stores in the Queue the combination of the child node H1 set in step S310 and the lower bound L(H1; I) corresponding to the child node H1 calculated according to equation (14). It also stores in the Queue the combination of the child node H2 set in step S310 and the lower bound L(H2; I) corresponding to the child node H2 calculated according to equation (14).
  • In step S314, the segmentation unit 114 selects, among the nodes stored in the Queue, the node with the lowest lower bound, and updates the parent node H0 to the selected node.
  • In step S316, the segmentation unit 114 updates the set Q to the set of pixels k that can divide the parent node H0 updated in step S314, as shown in equation (12). If there is no pixel k that can divide the parent node H0, the set Q is updated to the empty set.
  • In step S318, the segmentation unit 114 determines whether or not the set Q is the empty set. If the set Q is not the empty set, the process returns to step S308. On the other hand, if the set Q is the empty set, the process proceeds to step S320.
  • In step S320, the segmentation unit 114 selects the shape parameter α* in the parent node H0 last updated in step S314, substitutes the selected shape parameter α* into the function g, and determines the shape y* of the specific object.
  • In step S322, the segmentation unit 114 extracts the region x* of the subject having the shape of the specific object from the spatially standardized image generated in step S204, so as to optimize the objective function, based on the shape y* of the specific object determined in step S320 and the objective function.
  • In step S324, the segmentation unit 114 outputs the subject region x* extracted in step S322 as a result.
  • Then, in step S208, the output unit 120 outputs the subject region x* extracted in step S206 as a result, and the image processing routine ends.
  • As described above, in the image processing apparatus 100, as a function realized by computer processing based on a program, (A) in an eigenspace whose basis is eigenvectors calculated in advance from a plurality of images for which the subject region was obtained in advance, and (B) in which a point on the eigenspace indicates a shape parameter of the statistical shape model representing the shape of the specific object, the shape parameter representing the shape of the subject represented by the input image is estimated so as to optimize a predetermined objective function that represents a value depending on the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and on the pixel-value difference between adjacent pixels in the input image.
  • Then, the region of the subject is extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge. This makes it possible to extract the region of the subject accurately while suppressing an increase in the amount of calculation.
  • the image processing apparatus 100 can extract a subject area at high speed.
  • the second embodiment is different from the first embodiment in that a shape parameter representing the shape of a subject is estimated using an approximate solution.
  • FIG. 12 shows a diagram for explaining the approximate solution in the second embodiment.
  • the shape parameter is estimated by the approximate solution in the second embodiment.
  • First, the segmentation unit 114 of the image processing apparatus sets sampling points in a lattice pattern on the eigenspace.
  • Next, the segmentation unit 114 sets a straight line determined from the sampling points set in the eigenspace so that the areas of the two convex polygons after division correspond to each other, and divides the convex polygon by the set straight line.
  • Then, the search for the convex polygon containing the point indicating the optimal shape parameter is repeated, and the shape parameter representing the shape of the subject represented by the input image is estimated.
  • In this manner, in the second embodiment, the parent node H0 is divided using a straight line determined from the sampling points.
  • the number of dimensions of the eigenspace can be increased, and the shape parameter can be estimated with high accuracy.
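In two dimensions, the area-matched division described above can be illustrated as follows: among lines through pairs of lattice sampling points, pick the one whose two halves of the convex polygon have the most similar areas. The candidate-line choice and the balancing criterion are assumptions made for this sketch, not the implementation of the embodiment.

```python
import itertools

def area(poly):
    """Shoelace area of a convex polygon given as a list of (x, y) vertices."""
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1])))

def clip(poly, a, b, keep_left=True):
    """Part of `poly` on one side of the directed line a->b (Sutherland-Hodgman)."""
    def side(p):
        s = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        return s if keep_left else -s
    out = []
    for p, q in zip(poly, poly[1:] + poly[:1]):
        sp, sq = side(p), side(q)
        if sp >= 0:
            out.append(p)
        if sp * sq < 0:                              # edge crosses the line
            t = sp / (sp - sq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def split_balanced(poly, samples):
    """Divide `poly` by the line through the pair of sampling points whose two
    halves have the most similar areas (assumes at least one candidate line
    actually crosses the polygon)."""
    best = None
    for a, b in itertools.combinations(samples, 2):
        h1, h2 = clip(poly, a, b, True), clip(poly, a, b, False)
        if len(h1) < 3 or len(h2) < 3:               # line misses the polygon
            continue
        diff = abs(area(h1) - area(h2))
        if best is None or diff < best[0]:
            best = (diff, h1, h2)
    return best[1], best[2]
```

Each half returned by `split_balanced` plays the role of a child node, and the search descends into the half whose lower bound is smaller.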
  • FIG. 13 shows the recognition results of 140 cases of pancreas by the image processing apparatus according to the second embodiment.
  • a broken-line circle represents a pancreas recognition result by the image processing apparatus according to the first embodiment
  • a solid-line circle represents a pancreas recognition result by the image processing apparatus according to the second embodiment.
  • FIG. 14 shows a comparison of the calculation times of the approximate solution of the second embodiment, the solution obtained when the method described in the above-mentioned US Pat. No. 8,249,349 is applied to pancreas segmentation, and the exact solution of the first embodiment.
  • FIG. 14 shows a change in calculation time when the dimension d of the eigenspace is increased.
  • the upper limit of calculation time was 100 h.
  • The CPU used was a 3.1 GHz Intel(R) Xeon(R); one CPU was used for preprocessing and two CPUs were used for optimization.
  • As described above, in the second embodiment, sampling points are set in a lattice pattern on the convex polygon in the eigenspace, and the shape parameter representing the shape of the subject represented by the input image is estimated by repeatedly dividing the convex polygon using straight lines determined from the sampling points set in the eigenspace and searching.
  • Then, the region of the subject is extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge. This makes it possible to extract the region of the subject accurately while suppressing an increase in the amount of calculation.
  • the third embodiment is different from the second embodiment in that a shape parameter representing the shape of a subject is estimated using an approximate solution that randomly sets sampling points.
  • FIG. 15 shows a diagram for explaining the approximate solution in the third embodiment.
  • sampling points are set randomly on the eigenspace, and the convex polygon is divided using a straight line determined from the sampling points.
  • First, the segmentation unit 114 of the image processing apparatus randomly sets sampling points on the eigenspace.
  • Next, the segmentation unit 114 sets a straight line determined from the sampling points set in the eigenspace so that the areas of the two convex polygons after division correspond to each other, and divides the convex polygon by the set straight line.
  • Then, the search for the convex polygon containing the point indicating the optimal shape parameter is repeated, and the shape parameter representing the shape of the subject represented by the input image is estimated.
  • As described above, in the third embodiment, sampling points are set randomly on the convex polygon in the eigenspace, and the shape parameter representing the shape of the subject represented by the input image is estimated by repeatedly dividing the convex polygon using straight lines determined from the sampling points set in the eigenspace and searching.
  • Then, the region of the subject is extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge. This makes it possible to extract the region of the subject accurately while suppressing an increase in the amount of calculation.
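A minimal sketch of the random placement of sampling points uses rejection sampling: points are drawn uniformly from the bounding box of the convex polygon and kept if they fall inside it. The uniform-in-bounding-box scheme is an assumption for illustration; the embodiment does not specify the sampling distribution here.

```python
import random

def inside_convex(poly, p):
    """True if p lies inside a convex polygon given with counter-clockwise vertices."""
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) < 0:
            return False
    return True

def random_samples(poly, n, seed=0):
    """Rejection-sample n points uniformly inside the polygon."""
    rng = random.Random(seed)
    xs, ys = [v[0] for v in poly], [v[1] for v in poly]
    pts = []
    while len(pts) < n:
        p = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if inside_convex(poly, p):
            pts.append(p)
    return pts
```

The resulting points can then be used to determine the dividing straight lines in exactly the same way as the lattice points of the second embodiment.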
  • the fourth embodiment differs from the first to third embodiments in that the objective function is extended.
  • In the fourth embodiment, the objective function shown in equation (4) above is extended by defining it to include, as the likelihood of the shape of the specific object, a monotone function that changes monotonically with respect to the value of the shape label at a pixel for the shape of the specific object represented by the shape parameter.
  • Specifically, F_p(I_p, y_p) and B_p(I_p, y_p) in equation (4) above are expressed by the following equations (18) and (19), and the objective function is thereby extended.
  • FIG. 16 shows a diagram representing a sufficient condition for the lower bound of the objective function to satisfy monotonicity.
  • This sufficient condition is that h_p^F(y) and h_p^B(y) (∀p ∈ P) are monotone functions that are monotonically decreasing and monotonically increasing, respectively, with respect to y_q (∀q ∈ P).
  • FIG. 17 shows an example of h_p^F(y) and h_p^B(y) that satisfy the relationship of equation (20) above.
  • Here, h_p^F(y) and h_p^B(y) are defined by the distance function shown in the following equation (21).
  • As described above, in the fourth embodiment, the monotone functions h_p^F(y) and h_p^B(y), whose energy is minimized at the correct contour of the shape of the specific object, are introduced into the objective function, so that a better objective function can be used.
  • The fifth embodiment differs from the first to fourth embodiments in that the function g that maps the parameter α on the eigenspace to a shape is extended.
  • In the above embodiments, the case where the mapping function g in equation (2) is defined using the Heaviside function F(·) in equation (3) has been described.
  • In the fifth embodiment, the function g that maps the shape parameter to the shape of the specific object is defined so as to include a predetermined function f, and the mapping function g is thereby extended.
  • Here, f represents a predetermined monotone function, and t represents a predetermined threshold value.
  • FIG. 18 shows an example when LogOdds is used as a statistical shape model. As shown in FIG. 18, the shape representation is expanded by using LogOdds.
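The extended mapping can be sketched as follows: the shape parameter α is mapped linearly into a per-pixel level map, and a monotone function f converts each level to a shape label. The Heaviside step corresponds to equation (3); the sigmoid used here for the LogOdds case is an illustrative stand-in, and the linear combination, `mean`, and `eigvecs` layouts are assumptions for this sketch.

```python
import math

def heaviside(v, t=0.0):
    """Hard thresholding, as with the Heaviside function F of equation (3)."""
    return 1.0 if v >= t else 0.0

def sigmoid(v, t=0.0):
    """Soft monotone mapping, an illustrative choice of f for the LogOdds case."""
    return 1.0 / (1.0 + math.exp(-(v - t)))

def g(alpha, mean, eigvecs, f=heaviside):
    """Map a shape parameter alpha in the eigenspace to a shape.

    mean    -- mean level value per pixel
    eigvecs -- eigvecs[i][p], the i-th eigenvector evaluated at pixel p
    f       -- monotone mapping function applied per pixel
    """
    levels = [m + sum(a * u[p] for a, u in zip(alpha, eigvecs))
              for p, m in enumerate(mean)]
    return [f(v) for v in levels]
```

Swapping f from `heaviside` to `sigmoid` changes the output from a binary shape to a soft (probabilistic) shape representation, which is the kind of extension the fifth embodiment describes.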
  • As described above, the present invention can be widely used for image recognition processing of figures whose shape or the like varies statistically.
  • For example, in addition to other organs and other medical images (for example, MR, PET, etc.), it can also be used for face recognition from face images and for processing to recognize specific figures in general landscape images.
  • In the above embodiments, the pancreas is targeted, but another organ, for example the spleen, can also be targeted.
  • the present invention is not limited to this.
  • the present invention can be used not only for LSDM but also for other statistical shape models.
  • In the above embodiments, the case where the statistical shape model is expressed by a linear function as in equation (1) above (strictly speaking, the function for each pixel is a linear function) has been described, but the present embodiment can also be applied to statistical shape models other than those expressed by a linear function such as equation (1).
  • various statistical shape models can be used by expanding the mapping function g.
  • the present invention is not limited to this.
  • a higher dimensional eigenspace may be targeted.
  • the eigenspace dimension can be increased by using an approximate solution.
  • In that case, the convex polygon corresponds to a convex polytope, and the area of the convex polygon corresponds to the volume of the convex polytope. Therefore, when the dimension of the eigenspace is increased to 3 or more, the segmentation unit 114 repeatedly divides the convex polytope on the eigenspace and searches for the convex polytope containing the point indicating the optimal shape parameter, thereby estimating the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function represented by equation (4) above.
  • Specifically, in the search algorithm, the segmentation unit 114 estimates the shape parameter by calculating the lower bound of the objective function for the shape set included in the convex polytope, thereby searching for the convex polytope on the eigenspace containing the point indicating the optimal shape parameter. Further, the segmentation unit 114 divides the convex polytope on the eigenspace so that the volumes of the two convex polytopes after division correspond to each other, and repeats the search for the convex polytope on the eigenspace containing the point indicating the optimal shape parameter, thereby estimating the shape parameter.
  • The present invention is not limited to this, and can also be applied to convex polytopes created by any other division method.
  • For example, the present invention can also be applied to convex polytopes that divide the eigenspace regularly, such as in a lattice pattern. In this case, depending on the fineness of the division, "Tightness" (one of the three conditions for guaranteeing optimality) may not hold.
  • Further, although the branch-and-bound method and the graph cut method are used in the above embodiments, the present invention is not limited to the objective function shown in the above embodiments.
  • the exact solution according to the first embodiment may be executed.
  • A program for realizing the functions of the processing units according to the embodiments may be recorded on a computer-readable recording medium, and the processing by each component may be executed by causing a computer system to read and execute the program recorded on the recording medium; alternatively, the program may be read using a communication function (not shown).
  • Here, the "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into the computer system.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • The "transmission medium" for transmitting the program refers to a medium having a function of transmitting information, such as a network such as the Internet or a communication line such as a telephone line.
  • The program may be one for realizing a part of the functions described above. Furthermore, the program may realize the functions described above in combination with a program already recorded in the computer system.
  • each processing unit of the statistical shape model generation apparatus 10 shown in FIG. 1 and the image processing apparatus 100 shown in FIG. 7 is configured by a computer that can execute each function by a program.
  • a hardware configuration including a logic element circuit may be used.
  • the configuration contents may be changed as appropriate.
  • The program according to the embodiments may be provided stored in a storage medium.
  • That is, a computer-readable medium stores a program for extracting a region of a subject from an input image representing the subject, which is a specific object, the program causing a computer to function as: receiving means for receiving the input image; and segmentation means for, (A) in an eigenspace whose basis is eigenvectors calculated in advance based on a plurality of learning images representing the subject for which the region of the subject was obtained in advance, and (B) in which a point on the eigenspace indicates a shape parameter of a statistical shape model representing the statistical variation of the shape of the specific object, estimating, based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize a predetermined objective function representing a value corresponding to the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and the difference in pixel value between adjacent pixels in the input image, and extracting the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A reception unit 102 receives an input image. In an eigenspace such that an eigenvector calculated beforehand serves as a basis and such that points in the eigenspace indicate the shape parameters of a statistical shape model which indicates the statistical fluctuations of the shape of a specific object, in order to optimize a predetermined objective function which indicates a value corresponding to the likelihood of the shape of the specific object indicated by the shape parameters indicated by the points in the eigenspace and corresponding to the difference in pixel values between adjacent pixels in the input image, a segmentation unit 114 estimates the shape parameters which indicate the shape of a subject indicated by the input image, and extracts from the input image a region in which the subject is present, with the shape of the specific object indicated by the estimated shape parameters serving as prior knowledge. Due to this configuration, an increase in the amount of computation can be suppressed and a region in which the subject is present can be extracted with good accuracy.

Description

Image processing apparatus, method, and program
 The present invention relates to an image processing apparatus, method, and program, and more particularly, to an image processing apparatus, method, and program for extracting a region of a subject.
 The process of recognizing a figure from a digital image using a computer is called segmentation. However, if the SN of the target figure is low and the shape varies statistically, accurate segmentation becomes very difficult.
 Segmentation algorithms based on optimization theory, such as graph cuts (see, for example, Boykov, Y., Veksler, O., Zabih, R., "Fast approximate energy minimization via graph cuts.", Pattern Analysis and Machine Intelligence, IEEE Transactions on 23 (11), 2001, p.1222-1239.), are superior to conventional algorithms in that they can truly optimize the objective function, and have often exhibited very high performance even when the SN is low. However, when the shape varies, they still often fail to recognize the figure correctly.
 Therefore, recently, methods that use shape information of figures in algorithms based on optimization theory have become mainstream; they can be roughly divided into the following three types.
 The first method uses a single shape template. For example, as a general shape template, a method using an elliptical template (see, for example, Slabaugh, G., Unal, G., Sept, "Graph cuts segmentation using an elliptical shape prior.", In: IEEE International Conference on Image Processing, 2005, Vol. 2, p.II-1222-5.) and a method using a template of a block-like figure (see, for example, Funka-Lea, G., Boykov, Y., Florin, C., Jolly, M.-P., Moreau-Gobard, R., Ramaraj, R., Rinck, D., "Automatic heart isolation for CT coronary visualization using graph-cuts.", In: Biomedical Imaging: Nano to Macro, 3rd IEEE International Symposium on. IEEE, 2006, p.614-617.) are known.
 In addition, as a specific shape template, a method using a user-defined arbitrary shape (see, for example, Freedman, D., Zhang, T., "Interactive graph cut based segmentation with shape priors.", In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Vol. 1. IEEE, 2005, p.755-762.) and methods using a shape selected from a statistical model (see, for example, Grosgeorge, D., Petitjean, C., Dacher, J.-N., Ruan, S., "Graph cut segmentation with a statistical shape model in cardiac mri.", Computer Vision and Image Understanding 117 (9), 2013, p.1027-1035.; Akinobu Shimizu, Keita Nakagomi, Takuya Narihira, Hidefumi Kobatake, Shigeru Nawano, Kenji Shinozaki, Koich Ishizu, and Kaori Togashi, "Automated Segmentation of 3D CT Images based on Statistical Atlas and Graph Cuts", Proc. of MICCAI workshop MCV, 2010, p.129-138.; Malcolm, J., Rathi, Y., Tannenbaum, A., "Graph cut segmentation with nonlinear shape priors.", In: Image Processing, 2007. ICIP 2007. IEEE International Conference on. Vol. 4. IEEE, 2007, p.IV-365.) are known.
 The second method uses a plurality of (several) shape templates or a probabilistic representation of shapes. For example, a method using multiple shapes selected from a statistical model (see, for example, Nakagomi, K., Shimizu, A., Kobatake, H., Yakami, M., Fujimoto, K., Togashi, K., "Multi-shape graph cuts with neighbor prior constraints and its application to lung segmentation from a chest CT volume.", Medical image analysis 17 (1), 2013, p.62-77.) and a method using a probabilistic representation of shapes (see, for example, Linguraru, M. G., Pura, J. A., Pamulapati, V., Summers, R. M., "Statistical 4d graphs for multi-organ abdominal segmentation from multiphase CT.", Medical image analysis 16 (4), 2012, p.904-914.) are known.
 The third method uses a large number (several tens or more) of shape templates. For example, a method using templates for a specific problem (see, for example, Kohli, P., Rihan, J., Bray, M., Torr, P. H., "Simultaneous segmentation and pose estimation of humans using dynamic graph cuts.", International Journal of Computer Vision 79 (3), 2008, p.285-298.) and a method using an arbitrary large number of figure shapes (see, for example, US Pat. No. 8,249,349) are known.
 The conventional techniques described above become better able to handle statistically varying figures in order from the first method to the third method. The most advanced is the third method with an arbitrary large number of figure shapes (the above-mentioned US Pat. No. 8,249,349). However, all of these methods, including this one, need to prepare a shape set in advance, for example by selecting shapes beforehand. Therefore, unless the optimal shape happens to be included in the shape set selected in advance, the optimality of the segmentation is not guaranteed and the performance deteriorates.
 On the other hand, since a digital image is a set of a finite number of pixels, a method of preparing all possible shape patterns is conceivable in principle, and in that case optimality is guaranteed. However, the number of shapes that must be prepared for this purpose is enormous, which is unrealistic from the viewpoint of the memory and calculation time required for the entire process including preprocessing. For example, even the algorithm of the above-mentioned US Pat. No. 8,249,349, which has been the best so far, is limited to handling about 10^7 shape templates for a two-dimensional image. However, the number of possible shape patterns that a statistical model can generate is often 10^9 or more, which is very difficult for conventional algorithms from the viewpoint of computational cost. Furthermore, three-dimensional images are the mainstream in medical imaging, in which case the required memory increases by two or more orders of magnitude, and it has been practically impossible for conventional algorithms to execute the processing in a realistic time and memory size.
 One embodiment of the present invention has been made in view of the above circumstances.
 To achieve the above object, an image processing apparatus according to a first aspect is an image processing apparatus that extracts a region of a subject from an input image representing the subject, which is a specific object, and includes: receiving means for receiving the input image; and segmentation means for, (A) in an eigenspace whose basis is eigenvectors calculated in advance based on a plurality of learning images representing the subject for which the region of the subject was obtained in advance, and (B) in which a point on the eigenspace indicates a shape parameter of a statistical shape model representing the statistical variation of the shape of the specific object, estimating, based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize a predetermined objective function representing a value corresponding to the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and the difference in pixel value between adjacent pixels in the input image, and extracting the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 An image processing method according to a second aspect is an image processing method in an image processing apparatus that includes receiving means and segmentation means and extracts a region of a subject from an input image representing the subject, which is a specific object, the method including: a step in which the receiving means receives the input image; and a step in which the segmentation means, (A) in an eigenspace whose basis is eigenvectors calculated in advance based on a plurality of learning images representing the subject for which the region of the subject was obtained in advance, and (B) in which a point on the eigenspace indicates a shape parameter of a statistical shape model representing the statistical variation of the shape of the specific object, estimates, based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize a predetermined objective function representing a value corresponding to the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and the difference in pixel value between adjacent pixels in the input image, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 In a third aspect, the segmentation means may estimate, in the eigenspace and based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function, and at the same time extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 In a fourth aspect, the segmentation means may estimate the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function according to a search algorithm that repeatedly divides a convex polytope on the eigenspace, representing a shape set including the shape of the specific object represented by a point indicating the optimal shape parameter, and searches for the convex polytope containing the point indicating the optimal shape parameter, and may extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 In a fifth aspect, the segmentation means may, in the search algorithm, calculate the lower bound of the objective function for the shape set included in the convex polytope by examining each vertex of the convex polytope, thereby searching for the convex polytope on the eigenspace containing the point indicating the optimal shape parameter to estimate the shape parameter representing the shape of the subject represented by the input image, and may extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 また、第6の態様の前記セグメンテーション手段は、分割後の2つの凸多胞体の体積が対応するように、前記固有空間上の凸多胞体を分割し、かつ、最適な形状パラメータを示す点を含む前記固有空間上の凸多胞体を探索することを繰り返して、前記入力画像が表す前記被写体の形状を表す前記形状パラメータを推定し、前記推定された前記形状パラメータが表す前記特定の物体の形状を事前知識として、前記被写体の領域を、前記入力画像から抽出するようにすることができる。 Further, the segmentation means of the sixth aspect can repeatedly divide the convex polytope on the eigenspace so that the volumes of the two convex polytopes after division correspond to each other, and search for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, thereby estimating the shape parameter representing the shape of the subject represented by the input image, and can extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 また、第7の態様の前記セグメンテーション手段は、前記固有空間上にサンプリング点を設定し、前記固有空間上に設定されたサンプリング点から決定される超平面を用いて前記凸多胞体を分割して最適な形状パラメータを示す点を含む前記固有空間上の凸多胞体を探索することを繰り返して、前記入力画像が表す前記被写体の形状を表す前記形状パラメータを推定し、前記推定された前記形状パラメータが表す前記特定の物体の形状を事前知識として、前記被写体の領域を、前記入力画像から抽出するようにすることができる。 Further, the segmentation means of the seventh aspect can set sampling points on the eigenspace and repeatedly divide the convex polytope using hyperplanes determined from the sampling points set on the eigenspace to search for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, thereby estimating the shape parameter representing the shape of the subject represented by the input image, and can extract the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 また、第8の態様の前記セグメンテーション手段は、任意に、前記固有空間上にサンプリング点を設定するようにすることができる。 Also, the segmentation means of the eighth aspect can arbitrarily set sampling points on the eigenspace.
 また、第9の態様の前記目的関数は、前記特定の物体の形状の尤もらしさとして、前記形状パラメータが表す前記特定の物体の形状に対する画素における形状ラベルの値に関して単調に変化する単調関数を含むようにすることができる。 Further, the objective function of the ninth aspect can include, as the likelihood of the shape of the specific object, a monotone function that varies monotonically with the value of the shape label at each pixel for the shape of the specific object represented by the shape parameter.
 また、第10の態様の前記探索アルゴリズムは、Branch and bound法及びグラフカット法を用いることができる。 Also, the search algorithm of the tenth aspect can use the Branch-and-bound method and the graph cut method.
 また、第11の態様のプログラムは、コンピュータを、上記の画像処理装置の各手段として機能させるためのプログラムである。 The program according to the eleventh aspect is a program for causing a computer to function as each means of the image processing apparatus.
 実施の形態に係る画像処理装置、方法、及びプログラムによれば、(A)被写体の領域が予め求められた複数の画像に基づいて予め計算された固有ベクトルを基底とする固有空間であって、(B)固有空間上の点が、特定の物体の形状を表す統計的形状モデルの形状パラメータを示す固有空間において、固有空間上の点が示す形状パラメータが表す特定の物体の形状の尤もらしさと入力画像中の隣接画素間の画素値の差とに応じた値を表す予め定められた目的関数を最適化するように、入力画像が表す被写体の形状を表す形状パラメータを推定する。推定された形状パラメータが表す特定の物体の形状を事前知識として、被写体の領域を、入力画像から抽出する。これにより、計算量の増大を抑制して、被写体の領域を精度よく抽出することができる、という効果が得られる。 According to the image processing apparatus, method, and program of the embodiment, in an eigenspace (A) whose basis is a set of eigenvectors computed in advance from a plurality of images in which the region of the subject has been determined beforehand, and (B) in which each point indicates a shape parameter of a statistical shape model representing the shape of a specific object, a shape parameter representing the shape of the subject in the input image is estimated so as to optimize a predetermined objective function whose value depends on the likelihood of the shape of the specific object represented by the shape parameter indicated by a point in the eigenspace and on the differences in pixel value between adjacent pixels of the input image. The region of the subject is then extracted from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge. This makes it possible to extract the region of the subject accurately while suppressing an increase in the amount of computation.
実施の形態に係る統計的形状モデル生成装置の構成を示すブロック図である。A block diagram showing the configuration of the statistical shape model generation device according to the embodiment. 実施の形態に係る統計的形状モデル生成装置及び画像処理装置のコンピュータ構成例を示すブロック図である。A block diagram showing an example computer configuration of the statistical shape model generation device and the image processing apparatus according to the embodiment. 肝臓の3次元画像例を示す説明図である。An explanatory diagram showing an example three-dimensional image of a liver. 肝臓の形状の画像を対象として主成分分析を行った場合の、第1主成分に沿った肝臓の形状の統計的ばらつきを示す図である。A diagram showing the statistical variation of liver shape along the first principal component when principal component analysis is applied to images of liver shapes. 統計的形状モデルの例を示す説明図である。An explanatory diagram showing examples of statistical shape models. 固有空間と形状空間との関係を示す説明図である。An explanatory diagram showing the relationship between the eigenspace and the shape space. 実施の形態に係る画像処理装置の構成を示すブロック図である。A block diagram showing the configuration of the image processing apparatus according to the embodiment. 固有空間の分割法を説明するための説明図である。An explanatory diagram for explaining the method of dividing the eigenspace. 実施の形態における学習処理ルーチンを示すフローチャートである。A flowchart showing the learning processing routine in the embodiment. 実施の形態における画像処理ルーチンを示すフローチャートである。A flowchart showing the image processing routine in the embodiment. 実施の形態におけるセグメンテーション処理ルーチンを示すフローチャートである。A flowchart showing the segmentation processing routine in the embodiment. 格子状にサンプリング点を設定する場合の近似解法を説明するための図である。A diagram for explaining the approximate solution method when sampling points are set on a grid. 第2の実施の形態における140例の膵臓の認識結果を示す図である。A diagram showing the recognition results for 140 pancreas cases in the second embodiment. 
第2の実施の形態における計算時間の比較結果を示す図である。A diagram showing a comparison of computation times in the second embodiment. ランダムにサンプリング点を設定する場合の近似解法を説明するための図である。A diagram for explaining the approximate solution method when sampling points are set randomly. 目的関数の下界が単調性を満たす関係を示す図である。A diagram showing the relationship in which the lower bound of the objective function satisfies monotonicity. 目的関数の下界が単調性を満たすh_F^p(y)とh_B^p(y)との一例を示す図である。A diagram showing an example of h_F^p(y) and h_B^p(y) for which the lower bound of the objective function satisfies monotonicity. 統計的形状モデルとしてLogOddsを用いる場合の例を示す図である。A diagram showing an example in which LogOdds is used as the statistical shape model. 従来技術を示す説明図である。An explanatory diagram showing a prior art method.
 以下、図面を参照して、実施の形態を詳細に説明する。なお、実施の形態では、特定の物体の統計的変動を表す図形の形状の統計モデル(以下、統計的形状モデルと称する。)を生成する統計的形状モデル生成装置10と、特定の物体である被写体を表す入力画像から、被写体の領域を抽出する画像処理装置100とについて説明する。また、実施の形態では、3次元腹部CT画像を入力画像とし、当該入力画像から、膵臓の領域を抽出する場合を例に説明する。 Hereinafter, embodiments will be described in detail with reference to the drawings. The embodiments describe a statistical shape model generation device 10 that generates a statistical model of figure shape representing the statistical variation of a specific object (hereinafter referred to as a statistical shape model), and an image processing apparatus 100 that extracts the region of a subject, which is the specific object, from an input image representing the subject. In the embodiments, a case where a three-dimensional abdominal CT image is used as the input image and the region of the pancreas is extracted from it will be described as an example.
<概要>
 本実施の形態では、統計的形状モデルを用いるセグメンテーションアルゴリズムを提案する。このアルゴリズムの特徴は以下の5点である。
<Overview>
In the present embodiment, a segmentation algorithm using a statistical shape model is proposed. This algorithm has the following five features.
1.セグメンテーションの過程で、統計的形状モデルを用いて生成可能な全ての3次元形状(10^9個以上)を考慮可能であり、セグメンテーションの目的関数の観点から最適な形状を選択可能である。 1. In the segmentation process, all three-dimensional shapes (10^9 or more) that can be generated using the statistical shape model can be considered, and the optimum shape can be selected from the viewpoint of the segmentation objective function.
2.大量の形状テンプレートを用いる従来手法では必須で、特に計算コストが高い前処理(形状の生成・選択とクラスタリング)が不要である。 2. The preprocessing (shape generation/selection and clustering) that is essential in conventional methods using a large number of shape templates, and that is particularly computationally expensive, is unnecessary.
3.Branch and bound法とグラフカット法との組み合わせに限らず、最適化理論に基づくアルゴリズムであれば適用可能である。 3. The algorithm is not limited to the combination of the Branch and bound method and the graph cut method, and any algorithm based on optimization theory can be applied.
4.異なる統計的形状モデルや固有空間の分割法に対しても適用可能である。 4). It can also be applied to different statistical shape models and eigenspace partitioning methods.
5.3次元腹部CT画像内の膵臓のセグメンテーションに関して世界最高の精度が得られる。 5. The world's highest accuracy for pancreas segmentation in 3D abdominal CT images is obtained.
 図19に従来技術の処理の概要を示す。図19は、上記米国特許第8249349号明細書の手法を、統計的形状モデルを利用した膵臓のセグメンテーションに適用させた場合の処理である。図19に示すように、従来手法では、形状テンプレート集合T(⊂S)の生成処理、及び集合Tに対するクラスタリング処理が必要であるが、本実施の形態では、上述したように、大量の形状テンプレートを用いる従来手法で必要であった前処理が不要である。なお、従来手法(上記米国特許第8249349号明細書)で扱われていた画像は2次元であり、形状の最大数は10^7であるのに対し、本実施の形態で用いるアルゴリズムは3次元形状(10^9個以上)を考慮可能である。 FIG. 19 shows an outline of the processing of the prior art. FIG. 19 shows the processing when the method of the above-mentioned U.S. Pat. No. 8,249,349 is applied to pancreas segmentation using a statistical shape model. As shown in FIG. 19, the conventional method requires a process for generating a shape template set T (⊂S) and a clustering process for the set T, whereas in the present embodiment, as described above, the preprocessing required by conventional methods using a large number of shape templates is unnecessary. Note that the images handled by the conventional method (U.S. Pat. No. 8,249,349) are two-dimensional and the maximum number of shapes is 10^7, whereas the algorithm used in the present embodiment can consider three-dimensional shapes (10^9 or more).
 また、実施の形態では、統計的形状モデルとしてレベルセット分布モデル(Level set distribution model; LSDM。符号付距離モデル(Signed distance model)と呼ばれることもある)を利用する。以下では、まず、統計的形状モデル生成装置10において統計的形状モデル(LSDM)の固有ベクトルと固有値とを算出した後で、画像処理装置100において統計的形状モデルの形状パラメータを推定し、推定された形状パラメータを用いた統計的形状モデルに基づいた最適セグメンテーションアルゴリズムを示す。 In the embodiment, a level set distribution model (LSDM; sometimes called a signed distance model) is used as the statistical shape model. In the following, the computation of the eigenvectors and eigenvalues of the statistical shape model (LSDM) by the statistical shape model generation device 10 is described first, followed by an optimal segmentation algorithm in which the image processing apparatus 100 estimates the shape parameters of the statistical shape model and performs segmentation based on the statistical shape model with the estimated shape parameters.
[第1の実施の形態]
<第1の実施の形態に係る統計的形状モデル生成装置10の構成>
 第1の実施の形態に係る統計的形状モデル生成装置10は、図1に示される機能ブロックで表すことができる。また、これらの機能ブロックは、図2に示されるコンピュータのハードウェア構成により実現することができる。図2を参照してコンピュータの構成を説明する。
[First Embodiment]
<Configuration of Statistical Shape Model Generation Device 10 according to First Embodiment>
The statistical shape model generation apparatus 10 according to the first embodiment can be represented by the functional blocks shown in FIG. Further, these functional blocks can be realized by the hardware configuration of the computer shown in FIG. The configuration of the computer will be described with reference to FIG.
 図2に示す第1の実施の形態に係る統計的形状モデル生成装置10は、プログラムに基づき統計的形状モデル生成装置10の本実施の形態に係る処理を行うCPU(Central Processing Unit;中央処理装置)21と、CPU21による各種プログラムの実行時のワークエリア等として用いられるRAM(Random Access Memory)22と、各種制御プログラムや各種パラメータ等が予め記憶された記録媒体であるROM(Read Only Memory)23と、各種情報を記憶するために用いられるハードディスク24(図中「HDD」と記載)と、キーボードやマウス等からなる入力装置25と、ディスプレイ等からなる表示装置26と、LAN(Local Area Network)等を用いて通信を行う通信装置27と、外部に接続された画像情報提供装置30との間の各種情報の授受を司る入出力インタフェース部(図中、「外部IF」と記載)28と、を備えており、これらがシステムバスBUS29により相互に接続されて構成されている。 The statistical shape model generation device 10 according to the first embodiment shown in FIG. 2 includes a CPU (Central Processing Unit) 21 that performs the processing of the present embodiment based on a program; a RAM (Random Access Memory) 22 used as a work area when the CPU 21 executes various programs; a ROM (Read Only Memory) 23, which is a recording medium in which various control programs, parameters, and the like are stored in advance; a hard disk 24 (denoted "HDD" in the figure) used for storing various kinds of information; an input device 25 such as a keyboard and a mouse; a display device 26 such as a display; a communication device 27 that performs communication using a LAN (Local Area Network) or the like; and an input/output interface unit 28 (denoted "external IF" in the figure) that exchanges various kinds of information with an externally connected image information providing device 30. These components are interconnected by a system bus (BUS) 29.
 CPU21は、RAM22、ROM23、及びハードディスク24に対するアクセス、入力装置25を介した各種情報の取得、表示装置26に対する各種情報の表示、通信装置27を用いた各種情報の通信処理、及び入出力インタフェース部28に接続された画像情報提供装置30を含む外部装置からの情報の入力等を、各々行うことができる。 The CPU 21 can access the RAM 22, the ROM 23, and the hard disk 24; acquire various kinds of information via the input device 25; display various kinds of information on the display device 26; perform communication processing of various kinds of information using the communication device 27; and receive information from external devices, including the image information providing device 30, connected to the input/output interface unit 28.
 CPU21が、ハードディスク24に記憶された本実施形態に係る処理を制御するプログラムを、RAM22に読み込み実行することにより、図1に示す本実施の形態に係る統計的形状モデル生成装置10における、図1に示す各処理部の機能が実行される。 The CPU 21 reads the program that controls the processing according to the present embodiment, stored in the hard disk 24, into the RAM 22 and executes it, whereby the functions of the processing units shown in FIG. 1 of the statistical shape model generation device 10 according to the present embodiment are carried out.
 このようなコンピュータ構成により、図1に示す本実施の形態に係る統計的形状モデル生成装置10が構成されている。なお、図1は機能ブロックとなる構成を表し、一方、図2はデバイス等の接続状態を表すものである。前記したように、機能ブロックとデバイス等とは有機的、かつ相互に関連して統計的形状モデル生成装置10を構成するため、図2と共に図1についても詳細に説明する。 The statistical shape model generation device 10 according to the present embodiment shown in FIG. 1 is realized by such a computer configuration. Note that FIG. 1 shows the configuration as functional blocks, while FIG. 2 shows the connection state of the devices and the like. As described above, since the functional blocks and the devices constitute the statistical shape model generation device 10 organically and in relation to each other, FIG. 1 is described in detail together with FIG. 2.
 統計的形状モデル生成装置10は、図2に示したコンピュータのハードウェア及び制御プログラムを含むソフトウェアを利用して構成される機能として、学習用受付部12と、学習部14と、統計的形状モデルデータベース16とを備えている。 The statistical shape model generation device 10 includes, as functions configured using the computer hardware shown in FIG. 2 and software including a control program, a learning reception unit 12, a learning unit 14, and a statistical shape model database 16.
 学習用受付部12は、特定の物体である被写体を表す複数の画像であって、被写体の領域が予め求められた複数の画像を学習データとして受け付ける。本実施の形態で被写体となる特定の物体は膵臓であるため、学習用受付部12は、膵臓の領域が予め求められた複数の画像を受け付ける。 The learning accepting unit 12 accepts, as learning data, a plurality of images representing a subject that is a specific object and in which a subject area is obtained in advance. Since the specific object that is the subject in the present embodiment is the pancreas, the learning accepting unit 12 accepts a plurality of images in which the pancreas region is obtained in advance.
 学習部14は、学習用受付部12によって受け付けた膵臓の領域が予め求められた複数の画像に基づいて、固有ベクトルと固有値とを算出する。例えば、学習部14は、膵臓の領域が予め求められた複数の画像に基づいて、主成分分析によって、固有ベクトルと固有値とを算出する。固有ベクトルが算出されることにより、固有ベクトルを基底とする固有空間が生成される。また、学習部14は、算出された固有ベクトルと固有値とを統計的形状モデルデータベース16に格納する。 The learning unit 14 calculates eigenvectors and eigenvalues based on a plurality of images in which the pancreas region received by the learning receiving unit 12 is obtained in advance. For example, the learning unit 14 calculates eigenvectors and eigenvalues by principal component analysis based on a plurality of images in which pancreatic regions are obtained in advance. By calculating the eigenvector, an eigenspace based on the eigenvector is generated. The learning unit 14 stores the calculated eigenvector and eigenvalue in the statistical shape model database 16.
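As an illustration of the kind of computation the learning unit 14 performs, the following sketch derives a mean, eigenvectors, and eigenvalues by principal component analysis with NumPy. The function name, the flattened array layout, and the SVD-based formulation are assumptions for illustration, not part of the described embodiment.

```python
import numpy as np

def learn_eigenspace(training_maps, d=2):
    """Compute the mean and the top-d eigenvectors/eigenvalues by PCA.

    training_maps: (n_samples, n_pixels) array; each row is one training
    case flattened into a vector (for an LSDM, a signed distance map).
    Returns (mean, eigvecs, eigvals) with eigvecs of shape (n_pixels, d).
    """
    X = np.asarray(training_maps, dtype=float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data; eigenvalues of the covariance matrix
    # are the squared singular values divided by the number of samples.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = (s ** 2) / X.shape[0]
    eigvecs = Vt[:d].T          # basis of the d-dimensional eigenspace
    return mu, eigvecs, eigvals[:d]
```

The eigenvectors and eigenvalues returned here correspond to what would be stored in the statistical shape model database 16.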
 ここで、図3に、肝臓の形状の一例を示す(なお、図3に示す例は肝臓であるが、問題の本質は膵臓の場合と同様である。)。図3に示すように、肝臓の形状は様々である。そのため、実施の形態では、特定の物体の形状を表す統計的形状モデルを用いて、形状のばらつきを少数の形状パラメータにより表現する。また、図4に、主成分分析によって得られた第1主成分に対応する肝臓の形状の固有ベクトルを示す。図4に示すように、固有ベクトルに対応する形状パラメータαの値に応じて、表わされる肝臓の形状が変化することがわかる。なお、σは、統計的ばらつきを表すパラメータである。 Here, FIG. 3 shows examples of liver shapes (although the example shown in FIG. 3 is the liver, the essence of the problem is the same as for the pancreas). As shown in FIG. 3, liver shapes vary considerably. Therefore, in the embodiment, a statistical shape model representing the shape of a specific object is used to express the shape variation with a small number of shape parameters. FIG. 4 shows the eigenvector of the liver shape corresponding to the first principal component obtained by principal component analysis. As shown in FIG. 4, the represented liver shape changes according to the value of the shape parameter α corresponding to the eigenvector. Note that σ is a parameter representing statistical variation.
 図5に、統計的形状モデルの一例を示す。図5に示すように、統計的形状モデルとしては様々なものが存在するが、本実施の形態では、統計的形状モデルとして、LSDM(例えば、参考文献(Cremers, D., Rousson, M., Deriche, R., “A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape.”, International journal of computer vision 72 (2), 2007, p.195‐215.)を参照。)を用いる。 FIG. 5 shows examples of statistical shape models. As shown in FIG. 5, various statistical shape models exist; in the present embodiment, an LSDM is used as the statistical shape model (see, for example, Cremers, D., Rousson, M., Deriche, R., "A review of statistical approaches to level set segmentation: integrating color, texture, motion and shape.", International Journal of Computer Vision 72 (2), 2007, p.195-215.).
 LSDMは、代表的な統計的形状モデルであり、学習データの図形ラベルに対する符号付の距離画像を作成し、符号付の距離画像に対して線形の統計解析(例えば、PCAやICAなど)を行うことで任意形状を以下の式(1)で表現する方法である。 LSDM is a representative statistical shape model, which creates a signed distance image for a graphic label of learning data and performs linear statistical analysis (for example, PCA or ICA) on the signed distance image. This is a method of expressing an arbitrary shape by the following formula (1).
φ_p(α) = μ_p + Σ_{i=1}^{d} α_i √λ_i u_i^p   …(1)
 ここで、上記式(1)は画素p(∈P)に関する式である。なお、Pは画素集合を表し、φ_p(α)は、画素pに関する任意形状に対応するレベルセット関数を表し、μ_pは、画素pに関する平均のレベルセット関数を表し、λ_iはi番目の固有値を表し、{u_1^p,...,u_d^p}はd個の固有ベクトルにおける画素pに対する成分を表し、α=[α_1,...,α_d]^Tは、固有空間上のR_α={r∈R^d | ||r||≦w}の領域(通常は±3σの範囲)で定義される形状パラメータを表す。また、wは、正の定数を示す。デジタル画像における形状は、パラメータαを以下の式(2)に示す関数gで写像することによって得られる。なお、実施の形態では、固有空間の次元d=2である場合を例に説明する。 Here, the above equation (1) is an equation for pixel p (∈P), where P denotes the pixel set, φ_p(α) denotes the level set function corresponding to an arbitrary shape at pixel p, μ_p denotes the mean level set function at pixel p, λ_i denotes the i-th eigenvalue, {u_1^p, ..., u_d^p} denote the components for pixel p of the d eigenvectors, and α = [α_1, ..., α_d]^T is the shape parameter defined on the region R_α = {r ∈ R^d | ||r|| ≤ w} of the eigenspace (usually the range of ±3σ), where w is a positive constant. A shape in a digital image is obtained by mapping the parameter α with the function g shown in the following equation (2). In the embodiment, the case where the dimension of the eigenspace is d = 2 is described as an example.
y = g(α),  g: R_α → L^{|P|}   …(2)
 ここで、L={0,1}はラベル集合で、0は背景であることを示し、1は図形であることを示す。yは画像における1つの形状を表す。また、LSDMの場合、以下の式(3)に示すように、パラメータαを形状に写像する関数gは、Heaviside function H(・)である。 Here, L = {0, 1} is the label set, where 0 indicates background and 1 indicates figure, and y represents one shape in the image. In the case of the LSDM, as shown in the following equation (3), the function g that maps the parameter α to a shape is the Heaviside function H(·).
y_p = g_p(α) = H(φ_p(α))   …(3)
 図6に、d=2である場合のLSDMの固有空間(固有ベクトルが張る空間)とデジタル画像における形状集合Sとの関係を示す。関数gは固有空間からデジタル画像の形状集合S上への写像である。また、形状集合S⊂L^{|P|}はgの像である。図6に示すように、固有空間では形状ラベルの集合は多角形を構成する。各形状y∈Sは固有空間内に原像を持つ。 FIG. 6 shows the relationship between the eigenspace of the LSDM (the space spanned by the eigenvectors) and the shape set S in the digital image when d = 2. The function g is a mapping from the eigenspace onto the shape set S of the digital image, and the shape set S ⊂ L^{|P|} is the image of g. As shown in FIG. 6, sets of shape labels form polygons in the eigenspace, and each shape y ∈ S has a preimage in the eigenspace.
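The mapping of equations (1) to (3) from a shape parameter α to a binary shape can be sketched as follows. This is a minimal illustration: the √λ_i scaling of α and the sign convention of the Heaviside threshold (φ ≥ 0 mapped to figure) are assumptions about the parameterization.

```python
import numpy as np

def shape_from_alpha(alpha, mu, eigvecs, eigvals):
    """Map a shape parameter alpha to a flattened binary label image.

    Implements y_p = H(phi_p(alpha)) with
    phi(alpha) = mu + sum_i alpha_i * sqrt(lambda_i) * u_i,
    where H is the Heaviside step function (1 = figure, 0 = background).
    The sqrt(lambda) scaling of alpha is one common LSDM convention.
    """
    alpha = np.asarray(alpha, dtype=float)
    phi = mu + eigvecs @ (alpha * np.sqrt(eigvals))   # level set function
    return (phi >= 0).astype(np.uint8)                # Heaviside threshold
```

Sweeping α over the region R_α and applying this mapping enumerates the shape set S of FIG. 6.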
 統計的形状モデルデータベース16には、学習部14によって算出された固有ベクトルと固有値とが格納される。 The statistical shape model database 16 stores eigenvectors and eigenvalues calculated by the learning unit 14.
<画像処理装置100の構成>
 本実施の形態に係る画像処理装置100は、図7に示される機能ブロックで表すことができる。また、これらの機能ブロックは、上記図2に示されるコンピュータのハードウェア構成により実現することができる。
<Configuration of Image Processing Device 100>
The image processing apparatus 100 according to the present embodiment can be represented by functional blocks shown in FIG. These functional blocks can be realized by the hardware configuration of the computer shown in FIG.
 画像処理装置100は、上記図2に示したコンピュータのハードウェア及び制御プログラムを含むソフトウェアを利用して構成される機能として、図7に示すように、受付部102と、演算部104と、出力部120とを備えている。 As shown in FIG. 7, the image processing apparatus 100 includes, as functions configured using the computer hardware shown in FIG. 2 and software including a control program, a reception unit 102, a calculation unit 104, and an output unit 120.
 受付部102は、入力画像として、3時相の3次元腹部CT画像を受け付ける。なお、受付部102は、早期相画像、門脈相画像、及び晩期相画像を、3時相の3次元腹部CT画像として受け付ける。 The reception unit 102 receives a three-phase three-dimensional abdominal CT image as the input image. Specifically, the reception unit 102 receives an early-phase image, a portal-phase image, and a late-phase image as the three-phase three-dimensional abdominal CT image.
 演算部104は、受付部102によって受け付けた3次元腹部CT画像から、膵臓の領域を抽出する。演算部104は、統計的形状モデルデータベース106と、画像処理部108とを備えている。 The calculation unit 104 extracts a pancreas region from the three-dimensional abdominal CT image received by the receiving unit 102. The calculation unit 104 includes a statistical shape model database 106 and an image processing unit 108.
 統計的形状モデルデータベース106には、統計的形状モデルデータベース16と同じ固有ベクトルと固有値とが格納されている。 The statistical shape model database 106 stores the same eigenvectors and eigenvalues as the statistical shape model database 16.
 画像処理部108は、画像間位置合わせ部110と、空間的標準化部112と、セグメンテーション部114とを備えている。 The image processing unit 108 includes an inter-image registration unit 110, a spatial standardization unit 112, and a segmentation unit 114.
 画像間位置合わせ部110は、受付部102によって受け付けた、早期相画像と、門脈相画像と、晩期相画像との間の位置合わせを行い(例えば、Shimizu, A., Kimoto, T., Kobatake, H., Nawano, S., Shinozaki, K., “Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography.”, International journal of computer assisted radiology and surgery 5 (1), 2010, p.85‐98.を参照。)、位置合わせ画像を生成する。 The inter-image alignment unit 110 aligns the early-phase image, the portal-phase image, and the late-phase image received by the reception unit 102 (see, for example, Shimizu, A., Kimoto, T., Kobatake, H., Nawano, S., Shinozaki, K., "Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography.", International Journal of Computer Assisted Radiology and Surgery 5 (1), 2010, p.85-98.) and generates aligned images.
 空間的標準化部112は、画像間位置合わせ部110によって生成された位置合わせ画像に基づいて、例えば予め定められた非線形関数を用いて、位置合わせ画像を標準となる画像に合わせるように空間的標準化を行い(例えば、Shimizu, A., Kimoto, T., Kobatake, H., Nawano, S., Shinozaki, K., “Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography.”, International journal of computer assisted radiology and surgery 5 (1), 2010, p.85‐98.を参照。)、空間的標準化画像を生成する。 The spatial standardization unit 112 performs spatial standardization on the aligned images generated by the inter-image alignment unit 110, for example using a predetermined nonlinear function, so as to fit the aligned images to a standard image (see, for example, Shimizu, A., Kimoto, T., Kobatake, H., Nawano, S., Shinozaki, K., "Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography.", International Journal of Computer Assisted Radiology and Surgery 5 (1), 2010, p.85-98.), and generates a spatially standardized image.
 セグメンテーション部114は、統計的形状モデルデータベース106に格納された予め計算された固有ベクトルを基底とする固有空間において、受付部102によって受け付けた入力画像に基づいて、目的関数を最適化するように、空間的標準化部112によって生成された空間的標準化画像が表す被写体の形状を表す形状パラメータを推定し、空間的標準化画像が表す被写体の領域を求める。具体的には、セグメンテーション部114は、形状パラメータを推定すると同時に、被写体の領域を抽出する。 In the eigenspace whose basis is the precomputed eigenvectors stored in the statistical shape model database 106, the segmentation unit 114 estimates, based on the input image received by the reception unit 102, the shape parameter representing the shape of the subject in the spatially standardized image generated by the spatial standardization unit 112 so as to optimize the objective function, and obtains the region of the subject represented by the spatially standardized image. Specifically, the segmentation unit 114 extracts the region of the subject at the same time as it estimates the shape parameter.
 なお、上記図6に示すように、固有ベクトルを基底とする固有空間上の点は、統計的形状モデルの形状パラメータを示す。また、目的関数は、固有空間上の点が示す形状パラメータが表す特定の物体の形状の尤もらしさと入力画像中の隣接画素間の画素値の差とに応じた値を表すように、予め定められている。 As shown in FIG. 6 above, a point on the eigenspace whose basis is the eigenvectors indicates a shape parameter of the statistical shape model. The objective function is predetermined so as to represent a value that depends on the likelihood of the shape of the specific object represented by the shape parameter indicated by a point on the eigenspace and on the differences in pixel value between adjacent pixels of the input image.
 一般に、上述した形状集合L^{|P|}のサイズ(=デジタル画像上の形状数)は膨大であり、従来のアルゴリズムでは扱うことができなかった。本実施の形態におけるアルゴリズムでは、目的関数の最適化を固有空間において実行する点が従来とは異なり、これによって膨大な数の形状を効率的に扱うことが可能である。 In general, the size of the shape set L^{|P|} described above (= the number of shapes on a digital image) is enormous and could not be handled by conventional algorithms. The algorithm of the present embodiment differs from conventional ones in that the optimization of the objective function is executed in the eigenspace, which makes it possible to handle an enormous number of shapes efficiently.
 セグメンテーション部114は、固有空間上の凸多角形を繰り返し分割して最適な形状パラメータを示す点を含む凸多角形を探索する探索アルゴリズムに従って、以下の式(4)の目的関数を最適化するように、入力画像が表す被写体の形状を表す形状パラメータを推定する。なお、探索される固有空間上の凸多角形は、最適な形状パラメータを示す点が表す特定の物体の形状を表している。
 また、セグメンテーション部114は、探索アルゴリズムにおいて、凸多角形の各頂点を調べることで、その凸多角形に含まれる形状集合に対する目的関数の下界を計算することにより、最適な形状パラメータを示す点を含む固有空間上の凸多角形を探索する。
 なお、本実施の形態では、固有空間が2次元である場合を例に説明するため、セグメンテーション部114は固有空間上の凸多角形を探索するが、固有空間が3次元以上の場合には、セグメンテーション部114は固有空間上の凸多胞体を探索する。また、探索時に固有空間を分割する直線は、固有空間が3次元以上の場合には超平面となる。
According to a search algorithm that repeatedly divides convex polygons on the eigenspace to search for the convex polygon containing the point indicating the optimum shape parameter, the segmentation unit 114 estimates the shape parameter representing the shape of the subject in the input image so as to optimize the objective function of the following equation (4). Note that the convex polygon on the eigenspace being searched for represents the shape of the specific object indicated by the point of the optimum shape parameter.
In the search algorithm, the segmentation unit 114 searches for the convex polygon on the eigenspace containing the point indicating the optimum shape parameter by examining each vertex of a convex polygon to compute a lower bound of the objective function over the shape set contained in that polygon.
In the present embodiment, the case where the eigenspace is two-dimensional is described as an example, so the segmentation unit 114 searches for convex polygons on the eigenspace; when the eigenspace has three or more dimensions, the segmentation unit 114 searches for convex polytopes on the eigenspace. Likewise, the straight line that divides the eigenspace during the search becomes a hyperplane when the eigenspace has three or more dimensions.
 本実施の形態における重要なアイディアは、上記図6に示したように、デジタル画像上の形状集合が、固有空間上の凸多角形に対応する事実を利用することにある。本発明の実施の形態では、当該事実を利用することにより、多数の形状を一度に効率的に扱うことが可能である。 An important idea in the present embodiment is to use the fact that the shape set on the digital image corresponds to the convex polygon on the eigenspace, as shown in FIG. In the embodiment of the present invention, it is possible to efficiently handle a large number of shapes at a time by utilizing the fact.
 セグメンテーション部114は、デジタル画像上の複数の形状集合が、固有空間上の凸多角形に対応する事実に基づいて、予め設定した目的関数を最小化する形状に対応する、固有空間上の凸多角形を探索する。そして、セグメンテーション部114は、探索された固有空間上の凸多角形に基づいて、目的関数を最小化する形状を形状集合Sから見つけ出し、見つけ出された形状を事前知識として当該形状を被写体の領域として抽出する。本実施の形態では、グラフカット(上記Boykov, Y., Veksler, O., Zabih, R.,“Fast approximate energy minimization via graph cuts.”, Pattern Analysis and Machine Intelligence, IEEE Transactions on 23 (11), 2001, p.1222‐1239.を参照。)で良く用いられるエネルギー関数を、予め設定した目的関数として用いる場合を例に説明する。本実施の形態で用いる目的関数を、以下の式(4)に示す。 Based on the fact that shape sets on the digital image correspond to convex polygons on the eigenspace, the segmentation unit 114 searches for the convex polygon on the eigenspace that corresponds to the shape minimizing a preset objective function. The segmentation unit 114 then finds, based on the searched convex polygon, the shape that minimizes the objective function from the shape set S, and extracts that shape as the region of the subject using the found shape as prior knowledge. In the present embodiment, the case where an energy function often used in graph cuts (see Boykov, Y., Veksler, O., Zabih, R., "Fast approximate energy minimization via graph cuts.", Pattern Analysis and Machine Intelligence, IEEE Transactions on 23 (11), 2001, p.1222-1239.) is used as the preset objective function is described as an example. The objective function used in the present embodiment is shown in the following equation (4).
E(x, y) = Σ_{p∈P} { F_p(I_p, y_p) x_p + B_p(I_p, y_p)(1 − x_p) } + Σ_{(p,q)∈N} P_pq(I_p, I_q) |x_p − x_q|   …(4)
 ここで、Nは、隣接する画素のペアの集合である。x_pは画素pのラベルを表し、背景であれば0の値をとり、図形であれば1の値をとる。また、I_pは、画素pの画素値を表し、y_pは、画素pにおける推定形状の値を表し、推定形状が背景であれば0の値をとり、図形であれば1の値をとる。 Here, N is the set of pairs of adjacent pixels. x_p denotes the label of pixel p, taking the value 0 for background and 1 for figure. I_p denotes the pixel value of pixel p, and y_p denotes the value of the estimated shape at pixel p, taking the value 0 if the estimated shape is background and 1 if it is figure.
 また、上記式(4)におけるF_p(I_p,y_p)と、B_p(I_p,y_p)とは、以下の式(5)、(6)で定義される。ここで、以下の式(5)は、画素pが図形として割り当てられるコストを表し、以下の式(6)は、画素pが背景として割り当てられるコストを表す。 Further, F_p(I_p, y_p) and B_p(I_p, y_p) in equation (4) above are defined by the following equations (5) and (6). Here, equation (5) represents the cost of assigning pixel p to the figure, and equation (6) represents the cost of assigning pixel p to the background.
Figure JPOXMLDOC01-appb-M000005
 I_pは画素pの画素値を表し、I_qは画素qの画素値を表す。また、λ_1、λ_2は予め定められた正の定数である。また、上記式(4)におけるP_pq(I_p,I_q)は、以下の式(7)で定義され、互いに隣接する画素pと画素qの間の画素値の差を評価している。 I_p denotes the pixel value of pixel p, and I_q denotes the pixel value of pixel q. λ_1 and λ_2 are predetermined positive constants. P_pq(I_p, I_q) in equation (4) above is defined by the following equation (7) and evaluates the difference between the pixel values of adjacent pixels p and q.
Figure JPOXMLDOC01-appb-M000006
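Given precomputed unary costs F_p and B_p (equations (5) and (6), whose exact forms are not reproduced here) and pairwise weights P_pq (equation (7)), the objective of equation (4) can be evaluated as in the following sketch; the array-based interface is an assumption for illustration.

```python
import numpy as np

def energy(x, F, B, pairs, P_pq):
    """Evaluate the graph-cut style objective of Eq. (4).

    x     : (n_pixels,) 0/1 labels (1 = figure, 0 = background).
    F, B  : (n_pixels,) precomputed costs of labeling each pixel
            figure / background (Eqs. (5) and (6), left abstract here).
    pairs : list of (p, q) adjacent pixel index pairs (the set N).
    P_pq  : (len(pairs),) pairwise weights from Eq. (7).
    """
    x = np.asarray(x)
    # data term: F_p if pixel is labeled figure, B_p if background
    unary = np.sum(F * x + B * (1 - x))
    # smoothness term: penalize label changes across adjacent pixels
    smooth = sum(w * abs(int(x[p]) - int(x[q]))
                 for (p, q), w in zip(pairs, P_pq))
    return unary + smooth
```

Minimizing this energy over x for a fixed shape y is what the graph cut step computes exactly.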
 本実施の形態では、上記式(4)に示す目的関数の最適化手法として、一般的な最適化アルゴリズムの一つであるbranch and bound探索アルゴリズムを用いる。本実施の形態では、形状集合に対する目的関数の下界を効率的に計算する方法を提案する。形状集合に対する目的関数の下界を効率的に計算するための重要なアイディアは、凸多角形の頂点のみを調べれば、その凸多角形に含まれるすべての形状集合に対する目的関数の下界が分かる点である。 In the present embodiment, the branch and bound search algorithm, one of the general optimization algorithms, is used as the optimization method for the objective function shown in equation (4) above. The present embodiment proposes a method for efficiently computing a lower bound of the objective function over a shape set. The key idea for computing that lower bound efficiently is that, by examining only the vertices of a convex polygon, a lower bound of the objective function over all the shapes contained in that polygon can be obtained.
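A generic best-first branch and bound skeleton of the kind used here can be sketched as follows; the lower-bound, split, and leaf tests are supplied by the caller (for the present method, a lower bound computed from the vertices of a convex polygon and a split by the sign of φ_k(α)). This is an illustrative sketch, not the exact procedure of the embodiment.

```python
import heapq
import itertools

def branch_and_bound(root, lower_bound, split, is_leaf):
    """Best-first branch and bound over regions of a parameter space.

    root        : initial region (e.g. the whole eigenspace R_alpha).
    lower_bound : region -> lower bound on the objective over the region.
    split       : region -> two child regions covering the parent.
    is_leaf     : region -> True when the region is small enough.
    Returns the first leaf region popped from the priority queue; its
    bound is no larger than that of any other pending region.
    """
    tie = itertools.count()            # tie-breaker for equal bounds
    heap = [(lower_bound(root), next(tie), root)]
    while heap:
        bound, _, region = heapq.heappop(heap)
        if is_leaf(region):
            return region
        for child in split(region):
            heapq.heappush(heap, (lower_bound(child), next(tie), child))
    return None
```

Because the queue is ordered by lower bound, regions that cannot contain the minimizer are never refined once a better leaf is reached.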
 When equation (4) above is expressed using the function g(α) instead of the shape y, it can be written as equation (8) below.
Figure JPOXMLDOC01-appb-M000007
[Branching processing]
 In the branching step of the branch-and-bound search algorithm, given a parent node H_0 (∈ R_α), the parent node H_0 is decomposed into child nodes H_1 and H_2, and the objective function shown in equation (8) above is expressed by equation (9) below.
Figure JPOXMLDOC01-appb-M000008
 Various splitting schemes are possible for the division performed in the branching step. In the present embodiment, the parent node H_0 is split into the child nodes H_1 and H_2 according to the sign of φ_k(α) shown in equation (1) above, as shown in equations (10) and (11) below. The pixel k denotes a pixel selected from the set Q by sampling.
Figure JPOXMLDOC01-appb-M000009
 Here, the set Q is the set of pixels k for which the line φ_k(α) = 0 cuts the node H_0 in the eigenspace, as shown in equation (12) below. When the eigenspace has three or more dimensions, the set Q is the set of pixels k for which the hyperplane φ_k(α) = 0 cuts the node H_0.
Figure JPOXMLDOC01-appb-M000010
V_0 denotes the vertex set of the convex polygon represented by the node H_0. V_0 can be obtained by analytically solving the simultaneous equations φ_e(α) = 0 corresponding to the edges of H_0.
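Membership in the set Q can also be tested from the vertex set alone: a line φ_k(α) = 0 cuts the node exactly when φ_k changes sign over the node's vertices. A minimal sketch (the lines and vertices below are hypothetical stand-ins for the pixel-dependent lines of the embodiment):

```python
# Sketch: a hyperplane phi_k(alpha) = w_k . alpha + b_k = 0 cuts a
# convex polygon iff phi_k takes both signs over the polygon's vertices.
# (Illustrative coefficients; not the embodiment's actual data.)

def cuts(w, b, vertices):
    vals = [w[0] * v[0] + w[1] * v[1] + b for v in vertices]
    return min(vals) < 0.0 < max(vals)

H0 = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

# Candidate "pixels": each contributes one line (w_k, b_k).
lines = {"k1": ((1.0, 0.0), 0.0),    # x = 0, cuts H0
         "k2": ((0.0, 1.0), 2.0),    # y = -2, misses H0
         "k3": ((1.0, 1.0), 0.5)}    # cuts H0

Q = {k for k, (w, b) in lines.items() if cuts(w, b, H0)}
print(sorted(Q))  # → ['k1', 'k3']
```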
[Bounding processing]
 In the bounding step of the branch-and-bound search algorithm, the lower bound L(H_i) (i ∈ {1, 2}) is given by equations (13) and (14) below.
Figure JPOXMLDOC01-appb-M000011
 Equation (13) above is Jensen's inequality for the minimization problem. The exchange of "max" and "min" in equation (14) follows from the minus sign in equation (5) above. The maximum and minimum of φ(α) over α ∈ H_i are obtained from the vertex set V_i of H_i, based on the fundamental theory of linear programming, and can be expressed as equations (15) and (16) below.
Figure JPOXMLDOC01-appb-M000012
 The segmentation unit 114 searches for the convex polygon in the eigenspace that contains the point indicating the optimum shape parameter by repeatedly splitting convex polygons in the eigenspace so that the areas of the two resulting convex polygons correspond (that is, are approximately equal).
 Table 1 shows pseudocode for the segmentation algorithm of the segmentation unit 114. First, given a target image I, the segmentation unit 114 starts processing with the entire eigenspace R_α = {r ∈ R^d | ||r|| ≤ w} as the parent node H_0. That is, the segmentation unit 114 starts with R_α as the root and splits the parent node H_0 by the pixel k selected from the digital image using the function Select(Q). The resulting child nodes H_1 and H_2 are placed in Queue, whose nodes are kept sorted in ascending order of their lower-bound values. Next, the segmentation unit 114 selects the node with the lowest lower bound from Queue as the new parent node H_0, and repeats the above processing until Q becomes the empty set. The optimum shape parameter, and hence the optimum shape, is obtained from the result at the end of the iteration. The segmentation unit 114 then uses the optimum shape to obtain the optimum segmentation result.
Figure JPOXMLDOC01-appb-T000013
 H_0 denotes a convex polygon in the eigenspace; its initial value corresponds to R_α. H_1 and H_2 denote the two convex polygons obtained by splitting H_0. Q is the set of pixels on the digital image that can split H_0. Queue is the queue holding convex-polygon nodes (used by the branch-and-bound search algorithm). Select(Q) is the function that selects the pixel k used for splitting.
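The control flow of Table 1 (best-first search over a queue ordered by lower bound, splitting the node with the lowest bound until no further split is possible) can be sketched on a one-dimensional toy problem. The interval split and the bound below stand in for the polygon split by φ_k(α) = 0 and for the bound of equation (14); they are illustrative only, not the embodiment's actual objective.

```python
# Sketch of the best-first branch-and-bound loop of Table 1, reduced to
# a 1-D toy problem: minimize f over [-w, w] using interval lower
# bounds.  (Illustrative objective and bound.)
import heapq

def f(a):
    return (a - 0.6) ** 2 + 0.25

def lower_bound(lo, hi):
    # Valid bound: on [lo, hi] the squared term is at least the squared
    # distance from 0.6 to the interval (0 if 0.6 lies inside it).
    d = max(lo - 0.6, 0.6 - hi, 0.0)
    return d * d + 0.25

def branch_and_bound(w=2.0, tol=1e-6):
    queue = [(lower_bound(-w, w), -w, w)]   # Queue, ascending lower bound
    while True:
        lb, lo, hi = heapq.heappop(queue)   # node with lowest lower bound
        if hi - lo < tol:                   # node can no longer be split
            return (lo + hi) / 2.0
        mid = (lo + hi) / 2.0               # split into equal halves
        heapq.heappush(queue, (lower_bound(lo, mid), lo, mid))
        heapq.heappush(queue, (lower_bound(mid, hi), mid, hi))

alpha_opt = branch_and_bound()
assert abs(f(alpha_opt) - 0.25) < 1e-9     # minimum f(0.6) = 0.25
```

Because the bound is valid (never exceeds the true minimum over the node), the popped node with the lowest bound always contains the optimum, mirroring the guarantee used in the embodiment.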
 A new proposal concerning Select(Q), which allows the algorithm to run efficiently, is also an important feature of the present embodiment. Select(Q) is the function that selects, from the image, the pixel used to split the convex polygon; to make the search efficient, the present embodiment proposes a splitting rule under which the areas of the convex polygons corresponding to the child nodes H_1 and H_2 after splitting are approximately equal, and its efficiency was verified on actual images.
 In the present embodiment, the function shown in equation (17) below is used as Select(Q).
Figure JPOXMLDOC01-appb-M000014
 where
Figure JPOXMLDOC01-appb-M000015
 FIG. 8 shows a conceptual diagram of the effect of the function in equation (17) above. As shown in FIG. 8, equation (17) intuitively selects the pixel p that splits H_0 into two regions (H_1, H_2) of equal area.
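The equal-area selection rule can be sketched as follows. The candidate vertical lines stand in for the pixel-dependent lines φ_k(α) = 0 of the embodiment and are illustrative only; the polygon areas are computed with the shoelace formula and a half-plane clip.

```python
# Sketch of the idea behind Select(Q): among candidate splitting lines,
# pick the one that divides the polygon H0 into two parts of (nearly)
# equal area.  (Candidate lines x = c are illustrative stand-ins.)

def area(poly):
    """Shoelace formula for a simple polygon given as [(x, y), ...]."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip(poly, c, keep_left):
    """Clip a convex polygon against the vertical line x = c."""
    inside = (lambda p: p[0] <= c) if keep_left else (lambda p: p[0] >= c)
    out = []
    for p, q in zip(poly, poly[1:] + poly[:1]):
        if inside(p):
            out.append(p)
        if inside(p) != inside(q):          # edge crosses the line
            t = (c - p[0]) / (q[0] - p[0])
            out.append((c, p[1] + t * (q[1] - p[1])))
    return out

H0 = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
candidates = [1.0, 2.0, 3.5]                # hypothetical lines x = c

def select(poly, cs):
    return min(cs, key=lambda c: abs(area(clip(poly, c, True))
                                     - area(clip(poly, c, False))))

best = select(H0, candidates)
print(best)  # → 2.0 (splits the 4x2 rectangle into equal halves)
```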
 Using the shape represented by the optimum shape parameter found for the specific object as prior knowledge, the segmentation unit 114 extracts the subject region, which has the shape of the specific object, from the spatially standardized image generated by the spatial standardization unit 112 so as to optimize the objective function.
 The output unit 120 outputs the subject region extracted by the segmentation unit 114 as the result.
<Operation of Statistical Shape Model Generation Device 10>
 Next, the operation of the statistical shape model generation device 10 will be described.
 The processing of the statistical shape model generation device 10 shown in FIG. 9 is performed by the CPU 21 in FIG. 2 above based on a program stored in the HDD 24 or the like. When a plurality of images in which the pancreas region has been determined in advance are input to the statistical shape model generation device 10, the learning processing routine shown in FIG. 9 is executed.
 First, in step S100, the learning receiving unit 12 receives a plurality of images in which the pancreas region has been determined in advance.
 Next, in step S102, the learning unit 14 computes eigenvectors and eigenvalues by principal component analysis based on the plurality of images, received in step S100, in which the pancreas region has been determined in advance.
 In step S104, the learning unit 14 stores the eigenvectors and eigenvalues computed in step S102 in the statistical shape model database 16, and the learning processing routine ends.
<Operation of Image Processing Device 100>
 Next, the operation of the image processing device 100 will be described.
 The processing of the image processing device 100 shown in FIG. 10 is performed by the CPU 21 in FIG. 2 above based on a program stored in the HDD 24 or the like. First, the eigenvectors and eigenvalues stored in the statistical shape model database 16 of the statistical shape model generation device 10 are input to the image processing device 100 and stored in the statistical shape model database 106. Then, when a three-phase three-dimensional abdominal CT image is input to the image processing device 100 as the input image, the image processing routine shown in FIG. 10 is executed.
 In step S200, the receiving unit 102 receives a three-phase three-dimensional abdominal CT image as the input image.
 In step S202, the inter-image registration unit 110 performs registration among the early-phase image, the portal-phase image, and the late-phase image received in step S200 as the three-phase three-dimensional abdominal CT image, and generates registered images.
 In step S204, the spatial standardization unit 112 generates a spatially standardized image from the registered images generated in step S202, using, for example, a nonlinear function.
 In step S206, the segmentation unit 114 estimates the shape parameter representing the shape of the subject represented by the spatially standardized image generated in step S204, in the eigenspace whose basis is the precomputed eigenvectors stored in the statistical shape model database 106, so as to optimize the objective function based on the input image received in step S200, and thereby obtains the shape of the subject represented by the spatially standardized image. Step S206 is realized by the segmentation processing routine shown in FIG. 11.
<Segmentation processing routine>
 In step S300, the segmentation unit 114 sets the entire eigenspace R_α as the parent node H_0.
 In step S302, the segmentation unit 114 sets, as the set Q, the set of pixels k that can split the parent node H_0, as shown in equation (12) above.
 In step S304, the segmentation unit 114 initializes Queue.
 In step S306, the segmentation unit 114 stores, in the Queue initialized in step S304, the pair consisting of the parent node H_0 set in step S300 and the corresponding lower bound L(H_0; I) computed according to equation (14) above.
 In step S308, the segmentation unit 114 selects the pixel k, according to equation (17) above, from the set Q set in step S302 or updated in step S316 described later.
 In step S310, the segmentation unit 114 splits the parent node H_0 set in step S300, or updated in step S314 described later, using the pixel k selected in step S308 according to equations (10) and (11) above, and sets the child nodes H_1 and H_2.
 In step S312, the segmentation unit 114 stores in Queue the pair consisting of the child node H_1 set in step S310 and the corresponding lower bound L(H_1; I) computed according to equation (14) above, and likewise stores in Queue the pair consisting of the child node H_2 set in step S310 and the corresponding lower bound L(H_2; I).
 In step S314, the segmentation unit 114 selects, from the nodes stored in Queue, the node with the lowest lower bound, and updates the parent node H_0 to the selected node.
 In step S316, the segmentation unit 114 updates the set Q, based on the parent node H_0 updated in step S314, to the set of pixels k that can split the parent node H_0, as shown in equation (12) above. If no pixel k can split the parent node H_0, the set Q is updated to the empty set.
 In step S318, the segmentation unit 114 determines whether the set Q is the empty set. If the set Q is not the empty set, the processing returns to step S308. If the set Q is the empty set, the processing proceeds to step S320.
 In step S320, the segmentation unit 114 selects the shape parameter α* contained in the parent node H_0 as finally updated in step S314, substitutes the selected shape parameter α* into the function g, and determines the shape y* of the specific object.
 In step S322, the segmentation unit 114 extracts the subject region x*, which has the shape of the specific object, from the spatially standardized image generated in step S204, so as to optimize the objective function, based on the shape y* of the specific object determined in step S320.
 In step S324, the segmentation unit 114 outputs the subject region x* extracted in step S322 as the result.
 Returning to the image processing routine, in step S208 the output unit 120 outputs the subject region x* extracted in step S206 as the result, and the image processing routine ends.
 As described above, in the image processing device 100 of the present embodiment, as a function realized by computer processing based on a program, the shape parameter representing the shape of the subject represented by the input image is estimated so as to optimize a predetermined objective function in an eigenspace that (A) has as its basis eigenvectors computed in advance from a plurality of images in which the subject region has been determined in advance, and (B) in which each point indicates a shape parameter of a statistical shape model representing the shape of a specific object; the objective function represents a value that depends on the likelihood of the shape of the specific object represented by the shape parameter indicated by a point in the eigenspace and on the differences in pixel value between adjacent pixels in the input image. Using the shape of the specific object represented by the estimated shape parameter as prior knowledge, the subject region is extracted from the input image. This makes it possible to extract the subject region accurately while suppressing an increase in the amount of computation.
 The image processing device 100 of the present embodiment can also extract the subject region at high speed.
 One important difference between the proposed algorithm and conventional algorithms is that no preprocessing is required. Conventionally, preprocessing that prepares shapes in advance and clusters them was necessary and consumed considerable time and memory, whereas the proposed algorithm of the present embodiment requires no preprocessing at all, which reduces cost.
 In the present embodiment, an experiment segmenting the pancreas from 140 three-dimensional abdominal CT images was performed to verify processing accuracy and processing time, and the following results were obtained.
1. The world's highest accuracy was obtained for pancreas segmentation from three-dimensional abdominal CT images.
2. The value of the objective function after minimization by the proposed algorithm is theoretically guaranteed to be smaller than that of the conventional method, and this was also confirmed on actual images.
3. Whereas the maximum number of shapes handled by the conventional method (U.S. Pat. No. 8,249,349 above) was 10^7, the proposed method can handle 10^9 or more shapes. In addition, whereas the conventional method was limited to two-dimensional images, three-dimensional images with nearly two orders of magnitude more pixels can now be handled.
[Second Embodiment]
 Next, a second embodiment will be described. The configurations of the statistical shape model generation device and the image processing device according to the second embodiment are the same as those of the first embodiment, so the same reference numerals are assigned and their description is omitted.
 The second embodiment differs from the first embodiment in that an approximate solution method is used to estimate the shape parameter representing the shape of the subject.
 FIG. 12 illustrates the approximate solution method of the second embodiment. As shown in FIG. 12, unlike the exact solution method used in the first embodiment, the second embodiment estimates the shape parameter by an approximate solution method: sampling points are set on a grid in the eigenspace, and the convex polygon is repeatedly split using lines determined from the sampling points. The sampling points are set according to a predetermined sampling parameter k; for example, when k = 4, 2^4 sampling points are set per dimension of the eigenspace.
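The grid sampling described above can be sketched as follows (the range [-w, w] and the dimension d = 2 are illustrative choices; the embodiment's eigenspace is higher-dimensional):

```python
# Sketch: with sampling parameter k, place 2**k evenly spaced sampling
# points per eigenspace dimension, giving a grid of (2**k)**d points.
# (Illustrative range and dimension.)
import itertools

def grid_points(k, d, w):
    n = 2 ** k                                               # points per dimension
    axis = [-w + (2 * w) * (i + 0.5) / n for i in range(n)]  # cell centers
    return list(itertools.product(axis, repeat=d))

pts = grid_points(k=4, d=2, w=1.0)
print(len(pts))  # → 256, i.e. (2**4)**2
```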
 The segmentation unit 114 of the image processing device according to the second embodiment first sets sampling points on a grid in the eigenspace.
 The segmentation unit 114 then sets, from the sampling points placed in the eigenspace, a line such that the areas of the two convex polygons after splitting correspond, splits the convex polygon by the set line, and repeats the search for the convex polygon containing the point indicating the optimum shape parameter, thereby estimating the shape parameter representing the shape of the subject represented by the input image.
 Specifically, in the pseudocode of the segmentation algorithm shown in Table 1 above, the parent node H_0 is split using a line determined from the sampling points, instead of by the pixel k selected using the function Select(Q).
 In the second embodiment, using the approximate solution method makes it possible to increase the number of dimensions of the eigenspace and to estimate the shape parameter with high accuracy.
 FIG. 13 shows the pancreas recognition results for 140 cases obtained by the image processing device according to the second embodiment. In FIG. 13, the dashed circle represents the pancreas recognition result of the image processing device according to the first embodiment, and the solid circle represents that of the image processing device according to the second embodiment.
 As shown in FIG. 13, using the approximate solution method to increase the number of dimensions of the eigenspace achieves segmentation accuracy higher than that of the exact solution.
 FIG. 14 shows a comparison of the computation times of the approximate solution method of the second embodiment, the method of U.S. Pat. No. 8,249,349 above applied to pancreas segmentation, and the exact solution method of the first embodiment, as the dimension d of the eigenspace is increased. For the exact solution method, the upper limit on the computation time was set to 100 h; for the approximate solution method, the sampling parameter was set to k = 4. A 3.1 GHz Intel(R) Xeon(R) CPU was used, with one CPU for preprocessing and two CPUs for optimization.
 As shown in FIG. 14, the approximate solution method can greatly reduce the computational cost of increasing the number of dimensions of the eigenspace, compared with both the exact solution method of the first embodiment and the method of U.S. Pat. No. 8,249,349 applied to pancreas segmentation.
 The other configurations and operations of the image processing device according to the second embodiment are the same as those of the first embodiment, so their description is omitted.
 As described above, according to the image processing device of the second embodiment, sampling points are set on a grid over the convex polygon in the eigenspace, and splitting and searching the convex polygon using lines set in the eigenspace is repeated to estimate the shape parameter representing the shape of the subject represented by the input image. Using the shape of the specific object represented by the estimated shape parameter as prior knowledge, the subject region is extracted from the input image. This makes it possible to extract the subject region accurately while suppressing an increase in the amount of computation.
 The approximate solution method also makes more advanced statistical shape models with a larger number of dimensions usable.
[Third Embodiment]
 Next, a third embodiment will be described. The configurations of the statistical shape model generation device and the image processing device according to the third embodiment are the same as those of the first embodiment, so the same reference numerals are assigned and their description is omitted.
 The third embodiment differs from the second embodiment in that the shape parameter representing the shape of the subject is estimated using an approximate solution method in which the sampling points are set randomly.
 FIG. 15 illustrates the approximate solution method of the third embodiment. As shown in FIG. 15, in the third embodiment, sampling points are set randomly in the eigenspace, and the convex polygon is repeatedly split using lines determined from the sampling points.
 The segmentation unit 114 of the image processing device according to the third embodiment first sets sampling points randomly in the eigenspace.
 The segmentation unit 114 then sets, from the sampling points placed in the eigenspace, a line such that the areas of the two convex polygons after splitting correspond, splits the convex polygon by the set line, and repeats the search for the convex polygon containing the point indicating the optimum shape parameter, thereby estimating the shape parameter representing the shape of the subject represented by the input image.
 Specifically, in the pseudocode of the segmentation algorithm shown in Table 1 above, the parent node H_0 is split using a line determined from the sampling points, instead of by the pixel k selected using the function Select(Q).
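The random placement of sampling points can be sketched as follows (uniform sampling over [-w, w]^d is an illustrative choice; the embodiment does not fix the sampling distribution):

```python
# Sketch: the third embodiment replaces the grid with randomly placed
# sampling points in the eigenspace.  (Uniform distribution over
# [-w, w]^d and the seed are illustrative assumptions.)
import random

def random_points(n, d, w, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    return [tuple(rng.uniform(-w, w) for _ in range(d)) for _ in range(n)]

pts = random_points(n=100, d=3, w=1.0)
assert len(pts) == 100
assert all(-1.0 <= c <= 1.0 for p in pts for c in p)
```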
 The other configurations and operations of the image processing device according to the third embodiment are the same as those of the first or second embodiment, so their description is omitted.
 As described above, according to the image processing device of the third embodiment, sampling points are set randomly over the convex polygon in the eigenspace, and splitting and searching the convex polygon using lines set in the eigenspace is repeated to estimate the shape parameter representing the shape of the subject represented by the input image. Using the shape of the specific object represented by the estimated shape parameter as prior knowledge, the subject region is extracted from the input image. This makes it possible to extract the subject region accurately while suppressing an increase in the amount of computation.
[Fourth Embodiment]
 Next, a fourth embodiment will be described. The configurations of the statistical shape model generation device and the image processing device according to the fourth embodiment are the same as those of the first embodiment, so the same reference numerals are assigned and their description is omitted.
 The fourth embodiment differs from the first to third embodiments in that the objective function is extended.
 In the first embodiment above, equation (4) was used as the objective function, and F_p(I_p, y_p) and B_p(I_p, y_p) in equation (4) were defined by equations (5) and (6).
 In the fourth embodiment, the objective function in equation (4) is defined so that the likelihood of the shape of the specific object includes monotone functions that vary monotonically with the value of the shape label at each pixel for the shape of the specific object represented by the shape parameter, thereby extending the objective function.
 Specifically, in the fourth embodiment, F_p(I_p, y_p) and B_p(I_p, y_p) in equation (4) above are defined as shown in equations (18) and (19) below, extending the objective function.
Figure JPOXMLDOC01-appb-M000016
 FIG. 16 shows a sufficient condition for the lower bound of the objective function to satisfy monotonicity. The sufficient condition is that h^F_p(y) and h^B_p(y) (∀p ∈ P) are monotone functions that are monotonically decreasing and monotonically increasing, respectively, with respect to y_q (∀q ∈ P). As shown in FIG. 16, when the label pair {y, y′} ∈ L^|P| satisfies
Figure JPOXMLDOC01-appb-M000017
the relationship shown in the following equation (20) holds.
Figure JPOXMLDOC01-appb-M000018
 FIG. 17 shows an example of h_p^F(y) and h_p^B(y) that satisfy the relationship of equation (20) above. In the example shown in FIG. 17, h_p^F(y) and h_p^B(y) are defined by the distance function shown in the following equation (21).
Figure JPOXMLDOC01-appb-M000019
 By defining the monotonic functions h_p^F(y) and h_p^B(y) with the signed distance function shown in equation (21) above, the penalty near the contour of the shape is reduced, so that segmentation errors caused by an incorrect prior shape can be reduced.
 As described above, according to the image processing apparatus of the fourth embodiment, the likelihood of the shape of the specific object in the objective function is defined to include a monotonic function that varies monotonically with the value of the shape label at each pixel relative to the shape of the specific object represented by the shape parameter. As a result, the region of the subject can be extracted with high accuracy.
 Furthermore, in the fourth embodiment, introducing into the objective function the monotonic functions h_p^F(y) and h_p^B(y), whose energy is minimized at the correct contour of the shape of the specific object, makes a better objective function available.
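 As an illustration only (not taken from the patent, which leaves the exact form of h_p^F and h_p^B to equation (21) shown as an image), the following Python sketch builds a signed distance map from a binary prior shape and derives a pair of monotone penalty maps that vanish at the contour; the function names and the brute-force distance computation are assumptions chosen for readability.

```python
import numpy as np

def signed_distance(mask):
    # Brute-force signed Euclidean distance to the opposite region:
    # positive outside the prior shape, negative inside.
    fg = np.argwhere(mask).astype(float)
    bg = np.argwhere(~mask).astype(float)
    h, w = mask.shape
    d = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            p = np.array([i, j], dtype=float)
            if mask[i, j]:
                d[i, j] = -np.min(np.linalg.norm(bg - p, axis=1))
            else:
                d[i, j] = np.min(np.linalg.norm(fg - p, axis=1))
    return d

def monotone_penalties(mask):
    # h^F penalizes labelling a pixel foreground the farther it lies
    # outside the prior shape; h^B mirrors this for the background.
    # Both vanish at the contour, so disagreements near a slightly
    # misplaced prior boundary are penalized only lightly.
    d = signed_distance(mask)
    h_F = np.maximum(d, 0.0)
    h_B = np.maximum(-d, 0.0)
    return h_F, h_B
```

 Because both penalty maps are zero on and near the prior contour, pixels whose true label disagrees with a slightly misplaced prior boundary incur almost no cost, which is the effect attributed to the signed distance function of equation (21).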
[Fifth Embodiment]
 Next, a fifth embodiment will be described. Since the statistical shape model generation apparatus and the image processing apparatus according to the fifth embodiment have the same configuration as in the first embodiment, the same reference numerals are used and description thereof is omitted.
 The fifth embodiment differs from the first to fourth embodiments in that the function g, which maps the parameter α on the eigenspace to a shape, is extended.
 In the first embodiment, the case where the mapping function g in equation (2) above is defined by the Heaviside function H(·) of equation (3) above has been described.
 In the fifth embodiment, the function g that maps the shape parameter to the shape of the specific object is defined so as to include a predetermined function f, thereby extending the mapping function g.
Figure JPOXMLDOC01-appb-M000020
 As a result of extending the mapping function g_p as in the following equation (23), shape representations other than the level set function also become available (for example, LogOdds, described in Kilian M. Pohl et al., "Logarithm Odds Maps for Shape Representation", Proc. of Medical Image Computing and Computer-Assisted Intervention, Vol. 4191, pp. 955-963, 2006).
Figure JPOXMLDOC01-appb-M000021
 Here, f represents a predetermined monotonic function, and t represents a predetermined threshold value.
 For example, a case where LogOdds is used as one possible shape representation expressed by the statistical shape model will be described. When there are two shape classes, LogOdds used as the statistical shape model is expressed by the following equation.
Figure JPOXMLDOC01-appb-M000022
 FIG. 18 shows an example in which LogOdds is used as the statistical shape model. As shown in FIG. 18, using LogOdds extends the shape representation.
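 As a hedged sketch of the two-class LogOdds representation referenced above (the exact per-pixel formula appears only as an equation image and is not reproduced here), the following Python code shows the standard LogOdds (logit) mapping, its inverse, and thresholding at t back to a binary shape; the function names are illustrative assumptions.

```python
import numpy as np

def log_odds(p, eps=1e-6):
    # LogOdds (logit) of a foreground-probability map; clipping keeps
    # the mapping finite at p = 0 and p = 1.
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def inv_log_odds(l):
    # Inverse mapping (logistic sigmoid) back to probabilities.
    return 1.0 / (1.0 + np.exp(-l))

def shape_from_log_odds(l, t=0.0):
    # Binary shape recovered by thresholding the LogOdds map at t;
    # t = 0 corresponds to probability 0.5.
    return l > t
```

 Working in LogOdds space keeps the representation unbounded and closed under linear combination, which is what makes it usable inside a linear statistical shape model.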
 As described above, according to the statistical shape model generation apparatus of the fifth embodiment, extending the mapping function g (by introducing the predetermined function f and the threshold t) makes more advanced statistical shape models available.
 Note that the present invention is not limited to the examples described above, and various modifications and applications are possible without departing from the gist of the present invention.
 Since the present embodiments are fundamental, their range of application is considered to be wide. Specifically, they can be used broadly for image recognition processing targeting figures whose shapes fluctuate statistically. For example, in addition to other organs and other medical images (such as MR and PET), they can be used for recognizing faces in face images and for recognizing specific figures in general landscape images. Although the present embodiments target the pancreas, other organs, such as the spleen, can also be targeted.
 In the first embodiment, the case where the LSDM is used as the statistical shape model has been described as an example, but the present invention is not limited to this and can also be used with other statistical shape models. The present embodiments can be applied to any statistical shape model that can be expressed by a linear function as in equation (1) above (strictly speaking, one whose per-pixel function φ is linear). Depending on the objective function and the optimization method, the present embodiments can also be applied to statistical shape models that are not expressed by a linear function such as equation (1).
 Also, as shown in the fifth embodiment, various statistical shape models can be used by extending the mapping function g.
 In the first embodiment, the case where the dimension of the eigenspace is 2 (d = 2) has been described as an example, but the present invention is not limited to this; a higher-dimensional eigenspace may be targeted. For example, as shown in the second and third embodiments, the dimension of the eigenspace can be increased by using an approximate solution method.
 In the above embodiments, when the dimension of the eigenspace is increased to 3 or more, the convex polygon corresponds to a convex polytope, and the area of the convex polygon corresponds to the volume of the convex polytope.
 Accordingly, when the dimension of the eigenspace is increased to 3 or more, the segmentation unit 114 estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function shown in equation (4) above, following a search algorithm that repeatedly divides convex polytopes on the eigenspace and searches for the convex polytope containing the point indicating the optimum shape parameter.
 In the search algorithm, the segmentation unit 114 estimates the shape parameter by computing the lower bound of the objective function over the shape set contained in a convex polytope, thereby searching for the convex polytope on the eigenspace that contains the point indicating the optimum shape parameter.
 The segmentation unit 114 also estimates the shape parameter by repeatedly dividing a convex polytope on the eigenspace so that the volumes of the two resulting convex polytopes correspond, and searching for the convex polytope on the eigenspace that contains the point indicating the optimum shape parameter.
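 The search just described can be illustrated with a deliberately simplified Python sketch. Note the assumptions: the patent divides general convex polytopes on the eigenspace with hyperplanes and bounds the objective via graph cuts, whereas this stand-in uses axis-aligned boxes, equal-volume splits along the longest axis, and a toy quadratic objective with an analytic lower bound; everything here is illustrative, not the patented algorithm.

```python
import heapq
import numpy as np

def branch_and_bound(f, lower_bound, lo, hi, tol=1e-4, max_iter=100000):
    # Best-first branch and bound over axis-aligned boxes [lo, hi].
    # f(x) evaluates the objective at a point; lower_bound(lo, hi)
    # must under-estimate f over the whole box.  Each split halves a
    # box along its longest axis, so the two children have equal
    # volume, mirroring the equal-volume division described above.
    best_x = (lo + hi) / 2.0
    best_val = f(best_x)
    heap = [(lower_bound(lo, hi), 0, lo, hi)]
    tie = 1  # unique tie-breaker so the heap never compares arrays
    for _ in range(max_iter):
        if not heap:
            break
        lb, _, blo, bhi = heapq.heappop(heap)
        if lb >= best_val - tol:
            break  # no remaining box can improve on the incumbent
        axis = int(np.argmax(bhi - blo))
        mid = 0.5 * (blo[axis] + bhi[axis])
        for clo, chi in _halves(blo, bhi, axis, mid):
            x = 0.5 * (clo + chi)
            val = f(x)
            if val < best_val:
                best_val, best_x = val, x
            clb = lower_bound(clo, chi)
            if clb < best_val - tol:
                heapq.heappush(heap, (clb, tie, clo, chi))
                tie += 1
    return best_x, best_val

def _halves(lo, hi, axis, mid):
    # Split one box into two equal-volume children along `axis`.
    hi1 = hi.copy(); hi1[axis] = mid
    lo2 = lo.copy(); lo2[axis] = mid
    return (lo, hi1), (lo2, hi)
```

 A best-first queue ordered by lower bound prunes every box whose bound cannot beat the incumbent, which is the mechanism that keeps the search tractable as the dimension of the eigenspace grows.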
 In the embodiments, the case where the eigenspace is divided by the pixels k included in the set Q selected according to equation (17) above has been described as an example, but the present invention is not limited to this and is also applicable to convex polygons created by any other division method. For example, as shown in the second embodiment, it is also applicable to convex polygons obtained by dividing the eigenspace regularly, such as in a lattice pattern.
 In this case, depending on the fineness of the division, tightness (one of the three conditions for guaranteeing optimality) may not hold.
 Here, tightness will be briefly described.
 Equation (14) above is rewritten as the following equation (24).
Figure JPOXMLDOC01-appb-M000023
 In this case, tightness holds when the following equation (25) holds for every leaf node H_i.
Figure JPOXMLDOC01-appb-M000024
 When the division is coarse, equation (25) above may not hold and tightness may fail, so only an approximate solution may be obtained; however, this has the advantage that the computational cost can be reduced further when the dimension d of the eigenspace is large. The important point, and a key aspect of extending the method proposed in the present embodiments, is that by slightly sacrificing optimality the user can choose the balance between computational cost and accuracy. In this case, optimality is not guaranteed, as in the above-mentioned U.S. Patent No. 8,249,349 and the like; note, however, that unlike U.S. Patent No. 8,249,349, no preprocessing whatsoever is required.
 As for the objective function and the minimization algorithm, the objective function shown in the above embodiments and the combination of the branch and bound method and the graph cut method were used here, but any objective function and algorithm capable of minimization are applicable, not only those shown here.
 The exact solution method according to the first embodiment may also be executed after executing the approximate solution method according to the second or third embodiment.
 In the computer configuration shown in FIG. 2, a program for realizing the functions of the processing units according to the embodiments may be recorded on a computer-readable recording medium, and the processing of each component may be executed by causing a computer system to read and execute the program recorded on the recording medium; alternatively, the program may be read in using a communication function (not shown).
 The computer-readable recording medium refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into the computer system.
 The program may also be transmitted from a computer system that stores the program in a storage device or the like to another computer system via a transmission medium or by transmission waves in a transmission medium. Here, the "transmission medium" that transmits the program refers to a medium having the function of transmitting information, such as a network (communication network) such as the Internet or a communication line such as a telephone line.
 The program may also realize only part of the functions described above. Furthermore, it may be a so-called difference file (difference program) that realizes the functions described above in combination with a program already recorded in the computer system.
 In the present embodiments, each processing unit of the statistical shape model generation apparatus 10 shown in FIG. 1 and the image processing apparatus 100 shown in FIG. 7 is configured by a computer capable of executing each function by means of a program, but a hardware configuration consisting of logic element circuits may also be used.
 The computer configurations of the statistical shape model generation apparatus 10 and the image processing apparatus 100 shown in FIG. 2 may also be changed as appropriate.
 As described above, the embodiments have been described in detail with reference to the drawings, but the specific configuration is not limited to these embodiments and includes designs and the like within a range not departing from the gist of the invention.
 The program according to the embodiments may also be provided stored in a storage medium.
 A computer-readable medium according to an embodiment stores a program for extracting, from an input image representing a subject that is a specific object, the region of the subject, the program causing a computer to function as: reception means for receiving the input image; and segmentation means for, in an eigenspace that (A) has as its basis eigenvectors computed in advance from a plurality of learning images representing the subject in which the region of the subject has been determined in advance, and (B) in which a point on the eigenspace indicates a shape parameter of a statistical shape model representing statistical variation of the shape of the specific object, estimating, based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize a predetermined objective function whose value depends on the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and on differences in pixel value between adjacent pixels in the input image, and extracting the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
 The entire disclosure of Japanese Patent Application No. 2014-169911 is incorporated herein by reference.
 All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard were specifically and individually indicated to be incorporated by reference.

Claims (19)

  1.  An image processing apparatus that extracts, from an input image representing a subject that is a specific object, a region of the subject, the apparatus comprising:
     reception means for receiving the input image; and
     segmentation means for, in an eigenspace that (A) has as its basis eigenvectors computed in advance from a plurality of learning images representing the subject in which the region of the subject has been determined in advance, and (B) in which a point on the eigenspace indicates a shape parameter of a statistical shape model representing statistical variation of the shape of the specific object,
     estimating, based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize a predetermined objective function whose value depends on the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and on differences in pixel value between adjacent pixels in the input image, and
     extracting the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  2.  The image processing apparatus according to claim 1, wherein, in the eigenspace, the segmentation means estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function based on the input image, and at the same time extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  3.  The image processing apparatus according to claim 1 or 2, wherein the segmentation means estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function, following a search algorithm that repeatedly divides a convex polytope on the eigenspace, which represents a shape set including the shape of the specific object represented by the point indicating the optimum shape parameter, and searches for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  4.  The image processing apparatus according to claim 3, wherein, in the search algorithm, the segmentation means estimates the shape parameter representing the shape of the subject represented by the input image by computing a lower bound of the objective function over the shape set contained in the convex polytope, thereby searching for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  5.  The image processing apparatus according to claim 4, wherein the segmentation means estimates the shape parameter representing the shape of the subject represented by the input image by repeatedly dividing the convex polytope on the eigenspace so that the volumes of the two convex polytopes after division correspond, and searching for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  6.  The image processing apparatus according to claim 4 or 5, wherein the segmentation means sets sampling points on the eigenspace, estimates the shape parameter representing the shape of the subject represented by the input image by repeatedly dividing the convex polytope using hyperplanes determined from the sampling points set on the eigenspace and searching for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  7.  The image processing apparatus according to claim 4 or 5, wherein the segmentation means sets sampling points on the eigenspace arbitrarily.
  8.  The image processing apparatus according to any one of claims 1 to 7, wherein the objective function includes, as the likelihood of the shape of the specific object, a monotonic function that varies monotonically with the value of a shape label at a pixel relative to the shape of the specific object represented by the shape parameter.
  9.  The image processing apparatus according to any one of claims 3 to 8, wherein the search algorithm is the branch and bound method and the graph cut method.
  10.  An image processing method in an image processing apparatus that includes reception means and segmentation means and extracts, from an input image representing a subject that is a specific object, a region of the subject, the method comprising:
      a step in which the reception means receives the input image; and
      a step in which the segmentation means, in an eigenspace that (A) has as its basis eigenvectors computed in advance from a plurality of learning images representing the subject in which the region of the subject has been determined in advance, and (B) in which a point on the eigenspace indicates a shape parameter of a statistical shape model representing statistical variation of the shape of the specific object, estimates, based on the input image, the shape parameter representing the shape of the subject represented by the input image so as to optimize a predetermined objective function whose value depends on the likelihood of the shape of the specific object represented by the shape parameter indicated by the point on the eigenspace and on differences in pixel value between adjacent pixels in the input image, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  11.  The image processing method according to claim 10, wherein, in the step in which the segmentation means extracts the region of the subject from the input image, the segmentation means, in the eigenspace, estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function based on the input image, and at the same time extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  12.  The image processing method according to claim 11, wherein, in the step in which the segmentation means extracts the region of the subject from the input image, the segmentation means estimates the shape parameter representing the shape of the subject represented by the input image so as to optimize the objective function, following a search algorithm that repeatedly divides a convex polytope on the eigenspace, which represents a shape set including the shape of the specific object represented by the point indicating the optimum shape parameter, and searches for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  13.  The image processing method according to claim 12, wherein, in the step in which the segmentation means extracts the region of the subject from the input image, the segmentation means, in the search algorithm, estimates the shape parameter representing the shape of the subject represented by the input image by computing a lower bound of the objective function over the shape set contained in the convex polytope, thereby searching for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  14.  The image processing method according to claim 13, wherein, in the step in which the segmentation means extracts the region of the subject from the input image, the segmentation means estimates the shape parameter representing the shape of the subject represented by the input image by repeatedly dividing the convex polytope on the eigenspace so that the volumes of the two convex polytopes after division correspond, and searching for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  15.  The image processing method according to claim 13 or 14, wherein, in the step in which the segmentation means extracts the region of the subject from the input image, the segmentation means sets sampling points on the eigenspace, estimates the shape parameter representing the shape of the subject represented by the input image by repeatedly dividing the convex polytope using hyperplanes determined from the sampling points set on the eigenspace and searching for the convex polytope on the eigenspace containing the point indicating the optimum shape parameter, and extracts the region of the subject from the input image using the shape of the specific object represented by the estimated shape parameter as prior knowledge.
  16.  The image processing method according to claim 13 or 14, wherein, in the step in which the segmentation means extracts the region of the subject from the input image, the segmentation means sets sampling points on the eigenspace arbitrarily.
  17.  The image processing method according to any one of claims 11 to 16, wherein the objective function includes, as the likelihood of the shape of the specific object, a monotonic function that varies monotonically with the value of a shape label at a pixel relative to the shape of the specific object represented by the shape parameter.
18.  The image processing method according to any one of claims 12 to 17, wherein the search algorithm is a branch-and-bound method and a graph cut method.
19.  A program for causing a computer to function as each means of the image processing apparatus according to any one of claims 1 to 9.
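Claim 18 pairs an outer branch-and-bound search over shape parameters with a graph cut that labels the pixels for each candidate shape. The inner graph-cut step can be sketched as below on a 1-D "image", using a minimal Edmonds-Karp max-flow; the terminal weights (`fg_cost`, `bg_cost`) and smoothness constant are illustrative placeholders, not the energy defined in the specification.

```python
# Illustrative sketch of the graph-cut labeling step named in claim 18:
# binary segmentation of a 1-D signal by an s-t minimum cut.
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dense capacity matrix; returns (flow, residual)."""
    n = len(cap)
    res = [row[:] for row in cap]
    flow = 0.0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow, res
        bott, v = float("inf"), t             # bottleneck capacity on the path
        while v != s:
            bott = min(bott, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                         # augment along the path
            res[parent[v]][v] -= bott
            res[v][parent[v]] += bott
            v = parent[v]
        flow += bott

def graph_cut_segment(pixels, fg_cost, bg_cost, smooth):
    """Binary labeling of a 1-D image by s-t min cut (1 = foreground)."""
    n = len(pixels)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, p in enumerate(pixels):
        cap[s][i] = bg_cost(p)    # paid if pixel i is cut to the background side
        cap[i][t] = fg_cost(p)    # paid if pixel i is cut to the foreground side
        if i + 1 < n:
            cap[i][i + 1] = cap[i + 1][i] = smooth   # boundary penalty
    _, res = max_flow(cap, s, t)
    # Pixels still reachable from s in the residual graph are foreground.
    seen = [False] * (n + 2)
    seen[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n + 2):
            if not seen[v] and res[u][v] > 1e-9:
                seen[v] = True
                q.append(v)
    return [1 if seen[i] else 0 for i in range(n)]
```

In the combined scheme of claim 18, a call like this would sit inside the branch-and-bound loop, scoring each candidate shape by the energy of its min-cut labeling.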
PCT/JP2015/073277 2014-08-22 2015-08-19 Image processing device, method, and program WO2016027840A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016544238A JP6661196B2 (en) 2014-08-22 2015-08-19 Image processing apparatus, method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014169911 2014-08-22
JP2014-169911 2014-08-22

Publications (1)

Publication Number Publication Date
WO2016027840A1 true WO2016027840A1 (en) 2016-02-25

Family

ID=55350782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/073277 WO2016027840A1 (en) 2014-08-22 2015-08-19 Image processing device, method, and program

Country Status (2)

Country Link
JP (1) JP6661196B2 (en)
WO (1) WO2016027840A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004188202A (en) * 2002-12-10 2004-07-08 Eastman Kodak Co Automatic analysis method of digital radiograph of chest part
US20090052756A1 (en) * 2007-08-21 2009-02-26 Siemens Corporate Research, Inc. System and method for global-to-local shape matching for automatic liver segmentation in medical imaging
WO2014052687A1 (en) * 2012-09-27 2014-04-03 Siemens Product Lifecycle Management Software Inc. Multi-bone segmentation for 3d computed tomography


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHIMIZU, A. ET AL.: "Automated pancreas segmentation from three-dimensional contrast-enhanced computed tomography", INT J CARS, 2009, DOI: 10.1007/s11548-009-0384-0 [retrieved on 2015-10-26] *

Also Published As

Publication number Publication date
JPWO2016027840A1 (en) 2017-06-01
JP6661196B2 (en) 2020-03-11

Similar Documents

Publication Publication Date Title
EP1514229B1 (en) Statistical model
CN104063876B (en) Interactive image segmentation method
US20130127847A1 (en) System and Method for Interactive Image-based Modeling of Curved Surfaces Using Single-view and Multi-view Feature Curves
Chan et al. Volumetric parametrization from a level set boundary representation with PHT-splines
JP2008511366A (en) Feature-weighted medical object contour detection using distance coordinates
JP2003180654A (en) Method for forming three-dimensional statistical shape model for left ventricle from non-dense two-dimensional contour input value, and program storing device which stores program for performing the method
US9741123B2 (en) Transformation of 3-D object for object segmentation in 3-D medical image
US9984311B2 (en) Method and system for image segmentation using a directed graph
US20120154397A1 (en) Method and system for generating mesh from images
Koehl et al. Automatic alignment of genus-zero surfaces
Hu et al. Surface segmentation for polycube construction based on generalized centroidal Voronoi tessellation
US9965698B2 (en) Image processing apparatus, non-transitory computer-readable recording medium having stored therein image processing program, and operation method of image processing apparatus
CN107516314B (en) Medical image hyper-voxel segmentation method and device
Heitz et al. Statistical shape model generation using nonrigid deformation of a template mesh
CN118052866A (en) Method and system for extracting central line of airway tree
CN111369662A (en) Three-dimensional model reconstruction method and system for blood vessels in CT (computed tomography) image
US8229247B1 (en) Method and apparatus for structure preserving editing in computer graphics
JP6661196B2 (en) Image processing apparatus, method, and program
Kluszczyński et al. Image segmentation by polygonal Markov fields
CN112581513B (en) Cone beam computed tomography image feature extraction and corresponding method
Whitaker et al. Isosurfaces and Level-Sets.
Shi et al. Fast and effective integration of multiple overlapping range images
Ehrhardt et al. Statistical shape and appearance models without one-to-one correspondences
WO2021075465A1 (en) Device, method, and program for three-dimensional reconstruction of subject to be analyzed
KR101340594B1 (en) Segmentation apparatus and method based on multi-resolution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15834260; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2016544238; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15834260; Country of ref document: EP; Kind code of ref document: A1)