WO2007015384A1 - Area extracting device and area extracting program - Google Patents

Area extracting device and area extracting program

Info

Publication number
WO2007015384A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
pixel
adjacent
initial
image
Application number
PCT/JP2006/314579
Other languages
French (fr)
Japanese (ja)
Inventor
Yoshinori Ohno
Original Assignee
Olympus Corporation
Application filed by Olympus Corporation filed Critical Olympus Corporation
Publication of WO2007015384A1 publication Critical patent/WO2007015384A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation

Definitions

  • the present invention relates to a region extraction device and a region extraction program that extract an image region corresponding to an imaging target from an input image.
  • the contour of a desired object is extracted based on the theory of "snakes," which uses the energy minimization principle.
  • the contour model formed by a continuous line is initialized, and an energy evaluation function that quantitatively expresses the curvature and change state of the contour model, deviations of edges and gradients on the image, etc. is defined.
  • the contour of the object is extracted by repeating the deformation of the contour model so as to minimize this energy evaluation function.
  • Patent Document 1: Japanese Patent Laid-Open No. 9-138471
  • Patent Document 2: Japanese Patent Laid-Open No. 8-329254
  • noise or the like in the image may be erroneously detected as an edge point, or the edge line may be interrupted where the edge point strength is weak, so there is a problem that it is difficult to stably detect a desired contour.
  • there has also been a problem that the above-described contour extraction apparatus may be unable to extract a contour faithful to the target object when the shape of the target object changes or when a plurality of various target objects are present.
  • because the contour model is split only when a contact or intersection of a plurality of line segments connecting the contour candidate points is detected, when a plurality of objects are adjacent and the contour candidate points cannot be detected, the contour of each adjacent object cannot be extracted, and therefore there is a problem that the adjacent objects cannot be individually identified.
  • the present invention has been made in view of the above, and an object thereof is to provide a region extraction device and a region extraction program that can stably and accurately extract an image region corresponding to an imaging target, without being affected by noise or the like in the acquired image and without depending on the state of the imaging target, such as deformation or adjacency.
  • an area extraction apparatus according to one aspect of the present invention extracts a target region, which is an image region corresponding to an imaging target, from an input image, and includes: smoothing means for generating a smoothed image by smoothing the input image; initial region detection means for detecting from the smoothed image, based on the smoothed pixel value of each pixel in the smoothed image, an initial region that is an image region including at least a part of the imaging target; and region deformation means for determining, based on the smoothed pixel values of the contour neighboring pixels of the initial region, whether or not the contour neighboring pixels are target region pixels constituting the target region, and deforming at least one of the size and shape of the initial region according to the determination result to form the target region.
  • an area extraction apparatus according to another aspect of the present invention extracts a target region, which is an image region corresponding to an imaging target, from an input image, and includes: smoothing means for generating a smoothed image by smoothing the input image; initial region detection means for detecting from the smoothed image, based on the smoothed pixel value of each pixel in the smoothed image, an initial region that is an image region including at least a part of the imaging target; edge detection means for generating an edge image by detecting edges included in the smoothed image; and region deformation means for determining, based on the edge pixel values corresponding to the contour neighboring pixels of the initial region, whether or not the contour neighboring pixels are target region pixels constituting the target region, and deforming at least one of the size and shape of the initial region according to the determination result to form the target region.
  • the region extraction device is characterized in that the initial region detection means detects each pixel having a smoothed pixel value larger than a predetermined value as the initial region.
  • the region extraction device is characterized in that the initial region detection means detects, as the initial region, a pixel group whose distribution of smoothed pixel values with respect to pixel position in the smoothed image satisfies a predetermined distribution-shape condition.
  • the region extraction device is characterized in that, in the above invention, the initial region detection means detects, as the initial region, a pixel group whose distribution shape is convex.
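As one concrete reading of the threshold-based detection described above, the sketch below labels each 4-connected group of pixels whose smoothed value exceeds a threshold as a separate initial region. The function name, threshold value, and 4-connectivity are illustrative assumptions; the patent does not prescribe them.

```python
import numpy as np
from collections import deque

def detect_initial_regions(smoothed, threshold):
    """Label each 4-connected group of pixels whose smoothed value
    exceeds `threshold` as one initial region (0 = background)."""
    h, w = smoothed.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if smoothed[sy, sx] > threshold and labels[sy, sx] == 0:
                current += 1                      # start a new initial region
                labels[sy, sx] = current
                queue = deque([(sy, sx)])
                while queue:                      # flood-fill the region
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and smoothed[ny, nx] > threshold
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels

# Two bright pixel groups on a dark background yield two initial regions.
smoothed = np.array([[0, 0, 0, 0, 0],
                     [0, 9, 9, 0, 8],
                     [0, 9, 0, 0, 8],
                     [0, 0, 0, 0, 0]], dtype=float)
labels = detect_initial_regions(smoothed, threshold=5)
```

Each nonzero label value in `labels` then plays the role of the unique region marker assigned by the labeling unit described later.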
  • the region extraction device is characterized in that the region deformation means determines that, among a series of adjacent pixels adjacent to the outside of the initial region, a pixel whose smoothed pixel value satisfies a predetermined condition with respect to a contour pixel constituting the contour of the initial region is the target region pixel, and deforms the initial region so as to incorporate the determined adjacent pixel.
  • the region extraction device is characterized in that the region deformation means determines that an adjacent pixel whose smoothed pixel value differs from that of the contour pixel by an amount within a predetermined range is the target region pixel.
  • the region extraction device is characterized in that the region deformation means determines that, among a series of adjacent pixels adjacent to the outside of the initial region, a pixel whose edge pixel value satisfies a predetermined condition with respect to a contour pixel constituting the contour of the initial region is the target region pixel, and deforms the initial region so as to incorporate the determined adjacent pixel.
  • the region extraction device is characterized in that the region deformation means determines that an adjacent pixel whose edge pixel value differs from that of the contour pixel by an amount within a predetermined range is the target region pixel.
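The edge-based determinations above presuppose an edge image derived from the smoothed image. The patent does not fix a particular edge detector; as one common choice, a Sobel-style gradient magnitude could serve. The following is a minimal sketch under that assumption.

```python
import numpy as np

def edge_image(smoothed):
    """Gradient-magnitude edge image of a smoothed 2-D array.
    Sobel kernels are one common choice; the patent leaves the
    edge detector unspecified."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                  # vertical-gradient kernel
    h, w = smoothed.shape
    out = np.zeros((h, w))
    padded = np.pad(smoothed, 1, mode="edge")  # replicate borders
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3]
            gx = np.sum(win * kx)
            gy = np.sum(win * ky)
            out[y, x] = np.hypot(gx, gy)       # gradient magnitude
    return out

# A vertical step edge produces large edge pixel values along the step
# and zero response in the flat areas.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = edge_image(img)
```

The resulting `edges` array supplies the "edge pixel values" that the edge-based region deformation means compares against the predetermined range.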
  • the region extraction device is characterized in that the region deformation means determines that, among a series of contour pixels constituting the contour of the initial region, a pixel whose smoothed pixel value satisfies the predetermined condition with respect to an adjacent pixel adjacent to the outside of the initial region is a pixel outside the target region, and deforms the initial region so as to remove the determined contour pixel.
  • the region extraction device is characterized in that the region deformation means determines that a contour pixel whose smoothed pixel value differs from that of the adjacent pixel by an amount within a predetermined range is a pixel outside the target region.
  • the region extraction device is characterized in that the region deformation means determines that, among a series of contour pixels constituting the contour of the initial region, a pixel whose edge pixel value satisfies a predetermined condition with respect to an adjacent pixel adjacent to the outside of the initial region is a pixel outside the target region, and deforms the initial region so as to remove the determined contour pixel.
  • the region extraction device is characterized in that the region deformation means determines that a contour pixel whose edge pixel value differs from that of the adjacent pixel by an amount within a predetermined range is a pixel outside the target region.
  • the region extraction device is characterized in that the region deformation means repeats the determination of whether or not a pixel is the target region pixel, and the accompanying deformation of the initial region, until the pixel value difference exceeds the predetermined range.
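The repeated determine-and-deform behavior described above amounts to region growing: pixels adjacent to the region are absorbed while their smoothed values stay within the predetermined range of a neighboring contour pixel, and the process repeats until no further pixel qualifies. A minimal sketch, with the difference threshold and 4-neighborhood as illustrative assumptions:

```python
import numpy as np

def grow_region(smoothed, region_mask, max_diff):
    """Repeatedly absorb pixels adjacent to the region whose smoothed
    value differs from a neighboring contour pixel by at most
    `max_diff`; stop when no adjacent pixel qualifies."""
    h, w = smoothed.shape
    mask = region_mask.copy()
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if mask[y, x]:
                    continue                   # already inside the region
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and mask[ny, nx]
                            and abs(smoothed[y, x] - smoothed[ny, nx]) <= max_diff):
                        mask[y, x] = True      # absorb this neighboring pixel
                        changed = True
                        break
    return mask

# Values fall gently from 9 to 7 (absorbed step by step), then drop
# sharply to 1, where growth stops because the difference exceeds 1.5.
smoothed = np.array([[9., 8., 7., 1.],
                     [9., 8., 7., 1.]])
seed = np.zeros_like(smoothed, dtype=bool)
seed[0, 0] = True
grown = grow_region(smoothed, seed, max_diff=1.5)
```

The sharp value drop acts as the stopping condition, so the grown region halts at the object boundary even though no explicit contour model is maintained.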
  • the region extraction device is characterized in that the region deformation means determines that an adjacent pixel that is adjacent to the outside of one initial region, is also adjacent to the outside of another initial region, and satisfies the predetermined condition is the target region pixel.
  • the region extraction device is characterized in that the region deformation means makes the determination of whether or not a pixel is the target region pixel for every adjacent pixel in the series of adjacent pixels adjacent to the outside of the initial region, or for a predetermined ratio or more of the adjacent pixels in that series.
  • the region extraction device further includes region integration means for detecting an adjacent region group formed by adjacent deformation regions from among the deformation regions, which are the image regions resulting from the deformation by the region deformation means, and for integrating the adjacent deformation regions to form the target region based on a feature amount indicating a feature between the adjacent deformation regions in the detected adjacent region group.
  • the region extraction device is characterized in that the region integration means calculates the feature amount and integrates the deformation regions based on the calculated feature amount.
  • the region extraction device is characterized in that the region integration means detects the adjacent region group by detecting, from among the contour neighboring pixels of each deformation region, a pixel that is included in a deformation region different from the deformation region being processed.
  • the region extraction device further includes region integration means for detecting an adjacent region group formed by adjacent deformation regions from among the deformation regions, which are the image regions resulting from the deformation by the region deformation means, calculating a feature amount indicating a feature between the deformation regions based on the smoothed pixel values of the boundary pixels indicating the boundary line between the deformation regions in the detected adjacent region group and of the boundary neighboring pixels in the vicinity of the boundary line, and integrating the deformation regions in the adjacent region group based on the calculated feature amount to form the target region.
  • the region extraction device according to claim 21 is characterized in that the region integration means calculates, as the feature amount, the average value of the differences between the smoothed pixel value of each boundary pixel on the boundary line and that of the boundary neighboring pixel in the vicinity of the boundary pixel.
  • the region extraction device further includes region integration means for detecting an adjacent region group formed by adjacent deformation regions from among the deformation regions, which are the image regions resulting from the deformation by the region deformation means, calculating a feature amount indicating a feature between the deformation regions based on the edge pixel values of the boundary pixels indicating the boundary line between the deformation regions in the detected adjacent region group and of the boundary neighboring pixels in the vicinity of the boundary line, and integrating the deformation regions in the adjacent region group based on the calculated feature amount to form the target region.
  • the region extraction device according to claim 23 is characterized in that the region integration means calculates, as the feature amount, the average value of the differences between the edge pixel value of each boundary pixel on the boundary line and that of the boundary neighboring pixel in the vicinity of the boundary pixel.
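The feature amount described in the claims above — the average difference between boundary pixels and their boundary neighboring pixels — and the integration decision it drives might be sketched as follows. The coordinate-pair representation of the boundary and the merge threshold are illustrative assumptions.

```python
import numpy as np

def boundary_feature(smoothed, boundary_pairs):
    """Average absolute smoothed-value difference over (boundary pixel,
    boundary-neighboring pixel) coordinate pairs along one boundary line."""
    diffs = [abs(smoothed[by, bx] - smoothed[ny, nx])
             for (by, bx), (ny, nx) in boundary_pairs]
    return float(sum(diffs) / len(diffs))

def should_merge(smoothed, boundary_pairs, merge_threshold):
    """Integrate two adjacent deformation regions when the boundary is
    weak, i.e. the feature amount stays below the threshold."""
    return bool(boundary_feature(smoothed, boundary_pairs) < merge_threshold)

# Hypothetical 2x4 smoothed image with a weak boundary between
# columns 1 and 2: the two regions differ by only 1 in value.
smoothed = np.array([[5., 5., 6., 6.],
                     [5., 5., 6., 6.]])
# Boundary pixels in column 2, paired with their neighbors across
# the boundary line in column 1.
pairs = [((0, 2), (0, 1)), ((1, 2), (1, 1))]
feature = boundary_feature(smoothed, pairs)
merge = should_merge(smoothed, pairs, merge_threshold=2.0)
```

A small feature amount indicates that the two deformation regions are likely parts of one imaging target that was over-split, which is exactly the case the integration means is meant to repair.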
  • the region extraction device is characterized in that the region integration means scans a series of adjacent pixels adjacent to the outside of each deformation region in the detected adjacent region group, and detects, as the boundary pixel, an adjacent pixel that is also adjacent to the outside of a deformation region different from the deformation region being processed.
  • the region extraction program according to claim 25 causes a region extraction device that extracts a target region, which is an image region corresponding to an imaging target, from an input image to execute: a smoothing procedure for generating a smoothed image by smoothing the input image; an initial region detection procedure for detecting from the smoothed image, based on the smoothed pixel value of each pixel in the smoothed image, an initial region that is an image region including at least a part of the imaging target; and a region deformation procedure for determining, based on the smoothed pixel value of each contour neighboring pixel of the initial region, whether or not each contour neighboring pixel is a target region pixel constituting the target region, and deforming at least one of the size and shape of the initial region according to the determination result to form the target region.
  • the region extraction program according to claim 26 causes the region extraction device to execute: a smoothing procedure for generating a smoothed image by smoothing the input image; an initial region detection procedure for detecting from the smoothed image, based on the smoothed pixel value of each pixel in the smoothed image, an initial region that is an image region including at least a part of the imaging target; an edge detection procedure for generating an edge image by detecting edges included in the smoothed image; and a region deformation procedure for determining, based on the edge pixel value of each pixel in the edge image corresponding to each contour neighboring pixel of the initial region, whether or not each contour neighboring pixel is a target region pixel constituting the target region, and deforming at least one of the size and shape of the initial region according to the determination result to form the target region.
  • FIG. 1 is a block diagram showing a configuration of a region extraction device according to a first exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram showing a detailed configuration of the region deforming section shown in FIG.
  • FIG. 3 is a block diagram showing a detailed configuration of the area integration unit shown in FIG. 1.
  • FIG. 4-1 is a diagram showing an example of an observation image input to the region extraction apparatus shown in FIG. 1.
  • FIG. 4-2 is a diagram showing the initial region detection image generated based on the observation image shown in FIG. 4-1.
  • FIG. 4-3 is a diagram showing the region deformation image generated based on the observation image shown in FIG. 4-1.
  • FIG. 4-4 is a diagram showing the region integration image generated based on the observation image shown in FIG. 4-1.
  • FIG. 5 is a flowchart showing a processing procedure performed by the region extracting apparatus shown in FIG.
  • FIG. 6-1 is a diagram for explaining the processing method of the smoothing process shown in FIG.
  • FIG. 6-2 is a diagram for explaining the processing method of the smoothing process shown in FIG.
  • FIG. 7 is a diagram for explaining a processing method of the initial region detection processing shown in FIG.
  • FIG. 8 is a flowchart showing a processing procedure for the region deformation processing shown in FIG.
  • FIG. 9 is a diagram for explaining a processing method of the region transformation process shown in FIG.
  • FIG. 10 is a flowchart of a process procedure of the area integration process shown in FIG.
  • FIG. 11 is a diagram for explaining a processing method of the region integration processing shown in FIG.
  • FIG. 12 is a block diagram showing a configuration of a region extracting device according to the second embodiment of the present invention.
  • FIG. 13 is a block diagram showing a detailed configuration of the area deforming unit shown in FIG.
  • FIG. 14 is a block diagram showing a detailed configuration of the area integration unit shown in FIG.
  • FIG. 15 is a flowchart showing a processing procedure performed by the region extracting apparatus shown in FIG.
  • FIG. 16 is a diagram showing an edge image generated by the edge detection process shown in FIG.
  • FIG. 1 is a block diagram showing the configuration of the area extracting apparatus 1 according to the first embodiment.
  • the region extraction device 1 includes an input unit 2 that receives input of various types of information such as images, an image processing unit 3 that processes the input images, an output unit 4 that outputs various types of information such as image displays, a storage unit 5 that stores various information such as images, and a control unit 6 that controls the processing and operation of each unit of the region extraction device 1.
  • the input unit 2, image processing unit 3, output unit 4 and storage unit 5 are electrically connected to the control unit 6.
  • the input unit 2 includes an imaging device realized using an imaging lens, an image sensor such as a CCD, and an A/D converter, and acquires an observation image generated by imaging with the imaging device. The input unit 2 also includes input keys, a mouse, a touch panel, switches, and the like, and receives input of various processing information to be processed by the region extraction device 1.
  • the input unit 2 may also include a communication interface such as USB or IEEE 1394, or an interface for a portable storage medium such as a flash memory, CD, DVD, or hard disk, through which it can acquire observation images.
  • the observation image input from the input unit 2 is, for example, an image obtained by imaging cells in living tissue that have been stained with a fluorescent dye.
  • the part of the cell on which the dye has acted is observed brightly.
  • the cell staining may be for staining the entire cell or for staining only a specific site such as a cell nucleus, actin, or cell membrane.
  • the dye used for staining is not limited to fluorescent dyes; any dye can be used as long as it makes the cell contrast clearer and does not alter the cell characteristics.
  • the observation image input from the input unit 2 may be an image of any form, such as a monochrome image, a color image, or a color difference image, as long as the cells to be imaged can be identified in the image.
  • the imaging target captured in the observation image is not limited to cells and may be an arbitrary object, such as a vehicle, a person, or an animal.
  • image data in which the portion where the imaging target exists is captured with high contrast, such as an image showing a temperature distribution, can also be used.
  • the image processing unit 3 includes a smoothing unit 7, an initial region detection unit 8, a region deformation unit 9, and a region integration unit 10.
  • the image processing unit 3 acquires and processes the observation image output from the input unit 2. Note that the observation image output from the input unit 2 can also be acquired and stored by the storage unit 5, and the image processing unit 3 can then acquire and process the observation image stored in the storage unit 5.
  • the smoothing unit 7 acquires the observation image output from the input unit 2 and smooths the observation image while preserving the structure of the pixel value distribution, such as edges, where pixel values change sharply. This smoothing removes random noise in the observation image. The smoothing unit 7 outputs the smoothed image generated as a result to the initial region detection unit 8, the region deformation unit 9, and the region integration unit 10. Note that the smoothing unit 7 can also output the smoothed image to the storage unit 5 via the control unit 6 for storage.
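The patent leaves the edge-preserving smoothing method unspecified. As one possible choice, a 3x3 median filter suppresses isolated noise while keeping step edges sharper than an averaging filter would; the sketch below is an illustration under that assumption, not the patent's prescribed filter.

```python
import numpy as np

def median_smooth(image):
    """3x3 median filter: removes isolated noise pixels while keeping
    step edges sharper than an averaging filter would."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")     # replicate borders
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

# A single noise spike is removed entirely: the median of a window
# containing one outlier among nine pixels ignores the outlier.
noisy = np.zeros((5, 5))
noisy[2, 2] = 100.0
smoothed = median_smooth(noisy)
```

An averaging filter would instead smear the spike over its neighbors, which is why an edge-preserving choice matters for the subsequent region detection.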
  • the initial region detection unit 8 detects, based on the smoothed pixel value of each pixel in the smoothed image acquired from the smoothing unit 7, an initial region, which is a rough image region with large pixel values corresponding to the imaging target in the smoothed image.
  • the initial region to be detected need only include at least a part of the corresponding imaging target; it may include only a part of the imaging target or the entire imaging target. However, the contour of the detected initial region must not intersect the contour of the imaging target.
  • the initial region detection unit 8 detects such an initial region for each imaging target to be extracted.
  • the initial region detection unit 8 generates initial region data in which various feature amounts such as the position, shape, and area of each detected initial region are associated, and outputs the initial region data to the region deformation unit 9. Note that the initial region detection unit 8 can also output and store the generated initial region data to the storage unit 5 via the control unit 6.
  • the region deformation unit 9 acquires the smoothed image from the smoothing unit 7 and the initial region data from the initial region detection unit 8, and deforms each initial region so that it matches the contour shape of the corresponding imaging target. Specifically, based on the smoothed pixel values of the contour neighboring pixels located in the vicinity of the contour of the initial region, the region deformation unit 9 determines whether or not each contour neighboring pixel is a pixel constituting the target region. It then deforms at least one of the size and shape of the initial region according to the determination result, and forms the target region as an image region that matches the contour shape of the imaging target.
  • FIG. 2 is a block diagram showing a detailed configuration of the area deforming unit 9.
  • the region deforming unit 9 includes a labeling unit 9a, a contour detecting unit 9b, a deformation determining unit 9c, and an end determining unit 9d.
  • the labeling unit 9a assigns a unique region marker to each initial region in the smoothed image.
  • the contour detection unit 9b refers to the area marker provided by the labeling unit 9a, and detects adjacent pixels adjacent to the outside of each initial region as contour neighboring pixels.
  • the deformation determination unit 9c determines whether each adjacent pixel is a target area pixel constituting the target area based on the smoothed pixel value, and deforms the initial area according to the determination result.
  • a deformation region, which is an image region obtained as a result of deformation by the deformation determination unit 9c, is temporarily regarded as a target region.
  • the end determination unit 9d determines whether or not to end the processing in the region deformation unit 9 according to the processing status of the deformation determination unit 9c. When the end determination unit 9d determines that the processing should end, it generates deformation region data in which various feature amounts such as the position, shape, and area of each deformation region are associated, and outputs the data to the region integration unit 10. The end determination unit 9d can also output the deformation region data to the storage unit 5 via the control unit 6 for storage.
  • the region integration unit 10 acquires the smoothed image from the smoothing unit 7 and the deformation region data from the region deformation unit 9, and detects an adjacent region group formed by the adjacent deformation regions. Then, based on the feature amount indicating the feature between adjacent deformation areas in the detected adjacent area group, the adjacent deformation areas are integrated to form a final target area.
  • FIG. 3 is a block diagram showing a detailed configuration of the region integration unit 10.
  • the region integration unit 10 includes a boundary detection unit 10a, a feature amount calculation unit 10b, and an integration determination unit 10c.
  • the boundary detection unit 10a detects boundary pixels indicating boundary lines between adjacent deformation regions with reference to the region markers of the deformation regions.
  • the feature amount calculation unit 10b calculates a feature amount between adjacent deformation regions corresponding to the boundary line based on the smoothed pixel values of the boundary pixel and the boundary vicinity pixel near the boundary line.
  • the integration determination unit 10c determines whether or not adjacent deformation regions should be integrated based on the feature amount calculated by the feature amount calculation unit 10b, and integrates the deformation regions according to the determination result to form the target region. When the integration determination unit 10c completes the integration of the deformation regions and ends the processing in the region integration unit 10, it generates target region data in which various feature amounts such as the position, shape, and area of each target region are associated as the processing result, and outputs the data to the output unit 4. At this time, the integration determination unit 10c regards each independent deformation region that has not been integrated as a final target region as it is, and outputs the deformation region data corresponding to that deformation region as target region data. The integration determination unit 10c can also output the target region data to the storage unit 5 via the control unit 6 for storage.
  • the output unit 4 includes a display device such as a CRT or a liquid crystal display, acquires the target region data output from the image processing unit 3, and displays it as image information and numerical information. The output unit 4 can display only one of the image information and the numerical information, or can display both simultaneously or by switching between them. The target region data output from the image processing unit 3 may also be acquired and stored by the storage unit 5, and the output unit 4 may acquire and display the target region data stored in the storage unit 5. In addition to the target region data, the output unit 4 can acquire and display observation images, smoothed images, initial region data, deformation region data, and the like from the image processing unit 3 or the storage unit 5.
  • the storage unit 5 is realized by using a ROM that stores various processing programs and the like, and a RAM that stores processing parameters, processing data, and the like for various processing.
  • the storage unit 5 stores a program for causing the image processing unit 3 to execute processing, that is, a region extraction program for causing the region extraction device 1 to extract a target region from the observed image.
  • the storage unit 5 may include a portable storage medium such as a flash memory, a CD, a DVD, and a hard disk as a removable storage unit.
  • the control unit 6 is realized using a CPU or the like that executes various processing programs stored in the storage unit 5.
  • the control unit 6 executes a region extraction program stored in the storage unit 5, and controls the processing and operation of each component included in the image processing unit 3 according to the region extraction program.
  • the control unit 6 performs control to display the target area data output from the image processing unit 3 on the output unit 4 as image information and numerical information.
  • the control unit 6 can also perform control to acquire an observation image, a smoothed image, initial region data, deformation region data, target region data, and the like from the storage unit 5 and display them on the output unit 4.
  • FIG. 5 is a flowchart showing the processing procedure of the region extraction process in which the region extraction apparatus 1 processes and displays the observation image by the control unit 6 executing the region extraction program.
  • FIGS. 4-1 to 4-4 are diagrams showing the processing results of the respective processing steps shown in FIG. 5, and are schematic diagrams showing, in order, an observation image, an initial region detection image, a region deformation image, and a region integration image.
  • FIG. 4-1 is also used to describe the smoothed image, which is the result of the smoothing process.
  • the region integration image shown in FIG. 4-4 shows the target regions finally determined as the result of the region integration process, and also serves as the target region image that is the result of the region extraction process.
  • Each image shown in FIGS. 4-2 to 4-4 is rendered by the control unit 6 based on the initial region data, the deformation region data, and the target region data, respectively. These images are not necessarily generated during actual processing, but are shown here as diagrams for explaining the progress of the region extraction process.
  • the input unit 2 captures a cell as an imaging target and acquires an observation image (step S101).
  • the input unit 2 acquires, as an observation image, an image obtained by imaging a plurality of cells, for example, as shown in FIG. 4-1.
  • the input unit 2 outputs the acquired observation image to the smoothing unit 7.
  • the smoothing unit 7 performs a smoothing process to generate a smoothed image by smoothing the observation image acquired from the input unit 2 (step S103).
  • in step S103, as shown in FIG. 4-1, the smoothing unit 7 performs smoothing while preserving the structure of the cell regions in the observation image, generating a smoothed image from which noise and the like in the observation image have been removed.
  • the smoothing unit 7 outputs the generated smoothed image to the initial region detecting unit 8, the region deforming unit 9, and the region integrating unit 10.
• The initial region detection unit 8 performs initial region detection processing, detecting an initial region corresponding to each cell from the smoothed image acquired from the smoothing unit 7 and generating initial region data for each detected initial region (step S105).
• In step S105, the initial region detection unit 8 detects an initial region including at least a part of each cell region, for example, as shown in FIG. 4-2.
  • the initial region detection unit 8 outputs the generated initial region data to the region deformation unit 9.
• The region deformation unit 9 performs region deformation processing: based on the smoothed pixel values of the smoothed image acquired from the smoothing unit 7, each initial region indicated by the initial region data acquired from the initial region detection unit 8 is deformed so as to match the contour shape of the imaging target, and deformation region data is generated for each deformation region obtained as the deformation result (step S107). In step S107, as shown in FIG. 4-3, the region deformation unit 9 forms deformation regions that match the contour shapes of the cells in the observation image by expanding and deforming each initial region stepwise. The region deformation unit 9 then outputs the generated deformation region data to the region integration unit 10.
• The region integration unit 10 performs region integration processing: it detects adjacent region groups based on the smoothed pixel values of the smoothed image acquired from the smoothing unit 7, calculates a feature amount between the adjacent deformation regions in each adjacent region group, integrates adjacent deformation regions with each other based on the calculated feature amount, and generates target region data as the integration result (step S109).
• In step S109, the region integration unit 10 integrates, for example, the adjacent deformation regions TA5 and TA6 in the region deformation image shown in FIG. 4-3 into a single target region OA5, as shown in FIG. 4-4.
• The region integration unit 10 regards the independent deformation regions TA1 to TA4 that are not integrated as the final target regions OA1 to OA4, respectively, and generates target region data for each target region.
  • the region integration unit 10 outputs the generated target region data to the output unit 4.
  • the output unit 4 displays at least one of image information and numerical information based on the target region data output from the region integration unit 10 as an extraction result of the region extraction process (step S111).
• The output unit 4 displays the target region image, for example, as shown in FIG. 4-4, when displaying the extraction result as image information.
• At this time, the output unit 4 displays each target region so that it can be distinguished by the region marker associated with it, for example by giving every pixel in a target region the same pixel value or the same color.
• After step S111, the control unit 6 ends the series of region extraction processing.
• Note that the control unit 6 can repeat the processing of steps S101 to S111 until, for example, predetermined instruction information for ending the processing is received.
• Alternatively, an observation image stored in the storage unit 5 can be acquired and the processing from step S103 onward can be executed.
• Step S111 can also be executed based on target region data stored in the storage unit 5, and, in addition to the target region data, based on the observation image, the smoothed image, the initial region data, the deformation region data, and the like.
• In step S103, the smoothing unit 7 refers to a neighboring 5 × 5 pixel pattern PA centered on the target pixel OP, which is the pixel to be processed in the observation image, as shown in FIG. 6.
• The smoothing unit 7 then divides the 5 × 5 pixel pattern PA into, for example, nine 3 × 3 pixel patterns PA1 to PA9, also as shown in FIG. 6, and calculates the variance of the pixel values of the selected pixels, indicated by diagonal lines, for each of PA1 to PA9. The smoothing unit 7 extracts the 3 × 3 pixel pattern showing the smallest variance, calculates the average of the pixel values of the selected pixels in the extracted 3 × 3 pixel pattern, and sets the calculated average as the smoothed pixel value of the target pixel OP.
• The smoothing unit 7 smooths the observation image by setting such a smoothed pixel value for every pixel constituting the observation image. The pixel pattern referred to for the target pixel OP need not be limited to 5 × 5 pixels, and the number of referenced pixels may be increased or decreased. Likewise, the divided pixel patterns within the referenced pattern need not be limited to 3 × 3 pixels, and the number of pixels in each pattern may be increased or decreased.
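The minimum-variance sub-pattern smoothing described above can be sketched as follows. This is a simplified illustration (a Kuwahara-style filter), not the patented implementation: every pixel of each 3 × 3 sub-pattern is used here, rather than only the "selected pixels" indicated by diagonal lines in the figure, and border handling is omitted.

```python
def smooth_pixel(img, y, x):
    """Smoothed value of the target pixel (y, x): among the nine 3x3
    sub-patterns inside its 5x5 neighborhood, pick the one with the
    smallest variance and return the mean of its pixels."""
    best_var = best_mean = None
    for dy in (-2, -1, 0):          # top-left corners of the 3x3 windows
        for dx in (-2, -1, 0):
            vals = [img[y + dy + j][x + dx + i]
                    for j in range(3) for i in range(3)]
            mean = sum(vals) / 9.0
            var = sum((v - mean) ** 2 for v in vals) / 9.0
            if best_var is None or var < best_var:
                best_var, best_mean = var, mean
    return best_mean

# A flat area on the left and a noisy area on the right: the flat
# sub-pattern has zero variance, so its mean (10) wins.
img = [[10, 10, 10, 90, 50],
       [10, 10, 10, 80, 60],
       [10, 10, 10, 70, 40],
       [10, 10, 10, 60, 30],
       [10, 10, 10, 50, 20]]
print(smooth_pixel(img, 2, 2))  # → 10.0
```

Because the averaging window is chosen on the low-variance side of any structure, edges of the cell regions are preserved while noise inside flat areas is removed.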
  • the smoothing method by the smoothing unit 7 need not be interpreted as being limited to the method described above.
• For example, the smoothed pixel value of the target pixel may be calculated by referring to the pixel values of the neighboring pixels within a predetermined range of the target pixel and computing a center-weighted average.
• Alternatively, the smoothed pixel value may be set using the k-nearest neighbor method: k pixels whose pixel values are closest to that of the target pixel are extracted from the neighboring pixels within a predetermined range of the target pixel, and the average of the pixel values of the extracted pixels is used as the smoothed pixel value of the target pixel.
• The smoothed pixel value may also be set using a selection averaging method: an edge within a predetermined range of the target pixel is detected, and the average of the pixel values of the neighboring pixels along the detected edge direction is taken as the smoothed pixel value of the target pixel. Furthermore, smoothing may be performed using a known filter such as a median filter or a bilateral filter.
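The k-nearest neighbor variant can be sketched as below, assuming a 3 × 3 neighborhood; the function name and neighborhood size are choices of this sketch, not of the patent.

```python
def knn_smooth(img, y, x, k):
    """k-nearest-neighbor smoothing: average of the k neighbor values
    whose pixel values are closest to the target pixel's value."""
    center = img[y][x]
    neigh = [img[y + dy][x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0)]
    neigh.sort(key=lambda v: abs(v - center))  # closest values first
    return sum(neigh[:k]) / float(k)

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(knn_smooth(img, 1, 1, 2))  # 4 and 6 are closest to 5 → 5.0
```

Selecting only value-similar neighbors keeps the average from mixing pixels across a region boundary.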
• In step S105, the initial region detection unit 8 detects, as initial regions, pixels having a smoothed pixel value larger than a predetermined value from among the pixels constituting the smoothed image. At this time, the initial region detection unit 8 determines, for each pixel, whether or not the smoothed pixel value is larger than the predetermined value, sets a value of “1” for pixels determined to be larger, and sets a value of “0” for pixels that are not. A set of pixels for which “1” is set is then detected as an initial region.
• Note that the values set for each pixel according to the determination result need not be limited to “1” and “0”; other numerals, letters, symbols, and the like may be used as long as the determination result can be distinguished.
• The predetermined value used as the criterion for the smoothed pixel value may be a fixed value for all pixels in the smoothed image, or it may be a variable value that depends on the position of the pixel to be determined in the smoothed image, on the smoothed pixel value, or the like.
• For example, the predetermined value may be an average pixel value in a pixel block of a predetermined size, or a value obtained using a known method such as the discriminant analysis method.
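The threshold-based initial region detection of step S105 can be sketched as follows, assuming a fixed global threshold and 4-connected grouping of the binary mask (the grouping rule is an assumption of this sketch).

```python
def detect_initial_regions(img, threshold):
    """Set 1 for pixels whose smoothed value exceeds the threshold and
    0 otherwise, then group 4-connected runs of 1s into initial regions."""
    h, w = len(img), len(img[0])
    mask = [[1 if img[y][x] > threshold else 0 for x in range(w)]
            for y in range(h)]
    regions, seen = [], set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and (y, x) not in seen:
                stack, region = [(y, x)], []   # flood fill one region
                seen.add((y, x))
                while stack:
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] == 1 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                regions.append(region)
    return mask, regions

smoothed = [[0, 9, 0],
            [0, 9, 0],
            [9, 0, 9]]
mask, regions = detect_initial_regions(smoothed, 5)
print(len(regions))  # → 3 separate initial regions
```

Each returned pixel set corresponds to one initial region that would receive its own region marker in the labeling processing of step S121.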
  • the initial region detection method by the initial region detection unit 8 need not be interpreted as being limited to the method described above.
• For example, a pixel group in which the distribution shape of the smoothed pixel values is convex may be detected as the initial region.
  • FIG. 7 shows the distribution shape of the smoothed pixel value, where the horizontal axis indicates the pixel position in the smoothed image, and the vertical axis indicates the smoothed pixel value.
• As shown in FIG. 7, the initial region detection unit 8 refers to pixels P2 and P3 that are symmetrically separated by a predetermined distance D from the target pixel P1. When the pixel value v1 of the target pixel P1 is larger than the average v23 of the pixel values v2 and v3 of the pixels P2 and P3, the target pixel P1 is detected as a pixel constituting an initial region. By repeating this detection over the entire smoothed image, the initial region detection unit 8 can detect, as an initial region, a pixel group in which the distribution shape of the smoothed pixel values is convex.
• The initial region detection unit 8 is not limited to detecting pixel groups whose smoothed-pixel-value distribution shape is convex; for example, it may detect a pixel group containing a local maximum of the distribution as the initial region. In any case, the initial region detection method by the initial region detection unit 8 is not limited to the methods described above, and various methods can be applied.
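The convexity test of FIG. 7 can be sketched in one dimension as below; the function name and the 1-D simplification are assumptions of this sketch (the device applies the test over the whole 2-D smoothed image).

```python
def is_convex_peak(row, i, d):
    """Pixel at index i belongs to an initial region if its value v1
    exceeds the average of the two pixels a distance d away on either
    side (the test illustrated by FIG. 7)."""
    v1, v2, v3 = row[i], row[i - d], row[i + d]
    return v1 > (v2 + v3) / 2.0

profile = [0, 1, 5, 1, 0]              # convex bump around index 2
print(is_convex_peak(profile, 2, 2))   # True: 5 > (0 + 0) / 2
print(is_convex_peak(profile, 1, 1))   # False: 1 > (0 + 5) / 2 fails
```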
  • FIG. 8 is a flowchart showing the processing procedure of the area transformation process.
  • the labeling unit 9a performs a labeling process for assigning a unique region marker to each initial region detected by the initial region detection unit 8 (step S121).
• The region marker assigned by the labeling unit 9a may use any notation, such as numerals, letters, or symbols, as long as it is unique.
• Next, the contour detection unit 9b detects the contour pixels indicating the contour of each initial region, and further detects the adjacent pixels, which are pixels adjacent to the contour pixels (step S123). The deformation determination unit 9c then determines whether or not each detected adjacent pixel is a pixel constituting the target region, and deforms the initial region according to the determination result (step S125). Thereafter, the end determination unit 9d determines whether or not the initial region was deformed in step S125 (step S127).
• When the initial region was deformed (step S127: Yes), the control unit 6 repeats the processing from step S123. On the other hand, when the initial region was not deformed (step S127: No), the control unit 6 determines whether or not all the initial regions have been processed (step S129). If not (step S129: No), the processing from step S123 is repeated for an unprocessed initial region. If all have been processed (step S129: Yes), the control unit 6 causes the end determination unit 9d to output the deformation region data, and then returns the processing to step S107.
• In step S123, the contour detection unit 9b refers to the region markers assigned by the labeling unit 9a and detects the series of pixels adjacent to the outside of each initial region as adjacent pixels. That is, the contour detection unit 9b scans the pixels to which no region marker has been assigned, determines whether a region marker has been assigned to any of the pixels adjacent in the vertical, horizontal, and diagonal directions, and, if so, detects the pixel being processed as an adjacent pixel. In this way, the contour detection unit 9b detects, for example, the series of pixels indicated by diagonal lines adjacent to the outside of the initial region IA1 in FIG. 9 as adjacent pixels.
  • FIG. 9 is a schematic diagram illustrating an initial region detection image, and is a diagram in which a part of the initial region is enlarged and displayed.
• In FIG. 9, each rectangular area indicates a pixel, and the areas IA1 and IA2 surrounded by thick lines indicate parts of different initial regions.
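The adjacent-pixel scan described above can be sketched as follows, assuming region markers are stored in a matrix with 0 meaning "no marker" (the flat-matrix representation is an assumption of this sketch).

```python
def adjacent_pixels(markers, label):
    """Unlabeled pixels that touch (8-neighborhood) a pixel carrying
    the given region marker, i.e. the pixels just outside the region."""
    h, w = len(markers), len(markers[0])
    out = []
    for y in range(h):
        for x in range(w):
            if markers[y][x] != 0:
                continue  # already part of some region
            if any(markers[y + dy][x + dx] == label
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy or dx)
                   and 0 <= y + dy < h and 0 <= x + dx < w):
                out.append((y, x))
    return out

markers = [[0, 0, 0],
           [0, 1, 0],   # one single-pixel initial region
           [0, 0, 0]]
print(len(adjacent_pixels(markers, 1)))  # all 8 surrounding pixels
```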
• In step S125, the deformation determination unit 9c determines, from among the series of adjacent pixels, that an adjacent pixel whose smoothed pixel value satisfies a predetermined condition with respect to the contour pixels constituting the contour of the initial region is a target region pixel, and expands and deforms the initial region so as to take in the adjacent pixel so determined.
• Specifically, the deformation determination unit 9c calculates the difference between the smoothed pixel values of the adjacent pixel and the contour pixel, and, when the calculated pixel value difference is within a predetermined range, determines that the adjacent pixel is a target region pixel.
• The initial region is then deformed so that the adjacent pixel determined to be a target region pixel is taken into the initial region, and the same region marker as that of the initial region is newly assigned to the captured adjacent pixel.
• On the other hand, the deformation determination unit 9c calculates the difference between the smoothed pixel values of the adjacent pixel and the contour pixel, and, when the calculated pixel value difference exceeds the predetermined range, does not take the adjacent pixel into the initial region. Further, when an adjacent pixel is adjacent to a plurality of different initial regions, the deformation determination unit 9c does not treat it as a pixel to be taken into any initial region. In other words, the deformation determination unit 9c determines that only an adjacent pixel that is adjacent to exactly one initial region and whose calculated pixel value difference is within the predetermined range is a target region pixel and a pixel to be taken into the initial region.
• The deformation determination unit 9c performs this processing of step S125 on the entire series of adjacent pixels detected by the contour detection unit 9b, but it can also be executed only for a predetermined proportion or more of the series of adjacent pixels, for example, only for adjacent pixels spaced at a predetermined interval.
• For example, as shown in FIG. 9, the deformation determination unit 9c calculates the pixel value differences between the pixel Px0, which is an adjacent pixel, and the pixels Px1 to Px3, which are contour pixels, and determines that the pixel Px0 is a target region pixel if the pixel value differences are within the predetermined range. More specifically, the deformation determination unit 9c determines that the pixel Px0 is a target region pixel when the pixel value differences between Px0 and Px1, between Px0 and Px2, and between Px0 and Px3 are all within the predetermined range.
• The deformation determination unit 9c performs the same determination processing for all the adjacent pixels indicated by diagonal lines, and then expands and deforms the initial region IA1 so as to take in each adjacent pixel determined to be a target region pixel. Note that the deformation determination unit 9c does not treat the pixels Px4 to Px6, which are adjacent to both initial regions IA1 and IA2, as pixels to be taken into either initial region.
• Alternatively, the deformation determination unit 9c can perform the determination processing only on, for example, every other adjacent pixel, indicated by circles among the adjacent pixels indicated by diagonal lines around the initial region IA1. Whether an unprocessed adjacent pixel not marked with a circle is a target region pixel may then be estimated from, for example, the determination result of a nearby pixel marked with a circle. Reducing the number of adjacent pixels to be processed in this way makes it possible to reduce the processing load and processing time of the region deformation processing.
• Note that the deformation determination unit 9c determines that the adjacent pixel to be processed is a target region pixel only when all of the plurality of pixel value differences calculated for that pixel are within the predetermined range.
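One pass of the expansion step described above — take in an unlabeled pixel only if it touches exactly one initial region and its value difference to every touching contour pixel stays within range — can be sketched as follows. The function name, the 8-neighborhood contact test, and the marker matrix layout are assumptions of this sketch, not the patented implementation.

```python
def grow_once(smoothed, markers, label, max_diff):
    """Expand the region `label` by one step; return pixels captured."""
    h, w = len(markers), len(markers[0])
    captured = []
    for y in range(h):
        for x in range(w):
            if markers[y][x] != 0:
                continue
            touching = [(y + dy, x + dx)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy or dx)
                        and 0 <= y + dy < h and 0 <= x + dx < w
                        and markers[y + dy][x + dx] != 0]
            if {markers[ny][nx] for ny, nx in touching} != {label}:
                continue  # not adjacent, or adjacent to several regions
            if all(abs(smoothed[y][x] - smoothed[ny][nx]) <= max_diff
                   for ny, nx in touching):
                captured.append((y, x))
    for y, x in captured:
        markers[y][x] = label  # assign the region's marker to new pixels
    return len(captured)

smoothed = [[10, 10, 50],
            [10, 10, 50],
            [10, 10, 50]]
markers = [[1, 0, 0],
           [0, 0, 0],
           [0, 0, 0]]
print(grow_once(smoothed, markers, 1, 5))  # 3 flat pixels captured
print(grow_once(smoothed, markers, 1, 5))  # 2 more on the next pass
print(grow_once(smoothed, markers, 1, 5))  # 0: the 50-valued column stops growth
```

Repeating the call until it returns 0 mirrors the loop of steps S123 to S127: the region stops growing exactly when no adjacent pixel satisfies the pixel-value-difference condition.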
• In step S127, the end determination unit 9d determines whether to end or continue the region deformation processing for the initial region being processed, depending on whether or not the initial region was deformed in the immediately preceding step S125.
• When the initial region was deformed immediately before, it is likely that the initial region still needs to be deformed, so continuation of the region deformation processing is determined. When it was not deformed, the need to further deform the initial region is low, so the end of the region deformation processing is determined.
• In this way, the region deformation unit 9 repeatedly determines whether or not each adjacent pixel is a target region pixel and deforms the initial region based on the determination results; the region deformation processing is repeated until the pixel value differences calculated by the deformation determination unit 9c exceed the predetermined range, that is, until the initial region is no longer deformed because there are no more adjacent pixels to be taken in.
• However, the present invention is not limited to this. For example, the region deformation processing can be ended when the number of adjacent pixels taken into the initial region falls below a predetermined number, or when the region deformation processing has been repeated a predetermined number of times.
• In the region deformation processing described above, the deformation determination unit 9c expands and deforms the initial region by newly taking in adjacent pixels; conversely, the initial region can be contracted and deformed by removing contour pixels from the initial region. In this case, when the pixel value difference between an adjacent pixel and a contour pixel is within the predetermined range, the contour pixel is determined to be a pixel outside the target region. When the initial region is contracted and deformed in this way, the initial region detection unit 8 detects each initial region so as to individually contain the region corresponding to each imaging target. Furthermore, in the region deformation processing described above, whether or not the adjacent pixel is a pixel constituting the target region is determined and the initial region is expanded and deformed.
• Alternatively, it is also possible to expand and deform the initial region by detecting pixels separated from the contour of the initial region by a predetermined distance and determining whether or not the detected pixels are pixels constituting the target region. Similarly, the initial region can be contracted and deformed by removing such pixels separated from the contour of the initial region, together with the contour pixels, from the initial region.
  • FIG. 10 is a flowchart showing the processing procedure of the region integration processing.
• In the region integration processing, first, the boundary detection unit 10a detects an adjacent region group based on the deformation region data (step S141), and detects the boundary pixels in the adjacent region group (step S143).
  • the feature amount calculation unit 10b calculates a feature amount between adjacent deformation regions in the adjacent region group (step S145).
• The integration determination unit 10c determines whether or not the calculated feature amount is smaller than a predetermined value (step S147), and, if it is smaller (step S147: Yes), integrates the adjacent deformation regions (step S149).
• Next, the control unit 6 determines whether or not all the deformation regions have been processed (step S151); when they have not (step S151: No), the processing from step S141 is repeated. When all have been processed (step S151: Yes), the control unit 6 causes the integration determination unit 10c to output the target region data, and then returns the processing to step S109. If it is determined in step S147 that the feature amount is not smaller than the predetermined value (step S147: No), the integration determination unit 10c does not integrate the deformation regions, and the control unit 6 performs the determination of step S151.
• In step S141, the boundary detection unit 10a detects an adjacent region group by referring to the pixels near the contour of each deformation region and detecting pixels belonging to a deformation region different from that deformation region. More specifically, the boundary detection unit 10a scans the series of adjacent pixels adjacent to the outside of a deformation region and, when a pixel carrying the region marker of a different deformation region is detected around those adjacent pixels, determines that the deformation region is adjacent to the other deformation region and detects the set of these deformation regions as an adjacent region group.
• In step S143, the boundary detection unit 10a scans the series of adjacent pixels adjacent to the outside of the deformation region, and detects, as boundary pixels between the adjacent deformation regions, those adjacent pixels that are adjacent to pixels included in a different deformation region. More specifically, while scanning the adjacent pixels, the boundary detection unit 10a refers, for each adjacent pixel, to the region markers of the eight pixels adjacent in the vertical, horizontal, and diagonal directions, and, if a pixel with a different region marker is detected, determines the adjacent pixel being processed to be a boundary pixel.
• In step S145, the feature amount calculation unit 10b calculates the feature amount between the adjacent deformation regions based on the smoothed pixel values of the boundary pixels detected in step S143 and of the boundary-neighboring pixels sandwiching each boundary pixel. More specifically, the feature amount calculation unit 10b calculates, as the feature amount between the adjacent deformation regions, the average of the smoothed-pixel-value differences between the boundary-neighboring pixels located on either side of each boundary pixel.
  • a pixel that is a predetermined number of pixels away from the boundary pixel to be processed in the normal direction of the boundary line is selected as the boundary neighboring pixel.
• For example, as shown in FIG. 11, the feature amount calculation unit 10b sets curves A′—B′ and A″—B″, separated from the boundary line A—B of the adjacent deformation regions AR1 and AR2 by a predetermined number of pixels in the normal direction, on opposite sides of the boundary line. Then, for each boundary pixel on the boundary line A—B, the difference between the smoothed pixel values of the corresponding pixels on the curves A′—B′ and A″—B″ in the normal direction is calculated. The feature amount calculation unit 10b calculates such a pixel value difference for all the boundary pixels on the boundary line A—B, and calculates the average of all the calculated pixel value differences as the feature amount between the deformation regions AR1 and AR2.
• Note that the feature amount calculation unit 10b can calculate, as the feature amount, not only the average but also statistics such as the maximum, the minimum, and the standard deviation of the pixel value differences.
• Although the feature amount calculation unit 10b calculates the feature amount based on the differences between smoothed pixel values, it can also calculate the feature amount based on, for example, the intersection angle of the contour lines of the adjacent deformation regions.
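The average-difference feature of FIG. 11 can be sketched as below, assuming each boundary pixel carries a precomputed unit normal of the boundary line (that representation is a simplification of this sketch, not part of the patent).

```python
def boundary_feature(smoothed, boundary, d):
    """Feature amount between two adjacent deformation regions: the
    average absolute difference of the smoothed values d pixels away
    on either side of each boundary pixel, along its normal."""
    diffs = []
    for (y, x), (ny_, nx_) in boundary:  # boundary pixel, unit normal
        a = smoothed[y + d * ny_][x + d * nx_]   # side of curve A"-B"
        b = smoothed[y - d * ny_][x - d * nx_]   # side of curve A'-B'
        diffs.append(abs(a - b))
    return sum(diffs) / float(len(diffs))

smoothed = [[10, 10, 10],
            [10, 10, 10],
            [30, 30, 30]]
# Horizontal boundary along row 1, normal pointing down (1, 0).
boundary = [((1, 0), (1, 0)), ((1, 1), (1, 0)), ((1, 2), (1, 0))]
print(boundary_feature(smoothed, boundary, 1))  # → 20.0
```

A small value (below the threshold of step S147) means the two sides of the boundary differ little, so the regions would be integrated by rewriting one region marker with the other.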
• In step S149, the integration determination unit 10c integrates the adjacent deformation regions in the adjacent region group when it is determined in step S147 that the feature amount is smaller than the predetermined value. At this time, the integration determination unit 10c rewrites the region marker of one of the adjacent deformation regions with the region marker of the other. The determination in step S147 is based on the idea that when the feature amount is small, the change between the adjacent deformation regions is small and they can be regarded as the same region.
• As described above, in the region extraction device 1 and the region extraction program according to the first embodiment, the input observation image is first smoothed and the subsequent processing is performed on the smoothed image, so that the target regions can be extracted stably and with high accuracy without being affected by noise contained in the observation image.
• In addition, a pixel group whose smoothed pixel values satisfy a predetermined condition in magnitude, distribution shape, or the like is detected as an initial region, so that, for example, even when a plurality of target regions are adjacent to each other, the individual initial regions corresponding to the target regions can be reliably detected.
• Furthermore, in the region extraction device 1 and the region extraction program according to the first embodiment, the target region is formed by deforming the initial region based on the entire series of adjacent pixels adjacent to the outside of the initial region, or on a predetermined proportion or more of that series. Therefore, even if the shape of the imaging target changes, a target region that matches the contour shape of the imaging target can be extracted with high accuracy regardless of the state of the imaging target.
• Moreover, since the initial region detection unit 8 detects an initial region corresponding to each imaging target, the region deformation unit 9 forms a deformation region by deforming each initial region, and the region integration unit 10 forms the target regions by integrating the deformation regions, the target regions can be extracted automatically without specifying in advance the number, positions, and so on of the imaging targets.
• Although the region extraction device 1 has been described as including the region integration unit 10 that integrates adjacent deformation regions, the region integration unit 10 is not always necessary when handling imaging targets whose regions need not be integrated. Even when the region integration unit 10 is provided, the region integration processing may be omitted according to, for example, predetermined instruction information. When the region integration processing is not performed in this way, each deformation region after the region deformation processing may be regarded as a target region, and the deformation region data may be used as the target region data for the region extraction result.
• Also, although the region integration processing has been described as being executed after the region deformation processing, the region integration processing may be incorporated into the region deformation processing. For example, the region deformation unit 9 may detect adjacent initial regions as needed in the course of sequentially deforming each initial region, and perform the region integration processing at the stage of detection.
• In the first embodiment described above, the region deformation processing and the region integration processing are performed based on the smoothed image; in the second embodiment described next, an edge contained in the smoothed image is detected, and the region deformation processing and the region integration processing are performed based on the detected edge image.
  • FIG. 12 is a block diagram showing a configuration of the area extracting device 21 according to the second embodiment of the present invention.
  • the region extraction device 21 includes an image processing unit 23 and a control unit 26 instead of the image processing unit 3 and the control unit 6 based on the configuration of the region extraction device 1.
• The image processing unit 23 is based on the configuration of the image processing unit 3, includes a region deformation unit 29 and a region integration unit 30 instead of the region deformation unit 9 and the region integration unit 10, and further includes an edge detection unit 31.
  • FIGS. 13 and 14 are block diagrams showing detailed configurations of the region deforming unit 29 and the region integrating unit 30, respectively.
  • the region deforming unit 29 includes a deformation determining unit 29c instead of the deformation determining unit 9c based on the configuration of the region deforming unit 9.
  • the region integration unit 30 includes a feature amount calculation unit 30b instead of the feature amount calculation unit 10b based on the configuration of the region integration unit 10.
  • the other configuration of the region extracting device 21 is the same as that of the first embodiment, and the same components are denoted by the same reference numerals.
• The edge detection unit 31 acquires the smoothed image output from the smoothing unit 7, detects edges contained in the acquired smoothed image by filtering, and generates an edge image indicating the detected edges.
• The edge detection unit 31 detects edges using, for example, a Sobel filter, a Laplacian filter, or a Prewitt filter, and outputs the generated edge image to the region deformation unit 29 and the region integration unit 30.
  • the edge detection unit 31 can also output and store the edge image in the storage unit 5 via the control unit 26.
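As one of the filters named above, Sobel filtering can be sketched as follows; this is a generic illustration, not the edge detection unit 31's actual implementation, and border pixels are simply left at 0 here.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with the 3x3 Sobel kernels."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical intensity step between columns 2 and 3.
img = [[0, 0, 0, 10, 10, 10] for _ in range(4)]
edges = sobel_magnitude(img)
print(edges[1][1], edges[1][3])  # 0.0 in the flat area, 40.0 at the step
```

The resulting edge pixel values are what the deformation determination unit 29c and the feature amount calculation unit 30b operate on in place of the smoothed pixel values.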
• The region deformation unit 29 executes the same processing as the region deformation unit 9, except for the following points. Whereas the region deformation unit 9 acquires the smoothed image from the smoothing unit 7 by the deformation determination unit 9c, the region deformation unit 29 acquires the edge image from the edge detection unit 31 by the deformation determination unit 29c. In addition, whereas the region deformation unit 9 determines, by the deformation determination unit 9c, whether or not a pixel is a target region pixel based on the smoothed pixel value, the region deformation unit 29 makes this determination, by the deformation determination unit 29c, based on the edge pixel value of each pixel in the edge image.
• The region integration unit 30 executes the same processing as the region integration unit 10, except for the following points. Whereas the region integration unit 10 acquires the smoothed image from the smoothing unit 7 by the feature amount calculation unit 10b, the region integration unit 30 acquires the edge image from the edge detection unit 31 by the feature amount calculation unit 30b. In addition, whereas the region integration unit 10 calculates the feature amount between adjacent deformation regions based on the smoothed pixel values by the feature amount calculation unit 10b, the region integration unit 30 calculates the feature amount based on the edge pixel values by the feature amount calculation unit 30b.
  • FIG. 15 is a flowchart showing a processing procedure of region extraction processing in which the region extraction device 21 processes and displays an observation image by the control unit 26 executing the region extraction program.
  • FIG. 16 is a schematic diagram showing an edge image generated by the edge detection unit 31 as a result of the edge detection process shown in FIG.
• FIG. 16 shows an example of an edge image generated from the smoothed image shown in FIG. 4-1.
• The input unit 2, the smoothing unit 7, and the initial region detection unit 8 sequentially perform acquisition of the observation image (step S161), smoothing processing (step S163), and initial region detection processing (step S165), in the same manner as steps S101 to S105 shown in FIG. 5.
• Next, the edge detection unit 31 performs edge detection processing for detecting edges of the smoothed image acquired from the smoothing unit 7 and generating an edge image (step S167).
• In step S167, the edge detection unit 31 detects and images the edges of the cell regions in the smoothed image shown in FIG. 4-1, for example, as shown in FIG. 16.
  • the edge detection unit 31 outputs the generated edge image to the region deformation unit 29 and the region integration unit 30. Note that the processing order of step S167 and step S165 may be exchanged.
  • Thereafter, the region deformation unit 29, the region integration unit 30, and the output unit 4 sequentially perform the region deformation process (step S169), the region integration process (step S171), and the display of extraction results (step S173) in the same manner as steps S107 to S111 shown in FIG. 5, after which the control unit 26 ends the series of region extraction processes.
  • In step S169, the region deformation unit 29 performs the determination of whether a pixel is a target region pixel not based on the smoothed pixel value as in step S107, but based on the edge pixel value. That is, from the series of adjacent pixels adjacent to the outside of the initial region, the deformation determination unit 29c determines that a pixel whose edge pixel value satisfies a predetermined condition with respect to a contour pixel is a target region pixel, and expands and deforms the initial region so as to incorporate that adjacent pixel.
  • Specifically, the deformation determination unit 29c calculates the pixel value difference of the edge pixel values between the adjacent pixel and the contour pixel, and determines that the adjacent pixel is a target region pixel when the calculated pixel value difference is within a predetermined range.
  • Conversely, when the calculated pixel value difference exceeds the predetermined range, the deformation determination unit 29c does not incorporate that adjacent pixel into the initial region. Furthermore, when an adjacent pixel is adjacent to a plurality of different initial regions, the deformation determination unit 29c does not regard it as a pixel to be incorporated into an initial region. In other words, the deformation determination unit 29c determines that only an adjacent pixel that is adjacent to a single initial region and whose calculated pixel value difference is within the predetermined range is a target region pixel to be incorporated into the initial region.
  • Note that, like the deformation determination unit 9c, the deformation determination unit 29c can also shrink and deform the initial region by removing contour pixels.
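The expansion rule just described (incorporate an outside pixel only when it touches exactly one initial region and its edge-pixel-value difference from the touching contour pixel is within a predetermined range) can be sketched as follows. This is an illustrative reading, not the patent's implementation; the 4-connectivity, the label-map representation, and all names are assumptions.

```python
import numpy as np

NEI = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-connectivity (an assumption)

def grow_regions_once(labels, edge_img, max_diff):
    """One expansion pass in the style of step S169 (a sketch): an outside
    pixel (label 0) is taken into an initial region only if (a) it touches
    exactly one region and (b) the edge-pixel-value difference to one
    touching contour pixel is within max_diff."""
    h, w = labels.shape
    out = labels.copy()
    for y in range(h):
        for x in range(w):
            if labels[y, x] != 0:
                continue  # already inside some region
            touching = set()
            contour_val = None
            for dy, dx in NEI:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != 0:
                    touching.add(labels[ny, nx])
                    contour_val = edge_img[ny, nx]  # one touching contour pixel
            if len(touching) == 1 and abs(edge_img[y, x] - contour_val) <= max_diff:
                out[y, x] = touching.pop()  # incorporate the adjacent pixel
    return out

# Two initial regions (labels 1 and 2) separated by a strong edge pixel.
labels = np.array([[1, 0, 0, 0, 2]])
edge = np.array([[0., 0., 9., 0., 0.]])
g1 = grow_regions_once(labels, edge, 1.0)
print(g1.tolist())  # [[1, 1, 0, 2, 2]]
g2 = grow_regions_once(g1, edge, 1.0)
print(g2.tolist())  # unchanged: the middle pixel now touches two regions
```

In the second pass the remaining pixel is adjacent to both regions, so it is never incorporated, which is exactly what keeps adjacent imaging targets separated at this stage.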
  • In step S171, instead of calculating the feature amount between adjacent deformation regions based on the smoothed pixel values as in step S109, the region integration unit 30 calculates this feature amount based on the edge pixel values. That is, the feature amount calculation unit 30b calculates the feature amount between the deformation regions sandwiching the boundary pixels, based on the edge pixel values of the boundary pixels and of the pixels near the boundary. More specifically, the feature amount calculation unit 30b calculates, as the feature amount between the adjacent deformation regions, the average value of the differences between the edge pixel values of each boundary pixel and the boundary neighboring pixels located in its vicinity.
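A minimal sketch of the feature amount just described (the average difference between the edge pixel values of the boundary pixels and their neighboring pixels) might look like this; the function name, the 8-neighborhood choice, and the boundary-pixel list format are assumptions, not details taken from the patent.

```python
import numpy as np

def boundary_feature(edge_img, boundary_px, neighborhood=1):
    """Feature amount between two adjacent deformation regions (a sketch of
    the unit-30b computation): the average absolute difference between each
    boundary pixel's edge value and the edge values of pixels near it."""
    h, w = edge_img.shape
    diffs = []
    for (y, x) in boundary_px:
        for ny in range(max(0, y - neighborhood), min(h, y + neighborhood + 1)):
            for nx in range(max(0, x - neighborhood), min(w, x + neighborhood + 1)):
                if (ny, nx) != (y, x):
                    diffs.append(abs(edge_img[y, x] - edge_img[ny, nx]))
    return float(np.mean(diffs))

# A real boundary (strong edge ridge) yields a large feature amount and
# blocks integration; a flat boundary yields zero and allows a merge.
edge_strong = np.array([[0, 5, 0]] * 3, dtype=float)
edge_flat = np.zeros((3, 3))
boundary = [(0, 1), (1, 1), (2, 1)]
print(boundary_feature(edge_flat, boundary))        # 0.0
print(boundary_feature(edge_strong, boundary) > 3)  # True
```

Comparing this feature amount against a threshold is then one natural way to decide whether the two deformation regions belong to the same imaging target and should be integrated.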
  • Note that the region extraction device 21 does not necessarily have to include the region integration unit 30 when it is unnecessary to integrate adjacent deformation regions or when adjacent imaging targets are not handled. Even when the region integration unit 30 is provided, the region integration process may be omitted, for example, according to predetermined instruction information.
  • As described above, the initial region is deformed and the deformation regions are integrated based on the edge pixel values; therefore, it is possible to shape and extract a target region that more precisely matches the contour shape of the imaging target.
  • Note that the pixel values used in the region extraction devices according to the first and second embodiments described above, such as the smoothed pixel value and the edge pixel value, include a luminance value, a gray value, a gradation value, an intensity value, and the like.
  • The edge pixel value also includes an edge strength value indicating the strength of a detected edge.
  • The region extraction devices according to the first and second embodiments can appropriately select and process these various values as the pixel value, according to the form of the input observation image.
  • As described above, the region extraction device and the region extraction program according to the present invention are useful for extracting an image region corresponding to an imaging target from an input image.
  • In particular, they are suitable for stably and highly accurately extracting the image regions corresponding to individual imaging targets.


Abstract

An area extracting device (1) for stably and accurately extracting an area of interest corresponding to a subject comprises: a smoothing section (7) for creating a smoothed image by smoothing an input image; an initial area detecting section (8) for detecting, from the smoothed image, an initial area including at least a part of the subject according to the smoothed pixel values of the pixels in the smoothed image; an area transforming section (9) for judging, on the basis of the smoothed pixel values of pixels near the contour of the initial area, whether each such pixel belongs to the area of interest, and varying at least one of the size and the shape of the initial area according to the judgment result to shape the transformed area; and an area integrating section (10) for detecting groups of adjoining transformed areas and shaping the area of interest by integrating adjoining transformed areas according to feature values indicating the features between them.
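As one concrete reading of the smoothing section (7), for illustration only, a simple mean filter could serve; the abstract does not fix a smoothing method, so the kernel choice and names here are assumptions.

```python
import numpy as np

def smooth(image, k=3):
    """Smoothing section (7) sketch: a k x k mean filter is one simple
    choice for suppressing pixel-level noise before region detection."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

noisy = np.array([[0, 0, 0], [0, 9, 0], [0, 0, 0]], dtype=float)
# Every 3x3 window (with edge padding) contains the spike exactly once,
# so the isolated noise spike is flattened to a uniform 1.0.
print(smooth(noisy))
```

Suppressing such spikes before initial area detection is what lets the later stages avoid mistaking noise for edge points, the failure mode of the prior art discussed in the description.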

Description

Region extraction apparatus and region extraction program
Technical field
[0001] The present invention relates to a region extraction device and a region extraction program that extract an image region corresponding to an imaging target from an input image.
Background art
[0002] Conventionally, image processing techniques for extracting an image region corresponding to a specific imaging target from an image acquired by a digital camera or an imaging device attached to a microscope have been used in various fields and have become an important technology (see, for example, Patent Documents 1 and 2).
[0003] In the region extraction method disclosed in Patent Document 1, when the contour of a person to be imaged is extracted from an image, a plurality of edge points are obtained using an edge detection filter, edge lines are formed by connecting adjacent edge points, and an edge line exhibiting an elliptical shape is determined to be the contour of the person and detected.
[0004] The contour extraction device disclosed in Patent Document 2 extracts the contour of a desired object based on the "snakes" theory, which uses the energy minimization principle. That is, a contour model formed by a continuous line is initialized, an energy evaluation function that quantitatively expresses the curvature and deformation state of the contour model and its deviation from edges and gradients in the image is defined, and the contour of the object is extracted by repeatedly deforming the contour model so as to minimize this energy evaluation function.
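For reference, the energy evaluation function alluded to here has, in the standard snakes formulation of Kass, Witkin, and Terzopoulos (the patent text itself does not give a formula), the form:

```latex
E_{\mathrm{snake}} \;=\; \int_0^1 \left[ \tfrac{1}{2}\!\left( \alpha \, \lvert v'(s) \rvert^2 + \beta \, \lvert v''(s) \rvert^2 \right) \;+\; E_{\mathrm{image}}\!\big(v(s)\big) \right] ds
```

where v(s) = (x(s), y(s)) is the contour model, the first (internal) term with weights α and β penalizes the contour's stretching and bending (the "curvature and change state" above), and the image term, typically E_image = −|∇I(v(s))|², pulls the contour toward edges and gradients in the image I; the contour is extracted by iteratively deforming v(s) to reduce this energy.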
[0005] In such a snakes-based method, the operator conventionally has to assume in advance a contour model for the object to be extracted and initialize the contour model so as to surround that object. For this reason, when a plurality of objects are to be extracted, or when objects are to be extracted from a plurality of images, the contour model must be initialized for each object or each image, and there is a problem that this work requires a great deal of time and effort.
[0006] In contrast, the contour extraction device disclosed in Patent Document 2 initializes a contour model that surrounds all the extraction targets in the image, and splits the contour model into a plurality of models when it contacts or intersects itself during the deformation process, thereby extracting the contours of a plurality of objects without requiring the contour model to be initialized for each object.
[0007] Patent Document 1: Japanese Patent Laid-Open No. 9-138471
Patent Document 2: Japanese Patent Laid-Open No. 8-329254
Disclosure of the invention
Problems to be solved by the invention
[0008] However, the region extraction method described above has a problem in that it is difficult to stably detect a desired contour, because noise or the like in the image may be erroneously detected as edge points, or an edge line may be interrupted where the edge points are weak.
[0009] Furthermore, the contour extraction device described above may fail to extract a contour that is faithful to the object when the shape of the object changes or when various objects are present in plurality. In addition, since this contour extraction device splits the contour model when contact or intersection of the line segments connecting the contour candidate points is detected, when a plurality of objects are adjacent and contour candidate points cannot be detected, the contour of each adjacent object cannot be extracted, and therefore the adjacent objects cannot be distinguished from one another.
[0010] The present invention has been made in view of the above, and an object thereof is to provide a region extraction device and a region extraction program capable of stably and highly accurately extracting the image region corresponding to each individual imaging target, without being affected by noise or the like in the acquired image and regardless of the state of the imaging target, such as deformation or adjacency.
Means for solving the problem
[0011] In order to achieve the above object, the region extraction device according to claim 1 is a region extraction device that extracts a target region, which is an image region corresponding to an imaging target, from an input image, and comprises: smoothing means for generating a smoothed image by smoothing the image; initial region detection means for detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, based on the smoothed pixel value of each pixel in the smoothed image; and region deformation means for determining, based on the smoothed pixel values of the contour neighboring pixels of the initial region, whether each contour neighboring pixel is a target region pixel constituting the target region, and deforming at least one of the size and the shape of the initial region according to the determination result to shape the target region.
[0012] The region extraction device according to claim 2 is a region extraction device that extracts a target region, which is an image region corresponding to an imaging target, from an input image, and comprises: smoothing means for generating a smoothed image by smoothing the image; initial region detection means for detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, based on the smoothed pixel value of each pixel in the smoothed image; edge detection means for generating an edge image in which the edges included in the smoothed image are detected; and region deformation means for determining, based on the edge pixel values of the pixels in the edge image corresponding to the contour neighboring pixels of the initial region, whether each contour neighboring pixel is a target region pixel constituting the target region, and deforming at least one of the size and the shape of the initial region according to the determination result to shape the target region.
[0013] In the region extraction device according to claim 3, the initial region detection means detects, as the initial region, the pixels having a smoothed pixel value larger than a predetermined value.
[0014] In the region extraction device according to claim 4, the initial region detection means detects, as the initial region, a pixel group in which the distribution shape of the smoothed pixel values with respect to the pixel positions in the smoothed image satisfies a predetermined condition.
[0015] In the region extraction device according to claim 5, the initial region detection means detects, as the initial region, a pixel group whose distribution shape is convex.
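Claims 3 and 5 admit a very small illustrative sketch (an assumption about how the predicates might be realized, not the patent's implementation): claim 3 as a simple threshold on smoothed pixel values, and claim 5's "convex distribution shape" read as a single-peaked profile.

```python
import numpy as np

def initial_regions_by_threshold(smoothed, t):
    """Claim-3 style: pixels whose smoothed pixel value exceeds t form the
    initial regions (returned as a boolean mask)."""
    return smoothed > t

def is_convex_profile(values):
    """Claim-5 style check on a 1-D profile of smoothed pixel values (a
    sketch; the claim only requires the distribution shape to be convex):
    the profile rises to a single peak and then falls."""
    peak = int(np.argmax(values))
    rising = all(values[i] <= values[i + 1] for i in range(peak))
    falling = all(values[i] >= values[i + 1] for i in range(peak, len(values) - 1))
    return rising and falling

row = np.array([1, 3, 7, 4, 2])  # one bright, cell-like bump
print(initial_regions_by_threshold(row, 2).tolist())  # [False, True, True, True, False]
print(is_convex_profile(row))               # True
print(is_convex_profile([1, 5, 2, 6, 1]))   # False: two peaks
```

The convexity test gives an initial region even for a bump whose absolute brightness is too low for a fixed threshold, which is one reason the claims offer both criteria.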
[0016] In the region extraction device according to claim 6, the region deformation means determines that, among a series of adjacent pixels adjacent to the outside of the initial region, a pixel whose smoothed pixel value satisfies a predetermined condition with respect to a contour pixel that is inside the initial region and constitutes its contour is a target region pixel, and deforms the initial region so as to incorporate the determined adjacent pixel.
[0017] In the region extraction device according to claim 7, the region deformation means determines that an adjacent pixel for which the pixel value difference of the smoothed pixel values between the adjacent pixel and the contour pixel is within a predetermined range is a target region pixel.
[0018] In the region extraction device according to claim 8, the region deformation means determines that, among a series of adjacent pixels adjacent to the outside of the initial region, a pixel whose edge pixel value satisfies a predetermined condition with respect to a contour pixel that is inside the initial region and constitutes its contour is a target region pixel, and deforms the initial region so as to incorporate the determined adjacent pixel.
[0019] In the region extraction device according to claim 9, the region deformation means determines that an adjacent pixel for which the pixel value difference of the edge pixel values between the adjacent pixel and the contour pixel is within a predetermined range is a target region pixel.
[0020] In the region extraction device according to claim 10, the region deformation means determines that, among a series of contour pixels that are inside the initial region and constitute its contour, a pixel whose smoothed pixel value satisfies a predetermined condition with respect to an adjacent pixel adjacent to the outside of the initial region is a pixel outside the target region, and deforms the initial region so as to remove the determined contour pixel.
[0021] In the region extraction device according to claim 11, the region deformation means determines that a contour pixel for which the pixel value difference of the smoothed pixel values between the adjacent pixel and the contour pixel is within a predetermined range is a pixel outside the target region.
[0022] In the region extraction device according to claim 12, the region deformation means determines that, among a series of contour pixels that are inside the initial region and constitute its contour, a pixel whose edge pixel value satisfies a predetermined condition with respect to an adjacent pixel adjacent to the outside of the initial region is a pixel outside the target region, and deforms the initial region so as to remove the determined contour pixel.
[0023] In the region extraction device according to claim 13, the region deformation means determines that a contour pixel for which the pixel value difference of the edge pixel values between the adjacent pixel and the contour pixel is within a predetermined range is a pixel outside the target region.
[0024] In the region extraction device according to claim 14, the region deformation means repeats the determination of whether a pixel is a target region pixel and the deformation of the initial region until the pixel value difference exceeds the predetermined range.
[0025] In the region extraction device according to claim 15, the region deformation means determines that an adjacent pixel that is adjacent to the outside of one initial region, is not adjacent to the outside of any other initial region, and satisfies the predetermined condition is a target region pixel.
[0026] In the region extraction device according to claim 16, the region deformation means performs the determination of whether a pixel is a target region pixel on all of the series of adjacent pixels adjacent to the outside of the initial region, or on a predetermined proportion or more of those adjacent pixels.
[0027] The region extraction device according to claim 17 further comprises region integration means for detecting, from among the deformation regions that are the image regions resulting from deformation by the region deformation means, an adjacent region group formed by adjacent deformation regions, and integrating the adjacent deformation regions to shape the target region, based on a feature amount indicating a feature between the adjacent deformation regions in the detected adjacent region group.
[0028] In the region extraction device according to claim 18, the region integration means calculates the feature amount and integrates the deformation regions based on the calculated feature amount.
[0029] In the region extraction device according to claim 19, the region integration means detects the adjacent region group by detecting, among the contour neighboring pixels of each deformation region, a pixel included in a deformation region different from the deformation region being processed.
[0030] The region extraction device according to claim 20 further comprises region integration means for detecting, from among the deformation regions that are the image regions resulting from deformation by the region deformation means, an adjacent region group formed by adjacent deformation regions, calculating a feature amount indicating a feature between the deformation regions based on the smoothed pixel values of the boundary pixels indicating the boundary line between the deformation regions in the detected adjacent region group and of the pixels near the boundary line, and integrating the deformation regions in the adjacent region group based on the calculated feature amount to shape the target region.
[0031] In the region extraction device according to claim 21, the region integration means calculates, as the feature amount, the average value of the differences between the smoothed pixel values of each boundary pixel on the boundary line and the boundary neighboring pixels near that boundary pixel.
[0032] The region extraction device according to claim 22 further comprises region integration means for detecting, from among the deformation regions that are the image regions resulting from deformation by the region deformation means, an adjacent region group formed by adjacent deformation regions, calculating a feature amount indicating a feature between the deformation regions based on the edge pixel values of the boundary pixels indicating the boundary line between the deformation regions in the detected adjacent region group and of the pixels near the boundary line, and integrating the deformation regions in the adjacent region group based on the calculated feature amount to shape the target region.
[0033] In the region extraction device according to claim 23, the region integration means calculates, as the feature amount, the average value of the differences between the edge pixel values of each boundary pixel on the boundary line and the boundary neighboring pixels near that boundary pixel.
[0034] In the region extraction device according to claim 24, the region integration means scans the series of adjacent pixels adjacent to the outside of each deformation region in the detected adjacent region group, and detects, as the boundary pixels, adjacent pixels that are adjacent to the outside of a deformation region different from the deformation region being processed.
[0035] The region extraction program according to claim 25 causes a region extraction device that extracts a target region, which is an image region corresponding to an imaging target, from an input image, to execute: a smoothing procedure for generating a smoothed image by smoothing the image; an initial region detection procedure for detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, based on the smoothed pixel value of each pixel in the smoothed image; and a region deformation procedure for determining, based on the smoothed pixel values of the contour neighboring pixels of the initial region, whether each contour neighboring pixel is a target region pixel constituting the target region, and deforming at least one of the size and the shape of the initial region according to the determination result to shape the target region.
[0036] The region extraction program according to claim 26 causes a region extraction device that extracts a target region, which is an image region corresponding to an imaging target, from an input image, to execute: a smoothing procedure for generating a smoothed image by smoothing the image; an initial region detection procedure for detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, based on the smoothed pixel value of each pixel in the smoothed image; an edge detection procedure for generating an edge image in which the edges included in the smoothed image are detected; and a region deformation procedure for determining, based on the edge pixel values of the pixels in the edge image corresponding to the contour neighboring pixels of the initial region, whether each contour neighboring pixel is a target region pixel constituting the target region, and deforming at least one of the size and the shape of the initial region according to the determination result to shape the target region.
Effects of the invention
[0037] According to the region extraction device and the region extraction program of the present invention, the image region corresponding to each individual imaging target can be stably and highly accurately extracted, without being affected by noise or the like in the acquired image and regardless of the state of the imaging target, such as deformation or adjacency.
Brief description of drawings
[0038] [FIG. 1] FIG. 1 is a block diagram showing the configuration of a region extraction device according to a first embodiment of the present invention.

[FIG. 2] FIG. 2 is a block diagram showing the detailed configuration of the region deformation unit shown in FIG. 1.

[FIG. 3] FIG. 3 is a block diagram showing the detailed configuration of the region integration unit shown in FIG. 1.

[FIG. 4-1] FIG. 4-1 is a diagram showing an example of an observation image input to the region extraction device shown in FIG. 1.

[FIG. 4-2] FIG. 4-2 is a diagram showing an initial region detection image generated from the observation image shown in FIG. 4-1.

[FIG. 4-3] FIG. 4-3 is a diagram showing a region deformation image generated from the observation image shown in FIG. 4-1.

[FIG. 4-4] FIG. 4-4 is a diagram showing a region integration image generated from the observation image shown in FIG. 4-1.

[FIG. 5] FIG. 5 is a flowchart showing the processing procedure performed by the region extraction device shown in FIG. 1.

[FIG. 6-1] FIG. 6-1 is a diagram explaining the processing method of the smoothing process shown in FIG. 5.

[FIG. 6-2] FIG. 6-2 is a diagram explaining the processing method of the smoothing process shown in FIG. 5.

[FIG. 7] FIG. 7 is a diagram explaining the processing method of the initial region detection process shown in FIG. 5.

[FIG. 8] FIG. 8 is a flowchart showing the processing procedure of the region deformation process shown in FIG. 5.

[FIG. 9] FIG. 9 is a diagram explaining the processing method of the region deformation process shown in FIG. 5.

[FIG. 10] FIG. 10 is a flowchart showing the processing procedure of the region integration process shown in FIG. 5.

[FIG. 11] FIG. 11 is a diagram explaining the processing method of the region integration process shown in FIG. 5.

[FIG. 12] FIG. 12 is a block diagram showing the configuration of a region extraction device according to a second embodiment of the present invention.

[FIG. 13] FIG. 13 is a block diagram showing the detailed configuration of the region deformation unit shown in FIG. 12.

[FIG. 14] FIG. 14 is a block diagram showing the detailed configuration of the region integration unit shown in FIG. 12.

[FIG. 15] FIG. 15 is a flowchart showing the processing procedure performed by the region extraction device shown in FIG. 12.

[FIG. 16] FIG. 16 is a diagram showing an edge image generated by the edge detection process shown in FIG. 15.
Explanation of Reference Numerals

1, 21 region extraction device
2 input unit
3, 23 image processing unit
4 output unit
5 storage unit
6, 26 control unit
7 smoothing unit
8 initial region detection unit
9, 29 region deformation unit
9a labeling unit
9b contour detection unit
9c, 29c deformation determination unit
9d end determination unit
10, 30 region integration unit
10a boundary detection unit
10b, 30b feature amount calculation unit
10c integration determination unit
31 edge detection unit
BEST MODE FOR CARRYING OUT THE INVENTION
[0040] Preferred embodiments of a region extraction device and a region extraction program according to the present invention will now be described in detail with reference to the accompanying drawings. The present invention is not limited to these embodiments. In the drawings, identical parts are denoted by identical reference numerals.
[0041] (Embodiment 1)

First, a region extraction device and a region extraction program according to a first embodiment of the present invention will be described. FIG. 1 is a block diagram showing the configuration of a region extraction device 1 according to the first embodiment. As shown in FIG. 1, the region extraction device 1 includes: an input unit 2 that receives input of various information such as images; an image processing unit 3 that processes the input image; an output unit 4 that outputs various information, such as displaying images; a storage unit 5 that stores various information such as images; and a control unit 6 that controls the processing and operation of each unit of the region extraction device 1. The input unit 2, the image processing unit 3, the output unit 4, and the storage unit 5 are electrically connected to the control unit 6.
[0042] The input unit 2 includes an imaging device realized using an imaging lens, an imaging element such as a CCD, and an A/D converter, and acquires an observation image captured and generated by this imaging device. The input unit 2 also includes input keys, a mouse, a touch panel, switches, and the like, and receives input of various processing information to be processed by the region extraction device 1.

[0043] Various forms of imaging device capable of generating digital images, such as a digital camera, a microscope, or a visual sensor, can be applied as the imaging device of the input unit 2. The input unit 2 may also include a communication interface such as USB or IEEE 1394, or an interface for portable storage media such as flash memory, CDs, DVDs, or hard disks, and may acquire observation images from an external device via these interfaces.

[0044] Here, the observation image input from the input unit 2 is assumed, as an example, to be an image of cells in living tissue that have been stained in advance with a fluorescent dye. In an observation image of such cells, the parts of the cells on which the dye has acted are observed brightly. The staining may stain the entire cell, or only a specific site such as the cell nucleus, actin, or the cell membrane. The dye used for staining is not limited to fluorescent dyes; any dye may be used as long as it sharpens the contrast of the cells and does not alter the characteristics of the cells other than their contrast.

[0045] The observation image input from the input unit 2 may be an image of any form, such as a monochrome image, a color image, or a color difference image, as long as the cells to be imaged can be identified in it. The imaging target captured in the observation image need not be interpreted as limited to cells; it may be any object, such as a vehicle, a person, or an animal. For example, when a person is the imaging target, image data in which the locations where the target exists appear with high contrast, such as an image showing a temperature distribution, can be used.
[0046] The image processing unit 3 includes a smoothing unit 7, an initial region detection unit 8, a region deformation unit 9, and a region integration unit 10, and acquires and processes the observation image output from the input unit 2. The storage unit 5 can also acquire and store the observation image output from the input unit 2, in which case the image processing unit 3 can acquire and process the observation image stored in the storage unit 5.

[0047] The smoothing unit 7 acquires the observation image output from the input unit 2 and smooths it while preserving the structure of the pixel value distribution that exhibits large pixel value changes, such as edges in the image. This smoothing removes random noise and the like from the observation image. The smoothing unit 7 generates a smoothed image as the result of smoothing the observation image and outputs it to the initial region detection unit 8, the region deformation unit 9, and the region integration unit 10. The smoothing unit 7 can also output the smoothed image to the storage unit 5 via the control unit 6 for storage.

[0048] Based on the smoothed pixel value of each pixel in the smoothed image acquired from the smoothing unit 7, the initial region detection unit 8 detects initial regions, which are rough image regions corresponding to the imaging targets, from the smoothed image. An initial region to be detected need only include at least a part of the corresponding imaging target; it may be a region including only a part of the imaging target or a region including the entire imaging target. However, the contour of a detected initial region is assumed not to intersect the contour of the imaging target. The initial region detection unit 8 detects such an initial region for each imaging target to be extracted. The initial region detection unit 8 also generates initial region data associating various feature quantities, such as the position, shape, and area of each detected initial region, and outputs the data to the region deformation unit 9. The initial region detection unit 8 can also output the generated initial region data to the storage unit 5 via the control unit 6 for storage.
[0049] The region deformation unit 9 acquires the smoothed image from the smoothing unit 7 and the initial region data from the initial region detection unit 8, and deforms each initial region so as to match the contour shape of the corresponding imaging target. Specifically, based on the smoothed pixel values of contour-neighborhood pixels located near the contour of an initial region, the region deformation unit 9 determines whether each contour-neighborhood pixel is a pixel constituting the target region. Then, according to the determination result, it deforms at least one of the size and the shape of the initial region, shaping the target region as an image region that matches the contour shape of the imaging target.

[0050] FIG. 2 is a block diagram showing the detailed configuration of the region deformation unit 9. As shown in FIG. 2, the region deformation unit 9 includes a labeling unit 9a, a contour detection unit 9b, a deformation determination unit 9c, and an end determination unit 9d. The labeling unit 9a assigns a unique region label to each initial region in the smoothed image. The contour detection unit 9b refers to the region labels assigned by the labeling unit 9a and detects, as contour-neighborhood pixels, the adjacent pixels bordering the outside of each initial region. The deformation determination unit 9c determines, based on the smoothed pixel values, whether each adjacent pixel is a target region pixel constituting the target region, and deforms the initial region according to the determination result. The image region resulting from this deformation by the deformation determination unit 9c, called a deformed region, is provisionally regarded as the target region.

[0051] The end determination unit 9d determines, according to the processing status of the deformation determination unit 9c, whether to end the processing in the region deformation unit 9. When the end determination unit 9d determines that the processing is to end, it generates deformed region data associating various feature quantities, such as the position, shape, and area of each deformed region, and outputs the data to the region integration unit 10. The end determination unit 9d can also output the deformed region data to the storage unit 5 via the control unit 6 for storage.
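The deformation step described in paragraphs [0049] to [0051] can be sketched as a region-growing pass: every unlabelled pixel 4-adjacent to a labelled region is tested for membership and, if it passes, absorbed into that region. The membership test used below (a plain threshold on the smoothed pixel value) and all function names are illustrative assumptions; the actual criterion applied by the deformation determination unit 9c is not specified in this excerpt.

```python
def grow_regions_one_step(smoothed, labels, threshold):
    """One deformation pass over a labelled image.

    smoothed  -- 2-D list of smoothed pixel values
    labels    -- 2-D list of region labels (0 = unlabelled)
    threshold -- assumed membership criterion (stand-in for unit 9c's test)
    Returns (grown labels, whether anything changed).
    """
    h, w = len(labels), len(labels[0])
    grown = [row[:] for row in labels]
    changed = False
    for y in range(h):
        for x in range(w):
            if labels[y][x] != 0:
                continue  # already part of a region
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != 0:
                    # contour-neighborhood pixel: absorb it if it passes the test
                    if smoothed[y][x] > threshold:
                        grown[y][x] = labels[ny][nx]
                        changed = True
                    break
    return grown, changed
```

Repeating the pass until `changed` is `False` mimics the stepwise expansion whose termination the end determination unit 9d decides.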
[0052] The region integration unit 10 acquires the smoothed image from the smoothing unit 7 and the deformed region data from the region deformation unit 9, and detects adjacent region groups formed by adjoining deformed regions. Then, based on a feature quantity expressing the relationship between adjoining deformed regions within a detected adjacent region group, it integrates those adjoining deformed regions to shape the final target region.

[0053] FIG. 3 is a block diagram showing the detailed configuration of the region integration unit 10. As shown in FIG. 3, the region integration unit 10 includes a boundary detection unit 10a, a feature amount calculation unit 10b, and an integration determination unit 10c. The boundary detection unit 10a refers to the region labels of the deformed regions and detects boundary pixels indicating the boundary line between adjoining deformed regions. The feature amount calculation unit 10b calculates the feature quantity between the adjoining deformed regions corresponding to a boundary line, based on the smoothed pixel values of the boundary pixels and of the boundary-neighborhood pixels near the boundary line.

[0054] The integration determination unit 10c determines, based on the feature quantity calculated by the feature amount calculation unit 10b, whether to integrate the adjoining deformed regions, and integrates the deformed regions according to the determination result to shape the target region. When the integration determination unit 10c completes the integration of the deformed regions and ends the processing in the region integration unit 10, it generates target region data associating various feature quantities, such as the position, shape, and area of each target region obtained as the processing result, and outputs the data to the output unit 4. At this time, the integration determination unit 10c regards each independent deformed region that was not integrated as a final target region as it is, and outputs the deformed region data corresponding to such a deformed region as target region data. The integration determination unit 10c can also output the target region data to the storage unit 5 via the control unit 6 for storage.
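A minimal sketch of the integration decision of paragraphs [0052] to [0054] follows. The feature quantity used here, the mean smoothed value along the shared boundary compared with the interior means of the two regions, is an assumption chosen for illustration; the text only states that a feature computed from the boundary pixels and boundary-neighborhood pixels drives the merge decision, and all names are hypothetical.

```python
def merge_if_similar(smoothed, labels, a, b, tol):
    """Decide whether adjoining deformed regions a and b should be integrated.

    Assumed feature quantity: the mean smoothed value of the boundary pixels
    between a and b. If the boundary is not significantly darker than the
    region interiors, the two regions are treated as one imaging target.
    Returns (labels after the decision, whether a merge happened).
    """
    h, w = len(labels), len(labels[0])

    def mean_of(pred):
        vals = [smoothed[y][x] for y in range(h) for x in range(w) if pred(y, x)]
        return sum(vals) / len(vals) if vals else 0.0

    def touches(y, x, lab):
        return any(0 <= ny < h and 0 <= nx < w and labels[ny][nx] == lab
                   for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)))

    mean_a = mean_of(lambda y, x: labels[y][x] == a)
    mean_b = mean_of(lambda y, x: labels[y][x] == b)
    boundary = mean_of(lambda y, x: (labels[y][x] == a and touches(y, x, b)) or
                                    (labels[y][x] == b and touches(y, x, a)))
    if boundary >= min(mean_a, mean_b) - tol:  # boundary not darker: merge b into a
        return [[a if v == b else v for v in row] for row in labels], True
    return labels, False
```

A dark valley along the boundary (as between two distinct stained cells) keeps the regions separate, while a uniformly bright boundary (one cell split by over-segmentation) triggers the merge.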
[0055] The output unit 4 includes a display device having a CRT, a liquid crystal display, or the like; it acquires the target region data output from the image processing unit 3 and displays it as image information and numerical information. In this display, the output unit 4 can display only one of the image information and the numerical information, or can display both simultaneously or by switching between them. The storage unit 5 may acquire and store the target region data output from the image processing unit 3, and the output unit 4 may acquire and display the target region data stored in the storage unit 5. Besides the target region data, the output unit 4 can also acquire and display the observation image, the smoothed image, the initial region data, the deformed region data, and the like from the image processing unit 3 or the storage unit 5.

[0056] The storage unit 5 is realized using a ROM in which various processing programs and the like are stored in advance, and a RAM that stores processing parameters, processing data, and the like for the various processes. In particular, the storage unit 5 stores a program for causing the image processing unit 3 to execute processing, that is, a region extraction program for causing the region extraction device 1 to extract target regions from the observation image. The storage unit 5 may also include a portable storage medium such as flash memory, a CD, a DVD, or a hard disk as a removable storage unit.

[0057] The control unit 6 is realized using a CPU or the like that executes the various processing programs stored in the storage unit 5. In particular, the control unit 6 executes the region extraction program stored in the storage unit 5 and, in accordance with this program, controls the processing and operation of each component of the image processing unit 3. The control unit 6 also performs control to display the target region data output from the image processing unit 3 on the output unit 4 as image information and numerical information. The control unit 6 can also perform control to acquire the observation image, the smoothed image, the initial region data, the deformed region data, the target region data, and the like from the storage unit 5 and display them on the output unit 4.
[0058] A processing procedure performed by the region extraction device 1 will now be described. FIG. 5 is a flowchart showing the processing procedure of the region extraction process in which the region extraction device 1 processes and displays the observation image as the control unit 6 executes the region extraction program. FIGS. 4-1 to 4-4 show the processing results of the processing steps shown in FIG. 5, and are schematic diagrams showing, in order, the observation image, the initial region detection image, the region deformation image, and the region integration image. FIG. 4-1 is also used to explain the smoothed image obtained as the result of the smoothing process. The region integration image shown in FIG. 4-4 is an image showing the target regions finally determined as the result of the region integration process, and is also the target region image obtained as the result of the region extraction process.

[0059] The images shown in FIGS. 4-2 to 4-4 are rendered by the control unit 6 based on the initial region data, the deformed region data, and the target region data, respectively; they are not necessarily generated during the region extraction process, but are shown here as diagrams explaining its progress.
[0060] As shown in FIG. 5, first, the input unit 2 images the cells to be imaged and acquires an observation image (step S101). In step S101, the input unit 2 acquires, as the observation image, an image of a plurality of cells, for example as shown in FIG. 4-1. The input unit 2 outputs the acquired observation image to the smoothing unit 7.

[0061] Next, the smoothing unit 7 performs a smoothing process that smooths the observation image acquired from the input unit 2 and generates a smoothed image (step S103). In step S103, as illustrated by FIG. 4-1, the smoothing unit 7 performs smoothing while preserving the structure of the cell regions in the observation image, and generates a smoothed image from which noise and the like have been removed. The smoothing unit 7 outputs the generated smoothed image to the initial region detection unit 8, the region deformation unit 9, and the region integration unit 10.

[0062] Subsequently, the initial region detection unit 8 performs an initial region detection process that detects the initial region corresponding to each cell from the smoothed image acquired from the smoothing unit 7 and generates initial region data for each detected initial region (step S105). In step S105, the initial region detection unit 8 detects initial regions each including at least a part of a cell region, for example as shown in FIG. 4-2. The initial region detection unit 8 outputs the generated initial region data to the region deformation unit 9.

[0063] Next, based on the smoothed pixel values of the smoothed image acquired from the smoothing unit 7, the region deformation unit 9 performs a region deformation process that deforms each initial region indicated by the initial region data acquired from the initial region detection unit 8 so as to match the contour shape of the imaging target, and generates deformed region data for each resulting deformed region (step S107). In step S107, as shown in FIG. 4-3, the region deformation unit 9 forms deformed regions that match the contour shapes of the cells in the observation image as the result of expanding each initial region in stages. The region deformation unit 9 outputs the generated deformed region data to the region integration unit 10.

[0064] Subsequently, based on the smoothed pixel values of the smoothed image acquired from the smoothing unit 7, the region integration unit 10 performs a region integration process that detects adjacent region groups, calculates the feature quantity between adjoining deformed regions within each adjacent region group, integrates the adjoining deformed regions based on the calculated feature quantity, and generates target region data as the integration result (step S109). In step S109, the region integration unit 10 integrates, for example, the adjoining deformed regions TA5 and TA6 in the region deformation image shown in FIG. 4-3 and shapes them into the target region OA5 as shown in FIG. 4-4. The region integration unit 10 regards the independent deformed regions TA1 to TA4, which were not integrated, as the final target regions OA1 to OA4, respectively, and generates target region data for each target region. The region integration unit 10 outputs the generated target region data to the output unit 4.

[0065] Next, the output unit 4 displays at least one of image information and numerical information based on the target region data output from the region integration unit 10, as the extraction result of the region extraction process (step S111). In step S111, when displaying the extraction result as image information, the output unit 4 displays a target region image such as that shown in FIG. 4-4. At this time, for each region label associated with a target region, the output unit 4 displays the region by, for example, setting all pixel values within the target region to the same value, rendering all its pixels in the same color, or filling the target region with a uniform pattern.

[0066] After step S111, the control unit 6 ends the series of region extraction processes. However, the control unit 6 can also repeat the processing of steps S101 to S111 until, for example, it receives instruction information for ending the processing. Instead of acquiring the observation image from the input unit 2 in step S101, the observation image stored in the storage unit 5 may be acquired and the processing from step S103 onward executed. Furthermore, instead of performing steps S101 to S109, step S111 may be executed based on the target region data stored in the storage unit 5. Step S111 may also be executed based on the observation image, the smoothed image, the initial region data, the deformed region data, and the like, in addition to the target region data.
[0067] Steps S103 to S109 shown in FIG. 5 will now be described more specifically. First, in step S103, as shown in FIG. 6-1, the smoothing unit 7 refers to the neighboring 5 x 5 pixel pattern PA centered on the pixel of interest OP, which is the pixel to be processed in the observation image, and sets the smoothed pixel value of the pixel of interest OP based on the variances and averages of the pixel values calculated for predetermined 3 x 3 pixel patterns within this 5 x 5 pixel pattern PA.

[0068] That is, the smoothing unit 7 divides the 5 x 5 pixel pattern PA into, for example, nine 3 x 3 pixel patterns PA1 to PA9 as shown in FIG. 6-2, and calculates, for each of the divided patterns PA1 to PA9, the variance of the pixel values of the selected pixels indicated by hatching. The smoothing unit 7 then extracts the 3 x 3 pixel pattern with the smallest variance, calculates the average of the pixel values of the selected pixels within that pattern, and sets the calculated average as the smoothed pixel value of the pixel of interest OP.

[0069] The smoothing unit 7 smooths the observation image by setting such a smoothed pixel value for every pixel constituting the observation image. The pixel pattern referred to for the pixel of interest OP need not be limited to 5 x 5 pixels; the number of referenced pixels may be increased or decreased. Likewise, the pixel patterns into which the referenced pattern is divided need not be limited to 3 x 3 pixels; the number of pixels in each pattern may be increased or decreased.
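Read as pseudocode, paragraphs [0067] to [0069] describe a Kuwahara-style edge-preserving filter. The sketch below implements that reading, assuming each of the nine 3 x 3 blocks contained in the 5 x 5 window is used whole; the exact hatched selection pattern of FIG. 6-2 is not recoverable from the text, so this is an approximation.

```python
def smooth_pixel(img, y, x):
    """Smoothed value for img[y][x]: among the nine 3x3 blocks inside the
    5x5 neighborhood, return the mean of the block with the lowest variance."""
    best_var, best_mean = None, None
    for dy in (-1, 0, 1):          # block-centre offsets within the 5x5 window
        for dx in (-1, 0, 1):
            vals = [img[y + dy + i][x + dx + j]
                    for i in (-1, 0, 1) for j in (-1, 0, 1)]
            mean = sum(vals) / 9.0
            var = sum((v - mean) ** 2 for v in vals) / 9.0
            if best_var is None or var < best_var:
                best_var, best_mean = var, mean
    return best_mean

def smooth_image(img):
    """Apply the edge-preserving smoothing to every interior pixel
    (a 2-pixel border is left unchanged for brevity)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            out[y][x] = smooth_pixel(img, y, x)
    return out
```

A single noise spike in the window is rejected because some 3 x 3 block excludes it and therefore has the lowest variance, which is exactly the structure-preserving behavior the text claims for this smoothing.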
[0070] さらに、平滑部 7による平滑化方法は、上述した方法に限定して解釈する必要はな い。例えば、注目画素に対して所定範囲内の各近傍画素の画素値を参照し、中心 部に重み付けした平均値を算出して、注目画素の平滑ィ匕画素値としてもよい。あるい は、 k—最近傍法を利用して平滑ィ匕画素値を設定してもよい。すなわち、注目画素に 対する所定範囲内の近傍画素の中から、注目画素の画素値に最も近い画素値を有 する k個の画素を抽出し、抽出した画素の画素値の平均値を算出して、注目画素の 平滑ィ匕画素値としてもよい。  [0070] Furthermore, the smoothing method by the smoothing unit 7 need not be interpreted as being limited to the method described above. For example, the smoothed pixel value of the target pixel may be calculated by referring to the pixel value of each neighboring pixel within a predetermined range with respect to the target pixel and calculating an average value weighted at the center. Alternatively, the smooth pixel value may be set using the k-nearest neighbor method. That is, k pixels having a pixel value closest to the pixel value of the target pixel are extracted from neighboring pixels within a predetermined range with respect to the target pixel, and an average value of the pixel values of the extracted pixels is calculated. The smooth pixel value of the target pixel may be used.
[0071] また、選択平均法を利用して平滑化画素値を設定してもよい。すなわち、注目画素に対して所定範囲内にあるエッジを検出し、検出したエッジ方向に沿った近傍画素が有する画素値の平均値を算出して、注目画素の平滑化画素値としてもよい。さらに、メディアンフィルタ、バイラテラルフィルタ等の公知のフィルタを用いて平滑化してもよい。  Alternatively, the smoothed pixel value may be set using a selective averaging method: an edge within a predetermined range of the pixel of interest is detected, and the average of the pixel values of the neighboring pixels along the detected edge direction is used as the smoothed pixel value of the pixel of interest. Smoothing may also be performed using a known filter such as a median filter or a bilateral filter.
[0072] つぎに、ステップS105の初期領域検出処理について説明する。ステップS105では、初期領域検出部8は、平滑化画像を構成する各画素の中から、所定値よりも大きい平滑化画素値を有する各画素を初期領域として検出する。このとき、初期領域検出部8は、画素毎に平滑化画素値が所定値よりも大きいか否かを判定し、大きいと判定した画素に「1」の値、大きくないと判定した画素に「0」の値を設定する。そして、「1」が設定された画素の集合を初期領域として検出する。  Next, the initial region detection process in step S105 will be described. In step S105, the initial region detection unit 8 detects, as an initial region, those pixels of the smoothed image whose smoothed pixel value exceeds a predetermined value. Specifically, the initial region detection unit 8 determines, for each pixel, whether its smoothed pixel value is larger than the predetermined value, sets a value of "1" for pixels judged larger and "0" for pixels judged not larger, and detects the set of pixels assigned "1" as the initial region.
[0073] なお、判定結果に応じて各画素に設定する値は、「1」、「0」に限定する必要はなく、判定結果が判別できるものであれば、他の数値、アルファベット、記号等を用いてもよい。また、平滑化画素値の判定基準とする所定値は、平滑化画像内のすべての画素に対して固定値としてもよいが、判定対象とする画素の平滑化画像内での位置、平滑化画素値等に応じた変動値としてもよい。あるいは、この所定値は、所定の大きさの画素ブロックにおける平均画素値としてもよく、判別分析法等の公知の方法を利用して求められる値としてもよい。  Note that the values set for each pixel according to the determination result need not be limited to "1" and "0"; other numerals, letters, symbols, and the like may be used as long as the determination result can be distinguished. The predetermined value used as the threshold for the smoothed pixel value may be fixed for all pixels in the smoothed image, or it may vary according to the position of the pixel under determination within the smoothed image, its smoothed pixel value, and so on. Alternatively, the predetermined value may be the average pixel value in a pixel block of a predetermined size, or a value obtained by a known method such as discriminant analysis.
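The binarization described above can be sketched as follows, assuming a grayscale numpy array for the smoothed image; the fixed `threshold` stands in for the predetermined value, which, as noted, could instead vary per pixel or come from discriminant analysis:

```python
import numpy as np

def detect_initial_region(smoothed: np.ndarray, threshold: float) -> np.ndarray:
    """Mark pixels whose smoothed value exceeds the threshold with 1
    and all others with 0; the 1-pixels form the initial regions."""
    return (smoothed > threshold).astype(np.uint8)
```

The connected components of the resulting 1-pixels are the individual initial regions, to which the labeling step S121 later assigns unique region labels.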
[0074] また、初期領域検出部8による初期領域の検出方法は、上述した方法に限定して解釈する必要はない。例えば図7に示すように、平滑化画素値の分布形状が所定の条件を満たす画素群を初期領域として検出するようにしてもよく、より具体的には、この分布形状が凸状である画素群を初期領域として検出してもよい。ここで、図7は、平滑化画素値の分布形状を示しており、横軸が平滑化画像内の画素位置を示し、縦軸が平滑化画素値を示している。  The initial region detection method of the initial region detection unit 8 also need not be limited to the method described above. For example, as shown in FIG. 7, a group of pixels whose smoothed-pixel-value distribution satisfies a predetermined condition may be detected as the initial region; more specifically, a group of pixels over which the distribution is convex may be detected as the initial region. FIG. 7 shows the distribution of the smoothed pixel values, with the horizontal axis indicating pixel position in the smoothed image and the vertical axis indicating the smoothed pixel value.
[0075] 初期領域検出部8は、例えば図7に示すように、注目画素P1に対して対称に所定間隔Dだけ離隔した画素P2, P3を参照する。そして、画素P2, P3が有する画素値v2, v3の平均画素値v23と比較し、注目画素P1の画素値v1が大きい場合に、注目画素P1を初期領域を構成する画素として検出する。この検出処理を平滑化画像全体に渡って繰り返すことによって、初期領域検出部8は、平滑化画素値の分布形状が凸状である画素群としての初期領域を検出することができる。  As shown in FIG. 7, for example, the initial region detection unit 8 refers to pixels P2 and P3 located symmetrically at a predetermined distance D on either side of the pixel of interest P1. It then compares the pixel value v1 of the pixel of interest P1 with the average v23 of the pixel values v2 and v3 of pixels P2 and P3, and when v1 is larger, detects the pixel of interest P1 as a pixel constituting an initial region. By repeating this detection process over the entire smoothed image, the initial region detection unit 8 can detect, as initial regions, pixel groups over which the smoothed-pixel-value distribution is convex.
[0076] また、初期領域検出部8は、平滑化画素値の分布形状が凸状である場合に限らず、例えば分布形状が局所的に極大値を示す画素群を初期領域として検出してもよい。なお、初期領域検出部8による初期領域の検出方法には、上述した方法に限定されず様々な方法が適用可能である。  The initial region detection unit 8 is not limited to the case where the smoothed-pixel-value distribution is convex; for example, it may detect, as an initial region, a group of pixels at which the distribution shows a local maximum. The method of detecting initial regions is not limited to those described above, and various other methods are applicable.
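A one-dimensional sketch of the convexity test of FIG. 7, assuming a numpy array of smoothed values along an image row; `d` stands in for the predetermined spacing D:

```python
import numpy as np

def detect_convex_pixels(values: np.ndarray, d: int = 2) -> np.ndarray:
    """Mark pixel i as belonging to an initial region when its value
    exceeds the average of the two values d pixels to either side
    (the test on P1 against the mean v23 of P2 and P3)."""
    n = len(values)
    mask = np.zeros(n, dtype=bool)
    for i in range(d, n - d):
        v23 = (values[i - d] + values[i + d]) / 2.0  # average of P2, P3
        mask[i] = values[i] > v23                    # convex at P1?
    return mask
```

In two dimensions the same comparison would be applied along each axis (or along several directions) for every pixel of the smoothed image.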
[0077] つぎに、ステップS107の領域変形処理について説明する。図8は、領域変形処理の処理手順を示すフローチャートである。図8に示すように、まず、標識部9aは、初期領域検出部8によって検出された各初期領域に固有の領域標識を付与するラベリング処理を行う(ステップS121)。ここで、標識部9aによって付与される領域標識は、固有のものであれば番号、アルファベット、記号等を用いた任意の表記でよい。  Next, the region deformation process in step S107 will be described. FIG. 8 is a flowchart showing the procedure of the region deformation process. As shown in FIG. 8, the labeling unit 9a first performs a labeling process that assigns a unique region label to each initial region detected by the initial region detection unit 8 (step S121). The region labels assigned by the labeling unit 9a may use any notation, such as numbers, letters, or symbols, as long as each label is unique.
[0078] つづいて、輪郭検出部9bは、各初期領域の輪郭を示す輪郭画素を検出し、さらに輪郭画素に隣接する画素である隣接画素の検出を行う(ステップS123)。変形判定部9cは、検出された隣接画素毎に、対象領域を構成する画素であるか否かを判別し、判別結果に応じて初期領域を変形する(ステップS125)。その後、終了判定部9dは、ステップS125において初期領域を変形させたか否かを判断する(ステップS127)。  Subsequently, the contour detection unit 9b detects contour pixels indicating the contour of each initial region, and further detects adjacent pixels, that is, pixels adjacent to the contour pixels (step S123). For each detected adjacent pixel, the deformation determination unit 9c determines whether it is a pixel constituting the target region, and deforms the initial region according to the determination result (step S125). The end determination unit 9d then determines whether the initial region was deformed in step S125 (step S127).
[0079] 初期領域を変形させた場合(ステップS127 : Yes)、制御部6は、ステップS123からの処理を繰り返す。一方、初期領域を変形していない場合(ステップS127 : No)、制御部6は、すべての初期領域を処理したか否かを判断し(ステップS129)、すべてを処理していない場合(ステップS129 : No)、処理していない初期領域に対してステップS123からの処理を繰り返す。すべてを処理している場合(ステップS129 : Yes)、制御部6は、終了判定部9dから変形領域データを出力させた後、ステップS107にリターンする。  When the initial region has been deformed (step S127: Yes), the control unit 6 repeats the processing from step S123. When the initial region has not been deformed (step S127: No), the control unit 6 determines whether all initial regions have been processed (step S129); if not all have been processed (step S129: No), the processing from step S123 is repeated for the unprocessed initial regions. When all have been processed (step S129: Yes), the control unit 6 causes the end determination unit 9d to output the deformation region data and then returns to step S107.
[0080] ステップS123では、輪郭検出部9bは、標識部9aによって付与された領域標識を参照し、各初期領域の外側に隣接する一連の画素を隣接画素として検出する。すなわち、輪郭検出部9bは、領域標識が付与されていない各画素について、縦横および斜め方向に隣接する画素に領域標識が付与されているか否かを検索し、領域標識が付与されている場合、この処理対象の画素を隣接画素と判定して検出する。このようにして、輪郭検出部9bは、例えば図9に示すように、初期領域IA1の外側に隣接する斜線で示された一連の画素を隣接画素として検出する。  In step S123, the contour detection unit 9b refers to the region labels assigned by the labeling unit 9a and detects the series of pixels adjacent to the outside of each initial region as adjacent pixels. That is, for each pixel with no region label, the contour detection unit 9b checks whether any of its vertically, horizontally, or diagonally adjacent pixels carries a region label, and if so, detects the pixel under consideration as an adjacent pixel. In this way, as shown in FIG. 9 for example, the contour detection unit 9b detects the series of hatched pixels adjacent to the outside of the initial region IA1 as adjacent pixels.
[0081] ここで、図9は、初期領域検出画像を例示する模式図であり、初期領域の一部を拡大表示した図である。図中、個々の矩形領域は画素を示し、太線で囲まれた領域IA1, IA2は、それぞれ異なる初期領域の一部を示している。  Here, FIG. 9 is a schematic diagram illustrating an initial region detection image, showing a part of the initial regions enlarged. In the figure, each rectangular cell represents a pixel, and the areas IA1 and IA2 surrounded by thick lines represent parts of two different initial regions.
[0082] ステップS125では、変形判定部9cは、一連の隣接画素のうち、初期領域内にあって該初期領域の輪郭を構成する輪郭画素との間で、平滑化画素値が所定条件を満たす画素を対象領域画素であると判別し、この判別した隣接画素を取り込むように初期領域を膨張変形する。より具体的には、変形判定部9cは、隣接画素と輪郭画素との間で平滑化画素値の画素値差を算出し、算出した画素値差が所定範囲内である場合に、この隣接画素を対象領域画素であると判別する。そして、この対象領域画素であると判別した隣接画素を初期領域に取り込むように変形を行う。なお、初期領域に取り込まれた隣接画素には、この初期領域と同じ領域標識が新たに付与される。  In step S125, the deformation determination unit 9c determines, among the series of adjacent pixels, that a pixel is a target region pixel when its smoothed pixel value satisfies a predetermined condition relative to the contour pixels, that is, the pixels inside the initial region that form its contour, and expands the initial region so as to take in the adjacent pixels so determined. More specifically, the deformation determination unit 9c calculates the difference in smoothed pixel value between the adjacent pixel and a contour pixel, and when the calculated difference is within a predetermined range, determines that the adjacent pixel is a target region pixel. The initial region is then deformed so as to take in the adjacent pixels determined to be target region pixels, and each adjacent pixel taken into the initial region is newly assigned the same region label as that initial region.
[0083] また、変形判定部9cは、隣接画素と輪郭画素との間で平滑化画素値の画素値差を算出し、算出した画素値差が所定範囲を超えている場合には、この隣接画素を初期領域に取り込む画素としない。さらに、変形判定部9cは、隣接画素が異なる複数の初期領域と隣接している場合には、この隣接画素を初期領域に取り込む画素としない。換言すると、変形判定部9cは、1つの初期領域のみに隣接する画素であって、算出した画素値差が所定範囲内にある隣接画素を対象領域画素であると判別し、初期領域に取り込む画素とする。  Conversely, when the calculated smoothed-pixel-value difference between an adjacent pixel and the contour pixels exceeds the predetermined range, the deformation determination unit 9c does not take that adjacent pixel into the initial region. Furthermore, when an adjacent pixel borders two or more different initial regions, the deformation determination unit 9c does not take it into any initial region. In other words, the deformation determination unit 9c determines that an adjacent pixel is a target region pixel, and takes it into the initial region, only when the pixel is adjacent to a single initial region and its calculated pixel value difference is within the predetermined range.
[0084] このようなステップS125における処理を、変形判定部9cは、輪郭検出部9bによって検出された一連のすべての隣接画素に対して実行する。ただし、変形判定部9cは、一連の隣接画素のうち所定割合以上の隣接画素に対してのみ実行することも可能であり、例えば所定間隔ずつ離隔した各隣接画素に対してのみ実行することが可能である。  The deformation determination unit 9c performs this processing of step S125 on all of the series of adjacent pixels detected by the contour detection unit 9b. However, the deformation determination unit 9c may instead perform it on only a predetermined proportion or more of the series of adjacent pixels, for example only on adjacent pixels spaced apart at a predetermined interval.
[0085] 例えば図9に示す初期領域IA1に対して、変形判定部9cは、隣接画素としての画素Px0と、輪郭画素としての画素Px1〜Px3との間で画素値差を算出し、算出した画素値差が所定範囲内である場合に、画素Px0を対象領域画素であると判別する。より具体的には、変形判定部9cは、画素Px0と画素Px1、画素Px0と画素Px2、画素Px0と画素Px3のそれぞれの画素間での画素値差がすべて所定範囲内である場合に、画素Px0を対象領域画素であると判別する。  For example, for the initial region IA1 shown in FIG. 9, the deformation determination unit 9c calculates the pixel value differences between pixel Px0, an adjacent pixel, and pixels Px1 to Px3, contour pixels, and determines that pixel Px0 is a target region pixel when the calculated differences are within the predetermined range. More specifically, the deformation determination unit 9c determines that pixel Px0 is a target region pixel when the pixel value differences between Px0 and Px1, between Px0 and Px2, and between Px0 and Px3 are all within the predetermined range.
[0086] そして、変形判定部9cは、同様の判別処理を斜線で示されるすべての隣接画素に対して行った後、対象領域画素であると判別した各隣接画素を初期領域IA1内に取り込むように、初期領域IA1を膨張変形する。なお、変形判定部9cは、2つの初期領域IA1, IA2に隣接する画素Px4〜Px6を、いずれの初期領域にも取り込む画素としない。  After performing the same determination for all of the hatched adjacent pixels, the deformation determination unit 9c expands the initial region IA1 so as to take in each adjacent pixel determined to be a target region pixel. Note that the deformation determination unit 9c does not take pixels Px4 to Px6, which are adjacent to both initial regions IA1 and IA2, into either initial region.
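One pass of this expansion step can be sketched as follows, assuming a numpy label map (0 = unlabeled) and the smoothed image; `max_diff` stands in for the predetermined range, and pixels touching two or more regions are left out, as with Px4 to Px6 above:

```python
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def grow_regions_once(labels: np.ndarray, smoothed: np.ndarray,
                      max_diff: float) -> bool:
    """Expand each labeled region by one ring of adjacent pixels.
    An unlabeled pixel is taken in only if it touches exactly one
    region and its value differs from every adjacent contour pixel
    by at most max_diff. Returns True if any pixel was added."""
    h, w = labels.shape
    additions = []
    for y in range(h):
        for x in range(w):
            if labels[y, x] != 0:
                continue
            touching = set()       # labels of regions this pixel borders
            contour_vals = []      # values of the adjacent contour pixels
            for dy, dx in NEIGHBORS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != 0:
                    touching.add(int(labels[ny, nx]))
                    contour_vals.append(smoothed[ny, nx])
            if len(touching) == 1 and all(
                    abs(smoothed[y, x] - v) <= max_diff for v in contour_vals):
                additions.append((y, x, touching.pop()))
    for y, x, lab in additions:    # take the pixels into their regions
        labels[y, x] = lab
    return bool(additions)
```

Calling this in a loop until it returns False reproduces the repeat-until-no-deformation behavior of steps S123 to S127.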
[0087] ここで、変形判定部9cは、初期領域IA1の斜線で示される隣接画素のうち、例えば〇印を付して示される1画素ずつ離隔した隣接画素に対してのみ判別処理を行うこともできる。この場合、〇印を付していない処理対象外の隣接画素が対象領域画素であるか否かの判別は、例えば近傍の〇印を付した隣接画素に対する判別結果から推定して行うとよい。このように処理対象とする隣接画素を削減することによって、領域変形処理における処理負荷および処理時間を軽減することができる。  Here, the deformation determination unit 9c may also perform the determination only on every other adjacent pixel among the hatched adjacent pixels of the initial region IA1, for example on those marked with a circle. In that case, whether an unmarked adjacent pixel excluded from processing is a target region pixel may be estimated from the determination results for the nearby circled adjacent pixels. Reducing the number of adjacent pixels to be processed in this way reduces the processing load and processing time of the region deformation process.
[0088] なお、対象領域であるか否かの判別処理では、変形判定部9cは、処理対象の隣接画素に対して算出した複数の画素値差におけるすべての画素値差が所定範囲内である場合以外にも、例えば所定数以上もしくは所定割合以上の画素値差が所定範囲内である場合に、この処理対象の画素を対象領域画素であると判別するようにしてもよい。  In the determination of whether a pixel belongs to the target region, the deformation determination unit 9c need not require that all of the pixel value differences calculated for the adjacent pixel under processing be within the predetermined range; for example, it may determine that the pixel is a target region pixel when a predetermined number or a predetermined proportion of the differences are within the range.
[0089] ステップS127では、終了判定部9dは、直前のステップS125によって初期領域の変形が行われたか否かによって、処理対象の初期領域に対する領域変形処理の終了または継続を判定する。ここで、直前に初期領域の変形が行われた場合には、さらに初期領域を変形する必要性が高いものとして、領域変形処理の継続を判定する。一方、初期領域の変形が行われなかった場合には、さらに初期領域を変形する必要性が低いものとして、領域変形処理の終了を判定する。  In step S127, the end determination unit 9d decides whether to end or continue the region deformation process for the initial region under processing according to whether the initial region was deformed in the immediately preceding step S125. If the initial region was just deformed, the need for further deformation is considered high, and continuation of the region deformation process is decided. Conversely, if the initial region was not deformed, the need for further deformation is considered low, and the region deformation process is ended.
[0090] このようにして、領域変形部9では、変形判定部9cによって算出された画素値差が所定範囲を超えるまで、各隣接画素が対象領域画素であるか否かの判別、およびこの判別結果に基づいた初期領域の変形を繰り返すこととしている。  In this way, the region deformation unit 9 repeats the determination of whether each adjacent pixel is a target region pixel, and the deformation of the initial region based on the determination results, until the pixel value differences calculated by the deformation determination unit 9c exceed the predetermined range.
[0091] なお、ここでは、領域変形処理を、初期領域の変形が行われなくなるまで、すなわち初期領域に取り込まれる隣接画素がなくなるまで繰り返すこととしたが、これに限定して解釈する必要はなく、例えば、初期領域に取り込まれる隣接画素数が所定数以下となった場合、あるいは領域変形処理を所定回数だけ繰り返した場合などに、領域変形処理を終了することもできる。  Here, the region deformation process is repeated until the initial region is no longer deformed, that is, until no more adjacent pixels are taken into the initial region; however, this is not a limitation. For example, the region deformation process may instead be ended when the number of adjacent pixels taken into the initial region falls to or below a predetermined number, or when the process has been repeated a predetermined number of times.
[0092] また、上述した領域変形処理では、変形判定部9cは、隣接画素を新たに取り込むことによって初期領域を膨張変形することとしたが、これとは逆に、輪郭画素を初期領域から取り除くことによって初期領域を収縮変形させることもできる。この場合、隣接画素と輪郭画素との画素値差が所定範囲内である場合に、この輪郭画素を対象領域外の画素であると判別する。なお、このように初期領域を収縮変形させる場合には、初期領域検出部8は、各撮像対象に対応する領域を個別に包含するように初期領域を検出する。  In the region deformation process described above, the deformation determination unit 9c expands the initial region by taking in adjacent pixels; conversely, the initial region may instead be contracted by removing contour pixels from it. In this case, a contour pixel is determined to lie outside the target region when the pixel value difference between the adjacent pixel and the contour pixel is within the predetermined range. When the initial region is contracted in this way, the initial region detection unit 8 detects each initial region so that it individually encloses the region corresponding to each imaging target.
[0093] さらに、上述した領域変形処理では、隣接画素が対象領域を構成する画素であるか否かを判別して初期領域を膨張変形するようにしていたが、隣接画素の替わりに、初期領域の輪郭から所定間隔外側に離隔した画素を検出し、この検出した画素が対象領域を構成する画素であるか否かを判別して初期領域を膨張変形することもできる。また、初期領域を収縮変形させる場合には、輪郭画素の替わりに、初期領域の輪郭から所定間隔内側に離隔した画素を用い、この画素と輪郭画素とを初期領域から取り除くようにして初期領域を収縮変形することもできる。  Furthermore, although the region deformation process described above expands the initial region by determining whether the adjacent pixels constitute the target region, it is also possible, instead of using adjacent pixels, to detect pixels separated outward from the contour of the initial region by a predetermined distance, determine whether the detected pixels constitute the target region, and expand the initial region accordingly. Likewise, when contracting the initial region, pixels separated inward from the contour by a predetermined distance may be used instead of the contour pixels, and the initial region may be contracted by removing both these pixels and the contour pixels from it.
[0094] つぎに、ステップS109の領域統合処理について説明する。図10は、領域統合処理の処理手順を示すフローチャートである。図10に示すように、まず、境界検出部10aは、変形領域データをもとに、隣接領域群を検出し(ステップS141)、隣接領域群内の境界画素を検出する(ステップS143)。つづいて、特徴量算出部10bは、隣接領域群内の隣接した変形領域間の特徴量を算出する(ステップS145)。その後、統合判定部10cは、算出された特徴量が所定値より小さいか否かを判断し(ステップS147)、所定値より小さい場合(ステップS147 : Yes)、隣接した変形領域を統合する(ステップS149)。  Next, the region integration process in step S109 will be described. FIG. 10 is a flowchart showing the procedure of the region integration process. As shown in FIG. 10, the boundary detection unit 10a first detects groups of adjacent regions based on the deformation region data (step S141) and detects the boundary pixels within each adjacent region group (step S143). The feature amount calculation unit 10b then calculates a feature amount between the adjacent deformation regions in the group (step S145). The integration determination unit 10c then determines whether the calculated feature amount is smaller than a predetermined value (step S147), and if so (step S147: Yes), integrates the adjacent deformation regions (step S149).
[0095] そして、制御部6は、すべての変形領域を処理したか否かを判断し(ステップS151)、すべてを処理していない場合(ステップS151 : No)、処理していない変形領域に対してステップS141からの処理を繰り返す。すべてを処理している場合(ステップS151 : Yes)、制御部6は、統合判定部10cから対象領域データを出力させた後、ステップS109にリターンする。なお、ステップS147によって特徴量が所定値より小さくないと判断された場合には(ステップS147 : No)、統合判定部10cは、変形領域の統合を行わず、制御部6は、ステップS151の判断を行う。  The control unit 6 then determines whether all deformation regions have been processed (step S151); if not (step S151: No), the processing from step S141 is repeated for the unprocessed deformation regions. When all have been processed (step S151: Yes), the control unit 6 causes the integration determination unit 10c to output the target region data and then returns to step S109. When step S147 determines that the feature amount is not smaller than the predetermined value (step S147: No), the integration determination unit 10c does not integrate the deformation regions, and the control unit 6 proceeds to the determination of step S151.
[0096] ステップS141では、境界検出部10aは、変形領域の輪郭近傍画素を参照し、輪郭近傍画素の中から、この変形領域と異なる変形領域に隣接する画素を検出することによって隣接領域群を検出する。より具体的には、境界検出部10aは、変形領域の外側に隣接する一連の隣接画素を走査し、隣接画素の周囲に、この変形領域と異なる変形領域の領域標識を有する画素を検出した場合、この変形領域が他の変形領域と隣接しているものと判定し、これら変形領域の組を隣接領域群として検出する。  In step S141, the boundary detection unit 10a refers to the pixels near the contour of a deformation region and detects an adjacent region group by finding, among these pixels, pixels adjacent to a different deformation region. More specifically, the boundary detection unit 10a scans the series of adjacent pixels bordering the outside of the deformation region, and when it finds, around an adjacent pixel, a pixel carrying the region label of a different deformation region, it determines that the deformation region borders another deformation region and detects this pair of deformation regions as an adjacent region group.
[0097] ステップS143では、境界検出部10aは、変形領域の外側に隣接する一連の隣接画素を走査し、走査した隣接画素の中から、この変形領域と異なる変形領域に含まれる画素に隣接する隣接画素を、隣接した変形領域間の境界画素として検出する。より具体的には、境界検出部10aは、隣接画素を走査する際、隣接画素毎に縦横および斜め方向に隣接する8つの画素が有する領域標識を参照し、互いに異なる領域標識を有する画素を検出した場合、処理対象の隣接画素を境界画素と判定して検出する。  In step S143, the boundary detection unit 10a scans the series of adjacent pixels bordering the outside of the deformation region and detects, among them, those adjacent pixels that border a pixel belonging to a different deformation region, as boundary pixels between the adjacent deformation regions. More specifically, while scanning, the boundary detection unit 10a refers to the region labels of the eight pixels vertically, horizontally, and diagonally adjacent to each adjacent pixel, and when it finds pixels carrying mutually different region labels, it determines the adjacent pixel under processing to be a boundary pixel.
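The 8-neighborhood boundary test of step S143 can be sketched as follows, assuming a numpy label map where 0 marks pixels outside all deformation regions:

```python
import numpy as np

def find_boundary_pixels(labels: np.ndarray) -> list:
    """Return (y, x) of unlabeled pixels whose 8-neighborhood contains
    two or more different region labels (boundary between regions)."""
    h, w = labels.shape
    boundary = []
    for y in range(h):
        for x in range(w):
            if labels[y, x] != 0:
                continue
            seen = set()  # distinct region labels around this pixel
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != 0:
                        seen.add(int(labels[ny, nx]))
            if len(seen) >= 2:
                boundary.append((y, x))
    return boundary
```

The boundary pixels found this way form the boundary lines (such as A-B in FIG. 11) along which the feature amount of step S145 is evaluated.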
[0098] ステップS145では、特徴量算出部10bは、ステップS143によって検出された各境界線に対して、境界画素と境界近傍画素とが有する平滑化画素値に基づいて、この境界画素を挟んで隣接した変形領域間の特徴量を算出する。より具体的には、特徴量算出部10bは、境界画素と、この境界画素近傍に位置する境界近傍画素との平滑化画素値の差の平均値を、隣接した変形領域間の特徴量として算出する。ここで、境界近傍画素としては、例えば処理対象の境界画素から、境界線の法線方向に所定画素数離れた画素を選択するとよい。  In step S145, for each boundary line detected in step S143, the feature amount calculation unit 10b calculates a feature amount between the deformation regions adjacent across the boundary pixels, based on the smoothed pixel values of the boundary pixels and of pixels near the boundary. More specifically, the feature amount calculation unit 10b calculates, as the feature amount between the adjacent deformation regions, the average of the differences in smoothed pixel value between each boundary pixel and the near-boundary pixels located in its vicinity. As the near-boundary pixels, pixels a predetermined number of pixels away from the boundary pixel under processing in the direction normal to the boundary line may be selected, for example.
[0099] 特徴量算出部10bは、例えば図11に示すように、隣接した変形領域AR1, AR2の境界線A-Bの法線方向に、境界線A-Bから互いに反対側に所定画素数離れた曲線A'-B', A"-B"を設定する。そして、境界線A-B上の境界画素毎に、法線方向に対応する曲線A'-B'上の画素および曲線A"-B"上の画素との間で、平滑化画素値の画素値差をそれぞれ算出する。特徴量算出部10bは、かかる画素値差を、境界線A-B上のすべての境界画素に対して算出し、算出したすべての画素値差の平均値を変形領域AR1, AR2間の特徴量として算出する。  As shown in FIG. 11, for example, the feature amount calculation unit 10b sets curves A'-B' and A"-B", each separated from the boundary line A-B by a predetermined number of pixels on opposite sides, in the direction normal to the boundary line A-B between the adjacent deformation regions AR1 and AR2. Then, for each boundary pixel on the boundary line A-B, it calculates the differences in smoothed pixel value between that pixel and the corresponding pixels, taken along the normal direction, on curve A'-B' and on curve A"-B". The feature amount calculation unit 10b calculates these differences for all boundary pixels on the boundary line A-B and takes the average of all the calculated differences as the feature amount between the deformation regions AR1 and AR2.
[0100] なお、特徴量算出部10bは、平均値に限らず、画素値差の最大値、最小値、標準偏差等の統計量を特徴量として算出することもできる。また、特徴量算出部10bは、平滑化画素値の画素値差に基づいて特徴量を算出するばかりでなく、例えば隣接した各変形領域の輪郭線の交差角等に基づいて特徴量を算出することもできる。  The feature amount calculation unit 10b may calculate, as the feature amount, not only the average but also statistics such as the maximum, minimum, or standard deviation of the pixel value differences. Moreover, the feature amount need not be based only on differences in smoothed pixel value; it may also be calculated, for example, from the crossing angle of the contour lines of the adjacent deformation regions.
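A simplified sketch of the mean-difference feature of step S145, assuming the boundary pixels and their unit normal directions are already known; `offset` stands in for the predetermined pixel distance to the curves A'-B' and A"-B":

```python
import numpy as np

def boundary_feature(smoothed: np.ndarray, boundary: list,
                     normals: list, offset: int = 2) -> float:
    """Mean absolute smoothed-value difference between each boundary
    pixel and the two pixels `offset` steps away along its normal,
    one on each side of the boundary (toward AR1 and toward AR2)."""
    diffs = []
    h, w = smoothed.shape
    for (y, x), (ny, nx) in zip(boundary, normals):
        for sign in (1, -1):  # one side per adjacent region
            py, px = y + sign * offset * ny, x + sign * offset * nx
            if 0 <= py < h and 0 <= px < w:
                diffs.append(abs(smoothed[y, x] - smoothed[py, px]))
    return float(np.mean(diffs))
```

A small value of this feature means the image changes little across the boundary, which is exactly the condition under which step S149 merges the two regions.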
[0101] ステップS149では、統合判定部10cは、ステップS147によって特徴量が所定値よりも小さいと判断された場合に、隣接領域群内の隣接した変形領域同士を統合する。このとき、統合判定部10cは、隣接した各変形領域の領域標識のうち、いずれか一方の領域標識によって他方の領域標識を書き換える。なお、ステップS147における判断は、特徴量が小さい場合、隣接した変形領域間の変化が小さく同一の変形領域であるとみなせることに基づいている。  In step S149, when step S147 has determined that the feature amount is smaller than the predetermined value, the integration determination unit 10c integrates the adjacent deformation regions within the adjacent region group. To do so, the integration determination unit 10c rewrites one of the two adjacent regions' labels with the other. The determination in step S147 rests on the fact that when the feature amount is small, the change across the adjacent deformation regions is small, so they can be regarded as a single deformation region.
[0102] 以上説明したように、本実施の形態1にかかる領域抽出装置1および領域抽出プログラムでは、入力された観測画像に対して、まず平滑化処理を行い、平滑化画像に基づいて対象領域を抽出するようにしているため、観測画像中に含まれるノイズ等の影響を受けることなく安定して高精度に対象領域を抽出することができる。  As described above, the region extraction device 1 and the region extraction program according to the first embodiment first smooth the input observation image and extract the target region based on the smoothed image, so the target region can be extracted stably and with high accuracy without being affected by noise and the like contained in the observation image.
[0103] また、本実施の形態 1にかかる領域抽出装置 1および領域抽出プログラムでは、平 滑化画素値の大きさ、分布形状等が所定の条件を満足する画素群を初期領域として 検出するようにしているため、例えば複数の対象領域が隣接する場合にも、各対象 領域に対応する個別の初期領域を確実に検出することができる。  [0103] Further, in the region extraction device 1 and the region extraction program according to the first embodiment, a pixel group in which the size, distribution shape, etc. of the smoothed pixel values satisfy a predetermined condition is detected as the initial region. Therefore, for example, even when a plurality of target areas are adjacent to each other, it is possible to reliably detect individual initial areas corresponding to the target areas.
[0104] さらに、本実施の形態1にかかる領域抽出装置1および領域抽出プログラムでは、かかる初期領域の外側に隣接する一連のすべての隣接画素、または一連の隣接画素のうち所定割合以上の各隣接画素に基づいて初期領域を変形し、対象領域を成形するようにしているため、撮像対象の形状が変化する場合でも、撮像対象の状態によらず、撮像対象の輪郭形状に高精度に整合した対象領域を抽出することができる。  Furthermore, in the region extraction device 1 and the region extraction program according to the first embodiment, the initial region is deformed based on all of the series of adjacent pixels bordering its outside, or on a predetermined proportion or more of them, to form the target region; therefore, even when the shape of the imaging target changes, a target region that matches the contour of the imaging target with high accuracy can be extracted regardless of the target's state.
[0105] また、本実施の形態1にかかる領域抽出装置1および領域抽出プログラムでは、初期領域検出部8によって各撮像対象に対応する初期領域を検出し、領域変形部9によって各初期領域を変形して変形領域を成形し、領域統合部10によって隣接した変形領域を統合して対象領域を成形するようにしているため、撮像対象の数、位置等をあらかじめ指定するなどの初期設定を必要とすることなく、全自動的に対象領域を抽出することができる。  In addition, in the region extraction device 1 and the region extraction program according to the first embodiment, the initial region detection unit 8 detects the initial region corresponding to each imaging target, the region deformation unit 9 deforms each initial region to form a deformation region, and the region integration unit 10 integrates adjacent deformation regions to form the target region; therefore, the target region can be extracted fully automatically, without initial settings such as specifying the number and positions of imaging targets in advance.
[0106] なお、本実施の形態1にかかる領域抽出装置1では、隣接した変形領域を統合する領域統合部10を備えるものとして説明したが、隣接した変形領域を統合する必要がない場合や、隣接した撮像対象を扱わない場合等には、必ずしも備えなくてもよい。また、領域統合部10を備える場合にも、例えば所定の指示情報に応じて領域統合処理を省略可能としてもよい。このように領域統合処理を行わない場合には、領域変形処理後の各変形領域を対象領域とみなし、変形領域データを対象領域データとして領域抽出結果とすればよい。  Although the region extraction device 1 according to the first embodiment has been described as including the region integration unit 10, which integrates adjacent deformation regions, this unit may be omitted when there is no need to integrate adjacent deformation regions or when adjacent imaging targets are not handled. Even when the region integration unit 10 is provided, the region integration process may be made skippable, for example according to predetermined instruction information. When the region integration process is not performed, each deformation region after the region deformation process may be regarded as a target region, and the deformation region data may be used as the target region data, that is, as the region extraction result.
[0107] また、本実施の形態1にかかる領域抽出装置1および領域抽出プログラムでは、領域統合処理を領域変形処理後に実行するものとして説明したが、領域統合処理を領域変形処理に組み込んで実行するようにしてもよい。この場合、領域変形部9は、各初期領域を順次変形する過程で、隣接する初期領域の検出を随時行い、検出された段階で領域統合処理を行うとよい。  Furthermore, although the region integration process has been described as being executed after the region deformation process in the region extraction device 1 and the region extraction program according to the first embodiment, the region integration process may instead be incorporated into the region deformation process. In that case, the region deformation unit 9 may detect adjacent initial regions as needed while sequentially deforming each initial region, and perform the region integration process whenever such adjacency is detected.
[0108] (実施の形態 2)  [Embodiment 2]
つぎに、本発明の実施の形態2について説明する。上述した実施の形態1では、平滑化画像に基づいて、領域変形処理と領域統合処理とを行うようにしていたが、この実施の形態2では、平滑化画像に含まれるエッジを検出したエッジ検出画像に基づいて、領域変形処理と領域統合処理とを行うようにしている。  Next, a second embodiment of the present invention will be described. In the first embodiment described above, the region deformation process and the region integration process are performed based on the smoothed image; in this second embodiment, they are performed based on an edge detection image obtained by detecting the edges contained in the smoothed image.
[0109] 図 12は、本発明の実施の形態 2にかかる領域抽出装置 21の構成を示すブロック図 である。図 12に示すように、領域抽出装置 21は、領域抽出装置 1の構成をもとに、画 像処理部 3および制御部 6に替えて画像処理部 23および制御部 26を備える。また、 画像処理部 23は、画像処理部 3の構成をもとに、領域変形部 9および領域統合部 1 0に替えて領域変形部 29および領域統合部 30を備え、新たにエッジ検出部 31を備 える。  FIG. 12 is a block diagram showing a configuration of the area extracting device 21 according to the second embodiment of the present invention. As shown in FIG. 12, the region extraction device 21 includes an image processing unit 23 and a control unit 26 instead of the image processing unit 3 and the control unit 6 based on the configuration of the region extraction device 1. Further, the image processing unit 23 includes a region deformation unit 29 and a region integration unit 30 instead of the region deformation unit 9 and the region integration unit 10 based on the configuration of the image processing unit 3, and a new edge detection unit 31. Equipped.
[0110] FIGS. 13 and 14 are block diagrams showing the detailed configurations of the region deformation unit 29 and the region integration unit 30, respectively. As shown in FIG. 13, the region deformation unit 29 is based on the configuration of the region deformation unit 9, but includes a deformation determination unit 29c in place of the deformation determination unit 9c. As shown in FIG. 14, the region integration unit 30 is based on the configuration of the region integration unit 10, but includes a feature amount calculation unit 30b in place of the feature amount calculation unit 10b. The remaining configuration of the region extraction device 21 is the same as in the first embodiment, and identical components are denoted by the same reference numerals.
[0111] The edge detection unit 31 acquires the smoothed image output from the smoothing unit 7, detects the edges contained in the acquired smoothed image by filtering, and generates an edge image representing the detected edges. The edge detection unit 31 detects the edges using, for example, a Sobel filter, a Laplacian filter, or a Prewitt filter. The edge detection unit 31 outputs the generated edge image to the region deformation unit 29 and the region integration unit 30. The edge detection unit 31 can also output the edge image to the storage unit 5 via the control unit 26 for storage.
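The patent discloses no source code; as a rough illustration only, Sobel-filter edge detection of the kind mentioned above might be sketched as follows. The function name `sobel_edge_image`, the edge-replicated border handling, and the use of gradient magnitude as the edge pixel value are assumptions of this example, not part of the disclosure:

```python
def sobel_edge_image(smoothed):
    """Return an edge image of Sobel gradient magnitudes (edge pixel values)."""
    h, w = len(smoothed), len(smoothed[0])
    kx = ((-1, 0, 1), (-2, 0, 2), (-1, 0, 1))   # horizontal-gradient kernel
    clamp = lambda v, hi: max(0, min(v, hi))
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = gy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # edge-replicated border: clamp indices into the image
                    v = smoothed[clamp(y + dy, h - 1)][clamp(x + dx, w - 1)]
                    gx += kx[dy + 1][dx + 1] * v
                    gy += kx[dx + 1][dy + 1] * v   # the vertical kernel is the transpose of kx
            out[y][x] = (gx * gx + gy * gy) ** 0.5  # gradient magnitude
    return out
```

On a flat region the response is zero; across an intensity step the response is strong and symmetric about the step, which is what the deformation and integration stages below rely on.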
[0112] The region deformation unit 29 performs the same processing as the region deformation unit 9, with two differences. First, whereas the region deformation unit 9 acquired the smoothed image from the smoothing unit 7 through the deformation determination unit 9c, the region deformation unit 29 acquires the edge image from the edge detection unit 31 through the deformation determination unit 29c. Second, whereas the region deformation unit 9 determined through the deformation determination unit 9c whether a pixel is a target region pixel on the basis of its smoothed pixel value, the region deformation unit 29 makes this determination through the deformation determination unit 29c on the basis of the edge pixel value of each pixel in the edge image.
[0113] The region integration unit 30 performs the same processing as the region integration unit 10, with two differences. First, whereas the region integration unit 10 acquired the smoothed image from the smoothing unit 7 through the feature amount calculation unit 10b, the region integration unit 30 acquires the edge image from the edge detection unit 31 through the feature amount calculation unit 30b. Second, whereas the region integration unit 10 calculated through the feature amount calculation unit 10b the feature amount between adjacent deformed regions on the basis of the smoothed pixel values, the region integration unit 30 calculates this feature amount through the feature amount calculation unit 30b on the basis of the edge pixel values.
[0114] Here, the processing procedure performed by the region extraction device 21 will be described. FIG. 15 is a flowchart showing the procedure of the region extraction processing in which the region extraction device 21 processes and displays an observation image as the control unit 26 executes the region extraction program. FIG. 16 is a schematic diagram showing the edge image generated by the edge detection unit 31 as a result of the edge detection processing shown in FIG. 15. FIG. 16 illustrates the edge image generated from the smoothed image shown in FIG. 4-1.
[0115] As shown in FIG. 15, the input unit 2, the smoothing unit 7, and the initial region detection unit 8 first perform, in order, acquisition of the observation image (step S161), smoothing processing (step S163), and initial region detection processing (step S165), in the same manner as steps S101 to S105 shown in FIG. 5.
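For illustration only, steps S163 and S165 can be sketched as below. The box filter is a stand-in for whatever smoothing the smoothing unit 7 actually applies, the fixed threshold follows the form of claim 3 (pixels whose smoothed value exceeds a predetermined value become the initial region), and the 4-connected grouping of thresholded pixels into separate initial regions is an assumption of this sketch, as are all names:

```python
def box_smooth(image, radius=1):
    """Step S163 stand-in: box-filter smoothing with edge-replicated borders."""
    h, w = len(image), len(image[0])
    clamp = lambda v, hi: max(0, min(v, hi))
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [image[clamp(y + dy, h - 1)][clamp(x + dx, w - 1)]
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)]
            out[y][x] = sum(window) / len(window)
    return out

def detect_initial_regions(smoothed, threshold):
    """Step S165 in the form of claim 3: pixels whose smoothed value exceeds a
    predetermined threshold, grouped into 4-connected initial regions."""
    h, w = len(smoothed), len(smoothed[0])
    mask = [[smoothed[y][x] > threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                seen[sy][sx] = True
                stack, region = [(sy, sx)], set()
                while stack:  # flood-fill one connected component
                    y, x = stack.pop()
                    region.add((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions
```

Two bright spots far enough apart yield two separate initial regions, each of which the later steps deform independently.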
[0116] Next, the edge detection unit 31 performs edge detection processing in which it detects edges in the smoothed image acquired from the smoothing unit 7 and generates an edge image (step S167). In step S167, the edge detection unit 31 detects and images the edge of each cell region in the smoothed image shown in FIG. 4-1, as illustrated for example in FIG. 16. The edge detection unit 31 outputs the generated edge image to the region deformation unit 29 and the region integration unit 30. The processing order of step S167 and step S165 may be exchanged.
[0117] Subsequently, the region deformation unit 29, the region integration unit 30, and the output unit 4 perform, in order, the region deformation processing (step S169), the region integration processing (step S171), and the display of the extraction result (step S173), in the same manner as steps S107 to S111 shown in FIG. 5, after which the control unit 26 ends the series of region extraction processing.
[0118] In step S169, however, instead of determining whether a pixel is a target region pixel on the basis of the smoothed pixel value as in step S107, the region deformation unit 29 makes this determination on the basis of the edge pixel value. That is, among the series of adjacent pixels bordering the outside of an initial region, the deformation determination unit 29c determines that a pixel whose edge pixel value satisfies a predetermined condition with respect to a contour pixel is a target region pixel, and expands the initial region so as to absorb the adjacent pixels so determined. More specifically, the deformation determination unit 29c calculates the difference in edge pixel value between the adjacent pixel and the contour pixel, and determines that the adjacent pixel is a target region pixel when the calculated difference is within a predetermined range.
[0119] The deformation determination unit 29c also calculates the difference in edge pixel value between the adjacent pixel and the contour pixel and, when the calculated difference exceeds the predetermined range, does not treat the pixel as one to be absorbed into the initial region. Furthermore, when an adjacent pixel borders a plurality of different initial regions, the deformation determination unit 29c does not treat that pixel as one to be absorbed into an initial region. In other words, the deformation determination unit 29c determines that an adjacent pixel that borders only one initial region and whose calculated pixel value difference is within the predetermined range is a target region pixel, and absorbs it into that initial region.
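A non-authoritative sketch of the expansion rule just described: an outside neighbor is absorbed when its edge pixel value differs from the adjoining contour pixel by no more than a predetermined range, and never when it also borders a different initial region. The function name, the 4-neighborhood, and the single-pass structure are assumptions of this example:

```python
def grow_region(region, all_regions, edge, max_diff):
    """One expansion pass: absorb an outside 4-neighbor when its edge pixel
    value differs from the adjoining contour pixel by at most max_diff, but
    never when it belongs to or borders a different initial region."""
    h, w = len(edge), len(edge[0])
    # pixels belonging to every other initial region
    others = set().union(*(r for r in all_regions if r is not region))
    grown = set(region)
    for (y, x) in region:                        # (y, x) plays the contour-pixel role
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if not (0 <= ny < h and 0 <= nx < w) or (ny, nx) in grown:
                continue
            if (ny, nx) in others or any(abs(ny - oy) + abs(nx - ox) == 1
                                         for (oy, ox) in others):
                continue                         # borders more than one initial region
            if abs(edge[ny][nx] - edge[y][x]) <= max_diff:
                grown.add((ny, nx))              # judged a target region pixel
    return grown
```

In the paragraph's terms, repeating such passes until no neighbor satisfies the condition would expand each initial region up to the edges while leaving a gap wherever two regions would otherwise collide.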
[0120] Like the deformation determination unit 9c, the deformation determination unit 29c can also contract the initial region by removing contour pixels.
[0121] In step S171, on the other hand, instead of calculating the feature amount between adjacent deformed regions on the basis of the smoothed pixel values as in step S109, the region integration unit 30 calculates this feature amount on the basis of the edge pixel values. That is, the feature amount calculation unit 30b calculates the feature amount between the deformed regions adjacent across a boundary pixel on the basis of the edge pixel values of the boundary pixels and of the pixels near the boundary. More specifically, the feature amount calculation unit 30b calculates, as the feature amount between the adjacent deformed regions, the average of the differences in edge pixel value between each boundary pixel and the near-boundary pixels located in its vicinity.
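The average-difference feature of the feature amount calculation unit 30b might be sketched as follows. Treating the "near-boundary pixels" as the 4-neighbors of each boundary pixel is an assumption of this example, as are the names; the reading that a small average difference indicates no real edge between the two deformed regions (and hence a candidate pair for integration) is likewise an interpretation rather than the patent's exact criterion:

```python
def boundary_feature(boundary, edge):
    """Feature amount between two adjacent deformed regions: the average
    absolute edge-value difference between each boundary pixel and its
    4-neighbors lying off the boundary line."""
    h, w = len(edge), len(edge[0])
    diffs = []
    for (y, x) in boundary:
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in boundary:
                diffs.append(abs(edge[ny][nx] - edge[y][x]))
    return sum(diffs) / len(diffs)
```

A boundary lying on a strong edge ridge yields a large feature value (the regions stay separate); a boundary crossing a flat part of the edge image yields a value near zero (the regions merge).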
[0122] As with the region extraction device 1, the region extraction device 21 need not necessarily include the region integration unit 30 when there is no need to integrate adjacent deformed regions, or when adjacent imaging targets are not handled. Even when the region integration unit 30 is provided, the region integration processing may be made omissible, for example in accordance with predetermined instruction information.
[0123] As described above, because the region extraction device 21 and the region extraction program according to the second embodiment deform the initial regions and integrate the deformed regions on the basis of edge pixel values, they can shape and extract target regions that match the contour shape of the imaging target even more precisely.
[0124] The pixel values used in the region extraction devices according to the first and second embodiments described above, such as the smoothed pixel values and the edge pixel values, include luminance values, gray-scale values, gradation values, and intensity values. In particular, the edge pixel values include edge strength values indicating the strength of the detected edges. The region extraction devices according to the first and second embodiments can select and process, as appropriate, any of these various values serving as pixel values in accordance with the form of the input observation image.
Industrial applicability
[0125] As described above, the region extraction device and the region extraction program according to the present invention are useful as a region extraction device and a region extraction program that extract image regions corresponding to imaging targets from an input image, and are particularly suited to a region extraction device and a region extraction program that stably and accurately extract the image region corresponding to each individual imaging target.

Claims

Claims
[1] A region extraction device that extracts, from an input image, a target region that is an image region corresponding to an imaging target, the device comprising:
smoothing means for generating a smoothed image by smoothing the image;
initial region detection means for detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, on the basis of the smoothed pixel value of each pixel in the smoothed image; and
region deformation means for determining, on the basis of the smoothed pixel values of the pixels near the contour of the initial region, whether the pixels near the contour are target region pixels constituting the target region, and for shaping the target region by deforming at least one of the size and the shape of the initial region in accordance with the determination result.
[2] A region extraction device that extracts, from an input image, a target region that is an image region corresponding to an imaging target, the device comprising:
smoothing means for generating a smoothed image by smoothing the image;
initial region detection means for detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, on the basis of the smoothed pixel value of each pixel in the smoothed image;
edge detection means for generating an edge image in which the edges contained in the smoothed image are detected; and
region deformation means for determining, on the basis of the edge pixel values of the pixels in the edge image corresponding to the pixels near the contour of the initial region, whether the pixels near the contour are target region pixels constituting the target region, and for shaping the target region by deforming at least one of the size and the shape of the initial region in accordance with the determination result.
[3] The region extraction device according to claim 1 or 2, wherein the initial region detection means detects, as the initial region, each pixel having a smoothed pixel value larger than a predetermined value.
[4] The region extraction device according to claim 1 or 2, wherein the initial region detection means detects, as the initial region, a pixel group in which the distribution shape of the smoothed pixel values with respect to pixel position in the smoothed image satisfies a predetermined condition.
[5] The region extraction device according to claim 4, wherein the initial region detection means detects, as the initial region, a pixel group whose distribution shape is convex.
[6] The region extraction device according to claim 1, wherein, among a series of adjacent pixels bordering the outside of the initial region, the region deformation means determines that a pixel whose smoothed pixel value satisfies a predetermined condition with respect to a contour pixel lying within the initial region and constituting the contour of the initial region is a target region pixel, and deforms the initial region so as to absorb the adjacent pixels so determined.
[7] The region extraction device according to claim 6, wherein the region deformation means determines that an adjacent pixel is a target region pixel when the difference in smoothed pixel value between the adjacent pixel and the contour pixel is within a predetermined range.
[8] The region extraction device according to claim 2, wherein, among a series of adjacent pixels bordering the outside of the initial region, the region deformation means determines that a pixel whose edge pixel value satisfies a predetermined condition with respect to a contour pixel lying within the initial region and constituting the contour of the initial region is a target region pixel, and deforms the initial region so as to absorb the adjacent pixels so determined.
[9] The region extraction device according to claim 8, wherein the region deformation means determines that an adjacent pixel is a target region pixel when the difference in edge pixel value between the adjacent pixel and the contour pixel is within a predetermined range.
[10] The region extraction device according to claim 1, wherein, among a series of contour pixels lying within the initial region and constituting the contour of the initial region, the region deformation means determines that a pixel whose smoothed pixel value satisfies a predetermined condition with respect to an adjacent pixel bordering the outside of the initial region is a pixel outside the target region, and deforms the initial region so as to remove the contour pixels so determined.
[11] The region extraction device according to claim 10, wherein the region deformation means determines that a contour pixel is a pixel outside the target region when the difference in smoothed pixel value between the adjacent pixel and the contour pixel is within a predetermined range.
[12] The region extraction device according to claim 2, wherein, among a series of contour pixels lying within the initial region and constituting the contour of the initial region, the region deformation means determines that a pixel whose edge pixel value satisfies a predetermined condition with respect to an adjacent pixel bordering the outside of the initial region is a pixel outside the target region, and deforms the initial region so as to remove the contour pixels so determined.
[13] The region extraction device according to claim 12, wherein the region deformation means determines that a contour pixel is a pixel outside the target region when the difference in edge pixel value between the adjacent pixel and the contour pixel is within a predetermined range.
[14] The region extraction device according to any one of claims 7, 9, 11 and 13, wherein the region deformation means repeats the determination of whether a pixel is a target region pixel and the deformation of the initial region until the pixel value difference exceeds the predetermined range.
[15] The region extraction device according to any one of claims 6 to 9, wherein the region deformation means determines that an adjacent pixel that borders the outside of one initial region, does not border the outside of any other initial region, and satisfies the predetermined condition is a target region pixel.
[16] The region extraction device according to any one of claims 6 to 9 and 15, wherein the region deformation means performs the determination of whether a pixel is a target region pixel on all of the series of adjacent pixels bordering the outside of the initial region, or on each of at least a predetermined proportion of the series of adjacent pixels bordering the outside of the initial region.
[17] The region extraction device according to any one of claims 1 to 16, further comprising region integration means for detecting, from among the deformed regions that are the image regions resulting from the deformation by the region deformation means, a group of adjacent regions formed by mutually adjacent deformed regions, and for shaping the target region by integrating the adjacent deformed regions on the basis of a feature amount indicating a feature between the adjacent deformed regions in the detected group of adjacent regions.
[18] The region extraction device according to claim 17, wherein the region integration means calculates the feature amount and integrates the deformed regions on the basis of the calculated feature amount.
[19] The region extraction device according to claim 17 or 18, wherein the region integration means detects the group of adjacent regions by detecting, from among the pixels near the contour of each deformed region, pixels contained in a deformed region different from the deformed region being processed.
[20] The region extraction device according to claim 1, further comprising region integration means for detecting, from among the deformed regions that are the image regions resulting from the deformation by the region deformation means, a group of adjacent regions formed by mutually adjacent deformed regions, calculating a feature amount indicating a feature between the deformed regions on the basis of the smoothed pixel values of the boundary pixels indicating the boundary line between the deformed regions in the detected group of adjacent regions and of the pixels near the boundary line, and shaping the target region by integrating the deformed regions in the group of adjacent regions on the basis of the calculated feature amount.
[21] The region extraction device according to claim 20, wherein the region integration means calculates, as the feature amount, the average of the differences in smoothed pixel value between each boundary pixel on the boundary line and the near-boundary pixels in the vicinity of that boundary pixel.
[22] The region extraction device according to claim 2, further comprising region integration means for detecting, from among the deformed regions that are the image regions resulting from the deformation by the region deformation means, a group of adjacent regions formed by mutually adjacent deformed regions, calculating a feature amount indicating a feature between the deformed regions on the basis of the edge pixel values of the boundary pixels indicating the boundary line between the deformed regions in the detected group of adjacent regions and of the pixels near the boundary line, and shaping the target region by integrating the deformed regions in the group of adjacent regions on the basis of the calculated feature amount.
[23] The region extraction device according to claim 22, wherein the region integration means calculates, as the feature amount, the average of the differences in edge pixel value between each boundary pixel on the boundary line and the near-boundary pixels in the vicinity of that boundary pixel.
[24] The region extraction device according to any one of claims 20 to 23, wherein the region integration means scans a series of adjacent pixels bordering the outside of each deformed region in the detected group of adjacent regions, and detects, as the boundary pixels, adjacent pixels that border the outside of a deformed region different from the deformed region being processed.
[25] A region extraction program for causing a region extraction device that extracts, from an input image, a target region that is an image region corresponding to an imaging target to extract the target region from the image, the program causing the region extraction device to execute:
a smoothing procedure of generating a smoothed image by smoothing the image;
an initial region detection procedure of detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, on the basis of the smoothed pixel value of each pixel in the smoothed image; and
a region deformation procedure of determining, on the basis of the smoothed pixel values of the pixels near the contour of the initial region, whether each pixel near the contour is a target region pixel constituting the target region, and shaping the target region by deforming at least one of the size and the shape of the initial region in accordance with the determination result.
[26] A region extraction program for causing a region extraction device that extracts, from an input image, a target region that is an image region corresponding to an imaging target to extract the target region from the image, the program causing the region extraction device to execute:
a smoothing procedure of generating a smoothed image by smoothing the image;
an initial region detection procedure of detecting, from the smoothed image, an initial region that is an image region including at least a part of the imaging target, on the basis of the smoothed pixel value of each pixel in the smoothed image;
an edge detection procedure of generating an edge image in which the edges contained in the smoothed image are detected; and
a region deformation procedure of determining, on the basis of the edge pixel value of each pixel in the edge image corresponding to each pixel near the contour of the initial region, whether each pixel near the contour is a target region pixel constituting the target region, and shaping the target region by deforming at least one of the size and the shape of the initial region in accordance with the determination result.
PCT/JP2006/314579 2005-08-01 2006-07-24 Area extracting device and area extracting program WO2007015384A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-222408 2005-08-01
JP2005222408A JP2007041664A (en) 2005-08-01 2005-08-01 Device and program for extracting region

Publications (1)

Publication Number Publication Date
WO2007015384A1 true WO2007015384A1 (en) 2007-02-08

Family

ID=37708663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/314579 WO2007015384A1 (en) 2005-08-01 2006-07-24 Area extracting device and area extracting program

Country Status (2)

Country Link
JP (1) JP2007041664A (en)
WO (1) WO2007015384A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2010098211A1 (en) * 2009-02-24 2012-08-30 独立行政法人理化学研究所 Outline extraction apparatus, outline extraction method, and program
JP2010205067A (en) 2009-03-04 2010-09-16 Fujifilm Corp Device, method and program for extracting area
US8831325B2 (en) 2009-12-29 2014-09-09 Shimadzu Corporation Radiographic image processing apparatus and radiographic image processing program
TR201101980A1 (en) * 2011-03-01 2012-09-21 Ulusoy İlkay An object-based segmentation method.

Citations (6)

Publication number Priority date Publication date Assignee Title
JPH0235358A (en) * 1988-07-25 1990-02-05 Toa Medical Electronics Co Ltd Image processing method for taking out glandular cavity of stomack tissue
JPH11306334A (en) * 1998-02-17 1999-11-05 Fuji Xerox Co Ltd Picture processor and picture processing method
JP2000311248A (en) * 1999-04-28 2000-11-07 Sharp Corp Image processor
JP2000321031A (en) * 1999-05-11 2000-11-24 Nec Corp Device and method for extracting shape of cell
JP2001022931A (en) * 1999-07-07 2001-01-26 Shijin Kogyo Sakushinkai Image-dividing method
JP2001043376A (en) * 1999-07-30 2001-02-16 Canon Inc Image extraction method and device and storage medium


Also Published As

Publication number Publication date
JP2007041664A (en) 2007-02-15

Similar Documents

Publication Publication Date Title
CN107918931B (en) Image processing method and system and computer readable storage medium
Zhang et al. A graph-based optimization algorithm for fragmented image reassembly
JP4968075B2 (en) Pattern recognition device, pattern recognition method, and pattern recognition program
WO2018180386A1 (en) Ultrasound imaging diagnosis assistance method and system
WO2006080239A1 (en) Image processing device, microscope system, and area specification program
JP4970381B2 (en) Feature extraction device, feature extraction method, image processing device, and program
JP4518139B2 (en) Image processing device
US8983199B2 (en) Apparatus and method for generating image feature data
CN111126494B (en) Image classification method and system based on anisotropic convolution
KR102559790B1 (en) Method for detecting crack in structures
JP2006350740A (en) Image processing apparatus and program thereof
WO2007015384A1 (en) Area extracting device and area extracting program
JP2008146278A (en) Cell outline extraction device, cell outline extraction method and program
US20100098317A1 (en) Cell feature amount calculating apparatus and cell feature amount calculating method
Leborgne et al. Noise-resistant digital euclidean connected skeleton for graph-based shape matching
JP2006338191A (en) Image processor and domain division program
JP4496860B2 (en) Cell identification device, cell identification method, cell identification program, and cell analysis device
EP3989162A1 (en) Defect image generation method for deep learning and system therefore
JP2005241886A (en) Extraction method of changed area between geographical images, program for extracting changed area between geographical images, closed area extraction method and program for extracting closed area
JP2008310576A (en) Design support method and design support system
JP2006280456A (en) Device for processing image of endothelial cell of cornea
US7315650B2 (en) Image processor
US11640535B2 (en) Probability acquisition apparatus and probability acquisition method
KR100533209B1 (en) Method for detecting image
JP2008102589A (en) Moving image processor, moving image processing method and moving image processing program

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 06781491

Country of ref document: EP

Kind code of ref document: A1