WO2010098211A1 - Contour definition device and contour definition method, and program - Google Patents

Contour definition device and contour definition method, and program

Info

Publication number
WO2010098211A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
luminance
variance
threshold
initial
Prior art date
Application number
PCT/JP2010/051999
Other languages
French (fr)
Japanese (ja)
Inventor
敦史 宮脇
裕 黒川
久順 野田
Original Assignee
RIKEN (独立行政法人理化学研究所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RIKEN (独立行政法人理化学研究所)
Priority to JP2011501550A (published as JPWO2010098211A1)
Publication of WO2010098211A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/755 Deformable models or variational models, e.g. snakes or active contours
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N 21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N 21/64 Fluorescence; Phosphorescence
    • G01N 21/645 Specially adapted constructive features of fluorimeters
    • G01N 21/6456 Spatial resolved fluorescence measurements; Imaging
    • G01N 21/6458 Fluorescence microscopy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro

Definitions

  • The present invention relates to a contour extraction device, a contour extraction method, and a program, and more particularly to a contour extraction device, contour extraction method, and program for extracting the contours of cells.
  • Patent Document 1 discloses that, when a contour model is contracted or enlarged using Snakes and the distance between two non-adjacent nodes falls below a threshold value, the contour model is split at those two nodes.
  • Patent Document 2 discloses that the contour model is contracted and deformed using Snakes and is divided into a plurality of models when contact or intersection within the contour model is detected.
  • Non-Patent Document 1 discloses a technique that, unlike Snakes-based cell contour extraction, separates cell regions from the background in a phase-contrast microscope image by creating a binary map based on a luminance histogram, and then tracks cells in time order so that an energy function related to cell dynamics is minimized.
  • The cell extraction and cell tracking methods in conventional image analysis software for fluorescence microscope images set a threshold on a certain luminance value, extract cell regions from the image to be analyzed, and track cell movement from the statistical analysis of position information and the like.
  • Kang Li, Takeo Kanade, "Cell Population Tracking and Lineage Construction Using Multiple-Model Dynamics Filters and Spatiotemporal Optimization", Proceedings of the 2nd International Workshop on Microscopic Image Analysis with Applications in Biology (MIAAB), September 2007
  • However, the conventional cell extraction method using threshold processing has a problem in that good cell extraction may become difficult when the fluorescence luminance changes or the background has a luminance gradient.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a contour extraction device, a contour extraction method, and a program that can accurately extract the true contour of an object.
  • The contour extraction apparatus of the present invention includes at least a storage unit and a control unit, and extracts the contour of an object.
  • The storage unit includes image data storage means for storing image data obtained by imaging the object.
  • The control unit includes: initial contour setting means for setting an initial contour of the object in the image data stored in the image data storage means; contour convergence means for converging the initial contour set by the initial contour setting means to generate a convergent contour; luminance variance calculation means for acquiring the luminance on the convergent contour generated by the contour convergence means and calculating the variance of the luminance; threshold determination means for comparing the variance calculated by the luminance variance calculation means with a predetermined threshold and determining whether the variance is greater than the threshold or equal to or less than the threshold; contour expansion means for, when the threshold determination means determines that the variance is greater than the threshold, expanding the convergent contour generated by the contour convergence means and setting the expanded contour as the initial contour; and contour extraction means for, when the threshold determination means determines that the variance is equal to or less than the threshold, extracting the convergent contour generated by the contour convergence means as the contour of the object.
  • the contour extracting device of the present invention is characterized in that, in the contour extracting device described above, the contour converging means and the contour expanding means generate and expand the convergent contour by a dynamic contour modeling method.
  • The contour extraction device of the present invention is the contour extraction device described above, wherein the control unit further includes contour acquisition means for acquiring the center position and/or in-contour luminance of the contour of the object extracted by the contour extraction means.
  • In the contour extraction device of the present invention, the image data includes a plurality of frames captured at a plurality of times, and the control unit controls the plurality of frames so that they are processed in the forward or reverse order of those times.
  • The contour extraction device of the present invention further includes cell division point setting means for setting a cell division point when, while the control unit controls the plurality of frames to be processed in reverse time order, the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold.
  • The contour extraction device of the present invention is the contour extraction device described above, wherein, when the image data includes luminance information corresponding to a plurality of colors, the luminance variance calculation means acquires, as the luminance, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information), and calculates the variance of that luminance.
  • The present invention also relates to a contour extraction method. The contour extraction method of the present invention is executed in a contour extraction device that includes at least a storage unit and a control unit and extracts the contour of an object, the storage unit including image data storage means for storing image data obtained by imaging the object.
  • The method includes, executed in the control unit: an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means; a contour convergence step of converging the initial contour set in the initial contour setting step to generate a convergent contour; a luminance variance calculation step of acquiring the luminance on the convergent contour generated in the contour convergence step and calculating the variance of the luminance; a threshold determination step of comparing the variance calculated in the luminance variance calculation step with a predetermined threshold and determining whether the variance is greater than the threshold or equal to or less than the threshold; a contour expansion step of, when the variance is determined to be greater than the threshold in the threshold determination step, expanding the convergent contour generated in the contour convergence step and setting the expanded contour as the initial contour; and a contour extraction step of, when the variance is determined to be equal to or less than the threshold in the threshold determination step, extracting the convergent contour generated in the contour convergence step as the contour of the object.
  • The present invention also relates to a program. The program of the present invention causes a contour extraction device, which includes at least a storage unit and a control unit, the storage unit including image data storage means for storing image data obtained by imaging the object, to execute, in the control unit: an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means; a contour convergence step of converging the initial contour set in the initial contour setting step to generate a convergent contour; a luminance variance calculation step of acquiring the luminance on the convergent contour generated in the contour convergence step and calculating the variance of the luminance; a threshold determination step of comparing the variance calculated in the luminance variance calculation step with a predetermined threshold and determining whether the variance is greater than the threshold or equal to or less than the threshold; a contour expansion step of, when the variance is determined to be greater than the threshold in the threshold determination step, expanding the convergent contour generated in the contour convergence step and setting the expanded contour as the initial contour; and a contour extraction step of, when the variance is determined to be equal to or less than the threshold, extracting the convergent contour as the contour of the object.
  • According to the present invention, (1) an initial contour of an object is set in image data obtained by imaging the object, (2) the set initial contour is converged to generate a convergent contour, (3) the luminance on the generated convergent contour is acquired and its variance is calculated, (4) the calculated variance is compared with a predetermined threshold to determine whether the variance is greater than the threshold or equal to or less than the threshold, (5) when the variance is determined to be greater than the threshold, the generated convergent contour is expanded and the expanded contour is set as the initial contour, and (6) when the variance is determined to be equal to or less than the threshold, the generated convergent contour is extracted as the contour of the object. This has the effect that the true contour of the object can be accurately extracted.
  • Furthermore, the present invention generates and expands the convergent contour by the dynamic contour modeling method (Snakes), so that the contour can be fitted accurately to the contour of the object.
  • the present invention obtains the center position and / or the luminance within the contour of the extracted object, so that the position and the luminance of the object can be analyzed with high accuracy.
  • According to the present invention, the image data is composed of a plurality of frames captured at a plurality of times, and the plurality of frames are controlled so as to be processed in the normal or reverse order of those times; therefore, even when the object deforms or moves, it can be tracked with high accuracy.
  • According to the present invention, when the plurality of frames are controlled to be processed in reverse time order, a cell division point is set when the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold; therefore, even when cell division occurs, the cell division point can be determined accurately without complicating the algorithm. That is, whereas the conventional method of analyzing frames in time order determines cell division by a complicated contour-model division determination algorithm, the present invention analyzes the frames in reverse time order so that cell division appears as the fusion of contours; the algorithm can therefore be simplified and an accurate cell division point can be determined.
  • According to the present invention, when the image data includes luminance information corresponding to a plurality of colors, the p-th root of the p-th power sum of the luminance information weighted for each color (where p is a parameter for the luminance information) is acquired as the luminance and the variance of that luminance is calculated; therefore, even when an image is expressed in multiple colors rather than a single color, the luminances corresponding to the respective colors can be unified into a single luminance index for the contour extraction calculation.
  • FIG. 1 is a diagram schematically showing how the initial contour is converged from the initial contour by the dynamic contour modeling method and fitted to the contour of the object.
  • FIG. 2 is a block diagram showing an example of the configuration of the contour extraction apparatus 100 to which the present invention is applied.
  • FIG. 3 is a flowchart showing an example of the contour extraction process of the contour extraction apparatus 100 according to the present embodiment.
  • FIG. 4 is a conceptual diagram showing an example in which the contour extraction process according to the present embodiment is performed.
  • FIG. 5 is a diagram illustrating an example in which processing according to a conventional method is performed and an example in which contour extraction processing according to the present embodiment is performed.
  • FIG. 6 is a flowchart illustrating an example of the time order analysis process.
  • FIG. 7 is a diagram schematically showing a state in which the extracted contour extracted in the previous frame is expanded, the initial contour in the next frame is set, and a convergent contour is generated.
  • FIG. 8 is a flowchart illustrating an example of the reverse time order analysis process.
  • FIG. 9 is a diagram schematically illustrating an example of processing for resetting the initial contour by Hough transform.
  • FIG. 10 is a diagram illustrating an example of a contour extraction result according to the present embodiment.
  • FIG. 11 is a diagram for explaining the principle of determining the cell division point.
  • FIG. 12 is a diagram illustrating, as an example, the result of performing contour extraction processing in reverse order on the time axis in time-continuous frames.
  • FIG. 13 is a diagram showing a contour extraction result (left diagram) and a cell division analysis result (right diagram) in a certain frame k.
  • FIG. 14 is a flowchart illustrating the processing of Example 1.
  • FIG. 15 is a diagram showing the result of analyzing, according to Example 2, fluorescence images of primary cultured ZebraFish cells into which Fucci, a fluorescent-protein-based indicator that emits fluorescence specifically according to the cell cycle phase, was introduced.
  • FIG. 16 is a graph showing the change in the intensity of each fluorescence within the contours extracted in Example 2, and a diagram showing a phylogenetic tree of cell division created from the cell tracking results and the in-contour luminance information in Example 2.
  • FIG. 17 is a diagram illustrating an example in which an ROI is set for a colony of HeLa cells and contour extraction of the entire colony is performed.
  • In the following, the case where the object is a cell is mainly described; however, the present invention is not limited to this case and can be applied in the same manner to other technical fields, such as security systems.
  • the contour extracting apparatus of the present invention includes a storage unit and a control unit, and stores image data obtained by capturing an object.
  • Next, the contour extraction apparatus of the present invention sets the initial contour of the object in the stored image data.
  • The initial contour is set, manually or automatically using a known technique, so as to surround the object to be analyzed.
  • FIG. 1 is a diagram schematically showing how the initial contour is converged from the initial contour by the dynamic contour modeling method and fitted to the contour of the object.
  • the left diagram represents the object and the initial contour
  • the right diagram represents the object and the convergent contour.
  • the contour extraction apparatus of the present invention generates a convergent contour from a circular initial contour by a dynamic contour modeling method or the like.
  • the contour extracting apparatus of the present invention acquires the luminance on the convergent contour and calculates the luminance variance.
  • luminance variance is a value indicating variation in luminance value, and is represented by, for example, a standard deviation that is a square root of variance.
  • the contour extraction apparatus of the present invention compares the luminance variance with a predetermined threshold value, and determines whether the variance is greater than the threshold value or whether the variance is less than or equal to the threshold value.
  • When the variance is determined to be greater than the threshold, the contour extraction apparatus of the present invention expands the convergent contour, sets the expanded contour as the initial contour, and repeats the above processing.
  • When the variance is determined to be equal to or less than the threshold, the contour extraction apparatus of the present invention extracts the generated convergent contour as the contour of the object.
  • the contour extracting apparatus of the present invention may acquire the center position of the contour of the object and / or the luminance within the contour.
  • the above is the outline of the contour extraction processing of the present invention in one frame image.
  • When the image data is composed of a plurality of frames captured at a plurality of times, for example by shooting at regular intervals, the contour extraction apparatus may control so that the above-described contour extraction processing is performed on the plurality of frames in the normal or reverse order of those times.
  • When the above-described contour extraction processing is performed in reverse time order, a cell division point may be set when the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold. This concludes the outline of the present invention.
  • FIG. 2 is a block diagram showing an example of the configuration of the contour extraction apparatus 100 to which the present invention is applied, and conceptually shows only the portion related to the present invention.
  • In FIG. 2, the contour extraction apparatus 100 is roughly configured to include a control unit 102, a communication control interface unit 104, a storage unit 106, and an input/output control interface unit 108, and these units are communicably connected via arbitrary communication paths.
  • control unit 102 is a CPU or the like that comprehensively controls the entire contour extraction apparatus 100.
  • the input / output control interface unit 108 is an interface connected to the input unit 112 and the output unit 114.
  • the storage unit 106 is a device that stores various databases and tables.
  • The various databases and tables stored in the storage unit 106 (the image data file 106a to the cell division point file 106d) are implemented as storage means such as a fixed disk device.
  • the storage unit 106 stores various programs, tables, files, databases, and the like used for various processes.
  • the image data file 106a is an image data storage unit that stores image data obtained by capturing an image of an object.
  • the image data file 106a may store image data composed of a plurality of frames 1 to n imaged at a plurality of times by regular interval shooting (time-lapse shooting) or the like.
  • frame 1 represents the first frame on the time axis
  • frame n represents the last frame on the time axis
  • frame k represents an arbitrary frame.
  • The image data file 106a may store image data including luminance information corresponding to a plurality of colors (for example, luminance values corresponding to the three RGB primary colors), and may also store image data obtained by converting the original image so that the p-th root of the p-th power sum of the luminance information weighted for each color (where p is a parameter for the luminance information) is used as the luminance.
  • the position information file 106b is position information storage means for storing the center position of the contour of the object.
  • Here, each contour is assigned a contour number (ID), one object is associated with one initial contour, and the object is identified by the contour number. That is, the position information file 106b stores, as an example, coordinates (x, y) in association with a contour number (ID) and a frame number.
  • the luminance information file 106c is luminance information storage means for storing the luminance within the contour of the target object.
  • the luminance information file 106c stores luminance values in association with contour numbers and frame numbers.
  • the cell division point file 106d is a cell division point storage means for storing cell division points.
  • the cell division point file 106d stores the contour numbers of the contours of two objects in association with the frame numbers.
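  • As an illustration only (the patent does not prescribe any file format), the records implied by the position information file 106b, the luminance information file 106c, and the cell division point file 106d might be sketched as follows; all type and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PositionRecord:      # position information file 106b
    contour_id: int        # contour number (ID)
    frame: int             # frame number
    x: float               # contour center coordinates
    y: float

@dataclass
class LuminanceRecord:     # luminance information file 106c
    contour_id: int
    frame: int
    luminance: float       # in-contour luminance

@dataclass
class DivisionRecord:      # cell division point file 106d
    frame: int             # frame (or time) of the division point
    contour_id_a: int      # the two contours that fuse in reverse time order
    contour_id_b: int
```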
  • the input / output control interface unit 108 controls the input unit 112 and the output unit 114.
  • the output unit 114 is a monitor (including a home television), a speaker, or the like (hereinafter, the output unit 114 may be described as a monitor).
  • As the input unit 112, a keyboard, a mouse, a microphone, and the like can be used in addition to an image input device such as a microscope imaging device for shooting at regular intervals.
  • The control unit 102 has an internal memory for storing a control program such as an OS (Operating System), programs defining various processing procedures, and necessary data, and performs information processing for executing various processes based on these programs.
  • the control unit 102 includes a frame setting unit 102a, an initial contour setting unit 102d, a contour convergence unit 102e, a luminance variance calculation unit 102f, a threshold determination unit 102g, a contour expansion unit 102h, a contour extraction unit 102i, and a contour acquisition unit in terms of functional concept. 102j and a cell division point setting unit 102k.
  • the frame setting unit 102a controls the plurality of frames stored in the image data file 106a so as to perform contour extraction processing in the normal or reverse order of time.
  • the frame setting unit 102a includes a time reverse order frame setting unit 102b and a time normal order frame setting unit 102c.
  • The time reverse order frame setting unit 102b is time reverse order frame setting means that sets the final frame n as the start point (key frame) and sets the frames k so that the contour extraction processing is performed in reverse time order until the first frame 1 is reached.
  • The time normal order frame setting unit 102c is time normal order frame setting means that sets the first frame 1 as the start point (key frame) and sets the frames k so that the contour extraction processing is performed in normal time order until the last frame n is reached. A minimal sketch of the two orders follows.
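  • For illustration, the two processing orders could be produced as follows (a trivial sketch; the function name is hypothetical):

```python
def analysis_order(n, reverse=True):
    """Frame numbers in time reverse order (n..1) or time normal order (1..n)."""
    return range(n, 0, -1) if reverse else range(1, n + 1)

# e.g., for k in analysis_order(n): run contour extraction on frame k
```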
  • the initial contour setting unit 102d is initial contour setting means for setting the initial contour of the object in the image data stored in the image data file 106a. That is, the initial contour is set so as to surround the object by the initial contour setting unit 102d, and is converged (and expanded if necessary) so as to match the true contour of the object in the subsequent processing.
  • Here, the initial contour and the convergent contour each represent a region of interest (ROI) for the object; in the following embodiment, they are sometimes simply called "ROI" to distinguish them from the true contour of the object.
  • the initial contour setting unit 102d manages each ROI with a uniform contour number (ID) for the same object across frames.
  • As setting parameters, the initial contour setting unit 102d sets, for example, the X and Y coordinates indicating the initial contour position, the initial number of divisions indicating how many line segments represent the initial contour, and an initial radius indicating the size of the initial contour.
  • Some or all of these setting parameters may be set automatically by a known initial contour setting means, or may be set manually.
  • The initial contour setting unit 102d may also perform feature extraction by the Hough transform and set an initial contour using feature points near the ROI of a frame in which the signal is present.
  • the contour convergence unit 102e is a contour convergence unit that converges the initial contour set by the initial contour setting unit 102d or the contour expansion unit 102h and generates a convergent contour.
  • The contour convergence unit 102e may generate a convergent contour by the dynamic contour modeling method. More specifically, the contour convergence unit 102e generates a convergent contour by converging the contour (ROI) so that the value calculated by the energy function E_snakes below is minimized.
  • E_snakes = ∫ { E_in(v(s)) + E_img(v(s)) + E_con(v(s)) } ds
  • Here, E_in represents the internal energy term of the contour line; it is a function whose energy decreases as the contour length of the ROI becomes shorter and the set of line segments becomes smoother.
  • E_img represents an energy term that is minimal at edge portions; it is computed from the differentiated image so that the energy becomes smaller near edges.
  • E_con represents an energy term due to an external binding force.
  • As parameters of the dynamic contour modeling method, a "contraction force" indicating how strongly the ROI contracts or how smoothly its curve is kept, an "adsorption force" indicating how strongly the ROI is retained in the vicinity of the object, a "minimum number of vertices" indicating with how many line segments the ROI is represented as a set of straight lines, and a "number of contractions" indicating how many times the contraction calculation is performed during ROI calculation can be set according to the type of object, the purpose of analysis, and the like. A minimal sketch of such a convergence calculation follows.
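  • The description above specifies the energy function E_snakes but leaves the minimization procedure open; as one illustrative possibility (a sketch, not the patent's own algorithm), a classic greedy iteration over the discrete control points could look like this, with all function and parameter names hypothetical and a simple centroid-directed contraction force standing in for E_con:

```python
import numpy as np

def converge_contour(image, contour, iterations=100, alpha=1.0, beta=1.0, gamma=1.0):
    """Greedy minimization of E_snakes = ∫ {E_in + E_img + E_con} ds.

    image   : 2-D luminance array.
    contour : (N, 2) array of (row, col) control points (the ROI).
    alpha/beta weight the internal terms; gamma weights the image term.
    """
    # E_img: energy is low where the gradient magnitude is high (edges).
    gy, gx = np.gradient(image.astype(float))
    edge_energy = -np.hypot(gx, gy)

    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    snake = np.asarray(contour, dtype=float).copy()
    n = len(snake)
    for _ in range(iterations):
        centroid = snake.mean(axis=0)
        for i in range(n):
            prev_pt, next_pt = snake[i - 1], snake[(i + 1) % n]
            best, best_e = snake[i], np.inf
            for dy, dx in offsets:
                cand = snake[i] + (dy, dx)
                r, c = int(cand[0]), int(cand[1])
                if not (0 <= r < image.shape[0] and 0 <= c < image.shape[1]):
                    continue
                e_cont = alpha * np.sum((cand - prev_pt) ** 2)               # short contour
                e_curv = beta * np.sum((prev_pt - 2 * cand + next_pt) ** 2)  # smoothness
                e_img = gamma * edge_energy[r, c]                            # attract to edges
                e_con = 0.05 * np.sqrt(np.sum((cand - centroid) ** 2))       # contraction force
                e = e_cont + e_curv + e_img + e_con
                if e < best_e:
                    best, best_e = cand, e
            snake[i] = best
    return snake
```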
  • the luminance dispersion calculation unit 102f is a luminance dispersion calculation unit that acquires the luminance on the convergence contour generated by the contour convergence unit 102e and calculates the luminance dispersion.
  • When the image data stored in the image data file 106a includes luminance information corresponding to a plurality of colors (for example, luminance values corresponding to the three RGB primary colors), the luminance variance calculation unit 102f may acquire, as the luminance, the p-th root of the p-th power sum of the luminance information weighted for each color (where p is a parameter for the luminance information) and calculate the variance of that luminance.
  • The threshold determination unit 102g is threshold determination means that compares the luminance variance calculated by the luminance variance calculation unit 102f with a predetermined threshold and determines whether the variance is greater than the threshold or equal to or less than the threshold. For example, the threshold determination unit 102g determines whether or not the luminance variance on the convergent contour exceeds a certain standard deviation used as the threshold, as sketched below.
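  • For concreteness, the luminance sampling on the convergent contour and the standard-deviation test might be sketched as follows (a minimal sketch; the sampling density and the threshold value are free choices):

```python
import numpy as np

def contour_luminance_std(image, contour, samples_per_edge=10):
    """Sample the luminance along a closed polygonal contour and return its
    standard deviation (the square root of the variance used as the index)."""
    contour = np.asarray(contour, dtype=float)
    pts = []
    n = len(contour)
    for i in range(n):
        a, b = contour[i], contour[(i + 1) % n]
        for t in np.linspace(0.0, 1.0, samples_per_edge, endpoint=False):
            pts.append(a + t * (b - a))
    pts = np.rint(np.asarray(pts)).astype(int)
    rows = np.clip(pts[:, 0], 0, image.shape[0] - 1)
    cols = np.clip(pts[:, 1], 0, image.shape[1] - 1)
    return float(image[rows, cols].std())

def on_true_contour(image, contour, sd_threshold):
    """Threshold determination: accept the contour when its luminance
    variation is at or below the chosen standard deviation."""
    return contour_luminance_std(image, contour) <= sd_threshold
```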
  • the contour expanding unit 102h is a contour expanding unit that expands the contour.
  • For the second and subsequent frames, the contour expansion unit 102h may expand the convergent contour extracted in the previous frame k-1 and set the expanded contour as the initial contour of the current frame.
  • When the threshold determination unit 102g determines that the variance is greater than the threshold, the contour expansion unit 102h expands the convergent contour generated by the contour convergence unit 102e and sets the expanded contour as a new initial contour.
  • the contour expanding unit 102h may expand the convergent contour by the dynamic contour modeling method.
  • The contour extraction unit 102i is contour extraction means that extracts the convergent contour generated by the contour convergence unit 102e as the contour of the object when the threshold determination unit 102g determines that the variance is equal to or less than the threshold.
  • the contour acquisition unit 102j is a contour acquisition unit that acquires the center position and / or in-contour brightness of the contour of the object extracted by the contour extraction unit 102i.
  • the contour acquisition unit 102j stores the center position of the contour of the target object in the position information file 106b, and stores the in-contour brightness of the target object in the luminance information file 106c.
  • The cell division point setting unit 102k is cell division point setting means that, when the object is a cell and the time reverse order frame setting unit 102b controls the contour extraction processing of the frames k to proceed in reverse time order, sets a cell division point and stores it in the cell division point file 106d when the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold.
  • Here, the threshold is the "same point determination distance" parameter, which defines how close two convergent contours must be to be tracked as the same object.
  • The contour extraction device 100 may be communicably connected to a network 300 via a communication device such as a router and a wired or wireless communication line such as a dedicated line. That is, as shown in FIG. 2, this system may be configured such that the contour extraction device 100 and an external system 200, which provides an external database storing setting parameters and the like related to the dynamic contour modeling method and external programs such as a contour extraction program, are communicably connected via the network 300.
  • The communication control interface unit 104 of the contour extraction device 100 is an interface connected to a communication device (not shown) such as a router connected to a communication line or the like, and controls communication between the contour extraction device 100 and the network 300 (or a communication device such as a router). That is, the communication control interface unit 104 has a function of communicating data with other terminals via communication lines.
  • the network 300 has a function of interconnecting the contour extraction device 100 and the external system 200, such as the Internet.
  • The external system 200 is connected to the contour extraction device 100 via the network 300 and has a function of providing the user with an external database storing setting parameters and the like related to the dynamic contour modeling method and with external programs such as a contour extraction program.
  • the external system 200 may be configured as a WEB server, an ASP server, or the like.
  • the hardware configuration of the external system 200 may be configured by an information processing apparatus such as a commercially available workstation or a personal computer and its attached devices.
  • Each function of the external system 200 is realized by a CPU, a disk device, a memory device, an input device, an output device, a communication control device, and the like in the hardware configuration of the external system 200 and a program for controlling them.
  • FIG. 3 is a flowchart showing an example of the contour extraction process of the contour extraction apparatus 100 according to the present embodiment.
  • the initial contour setting unit 102d sets an initial contour in association with an ID for each object in the image data stored in the image data file 106a (step SA-1).
  • Next, the contour convergence unit 102e converges the set initial contour with Snakes to generate a convergent contour (step SA-2). More specifically, the contour convergence unit 102e repeats the generation of a convergent contour and the calculation of the energy value based on the energy function E_snakes for the set number of convergences, and generates the convergent contour so that the energy value is minimized.
  • E_snakes = ∫ { E_in(v(s)) + E_img(v(s)) + E_con(v(s)) } ds (Here, E_in represents an internal energy term of the contour line, E_img represents an energy term that is minimal at the edge portions of the image, and E_con represents an energy term due to an external binding force.)
  • the luminance variance calculation unit 102f acquires the luminance on the convergence contour generated by the contour convergence unit 102e (step SA-3). More specifically, the luminance dispersion calculation unit 102f acquires the luminance value of the image pixel corresponding to the convergent contour.
  • At this time, when the image data stored in the image data file 106a includes luminance information corresponding to a plurality of colors (for example, luminance values corresponding to the three RGB primary colors), the luminance variance calculation unit 102f may acquire, as the luminance, the p-th root of the p-th power sum of the luminance information weighted for each color.
  • the luminance variance calculation unit 102f calculates the obtained luminance variance (step SA-4).
  • Next, the threshold determination unit 102g compares the luminance variance calculated by the luminance variance calculation unit 102f with a predetermined threshold (for example, a predetermined standard deviation value), and determines whether the variance is greater than the threshold or equal to or less than the threshold (step SA-5).
  • When the threshold determination unit 102g determines that the variance is greater than the threshold (step SA-5, No), the contour expansion unit 102h determines that the convergent contour has not been correctly set on the true contour of the object (step SA-6), expands the convergent contour, sets the expanded convergent contour as a new initial contour, and returns to step SA-2 (step SA-7). The overall loop is sketched below.
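  • Collecting steps SA-1 to SA-8, the control flow for one object might be sketched as follows; converge_contour, contour_luminance_std, and expand_contour are the hypothetical helpers sketched elsewhere in this description, and the max_rounds cap is an added safeguard not present in the flowchart:

```python
def extract_object_contour(image, initial_contour, sd_threshold,
                           expand_factor=1.2, max_rounds=20):
    """Steps SA-1 to SA-8: converge, test the luminance variance, expand, repeat."""
    contour = initial_contour                             # SA-1: initial contour (ROI)
    for _ in range(max_rounds):
        contour = converge_contour(image, contour)        # SA-2: Snakes convergence
        sd = contour_luminance_std(image, contour)        # SA-3, SA-4
        if sd <= sd_threshold:                            # SA-5, Yes
            return contour                                # SA-8: extracted contour
        contour = expand_contour(contour, expand_factor)  # SA-6, SA-7: new initial contour
    return contour  # safety stop; the caller may flag this object for review
```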
  • FIG. 4 is a conceptual diagram showing an example in which the contour extraction processing according to the present embodiment is performed.
  • the shaded area in FIG. 4 represents an object (cell).
  • the step numbers in FIG. 4 correspond to the step numbers in FIG.
  • the initial contour setting unit 102d sets an initial contour for the next frame k ( ⁇ SA-1> in FIG. 4).
  • That is, the initial contour setting unit 102d sets, as the initial contour for the next frame, the contour obtained by expanding the contour extracted in the previous frame.
  • the contour convergence unit 102e performs contour fitting by Snakes to converge the initial contour and generate a convergent contour ( ⁇ SA-2> on the left in FIG. 4).
  • In this example, because the object has moved and deformed from the previous frame beyond the expansion range of the Snakes contour, the contour intersects the true contour of the object; energy minimization then occurs at the luminance gradients inside and around the object, and the Snakes contour cannot be set correctly on the true contour of the object.
  • FIG. 5 is a diagram illustrating an example in which processing by the conventional method is performed and an example in which the contour extraction processing of the present embodiment is performed.
  • the right diagram and the left diagram represent the same image obtained by imaging two cells as objects, and the convergent contours of contour numbers 0 to 3 are represented by polygons.
  • In the conventional method shown in the left diagram of FIG. 5, when the cell moves beyond the expansion range of the contour (ROI) and the initial contour before the re-contraction calculation crosses the cell region, the contour energy is minimized at a false contour generated by luminance changes inside the cell, and the ROI cannot be matched to the true contour of the cell (see contour number 2 in the left diagram). In the present embodiment, as shown in the right diagram of FIG. 5, contraction and expansion of the ROI are repeated using the luminance variance as an index during the contour extraction process; therefore, even when the ROI ends up inside the cell, it can be expanded from the cell interior outward and matched to the true contour of the object (see contour number 2 in the right diagram).
  • When the threshold determination unit 102g determines that the variance is equal to or less than the threshold (step SA-5, Yes), the contour extraction unit 102i judges that the convergent contour has been correctly set on the true contour of the object, and extracts the convergent contour as the contour of the object (step SA-8).
  • Next, the contour acquisition unit 102j stores the center position (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SA-9).
  • FIG. 6 is a flowchart illustrating an example of the time order analysis process.
  • As shown in FIG. 6, the time normal order frame setting unit 102c first sets the first frame 1 on the time axis of the image data stored in the image data file 106a as the start frame, and the initial contour setting unit 102d sets an initial contour for each object (step SB-1).
  • In step SB-2, contour expansion processing is performed: for the second and subsequent frames, the contour extracted in the previous frame is expanded and set as the initial contour of the current frame.
  • The subsequent steps SB-3 to SB-8 are performed in the same manner as steps SA-2 to SA-7 described above, and the processing is repeated until the luminance variance on the convergent contour becomes equal to or less than the threshold.
  • When the luminance variance becomes equal to or less than the threshold, the contour extraction unit 102i determines that the convergent contour has been correctly set on the true contour of the object (step SB-9), extracts the convergent contour as the contour of the object, and completes the setting of the convergent contour (step SB-10).
  • Next, the contour acquisition unit 102j stores the center position (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SB-11).
  • When the time normal order frame setting unit 102c determines that the analysis frame k has not reached frame n (step SB-12, No), it increments k (k ← k + 1), advances the analysis frame by one, and returns to step SB-2 (step SB-13).
  • FIG. 7 is a diagram schematically illustrating a state in which the extracted contour extracted in the previous frame is expanded, the initial contour in the next frame is set, and a convergent contour is generated.
  • In FIG. 7, the left diagram shows the object and the extracted contour in the previous frame, the center diagram shows the object and the initial contour in the next frame, and the right diagram shows the object and the convergent contour in the next frame.
  • Because the contour expansion unit 102h expands the contour extracted by the contour extraction unit 102i in the previous frame and sets the expanded contour as the initial contour in the next frame, the initial contour surrounds the object even when the object moves or deforms, and a convergent contour that matches the true contour of the object can be set.
  • the process of the contour expanding unit 102h is the same in the following time reverse order analysis process.
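  • The patent does not fix how the expansion is computed; scaling the polygon about its centroid is one simple realization, sketched here under that assumption:

```python
import numpy as np

def expand_contour(contour, factor=1.2):
    """Expand a closed polygonal contour (ROI) about its centroid so that it
    again surrounds an object that may have moved or deformed."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    return centroid + factor * (contour - centroid)
```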
  • When the signal (luminance) of the object disappears, the initial contour setting unit 102d may perform feature extraction by the Hough transform and reset the initial contour to the extracted features in the vicinity of the previous contour. That is, when the signal has disappeared in the previous frame k-1, in step SB-2 for the next frame k the initial contour setting unit 102d may perform feature extraction by the Hough transform and set an initial contour at feature points near the ROI of a frame in which the signal was present.
  • the Hough transform is performed to reset the initial contour near the signal disappearance point, so that the same object can be continuously tracked before and after the signal disappearance.
  • the initial contour resetting process by the Hough transform will be described in detail with reference to FIGS. 9 and 10 in the following time reverse order analysis process.
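  • As an illustration of this resetting step, a circular Hough transform (here via OpenCV's cv2.HoughCircles; all parameter values are arbitrary assumptions) can propose feature points near the last known ROI:

```python
import cv2
import numpy as np

def reset_initial_contour(image_u8, last_center, search_radius=40):
    """Re-seed the initial contour near a vanished signal via the Hough transform.

    image_u8    : 8-bit grayscale frame in which the signal was present.
    last_center : (x, y) center of the ROI before the signal disappeared.
    Returns an (x, y, r) circle usable as a new circular initial contour,
    or None if no feature is found nearby.
    """
    blurred = cv2.GaussianBlur(image_u8, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=10, param1=100, param2=20,
                               minRadius=5, maxRadius=60)
    if circles is None:
        return None
    lx, ly = last_center
    # Keep only feature points in the vicinity of the previous ROI.
    nearby = [(x, y, r) for x, y, r in circles[0]
              if np.hypot(x - lx, y - ly) <= search_radius]
    return min(nearby, key=lambda c: np.hypot(c[0] - lx, c[1] - ly), default=None)
```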
  • FIG. 8 is a flowchart showing an example of the time reverse order analysis processing.
  • the time-reverse order frame setting unit 102b sets the last frame n on the time axis among the image data stored in the image data file 106a as a start frame (step SC-1).
  • the initial contour setting unit 102d sets an initial contour for each object for the start frame (step SC-2).
  • In step SC-3, contour expansion processing is performed: for the second and subsequent analysis frames, the contour extracted in the previous frame is expanded and set as the initial contour, as described later.
  • In step SC-4, the contour convergence unit 102e converges the initial contour set by the initial contour setting unit 102d with Snakes to generate a convergent contour.
  • The subsequent steps SC-4 to SC-9 are performed in the same manner as steps SA-2 to SA-7 described above, and the processing is repeated until the luminance variance on the convergent contour becomes equal to or less than the threshold.
  • When the luminance variance becomes equal to or less than the threshold, the contour extraction unit 102i determines that the convergent contour has been correctly set on the true contour of the object, extracts the convergent contour as the contour of the object, and completes the setting of the convergent contour (step SC-11).
  • Next, the contour acquisition unit 102j stores the center coordinates (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SC-12).
  • Next, the cell division point setting unit 102k determines whether or not the distance between the center coordinates (X, Y) of the contours of any two objects is sufficiently small (that is, smaller than the "same point determination distance") (step SC-13).
  • When it determines that the distance between the center coordinates is smaller than the predetermined threshold (step SC-13, Yes), the cell division point setting unit 102k stores the time (or simply the frame number) of the analysis frame and the contour numbers in the cell division point file 106d as a cell division point (step SC-14).
  • When the time reverse order frame setting unit 102b determines that the analysis frame k has not reached frame 1 (step SC-15, No), it decrements k (k ← k − 1), moves the analysis frame back by one, and returns to step SC-3 (step SC-16).
  • That is, for the second and subsequent analysis frames, the contour expansion unit 102h expands the contour extracted by the contour extraction unit 102i in the previous frame k+1, as in step SB-2 described above, and the expanded contour is set as the initial contour of the analysis frame k (step SC-3).
  • When the signal has disappeared, the initial contour setting unit 102d may perform feature extraction by the Hough transform and set an initial contour at feature points in the vicinity of the ROI of a frame in which the signal was present.
  • FIG. 9 is a diagram schematically illustrating an example of processing for resetting the initial contour by Hough transform.
  • FIG. 10 is a diagram illustrating an example of a contour extraction result according to the present embodiment.
  • In the graph of FIG. 10, the horizontal axis represents the frame number and the vertical axis represents the luminance; the right diagram of FIG. 10 represents the analysis result as a phylogenetic tree.
  • As shown in the graph, the luminance may change partway through the frames and the signal may disappear.
  • In the present embodiment, the initial contour is reset near the signal vanishing point by performing the Hough transform, so that the same object can be identified before and after the signal disappearance; as shown in the right diagram, the same object can therefore be tracked continuously even if the signal disappears.
  • FIG. 11 is a diagram for explaining the principle of determining the cell division point.
  • the left figure, the center figure, and the right figure correspond to three frames arranged in order on the time axis, respectively.
  • In the left diagram one object is shown schematically, and in the center and right diagrams two objects are shown schematically.
  • When, in reverse time order, the distance between the centers of the two contours falls below the threshold (the time of the center diagram), the cell division point setting unit 102k sets this time as the cell division point. Note that even if the distance between the contour centers is less than the threshold in a certain frame k, the cell division point setting unit 102k determines that cell division has not occurred when the two contours do not overlap in the next frame k-1. A sketch of this determination follows.
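  • A hedged sketch of this determination follows; the overlap test is crudely approximated here by comparing the center distance with the contours' mean radii, which the patent does not specify:

```python
import numpy as np

def centroid(contour):
    return np.asarray(contour, dtype=float).mean(axis=0)

def detect_division_points(tracks, same_point_distance):
    """Scan contours in reverse time order and record cell division points.

    tracks : dict mapping contour ID -> {frame_number: (N, 2) contour array},
             produced by the reverse-order tracking.
    Two IDs whose contour centers come closer than the "same point
    determination distance" in frame k are treated as one pre-division cell,
    provided their contours still overlap in the next (earlier) frame k-1.
    """
    def mean_radius(c):
        c = np.asarray(c, dtype=float)
        return float(np.linalg.norm(c - c.mean(axis=0), axis=1).mean())

    division_points = []
    frames = sorted({f for t in tracks.values() for f in t}, reverse=True)
    ids = list(tracks)
    for k in frames:
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                if k not in tracks[a] or k not in tracks[b]:
                    continue
                d = np.linalg.norm(centroid(tracks[a][k]) - centroid(tracks[b][k]))
                if d >= same_point_distance:
                    continue
                prev = k - 1
                if prev in tracks[a] and prev in tracks[b]:
                    dp = np.linalg.norm(centroid(tracks[a][prev]) - centroid(tracks[b][prev]))
                    radius = max(mean_radius(tracks[a][prev]), mean_radius(tracks[b][prev]))
                    if dp >= radius:
                        continue  # close in frame k but separate in k-1: not a division
                division_points.append((k, a, b))
    return division_points
```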
  • FIG. 12 is a diagram illustrating, as an example, the result of contour extraction processing performed in reverse order on the time axis in time-continuous frames.
  • In FIG. 12, the numbers "282" to "285" indicate frame numbers, and the numbers "0" and "1" indicate the contour numbers (ID) of the respective contours.
  • Each frame image shows HeLa cells expressing a nuclear-labeling probe in which GFP is linked to histone H2B, a nuclear protein responsible for stabilizing DNA structure, imaged with the Olympus LCV100 system (trade name).
  • In the later frame, the contours of ID: 0 and ID: 1 exist independently of each other, while in the earlier frames the contours of ID: 0 and ID: 1 completely overlap because they target the same cell before cell division.
  • The cell division point setting unit 102k sets as the cell division point the frame number at which the distance between the contour centers falls below the threshold (in this case "285"), or the frame number at which the two contours overlap (in this case "284").
  • the cell division point setting unit 102k may create a phylogenetic tree of cell division based on the time (or frame number) and the contour number stored in the cell division point file 106d.
  • FIG. 13 is a diagram showing a contour extraction result (left diagram) and a cell division analysis result (right diagram) in a certain frame k.
  • Example 1 of the present embodiment will be described below with reference to FIG. 14. First, the principle of Example 1 is described in comparison with the prior art.
  • Time-lapse imaging of cells and analysis of the resulting luminance information make it possible to obtain temporal and spatial information on intracellular signals.
  • In the conventional method, a certain luminance value is set as a threshold for each frame, and the luminance sets regarded as cell regions are extracted by binarization. Statistical processing then identifies the luminance sets with the minimum relative movement distance between frames as the same cell, and cell tracking is performed.
  • With this method, however, the analysis target is limited to time-lapse images of cells that have luminance (that is, fluorescence microscope time-lapse images), and binarization can be applied only to images in which neither the background nor the cell luminance has unevenness or a gradient and the imaged cell population shows no significant luminance change.
  • In Example 1, the cell tracking ability is improved by a more efficient cell extraction method, and the above problem is solved by a cell division identification algorithm. That is, in Example 1, cell regions are extracted not by the luminance-based binarization used in the conventional method but by the dynamic contour modeling method (Snakes), which extracts a region by attending to the cell contour; in each subsequent frame, the cell is tracked by expanding the range of the contour model and then performing the contraction again.
  • FIG. 14 is a flowchart showing the processing of Example 1.
  • First, the contour extraction device 100 of Example 1 opens the image data file 106a by the processing of the time reverse order frame setting unit 102b and reads the microscopic time-lapse image data of the cells (step SD-1).
  • The contour extraction device 100 then acquires the necessary information from the image data file 106a by the processing of the time reverse order frame setting unit 102b and moves the read control k to the last recorded frame n (step SD-2).
  • That is, the analysis proceeds sequentially in the reverse direction along the time axis.
  • Next, by the processing of the initial contour setting unit 102d, the contour extraction device 100 sets, on the final frame n, the initial parameters to be given to the dynamic contour model (Snakes): the initial contour position, initial contour radius, contraction strength, number of contraction calculations, expansion range, number of control points on the contour, and so on (step SD-3).
  • This initial contour is individually set for a plurality of cells to be analyzed, that is, initial contours having IDs of 1 to m are set for m cells.
  • Next, by the processing of the contour convergence unit 102e, the contour extraction device 100 performs the energy model calculation so that the contour with the given initial radius contracts to the position that best matches the contour of the cell, and generates a convergent contour (step SD-4).
  • Next, the contour extraction device 100 acquires the luminance values of the image pixels on the convergent contour by the processing of the luminance variance calculation unit 102f (step SD-5), and calculates the variance of the acquired luminance (step SD-6).
  • When the variance is larger than the predetermined standard deviation (step SD-7, No), the contour extraction device 100 expands the convergent contour, sets it as a new initial contour, and returns to step SD-4.
  • That is, the contour extraction device 100 repeats the contour convergence processing until the variance becomes equal to or smaller than the predetermined standard deviation (steps SD-4 to SD-7); when the variance is equal to or smaller than the predetermined standard deviation (step SD-7, Yes), the contour of the cell is extracted by the processing of the contour extraction unit 102i, and the region of interest (ROI) is set (fitted to the dynamic contour model).
  • Next, the contour extraction device 100 acquires luminance information from the obtained ROIs by the processing of the contour acquisition unit 102j (step SD-8), and writes the luminance information in the ROIs with ID: 1 to m to the luminance information file 106c (step SD-9).
  • When the contour extraction device 100 determines, by the processing of the cell division point setting unit 102k, that two or more ROIs indicate the same range and have fused (that is, the cells have fused) (step SD-10, Yes), it determines that, viewed in the forward direction of the time axis, cell division has occurred, and records the cell division point in the cell division point file 106d (step SD-11).
  • the contour extracting apparatus 100 determines whether or not the reading control k has reached the first frame 1 by the processing of the time reverse order frame setting unit 102b (step SD-12), and has not reached the first frame 1 (Step SD-12, No), the control is shifted to the previous frame k-1 with respect to the time axis by the processing of the time reverse order frame setting unit 102b, and then the ROI range is expanded by the processing of the contour extension unit 102h (Step SD-13).
  • the contour extracting apparatus 100 contracts the contour again in the expanded ROI range for the next frame, and extracts the cell range (steps SD-4 to SD-13).
  • the contour extracting apparatus 100 performs the above processing in order from the reverse direction of the time axis for all the frames n to 1.
  • Finally, based on the information recorded in the luminance information file 106c and the cell division point file 106d, the contour extraction device 100 outputs information such as luminance change information and a transition diagram (phylogenetic tree) of cell division to the output unit 114.
  • Example 2 in the present embodiment will be described below with reference to FIGS. 15 and 16.
  • In Example 2, so that contours can be extracted accurately even when an image is expressed in multiple colors rather than a single color, the luminance used for contour calculation by the dynamic contour modeling method was appropriately processed, after which cell contour extraction and cell tracking were performed.
  • The flow of processing other than the calculation of the luminance for contour calculation is basically the same as in Example 1.
  • the p-th root of the p-th power sum of the luminance information weighted for each color is used as the luminance for contour calculation.
  • the luminance for contour calculation is obtained by the following equation.
  • Intensity = ( a·Ch1^p + b·Ch2^p + c·Ch3^p )^(1/p) (Here, a, b, and c are arbitrary weighting parameters for the fluorescence luminances Ch1, Ch2, and Ch3, respectively, and p is a parameter for the overall luminance.)
  • Each parameter in the above formula is a real number, specified, for example, as a double-precision floating-point number, and is more preferably a real number of 0 or greater.
  • By this calculation formula, a luminance Intensity for Snakes contour calculation that reflects the luminances of a plurality of fluorescence images is obtained. Then, based on the calculated Intensity values, the coordinates of the Snakes contour are calculated by the same method as in Example 1 described above (the time reverse order analysis processing), and the luminance information Ch1, Ch2, and Ch3 existing within the obtained Snakes contour is acquired.
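  • In code, the luminance used for contour calculation might be computed as follows (a direct transcription of the above formula; array and parameter names are illustrative):

```python
import numpy as np

def contour_intensity(ch1, ch2, ch3, a=1.0, b=1.0, c=1.0, p=1.0):
    """Intensity = (a*Ch1^p + b*Ch2^p + c*Ch3^p)^(1/p).

    ch1..ch3 : per-pixel luminance arrays of the three fluorescence channels.
    a, b, c  : per-channel weighting parameters; p : overall luminance parameter.
    With a = b = c = p = 1 (as in the analysis of FIG. 15) this reduces to a
    simple sum of the channel luminances.
    """
    ch1, ch2, ch3 = (np.asarray(x, dtype=float) for x in (ch1, ch2, ch3))
    return (a * ch1**p + b * ch2**p + c * ch3**p) ** (1.0 / p)
```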
  • FIG. 15 is a diagram showing a result of analyzing a fluorescence imaging image of a primary cultured cell of ZebraFish into which a fluorescent protein type indicator Fucci that emits fluorescence specifically in the cell cycle is introduced, based on this experimental example 2. It is.
  • This indicator, Fucci (Fluorescent Ubiquitination-based Cell Cycle Indicator), reports the phase of the cell cycle by switching between red and green fluorescence.
  • Both the time required until the green fluorescent protein (mAG) is completely degraded and the time required until the red fluorescent protein (mKO2) is synthesized and accumulates enough to emit sufficient fluorescence are of a certain length.
  • The fluorescence imaging images used for the analysis are a group of about 1000 frames obtained by time-lapse imaging, over 48 hours, of the ZebraFish primary cultured cells into which Fucci had been introduced. Of these, representative images are shown in FIG. 15 together with the extracted contours (contour numbers 2, 3, and 5).
  • In the analysis algorithm, each parameter in the above calculation formula was set to 1.
  • The cells exhibit S/G2/M-phase green fluorescence around 0h-12h, G1-phase red fluorescence around 15h-27h, S/G2/M-phase green fluorescence again around 30h-45h, and then G1-phase red fluorescence once more in the vicinity of 48h.
  • It was confirmed that the contour of the cell could be accurately extracted in every one of these images. Also, as shown in FIG. 15, of the three contours present in common at 0h (contour numbers 2, 3, and 5), the contours of contour number 2 and contour number 3 fuse in a frame (*1) near 12h.
  • FIG. 16 is a graph showing the change in intensity of each fluorescence within the contours extracted in Example 2, together with a phylogenetic tree of cell divisions created from the cell tracking results and the luminance information within the contours in Example 2.
  • FIG. 17 illustrates an example in which an ROI is set on a HeLa cell colony in a differential interference image and contour extraction of the entire colony is performed.
  • As this shows, the present invention is applicable not only to analysis of fluorescence images, in which a cell region and the background region are expressed as a luminance difference, but also to images, such as differential interference images, in which the object and the background are not expressed as a luminance difference.
  • In the embodiments above, one initial contour is set for one cell in a fluorescence microscope image and its contour is extracted. However, the object may be extracted from images other than fluorescence microscope observation images, and one initial contour may be set for a group of a plurality of cells or a plurality of objects so that the contour of the group is extracted.
  • Although the contour extraction apparatus 100 has been described as performing processing in stand-alone form, it may be configured to perform processing in response to a request from a client terminal housed separately from the contour extraction apparatus 100 and to return the processing result to that client terminal.
  • All or part of the processes described as being performed automatically can be performed manually, and all or part of the processes described as being performed manually can be performed automatically by known methods.
  • Each illustrated component of the contour extraction apparatus 100 is functional and conceptual, and need not be physically configured as illustrated.
  • Each processing function of the contour extraction apparatus 100 may be realized in whole or in part by a CPU (Central Processing Unit) and a program interpreted and executed by that CPU, or may be realized as hardware by wired logic.
  • The program is recorded on a recording medium described later and is mechanically read into the contour extraction apparatus 100 as necessary. That is, the storage unit 106, such as a ROM or HD, stores a computer program that gives instructions to the CPU in cooperation with the OS (Operating System) to perform various processes. This computer program is executed by being loaded into RAM and, together with the CPU, constitutes the control unit.
  • The computer program may be stored in an application program server connected to the contour extraction apparatus 100 via an arbitrary network 300, and all or part of it may be downloaded as necessary.
  • The program according to the present invention can also be stored in a computer-readable recording medium.
  • The “recording medium” includes any “portable physical medium” such as a flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, or Blu-ray Disc, as well as a “communication medium” that holds the program for a short period, such as a communication line or carrier wave when the program is transmitted over a network typified by a LAN, WAN, or the Internet.
  • The “program” is a data processing method described in an arbitrary language or description method, and may take any form, such as source code or binary code.
  • The “program” is not necessarily limited to a single configuration; it includes programs distributed as a plurality of modules or libraries, as well as programs that achieve their function in cooperation with a separate program typified by an OS (Operating System).
  • Well-known configurations and procedures can be used for the specific configuration for reading the recording medium, the reading procedure, the installation procedure after reading, and the like in each device described in the embodiments.
  • The various databases and the like stored in the storage unit 106 (the image data file 106a to the cell division point file 106d) are storage means such as memory devices (e.g., RAM and ROM), fixed disk devices such as hard disks, flexible disks, and optical disks, and store the various programs, tables, databases, web page files, and the like used for the various processes and for providing websites.
  • The contour extraction apparatus 100 may also be realized by connecting an information processing apparatus such as a known personal computer or workstation and installing, on that information processing apparatus, software (including programs, data, and the like) that implements the method of the present invention.
  • Furthermore, the specific form of distribution and integration of the devices is not limited to that shown in the figures; all or part of them can be functionally or physically distributed and integrated in arbitrary units according to various additions or functional loads.
  • As described above, according to the present invention, the true contour of a target object can be accurately extracted, which is extremely useful in a wide range of fields such as medicine, pharmacy, drug discovery, biological research, clinical testing, and crime prevention systems.

Abstract

The disclosed method sets an initial contour of an object within image data in which the object has been captured, produces a convergent contour by converging the set initial contour, acquires the luminance on the resulting convergent contour, calculates the variance of the luminance, compares the calculated variance with a predetermined threshold, and determines whether the variance is greater than the threshold or less than or equal to the threshold. If the variance is determined to be greater than the threshold, the resulting convergent contour is expanded and the expanded contour is set as the initial contour; if the variance is determined to be less than or equal to the threshold, the resulting convergent contour is set as the contour of the object.

Description

Contour extraction apparatus, contour extraction method, and program
 The present invention relates to a contour extraction apparatus, a contour extraction method, and a program, and more particularly to a contour extraction apparatus, a contour extraction method, and a program for cells.
 Conventionally, image analysis techniques have been developed that extract the contour of an object from image data containing the object by using a dynamic contour modeling method (Snakes) or the like.
 For example, Patent Document 1 discloses contracting or expanding a contour model using Snakes and, when the distance between two non-adjacent nodes of the contour becomes smaller than a threshold, splitting the contour model at those two nodes.
 Patent Document 2 discloses contracting and deforming a contour model using Snakes and, when contact or intersection within the contour model is detected, splitting the contour model into a plurality of models.
 Non-Patent Document 1, unlike cell contour extraction by the Snakes method, discloses a technique in which cell regions and the background are separated in a phase contrast microscope image by creating a binary map based on a luminance histogram, and cells are tracked in the time order of the frames so that an energy function relating to cell dynamics is minimized.
 As described above, conventional cell extraction and cell tracking methods in image analysis software for fluorescence microscope images set a threshold for a certain luminance value, extract cell regions from the image to be analyzed, and track cell movement from the results of statistical analysis such as position information.
JP 2002-92622 A; JP 8-329254 A
 However, conventional cell extraction methods using threshold processing have the problem that good cell extraction can become difficult under effects such as a drop in luminance caused by changes in fluorescence intensity or a background luminance gradient.
 In addition, when cell contours are extracted by the Snakes method, which efficiently extracts the contour of a region of interest in an image, and the method is applied to time-lapse images, cell tracking can be performed by repeatedly expanding and contracting the Snakes contour. However, with an algorithm that merely expands the contour range calculated by Snakes once and re-contracts it in the next frame, if the cell moves beyond the expanded range of the contour and the initial contour before the re-contraction calculation intersects the cell region, the contour energy is minimized on a false contour produced by luminance changes inside the cell, and in some cases the true contour of the cell cannot be tracked.
 The present invention has been made in view of the above problems, and its object is to provide a contour extraction apparatus, a contour extraction method, and a program that make it possible to accurately extract the true contour of an object.
 To achieve this object, the contour extraction apparatus of the present invention is a contour extraction apparatus that extracts the contour of an object and includes at least a storage unit and a control unit. The storage unit includes image data storage means for storing image data obtained by imaging the object. The control unit includes: initial contour setting means for setting an initial contour of the object in the image data stored in the image data storage means; contour convergence means for converging the initial contour set by the initial contour setting means to generate a convergent contour; luminance variance calculation means for acquiring the luminance on the convergent contour generated by the contour convergence means and calculating the variance of the luminance; threshold determination means for comparing the variance of the luminance calculated by the luminance variance calculation means with a predetermined threshold and determining whether the variance is greater than the threshold or less than or equal to the threshold; contour expansion means for, when the threshold determination means determines that the variance is greater than the threshold, expanding the convergent contour generated by the contour convergence means and setting the expanded contour as the initial contour; and contour extraction means for, when the threshold determination means determines that the variance is less than or equal to the threshold, extracting the convergent contour generated by the contour convergence means as the contour of the object.
 In the contour extraction apparatus of the present invention described above, the contour convergence means and the contour expansion means generate and expand the convergent contour by a dynamic contour modeling method.
 In the contour extraction apparatus of the present invention described above, the control unit further includes contour acquisition means for acquiring the center position and/or the in-contour luminance of the contour of the object extracted by the contour extraction means.
 In the contour extraction apparatus of the present invention described above, the image data is composed of a plurality of frames captured at a plurality of times, and the control unit controls the plurality of frames so as to be processed in the forward or reverse order of time.
 In the contour extraction apparatus of the present invention described above, the control unit further includes cell division point setting means for setting a cell division point when the plurality of frames are controlled so as to be processed in the reverse order of time and the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold.
 In the contour extraction apparatus of the present invention described above, when the image data includes luminance information corresponding to a plurality of colors, the luminance variance calculation means acquires, as the luminance, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information) and calculates the variance of the luminance.
 The present invention also relates to a contour extraction method. The contour extraction method of the present invention is executed in a contour extraction apparatus that extracts the contour of an object and includes at least a storage unit and a control unit, the storage unit including image data storage means for storing image data obtained by imaging the object. The method, executed in the control unit, includes: an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means; a contour convergence step of converging the initial contour set in the initial contour setting step to generate a convergent contour; a luminance variance calculation step of acquiring the luminance on the convergent contour generated in the contour convergence step and calculating the variance of the luminance; a threshold determination step of comparing the variance of the luminance calculated in the luminance variance calculation step with a predetermined threshold and determining whether the variance is greater than the threshold or less than or equal to the threshold; a contour expansion step of, when it is determined in the threshold determination step that the variance is greater than the threshold, expanding the convergent contour generated in the contour convergence step and setting the expanded contour as the initial contour; and a contour extraction step of, when it is determined in the threshold determination step that the variance is less than or equal to the threshold, extracting the convergent contour generated in the contour convergence step as the contour of the object.
 The present invention also relates to a program. The program of the present invention is a program to be executed in a contour extraction apparatus that includes at least a storage unit and a control unit, the storage unit including image data storage means for storing image data obtained by imaging an object. The program causes the control unit to execute: an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means; a contour convergence step of converging the initial contour set in the initial contour setting step to generate a convergent contour; a luminance variance calculation step of acquiring the luminance on the convergent contour generated in the contour convergence step and calculating the variance of the luminance; a threshold determination step of comparing the variance of the luminance calculated in the luminance variance calculation step with a predetermined threshold and determining whether the variance is greater than the threshold or less than or equal to the threshold; a contour expansion step of, when it is determined in the threshold determination step that the variance is greater than the threshold, expanding the convergent contour generated in the contour convergence step and setting the expanded contour as the initial contour; and a contour extraction step of, when it is determined in the threshold determination step that the variance is less than or equal to the threshold, extracting the convergent contour generated in the contour convergence step as the contour of the object.
 According to this invention, (1) an initial contour of an object is set in image data obtained by imaging the object, (2) the set initial contour is converged to generate a convergent contour, (3) the luminance on the generated convergent contour is acquired and the variance of the luminance is calculated, (4) the calculated variance of the luminance is compared with a predetermined threshold to determine whether the variance is greater than the threshold or less than or equal to the threshold, (5) when the variance is determined to be greater than the threshold, the generated convergent contour is expanded and the expanded contour is set as the initial contour, and (6) when the variance is determined to be less than or equal to the threshold, the generated convergent contour is extracted as the contour of the object. This has the effect that the true contour of the object can be extracted accurately.
 In addition, in (2) and (5) above, the present invention generates and expands the convergent contour by the dynamic contour modeling method (Snakes), which has the effect that the contour can be accurately fitted to the contour of the object.
 The present invention also acquires the center position and/or the in-contour luminance of the extracted contour of the object, which has the effect that the position and luminance of the object can be analyzed with high accuracy.
 In the present invention, the image data is composed of a plurality of frames captured at a plurality of times, and the plurality of frames are controlled so as to be processed in the forward or reverse order of time, which has the effect that the object can be tracked accurately even when it deforms or moves during the imaging period.
 In addition, in the present invention, when the plurality of frames are controlled so as to be processed in the reverse order of time, a cell division point is set when the center positions of the contours of a plurality of objects come closer than a predetermined threshold, which has the effect that even when cell division occurs, the cell division point can be analyzed accurately without complicating the algorithm. That is, whereas the conventional method of analyzing frames in forward time order determines cell division by a complicated algorithm of splitting the contour model, the present invention analyzes the frames in reverse time order and determines cell division by the fusion of contour models, so that the algorithm can be simplified and the cell division point can be analyzed accurately.
 In addition, in (2) above, when the image data includes luminance information corresponding to a plurality of colors, the present invention acquires, as the luminance, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information) and calculates the variance of the luminance. This has the effect that, even when the image is expressed in multiple colors rather than a single color, the luminances corresponding to the individual colors can be unified into a single luminance index for the contour extraction calculation.
FIG. 1 is a diagram schematically showing how an initial contour is converged by the dynamic contour modeling method and fitted to the contour of an object.
FIG. 2 is a block diagram showing an example of the configuration of the contour extraction apparatus 100 to which the present invention is applied.
FIG. 3 is a flowchart showing an example of the contour extraction processing of the contour extraction apparatus 100 in the present embodiment.
FIG. 4 is a conceptual diagram showing an example in which the contour extraction processing according to the present embodiment is performed.
FIG. 5 is a diagram showing an example processed by the conventional method and an example processed by the contour extraction processing of the present embodiment.
FIG. 6 is a flowchart showing an example of the forward time order analysis processing.
FIG. 7 is a diagram schematically showing how the contour extracted in the previous frame is expanded to set the initial contour for the next frame and a convergent contour is generated.
FIG. 8 is a flowchart showing an example of the reverse time order analysis processing.
FIG. 9 is a diagram schematically showing an example of processing for resetting the initial contour by the Hough transform.
FIG. 10 is a diagram showing an example of a contour extraction result according to the present embodiment.
FIG. 11 is a diagram for explaining the principle of determining a cell division point.
FIG. 12 is a diagram showing, as an example, the result of contour extraction processing performed in reverse order along the time axis on temporally consecutive frames.
FIG. 13 is a diagram showing a contour extraction result in a certain frame k (left) and an analysis result of cell division by a phylogenetic tree (right).
FIG. 14 is a flowchart showing the processing of Example 1.
FIG. 15 is a diagram showing the result of analyzing, based on Example 2, fluorescence imaging images of ZebraFish primary cultured cells into which the fluorescent-protein-based indicator Fucci, which emits fluorescence specific to the phase of the cell cycle, had been introduced.
FIG. 16 is a graph showing the change in intensity of each fluorescence within the contours extracted in Example 2, together with a phylogenetic tree of cell divisions created from the cell tracking results and the luminance information within the contours of Example 2.
FIG. 17 is a diagram showing an example in which an ROI is set on a HeLa cell colony and contour extraction of the entire colony is performed.
 Embodiments of a contour extraction apparatus, a contour extraction method, and a program according to the present invention will now be described in detail with reference to the drawings. The present invention is not limited to these embodiments.
 In particular, the following embodiments sometimes describe examples in which the present invention is applied to cell analysis techniques; however, the invention is not limited to such cases and can likewise be applied in all technical fields, such as crime prevention systems.
[Outline of the Present Invention]
 Hereinafter, the outline of the present invention will be described with reference to FIG. 1, and then the configuration, processing, and the like of the present invention will be described in detail.
 The present invention generally has the following basic features. That is, the contour extraction apparatus of the present invention includes a storage unit and a control unit, and stores image data obtained by imaging an object.
 The contour extraction apparatus of the present invention then sets an initial contour of the object in the image of the stored image data. The initial contour is set, manually or automatically using known techniques, so as to surround the object to be analyzed.
 The contour extraction apparatus of the present invention then converges the set initial contour to generate a convergent contour. Here, FIG. 1 schematically shows how the initial contour is converged by the dynamic contour modeling method and fitted to the contour of the object. In FIG. 1, the left diagram shows the object and the initial contour, and the right diagram shows the object and the convergent contour. As shown in FIG. 1, the contour extraction apparatus of the present invention generates a convergent contour from a circular initial contour by the dynamic contour modeling method or the like.
 The contour extraction apparatus of the present invention then acquires the luminance on the convergent contour and calculates the variance of the luminance. Here, the "variance of the luminance" is a value indicating the dispersion of luminance values and is expressed, for example, by the standard deviation, which is the square root of the variance.
 The contour extraction apparatus of the present invention then compares the variance of the luminance with a predetermined threshold and determines whether the variance is greater than the threshold or less than or equal to the threshold.
 When the variance is determined to be greater than the threshold, the contour extraction apparatus of the present invention expands the convergent contour, sets the expanded contour as the initial contour, and repeats the processing described above.
 On the other hand, when the variance is determined to be less than or equal to the threshold, the contour extraction apparatus of the present invention extracts the generated convergent contour as the contour of the object. Here, the contour extraction apparatus of the present invention may acquire the center position and/or the in-contour luminance of the contour of the object.
 The above is the outline of the contour extraction processing of the present invention for a single-frame image. When the image data is composed of a plurality of frames captured at a plurality of times, for example by time-lapse imaging at regular intervals, the plurality of frames may be controlled so that the above contour extraction processing is performed in the forward or reverse order of time. When the above contour extraction processing is performed in reverse order, a cell division point may be set when the center positions of the contours of a plurality of objects come closer than a predetermined threshold. This concludes the outline of the present invention.
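As a compact summary of this outline, the per-frame converge/measure/expand loop might be sketched as follows. This is a minimal sketch: converge, expand, and sample_luminance are placeholders for the Snakes-based operations described below, and only the variance-threshold control flow is shown.

    import statistics

    def extract_contour(image, initial_contour, threshold,
                        converge, expand, sample_luminance):
        contour = initial_contour
        while True:
            contour = converge(image, contour)         # generate the convergent contour
            values = sample_luminance(image, contour)  # luminance on the contour
            variance = statistics.pvariance(values)
            if variance <= threshold:
                return contour                         # extract as the object's contour
            contour = expand(contour)                  # expand; set as new initial contour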
[Configuration of the Contour Extraction Apparatus and System]
 Next, the configuration of the contour extraction apparatus will be described with reference to FIG. 2. FIG. 2 is a block diagram showing an example of the configuration of the contour extraction apparatus 100 to which the present invention is applied, conceptually showing only the portions related to the present invention. As shown in FIG. 2, the contour extraction apparatus 100 is roughly configured with a control unit 102, a communication control interface unit 104, a storage unit 106, and an input/output control interface unit 108, and these units of the contour extraction apparatus 100 are communicably connected via arbitrary communication paths.
 In FIG. 2, the control unit 102 is a CPU or the like that performs overall control of the entire contour extraction apparatus 100. The input/output control interface unit 108 is an interface connected to the input unit 112 and the output unit 114. The storage unit 106 is a device that stores various databases, tables, and the like.
 The various databases and tables stored in the storage unit 106 (the image data file 106a to the cell division point file 106d) are storage means such as a fixed disk device. For example, the storage unit 106 stores various programs, tables, files, databases, and the like used for various processes.
 Among the components of the storage unit 106, the image data file 106a is image data storage means that stores image data obtained by imaging objects. Here, the image data file 106a may store image data composed of a plurality of frames 1 to n captured at a plurality of times, for example by time-lapse imaging at regular intervals. In the following embodiments, frame 1 denotes the first frame on the time axis, frame n denotes the last frame on the time axis, and frame k denotes an arbitrary frame. The image data file 106a may also store image data containing luminance information corresponding to a plurality of colors (for example, luminance values corresponding to each of the three RGB primary colors), and may store, with this image data as the original image data, image data converted so that the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information) is taken as the luminance.
 The position information file 106b is position information storage means that stores the center positions of the contours of objects. In the present embodiment, each contour is given a contour number (ID), and one object is associated with one initial contour, so each object is identified by the contour number of its convergent contour. That is, the position information file 106b stores, as an example, coordinates (x, y) in association with a contour number (ID) and a frame number.
 The luminance information file 106c is luminance information storage means that stores the in-contour luminance of the contour of each object. For example, the luminance information file 106c stores luminance values in association with contour numbers and frame numbers.
 The cell division point file 106d is cell division point storage means that stores cell division points. For example, the cell division point file 106d stores the contour numbers of the contours of two objects in association with a frame number.
 In FIG. 2, the input/output control interface unit 108 controls the input unit 112 and the output unit 114. Here, the output unit 114 is a monitor (including a home television), a speaker, or the like (in the following, the output unit 114 may be described as a monitor). As the input unit 112, a keyboard, a mouse, a microphone, and the like can be used in addition to an image input device such as a microscope imaging device for imaging at regular intervals.
 In FIG. 2, the control unit 102 has an internal memory for storing control programs such as an OS (Operating System), programs defining various processing procedures, and required data, and performs information processing for executing various processes using these programs. In terms of functional concept, the control unit 102 includes a frame setting unit 102a, an initial contour setting unit 102d, a contour convergence unit 102e, a luminance variance calculation unit 102f, a threshold determination unit 102g, a contour expansion unit 102h, a contour extraction unit 102i, a contour acquisition unit 102j, and a cell division point setting unit 102k.
 Among these, the frame setting unit 102a controls the plurality of frames stored in the image data file 106a so that the contour extraction processing is performed in the forward or reverse order of time. As shown in FIG. 2, the frame setting unit 102a includes a time reverse order frame setting unit 102b and a time forward order frame setting unit 102c. The time reverse order frame setting unit 102b is time reverse order frame setting means that sets the last frame n as the starting point (key frame) and sets frame k so that the contour extraction processing is performed in reverse time order until the first frame 1 is reached. The time forward order frame setting unit 102c is time forward order frame setting means that sets the first frame 1 as the starting point (key frame) and sets frame k so that the contour extraction processing is performed in forward time order until the last frame n is reached.
 The initial contour setting unit 102d is initial contour setting means that sets the initial contour of an object in the image data stored in the image data file 106a. That is, the initial contour is set by the initial contour setting unit 102d so as to surround the object and is converged (and expanded as necessary) in subsequent processing so as to match the true contour of the object; the initial contour and the convergent contour (that is, the dynamic contour model) therefore represent an ROI (Region of Interest) for each object (in the following embodiments, these are sometimes called "ROI" to distinguish them from the true contour of the object). The initial contour setting unit 102d manages each ROI with a contour number (ID) that is kept uniform for the same object across frames.
 In the initial contour setting processing by the initial contour setting unit 102d, as an example, the X and Y coordinates indicating the placement position of the initial contour, the initial number of divisions indicating into how many line segments the initial contour is expressed, and the initial radius indicating the size of the initial contour are set. Some or all of these setting parameters may be set automatically by known initial contour setting means, or may be set manually. In addition, when the luminance (signal) has disappeared in the previous frame and reappears in the next frame, the initial contour setting unit 102d may extract features by the Hough transform and set the initial contour at feature points near the ROI in the frame where the signal was present.
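As an illustration of the setting parameters above (center X and Y coordinates, initial number of divisions, initial radius), a circular initial contour could be generated as a regular polygon; the function below is a hypothetical sketch, not the apparatus's actual initializer.

    import math

    def initial_contour(x, y, divisions=16, radius=20.0):
        # Approximate a circle of the given radius around (x, y) with
        # `divisions` vertices, i.e., `divisions` straight line segments.
        return [(x + radius * math.cos(2 * math.pi * i / divisions),
                 y + radius * math.sin(2 * math.pi * i / divisions))
                for i in range(divisions)]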
 The contour convergence unit 102e is contour convergence means that converges the initial contour set by the initial contour setting unit 102d or the contour expansion unit 102h to generate a convergent contour. Here, the contour convergence unit 102e may generate the convergent contour by a dynamic contour modeling method. More specifically, the contour convergence unit 102e converges the contour (ROI) to generate the convergent contour so that the value calculated by the following energy function E_snakes is minimized:

E_snakes = ∫{E_in(v(s)) + E_img(v(s)) + E_con(v(s))}ds

(Here, E_in represents the internal energy term of the contour line, a function whose energy decreases as the ROI contour becomes shorter and the set of line segments becomes smoother. E_img represents an energy term that takes a minimum at edge portions of the image, a function whose energy decreases at points where the derivative of the image approaches 0. E_con represents an energy term due to an external constraint force.)
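A discretized version of this energy, evaluated over the polygon vertices of the ROI, might look like the following sketch. The gradient-magnitude image term and the alpha/beta weights are assumptions of this sketch, and the external constraint term E_con is omitted; the apparatus's formulation is the integral given above.

    import math

    def snakes_energy(vertices, grad, alpha=1.0, beta=1.0):
        # E_in: shorter (and hence smoother) contours get lower energy.
        # E_img: vertices on strong edges (large gradient) get lower energy.
        # `grad` is a precomputed gradient-magnitude array indexed grad[y][x].
        energy = 0.0
        for i, (x, y) in enumerate(vertices):
            px, py = vertices[i - 1]                   # previous vertex (wraps around)
            e_in = alpha * math.hypot(x - px, y - py)  # internal energy term E_in
            e_img = -beta * grad[int(y)][int(x)]       # image (edge) energy term E_img
            energy += e_in + e_img
        return energy

Minimizing this value over candidate vertex positions, for the set number of contraction iterations, corresponds to the convergence described above.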
 For the contraction contour generation processing by the contour convergence unit 102e, parameters such as a "contraction force" indicating how far the ROI is contracted when it is contracted, a "smoothness" indicating how smooth a curve the ROI may be defined as, an "attraction force" indicating to what extent the ROI is made to stay near the object, a "minimum number of vertices" indicating the minimum number of line segments the ROI is defined as a set of, and a "number of contractions" indicating how many contraction calculations are performed in the ROI calculation can, as an example, be set according to the type of object, the purpose of the analysis, and the like.
 The luminance variance calculation unit 102f is luminance variance calculation means that acquires the luminance on the convergent contour generated by the contour convergence unit 102e and calculates the variance of the luminance. Here, when the image data stored in the image data file 106a contains luminance information corresponding to a plurality of colors (for example, luminance values corresponding to each of the three RGB primary colors), the luminance variance calculation unit 102f may acquire, as the luminance, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information) and calculate the variance of the luminance.
 The threshold determination unit 102g is threshold determination means that compares the variance of the luminance calculated by the luminance variance calculation unit 102f with a predetermined threshold and determines whether the variance is greater than the threshold or less than or equal to the threshold. For example, the threshold determination unit 102g determines whether the variance of the luminance on the convergent contour exceeds a certain standard deviation serving as the threshold.
 The contour expansion unit 102h is contour expansion means that expands a contour. Here, when the initial contour of a certain frame k is set for the first time, the contour expansion unit 102h may expand the convergent contour of the previous frame k-1 and set the expanded contour as the initial contour. In addition, when the threshold determination unit 102g determines that the variance is greater than the threshold, the contour expansion unit 102h expands the convergent contour generated by the contour convergence unit 102e and sets the expanded contour as the initial contour. The contour expansion unit 102h may expand the convergent contour by the dynamic contour modeling method.
 The contour extraction unit 102i is contour extraction means that, when the threshold determination unit 102g determines that the variance is less than or equal to the threshold, extracts the convergent contour generated by the contour convergence unit 102e as the contour of the object.
 The contour acquisition unit 102j is contour acquisition means that acquires the center position and/or the in-contour luminance of the contour of the object extracted by the contour extraction unit 102i. The contour acquisition unit 102j stores the center position of the contour of the object in the position information file 106b and stores the in-contour luminance of the object in the luminance information file 106c.
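For illustration, the center position and the in-contour luminance recorded here could be computed along the following lines. The vertex centroid and the even-odd (ray casting) point-in-polygon test are standard techniques assumed for this sketch, not necessarily what the apparatus uses.

    def centroid(vertices):
        # Center position of the contour as the mean of its vertices.
        n = len(vertices)
        return (sum(x for x, _ in vertices) / n, sum(y for _, y in vertices) / n)

    def inside(pt, vertices):
        # Even-odd rule: count edge crossings of a ray cast from the point.
        x, y = pt
        hit = False
        for i in range(len(vertices)):
            x1, y1 = vertices[i - 1]
            x2, y2 = vertices[i]
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
        return hit

    def mean_in_contour_luminance(image, vertices):
        # Mean luminance of all pixels whose centers fall inside the contour.
        values = [image[y][x] for y in range(len(image))
                  for x in range(len(image[0])) if inside((x, y), vertices)]
        return sum(values) / len(values) if values else 0.0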
 The cell division point setting unit 102k is cell division point setting means that, when the objects are cells and the time reverse order frame setting unit 102b controls frame k so that the contour extraction processing proceeds in reverse time order, sets a cell division point and stores it in the cell division point file 106d when the center positions of the contours of a plurality of objects come closer than a predetermined threshold. Here, the threshold is, as an example, a "same point determination distance" parameter that defines how close two contracted contours must approach before they are tracked as the same object.
 The above is an example of the configuration of the contour extraction apparatus 100. The contour extraction apparatus 100 may be communicably connected to a network 300 via a communication device such as a router and a wired or wireless communication line such as a dedicated line. That is, as shown in FIG. 2, this system may be configured by communicably connecting, via the network 300, the contour extraction apparatus 100 and an external system 200 that provides an external database storing setting parameters and the like relating to the dynamic contour modeling method, and external programs such as a contour extraction program.
 In this case, the communication control interface unit 104 of the contour extraction apparatus 100 is an interface connected to a communication device (not shown) such as a router connected to a communication line or the like, and performs communication control between the contour extraction apparatus 100 and the network 300 (or a communication device such as a router). That is, the communication control interface unit 104 has the function of communicating data with other terminals via communication lines.
 The network 300 has the function of interconnecting the contour extraction apparatus 100 and the external system 200 and is, for example, the Internet.
 The external system 200 is interconnected with the contour extraction apparatus 100 via the network 300 and has the function of providing users with an external database storing setting parameters and the like relating to the dynamic contour modeling method, and with external programs such as a contour extraction program. The external system 200 may be configured as a WEB server, an ASP server, or the like. The hardware configuration of the external system 200 may be composed of a commercially available information processing apparatus such as a workstation or a personal computer and its peripheral devices. Each function of the external system 200 is realized by the CPU, disk device, memory device, input device, output device, communication control device, and the like in the hardware configuration of the external system 200, and by the programs that control them.
[Processing of the Contour Extraction Apparatus 100]
 Next, an example of the processing of the contour extraction apparatus 100 of the present embodiment configured as described above will be described in detail below with reference to FIGS. 3 to 13.
[Contour Extraction Processing]
 First, the details of the contour extraction processing will be described with reference to FIGS. 3 to 5. FIG. 3 is a flowchart showing an example of the contour extraction processing of the contour extraction apparatus 100 in the present embodiment.
 As shown in FIG. 3, first, the initial contour setting unit 102d sets an initial contour, associated with an ID, for each object in the image data stored in the image data file 106a (step SA-1).
 Next, the contour convergence unit 102e converges the set initial contour by Snakes to generate a convergent contour (step SA-2). More specifically, the contour convergence unit 102e repeats the generation of the convergent contour and the calculation of the energy value based on the energy function E_snakes for the set number of convergence iterations, optimizing the convergent contour so that the energy value is minimized:

E_snakes = ∫{E_in(v(s)) + E_img(v(s)) + E_con(v(s))}ds

(Here, E_in represents the internal energy term of the contour line, E_img represents an energy term that takes a minimum at edge portions of the image, and E_con represents an energy term due to an external constraint force.)
 Then, the luminance variance calculation unit 102f acquires the luminance on the convergent contour generated by the contour convergence unit 102e (step SA-3). More specifically, the luminance variance calculation unit 102f acquires the luminance values of the image pixels corresponding to the convergent contour. Here, when the image data stored in the image data file 106a contains luminance information corresponding to a plurality of colors (for example, luminance values corresponding to each of the three RGB primary colors), the luminance variance calculation unit 102f may acquire, as the luminance, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information).
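As a minimal sketch of this sampling step, assuming the convergent contour is held as (x, y) vertex coordinates over an image indexed as image[y][x]:

    def contour_luminances(image, vertices):
        # Luminance values of the image pixels lying under the contour vertices.
        return [image[int(round(y))][int(round(x))] for x, y in vertices]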
 Then, the luminance variance calculation unit 102f calculates the variance of the acquired luminance (step SA-4).
 Then, the threshold determination unit 102g compares the variance of the luminance calculated by the luminance variance calculation unit 102f with a predetermined threshold (for example, a predetermined standard deviation value) and determines whether the variance is greater than the threshold or less than or equal to the threshold (step SA-5).
 When the threshold determination unit 102g determines that the variance is greater than the threshold (step SA-5, No), the contour expansion unit 102h judges that the convergent contour has not been correctly set on the true contour of the object (step SA-6), expands the convergent contour, sets the expanded convergent contour as a new initial contour, and returns the processing to step SA-2 (step SA-7).
 That is, when the luminance variance on the convergent contour is large, the convergent contour is considered to have settled inside the object. The convergent contour is therefore temporarily expanded outward from the cell interior region to re-place the initial contour, and the contour convergence processing described above is repeated until the variance falls to or below the threshold, so that the convergent contour is controlled to match the true contour of the object.
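 For illustration, the control flow of steps SA-2 to SA-7 can be sketched as follows. Here `converge_snake` and `expand_contour` stand in for the Snakes fitting and the contour expansion, and the `max_rounds` safeguard is an assumption; none of these names come from the patent.

```python
import numpy as np

def extract_contour(initial_contour, image, converge_snake, expand_contour,
                    variance_threshold, max_rounds=20):
    """Repeat converge -> variance check -> expand until the luminance
    variance on the converged contour falls to or below the threshold."""
    contour = initial_contour
    for _ in range(max_rounds):
        contour = converge_snake(contour, image)           # step SA-2
        xs = np.clip(contour[:, 0].astype(int), 0, image.shape[1] - 1)
        ys = np.clip(contour[:, 1].astype(int), 0, image.shape[0] - 1)
        luminance = image[ys, xs]                          # step SA-3
        if np.var(luminance) <= variance_threshold:        # steps SA-4/SA-5
            return contour                                 # step SA-8
        contour = expand_contour(contour)                  # step SA-7
    return contour  # safeguard after max_rounds (not part of the patent)
```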
 The transition of the contour through the above contour extraction processing will now be described with reference to FIG. 4. FIG. 4 is a conceptual diagram showing an example of the contour extraction processing according to the present embodiment. The shaded area in FIG. 4 represents the object (a cell), and the step numbers in FIG. 4 correspond to those in FIG. 3.
 First, the initial contour setting unit 102d sets an initial contour for the next frame k (<SA-1> in FIG. 4). In this example, the initial contour setting unit 102d sets, as the initial contour for the next frame, an expanded version of the contour extracted for the previous frame. The contour convergence unit 102e then performs contour fitting by Snakes, converging the initial contour to generate a convergent contour (<SA-2>, left side of FIG. 4). In this case, because the object has moved and deformed since the previous frame and has traveled beyond the expansion range of the Snakes contour, the initial contour intersects the true contour of the object; energy minimization then occurs between the luminance gradient inside the object and the luminance gradient around it, and the Snakes contour cannot be set correctly on the true contour of the object. The luminance variance on the convergent contour calculated by the luminance variance calculation unit 102f is therefore judged by the threshold determination unit 102g to be greater than the threshold, so the contour expansion unit 102h expands the convergent contour and sets the expanded contour as a new initial contour (<SA-7> in FIG. 4). The contour convergence unit 102e then performs contour fitting by Snakes once more, converging the initial contour to generate a convergent contour (<SA-2>, right side of FIG. 4). FIG. 5 shows an example processed by the conventional method and an example processed by the contour extraction processing of the present embodiment. In FIG. 5, the left and right panels show the same image of two cells captured as objects, with the convergent contours of contour numbers 0 to 3 drawn as polygons.
 As shown in the left panel of FIG. 5 (conventional example), with the conventional Snakes-based algorithm, when a cell moves beyond the expansion range of the contour (ROI) and the initial contour prior to the re-contraction calculation intersects the cell region, the contour energy is minimized on a false contour produced by luminance variations inside the cell, and the ROI cannot be matched to the true contour of the cell (see contour number 2 in the left panel). In the present embodiment, by contrast, as shown in the right panel of FIG. 5, the contraction and expansion of the ROI are repeated during the contour extraction processing with the luminance variance as the criterion; even when the ROI comes to rest in the cell interior region, it can be expanded outward from that region and matched to the true contour of the object (see contour number 2 in the right panel).
 Subsequently, when the threshold determination unit 102g determines that the variance is equal to or less than the threshold (step SA-5, Yes), the contour extraction unit 102i judges that the convergent contour has been set correctly on the true contour of the object and extracts the convergent contour as the contour of the object (step SA-8).
 Then, the contour acquisition unit 102j stores the center position (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SA-9).
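 As one illustration of this step, the center and in-contour luminance of a polygonal contour on a single-channel image can be computed as follows. The vertex centroid and the point-in-polygon test are assumptions, since the patent does not specify how the center position is computed.

```python
import numpy as np
from matplotlib.path import Path

def contour_center_and_mean_luminance(contour, image):
    """Center (X, Y) of an extracted contour and the mean luminance inside it.

    contour: (n, 2) array of (x, y) vertices of a closed polygon.
    image: 2-D single-channel luminance image.
    Returns ((cx, cy), mean_luminance). A simple vertex centroid is used
    here; the patent does not fix the definition of the center position.
    """
    cx, cy = contour.mean(axis=0)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = Path(contour).contains_points(
        np.column_stack([xx.ravel(), yy.ravel()])).reshape(h, w)
    return (float(cx), float(cy)), float(image[inside].mean())
```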
 This concludes the description of an example of the contour extraction processing.
[Time-Forward Analysis Processing]
 Next, the time-forward analysis processing, in which the contour extraction processing is applied in forward order along the time axis to a plurality of frames 1 to n captured at a plurality of times by time-lapse photography or the like, is described in detail below with reference to FIGS. 6 and 7. FIG. 6 is a flowchart showing an example of the time-forward analysis processing.
 As shown in FIG. 6, the time-forward frame setting unit 102c first sets frame 1, the first frame on the time axis of the image data stored in the image data file 106a, as the start frame, and the initial contour setting unit 102d sets an initial contour for each object (step SB-1).
 For the start frame, the contour expansion processing (step SB-2) is not performed; the contour convergence unit 102e then converges, by Snakes, the initial contour set by the initial contour setting unit 102d to generate a convergent contour (step SB-3). The subsequent processing of steps SB-3 to SB-8 is performed in the same manner as steps SA-2 to SA-7 described above, and is repeated until the luminance variance on the convergent contour becomes equal to or less than the threshold.
 When the threshold determination unit 102g determines that the variance is equal to or less than the threshold (step SB-6, Yes), the contour extraction unit 102i judges that the convergent contour has been set correctly on the true contour of the object (step SB-9), extracts the convergent contour as the contour of the object, and completes the setting of the convergent contour (step SB-10).
 Then, the contour acquisition unit 102j stores the center position (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SB-11).
 The time-forward frame setting unit 102c then determines whether the analysis frame k that has undergone the contour extraction processing has reached frame n, the end frame (step SB-12); that is, it determines whether k = n.
 When the time-forward frame setting unit 102c determines that the analysis frame k has not reached frame n (step SB-12, No), it increments k (k → k+1) to advance the analysis frame by one, and returns the processing to step SB-2 (step SB-13).
 Then, the contour expansion unit 102h expands the contour extracted by the contour extraction unit 102i in the previous frame k-1 and sets the expanded contour as the initial contour of the analysis frame k (step SB-2). FIG. 7 schematically shows how the contour extracted in the previous frame is expanded, the initial contour is set in the next frame, and a convergent contour is generated. In FIG. 7, the left panel shows the object and the extracted contour in the previous frame, the center panel shows the object and the initial contour in the next frame, and the right panel shows the object and the convergent contour in the next frame.
 As shown in FIG. 7, when the analysis moves to the next frame k, the contour expansion unit 102h first expands the convergent contour extracted by the contour extraction unit 102i in the previous frame and sets the expanded contour as the initial contour in the next frame. Even when the object has moved or deformed, the initial contour is thereby set so as to surround the object, and a convergent contour matching the true contour of the object can be set. The processing of the contour expansion unit 102h is the same in the time-reverse analysis processing described below.
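 One simple way to realize such an expansion is to scale each vertex away from the contour centroid; the patent does not fix the expansion method, so both this choice and the factor 1.2 are assumptions.

```python
import numpy as np

def expand_contour(contour, factor=1.2):
    """Expand a closed contour outward by scaling each vertex away from
    the centroid. factor > 1 enlarges the contour; 1.2 is illustrative."""
    centroid = contour.mean(axis=0)
    return centroid + factor * (contour - centroid)
```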
 Here, after the signal has disappeared, the initial contour setting unit 102d may perform feature extraction by the Hough transform and reset the initial contour at the extracted features near the contour. That is, when the signal (luminance) had disappeared in the previous frame k-1, then in step SB-2 for the next frame k, the initial contour setting unit 102d may extract features by the Hough transform and set the initial contour at feature points near the ROI of the frame in which the signal was present. Because the Hough transform resets the initial contour in the vicinity of the point where the signal disappeared, the same object can be tracked continuously before and after the signal loss. This initial contour resetting processing by the Hough transform is described in detail with reference to FIGS. 9 and 10 in the time-reverse analysis processing below.
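 A hedged sketch of such a re-initialization follows, using OpenCV's circular Hough transform as one plausible feature extractor for roughly circular cells. The patent says only "Hough transform" without fixing the variant, and every parameter value below is an assumption.

```python
import numpy as np
import cv2

def reinit_contour_by_hough(image_u8, last_center, search_radius=50,
                            n_points=32):
    """Re-seed an initial contour near the position where the signal was
    lost. image_u8 must be an 8-bit single-channel image; last_center is
    the (x, y) center of the ROI in the frame where the signal was present."""
    circles = cv2.HoughCircles(image_u8, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=20, param1=100, param2=30,
                               minRadius=5, maxRadius=60)
    if circles is None:
        return None
    cx0, cy0 = last_center
    # Pick the detected circle closest to the last known ROI center.
    cx, cy, r = min(circles[0],
                    key=lambda c: (c[0] - cx0) ** 2 + (c[1] - cy0) ** 2)
    if (cx - cx0) ** 2 + (cy - cy0) ** 2 > search_radius ** 2:
        return None  # nothing near the vanishing point
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.column_stack([cx + r * np.cos(theta), cy + r * np.sin(theta)])
```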
 The time-forward frame setting unit 102c controls the processing of steps SB-2 to SB-13 above so that it is executed in forward order along the time axis for all frames (k = 1 to n), and when it determines that frame n, the final frame, has been reached (step SB-12, Yes), the processing ends. This concludes the description of an example of the time-forward analysis processing.
[Time-Reverse Analysis Processing]
 Next, the time-reverse analysis processing, in which the contour extraction processing is applied in reverse order along the time axis to a plurality of frames 1 to n captured at a plurality of times by time-lapse photography or the like, is described in detail below with reference to FIGS. 8 to 13. FIG. 8 is a flowchart showing an example of the time-reverse analysis processing.
 As shown in FIG. 8, the time-reverse frame setting unit 102b first sets frame n, the last frame on the time axis of the image data stored in the image data file 106a, as the start frame (step SC-1).
 Then, the initial contour setting unit 102d sets an initial contour for each object in the start frame (step SC-2).
 For the start frame, the contour expansion processing (step SC-3) is not performed; the contour convergence unit 102e then converges, by Snakes, the initial contour set by the initial contour setting unit 102d to generate a convergent contour (step SC-4). The subsequent processing of steps SC-4 to SC-9 is performed in the same manner as steps SA-2 to SA-7 described above, and is repeated until the luminance variance on the convergent contour becomes equal to or less than the threshold.
 When the threshold determination unit 102g determines that the variance is equal to or less than the threshold (step SC-7, Yes), the contour extraction unit 102i judges that the convergent contour has been set correctly on the true contour of the object (step SC-10), extracts the convergent contour as the contour of the object, and completes the setting of the convergent contour (step SC-11).
 Then, the contour acquisition unit 102j stores the center coordinates (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SC-12).
 Then, the cell division point setting unit 102k determines, on the basis of a predetermined threshold (that is, a same-point determination distance), whether the distance between the center coordinates (X, Y) of the contours of any two objects is sufficiently small (step SC-13).
 When the cell division point setting unit 102k determines that the distance between the center coordinates is smaller than the predetermined threshold (step SC-13, Yes), it stores the time (or simply the frame number of the analysis frame) and the contour numbers in the cell division point file 106d (step SC-14).
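 For illustration, the same-point test of steps SC-13 and SC-14 can be sketched as follows; the data layout and function name are assumptions, not the patent's interface.

```python
import numpy as np

def find_merge_events(centers_by_frame, same_point_threshold):
    """Record (frame, id_a, id_b) whenever two contour centers come closer
    than the threshold. In the time-reversed analysis such a merge
    corresponds to a cell division when viewed forward in time.

    centers_by_frame: {frame_number: {contour_id: (x, y)}}
    """
    events = []
    for frame, centers in sorted(centers_by_frame.items()):
        ids = sorted(centers)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                d = np.hypot(centers[a][0] - centers[b][0],
                             centers[a][1] - centers[b][1])
                if d < same_point_threshold:
                    events.append((frame, a, b))
    return events
```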
 Then, the time-reverse frame setting unit 102b determines whether the analysis frame k that has undergone the contour extraction processing has reached frame 1, the end frame (step SC-15); that is, it determines whether k = 1.
 When the time-reverse frame setting unit 102b determines that the analysis frame k has not reached frame 1 (step SC-15, No), it decrements k (k → k-1) to move the analysis frame back by one, and returns the processing to step SC-3 (step SC-16).
 Then, as described with reference to FIG. 7 and in the same manner as step SB-2 above, the contour expansion unit 102h expands the contour extracted by the contour extraction unit 102i in the previous frame k+1 and sets the expanded contour as the initial contour of the analysis frame k (step SC-3). Here, when the signal (luminance) had disappeared in the previous frame k+1 and reappears in the next frame k, the initial contour setting unit 102d may extract features by the Hough transform and set the initial contour at feature points near the ROI of the frame in which the signal was present. FIG. 9 schematically shows an example of the processing of resetting the initial contour by the Hough transform.
 As shown in FIG. 9, when the signal disappears within the initial contour set by the contour expansion unit 102h in the previous frame k+1 (that is, when the luminance falls to or below a threshold), the initial contour setting unit 102d may, in the next frame k, perform feature extraction by the Hough transform and reset a new initial contour at the extracted features near the contour. FIG. 10 shows an example of contour extraction results according to the present embodiment. In the graph on the left of FIG. 10, the horizontal axis represents the frame number and the vertical axis represents the luminance; the right panel of FIG. 10 shows the analysis results as a lineage tree.
 As shown in FIG. 10, when, for example, cells that fluoresce according to their expression level are the analysis targets, the luminance may change partway through the frames and the signal may disappear, as shown in the graph. According to the present embodiment, the Hough transform is used to reset the initial contour near the point where the signal disappeared, so the same object can be identified before and after the signal loss, as shown in the right panel, and the same object can continue to be tracked even after its signal has once disappeared.
 In this way, the time-reverse frame setting unit 102b controls the processing of steps SC-3 to SC-16 above so that it is executed in reverse order along the time axis for all frames (k = n to 1).
 When the time-reverse frame setting unit 102b determines that frame 1, the final frame, has been reached (step SC-15, Yes), the cell division point setting unit 102k reads out, from the cell division point file 106d, the times and contour numbers at which center coordinates came close to each other, and determines the cell division points (step SC-17). FIG. 11 illustrates the principle of determining a cell division point. In FIG. 11, the left, center, and right panels correspond, in order, to three frames arranged in forward order along the time axis; one object is shown schematically in the left panel and two objects in the center and right panels.
 As shown in FIG. 11, when the contour extraction processing of the analysis frame k proceeds backward along the time axis, the moment the distance between the centers of two contours falls below the threshold (that is, the moment a merge, or cell fusion, can be judged to have occurred) corresponds, viewed in the forward direction of the time axis, to a division (cell division or nuclear division); the cell division point setting unit 102k therefore sets this point in time (the time of the center panel) as the cell division point. Note that even when the distance between the centers of two contours falls below the threshold in a frame k, the cell division point setting unit 102k need not set a cell division point if the two contours do not overlap in the next frame k-1; conversely, even when the distance between the centers is at or above the threshold in a frame k, it may set a cell division point if the two contours overlap in the next frame k-1. FIG. 12 shows an example of the results of contour extraction processing performed in reverse order along the time axis on temporally consecutive frames. In FIG. 12, the numbers "282" to "285" are frame numbers, and "0" and "1" are the contour numbers (IDs) of the respective contours. The images of the frames are part (frames 282 to 285) of image data acquired by long-duration time-lapse fluorescence imaging, 299 frames taken at 7-minute intervals with an Olympus (company name) LCV100 system (product name), of HeLa cells expressing a nuclear labeling probe in which GFP is fused to Histone H2B, a nucleus-associated protein responsible for stabilizing DNA structure.
 As the analysis results in FIG. 12 show, two cells are present as objects in frame 285, so the contours of ID 0 and ID 1 exist independently of each other. In frame 284, division of the cell nucleus is observed, and fusion of the contours occurs in this frame. In frame 282, the contours of ID 0 and ID 1 overlap completely, because they target the same cell before cell division. Thus, in the time-reverse analysis processing, the cell division point setting unit 102k sets as the cell division point, for example, the frame number at which the distance between the centers of the contours fell below the threshold (here, "285") or the frame number at which the two contours came to overlap (here, "284").
 Here, the cell division point setting unit 102k may create a lineage tree of cell divisions based on the times (or frame numbers) and contour numbers stored in the cell division point file 106d. FIG. 13 shows the contour extraction result in a frame k (left panel) and the analysis result of cell division as a lineage tree (right panel).
 As shown in the left panel of FIG. 13, when a plurality of contours coincide on the same object, the cell division point setting unit 102k sets a cell division point and registers the contour IDs and frame number of that division point. As shown in the right panel, this makes it possible to recognize divisions of the analysis target and create a lineage diagram. This concludes the description of an example of the time-reverse analysis processing.
 The above is the detail of the processing of the contour extraction device 100 according to the present embodiment.
[Example 1]
 Example 1 of the present embodiment is described below with reference to FIG. 14. First, the principle of Example 1 is explained in comparison with the prior art.
 Recent advances in microscope recording technology and in techniques for visualizing intracellular phenomena with fluorescent proteins and the like have made it possible to photograph cells at fixed intervals (time-lapse photography) and to obtain spatiotemporal information on intracellular signals by analyzing luminance information.
 There is a demand to acquire information efficiently from such time-lapse moving images of cells and to extract scientifically meaningful information.
 With conventional cell analysis software, however, it is difficult to efficiently track cells that move over time while changing shape and luminance, and such software cannot cope with cell division.
 For example, conventional cell analysis software sets a certain luminance value as a threshold for each frame and extracts luminance clusters as cell regions by luminance binarization. Statistical processing then identifies the luminance clusters whose relative displacement between frames is smallest as corresponding to the same cell, and the cells are tracked on that basis. Because this conventional method relies on luminance binarization, the analysis targets are limited to time-lapse images of cells that carry luminance (that is, fluorescence-microscope time-lapse images), and binarization is feasible only for images in which the background and cell luminance have no unevenness or gradient and the imaged cell population shows no large luminance changes.
 Moreover, because the cells are tracked by statistical processing after the luminance information of each frame has been acquired, situations arise in which the identity of a cell is not guaranteed, so that cells cannot be tracked accurately, and the method cannot follow cell division, a phenomenon peculiar to cells.
 Example 1 therefore improves cell tracking by providing a more efficient cell extraction method, and solves the above problems with a cell division identification algorithm. That is, in Example 1, cell regions are not extracted by the luminance binarization used in the conventional method; instead, the dynamic contour modeling method (Snakes), which extracts a region by focusing on the cell's contour, is used, and in each subsequent frame the cell is tracked by first expanding the range of the contour model and then re-contracting it.
 This improves the ability to track cells that move while changing shape over time and that undergo luminance changes, and further makes it possible to analyze cells in transmitted-light images, which were previously very difficult to binarize.
 In addition, by analyzing the time-series information in the reverse direction, an algorithm was developed that judges that a cell division occurred at the moment two sets of contour information coincide and the cells fuse, making cell division identifiable. FIG. 14 is a flowchart showing the processing of Example 1.
 That is, as shown in FIG. 14, the contour extraction device 100 of Example 1 opens the image data file 106a through the processing of the time-reverse frame setting unit 102b and reads microscope time-lapse image data of cells (step SD-1).
 After acquiring the necessary information from the image data file 106a through the processing of the time-reverse frame setting unit 102b, the contour extraction device 100 moves the read control k to the last recorded frame n (step SD-2). In Example 1, the analysis proceeds sequentially in the reverse direction along the time axis.
 Then, through the processing of the initial contour setting unit 102d, the contour extraction device 100 sets, on the final frame n, the initial parameters given to the dynamic contour model (Snakes), such as the initial contour position, initial contour radius, contraction strength, number of contraction calculations, expansion range, and number of control points on the contour (step SD-3). These initial contours are set individually for the plurality of cells to be analyzed; that is, for m cells, initial contours assigned IDs 1 to m are set respectively.
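 For illustration only, these initial parameters might be grouped into a small configuration object; the field names and types below are assumptions, not the patent's interface.

```python
from dataclasses import dataclass

@dataclass
class SnakeParams:
    """Initial parameters given to the dynamic contour model in step SD-3.
    Field names are illustrative, not taken from the patent."""
    center: tuple           # initial contour position (x, y)
    radius: float           # initial contour radius
    shrink_strength: float  # contraction strength
    n_iterations: int       # number of contraction calculations
    expand_range: float     # expansion range used when the ROI is re-seeded
    n_control_points: int   # number of control points on the contour
```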
 Then, through the processing of the contour convergence unit 102e, the contour extraction device 100 performs the energy model calculation so that the contour contracts from its given initial radius to the position that best fits the cell's contour, generating a convergent contour (step SD-4).
 Then, through the processing of the luminance variance calculation unit 102f, the contour extraction device 100 acquires the luminance values of the image pixels on the convergent contour (step SD-5) and calculates the variance of the acquired luminance (step SD-6).
 When the contour extraction device 100 determines, through the processing of the threshold determination unit 102g, that the luminance variance is larger than the predetermined standard deviation (step SD-7, No), it expands the convergent contour, sets the expanded contour as a new initial contour, and returns the processing to step SD-4.
 The contour extraction device 100 repeats the contour convergence processing until the variance becomes equal to or less than the predetermined standard deviation (steps SD-4 to SD-7); when it does (step SD-7, Yes), the contour of the cell is extracted through the processing of the contour extraction unit 102i and a region of interest (ROI) is set (fitted by the dynamic contour model).
 Then, through the processing of the contour acquisition unit 102j, the contour extraction device 100 acquires luminance information from the obtained ROI (step SD-8) and writes the luminance information within the ROIs of IDs 1 to m to the luminance information file 106c (step SD-9).
 When the contour extraction device 100 determines, through the processing of the cell division point setting unit 102k, that two or more ROIs indicate the same range and have fused (that is, that the cells have fused) (step SD-10, Yes), it judges that, viewed in the forward direction of the time axis, a cell division has occurred, and records the cell division point in the cell division point file 106d (step SD-11).
 Then, through the processing of the time-reverse frame setting unit 102b, the contour extraction device 100 determines whether the read control k has reached the first frame 1 (step SD-12); if it has not (step SD-12, No), the time-reverse frame setting unit 102b moves control to the frame k-1 immediately preceding on the time axis, after which the contour expansion unit 102h expands the ROI range (step SD-13).
 Then, for the next frame, the contour extraction device 100 contracts the contour again within the expanded ROI range and extracts the cell region (steps SD-4 to SD-13).
 The contour extraction device 100 performs the above processing on all frames n to 1, in order, backward along the time axis.
 After all frames have been analyzed, that is, when the first frame 1 has been reached (step SD-12, Yes), the contour extraction device 100 outputs, to the output unit 114, information such as luminance change information and a transition diagram (lineage tree) of cell divisions, based on the information recorded in the luminance information file 106c and the cell division point file 106d.
 This concludes the processing of Example 1.
[Example 2]
 Example 2 of the present embodiment is described below with reference to FIGS. 15 and 16.
 In Example 2, so that contours can be extracted accurately even when an image is expressed in multiple colors rather than a single color, cell contour extraction and cell tracking were performed after appropriate processing of the luminance used for the contour calculation by the dynamic contour modeling method (Snakes). Apart from the calculation of this contour-calculation luminance, the flow of processing is basically the same as in Example 1 described above.
 That is, in Example 2, to handle the case in which the image data includes luminance information corresponding to a plurality of colors, the p-th root of the sum of the p-th powers of the luminance information weighted per color was set as the contour-calculation luminance Intensity. More specifically, with the fluorescence luminances of the channels obtained by the fluorescence microscope denoted Ch1, Ch2, and Ch3, the contour-calculation luminance Intensity was obtained by the following expression.

Intensity = (a*Ch1^p + b*Ch2^p + c*Ch3^p)^(1/p)

(Here, a, b, and c are arbitrary weighting parameters for the fluorescence luminances Ch1, Ch2, and Ch3, respectively, and p is a parameter applied to the overall luminance.)
 Each parameter in the above expression (the weighting parameters a, b, and c and the overall luminance parameter p) is a real number specified, for example, as a double-precision floating-point value, and is more preferably a real number of 0 or more.
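 A direct transcription of this expression follows; the function name and array handling are assumptions, and, per the text, a, b, c, and p should be non-negative reals.

```python
import numpy as np

def contour_intensity(ch1, ch2, ch3, a=1.0, b=1.0, c=1.0, p=2.0):
    """Weighted p-norm style combination of per-channel luminances:
    Intensity = (a*Ch1**p + b*Ch2**p + c*Ch3**p) ** (1/p)."""
    ch1, ch2, ch3 = (np.asarray(x, dtype=float) for x in (ch1, ch2, ch3))
    return (a * ch1 ** p + b * ch2 ** p + c * ch3 ** p) ** (1.0 / p)
```

Note that with a = b = c = p = 1, the setting used in the analysis below, Intensity reduces to the plain channel sum Ch1 + Ch2 + Ch3.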
 This expression yields a luminance Intensity for the Snakes contour calculation that reflects the luminances of a plurality of fluorescence images. Based on the calculated Intensity values, the coordinates on which the Snakes contour lies are computed by the same method as in Example 1 (the time-reverse analysis processing), and the luminance information (Ch1, Ch2, Ch3) of each of the original fluorescence images within the obtained Snakes contour is acquired, enabling analysis of imaging data composed of a plurality of fluorescence image groups.
 FIG. 15 shows the results of analyzing, according to Example 2, fluorescence imaging images of primary cultured zebrafish cells into which Fucci, a fluorescent-protein cell cycle indicator that fluoresces at specific phases of the cell cycle, had been introduced. The indicator Fucci (Fluorescent Ubiquitination-based Cell Cycle Indicator) emits green fluorescence in the S/G2/M phases of the cell cycle and red fluorescence in the G1 phase (owing to the nature of Fucci, a certain amount of time is required both for the green fluorescent protein (mAG) to be completely degraded and for the red fluorescent protein (mKO2) to be synthesized, accumulate, and emit sufficient fluorescence, so the fluorescence does not, strictly speaking, switch from green to red immediately after cell division). The fluorescence imaging images used for the analysis are a set of about 1000 frames obtained by time-lapse photography, over 48 hours, of the Fucci-expressing primary cultured zebrafish cells; of these, FIG. 15 shows representative images at 3-hour intervals together with the extracted contours (contour numbers 2, 3, and 5). In this analysis, because the luminance of each channel (Ch1, Ch2, Ch3) of the multicolor fluorescence imaging images had been adjusted in the preprocessing stage, each parameter in the above expression of the analysis algorithm was set to 1.
 As shown in FIG. 15, the cells exhibit S/G2/M-phase green fluorescence from around 0 h to 12 h, G1-phase red fluorescence from around 15 h to 27 h, S/G2/M-phase green fluorescence again from around 30 h to 45 h, and then G1-phase red fluorescence once more around 48 h; despite the difference between the two fluorescence colors, the cell contours were confirmed to be extracted accurately in every image. Also as shown in FIG. 15, the three contours (contour numbers 2, 3, and 5) that coincided at 0 h separated, in a frame around 12 h (*1), into the contour of number 2 and the contours of numbers 3 and 5; the two contours that had remained coincident (numbers 3 and 5) then separated, in a frame around 45 h (*2), into the contour of number 3 and the contour of number 5. By Example 2, cell division points were set near these two time points (*1, *2). As FIG. 15 shows, both cell division points fall at moments when the green fluorescence weakens before the red fluorescence appears, closely matching the M phase indicated by Fucci, confirming that Example 2 analyzes cell division points accurately.
 FIG. 16 shows a graph of the change in the intensity of each fluorescence within the contours extracted by Example 2, and a lineage tree of cell divisions created from the cell tracking results of Example 2 and the luminance information within the contours.
 As the graph in FIG. 16 (upper panel) shows, within the contours extracted by Example 2, the intensities of the Fucci S/G2/M-phase green fluorescence (mAG) and the G1-phase red fluorescence (mKO2) varied in a cell cycle-dependent manner. Furthermore, as the lineage tree in FIG. 16 (lower panel) shows, by coloring the cell division lineage tree to reflect the luminance information of each fluorescence color, the length of each cell cycle phase accompanying cell division was successfully visualized.
 This concludes the description of Example 2.
[Other Embodiments]
 Although embodiments of the present invention have been described above, the present invention may be carried out in various different embodiments other than those described, within the scope of the technical idea set forth in the claims.
 For example, the above embodiment described an example in which a single cell in a fluorescence microscope image was analyzed as the object, but the present invention is not limited to this. FIG. 17 shows an example in which an ROI was set on a colony of HeLa cells in a differential interference contrast image and the contour of the entire colony was extracted. Thus, the present invention can extract objects not only from fluorescence images, in which the cell region and the background region are expressed as a difference in luminance, but also from images such as differential interference contrast images, in which the object and the background are not expressed as a difference in luminance; and the object may be not only a single cell but also a cell group composed of a plurality of cells. That is, although in the above embodiment one initial contour was set for one cell in a fluorescence microscope image and its contour was extracted, objects may be extracted from images other than fluorescence microscope observation images, and one initial contour may be set for a group of cells or other objects to extract the contour of that group.
 Also, for example, although the case in which the contour extraction device 100 performs the processing in stand-alone form was described as an example, the device may be configured to perform processing in response to a request from a client terminal housed separately from the contour extraction device 100 and to return the processing results to that client terminal.
 Of the processing described in the embodiment, all or part of the processing described as being performed automatically may be performed manually, and all or part of the processing described as being performed manually may be performed automatically by known methods.
 In addition, the processing procedures, control procedures, specific names, information including registered data and parameters such as search conditions for each process, screen examples, and database configurations shown in the above description and drawings may be changed arbitrarily unless otherwise noted.
 Furthermore, with regard to the contour extraction device 100, the illustrated components are functional and conceptual, and need not be physically configured as illustrated.
 For example, all or any part of the processing functions of the contour extraction device 100, in particular those performed by the control unit 102, may be realized by a CPU (Central Processing Unit) and programs interpreted and executed by that CPU, or may be realized as hardware by wired logic. The programs are recorded on the recording medium described below and are mechanically read by the contour extraction device 100 as necessary. That is, the storage unit 106, such as ROM or HD, records computer programs that give instructions to the CPU in cooperation with the OS (Operating System) to perform the various kinds of processing. These computer programs are executed by being loaded into RAM and, together with the CPU, constitute the control unit.
 These computer programs may also be stored in an application program server connected to the contour extraction device 100 via an arbitrary network 300, and all or part of them may be downloaded as necessary.
 The programs according to the present invention may also be stored on a computer-readable recording medium. Here, "recording medium" includes any "portable physical medium" such as a flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, or Blu-ray Disc, as well as a "communication medium" that holds the program for a short time, such as a communication line or carrier wave when the program is transmitted via a network typified by a LAN, WAN, or the Internet.
 A "program" is a data processing method written in an arbitrary language or description method, regardless of format, such as source code or binary code. A "program" is not necessarily limited to a single entity; it includes programs distributed as multiple modules or libraries and programs that achieve their function in cooperation with separate programs, typified by the OS (Operating System). Well-known configurations and procedures can be used for the specific configuration for reading the recording medium in each device shown in the embodiment, for the reading procedure, and for the installation procedure after reading.
 The various databases and files stored in the storage unit 106 (the image data file 106a through the cell division point file 106d) are storage means such as memory devices like RAM and ROM, fixed disk devices such as hard disks, flexible disks, and optical disks, and store the various programs, tables, databases, web page files, and the like used for the various kinds of processing and for providing websites.
 The contour extraction device 100 may also be realized by connecting an information processing device such as a known personal computer or workstation and installing in that information processing device software (including programs, data, and the like) that implements the method of the present invention.
 Furthermore, the specific form of distribution and integration of the devices is not limited to that illustrated; all or part of them can be configured to be functionally or physically distributed or integrated in arbitrary units according to various additions or functional loads.
 As described above in detail, the contour extraction device, contour extraction method, program, and recording medium according to the present invention can accurately extract the true contour of an object, and are extremely useful not only in fields such as medicine, pharmaceuticals, drug discovery, biological research, and clinical testing, but also in various other fields such as security systems.
DESCRIPTION OF SYMBOLS
100 Contour extraction device
102 Control unit
102a Frame setting unit
102b Time-reverse frame setting unit
102c Time-forward frame setting unit
102d Initial contour setting unit
102e Contour convergence unit
102f Luminance variance calculation unit
102g Threshold determination unit
102h Contour expansion unit
102i Contour extraction unit
102j Contour acquisition unit
102k Cell division point setting unit
104 Communication control interface unit
106 Storage unit
106a Image data file
106b Position information file
106c Luminance information file
106d Cell division point file
108 Input/output control interface unit
112 Input unit
114 Output unit
200 External system
300 Network

Claims (8)

  1.  A contour extraction apparatus for extracting a contour of an object, comprising at least a storage unit and a control unit, wherein
     the storage unit comprises:
     image data storage means for storing image data obtained by imaging the object;
     and the control unit comprises:
     initial contour setting means for setting an initial contour of the object in the image data stored in the image data storage means;
     contour convergence means for converging the initial contour set by the initial contour setting means to generate a converged contour;
     luminance variance calculation means for acquiring luminance values on the converged contour generated by the contour convergence means and calculating the variance of the luminance;
     threshold determination means for comparing the variance of the luminance calculated by the luminance variance calculation means with a predetermined threshold and determining whether the variance is greater than the threshold or less than or equal to the threshold;
     contour expansion means for, when the threshold determination means determines that the variance is greater than the threshold, expanding the converged contour generated by the contour convergence means and setting the expanded contour as the initial contour; and
     contour extraction means for, when the threshold determination means determines that the variance is less than or equal to the threshold, extracting the converged contour generated by the contour convergence means as the contour of the object.
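The converge, measure, expand loop recited in claim 1 can be made concrete in a few lines of Python. The sketch below is a minimal illustration under stated assumptions, not the patented implementation: converge_contour and expand_contour are hypothetical, caller-supplied stand-ins for the contour convergence and expansion means, and the image is assumed to be a 2-D NumPy array of luminance values.

    import numpy as np

    def extract_contour(image, initial_contour, variance_threshold,
                        converge_contour, expand_contour, max_rounds=100):
        """Converge a contour, test the luminance variance along it, and
        expand and retry while the variance exceeds the threshold."""
        contour = initial_contour
        for _ in range(max_rounds):
            converged = converge_contour(image, contour)   # contour convergence means
            # Sample the luminance at the (row, col) points of the converged contour.
            rows = np.clip(np.round(converged[:, 0]).astype(int), 0, image.shape[0] - 1)
            cols = np.clip(np.round(converged[:, 1]).astype(int), 0, image.shape[1] - 1)
            luminance = image[rows, cols]
            if np.var(luminance) <= variance_threshold:    # threshold determination means
                return converged                           # contour extraction means
            contour = expand_contour(converged)            # contour expansion means
        return converged  # give up after max_rounds and keep the last result

The apparent rationale of the variance test is that a contour settled on the true object boundary sees nearly uniform luminance along its length, whereas a mis-converged contour that cuts across the object mixes object and background intensities and therefore shows a large variance.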
  2.  The contour extraction apparatus according to claim 1, wherein
     the contour convergence means and the contour expansion means generate and expand the converged contour by an active contour modeling method.
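Claim 2 specifies an active contour ("snake") model for both convergence and expansion. The following is a minimal sketch of one possible realization, using the off-the-shelf active_contour function from scikit-image rather than the apparatus's own implementation; the smoothing sigma, the snake parameters, and the radial expansion factor are illustrative assumptions.

    import numpy as np
    from skimage import filters
    from skimage.segmentation import active_contour

    def converge_contour(image, contour):
        """Converge an initial contour onto a bright object with an
        active contour model."""
        smoothed = filters.gaussian(image, sigma=2)
        # w_line > 0 attracts the snake to bright pixels, e.g. a
        # fluorescently labelled cell; alpha and beta penalize
        # stretching and bending of the contour, respectively.
        return active_contour(smoothed, contour, alpha=0.015, beta=10.0,
                              w_line=0.5, w_edge=1.0, gamma=0.001)

    def expand_contour(contour, factor=1.2):
        """Expand a closed contour radially about its centroid to obtain
        the next initial contour after a failed variance test."""
        center = contour.mean(axis=0)
        return center + factor * (contour - center)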
  3.  The contour extraction apparatus according to claim 1 or 2, wherein
     the control unit further comprises:
     contour acquisition means for acquiring the center position and/or the in-contour luminance of the contour of the object extracted by the contour extraction means.
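In the simplest reading, the contour acquisition means of claim 3 computes a centroid and a mean luminance over the pixels enclosed by the extracted contour. A hedged sketch follows; the rasterization via skimage.draw.polygon and the function name are our choices for illustration, not the patent's.

    import numpy as np
    from skimage.draw import polygon

    def acquire_contour_features(image, contour):
        """Return the center position and the mean in-contour luminance of a
        closed contour given as an (N, 2) array of (row, col) vertices."""
        center = contour.mean(axis=0)                  # center position of the contour
        rr, cc = polygon(contour[:, 0], contour[:, 1], shape=image.shape)
        inner_luminance = float(image[rr, cc].mean())  # mean luminance within the contour
        return center, inner_luminance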
  4.  The contour extraction apparatus according to any one of claims 1 to 3, wherein
     the image data is composed of a plurality of frames captured at a plurality of times, and
     the control unit performs control such that the plurality of frames are processed in forward or reverse chronological order.
  5.  The contour extraction apparatus according to claim 4, wherein
     the control unit further comprises:
     cell division point setting means for setting a cell division point when, with the plurality of frames controlled to be processed in reverse chronological order, the distance between the center positions of the contours of a plurality of the objects is smaller than a predetermined threshold.
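Claims 4 and 5 together describe processing the frame sequence in reverse chronological order and setting a cell division point when the contour centers of two objects come within a predetermined distance: viewed backwards in time, two daughter cells merge at the moment of division. A minimal sketch under those assumptions, where centers_in_frame is a hypothetical callable returning the contour centers extracted from one frame:

    import numpy as np
    from itertools import combinations

    def find_division_points(frames, centers_in_frame, distance_threshold):
        """Scan frames in reverse chronological order and record a candidate
        cell division point whenever two contour centers nearly coincide."""
        division_points = []
        for index in reversed(range(len(frames))):      # reverse-time processing
            centers = centers_in_frame(frames[index])
            for a, b in combinations(centers, 2):
                if np.linalg.norm(np.subtract(a, b)) < distance_threshold:
                    midpoint = tuple(np.add(a, b) / 2.0)
                    division_points.append((index, midpoint))
        return division_points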
  6.  The contour extraction apparatus according to any one of claims 1 to 5, wherein,
     when the image data includes luminance information corresponding to a plurality of colors, the luminance variance calculation means acquires, as the luminance, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information), and calculates the variance of the luminance.
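In conventional notation, the combined luminance of claim 6 is a color-weighted p-norm of the per-channel intensities. Writing I_c(x) for the luminance of color channel c at contour point x, w_c for the weight assigned to that channel, and p for the claim's parameter, a plausible reading is:

    I(x) = \Bigl( \sum_{c} w_c \, I_c(x)^{p} \Bigr)^{1/p}

with the variance then taken over the N sample points x_1, ..., x_N of the converged contour:

    \operatorname{Var}(I) = \frac{1}{N} \sum_{i=1}^{N} \bigl( I(x_i) - \bar{I} \bigr)^2,
    \qquad
    \bar{I} = \frac{1}{N} \sum_{i=1}^{N} I(x_i)

For p = 1 the combination reduces to an ordinary weighted sum of the channels; for p = 2 it behaves as a weighted quadratic mean, giving proportionally more influence to the brighter channels.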
  7.  A contour extraction method executed by a contour extraction apparatus for extracting a contour of an object, the apparatus comprising at least a storage unit and a control unit,
     the storage unit comprising:
     image data storage means for storing image data obtained by imaging the object,
     the method comprising the following steps executed by the control unit:
     an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means;
     a contour convergence step of converging the initial contour set in the initial contour setting step to generate a converged contour;
     a luminance variance calculation step of acquiring luminance values on the converged contour generated in the contour convergence step and calculating the variance of the luminance;
     a threshold determination step of comparing the variance of the luminance calculated in the luminance variance calculation step with a predetermined threshold and determining whether the variance is greater than the threshold or less than or equal to the threshold;
     a contour expansion step of, when it is determined in the threshold determination step that the variance is greater than the threshold, expanding the converged contour generated in the contour convergence step and setting the expanded contour as the initial contour; and
     a contour extraction step of, when it is determined in the threshold determination step that the variance is less than or equal to the threshold, extracting the converged contour generated in the contour convergence step as the contour of the object.
  8.  A program to be executed by a contour extraction apparatus comprising at least a storage unit and a control unit,
     the storage unit comprising:
     image data storage means for storing image data obtained by imaging an object,
     the program causing the control unit to execute:
     an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means;
     a contour convergence step of converging the initial contour set in the initial contour setting step to generate a converged contour;
     a luminance variance calculation step of acquiring luminance values on the converged contour generated in the contour convergence step and calculating the variance of the luminance;
     a threshold determination step of comparing the variance of the luminance calculated in the luminance variance calculation step with a predetermined threshold and determining whether the variance is greater than the threshold or less than or equal to the threshold;
     a contour expansion step of, when it is determined in the threshold determination step that the variance is greater than the threshold, expanding the converged contour generated in the contour convergence step and setting the expanded contour as the initial contour; and
     a contour extraction step of, when it is determined in the threshold determination step that the variance is less than or equal to the threshold, extracting the converged contour generated in the contour convergence step as the contour of the object.
PCT/JP2010/051999 2009-02-24 2010-02-10 Contour definition device and contour definition method, and program WO2010098211A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011501550A JPWO2010098211A1 (en) 2009-02-24 2010-02-10 Outline extraction apparatus, outline extraction method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009041171 2009-02-24
JP2009-041171 2009-02-24

Publications (1)

Publication Number Publication Date
WO2010098211A1 true

Family

ID=42665422

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/051999 WO2010098211A1 (en) 2009-02-24 2010-02-10 Contour definition device and contour definition method, and program

Country Status (2)

Country Link
JP (1) JPWO2010098211A1 (en)
WO (1) WO2010098211A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012163538A (en) * 2011-02-09 2012-08-30 Olympus Corp Cell image analysis system
JPWO2014196134A1 (en) * 2013-06-06 2017-02-23 日本電気株式会社 Analysis processing system
JP2017520354A (en) * 2014-05-14 2017-07-27 ウニベルシダ デ ロス アンデス Method for automatic segmentation and quantification of body tissue

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000331143A (en) * 1999-05-14 2000-11-30 Mitsubishi Electric Corp Image processing method
JP2004054347A (en) * 2002-07-16 2004-02-19 Fujitsu Ltd Image processing method, image processing program, and image processing apparatus
JP2007041664A (en) * 2005-08-01 2007-02-15 Olympus Corp Device and program for extracting region
JP2007222073A (en) * 2006-02-23 2007-09-06 Yamaguchi Univ Method for evaluating cell motility characteristic by image processing, image processor therefor and image processing program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NOBORU HIGASHI ET AL.: "Doteki Rinkaku Chushutsu Hoho ni Okeru Bocho Shushuku Model no Kaihatsu to Kosokuka Shuho eno Tekio", THE INSTITUTE OF ELECTRICAL ENGINEERS OF JAPAN SANGYO SYSTEM JOHOKA KENKYUKAI SHIRYO, vol. IIS-00, no. 13-21, 10 August 2000 (2000-08-10), pages 15 - 18 *

Also Published As

Publication number Publication date
JPWO2010098211A1 (en) 2012-08-30

Similar Documents

Publication Publication Date Title
Fisher et al. Dictionary of computer vision and image processing
JP7026826B2 (en) Image processing methods, electronic devices and storage media
CN110210276A (en) A kind of motion track acquisition methods and its equipment, storage medium, terminal
Ji et al. Tracking quasi‐stationary flow of weak fluorescent signals by adaptive multi‐frame correlation
JP2010262350A (en) Image processing apparatus, image processing method, and program
Poux et al. Unsupervised segmentation of indoor 3D point cloud: Application to object-based classification
Amat et al. Towards comprehensive cell lineage reconstructions in complex organisms using light‐sheet microscopy
JP6179224B2 (en) Image processing filter creation apparatus and method
Dorn et al. Computational processing and analysis of dynamic fluorescence image data
CN112419295A (en) Medical image processing method, apparatus, computer device and storage medium
Yang et al. Intelligent crack extraction based on terrestrial laser scanning measurement
Mahmoudabadi et al. Efficient terrestrial laser scan segmentation exploiting data structure
JP5310485B2 (en) Image processing method and apparatus, and program
JP2019029935A (en) Image processing system and control method thereof
Carrasco et al. Image-based automated width measurement of surface cracking
WO2010098211A1 (en) Contour definition device and contour definition method, and program
JP5965764B2 (en) Image area dividing apparatus and image area dividing program
CN116091524B (en) Detection and segmentation method for target in complex background
Adam et al. Objects can move: 3d change detection by geometric transformation consistency
Sáez et al. Neuromuscular disease classification system
Chen et al. Plane segmentation for a building roof combining deep learning and the RANSAC method from a 3D point cloud
Kumar et al. A motion correction framework for time series sequences in microscopy images
Rieger et al. Aggregating explanation methods for stable and robust explainability
Breier et al. Analysis of video feature learning in two-stream CNNs on the example of zebrafish swim bout classification
Japar et al. Coherent group detection in still image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10746092

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2011501550

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10746092

Country of ref document: EP

Kind code of ref document: A1