WO2010098211A1 - Contour extraction device, contour extraction method, and program - Google Patents

Contour extraction device, contour extraction method, and program

Info

Publication number
WO2010098211A1
WO2010098211A1 (PCT/JP2010/051999)
Authority
WO
WIPO (PCT)
Prior art keywords
contour
luminance
variance
threshold
initial
Prior art date
Application number
PCT/JP2010/051999
Other languages
English (en)
Japanese (ja)
Inventor
敦史 宮脇
裕 黒川
久順 野田
Original Assignee
RIKEN (独立行政法人理化学研究所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RIKEN (独立行政法人理化学研究所)
Priority to JP2011501550A (published as JPWO2010098211A1)
Publication of WO2010098211A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755 Deformable models or variational models, e.g. snakes or active contours
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light, optically excited
    • G01N21/64 Fluorescence; Phosphorescence
    • G01N21/645 Specially adapted constructive features of fluorimeters
    • G01N21/6456 Spatial resolved fluorescence measurements; Imaging
    • G01N21/6458 Fluorescence microscopy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro

Definitions

  • The present invention relates to a contour extraction device, a contour extraction method, and a program, and more particularly to a contour extraction device, a contour extraction method, and a program for extracting the contours of cells.
  • Patent Document 1 discloses that, when a contour model is contracted or enlarged using Snakes and the distance between two non-adjacent nodes becomes smaller than a threshold value, the contour model is split at those two nodes.
  • Patent Document 2 discloses that the contour model is contracted and deformed using Snakes, and that the contour model is split into a plurality of models when contact or intersection within the contour model is detected.
  • Non-Patent Document 1 discloses a technique that, unlike Snakes-based cell contour extraction, classifies cell regions and background in a phase contrast microscope image by creating a binary map based on a luminance histogram, and tracks cells so that an energy function related to cell dynamics is minimized in time order.
  • The cell extraction and cell tracking methods in conventional image analysis software for fluorescence microscope images set a threshold on a luminance value, extract cell regions from the analysis target image, and track cell movement from statistical analysis of position information and the like.
  • Kang Li and Takeo Kanade, "Cell Population Tracking and Lineage Construction Using Multiple-Model Dynamics Filters and Spatiotemporal Optimization", Proceedings of the 2nd International Workshop on Microscopic Image Analysis with Applications in Biology (MIAAB), September 2007
  • However, the conventional cell extraction method using threshold processing has the problem that good cell extraction may become difficult when the fluorescence luminance changes or the background has a luminance gradient.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a contour extraction device, a contour extraction method, and a program that can accurately extract the true contour of an object.
  • The contour extraction apparatus of the present invention includes at least a storage unit and a control unit, and extracts the contour of an object. The storage unit includes image data storage means for storing image data obtained by imaging the object. The control unit includes: initial contour setting means for setting an initial contour of the object in the image data stored in the image data storage means; contour convergence means for converging the initial contour set by the initial contour setting means to generate a convergent contour; luminance variance calculation means for acquiring the luminance on the convergent contour generated by the contour convergence means and calculating the variance of the luminance; threshold determination means for comparing the variance calculated by the luminance variance calculation means with a predetermined threshold value and determining whether the variance is greater than the threshold value or equal to or less than the threshold value; contour expansion means for expanding the convergent contour generated by the contour convergence means and setting the expanded contour as the initial contour when the threshold determination means determines that the variance is greater than the threshold value; and contour extraction means for extracting the convergent contour generated by the contour convergence means as the contour of the object when the threshold determination means determines that the variance is equal to or less than the threshold value.
  • The contour extraction apparatus of the present invention is characterized in that, in the contour extraction apparatus described above, the contour convergence means and the contour expansion means generate and expand the convergent contour by the dynamic contour model (Snakes) method.
  • The contour extraction apparatus of the present invention is the contour extraction apparatus described above, wherein the control unit further includes contour acquisition means for acquiring the center position and/or the in-contour luminance of the contour of the object extracted by the contour extraction means.
  • In the contour extraction apparatus of the present invention, the image data includes a plurality of frames captured at a plurality of times, and the control unit performs control so that the plurality of frames are processed in the forward or reverse order of the times.
  • The contour extraction apparatus of the present invention further includes cell division point setting means for setting a cell division point when, while the control unit controls the plurality of frames to be processed in the reverse order of the times, the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold value.
  • The contour extraction apparatus of the present invention is the contour extraction apparatus described above, wherein, when the image data includes luminance information corresponding to a plurality of colors, the luminance variance calculation means acquires as the luminance the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information), and calculates the variance of the luminance.
  • The present invention also relates to a contour extraction method. The contour extraction method of the present invention is executed in a contour extraction apparatus that includes at least a storage unit and a control unit and extracts the contour of an object, the storage unit including image data storage means for storing image data obtained by imaging the object. The method, executed in the control unit, includes: an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means; a contour convergence step of converging the set initial contour to generate a convergent contour; a luminance variance calculation step of acquiring the luminance on the generated convergent contour and calculating the variance of the luminance; a threshold determination step of comparing the variance calculated in the luminance variance calculation step with a predetermined threshold value and determining whether the variance is greater than the threshold value or equal to or less than the threshold value; a contour expansion step of expanding the convergent contour generated in the contour convergence step and setting the expanded contour as the initial contour when the variance is determined to be greater than the threshold value; and a contour extraction step of extracting the convergent contour generated in the contour convergence step as the contour of the object when the variance is determined to be equal to or less than the threshold value.
  • The present invention also relates to a program. The program of the present invention causes a contour extraction apparatus, which includes at least a storage unit and a control unit, the storage unit including image data storage means for storing image data obtained by imaging an object, to execute in the control unit: an initial contour setting step of setting an initial contour of the object in the image data stored in the image data storage means; a contour convergence step of converging the initial contour set in the initial contour setting step to generate a convergent contour; a luminance variance calculation step of acquiring the luminance on the convergent contour generated in the contour convergence step and calculating the variance of the luminance; a threshold determination step of comparing the variance calculated in the luminance variance calculation step with a predetermined threshold value and determining whether the variance is greater than the threshold value or equal to or less than the threshold value; a contour expansion step of expanding the convergent contour generated in the contour convergence step and setting the expanded contour as the initial contour when the variance is determined to be greater than the threshold value in the threshold determination step; and a contour extraction step of extracting the convergent contour as the contour of the object when the variance is determined to be equal to or less than the threshold value.
  • According to the present invention, (1) an initial contour of an object is set in image data obtained by imaging the object, (2) the set initial contour is converged to generate a convergent contour, (3) the luminance on the generated convergent contour is acquired and the variance of the luminance is calculated, (4) the calculated variance is compared with a predetermined threshold value to determine whether the variance is greater than the threshold value or equal to or less than the threshold value, and (5) when the variance is determined to be greater than the threshold value, the generated convergent contour is expanded, the expanded contour is set as the initial contour, and the processing from (2) is repeated, whereas when the variance is determined to be equal to or less than the threshold value, the generated convergent contour is extracted as the contour of the object. This has the effect that the true contour of the object can be extracted with high accuracy.
  • In addition, since the present invention generates and expands the convergent contour by the dynamic contour model method (Snakes), there is an effect that the contour can be accurately fitted to the contour of the object.
  • In addition, since the present invention acquires the center position and/or the in-contour luminance of the extracted contour of the object, there is an effect that the position and luminance of the object can be analyzed with high accuracy.
  • In addition, according to the present invention, the image data is composed of a plurality of frames captured at a plurality of times, and the plurality of frames are controlled so as to be processed in the forward or reverse order of the times; therefore, there is an effect that the object can be tracked with high accuracy even when it deforms or moves.
  • In addition, according to the present invention, when the plurality of frames are controlled to be processed in the reverse order of the times, a cell division point is set when the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold value; therefore, even when cell division occurs, the cell division point can be analyzed accurately without complicating the algorithm. That is, whereas the conventional method of analyzing frames in time order must determine cell division with a complicated contour model division determination algorithm, the present invention analyzes the frames in reverse time order and detects the point at which the contours of the daughter cells fuse; therefore, the algorithm can be simplified and an accurate cell division point can be obtained.
  • In addition, according to the present invention, when the image data includes luminance information corresponding to a plurality of colors, the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information) is acquired as the luminance and the variance of that luminance is calculated; therefore, even when an image is expressed in multiple colors rather than a single color, there is an effect that the luminances corresponding to the respective colors can be unified into a single luminance index for the contour extraction calculation.
  • FIG. 1 is a diagram schematically showing how the initial contour is converged by the dynamic contour model method and fitted to the contour of the object.
  • FIG. 2 is a block diagram showing an example of the configuration of the contour extraction apparatus 100 to which the present invention is applied.
  • FIG. 3 is a flowchart showing an example of the contour extraction process of the contour extraction apparatus 100 according to the present embodiment.
  • FIG. 4 is a conceptual diagram showing an example in which the contour extraction process according to the present embodiment is performed.
  • FIG. 5 is a diagram illustrating an example in which processing according to a conventional method is performed and an example in which contour extraction processing according to the present embodiment is performed.
  • FIG. 6 is a flowchart illustrating an example of the time order analysis process.
  • FIG. 7 is a diagram schematically showing a state in which the extracted contour extracted in the previous frame is expanded, the initial contour in the next frame is set, and a convergent contour is generated.
  • FIG. 8 is a flowchart illustrating an example of the reverse time order analysis process.
  • FIG. 9 is a diagram schematically illustrating an example of processing for resetting the initial contour by Hough transform.
  • FIG. 10 is a diagram illustrating an example of a contour extraction result according to the present embodiment.
  • FIG. 11 is a diagram for explaining the principle of determining the cell division point.
  • FIG. 12 is a diagram illustrating, as an example, the result of performing contour extraction processing in reverse order on the time axis in time-continuous frames.
  • FIG. 13 is a diagram showing a contour extraction result (left diagram) and a cell division analysis result (right diagram) in a certain frame k.
  • FIG. 14 is a flowchart illustrating the processing of the first embodiment.
  • FIG. 15 is a diagram showing the result of analyzing, according to Example 2, fluorescence imaging images of primary cultured ZebraFish cells into which the fluorescent protein-based cell cycle indicator Fucci, which emits fluorescence specifically to the cell cycle phase, has been introduced.
  • FIG. 16 is a diagram showing graphs of the change in the intensity of each fluorescence within the contours extracted in Example 2, and a phylogenetic tree of cell division created from the cell tracking results and the in-contour luminance information of Example 2.
  • FIG. 17 is a diagram illustrating an example in which an ROI is set for a colony of HeLa cells and the contour of the entire colony is extracted.
  • In the following, a cell is mainly described as the object; however, the present invention is not limited to this case and can be applied in the same manner to other technical fields such as security systems.
  • First, the contour extraction apparatus of the present invention includes a storage unit and a control unit, and stores image data obtained by imaging an object. The contour extraction apparatus then sets the initial contour of the object in the stored image data. Here, the initial contour is set manually, or automatically using a known technique, so as to surround the object to be analyzed.
  • FIG. 1 is a diagram schematically showing how the initial contour is converged by the dynamic contour model method and fitted to the contour of the object. In FIG. 1, the left diagram shows the object and the initial contour, and the right diagram shows the object and the convergent contour.
  • As shown in FIG. 1, the contour extraction apparatus of the present invention generates a convergent contour from, for example, a circular initial contour by the dynamic contour model method or the like.
  • Then, the contour extraction apparatus of the present invention acquires the luminance on the convergent contour and calculates the variance of the luminance. Here, the luminance variance is a value indicating the variation of the luminance values, and may be represented, for example, by the standard deviation, which is the square root of the variance.
  • The contour extraction apparatus of the present invention then compares the luminance variance with a predetermined threshold value and determines whether the variance is greater than the threshold value or equal to or less than it. When the variance is greater than the threshold value, the contour extraction apparatus expands the convergent contour, sets the expanded contour as the initial contour, and repeats the above processing. When the variance is equal to or less than the threshold value, the contour extraction apparatus extracts the generated convergent contour as the contour of the object.
  • The contour extraction apparatus of the present invention may then acquire the center position of the contour of the object and/or the luminance within the contour. The above is the outline of the contour extraction processing of the present invention for a single frame image.
  • When the image data is composed of a plurality of frames captured at a plurality of times, for example by photographing at regular intervals, the plurality of frames may be controlled so that the above contour extraction processing is performed on them in the forward or reverse order of the times. When the contour extraction processing is performed in reverse order, a cell division point may be set when the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold value. This concludes the outline of the present invention; a minimal code sketch of the single-frame processing is shown below.
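  • As a concrete illustration of the outline above, the following is a minimal sketch in Python (not part of the patent) of the converge / measure-variance / expand cycle for one frame; the converge callback, the expansion factor, and the standard deviation threshold are hypothetical stand-ins for the means described above.

```python
import numpy as np

def extract_contour(image, contour, converge, std_threshold=10.0,
                    expand_factor=1.2, max_rounds=50):
    """Converge a contour, test the luminance variance along it, and
    expand it for another attempt while the variance is still too high.
    `contour` is an (N, 2) array of (x, y) vertices; `converge` is a
    user-supplied convergence step (e.g. an active contour / Snakes pass).
    """
    for _ in range(max_rounds):
        contour = converge(image, contour)     # contour convergence step
        # luminance on the convergent contour: sample pixels at the vertices
        xs = np.clip(contour[:, 0].round().astype(int), 0, image.shape[1] - 1)
        ys = np.clip(contour[:, 1].round().astype(int), 0, image.shape[0] - 1)
        samples = image[ys, xs].astype(float)
        if samples.std() <= std_threshold:     # threshold determination step
            return contour                     # accepted as the object's contour
        center = contour.mean(axis=0)          # contour expansion step:
        contour = center + expand_factor * (contour - center)
    return contour
```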
  • FIG. 2 is a block diagram showing an example of the configuration of the contour extraction apparatus 100 to which the present invention is applied, and conceptually shows only the portion related to the present invention.
  • The contour extraction apparatus 100 is generally configured to include a control unit 102, a communication control interface unit 104, a storage unit 106, and an input/output control interface unit 108, and these units are communicably connected via arbitrary communication paths.
  • control unit 102 is a CPU or the like that comprehensively controls the entire contour extraction apparatus 100.
  • the input / output control interface unit 108 is an interface connected to the input unit 112 and the output unit 114.
  • the storage unit 106 is a device that stores various databases and tables.
  • The various databases and tables stored in the storage unit 106 (the image data file 106a to the cell division point file 106d) are implemented as storage means such as a fixed disk device.
  • the storage unit 106 stores various programs, tables, files, databases, and the like used for various processes.
  • the image data file 106a is an image data storage unit that stores image data obtained by capturing an image of an object.
  • the image data file 106a may store image data composed of a plurality of frames 1 to n imaged at a plurality of times by regular interval shooting (time-lapse shooting) or the like.
  • Here, frame 1 represents the first frame on the time axis, frame n represents the last frame, and frame k represents an arbitrary frame.
  • The image data file 106a may store image data including luminance information corresponding to a plurality of colors (for example, luminance values corresponding to the three primary colors of RGB), and may also store image data converted so that the luminance is the p-th root of the sum of the p-th powers of the luminance information of the original image weighted for each color (where p is a parameter for the luminance information).
  • the position information file 106b is position information storage means for storing the center position of the contour of the object.
  • Here, each contour is assigned a contour number (ID), and one object is associated with one initial contour and identified by its contour number. That is, the position information file 106b stores, for example, coordinates (x, y) in association with a contour number (ID) and a frame number.
  • the luminance information file 106c is luminance information storage means for storing the luminance within the contour of the target object.
  • the luminance information file 106c stores luminance values in association with contour numbers and frame numbers.
  • the cell division point file 106d is a cell division point storage means for storing cell division points.
  • the cell division point file 106d stores the contour numbers of the contours of two objects in association with the frame numbers.
  • the input / output control interface unit 108 controls the input unit 112 and the output unit 114.
  • the output unit 114 is a monitor (including a home television), a speaker, or the like (hereinafter, the output unit 114 may be described as a monitor).
  • As the input unit 112, a keyboard, a mouse, a microphone, and the like can be used in addition to an image input device such as a microscope imaging device for photographing at regular intervals.
  • The control unit 102 has an internal memory for storing a control program such as an OS (Operating System), programs defining various processing procedures, and necessary data, and performs information processing for executing various processes in accordance with these programs.
  • In terms of functional concept, the control unit 102 includes a frame setting unit 102a, an initial contour setting unit 102d, a contour convergence unit 102e, a luminance variance calculation unit 102f, a threshold determination unit 102g, a contour expansion unit 102h, a contour extraction unit 102i, a contour acquisition unit 102j, and a cell division point setting unit 102k.
  • the frame setting unit 102a controls the plurality of frames stored in the image data file 106a so as to perform contour extraction processing in the normal or reverse order of time.
  • the frame setting unit 102a includes a time reverse order frame setting unit 102b and a time normal order frame setting unit 102c.
  • The time reverse order frame setting unit 102b is time reverse order frame setting means that sets the last frame n as the start point (key frame) and sets the analysis frame k so that the contour extraction processing is performed in reverse time order until the first frame 1 is reached. The time normal order frame setting unit 102c is time normal order frame setting means that sets the first frame 1 as the start point (key frame) and sets the analysis frame k so that the contour extraction processing is performed in time order until the last frame n is reached.
  • the initial contour setting unit 102d is initial contour setting means for setting the initial contour of the object in the image data stored in the image data file 106a. That is, the initial contour is set so as to surround the object by the initial contour setting unit 102d, and is converged (and expanded if necessary) so as to match the true contour of the object in the subsequent processing.
  • Here, the initial contour and the convergent contour represent the ROI (Region of Interest) set for each object (in the following description they may be called "ROI" to distinguish them from the true contour of the object).
  • the initial contour setting unit 102d manages each ROI with a uniform contour number (ID) for the same object across frames.
  • The initial contour setting unit 102d sets, for example, the X coordinate and Y coordinate indicating the position of the initial contour, the initial division number indicating how many line segments represent the initial contour, and the initial radius indicating the size of the initial contour. Some or all of these setting parameters may be set automatically by known initial contour setting means, or may be set manually.
  • When a signal has disappeared, the initial contour setting unit 102d may perform feature extraction by the Hough transform and set an initial contour using feature points near the ROI of the frame in which the signal was present.
  • the contour convergence unit 102e is a contour convergence unit that converges the initial contour set by the initial contour setting unit 102d or the contour expansion unit 102h and generates a convergent contour.
  • Here, the contour convergence unit 102e may generate the convergent contour by the dynamic contour model method. More specifically, the contour convergence unit 102e generates the convergent contour by converging the contour (ROI) so that the value calculated by the energy function E_snakes below is minimized.
  • E_snakes = ∫ { E_in(v(s)) + E_img(v(s)) + E_con(v(s)) } ds
  • Here, E_in represents the internal energy term of the contour line and is a function whose energy decreases as the contour length of the ROI becomes shorter and the set of line segments becomes smoother. E_img represents an energy term that is minimal at the edge portions of the image, i.e., a function whose energy becomes smaller where the contour lies on an edge of the differentiated image. E_con represents an energy term due to external constraint forces.
  • As setting parameters for the dynamic contour model, a "contraction force" defining how strongly the ROI contracts or how smooth its curve should be, an "adsorption force" defining how strongly the ROI is retained in the vicinity of the object, a "minimum number of vertices" defining with how many line segments the ROI is represented, a "number of contractions" defining how many contraction calculations are performed per ROI calculation, and the like, can be set according to the type of object, the purpose of the analysis, and so on (a rough code sketch of the energy minimization follows below).
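  • As a rough illustration of the energy terms above, the following hypothetical Python sketch shows one greedy minimization pass over the snake's control points, approximating E_in by continuity and curvature terms and E_img by the negative image gradient magnitude; E_con is omitted for brevity, and the weights alpha, beta, gamma are illustrative only.

```python
import numpy as np

def greedy_snake_step(grad_mag, pts, alpha=1.0, beta=1.0, gamma=1.5, win=1):
    """One greedy pass: move each control point to the neighbouring position
    minimizing alpha*continuity + beta*curvature - gamma*edge_strength.
    grad_mag can be, e.g., np.hypot(*np.gradient(image.astype(float)))."""
    n = len(pts)
    mean_d = np.mean(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))
    new_pts = pts.copy()
    for i in range(n):
        prev_pt, next_pt = new_pts[i - 1], pts[(i + 1) % n]
        best, best_e = pts[i], np.inf
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                cand = pts[i] + np.array([dx, dy], dtype=float)
                x, y = int(cand[0]), int(cand[1])
                if not (0 <= y < grad_mag.shape[0] and 0 <= x < grad_mag.shape[1]):
                    continue
                e_cont = (np.linalg.norm(cand - prev_pt) - mean_d) ** 2     # E_in: spacing
                e_curv = np.linalg.norm(prev_pt - 2 * cand + next_pt) ** 2  # E_in: smoothness
                e_img = -grad_mag[y, x]                                     # E_img: edges
                e = alpha * e_cont + beta * e_curv + gamma * e_img
                if e < best_e:
                    best_e, best = e, cand
        new_pts[i] = best
    return new_pts
```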
  • The luminance variance calculation unit 102f is luminance variance calculation means that acquires the luminance on the convergent contour generated by the contour convergence unit 102e and calculates the variance of the luminance.
  • When the image data stored in the image data file 106a includes luminance information corresponding to a plurality of colors (for example, luminance values corresponding to the three primary colors of RGB), the luminance variance calculation unit 102f may acquire as the luminance the p-th root of the sum of the p-th powers of the luminance information weighted for each color (where p is a parameter for the luminance information), and calculate the variance of that luminance.
  • The threshold determination unit 102g is threshold determination means that compares the luminance variance calculated by the luminance variance calculation unit 102f with a predetermined threshold value and determines whether the variance is greater than the threshold value or equal to or less than the threshold value. For example, the threshold determination unit 102g determines whether the luminance variance on the convergent contour is larger than a certain standard deviation used as the threshold value.
  • The contour expansion unit 102h is contour expansion means that expands a contour. For example, when processing moves to a new frame k, the contour expansion unit 102h may expand the convergent contour of the previous frame k-1 and set the expanded contour as the initial contour. Also, when the threshold determination unit 102g determines that the variance is greater than the threshold value, the contour expansion unit 102h expands the convergent contour generated by the contour convergence unit 102e and sets the expanded contour as the initial contour. Here, the contour expansion unit 102h may expand the convergent contour by the dynamic contour model method.
  • The contour extraction unit 102i is contour extraction means that extracts the convergent contour generated by the contour convergence unit 102e as the contour of the object when the threshold determination unit 102g determines that the variance is equal to or less than the threshold value.
  • the contour acquisition unit 102j is a contour acquisition unit that acquires the center position and / or in-contour brightness of the contour of the object extracted by the contour extraction unit 102i.
  • the contour acquisition unit 102j stores the center position of the contour of the target object in the position information file 106b, and stores the in-contour brightness of the target object in the luminance information file 106c.
  • The cell division point setting unit 102k is cell division point setting means that, when the object is a cell and the time reverse order frame setting unit 102b controls the contour extraction processing of the frames k to proceed in reverse time order, sets a cell division point and stores it in the cell division point file 106d when the distance between the center positions of the contours of a plurality of objects becomes smaller than a predetermined threshold value. Here, the threshold value is the "same point determination distance" parameter, which defines how close two contracted contours must be in order to be tracked as the same object.
  • The contour extraction apparatus 100 may be communicably connected to the network 300 via a communication device such as a router and a wired or wireless communication line such as a dedicated line. That is, as shown in FIG. 2, this system may be configured so that the contour extraction apparatus 100 and an external system 200, which provides an external database storing setting parameters and the like for the dynamic contour model method and external programs such as a contour extraction program, are communicably connected via the network 300.
  • The communication control interface unit 104 of the contour extraction apparatus 100 is an interface connected to a communication device (not shown) such as a router connected to a communication line or the like, and controls communication between the contour extraction apparatus 100 and the network 300 (or a communication device such as a router). That is, the communication control interface unit 104 has a function of communicating data with other terminals via communication lines.
  • the network 300 has a function of interconnecting the contour extraction device 100 and the external system 200, such as the Internet.
  • The external system 200 is connected to the contour extraction apparatus 100 via the network 300 and has a function of providing users with an external database storing setting parameters and the like for the dynamic contour model method and with external programs such as a contour extraction program.
  • the external system 200 may be configured as a WEB server, an ASP server, or the like.
  • the hardware configuration of the external system 200 may be configured by an information processing apparatus such as a commercially available workstation or a personal computer and its attached devices.
  • Each function of the external system 200 is realized by a CPU, a disk device, a memory device, an input device, an output device, a communication control device, and the like in the hardware configuration of the external system 200 and a program for controlling them.
  • FIG. 3 is a flowchart showing an example of the contour extraction process of the contour extraction apparatus 100 according to the present embodiment.
  • the initial contour setting unit 102d sets an initial contour in association with an ID for each object in the image data stored in the image data file 106a (step SA-1).
  • Next, the contour convergence unit 102e converges the set initial contour with Snakes to generate a convergent contour (step SA-2). More specifically, the contour convergence unit 102e repeats the generation of the convergent contour and the calculation of the energy value based on the energy function E_snakes for the set number of convergences, generating the convergent contour so that the energy value is minimized.
  • E_snakes = ∫ { E_in(v(s)) + E_img(v(s)) + E_con(v(s)) } ds (here, E_in represents the internal energy term of the contour line, E_img represents an energy term that is minimal at the edge portions of the image, and E_con represents an energy term due to external constraint forces)
  • Then, the luminance variance calculation unit 102f acquires the luminance on the convergent contour generated by the contour convergence unit 102e (step SA-3). More specifically, the luminance variance calculation unit 102f acquires the luminance values of the image pixels corresponding to the convergent contour.
  • Here, when the image data stored in the image data file 106a includes luminance information corresponding to a plurality of colors (for example, luminance values corresponding to the three primary colors of RGB), the luminance variance calculation unit 102f may acquire as the luminance the p-th root of the sum of the p-th powers of the luminance information weighted for each color.
  • Then, the luminance variance calculation unit 102f calculates the variance of the acquired luminance (step SA-4).
  • Then, the threshold determination unit 102g compares the luminance variance calculated by the luminance variance calculation unit 102f with a predetermined threshold value (for example, a predetermined standard deviation value) and determines whether the variance is greater than the threshold value or equal to or less than the threshold value (step SA-5). When the threshold determination unit 102g determines that the variance is greater than the threshold value, the contour expansion unit 102h determines that the convergent contour has not been set correctly on the true contour of the object (step SA-6), expands the convergent contour, sets the expanded convergent contour as a new initial contour, and the process returns to step SA-2 (step SA-7).
  • FIG. 4 is a conceptual diagram showing an example in which the contour extraction processing according to the present embodiment is performed.
  • the shaded area in FIG. 4 represents an object (cell).
  • the step numbers in FIG. 4 correspond to the step numbers in FIG.
  • the initial contour setting unit 102d sets an initial contour for the next frame k ( ⁇ SA-1> in FIG. 4).
  • That is, as described later, the initial contour setting unit 102d expands the contour extracted for the previous frame and sets the expanded contour as the initial contour for the next frame.
  • the contour convergence unit 102e performs contour fitting by Snakes to converge the initial contour and generate a convergent contour ( ⁇ SA-2> on the left in FIG. 4).
  • In the example of FIG. 4, the object has moved or deformed from the previous frame beyond the expansion range of the Snakes contour, so the contour intersects the true contour of the object: energy minimization occurs at the luminance gradients inside and around the object, and the Snakes contour cannot at first be set correctly on the true contour of the object.
  • FIG. 5 is a diagram illustrating an example in which processing by the conventional method is performed and an example in which the contour extraction processing of the present embodiment is performed.
  • In FIG. 5, the right diagram and the left diagram represent the same image, in which two cells are captured as objects, and the convergent contours with contour numbers 0 to 3 are represented by polygons.
  • In the conventional method shown in the left diagram of FIG. 5, when the cell moves beyond the expansion range of the contour (ROI) and the initial contour before the re-contraction calculation crosses the cell region, the contour energy is minimized at a false contour generated by luminance changes inside the cell, and the ROI cannot be fitted to the true contour of the cell (see contour number 2 in the left diagram). In the present embodiment, as shown in the right diagram of FIG. 5, contraction and expansion of the ROI are repeated using the luminance variance as an index during the contour extraction processing; therefore, even when the ROI is placed inside the cell region, it can be expanded outward from the cell interior and fitted to the true contour of the object (see contour number 2 in the right diagram).
  • On the other hand, when the threshold determination unit 102g determines that the variance is equal to or less than the threshold value (step SA-5, Yes), the contour extraction unit 102i determines that the convergent contour has been set correctly on the true contour of the object and extracts the convergent contour as the contour of the object (step SA-8). Then, the contour acquisition unit 102j stores the center position (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SA-9).
  • FIG. 6 is a flowchart illustrating an example of the time order analysis process.
  • First, the time normal order frame setting unit 102c sets the first frame 1 on the time axis of the image data stored in the image data file 106a as the start frame, and the initial contour setting unit 102d sets an initial contour for each object (step SB-1).
  • Then, in step SB-2, contour expansion processing is performed (from the second frame onward, the contour extracted in the previous frame is expanded and set as the initial contour, as described below). The subsequent steps SB-3 to SB-8 are performed in the same manner as steps SA-2 to SA-7 described above, and the processing is repeated until the luminance variance on the convergent contour becomes equal to or less than the threshold value.
  • When the variance becomes equal to or less than the threshold value, the contour extraction unit 102i determines that the convergent contour has been set correctly on the true contour of the object (step SB-9), extracts the convergent contour as the contour of the object, and completes the setting of the convergent contour (step SB-10).
  • Then, the contour acquisition unit 102j stores the center position (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SB-11).
  • When the time normal order frame setting unit 102c determines that the analysis frame k has not reached frame n (step SB-12, No), it increments k (k ← k+1) to advance the analysis frame by one, and the process returns to step SB-2 (step SB-13).
  • FIG. 7 is a diagram schematically illustrating a state in which the extracted contour extracted in the previous frame is expanded, the initial contour in the next frame is set, and a convergent contour is generated.
  • In FIG. 7, the left diagram shows the object and the extracted contour in the previous frame, the center diagram shows the object and the initial contour in the next frame, and the right diagram shows the object and the convergent contour in the next frame.
  • As shown in FIG. 7, in step SB-2 the contour expansion unit 102h expands the convergent contour extracted by the contour extraction unit 102i in the previous frame and sets the expanded contour as the initial contour in the next frame; therefore, even when the object moves or deforms, the initial contour can be set so as to surround the object, and a convergent contour that fits the true contour of the object can be obtained.
  • the process of the contour expanding unit 102h is the same in the following time reverse order analysis process.
  • Here, when the signal has disappeared, the initial contour setting unit 102d may perform feature extraction by the Hough transform and reset the initial contour to the extracted features in the vicinity of the contour. That is, when the signal (luminance) has disappeared in the previous frame k-1, in step SB-2 for the next frame k, the initial contour setting unit 102d may perform feature extraction by the Hough transform and set an initial contour using feature points near the ROI of the frame in which the signal was present.
  • the Hough transform is performed to reset the initial contour near the signal disappearance point, so that the same object can be continuously tracked before and after the signal disappearance.
  • the initial contour resetting process by the Hough transform will be described in detail with reference to FIGS. 9 and 10 in the following time reverse order analysis process.
  • FIG. 8 is a flowchart showing an example of the time reverse order analysis processing.
  • the time-reverse order frame setting unit 102b sets the last frame n on the time axis among the image data stored in the image data file 106a as a start frame (step SC-1).
  • the initial contour setting unit 102d sets an initial contour for each object for the start frame (step SC-2).
  • Then, in step SC-3, contour expansion processing is performed (from the second analysis frame onward, the contour extracted in the previous frame k+1 is expanded and set as the initial contour, as described below). Then, the contour convergence unit 102e converges the initial contour set by the initial contour setting unit 102d with Snakes to generate a convergent contour (step SC-4). The subsequent steps SC-4 to SC-9 are performed in the same manner as steps SA-2 to SA-7 described above, and the processing is repeated until the luminance variance on the convergent contour becomes equal to or less than the threshold value.
  • When the variance becomes equal to or less than the threshold value, the contour extraction unit 102i determines that the convergent contour has been set correctly on the true contour of the object, extracts the convergent contour as the contour of the object, and completes the setting of the convergent contour (step SC-11).
  • Then, the contour acquisition unit 102j stores the center coordinates (X, Y) of the contour of the object extracted by the contour extraction unit 102i in the position information file 106b in association with the ID, and stores the in-contour luminance of the extracted object in the luminance information file 106c in association with the ID (step SC-12).
  • Then, the cell division point setting unit 102k determines whether the distance between the center coordinates (X, Y) of the contours of any two objects is sufficiently small (that is, smaller than the same point determination distance) (step SC-13).
  • When the cell division point setting unit 102k determines that the distance between the center coordinates is smaller than the predetermined threshold value (step SC-13, Yes), it stores the time (or simply the frame number) of the analysis frame together with the contour numbers in the cell division point file 106d (step SC-14).
  • When the time reverse order frame setting unit 102b determines that the analysis frame k has not reached frame 1 (step SC-15, No), it decrements k (k ← k-1) to move the analysis frame back by one, and the process returns to step SC-3 (step SC-16).
  • That is, in step SC-3, the contour expansion unit 102h expands the contour extracted by the contour extraction unit 102i in the previous frame k+1, as in step SB-2 described above, and sets the expanded contour as the initial contour of the analysis frame k.
  • Here, as described above, when the signal has disappeared, the initial contour setting unit 102d may perform feature extraction by the Hough transform and set an initial contour using feature points in the vicinity of the ROI of the frame in which the signal was present.
  • FIG. 9 is a diagram schematically illustrating an example of processing for resetting the initial contour by Hough transform.
  • FIG. 10 is a diagram illustrating an example of a contour extraction result according to the present embodiment.
  • In the left graph of FIG. 10, the horizontal axis represents the frame number and the vertical axis represents the luminance. The right diagram of FIG. 10 represents the analysis result as a phylogenetic tree.
  • As shown in the graph, the luminance may change partway through the frames and the signal may disappear.
  • Even in such a case, according to this embodiment, the Hough transform is performed and the initial contour is reset near the signal disappearance point, so that the same object can be identified before and after the signal disappearance and, as shown in the right diagram, can be tracked continuously even when the signal disappears. A concrete sketch of this reset follows below.
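  • One concrete way to realize this reset, assuming roughly circular cells, is OpenCV's circle Hough transform; the sketch below is hypothetical (parameter values are illustrative, not from the patent) and rebuilds a circular initial contour near the last known position of the object.

```python
import cv2
import numpy as np

def reset_initial_contour(gray_frame, last_center, search_radius=40.0, n_points=16):
    """gray_frame must be an 8-bit single-channel image, as HoughCircles requires."""
    circles = cv2.HoughCircles(gray_frame, cv2.HOUGH_GRADIENT, dp=1.5, minDist=10,
                               param1=100, param2=20, minRadius=5, maxRadius=50)
    if circles is None:
        return None                                   # no circular feature found
    circles = circles[0]                              # shape (N, 3): x, y, radius
    dists = np.linalg.norm(circles[:, :2] - np.asarray(last_center), axis=1)
    if dists.min() > search_radius:
        return None                                   # nothing near the vanishing point
    cx, cy, r = circles[dists.argmin()]
    # rebuild a circular initial contour around the detected feature
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    return np.stack([cx + r * np.cos(theta), cy + r * np.sin(theta)], axis=1)
```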
  • FIG. 11 is a diagram for explaining the principle of determining the cell division point.
  • In FIG. 11, the left diagram, the center diagram, and the right diagram correspond to three frames arranged in order on the time axis; one object is schematically shown in the left diagram, and two objects are shown in the center diagram and the right diagram. When, viewed in reverse time order, the contours of the two objects come closer than the same point determination distance (the center diagram), the cell division point setting unit 102k sets this time (the time of the center diagram) as the cell division point. Note that even if the distance between the contour centers is less than the threshold value in a certain frame k, the cell division point setting unit 102k determines that no cell division has occurred when the two contours do not overlap in the next frame k-1. A sketch of this two-condition test follows below.
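  • The two-condition test described above can be sketched as follows (hypothetical Python; the same point determination distance is the threshold parameter mentioned earlier, and matplotlib's Path is used only as a convenient point-in-polygon test).

```python
import numpy as np
from matplotlib.path import Path

def is_division_point(contour_a, contour_b, prev_a, prev_b, same_point_dist=5.0):
    """Frame k is taken as a division point if the contour centers are closer
    than the same point determination distance AND the two contours overlap
    in frame k-1 (prev_a, prev_b); all contours are (N, 2) vertex arrays."""
    gap = np.linalg.norm(contour_a.mean(axis=0) - contour_b.mean(axis=0))
    if gap >= same_point_dist:
        return False
    # overlap test: does either earlier contour contain the other's vertices?
    overlap = (Path(prev_a).contains_points(prev_b).any()
               or Path(prev_b).contains_points(prev_a).any())
    return bool(overlap)
```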
  • FIG. 12 is a diagram illustrating, as an example, the result of contour extraction processing performed in reverse order on the time axis in time-continuous frames.
  • In FIG. 12, the numbers "282" to "285" indicate frame numbers, and the numbers "0" and "1" indicate the contour numbers (ID) of the respective contours.
  • Each frame image shows HeLa cells expressing a nucleus-labeling probe in which GFP is linked to Histone H2B, a nucleus-associated protein responsible for stabilizing DNA structure, imaged with the Olympus (company name) LCV100 system (trade name).
  • In some frames, the contours of ID: 0 and ID: 1 exist independently of each other, while in others they overlap completely because they target the same cell before cell division. In this case, the cell division point setting unit 102k sets as the cell division point the frame number at which the distance between the contour centers falls below the threshold value (here, "285"), or the frame number at which the two contours overlap (here, "284").
  • the cell division point setting unit 102k may create a phylogenetic tree of cell division based on the time (or frame number) and the contour number stored in the cell division point file 106d.
  • FIG. 13 is a diagram showing a contour extraction result (left diagram) and a cell division analysis result (right diagram) in a certain frame k.
  • Example 1 of the present embodiment will be described below with reference to FIG. 14. First, the principle of Example 1 will be described in comparison with the prior art.
  • In general, time-lapse imaging of cells and analysis of the resulting luminance information make it possible to obtain temporal and spatial information about intracellular signals.
  • In the conventional method, a certain luminance value is set as a threshold for each frame, and the luminance sets to be treated as cell regions are extracted by binarizing the image by luminance. Then, by statistical processing, the luminance sets with the minimum relative movement distance between frames are identified as corresponding to the same cell, and cell tracking is performed.
  • However, with this method, the analysis target is limited to time-lapse images of cells that have luminance (that is, fluorescence microscope time-lapse images), and binarization can be applied only to images in which neither the background nor the cell luminance has unevenness or gradients and in which the imaged cell population shows no significant luminance change.
  • In contrast, Example 1 improves the cell tracking ability by providing a more efficient cell extraction method and solves the above problems with a cell division identification algorithm. That is, in Example 1, cell regions are not extracted by the luminance-based binarization used in the conventional method; instead, regions are extracted by the dynamic contour model method (Snakes), which focuses on the cell contour, and in subsequent frames the cell is tracked by expanding the range of the contour model and then performing contraction again.
  • FIG. 14 is a flowchart showing the processing of the first embodiment.
  • As shown in FIG. 14, the contour extraction apparatus 100 of Example 1 opens the image data file 106a by the processing of the time reverse order frame setting unit 102b and reads the microscope time-lapse image data of the cells (step SD-1). After acquiring the necessary information from the image data file 106a by the processing of the time reverse order frame setting unit 102b, the contour extraction apparatus 100 moves the read control k to the last recorded frame n (step SD-2).
  • That is, the analysis proceeds sequentially in the reverse direction along the time axis.
  • Then, by the processing of the initial contour setting unit 102d, the contour extraction apparatus 100 sets, on the final frame n, the initial parameters to be given to the dynamic contour model (Snakes) (the initial contour position, the initial contour radius, the contraction strength, the number of contraction calculations, the expansion range, the number of control points on the contour, etc.) (step SD-3). These initial contours are set individually for the plurality of cells to be analyzed; that is, initial contours with IDs 1 to m are set for m cells.
  • Then, by the processing of the contour convergence unit 102e, the contour extraction apparatus 100 performs the energy model calculation so that the contour with the given initial radius contracts to the position that best fits the contour of the cell, and generates a convergent contour (step SD-4).
  • Then, the contour extraction apparatus 100 acquires the luminance values of the image pixels on the convergent contour by the processing of the luminance variance calculation unit 102f (step SD-5) and calculates the variance of the acquired luminance (step SD-6).
  • When the variance is larger than the predetermined standard deviation (step SD-7, No), the contour extraction apparatus 100 expands the convergent contour, sets it as a new initial contour, and returns to step SD-4. The contour extraction apparatus 100 repeats this contour convergence processing until the variance becomes equal to or smaller than the predetermined standard deviation (steps SD-4 to SD-7); when the variance is equal to or smaller than the predetermined standard deviation (step SD-7, Yes), it extracts the contour of the cell by the processing of the contour extraction unit 102i and sets the region of interest (ROI) (fits the dynamic contour model).
  • Then, by the processing of the contour acquisition unit 102j, the contour extraction apparatus 100 acquires luminance information from the obtained ROIs (step SD-8) and writes the luminance information in the ROIs with IDs 1 to m to the luminance information file 106c (step SD-9).
  • When the contour extraction apparatus 100 determines, by the processing of the cell division point setting unit 102k, that two or more ROIs indicate the same range and have fused (that is, that the cells have fused) (step SD-10, Yes), it determines that, viewed in the forward direction of the time axis, cell division has occurred, and records the cell division point in the cell division point file 106d (step SD-11).
  • Then, the contour extraction apparatus 100 determines whether the read control k has reached the first frame 1 by the processing of the time reverse order frame setting unit 102b (step SD-12); when it has not (step SD-12, No), the control shifts to the previous frame k-1 on the time axis by the processing of the time reverse order frame setting unit 102b, and the ROI range is then expanded by the processing of the contour expansion unit 102h (step SD-13).
  • the contour extracting apparatus 100 contracts the contour again in the expanded ROI range for the next frame, and extracts the cell range (steps SD-4 to SD-13).
  • the contour extracting apparatus 100 performs the above processing in order from the reverse direction of the time axis for all the frames n to 1.
  • Finally, based on the information recorded in the luminance information file 106c and the cell division point file 106d, the contour extraction apparatus 100 outputs information such as luminance change information and a transition diagram (phylogenetic tree) of cell division to the output unit 114. The overall reverse-order flow is sketched below.
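  • The reverse-order driver of Example 1 can be summarized by the following hypothetical Python sketch; fit_roi is assumed to run the converge / variance-check / expand cycle for one frame, e.g. a function like the extract_contour sketch shown earlier.

```python
import numpy as np

def track_cells_reverse(frames, last_frame_contours, fit_roi,
                        expand_factor=1.3, same_point_dist=5.0):
    """Walk the frames from last to first, carrying each ROI backwards.
    Since two daughter cells fuse into one when time runs backwards, a
    division point is recorded whenever two tracked contours come closer
    than the same point determination distance."""
    contours = dict(last_frame_contours)   # contour ID -> (N, 2) vertex array
    division_points = []                   # (frame index, (ID, ID)) records
    for k in range(len(frames) - 1, -1, -1):
        for cid, c in contours.items():
            center = c.mean(axis=0)
            expanded = center + expand_factor * (c - center)  # expand previous ROI
            contours[cid] = fit_roi(frames[k], expanded)      # re-contract in frame k
        ids = sorted(contours)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                gap = np.linalg.norm(contours[a].mean(axis=0)
                                     - contours[b].mean(axis=0))
                if gap < same_point_dist:
                    division_points.append((k, (a, b)))
    return division_points
```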
  • Example 2 in the present embodiment will be described below with reference to FIGS. 15 and 16.
  • In Example 2, so that the contour can be accurately extracted even when the image is expressed in multiple colors rather than a single color, cell contour extraction and cell tracking were performed after appropriately processing the luminance used for the contour calculation by the dynamic contour model method. The flow of processing other than the calculation of this luminance is basically the same as in Example 1.
  • Specifically, in Example 2, the p-th root of the sum of the p-th powers of the luminance information weighted for each color is used as the luminance for the contour calculation. That is, the luminance for the contour calculation is obtained by the following equation.
  • Intensity = ( a·Ch1^p + b·Ch2^p + c·Ch3^p )^(1/p) (here, a, b, and c are arbitrary weighting parameters for the fluorescence luminances Ch1, Ch2, and Ch3, respectively, and p is a parameter for the overall luminance)
  • Each parameter in the above equation is a real number specified by, for example, a double-precision floating point number, and is preferably a real number greater than 0.
  • With this calculation formula, a luminance Intensity for the Snakes contour calculation that reflects the luminances of a plurality of fluorescence images can be obtained. Then, based on the calculated Intensity values, the coordinates of the Snakes contour are calculated by the same method as in Example 1 described above (the time reverse order analysis processing), and the luminance information Ch1, Ch2, and Ch3 within the obtained Snakes contour is acquired. A direct transcription of the equation is sketched below.
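  • The following sketch transcribes the equation above (channel arrays and parameter values are up to the user); with a = b = c = p = 1, as in the analysis described next, Intensity reduces to the plain sum of the three channels.

```python
import numpy as np

def combined_luminance(ch1, ch2, ch3, a=1.0, b=1.0, c=1.0, p=1.0):
    """Intensity = (a*Ch1^p + b*Ch2^p + c*Ch3^p)^(1/p), computed per pixel."""
    s = a * np.power(ch1, p) + b * np.power(ch2, p) + c * np.power(ch3, p)
    return np.power(s, 1.0 / p)
```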
  • FIG. 15 is a diagram showing the result of analyzing, according to Example 2, fluorescence imaging images of primary cultured ZebraFish cells into which the fluorescent protein-based cell cycle indicator Fucci, which emits fluorescence specifically to the cell cycle phase, has been introduced.
  • This indicator, Fucci (Fluorescent Ubiquitination-based Cell Cycle Indicator), labels the G1 phase with red fluorescence (mKO2) and the S/G2/M phases with green fluorescence (mAG).
  • A certain amount of time is required both for the green fluorescent protein (mAG) to be completely degraded and for the red fluorescent protein (mKO2) to be synthesized and accumulated enough to emit sufficient fluorescence.
  • The fluorescence images used for the analysis are a group of about 1000 frames obtained by time-lapse imaging, over 48 hours, of the ZebraFish primary cultured cells into which Fucci had been introduced. Representative images from this group are shown in FIG. 15 together with the extracted contours (contour numbers 2, 3, and 5).
  • On the analysis algorithm side, each parameter (a, b, c, and p) in the above calculation formula was set to 1.
  • The cells exhibit the green fluorescence of the S/G2/M phases around 0h-12h, the red fluorescence of the G1 phase around 15h-27h, the green fluorescence of the S/G2/M phases again around 30h-45h, and then the red fluorescence of the G1 phase once more in the vicinity of 48h.
  • It was confirmed that the contour of the cell could be accurately extracted in every one of these images. Also, as shown in FIG. 15, among the three contours present at 0h (contour numbers 2, 3, and 5), cell division was detected for the cells of contour number 2 and contour number 3 in a frame (*1) near 12h.
  • FIG. 16 is a graph showing the change in intensity of each fluorescence within the contours extracted in Example 2, together with a diagram showing the cell-division phylogenetic tree created from the cell tracking results and the luminance information within the contours in Example 2.
  • FIG. 17 is a diagram illustrating an example in which an ROI is set on a HeLa cell colony in a differential interference contrast image and contour extraction of the entire colony is performed.
  • As this example shows, the present invention is applicable not only to analysis of fluorescence images, in which the cell region and the background region appear as a luminance difference, but also to images such as differential interference contrast images, in which the object and the background do not appear as a simple luminance difference.
  • In the embodiments above, one initial contour is set for one cell in a fluorescence microscope image and its contour is extracted. However, the object may also be extracted from images other than fluorescence microscope images, and one initial contour may be set for a group of a plurality of cells or a plurality of objects so that the contour of the group is extracted.
  • The case where the contour extraction apparatus 100 performs processing in stand-alone form has been described as an example; however, the apparatus may be configured to perform processing in response to a request from a client terminal housed separately from the contour extraction apparatus 100 and to return the processing result to that client terminal.
  • All or part of the processes described as being performed automatically can be performed manually, and all or part of the processes described as being performed manually can be performed automatically by known methods.
  • Each illustrated component of the contour extraction apparatus 100 is functionally conceptual and need not be physically configured as illustrated.
  • All or any part of the processing functions provided in each unit of the contour extraction apparatus 100 may be realized by a CPU (Central Processing Unit) and a program interpreted and executed by that CPU, or may be realized as hardware by wired logic.
  • The program is recorded on a recording medium described later and is mechanically read into the contour extraction apparatus 100 as necessary. That is, the storage unit 106, such as a ROM or an HD (hard disk), stores a computer program that gives instructions to the CPU in cooperation with the OS (Operating System) to perform various processes. This computer program is executed by being loaded into the RAM and, in cooperation with the CPU, constitutes the control unit.
  • Alternatively, the computer program may be stored in an application program server connected to the contour extraction apparatus 100 via an arbitrary network 300, and may be downloaded in whole or in part as necessary.
  • the program according to the present invention can also be stored in a computer-readable recording medium.
  • Here, the “recording medium” includes an arbitrary “portable physical medium” such as a flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, or Blu-ray Disc, as well as a “communication medium” that holds the program for a short period of time, such as a communication line or a carrier wave used when the program is transmitted over a network typified by a LAN, WAN, or the Internet.
  • A “program” is a data processing method described in an arbitrary language or notation, and may take any form, such as source code or binary code.
  • A program is not necessarily limited to a single configuration; it includes programs distributed as a plurality of modules or libraries and programs that achieve their functions in cooperation with a separate program, typified by an OS (Operating System).
  • A well-known configuration and procedure can be used for the specific configuration for reading the recording medium in each device described in the embodiment, the reading procedure, the installation procedure after reading, and the like.
  • The various databases and files stored in the storage unit 106 (the image data file 106a to the cell division point file 106d) are storage means such as memory devices such as RAM and ROM, fixed disk devices such as hard disks, flexible disks, and optical disks, and store the various programs, tables, databases, web page files, and the like used for various processes and for providing websites.
  • The contour extraction apparatus 100 may also be realized by connecting an information processing apparatus such as a known personal computer or workstation and installing, on that information processing apparatus, the software (including programs, data, and the like) that implements the method of the present invention.
  • Furthermore, the specific form of distribution and integration of the devices is not limited to that shown in the figures; all or part of them can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
  • As described above, according to the present invention, the true contour of a target object can be accurately extracted, and the invention is therefore extremely useful in a wide range of fields, such as medicine and pharmacy, drug discovery, biological research, clinical testing, and crime prevention systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

The invention relates to a method that sets an initial contour of an object in image data of an imaged object, produces a convergent contour by converging the set initial contour, acquires the luminance of the obtained convergent contour, calculates the variance of that luminance, compares the calculated variance with a predetermined threshold, and determines whether the variance is greater than or less than the threshold. If the variance is determined to be greater than the threshold, the obtained convergent contour is expanded and the expanded contour is set as the initial contour; if the variance is determined to be less than the threshold, the obtained convergent contour is determined to be the contour of the object.
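The iteration described in this abstract can be sketched as follows. Here converge() and expand() stand for any contour-convergence step (e.g., a Snakes energy minimization) and any contour-enlargement step; both, along with the function and parameter names, are assumptions of the sketch rather than the patent's actual interfaces:

```python
import numpy as np

def define_contour(image, initial_contour, converge, expand, threshold):
    """Iterate: converge the contour, measure the luminance variance on it,
    and expand-and-retry while the variance exceeds the threshold."""
    contour = np.asarray(initial_contour)        # N x 2 array of (row, col) points
    while True:
        contour = converge(image, contour)       # produce the convergent contour
        rows = contour[:, 0].astype(int)
        cols = contour[:, 1].astype(int)
        variance = np.var(image[rows, cols])     # luminance variance on the contour
        if variance <= threshold:                # uniform luminance: contour of the object
            return contour
        contour = expand(contour)                # expanded contour becomes the new initial contour
```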
PCT/JP2010/051999 2009-02-24 2010-02-10 Dispositif de définition de contour et procédé de définition de contour, et programme associé WO2010098211A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011501550A JPWO2010098211A1 (ja) 2009-02-24 2010-02-10 輪郭抽出装置および輪郭抽出方法、並びにプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-041171 2009-02-24
JP2009041171 2009-02-24

Publications (1)

Publication Number Publication Date
WO2010098211A1 true WO2010098211A1 (fr) 2010-09-02

Family

ID=42665422

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/051999 WO2010098211A1 (fr) 2009-02-24 2010-02-10 Dispositif de définition de contour et procédé de définition de contour, et programme associé

Country Status (2)

Country Link
JP (1) JPWO2010098211A1 (fr)
WO (1) WO2010098211A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000331143A (ja) * 1999-05-14 2000-11-30 Mitsubishi Electric Corp 画像処理方法
JP2004054347A (ja) * 2002-07-16 2004-02-19 Fujitsu Ltd 画像処理方法、画像処理プログラムおよび画像処理装置
JP2007041664A (ja) * 2005-08-01 2007-02-15 Olympus Corp 領域抽出装置および領域抽出プログラム
JP2007222073A (ja) * 2006-02-23 2007-09-06 Yamaguchi Univ 画像処理により細胞運動特性を評価する方法、そのための画像処理装置及び画像処理プログラム

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NOBORU HIGASHI ET AL.: "Doteki Rinkaku Chushutsu Hoho ni Okeru Bocho Shushuku Model no Kaihatsu to Kosokuka Shuho eno Tekio", THE INSTITUTE OF ELECTRICAL ENGINEERS OF JAPAN SANGYO SYSTEM JOHOKA KENKYUKAI SHIRYO, vol. IIS-00, no. 13-21, 10 August 2000 (2000-08-10), pages 15 - 18 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012163538A (ja) * 2011-02-09 2012-08-30 Olympus Corp 細胞画像解析システム
JPWO2014196134A1 (ja) * 2013-06-06 2017-02-23 日本電気株式会社 解析処理システム
JP2017520354A (ja) * 2014-05-14 2017-07-27 ウニベルシダ デ ロス アンデス 体組織の自動的なセグメンテーションおよび定量化のための方法

Also Published As

Publication number Publication date
JPWO2010098211A1 (ja) 2012-08-30

Similar Documents

Publication Publication Date Title
Fisher et al. Dictionary of computer vision and image processing
JP7026826B2 (ja) 画像処理方法、電子機器および記憶媒体
Ji et al. Tracking quasi‐stationary flow of weak fluorescent signals by adaptive multi‐frame correlation
Andrade-Miranda et al. Laryngeal image processing of vocal folds motion
JP2010262350A (ja) 画像処理装置、画像処理方法およびプログラム
Poux et al. Unsupervised segmentation of indoor 3D point cloud: Application to object-based classification
Amat et al. Towards comprehensive cell lineage reconstructions in complex organisms using light‐sheet microscopy
JP6179224B2 (ja) 画像処理フィルタの作成装置及びその方法
Karantzalos Recent advances on 2D and 3D change detection in urban environments from remote sensing data
Dorn et al. Computational processing and analysis of dynamic fluorescence image data
CN112419295A (zh) 医学图像处理方法、装置、计算机设备和存储介质
Yang et al. Intelligent crack extraction based on terrestrial laser scanning measurement
JP5310485B2 (ja) 画像処理方法及び装置並びにプログラム
JP2019029935A (ja) 画像処理装置およびその制御方法
WO2010098211A1 (fr) Dispositif de définition de contour et procédé de définition de contour, et programme associé
JP5965764B2 (ja) 映像領域分割装置及び映像領域分割プログラム
Bhanu et al. Video bioinformatics
CN116091524B (zh) 一种针对复杂背景中目标的检测与分割方法
Adam et al. Objects can move: 3d change detection by geometric transformation consistency
Sáez et al. Neuromuscular disease classification system
Chen et al. Plane segmentation for a building roof combining deep learning and the RANSAC method from a 3D point cloud
Rieger et al. Aggregating explanation methods for stable and robust explainability
Breier et al. Analysis of video feature learning in two-stream CNNs on the example of zebrafish swim bout classification
Cheikh et al. A structure-based approach for colon gland segmentation in digital pathology
Japar et al. Coherent group detection in still image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10746092

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2011501550

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10746092

Country of ref document: EP

Kind code of ref document: A1