WO2011156948A1 - Reconstruction of overlapped objects in image - Google Patents

Reconstruction of overlapped objects in image Download PDF

Info

Publication number
WO2011156948A1
WO2011156948A1 (PCT/CN2010/073922)
Authority
WO
WIPO (PCT)
Prior art keywords
points
segment
overlapped
image
component
Application number
PCT/CN2010/073922
Other languages
French (fr)
Inventor
Yang Gao
Original Assignee
Nanjing University
Application filed by Nanjing University
Priority to PCT/CN2010/073922
Publication of WO2011156948A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking

Definitions

  • Referring to FIG. 1a, a flowchart of an illustrative embodiment of a method 100 for reconstructing overlapped objects in an image is shown.
  • method 100, and other methods and processes described herein, sets forth various functional blocks or actions that may be described as processing steps, functional operations, events, and/or acts, which may be performed by hardware, software, and/or firmware.
  • although FIG. 1a depicts one particular order of blocks or actions, the order in which these blocks or actions are presented does not necessarily limit the claimed subject matter to any particular order.
  • Method 100 may include one or more operations, functions, or actions as illustrated by blocks 101 and/or 103.
  • the various blocks are not intended to be limiting to the described embodiments.
  • one skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order.
  • the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments.
  • segment points are obtained.
  • the segment points may be obtained on the contour of the overlapped object area.
  • the segment points may be used to segment a contour of an overlapped object area in the image to obtain incomplete contours of the overlapped objects.
  • virtual points are generated.
  • virtual points may be generated based on the incomplete contours of the overlapped objects.
  • the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
  • the virtual points may be generated based on the segment points.
  • Referring to FIG. 1b, a flowchart of an illustrative embodiment of a method for obtaining segment points (block 101 of FIG. 1a) is shown.
  • the illustrated method for obtaining segment points may include one or more operations as illustrated by blocks 105, 107, 109, 111, 113, and/or 115.
  • an overlapped object area is extracted.
  • the overlapped object area may be extracted from a segmented image such as a binary image.
  • a first set of points is obtained on an edge of the overlapped object area.
  • the first set of points is obtained by an eight-connected chain code running clockwise.
  • a two pass connected component labeling method can also be used to obtain the first set of points.
  • FIG. 2 shows an illustration of an example image showing a first set of points obtained on an edge of an overlapped cell area in a segmented lung cancer cell image, by an eight-connected chain code running clockwise.
  • a lung cancer cell image may be used to conduct lung cancer diagnosis, but this does not mean that a lung cancer cell image must contain a lung cancer cell; for example, a lung cancer cell image may contain only healthy cells.
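  • As a hedged illustration of this step, the edge of a binary object mask can be traced clockwise with an 8-connected (Moore) neighbour search, as in the sketch below. The function name, the starting rule, and the simple stopping test are assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

# Clockwise 8-neighbour offsets in (row, col) image coordinates:
# E, SE, S, SW, W, NW, N, NE.
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_boundary(mask):
    """Return the 'first set of points': edge pixels of a binary object mask,
    visited clockwise in an 8-connected chain-code style traversal."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return []
    start = (int(rows[0]), int(cols[0]))   # topmost, then leftmost foreground pixel
    boundary = [start]
    cur, back = start, 4                   # pretend we arrived moving from the west
    while True:
        for k in range(1, 9):              # scan the 8 neighbours clockwise
            d = (back + k) % 8
            nr, nc = cur[0] + OFFSETS[d][0], cur[1] + OFFSETS[d][1]
            if 0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1] and mask[nr, nc]:
                back = (d + 4) % 8         # direction pointing back to cur
                cur = (nr, nc)
                break
        else:
            return boundary                # isolated single pixel
        if cur == start:                   # simple stop test (Jacob's criterion
            return boundary                # would be more robust for thin spurs)
        boundary.append(cur)
```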
  • the first set of points is sampled to obtain a second set of points.
  • the second set of points is obtained by sampling the first set of points every N points, where N is a predetermined number.
  • N can be determined according to the density of the first set of points, the accuracy requirement, the computation cost, etc.
  • N may be selected from the range of 5-10.
  • the second set of points is fitted to obtain a third set of points.
  • a smoother contour may be obtained by connecting the third set of points compared to a contour obtained by connecting the first set of points directly.
  • a cubic B-spline method may be used to fit the second set of points to obtain the third set of points.
  • alternatively, Lagrange interpolation, Newton interpolation, the Newton iterative algorithm, the bisection method, etc. may be used to fit the second set of points to obtain the third set of points.
  • in the B-spline fitting, x is the serial number of a point to be calculated in the third set of points, and A_si represents the serial number of the point A_i in the third set of points, with A_si ≤ x ≤ A_si+3.
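  • A minimal sketch of the sampling and fitting steps, assuming SciPy's parametric B-spline routines (splprep/splev) as the fitting backend; N and the output density n_out are illustrative parameters.

```python
import numpy as np
from scipy import interpolate

def sample_and_fit(boundary, N=7, n_out=300):
    """Sample the first set of points every N points (the second set), then fit
    a closed cubic B-spline through the samples to obtain the third set."""
    second_set = np.asarray(boundary, dtype=float)[::N]
    closed = np.vstack([second_set, second_set[:1]])   # close the polygon
    # s=0 interpolates the samples exactly; per=True makes the spline periodic.
    tck, _ = interpolate.splprep([closed[:, 0], closed[:, 1]], s=0, per=True)
    u = np.linspace(0, 1, n_out, endpoint=False)
    r, c = interpolate.splev(u, tck)
    return np.column_stack([r, c])                     # third set of points
```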
  • a two-order derivative of each point of the third set of points may be calculated.
  • two-order derivatives of points sampled from the third set of points may be calculated.
  • the third set of points is sampled every N' points, where N' is a predetermined number.
  • two-order derivative calculation may be conducted in a discrete manner.
  • a two-order derivative of a point may be calculated using coordinates of the point and its neighboring points.
  • the i-th point is represented as (x_i, y_i), where x_i represents the x coordinate of the i-th point, and y_i represents the y coordinate of the i-th point.
  • a two-order derivative of the i-th point can be calculated as follows. First, a one-order derivative F1(i) of the i-th point may be calculated according to equation (3).
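  • Equations (3) to (5) are images in the original filing and are not reproduced here; the sketch below uses central divided differences over a closed contour as one plausible discrete formulation of the one-order and two-order derivatives.

```python
import numpy as np

def two_order_derivative(points):
    """Discrete two-order derivative at every point of a closed contour.
    Central divided differences are an assumption standing in for the
    patent's equations (3)-(5)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    xm, xp = np.roll(x, 1), np.roll(x, -1)   # x_{i-1}, x_{i+1}, wrapping around
    ym, yp = np.roll(y, 1), np.roll(y, -1)
    dx = np.where(np.abs(xp - xm) < 1e-9, 1e-9, xp - xm)   # avoid division by 0
    f1 = (yp - ym) / dx                      # one-order derivative, eq. (3)-like
    f2 = (np.roll(f1, -1) - np.roll(f1, 1)) / dx           # two-order derivative
    return f2                                # its sign marks candidate junctions
```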
  • segment points are obtained based on the two-order derivative calculation results in block 113.
  • points closest to 1/3 and 2/3 along lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative may be selected from the third set of points as segment points.
  • points at 1/3 and 2/3 along lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative may be calculated as segment points.
  • the first and the last points along lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative may be selected from the third set of points as segment points.
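  • A sketch of this selection rule, approximating "1/3 and 2/3 along the length" by index position within each maximal run of points with a positive two-order derivative; the handling of wrap-around and very short runs is an assumption.

```python
import numpy as np

def pick_segment_points(points, f2):
    """From each maximal run of consecutive contour points whose two-order
    derivative is positive, select the points nearest 1/3 and 2/3 of the run."""
    pts = np.asarray(points, dtype=float)
    idx = np.flatnonzero(np.asarray(f2) > 0)
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1) if idx.size else []
    segment_points = []
    for run in runs:
        if run.size >= 3:                    # ignore very short runs
            segment_points.append(pts[run[run.size // 3]])
            segment_points.append(pts[run[2 * run.size // 3]])
    return segment_points
```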
  • FIG. 3a shows an illustration of an example image showing a first part of a contour segmented from the contour of the overlapped cell area of FIG. 2 using the segment points that are closer to the first part of the contour.
  • FIG. 3b shows an illustration of an example image showing a second part of a contour segmented from the contour of the overlapped cell area of FIG. 2 using the segment points that are closer to the second part of the contour.
  • shape "o" represents the second set of points.
  • FIG. 1c shows a flowchart of an illustrative embodiment of a method (block 103 of FIG. 1a) for generating virtual points.
  • the illustrated method for generating virtual points may include one or more operations as illustrated by blocks 117 and/or 119.
  • block 117 GENERATE VIRTUAL POINTS
  • virtual points to reconstruct contours of the overlapped objects are generated.
  • the virtual points may be generated based on incomplete contours of overlapped objects.
  • the incomplete contours of overlapped objects may be obtained by segmenting the contour of the overlapped object area using the segment points obtained in the operation of block 115. Then the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
  • tangent lines of the segment points are computed, and virtual points are generated using the tangent lines by a deBoor-Cox method.
  • FIG. 4 shows an example image illustrating how the virtual points may be generated using the deBoor-Cox method.
  • Tangent lines S1T and S2T of segment points S1 and S2 are computed, where T is the intersection point of the tangent lines S1T and S2T.
  • middle points of the tangent lines S1T and S2T are calculated, which middle points are indicated as P1 and P2, respectively.
  • middle points of lines S1P1, P1P2, and P2S2 are calculated, which middle points are indicated as P4, P3, and P5, respectively.
  • the middle points calculated in the last round are taken as the virtual points.
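  • The sketch below mirrors the construction just described: intersect the two tangent lines to obtain T, then repeatedly cut midpoints of the control polygon. The number of rounds is an assumed parameter, and parallel tangent lines are not handled.

```python
import numpy as np

def deboor_cox_virtual_points(s1, t1, s2, t2, rounds=2):
    """Generate virtual points bridging segment points s1 and s2, where t1 and
    t2 are tangent direction vectors at s1 and s2."""
    s1, t1, s2, t2 = (np.asarray(v, dtype=float) for v in (s1, t1, s2, t2))
    # Intersection T of the tangent lines: solve s1 + a*t1 == s2 + b*t2.
    ab = np.linalg.solve(np.column_stack([t1, -t2]), s2 - s1)
    T = s1 + ab[0] * t1
    poly = [s1, T, s2]                       # initial control polygon
    mids = []
    for _ in range(rounds):
        mids = [(poly[i] + poly[i + 1]) / 2.0 for i in range(len(poly) - 1)]
        poly = [poly[0], *mids, poly[-1]]    # corner-cut the polygon
    return mids    # the midpoints of the last round are the virtual points
```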
  • the virtual points are fitted to obtain a fourth set of points.
  • a cubic B-spline method may be used to fit the virtual points to obtain the fourth set of points.
  • a contour of a segmented cell can be obtained by connecting the third and the fourth sets of points in sequence. The contour formed by connecting the third and the fourth sets of points is smoother than that formed by connecting the virtual points and the third set of points.
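  • A short sketch of closing the reconstruction, assuming the same SciPy B-spline backend as in the earlier sketches; the output point count is illustrative.

```python
import numpy as np
from scipy import interpolate

def close_contour(third_set, virtual_pts, n_out=50):
    """Fit the virtual points with a B-spline (the fourth set of points) and
    connect the third and fourth sets in sequence into one smooth contour."""
    v = np.asarray(virtual_pts, dtype=float)
    k = min(3, len(v) - 1)                   # cubic if enough points are given
    tck, _ = interpolate.splprep([v[:, 0], v[:, 1]], s=0, k=k)
    r, c = interpolate.splev(np.linspace(0, 1, n_out), tck)
    fourth_set = np.column_stack([r, c])
    return np.vstack([np.asarray(third_set, dtype=float), fourth_set])
```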
  • FIG. 5a-a, FIG. 5b-a, and FIG. 5c-a show example images illustrating three example overlapped cell areas.
  • FIG. 5a-b and FIG. 5a-c show example images illustrating two corresponding cell areas reconstructed from the original overlapped cell area shown in FIG. 5a-a using the method 100
  • FIG. 5a-d illustrates cell areas reconstructed from the original overlapped cell area using a conventional watershed algorithm.
  • FIG. 5b-b and FIG. 5b-c illustrate two corresponding cell areas reconstructed from the original overlapped cell area shown in FIG. 5b-a using the method 100
  • FIG. 5b-d illustrates cell areas reconstructed from the original overlapped cell area using the conventional watershed algorithm.
  • FIG. 5c-b and FIG. 5c-c illustrate two corresponding cell areas reconstructed from the original overlapped cell area of FIG. 5c-a using the method 100.
  • FIG. 5c-d illustrates cell areas reconstructed from the original overlapped cell area using the conventional watershed algorithm.
  • as these figures illustrate, the method 100 performs better than the conventional watershed algorithm at reconstructing overlapped objects, especially overlapped cells.
  • the method 100 can also be used to reconstruct overlapped objects other than overlapped cells in an image.
  • Referring to FIG. 6, a flowchart of an illustrative embodiment of a method 200 for classifying images, such as lung cancer cell images, having an overlapped object area therein is shown.
  • the illustrated method for classifying images may include one or more operations as illustrated by blocks 201, 203, and/or 205.
  • block 201 SEGMENT AN IMAGE
  • an image to be classified is segmented to obtain multiple object areas, such as cell areas in the case of a lung cancer cell image.
  • overlapped object area(s) are reconstructed.
  • the overlapped object areas may be reconstructed using the method 100.
  • the image is classified using the segmented object areas and the reconstructed object areas. Since overlapped object areas are used in the classification, the efficiency and accuracy of image classification are improved compared with conventional image classifying methods, in which overlapped object areas are not used.
  • Referring to FIG. 7, a block diagram of an illustrative embodiment of a device 300 that reconstructs overlapped objects, such as lung cancer cells, in an image is shown.
  • the device 300 includes a segment points obtaining component 301 and a virtual points computing component 303.
  • the segment points obtaining component 301 computes segment points.
  • the segment points may be used to segment a contour of an overlapped object area, such as an overlapped cell area in the case of a lung cancer cell image, to obtain multiple incomplete contours of overlapped objects.
  • the virtual points computing component 303 computes virtual points based on the incomplete contours of the overlapped objects.
  • the virtual points and the incomplete contours may be used to reconstruct contours of the overlapped objects.
  • the segment points obtaining component 301 includes a contour points computing component 305, a sampling component 307, a fitting calculating component 309, a two-order derivative calculating component 311, and a segment points calculating component 313.
  • the contour points computing component 305 obtains a first set of points on an edge of the overlapped object area.
  • the contour points computing component 305 obtains the first set of points by an eight-connected chain code running clockwise.
  • the contour points computing component 305 may use other methods to obtain the first set of points, for example, a two pass connected component labeling method.
  • the sampling component 307 samples the first set of points to obtain a second set of points.
  • the sampling component 307 samples the first set of points every N points, where N is a predetermined number; N may be determined based on the accuracy requirement, computation speed, etc.
  • the fitting calculating component 309 fits the second set of points to obtain a third set of points.
  • the fitting calculating component 309 fits the second set of points using a cubic B-spline method.
  • the fitting calculating component 309 may fit the second set of points to obtain the third set of points using other methods such as those mentioned above.
  • the two-order derivative calculating component 311 calculates two-order derivatives of at least some points of the third set of points. In one embodiment, the two-order derivative calculating component 311 may calculate a two-order derivative of each point of the third set of points. In one embodiment, the two-order derivative calculating component 311 may calculate two-order derivatives of points sampled from the third set of points. In one embodiment, the third set of points is sampled every N' points, where N' may be a pre-determined number. The segment points calculating component 313 obtains segment points based on the two-order derivative calculation results of the two-order derivative calculating component 311.
  • the segment points calculating component 313 selects, from the third set of points, points closest to 1/3 and 2/3 along the length of a line formed by connecting a sequence of points having a positive two-order derivative. In one embodiment, the segment points calculating component 313 calculates points at 1/3 and 2/3 along the length of such a line as segment points. Those of ordinary skill in the art will appreciate that the segment points calculating component 313 may use other methods to calculate segment points; the points at 1/3 and 2/3 along the length of a line are just examples.
  • the virtual points computing component 303 computes virtual points from the segment points by a deBoor-Cox method. Those of ordinary skill in the art will appreciate that the virtual points computing component 303 may generate virtual points using other methods.
  • the virtual points computing component 303 may include an optional virtual points fitting component 315, which fits the generated virtual points to obtain a fourth set of points. A smoother contour may be obtained by connecting the fourth set of points and the third set of points in sequence, compared with a contour formed by connecting the virtual points and the third set of points directly.
  • the virtual points computing component 303 may share the fitting calculating component 309 to fit the generated virtual points.
  • the device 300 further includes an image receiving component 317 which receives images having overlapped object areas such as lung cancer images having overlapped cell areas.
  • the device 300 further includes a display 319 which displays information and data to facilitate the execution of the method 100 described above to, for example, users of the device 300.
  • the device 300 further includes an input component 321 which facilitates the reception (e.g., input) of instructions from, for example, users.
  • the device 300 further includes an output component 323 which outputs calculation results of the device 300. All the components of the device 300 may communicate with each other via a communication BUS 325.
  • FIG. 8 shows a flowchart of an illustrative embodiment of a method 400 for training a reward matrix for image segmentation.
  • the illustrated method for training the reward matrix may include one or more operations as illustrated by blocks 401 and/or 403.
  • An element in the reward matrix represents a reward given for taking a corresponding action at a corresponding state.
  • Table 1 illustrates an example of a reward matrix, in which each element is initialized as 0; S1, S2, ..., S5 represent corresponding states, and "+5", "0", and "-5" represent actions of changing the threshold by +5, 0, and -5, respectively.

Table 1

state   +5    0    -5
S1       0    0     0
S2       0    0     0
S3       0    0     0
S4       0    0     0
S5       0    0     0
  • block 401 PROVIDE A GROUP OF EXAMPLE IMAGES AND A CORRESPONDING GROUP OF SEGMENTED TRAINING IMAGES
  • a group of example images and a corresponding group of segmented training images are provided.
  • a segmented training image may be obtained by segmenting a corresponding example image in advance.
  • before training, the reward matrix is initialized. In one embodiment, the reward matrix is initialized by setting all of its elements to 0.
  • in the reinforcement learning method, state is defined as S = E * F, as described above, and action is defined as changing the threshold.
  • FIG. 9 shows a flowchart of an illustrative embodiment of a method (block 403 of FIG. 8) for training a reward matrix using example images and segmented training images.
  • the illustrated method for training the reward matrix may include one or more operations as illustrated by blocks 405, 407, 409, 411, 413, 415, 417, 419, 421, and/or 423.
  • edges of object areas in a selected example image are computed using the Sobel operator.
  • computing states of an image at every grey level may be computation intensive.
  • instead, states of an image at grey levels pre-selected according to a certain rule may be calculated. For example, every 5 grey levels, i.e., 0, 5, 10, 15, and so on, may be selected.
  • a reward of an image at a grey level is calculated according to the following equation (6):

R = (|Bc ∩ BT| / |Bc ∪ BT|) × (|Fc ∩ FT| / |Fc ∪ FT|)    (6)
  • R represents the reward
  • Bc represents a background obtained by segmenting a selected example image using a current threshold
  • BT represents a background of a corresponding segmented training image
  • Fc represents a foreground obtained by segmenting the selected example image using the current threshold
  • FT represents a foreground of the corresponding segmented training image
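  • The following sketch computes the state and the reward for a given current threshold, assuming the overlap-ratio reading of the state definition and equation (6) reconstructed in the Summary (the originals are images in the filing), with scikit-image supplying the Sobel and OTSU operators; the edge-pixel proxy and the Sobel cutoff are assumptions.

```python
import numpy as np
from skimage import filters

def overlap_ratio(a, b):
    """|A ∩ B| / |A ∪ B| for two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def state_value(img, threshold, sobel_cut=0.1):
    """S = E * F, per the reconstructed state definition."""
    fc = img > threshold                         # foreground at current threshold
    fo = img > filters.threshold_otsu(img)       # foreground via OTSU
    es = filters.sobel(img) > sobel_cut          # Sobel edge pixels, used here as
    ec = filters.sobel(fc.astype(float)) > 0     # a proxy for "pixels surrounded
    return overlap_ratio(ec, es) * overlap_ratio(fc, fo)   # by edges"

def reward_value(img, threshold, f_train):
    """R per reconstructed equation (6); f_train is the boolean training
    foreground, so the training background is its complement."""
    fc = img > threshold
    return overlap_ratio(~fc, ~f_train) * overlap_ratio(fc, f_train)
```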
  • an action is selected using an ε-greedy strategy according to the reward matrix, and the reward matrix is updated according to: Q(s,a) ← Q(s,a) + α [ r(s') + γ max_a' Q(s',a') − Q(s,a) ]
  • Q(s,a) represents the element corresponding to state s and action a in the reward matrix
  • α represents the learning rate, which is a step-size parameter that controls the changing speed of Q(s,a)
  • r(s') represents reward of state s'
  • a' represents the next action to be taken
  • s' represents the state after taking the action a'
  • γ represents the discount rate. The discount rate reflects that what has been selected may not be fully trusted; a discount is applied so that there is still a chance to find the optimal action even if a bad choice has been made before.
  • in block 417, it is determined whether the reward matrix has converged. If so, operation in block 419 (FINISH?) determines whether the training is finished. In one embodiment, all images of the group of example images and the group of segmented training images may be used to train the reward matrix, so the training is not finished until the last image has been used. If it is determined in block 419 that the training is not finished, operation in block 423 (SELECT NEXT IMAGE) selects the next image. If it is determined in block 417 that the reward matrix has not converged, the operations in blocks 413 and 415 are repeated until the reward matrix converges.
  • the ideal convergence condition is that the reward matrix remains constant. However, this condition is almost impossible to satisfy, so a practical convergence condition needs to be defined.
  • to this end, a variable that reflects the change of the elements of the reward matrix, whose elements are numeric data, needs to be defined.
  • the mean square deviation is chosen as this variable to estimate the change of the reward matrix. If almost every element of the reward matrix changes little over the last ten cycles, the mean square deviation will be very small; if some elements change significantly, the mean square deviation will be correspondingly large. Thus, whether the reward matrix has converged can be determined from the mean square deviation.
  • the convergence condition may be that the mean square deviation of the average of the reward matrix over the last M cycles is less than a predetermined value, where M is a predetermined number.
  • M may be set to 10.
  • the predetermined value may be set to 0.005.
  • here, one cycle means taking one action according to the ε-greedy strategy.
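  • Putting blocks 405 to 423 together, a minimal sketch of the training loop; the state discretisation, the initial threshold, and the hyper-parameter defaults are assumptions, and "mean square deviation" is read as the variance of the last M per-cycle averages.

```python
import random
import numpy as np

ACTIONS = (+5, 0, -5)       # change the threshold by +5, 0, or -5 grey levels

def train_reward_matrix(Q, state_index, reward_of, eps=0.1, alpha=0.5,
                        gamma=0.8, M=10, tol=0.005, max_cycles=10_000):
    """Q is an (n_states x 3) array; state_index(t) maps a threshold to a
    discrete state index; reward_of(t) evaluates equation (6) for threshold t."""
    t = 128                                      # assumed initial threshold
    s = state_index(t)
    averages = []
    for _ in range(max_cycles):
        if random.random() < eps:                # epsilon-greedy: explore
            a = random.randrange(len(ACTIONS))
        else:                                    # exploit the current matrix
            a = int(np.argmax(Q[s]))
        t = int(np.clip(t + ACTIONS[a], 0, 255))
        s2 = state_index(t)
        # Q(s,a) <- Q(s,a) + alpha * [r(s') + gamma * max_a' Q(s',a') - Q(s,a)]
        Q[s, a] += alpha * (reward_of(t) + gamma * Q[s2].max() - Q[s, a])
        s = s2
        averages.append(Q.mean())
        last = np.asarray(averages[-M:])
        if last.size == M and np.mean((last - last.mean()) ** 2) < tol:
            break                                # reward matrix has converged
    return Q
```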
  • FIG. 10 shows a flowchart of an illustrative embodiment of a method 500 for image segmentation.
  • the illustrated method for image segmentation may include one or more operations as illustrated by blocks 501, 503, and/or 505.
  • block 501 PROVIDE AN IMAGE TO BE SEGMENTED
  • an image to be segmented is provided.
  • FIG. 11 shows a block diagram of an illustrative embodiment of a device 600 for training a reward matrix for image segmentation.
  • An element in the reward matrix represents a reward given for taking a corresponding action at a corresponding state.
  • the device 600 includes an example image receiving component 601, a segmented training image receiving component 603, a reinforcement learning component 605, an input component 607, and an output component 609, all communicatively coupled to each other via a BUS 621.
  • the example image receiving component 601 receives example images.
  • the segmented training image receiving component 603 receives segmented training images.
  • the reinforcement learning component 605 trains the reward matrix using received example images and corresponding segmented training images by a reinforcement learning method.
  • the input component 607 may facilitate the reception (e.g., input) of instructions from, for example, users.
  • the output component 609 outputs the trained matrix.
  • the reinforcement learning method used here may be the same as that of the method 400.
  • the reinforcement learning component 605 includes: a Sobel operator computing component 611, an OTSU computing component 613, a state computing component 615, a reward computing component 617, and an updating component 619.
  • the Sobel operator computing component 611 computes edges of objects in a selected example image using a Sobel operator.
  • the OTSU computing component 613 segments the selected example image using an OTSU method.
  • the state computing component 615 computes states of a selected image at pre-selected grey levels.
  • the reward computing component 617 computes rewards of the selected image at the pre-selected grey levels using equation (6).
  • the updating component 619 initializes a current threshold, computes a corresponding state of the selected example image, selects an action using an ε-greedy strategy according to the reward matrix, and updates the reward matrix according to: Q(s,a) ← Q(s,a) + α [ r(s') + γ max_a' Q(s',a') − Q(s,a) ]
  • the updating component 619 repeats the operations of selecting an action using the ε-greedy strategy and updating the reward matrix until the reward matrix converges.
  • FIG. 12 illustrates an example block diagram of a device 700 for image segmentation.
  • the device 700 includes: an ε-greedy strategy computing component 701 to compute an optimal threshold for an image to be segmented using the reward matrix trained by the reinforcement learning method; and a segmenting component 703 to segment the image using the computed optimal threshold.
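  • A sketch of how a device like 700 might apply the trained matrix: follow the greedy action from the reward matrix until the "do not change" action is preferred, then threshold the image with the result. The stopping rule and the starting threshold are assumptions; state_index is the same threshold-to-state mapping used in the training sketch.

```python
import numpy as np

ACTIONS = (+5, 0, -5)

def segment_with_reward_matrix(img, Q, state_index, max_steps=100):
    """Walk the threshold greedily using the trained reward matrix, then
    segment the image at the resulting optimal threshold."""
    t = 128                                      # assumed starting threshold
    for _ in range(max_steps):
        a = int(np.argmax(Q[state_index(t)]))
        if ACTIONS[a] == 0:                      # "no change" preferred: optimal
            break
        t = int(np.clip(t + ACTIONS[a], 0, 255))
    return img > t                               # binary segmentation result
```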
  • FIG. 13 illustrates an example block diagram of a computer system 800 for classifying images such as lung cancer cell images.
  • the computer system 800 includes a CPU 801, a memory 803, a storage component 805 having a reward matrix training program 807, an image segmentation program 809, an overlapped objects reconstruction program 811, and an image classification program 813 stored therein, a display 815, an output component 817, and an input component 819, connected together by a bus 821.
  • when the image classification program 813 is executed, the computer system 800 will be instructed to conduct the method 200.
  • the computer system 800 may be coupled to a database having a group of example lung cancer cell images and a corresponding group of segmented lung cancer cell images.
  • the computer system 800 may train a reward matrix using the method 400 and store the trained reward matrix in the storage component 805.
  • the computer system 800 may be coupled to a lung cancer cell image generator, which generates lung cancer cell images from tissues taken from a patient.
  • the computer system 800 may receive a lung cancer cell image from the lung cancer cell image generator, such as a microscopy camera, segment the image using the method 500, reconstruct overlapped cells in the segmented image using the method 100, and classify the lung cancer cell image using the method 200 to determine whether the patient has lung cancer and, if so, what kind of lung cancer he/she has.
  • if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
  • examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities).
  • a typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
  • any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality.
  • examples of operably couplable include, but are not limited to, physically mateable and/or physically interacting components, wirelessly interactable and/or wirelessly interacting components, and/or logically interacting and/or logically interactable components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method and device for reconstructing overlapped objects in an image are provided. The method for reconstructing overlapped objects includes the following steps: obtaining segment points having a positive two-order derivative on a contour of an overlapped object area; segmenting the contour of the overlapped object area using the segment points to obtain incomplete contours of the overlapped objects; and generating virtual points based on the incomplete contours, where the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.

Description

RECONSTRUCTION OF OVERLAPPED OBJECTS IN AN IMAGE
BACKGROUND
[0001] Distinguishing objects from a background is often referred to as image segmentation. For disease diagnostic applications, an accurate and automatic image segmentation method is needed. In conventional image segmentation methods, overlapped objects are usually not segmented from each other. For disease diagnostic applications, such as lung cancer cell image classification, this may cause problems such as low diagnostic accuracy and low classification training efficiency.
SUMMARY
[0002] In one embodiment, a method for reconstructing overlapped objects in an image is provided. The method includes: obtaining segment points having a positive two-order derivative on a contour of an overlapped object area in the image; segmenting the contour of the overlapped object area using the segment points to obtain incomplete contours of overlapped objects; and generating virtual points based on the incomplete contours. The virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
[0003] In one embodiment, an overlapped objects reconstructing device is provided. The device includes: a segment points obtaining component to obtain segment points having a positive two-order derivative on a contour of an overlapped object area of a segmented image; and a virtual points computing component to generate virtual points based on incomplete contours of overlapped objects obtained by segmenting the contour of the overlapped object area using the segment points, where the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
[0004] In one embodiment, a computer readable medium is provided. The computer readable medium has computer executable instructions stored thereon that, when executed, cause the computer to perform a method including: obtaining a first set of points on an edge of an overlapped object area; sampling the first set of points to obtain a second set of points; fitting the second set of points to obtain a third set of points; calculating a two-order derivative of at least some points of the third set of points;
obtaining segment points based on the two-order derivative calculation results; and generating virtual points based on incomplete contours of overlapped objects obtained by segmenting a contour of the overlapped object area using the segment points, where the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
[0005] In one embodiment of the present application, a method for training a reward matrix for image segmentation is provided. An element in the reward matrix represents a reward given for taking a corresponding action at a corresponding state. The method includes: providing a group of example images and a corresponding group of segmented training images; and using a reinforcement learning method to train the reward matrix with the group of example images and the group of segmented training images.
[0006] In one embodiment, a computer readable medium having a computer program stored therein is provided. When the computer program is executed by a computer, it will instruct the computer to conduct the method for training a reward matrix for image segmentation.
[0007] In one embodiment of the present application, a method for image segmentation is provided. The method includes: providing an image to be segmented; using an ε-greedy strategy and the reward matrix trained by the method for training a reward matrix to compute an optimal threshold to segment the image; and segmenting the image using the computed optimal threshold.
[0008] In one embodiment, a computer readable medium having a computer program stored therein is provided. When the computer program is executed by a computer, it will instruct the computer to conduct the method for image segmentation.
[0009] In one embodiment of the present application, a device for training a reward matrix for image segmentation is provided. An element in the reward matrix represents a reward given for taking a corresponding action at a corresponding state. The device trains the reward matrix using a group of example images and a corresponding group of segmented training images by a reinforcement learning method.
[0010] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1a shows a flowchart of an illustrative embodiment of a method for reconstructing overlapped objects in an image.
[0012] FIG. 1b shows a flowchart of an illustrative embodiment of a method for obtaining segment points described in the flowchart shown in FIG. 1a.
[0013] FIG. 1c shows a flowchart of an illustrative embodiment of a method for generating virtual points described in the flowchart shown in FIG. 1a.
[0014] FIG. 2 shows an illustration of an example image showing obtained contour points of an overlapped cell area.
[0015] FIG. 3a shows an illustration of an example image showing a first part of a contour segmented from the overlapped cell area shown in FIG. 2.
[0016] FIG. 3b shows an illustration of an example image showing a second part of a contour segmented from the overlapped cell area shown in FIG. 2.
[0017] FIG. 4 shows an example image illustrating how virtual points are obtained by a deBoor-Cox method.
[0018] FIGS. 5a-a to 5d-d show example images illustrating overlapped cell areas, corresponding reconstructed cell areas using a method of the present application, and corresponding reconstructed cell areas using a conventional watershed method.
[0019] FIG. 6 shows a flowchart of an illustrative embodiment of a method for classifying images.
[0020] FIG. 7 shows a block diagram of an illustrative embodiment of a device that reconstructs overlapped objects in an image.
[0021] FIG. 8 shows a flowchart of an illustrative embodiment of a method for training a reward matrix.
[0022] FIG. 9 shows a flowchart of an illustrative embodiment of a method for training a reward matrix by a reinforcement learning method.
[0023] FIG. 10 shows a flowchart of an illustrative embodiment of a method for segmenting an image.
[0024] FIG. 11 shows a block diagram of an illustrative embodiment of a device that trains a reward matrix for image segmentation.
[0025] FIG. 12 shows a block diagram of an illustrative embodiment of a device that segments an image using a reward matrix.
[0026] FIG. 13 shows a block diagram of an illustrative embodiment of a computer system that trains a reward matrix and classifies images using the trained reward matrix.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0027] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure.
[0028] In one embodiment, in the method for reconstructing overlapped objects, virtual points are generated based on at least partial contours of objects overlapped in the overlapped object area obtained by segmenting the contour of the overlapped object area using the segment points.
[0029] In one embodiment, in the method for reconstructing overlapped objects, obtaining segment points includes: obtaining a first set of points on an edge of the overlapped object area; sampling the first set of points to obtain a second set of points; fitting the second set of points to obtain a third set of points; calculating a two-order derivative of at least some points of the third set of points; and obtaining segment points based on the two-order derivative calculation results.
[0030] In one embodiment, in the method for reconstructing overlapped objects, the first set of points is obtained by an eight-connected chain code running clockwise on the edge of the overlapped object area.
[0031 ] In one embodiment, in the method for reconstructing overlapped objects, the second set of points is obtained by sampling the first set of points every N points, where N is a predetermined number.
[0032] In one embodiment, in the method for reconstructing overlapped objects, the third set of points is obtained by fitting the second set of points using a B-spline method.
[0033] In one embodiment, in the method for reconstructing overlapped objects, points closest to 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative are selected from the third set of points as segment points.
[0034] In one embodiment, in the method for reconstructing overlapped objects, points at 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, are computed as segment points.
[0035] In one embodiment, in the method for reconstructing overlapped objects, generating virtual points includes: calculating tangent lines for the segment points; and generating virtual points using a deBoor-Cox method based on tangent lines of the segment points.
[0036] In one embodiment, the method for reconstructing overlapped objects further includes fitting the virtual points using a B-spline method to obtain a fourth set of points.
[0037] In one embodiment, in the method for reconstructing overlapped objects, the image is a lung cancer cell image.
[0038] In one embodiment, in the overlapped objects reconstructing device, the segment points obtaining component includes: a contour points computing component to obtain a first set of points on an edge of the overlapped object area; a sampling component to sample the first set of points to obtain a second set of points; a fitting calculating component to fit the second set of points to obtain a third set of points; a two-order derivative calculating component to calculate a two-order derivative of at least some points in the third set of points; and a segment points calculating component to calculate segment points based on the two-order derivative calculation results of the two-order derivative calculating component.
[0039] In one embodiment, in the overlapped objects reconstructing device, the segment points calculating component selects points closest to 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, from the third set of points, as segment points. In one embodiment, the segment points calculating component calculates points at 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative as segment points.
[0040] In one embodiment, in the overlapped objects reconstructing device, the contour points computing component computes the first set of points by an eight-connected chain code running clockwise.
[0041] In one embodiment, in the overlapped objects reconstructing device, the sampling component samples the first set of points every N points to obtain the second set of points, where N is a predetermined number.
[0042] In one embodiment, in the overlapped objects reconstructing device, the fitting calculating component calculates the third set of points using a B-spline method based on the second set of points.
[0043] In one embodiment, in the overlapped objects reconstructing device, the virtual points computing component calculates tangent lines of the segment points, and generates virtual points using a deBoor-Cox method based on the calculated tangent lines.
[0044] In one embodiment, in the overlapped objects reconstructing device, the virtual points computing component further comprises a virtual points fitting component to fit the virtual points using a B-spline method to obtain a fourth set of points.
[0045] In one embodiment, in the method for training a reward matrix for image segmentation, in the reinforcement learning method, the state is defined as:

S = E * F

where S represents the state, E = |Ec ∩ Es| / |Ec ∪ Es|, and F = |Fc ∩ Fo| / |Fc ∪ Fo|, where Ec represents pixels surrounded by edges of objects in an image obtained by segmenting the image using a current threshold, Es represents pixels surrounded by edges of objects in the image calculated using a Sobel operator, Fc represents a foreground obtained by segmenting the image using the current threshold, and Fo represents a foreground obtained by segmenting the image using an OTSU method.
[0046] In one embodiment, in the reinforcement learning method, action is defined as changing a threshold.
[0047] In one embodiment, in the method for training a reward matrix for image segmentation, training the reward matrix includes: computing edges of objects in a selected example image using a Sobel operator; segmenting the selected example image using an OTSU method; computing states of the selected image at pre-selected grey levels; and computing rewards of the selected image at the pre-selected grey levels using the following equation:
R = (|Bc ∩ BT| / |Bc ∪ BT|) × (|Fc ∩ FT| / |Fc ∪ FT|)    (6)
where R represents the reward, Bc represents a background obtained by segmenting the selected example image using a current threshold, BT represents a background of a corresponding segmented training image, Fc represents a foreground obtained by segmenting the selected example image using the current threshold, and FT represents a foreground of the corresponding segmented training image. Training the reward matrix further includes: initializing the current threshold and computing a corresponding state; selecting an action using an ε-greedy strategy according to the reward matrix; and updating the reward matrix according to:
Q(s,a)— Q(s,a)+a[r(s)' +y max Q(s ',a)' -Q(s,a)J
a'
where Q(s,a) represents an element corresponds to state s and action a in the reward matrix, a represents learning rate which is a step-size parameter to control changing speed of Q(s,a), r(s') represents reward of state s', a' is the next action, s' is the next state after taking the action a', and γ represents discount rate. Training the reward matrix further includes repeating the operations of selecting an action using the ε-greedy strategy according to the reward matrix and updating the reward matrix until the reward matrix converges. In one embodiment, converge condition of the reward matrix is defined as mean square deviation of the average of the reward matrix in last M circles is less than a predetermined value, where M is a predetermined number. In one embodiment, converge condition of the reward matrix is defined as mean square deviation of the average of the reward matrix in M circles is less than the predetermined value.
[0048] In one embodiment, in the device for training a reward matrix for image segmentation, in the reinforcement learning method, state is defined as:

S = E * F

where S represents state, E = |Ec ∩ Es| / |Ec ∪ Es|, and F = |Fc ∩ Fo| / |Fc ∪ Fo|, where Ec represents pixels surrounded by edges of objects in an image obtained by segmenting the image using a current threshold, Es represents pixels surrounded by edges of objects in the image calculated using a Sobel operator, Fc represents a foreground obtained by segmenting the image using the current threshold, and Fo represents a foreground obtained by segmenting the image using an OTSU method.
[0049] In one embodiment, in the device for training a reward matrix for image segmentation, in the reinforcement learning method, action is defined as changing a threshold.
[0050] The device for training the reward matrix includes: a Sobel operator computing component to compute edges of objects in a selected example image using a Sobel operator; an OTSU computing component to segment the selected example image using an OTSU method; a state computing component to compute states of the selected image at pre-selected grey levels; and a reward computing component to compute rewards of the selected image at the pre-selected grey levels using the following equation:

R = (|Bc ∩ BT| / |Bc ∪ BT|) * (|Fc ∩ FT| / |Fc ∪ FT|)

where R represents reward, Bc represents a background obtained by segmenting the selected example image using a current threshold, BT represents a background of the corresponding segmented training image, Fc represents a foreground obtained by segmenting the selected example image using the current threshold, and FT represents a foreground of the corresponding segmented training image. The device for training the reward matrix further includes an updating component to initialize the current threshold, compute a corresponding state, select an action using an ε-greedy strategy according to the reward matrix, and update the reward matrix according to:

Q(s,a) ← Q(s,a) + α[r(s') + γ max_a' Q(s',a') − Q(s,a)]

where Q(s,a) represents the element corresponding to state s and action a in the reward matrix, α represents the learning rate, a step-size parameter that controls how fast Q(s,a) changes, r(s') represents the reward of state s', a' is the next action, s' is the next state after taking the action a', and γ represents the discount rate. The updating component repeats the operations of selecting an action using the ε-greedy strategy and updating the reward matrix until the reward matrix converges. In one embodiment, a convergence condition of the reward matrix is defined as the mean square deviation of the average of the reward matrix over the last M cycles being less than a predetermined value, where M is a predetermined number.
[0051 ] Referring to FIG. 1 a, a flowchart of an illustrative embodiment of a method 100 for reconstructing overlapped objects in an image is shown. In the illustrated embodiment, method 100, and other methods and processes described herein, set forth various functional blocks or actions that may be described as processing steps, functional operations, events and/or acts, etc., which may be performed by hardware, software, and/or firmware. With reference to method 100 as an example, those skilled in the art in light of the present disclosure will recognize that numerous alternatives to the functional blocks shown in FIG. 1 may be practiced in various implementations. For example, although method 100, as shown in FIG. 1 , depicts one particular order of blocks or actions, the order in which these blocks or actions are presented does not necessarily limit claimed subject matter to any particular order. Likewise, intervening actions not shown in FIG. 1 and/or additional actions not shown in FIG. 1 may be employed and/or some of the actions shown in FIG. 1 may be eliminated, without departing from the scope of claimed subject matter. Method 100 may include one or more of operations, functions, or actions as illustrated by blocks 101 and/or 103. The various blocks are not intended to be limiting to the described embodiments. For example, one skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments.
[0052] In block 101 (OBTAIN SEGMENT POINTS), segment points are obtained. For example, the segment points may be obtained on the contour of the overlapped object area. The segment points may be used to segment a contour of an overlapped object area in the image to obtain incomplete contours of the overlapped objects.
[0053] In block 103 (GENERATE VIRTUAL POINTS), virtual points are generated. For example, virtual points may be generated based on the incomplete contours of the overlapped objects. The virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects. In one embodiment, the virtual points may be generated based on the segment points.
[0054] Referring to FIG. 1 b, a flowchart of an illustrative embodiment of a method for obtaining segment points (block 101 of FIG. 1 a) is shown. The illustrated method for obtaining segment points may include one or more of operations as illustrated by blocks 105, 107, 109, 111 , 113, and/or 115.
[0055] In block 105 (EXTRACT OVERLAPPED OBJECT AREA), an overlapped object area is extracted. For example, the overlapped object area may be extracted from a segmented image such as a binary image.
[0056] In block 107 (OBTAIN A FIRST SET OF POINTS ON THE EDGE OF THE OVERLAPPED OBJECT AREA), a first set of points is obtained on an edge of the overlapped object area. In one embodiment, the first set of points is obtained by an eight-connected chain code running clockwise. Those of ordinary skill in the art will appreciate that other methods can be used to obtain the first set of points, for example, a two pass connected component labeling method can also be used to obtain the first set of points.
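By way of illustration only, the following minimal Python sketch traces such a clockwise eight-connected contour on a binary mask; the choice of starting pixel, the neighbour-scan bookkeeping, and the simple return-to-start stopping rule are assumptions of this sketch rather than requirements of block 107.

```python
import numpy as np

# Eight-connected neighbour offsets (dy, dx) in clockwise screen order:
# E, SE, S, SW, W, NW, N, NE (y grows downward in image coordinates).
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def trace_contour(mask):
    """Trace the outer contour of a non-empty binary mask clockwise and
    return the ordered (y, x) boundary pixels -- the 'first set of points'."""
    assert mask.any(), "mask must contain at least one foreground pixel"
    ys, xs = np.nonzero(mask)
    start = (int(ys.min()), int(xs[ys == ys.min()].min()))  # topmost-leftmost pixel
    contour, cur = [start], start
    d = 5  # first scan starts at NW, just past the backtrack direction (W)
    while True:
        for k in range(8):
            nd = (d + k) % 8
            ny, nx = cur[0] + DIRS[nd][0], cur[1] + DIRS[nd][1]
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                cur = (ny, nx)
                d = (nd + 5) % 8  # next scan starts just past the new backtrack
                break
        else:
            return contour  # isolated single pixel: no foreground neighbour
        if cur == start:
            return contour  # simple return-to-start stop (not Jacob's criterion)
        contour.append(cur)
```

Sampling the result every N points (block 109, below) then reduces to first_set[::N] in this notation.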
[0057] Taking lung cancer cell image analysis as an example, FIG. 2 shows an illustration of an example image showing a first set of points obtained, by an eight-connected chain code running clockwise, on an edge of an overlapped cell area in a segmented lung cancer cell image. A lung cancer cell image may be used for lung cancer diagnosis; this does not mean that such an image must contain a lung cancer cell. For example, a lung cancer cell image may contain only healthy cells.
[0058] Referring again to the illustrated method of FIG. 1b, in block 109 (SAMPLE THE FIRST SET OF POINTS TO OBTAIN A SECOND SET OF POINTS), the first set of points is sampled to obtain a second set of points. In one embodiment, the second set of points is obtained by sampling the first set of points every N points, where N is a predetermined number. In one embodiment, N can be determined according to the density of the first set of points, the accuracy requirement, the computation cost, and the like. In one embodiment, N may be selected from the range of 5-10. Those of ordinary skill in the art will appreciate that other sampling methods may be used to sample the first set of points to obtain the second set of points.
[0059] In block 111 (FIT THE SECOND SET OF POINTS TO OBTAIN A THIRD SET OF POINTS), the second set of points is fitted to obtain a third set of points. A smoother contour may be obtained by connecting the third set of points than by connecting the first set of points directly. In other words, by sampling (block 109) and fitting (block 111), a smoother contour may be obtained. In one embodiment, a cubic B-spline method may be used to fit the second set of points to obtain the third set of points. Those of ordinary skill in the art will appreciate that other methods can be used to fit the second set of points. For example, Lagrange interpolation, Newton interpolation, the Newton iterative algorithm, the bisection method, etc. may be used to fit the second set of points to obtain the third set of points. In one embodiment, four points are used in the fitting calculation by the cubic B-spline method, as shown in equation (1):

Bj = [ (1 − μ)³ · Ai + (3μ³ − 6μ² + 4) · Ai+1 + (−3μ³ + 3μ² + 3μ + 1) · Ai+2 + μ³ · Ai+3 ] / 6        equation (1)

where Ai represents a point (i.e., the coordinates of the point) in the second set of points, Bj represents a point in the third set of points, and μ (0 < μ < 1) can be calculated using the following equation (2):

X = Asi + μ * (Asi+3 − Asi)        equation (2)

where X is the serial number of the point to be calculated in the third set of points, Asi represents the serial number of the point Ai in the third set of points, and Asi < X < Asi+3. For example, if the second set of points is obtained by sampling the first set of points every 5 points, if Ai is the 6th point in the third set of points, and if the 3rd point from Ai is to be calculated, then Asi = 6, Asi+3 = 21, X = 9, and then

9 = 6 + μ * (21 − 6)

μ = 0.2.
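For concreteness, a small Python sketch of equations (1) and (2) follows, under the assumption that equation (1) is the standard uniform cubic B-spline blend of four consecutive control points (the original equation image is not reproduced in this text); the function name is illustrative.

```python
import numpy as np

def cubic_bspline(a, mu):
    """Blend four consecutive control points a[0..3] at parameter mu
    (0 < mu < 1) with the uniform cubic B-spline basis of equation (1)."""
    b = np.array([(1 - mu) ** 3,
                  3 * mu ** 3 - 6 * mu ** 2 + 4,
                  -3 * mu ** 3 + 3 * mu ** 2 + 3 * mu + 1,
                  mu ** 3]) / 6.0  # the four basis weights sum to 1
    return b @ np.asarray(a, dtype=float)

# Equation (2) rearranged for mu, using the worked numbers from the text:
mu = (9 - 6) / (21 - 6)
print(mu)                                                    # 0.2
print(cubic_bspline([(0, 0), (1, 2), (2, 2), (3, 0)], mu))   # a smoothed contour point
```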
[0060] In block 113 (CALCULATE TWO-ORDER DERIVATIVE OF EACH POINT IN THE THIRD SET OF POINTS), a two-order derivative of each point of the third set of points may be calculated. In one embodiment, two-order derivatives of points sampled from the third set of points may be calculated. For example, the third set of points is sampled every N' points, where N' is a predetermined number.
[0061] In one embodiment, the two-order derivative calculation may be conducted in a discrete manner. For example, a two-order derivative of a point may be calculated using the coordinates of the point and its neighboring points. Suppose the i-th point is represented as (xi, yi), where xi represents the x coordinate of the i-th point, and yi represents the y coordinate of the i-th point. A two-order derivative of the i-th point can be calculated as follows. First, a one-order derivative F1(i) of the i-th point may be calculated according to the following equation (3):

F1(i) = [ (yi+1 − yi) / (xi+1 − xi) + (yi − yi−1) / (xi − xi−1) ] / 2        equation (3)

Then, a two-order derivative F2(i) of the i-th point may be calculated according to the following equation (4):

F2(i) = [ (F1(i+1) − F1(i)) / (xi+1 − xi) + (F1(i) − F1(i−1)) / (xi − xi−1) ] / 2        equation (4)
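A direct Python transcription of equations (3) and (4) is set out below; note that the difference quotients assume neighbouring x coordinates differ, so code applied to real contours would need to guard against vertical runs (an assumption of this sketch).

```python
def one_order(pts, i):
    """Equation (3): average of the forward and backward difference quotients
    at the i-th point of pts, a list of (x, y) tuples."""
    (xp, yp), (x, y), (xn, yn) = pts[i - 1], pts[i], pts[i + 1]
    return ((yn - y) / (xn - x) + (y - yp) / (x - xp)) / 2.0

def two_order(pts, i):
    """Equation (4): the same averaging applied to the one-order derivative;
    valid for 2 <= i <= len(pts) - 3."""
    xp, x, xn = pts[i - 1][0], pts[i][0], pts[i + 1][0]
    f_prev, f, f_next = one_order(pts, i - 1), one_order(pts, i), one_order(pts, i + 1)
    return ((f_next - f) / (xn - x) + (f - f_prev) / (x - xp)) / 2.0
```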
[0062] In block 115 (OBTAIN SEGMENT POINTS BASED ON THE TWO-ORDER DERIVATIVE CALCULATION RESULTS), segment points are obtained based on the two-order derivative calculation results in block 113. In one embodiment, points closest to 1/3 and 2/3 along lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, may be selected from the third set of points as segment points. In one embodiment, points at 1/3 and 2/3 along lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, may be calculated as segment points. Those of ordinary skill in the art will appreciate that other methods can be used to obtain segment points. For example, the first and the last points along lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, may be selected from the third set of points as segment points.
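A sketch of block 115 follows, under two simplifying assumptions: the contour points are roughly evenly spaced, so index position stands in for arc length, and a "corresponding sequence" is a maximal run of consecutive points with a positive two-order derivative (runs wrapping past the end of the closed contour are ignored here).

```python
def pick_segment_points(points, second_derivs):
    """For each maximal run of points with a positive two-order derivative,
    pick the points nearest 1/3 and 2/3 of the way along the run."""
    runs, run = [], []
    for p, d2 in zip(points, second_derivs):
        if d2 > 0:
            run.append(p)
        else:
            if len(run) >= 3:
                runs.append(run)
            run = []
    if len(run) >= 3:
        runs.append(run)
    picks = []
    for r in runs:
        picks.append(r[len(r) // 3])        # point nearest 1/3 along the run
        picks.append(r[2 * len(r) // 3])    # point nearest 2/3 along the run
    return picks
```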
[0063] FIG. 3a shows an illustration of an example image showing a first part of a contour segmented from the contour of the overlapped cell area of FIG. 2 using segment points which are closer to the first part of contour, and FIG. 3b shows an illustration of an example image showing a second part of a contour segmented from the contour of the overlapped cell area of FIG. 2 using segment points which are closer to the second part of contour. In FIG. 3a and FIG. 3b, shape "o" represents the second set of points.
[0064] FIG. 1 c shows a flowchart of an illustrative embodiment of a method (block 103 of FIG. 1 a) for generating virtual points. The illustrated method for generating virtual points may include one or more of operations as illustrated by blocks 117, and/or 119. In block 117 (GENERATE VIRTUAL POINTS), virtual points to reconstruct contours of the overlapped objects are generated. In one embodiment, the virtual points may be generated based on incomplete contours of overlapped objects. The incomplete contours of overlapped objects may be obtained by segmenting the contour of the overlapped object area using the segment points obtained in the operation of block 115. Then the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects. In one embodiment, tangent lines of the segment points are computed, and virtual points are generated using the tangent lines by a deBoor-Cox method.
[0065] FIG. 4 shows an example image illustrating how the virtual points may be generated using the deBoor-Cox method. Tangent lines S1T and S2T of segment points S1 and S2, respectively, are computed, where T is the intersection point of the tangent lines S1T and S2T. Then the middle points of the tangent lines S1T and S2T are calculated, which middle points are indicated as P1 and P2, respectively. After that, the middle points of the lines S1P1, P1P2, and P2S2 are calculated, which middle points are indicated as P4, P3, and P5, respectively. In one embodiment, the middle points calculated in the last round are taken as the virtual points.
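Following FIG. 4's construction literally, a midpoint-subdivision sketch in Python is given below; the labels mirror the figure, one subdivision round is shown, and the assignment of P4 to the first midpoint is an assumption.

```python
def midpoint(p, q):
    """Middle point of the segment from p to q, both (x, y) tuples."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def virtual_points(s1, t, s2):
    """One deBoor-Cox subdivision round from segment points s1, s2 and the
    intersection point t of their tangent lines, as illustrated in FIG. 4."""
    p1 = midpoint(s1, t)    # middle of tangent segment S1T
    p2 = midpoint(s2, t)    # middle of tangent segment S2T
    p4 = midpoint(s1, p1)   # middle of S1P1
    p3 = midpoint(p1, p2)   # middle of P1P2
    p5 = midpoint(p2, s2)   # middle of P2S2
    return [p4, p3, p5]     # midpoints of the last round become the virtual points
```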
[0066] Referring again to the illustrated method of FIG. 1 c, in block 119 (FIT THE VIRTUAL POINTS), the virtual points are fitted to obtain a fourth set of points. In one embodiment, a cubic B-spline method may be used to fit the virtual points to obtain the fourth set of points. A contour of a segmented cell can be obtained by connecting the third and the fourth sets of points in sequence. The contour formed by connecting the third and the fourth sets of points is smoother than that formed by connecting the virtual points and the third set of points.
[0067] FIG. 5a-a, FIG. 5b-a, and FIG. 5c-a show example images illustrating three example overlapped cell areas. FIG. 5a-b and FIG. 5a-c show example images illustrating two corresponding cell areas reconstructed from the original overlapped cell area shown in FIG. 5a-a using the method 100, and FIG. 5a-d illustrates cell areas reconstructed from the original overlapped cell area using a conventional watershed algorithm. FIG. 5b-b and FIG. 5b-c illustrate two corresponding cell areas reconstructed from the original overlapped cell area shown in FIG. 5b-a using the method 100, and FIG. 5b-d illustrates cell areas reconstructed from the original overlapped cell area using the conventional watershed algorithm. FIG. 5c-b and FIG. 5c-c illustrate two corresponding cell areas reconstructed from the original overlapped cell area of FIG. 5c-a using the method 100, and FIG. 5c-d illustrates cell areas reconstructed from the original overlapped cell area using the conventional watershed algorithm. As can be seen, the method 100 is better than the conventional watershed algorithm in reconstructing overlapped objects, especially in reconstructing overlapped cells.
[0068] Those of ordinary skill in the art will appreciate that the method 100 can be used to reconstruct overlapped objects other than overlapped cells in an image.
[0069] Conventional lung cancer image classification methods, such as the method disclosed in Multi-Class Multi-Instance Learning for Lung Cancer Image Classification Based on Bag Feature Selection by Liang Zhu et al., 978-0-7695-3305-6/08 IEEE, do not take overlapped lung cancer cell areas into consideration in the classification of the images. Accordingly, more example images are needed to train classifiers, and the accuracy and efficiency of classification are relatively low. For example, if a training image contains only one lung cancer cell which overlaps with a healthy cell, this image will be deemed an image having no lung cancer cell. Therefore, more images having isolated lung cancer cells of this type are needed to train the classifiers. In another example, if an image to be classified contains only one lung cancer cell which overlaps with a healthy cell or another lung cancer cell, the overlapped cell will be ignored, the image will be classified as an image having no lung cancer cell, and thus a wrong diagnostic result will be given.
[0070] Reconstructing overlapped lung cancer cells could substantially improve the efficiency and accuracy of training classifiers and/or classifying lung cancer cell images. Those of ordinary skill in the art will appreciate that the method 100 can be used not only in classifying lung cancer cell images, but also in classifying other images in which there are overlapped object areas.
[0071 ] Referring to FIG. 6, a flowchart of an illustrative embodiment of a method 200 for classifying images, such as lung cancer cell images, having an overlapped object area therein, is shown. The illustrated method for classifying images may include one or more of operations as illustrated by blocks 201 , 203, and/or 205. In block 201 (SEGMENT AN IMAGE), an image to be classified is segmented to obtain multiple object areas, such as cell areas in the case of lung cancer cell image.
[0072] In block 203 (RECONSTRUCT OVERLAPPED OBJECT AREA(S)), overlapped object area(s) are reconstructed. For example, the overlapped object areas may be reconstructed using the method 100.
[0073] In block 205 (CLASSIFY THE IMAGE), the image is classified using the segmented object areas and the reconstructed object areas. Since overlapped object areas may be used to classify images, compared with conventional image classifying methods in which overlapped object areas are not used to classify images, efficiency and accuracy of image classifying are improved.
[0074] Referring to FIG. 7, a block diagram of an illustrative embodiment of a device 300 that reconstructs overlapped objects, such as lung cancer cells, in an image is shown. The device 300 includes a segment points obtaining component 301 and a virtual points computing component 303. The segment points obtaining component 301 computes segment points. The segment points may be used to segment a contour of an overlapped object area, such as an overlapped cell area in the case of lung cancer cell image, to obtain multiple incomplete contours of overlapped objects. The virtual points computing component 303 computes virtual points based on the incomplete contours of the overlapped objects. The virtual points and the incomplete contours may be used to reconstruct contours of the overlapped objects.
[0075] In one embodiment, the segment points obtaining component 301 includes a contour points computing component 305, a sampling component 307, a fitting calculating component 309, a two-order derivative calculating component 311 , and a segment points calculating component 313.
[0076] The contour points computing component 305 obtains a first set of points on an edge of the overlapped object area. In one embodiment, the contour points computing component 305 obtains the first set of points by an eight-connected chain code running clockwise. Those of ordinary skill in the art will appreciate that the contour points computing component 305 may use other methods to obtain the first set of points, for example, a two pass connected component labeling method.
[0077] The sampling component 307 samples the first set of points to obtain a second set of points. In one embodiment, the sampling component 307 samples the first set of points every N points, where N is a predetermined number, and N may be determined based on accuracy requirement, computation speed etc.
[0078] The fitting calculating component 309 fits the second set of points to obtain a third set of points. In one embodiment, the fitting calculating component 309 fits the second set of points using a cubic B-spline method. Those of ordinary skill in the art will appreciate that the fitting calculating component 309 may fit the second set of points to obtain the third set of points using other methods such as those mentioned above.
[0079] The two-order derivative calculating component 311 calculates two-order derivatives of at least some points of the third set of points. In one embodiment, the two-order derivative calculating component 311 may calculate a two-order derivative of each point of the third set of points. In one embodiment, the two-order derivative calculating component 311 may calculate two-order derivatives of points sampled from the third set of points. In one embodiment, the third set of points is sampled every N' points, where N' may be a predetermined number. The segment points calculating component 313 obtains segment points based on the two-order derivative calculation results of the two-order derivative calculating component 311. In one embodiment, the segment points calculating component 313 selects, from the third set of points, points closest to 1/3 and 2/3 along the length of a line formed by connecting a sequence of points having a positive two-order derivative. In one embodiment, the segment points calculating component 313 calculates points at 1/3 and 2/3 along the length of a line formed by connecting a sequence of points having a positive two-order derivative as segment points. Those of ordinary skill in the art will appreciate that the segment points calculating component 313 may use other methods to calculate segment points; the points at 1/3 and 2/3 along the length of a line are just examples.
[0080] In one embodiment, the virtual points computing component 303 computes virtual points from the segment points by a deBoor-Cox method. Those of ordinary skill in the art will appreciate that the virtual points computing component 303 may generate virtual points using other methods. The virtual points computing component 303 may include an optional virtual points fitting component 315, which fits the generated virtual points to obtain a fourth set of points. A smoother contour may be obtained by connecting the fourth set of points and the third set of points in sequence, compared with a contour formed by connecting the virtual points and the third set of points directly. In one embodiment, the virtual points computing component 303 may share the fitting calculating component 309 to fit the generated virtual points.
[0081 ] The device 300 further includes an image receiving component 317 which receives images having overlapped object areas such as lung cancer images having overlapped cell areas. The device 300 further includes a display 319 which displays information and data to facilitate the execution of the method 100 described above to, for example, users of the device 300. The device 300 further includes an input component 321 which facilitates the reception (e.g., input) of instructions from, for example, users. The device 300 further includes an output component 323 which outputs calculation results of the device 300. All the components of the device 300 may communicate with each other via a communication BUS 325.
[0082] FIG. 8 shows a flow chart of an illustrative embodiment of a method 400 for training a reward matrix for image segmentation. The illustrated method for training the reward matrix may include one or more of operations as illustrated by blocks 401 and/or 403. An element in the reward matrix represents a reward given for taking a corresponding action at a corresponding state. Table 1 illustrates an example of a reward matrix, in which each element is initialized as 0, S1, S2, ..., S5 represent corresponding states, and "+5", "0", "-5" represent actions of changing the threshold by +5, 0, and -5, respectively. In block 401 (PROVIDE A GROUP OF EXAMPLE IMAGES AND A CORRESPONDING GROUP OF SEGMENTED TRAINING IMAGES), a group of example images and a corresponding group of segmented training images are provided.
[0083] In block 403 (TRAIN A REWARD MATRIX BY REINFORCEMENT LEARNING METHOD), a reinforcement learning method is used to train the reward matrix with the group of example images and the group of segmented training images. In one embodiment, a segmented training image may be obtained by segmenting a corresponding image of the group of example images in a relatively reliable manner, for example, manually.
        +5    0    -5
S1       0    0     0
S2       0    0     0
S3       0    0     0
S4       0    0     0
S5       0    0     0

Table 1
[0084] In one embodiment, before training, the reward matrix is initialized first. In one embodiment, the reward matrix is initialized by setting all elements to 0.
[0085] In one embodiment, in the reinforcement learning method, state is defined as:

S = E * F        equation (5)

[0086] where S represents state, E = |Ec ∩ Es| / |Ec ∪ Es|, and F = |Fc ∩ Fo| / |Fc ∪ Fo|, where Ec represents pixels surrounded by edges obtained by segmenting an image using a current threshold, Es represents pixels surrounded by edges calculated using a Sobel operator, Fc represents a foreground obtained by segmenting the image using the current threshold, and Fo represents a foreground obtained by segmenting the image using an OTSU method. In one embodiment, action is defined as changing the threshold.
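Reading E and F as the intersection-over-union ratios reconstructed above, the state of equation (5) might be computed from boolean pixel masks as in the following Python sketch; binning the continuous value into the discrete states S1...S5 of Table 1 is an additional assumption of this sketch.

```python
import numpy as np

def iou(a, b):
    """|a ∩ b| / |a ∪ b| for two boolean pixel masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def state_value(ec, es, fc, fo):
    """Equation (5): S = E * F."""
    return iou(ec, es) * iou(fc, fo)

def state_index(s, n_states=5):
    """Bin S in [0, 1] into one of n_states discrete states (assumed rule)."""
    return min(int(s * n_states), n_states - 1)
```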
[0087] FIG. 9 shows a flowchart of an illustrative embodiment of a method (block 403 of FIG. 8) for training a reward matrix using example images and segmented training images. The illustrated method for training reward matrix may include one or more of operations as illustrated by blocks 405, 407, 409, 411 , 413, 415, 417, 419, 421 , and/or
423. In block 405 (COMPUTE EDGES OF OBJECT AREAS), edges of object areas in a selected example image are computed using the Sobel operator.
[0088] In block 407 (SEGMENT THE SELECTED EXAMPLE IMAGE), the selected example image is segmented using the OTSU method.
[0089] In block 409 (COMPUTE STATES OF THE SELECTED IMAGE OF PRE-SELECTED GREY LEVELS), states of the selected image at pre-selected grey levels are computed. In one embodiment, there are 256 grey levels in total. Accordingly, computing states of an image at each grey level may be computation intensive. To reduce computation intensity, states of an image at grey levels pre-selected according to a certain rule may be calculated for use. For example, every 5th grey level, i.e., 0, 5, 10, 15, and so on, may be selected.
[0090] In one embodiment, a reward of an image at a grey level is calculated according to the following equation (6):

R = (|Bc ∩ BT| / |Bc ∪ BT|) * (|Fc ∩ FT| / |Fc ∪ FT|)        equation (6)

where R represents reward, Bc represents a background obtained by segmenting a selected example image using a current threshold, BT represents a background of a corresponding segmented training image, Fc represents a foreground obtained by segmenting the selected example image using the current threshold, and FT represents a foreground of the corresponding segmented training image.
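Under the same reconstruction, equation (6) compares the current segmentation against the manually segmented training image; the product form shown in the sketch below mirrors equation (5) and is an assumption, not a confirmed reading of the original equation image.

```python
import numpy as np

def iou(a, b):
    """|a ∩ b| / |a ∪ b| for two boolean pixel masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def reward(bc, bt, fc, ft):
    """Equation (6) as reconstructed above: background agreement with the
    training image multiplied by foreground agreement."""
    return iou(bc, bt) * iou(fc, ft)
```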
[0091 ] Referring again to the illustrated method for training reward matrix of FIG. 9, in block 411 (INITIALIZE THE CURRENT THRESHOLD AND COMPUTE A
CORRESPONDING STATE), initialize the current threshold and compute a
corresponding state.
[0092] In block 413 (SELECT AN ACTION USING ε-GREEDY STRATEGY ACCORDING TO THE REWARD MATRIX), an action is selected using an ε-greedy strategy according to the reward matrix.
[0093] In block 415 (UPDATE THE REWARD MATRIX), the reward matrix is updated according to:

Q(s,a) ← Q(s,a) + α[r(s') + γ max_a' Q(s',a') − Q(s,a)]

where Q(s,a) represents the element corresponding to state s and action a in the reward matrix, α represents the learning rate, a step-size parameter that controls how fast Q(s,a) changes, r(s') represents the reward of state s', a' represents the next action to be taken, s' represents the state after taking the action a', and γ represents the discount rate. The discount rate is a parameter reflecting that what has been learned so far may not be fully trusted; giving it a discount leaves a chance to find the optimal action even if a bad choice has been made before.
[0094] In block 417 (CONVERGED?), it is determined whether the reward matrix is converged. If it is determined that the reward matrix is converged, then operation in block 419 (FINISH?) determines if the training is finished. In one embodiment, all images of the group of example images and the group of segmented training images may be used to train the reward matrix. Therefore, the training will not be finished until the last image is used to train the reward matrix. So if it is determined that the training is not finished in block 419, operation in block 423 (SELECT NEXT IMAGE) then selects the next image. If it is determined that the reward matrix is not converged in block 417, then the operations in blocks 413 and 415 are repeated until the reward matrix is converged.
[0095] Ideally, the convergence condition would be that the reward matrix remains constant. However, it is almost impossible to satisfy this condition, so a practical convergence condition needs to be defined. Before defining it, a variable that reflects the change of the elements in the reward matrix, whose elements are numeric data, needs to be chosen. In one embodiment, the mean square deviation is chosen as the variable to estimate the change of the reward matrix. If there is little change in almost every element of the reward matrix over the last ten cycles, the mean square deviation will be very small. However, if there are significant changes in some elements, the mean square deviation will be correspondingly large. So, whether the reward matrix has converged can be determined according to the mean square deviation. In one embodiment, the convergence condition may be that the mean square deviation of the average of the reward matrix over the last M cycles is less than a predetermined value, where M is a predetermined number. In one embodiment, M may be set to 10, and the predetermined value may be set to 0.005. Here, one cycle means taking an action according to the ε-greedy strategy.
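Putting blocks 411 through 417 together, a minimal tabular Python sketch follows; the action set and the values M = 10 and 0.005 follow the text, while ε, α, γ, the array shapes, and the cycle bookkeeping are illustrative assumptions.

```python
import numpy as np

ACTIONS = [+5, 0, -5]            # change the threshold by this amount
rng = np.random.default_rng(0)   # seeded for reproducibility

def select_action(Q, s, eps=0.1):
    """ε-greedy: explore with probability eps, otherwise exploit Q."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[s]))

def update(Q, s, a, r_next, s_next, alpha=0.1, gamma=0.9):
    """Q(s,a) <- Q(s,a) + alpha * [r(s') + gamma * max_a' Q(s',a') - Q(s,a)]."""
    Q[s, a] += alpha * (r_next + gamma * Q[s_next].max() - Q[s, a])

def converged(means, M=10, tol=0.005):
    """Mean square deviation of the matrix average over the last M cycles
    below tol; 'means' collects Q.mean() once per cycle."""
    if len(means) < M:
        return False
    last = np.asarray(means[-M:])
    return float(((last - last.mean()) ** 2).mean()) < tol
```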
[0096] FIG. 10 shows a flow chart of an illustrative embodiment of a method 500 for image segmentation. The illustrated method for image segmentation may include one or more of operations as illustrated by blocks 501 , 503, and/or 505. In block 501 (PROVIDE AN IMAGE TO BE SEGMENTED), an image to be segmented is provided.
[0097] In block 503 (COMPUTE AN OPTIMAL THRESHOLD), use an ε-greedy strategy and the reward matrix trained using the method 400, to compute an optimal threshold to segment the image.
[0098] In block 505 (SEGMENT THE IMAGE), segment the image using the computed optimal threshold.
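One plausible reading of block 503 is sketched below: start from an assumed initial threshold and follow the trained reward matrix greedily until the "0" action (keep the threshold) is preferred. The helper state_of, which maps a threshold to the discrete state index of the image segmented at that threshold, is an assumption of this sketch rather than part of the disclosure.

```python
import numpy as np

ACTIONS = [+5, 0, -5]  # same action set as in training

def optimal_threshold(Q, state_of, t=128, max_steps=256):
    """Greedy walk over thresholds using the trained reward matrix Q;
    state_of(t) is an assumed helper returning the state index at threshold t."""
    for _ in range(max_steps):                # bound the walk to avoid cycling
        a = int(np.argmax(Q[state_of(t)]))    # greedy action, i.e., ε = 0
        if ACTIONS[a] == 0:
            break                             # Q prefers keeping the threshold
        t = min(255, max(0, t + ACTIONS[a]))  # clamp to valid grey levels
    return t
```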
[0099] FIG. 11 shows a block diagram of an illustrative embodiment of a device 600 for training a reward matrix for image segmentation. An element in the reward matrix represents a reward given for taking a corresponding action at a corresponding state. The device 600 includes an example image receiving component 601 , a segmented training image receiving component 603, a reinforcement learning component 605, an input component 607, and an output component 609, all communicatively coupled to each other via a BUS 621.
[00100] The example image receiving component 601 receives example images. The segmented training image receiving component 603 receives segmented training images. The reinforcement learning component 605 trains the reward matrix using received example images and corresponding segmented training images by a
reinforcement learning method. The input component 607 may facilitate the reception (e.g., input) of instructions from, for example, users. The output component 609 outputs the trained matrix.
[00101] In one embodiment, the reinforcement learning method used herein may be the same as that of the method 400.
[00102] The reinforcement learning component 605 includes: a Sobel operator computing component 611 , an OTSU computing component 613, a state computing component 615, a reward computing component 617, and an updating component 619.
[00103] The Sobel operator computing component 611 computes edges of objects in a selected example image using a Sobel operator.
[00104] The OTSU computing component 613 segments the selected example image using an OTSU method.
[00105] The state computing component 615 computes states of a selected image of pre-selected grey levels.
[00106] The reward computing component 617 computes rewards of the selected image of the pre-selected grey levels using the equation (6).
[00107] The updating component 619 initializes a current threshold, computes a corresponding state of the selected example image, selects an action using an ε-greedy strategy according to the reward matrix, and updates the reward matrix according to:

Q(s,a) ← Q(s,a) + α[r(s') + γ max_a' Q(s',a') − Q(s,a)]
[00108] The updating component 619 repeats the operations of selecting an action using the ε-greedy strategy and updating the reward matrix until the reward matrix converges.
[00109] FIG. 12 illustrates an example block diagram of a device 700 for image segmentation. The device 700 includes: an ε-greedy strategy computing component 701 to compute an optimal threshold for an image to be segmented using the reward matrix trained by reinforcement learning method; and a segmenting component 703 to segment the image using the computed optimal threshold.
[001 10] FIG. 13 illustrates an example block diagram of a computer system 800 for classifying images such as lung cancer cell images. The computer system 800 includes a CPU 801 , a memory 803, a storage component 805 having a reward matrix training program 807, an image segmentation program 809, an overlapped objects reconstruction program 811 , and an image classification program 813 stored therein, a display 815, an output component 817, and an input component 819 connected together by a bus 821.
[001 1 1 ] When the reward matrix training program 807 is executed, the computer system 800 will be instructed to conduct the method 400. When the image segmentation program 809 is executed, the computer system 800 will be instructed to conduct the method 500. When the overlapped objects reconstruction program 811 is executed, the computer system 800 will be instructed to conduct the method 100.
When the image classification program 813 is executed, the computer system 800 will be instructed to conduct the method 200.
[001 12] In one embodiment, the computer system 800 may be coupled to a database having a group of example lung cancer cell images and a corresponding group of segmented lung cancer cell images. The computer system 800 may train a reward matrix using the method 400, and stores the trained reward matrix in the storage component 805.
[00113] In one embodiment, the computer system 800 may be coupled to a lung cancer cell image generator, such as a microscopy camera, which generates lung cancer cell images from tissues taken from a patient. The computer system 800 may receive a lung cancer cell image from the lung cancer cell image generator, segment the image using the method 500, reconstruct overlapped cells in the segmented image using the method 100, and classify the lung cancer cell image using the method 200 to determine whether the patient has lung cancer and, if so, what kind of lung cancer he/she has.
[001 14] Those of ordinary skill in the art will appreciate that the methods and components of the present application can be used not only in lung cancer diagnostic applications, but also in diagnosis of all kinds of cancers and other diseases. In addition, they can also be used in classification of other kinds of images.
[001 15] The foregoing description of the illustrated embodiments of the present invention is by way of example only, and other variations and modifications of the above-described embodiments and methods are possible in light of the foregoing teaching. Further, components of this invention may be implemented using a programmed general purpose digital computer, using application specific integrated circuits, or using a network of interconnected conventional components and circuits. Connections may be wired, wireless, modem, etc. The embodiments described herein are not intended to be exhaustive or limiting. The present invention is limited only by the following claims.
[001 16] There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[001 17] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
[001 18] Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
[001 19] The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably couplable", to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
[00120] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[00121 ] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to disclosures containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B."
[00122] While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims

We claim:
1. A method for reconstructing overlapped objects in an image, the method comprising: obtaining segment points having a positive two-order derivative on a contour of an overlapped object area in the image;
segmenting the contour of the overlapped object area using the segment points to obtain incomplete contours of overlapped objects; and
generating virtual points based on the incomplete contours, where the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
2. The method of claim 1 , wherein obtaining segment points comprises:
obtaining a first set of points on an edge of the overlapped object area;
sampling the first set of points to obtain a second set of points;
fitting the second set of points to obtain a third set of points;
calculating two-order derivatives of at least some points of the third set of points; and
obtaining the segment points based on the two-order derivative calculation results.
3. The method of claim 2, wherein the first set of points is obtained by an
eight-connected chain code running clockwise.
4. The method of claim 2, wherein the third set of points is obtained by fitting the
second set of points using a B-spline method.
5. The method of claim 2, wherein points closest to 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative are selected from the third set of points as the segment points.
6. The method of claim 2, wherein the points at 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative are computed as the segment points.
7. The method of claim 1 , wherein generating virtual points comprises:
calculating tangent lines for the segment points; and
generating virtual points using a deBoor-Cox method based on tangent lines of the segment points.
8. The method of claim 1 , further comprising:
classifying the segmented image using the reconstructed contours of the overlapped objects.
9. An overlapped objects reconstructing device comprising:
a segment points obtaining component to obtain segment points having a positive two-order derivative on a contour of an overlapped object area of a segmented image; and
a virtual points computing component to generate virtual points based on incomplete contours of overlapped objects obtained by segmenting the contour of the overlapped object area using the segment points, where the virtual points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
10. The device of claim 9, wherein the segment points obtaining component comprises: a contour points computing component to obtain a first set of points on an edge of the overlapped object area;
a sampling component to sample the first set of points to obtain a second set of points;
a fitting calculating component to fit the second set of points to obtain a third set of points; a two-order derivative calculating component to calculate a two-order derivative of at least some points in the third set of points; and
a segment points calculating component to calculate segment points based on the calculation results of the two-order derivative calculating component.
11. The device of claim 10, wherein the segment points calculating component selects points closest to 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, from the third set of points as segment points.
12. The device of claim 10, wherein the segment points calculating component
calculates points at 1/3 and 2/3 along the lengths of lines formed by connecting corresponding sequences of points having a positive two-order derivative, as segment points.
13. The device of claim 10, wherein the contour points computing component computes the first set of points by an eight-connected chain code running clockwise.
14. The device of claim 10, wherein the fitting calculating component calculates the third set of points using a B-spline method based on the second set of points.
15. The device of claim 10, wherein the virtual points computing component calculates tangent lines of the segment points, and generates virtual points using a deBoor-Cox method based on the calculated tangent lines.
16. The device of claim 15, wherein the virtual points computing component further comprises a virtual points fitting calculating component to fit the virtual points using a B-spline method to obtain a fourth set of points, where the fourth set of points and the incomplete contours can be used to reconstruct contours of the overlapped objects.
17. A computer readable medium having computer executable instructions stored
thereon that, when executed, cause the computer to perform a method comprising: obtaining a first set of points on an edge of an overlapped object area;
sampling the first set of points to obtain a second set of points; fitting the second set of points to obtain a third set of points;
calculating a two-order derivative of at least some points of the third set of points;
obtaining segment points based on the calculation results of the two-order derivative calculation; and
generating virtual points based on incomplete contours of overlapped objects obtained by segmenting the contour of the overlapped object area using the segment points, where the virtual points and the incomplete contours can be used to reconstruct the contours of the overlapped objects.
18. The computer readable medium of claim 17, wherein the overlapped object area is obtained from a lung cancer cell image.
PCT/CN2010/073922 2010-06-13 2010-06-13 Reconstruction of overlapped objects in image WO2011156948A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/073922 WO2011156948A1 (en) 2010-06-13 2010-06-13 Reconstruction of overlapped objects in image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/073922 WO2011156948A1 (en) 2010-06-13 2010-06-13 Reconstruction of overlapped objects in image

Publications (1)

Publication Number Publication Date
WO2011156948A1 true WO2011156948A1 (en) 2011-12-22

Family

ID=45347623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/073922 WO2011156948A1 (en) 2010-06-13 2010-06-13 Reconstruction of overlapped objects in image

Country Status (1)

Country Link
WO (1) WO2011156948A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492107A (en) * 2017-08-10 2017-12-19 昆山伟宇慧创智能科技有限公司 The object identification merged based on plane with spatial information and method for reconstructing
CN111369592A (en) * 2020-03-13 2020-07-03 浙江工业大学 Rapid global motion estimation method based on Newton interpolation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6134293A (en) * 1998-02-04 2000-10-17 Siemens Aktiengesellschaft Image reconstruction method for computed tomography
US6459094B1 (en) * 2000-12-20 2002-10-01 Eastman Kodak Company Method for stitching partial radiation images to reconstruct a full image
CN101042770A (en) * 2007-04-29 2007-09-26 南京大学 Superpose cell area separation method
CN101042771A (en) * 2007-04-29 2007-09-26 南京大学 Medicine image segmentation method based on intensification learning
CN101226155A (en) * 2007-12-21 2008-07-23 中国人民解放军第八一医院 Intelligentize lung cancer early cell pathological picture recognition processing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6134293A (en) * 1998-02-04 2000-10-17 Siemens Aktiengesellschaft Image reconstruction method for computed tomography
US6459094B1 (en) * 2000-12-20 2002-10-01 Eastman Kodak Company Method for stitching partial radiation images to reconstruct a full image
CN101042770A (en) * 2007-04-29 2007-09-26 南京大学 Superpose cell area separation method
CN101042771A (en) * 2007-04-29 2007-09-26 南京大学 Medicine image segmentation method based on intensification learning
CN101226155A (en) * 2007-12-21 2008-07-23 中国人民解放军第八一医院 Intelligentize lung cancer early cell pathological picture recognition processing method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492107A (en) * 2017-08-10 2017-12-19 昆山伟宇慧创智能科技有限公司 The object identification merged based on plane with spatial information and method for reconstructing
CN107492107B (en) * 2017-08-10 2020-09-22 昆山伟宇慧创智能科技有限公司 Object identification and reconstruction method based on plane and space information fusion
CN111369592A (en) * 2020-03-13 2020-07-03 浙江工业大学 Rapid global motion estimation method based on Newton interpolation
CN111369592B (en) * 2020-03-13 2023-07-25 浙江工业大学 Newton interpolation-based rapid global motion estimation method

Similar Documents

Publication Publication Date Title
US11328430B2 (en) Methods, systems, and media for segmenting images
US10600185B2 (en) Automatic liver segmentation using adversarial image-to-image network
CN107203999B (en) Dermatoscope image automatic segmentation method based on full convolution neural network
JP6636678B2 (en) Learning to annotate objects in images
Schmitt et al. New methods for the computer-assisted 3-D reconstruction of neurons from confocal image stacks
CN110036409B (en) System and method for image segmentation using joint deep learning model
CN107203989A (en) End-to-end chest CT image dividing method based on full convolutional neural networks
CN111275686B (en) Method and device for generating medical image data for artificial neural network training
CN109492668B (en) MRI (magnetic resonance imaging) different-phase multimode image characterization method based on multi-channel convolutional neural network
JP2007111531A (en) Method for improving image consistency, signal, computer readable medium and system
US11769594B2 (en) Deep learning model learning device and method for cancer region
WO2020046986A1 (en) System for automatic tumor detection and classification
Mirikharaji et al. Deep auto-context fully convolutional neural network for skin lesion segmentation
US7539333B2 (en) System and method for processing human body image
CN110415253A (en) A kind of point Interactive medical image dividing method based on deep neural network
CN112136157A (en) Method, system and computer program for segmenting tooth pulp regions from images
Chitsaz et al. Software agent with reinforcement learning approach for medical image segmentation
CN111127487B (en) Real-time multi-tissue medical image segmentation method
EP3514755A1 (en) Image diagnostic assistance device, image diagnostic assistance method, and sample analysis system
WO2011156948A1 (en) Reconstruction of overlapped objects in image
CN110648317B (en) Quality classification method and system suitable for spine metastasis tumor bone
CN111209946A (en) Three-dimensional image processing method, image processing model training method, and medium
EP4239589A1 (en) Ai-assisted generation of annotated medical images
CN115240843A (en) Fairness prediction system based on structure causal model
US8284196B2 (en) Method and system for reconstructing a model of an object

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10853058

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10853058

Country of ref document: EP

Kind code of ref document: A1