WO2005057496A1 - Method and apparatus for detecting an object from an image - Google Patents
Method and apparatus for detecting an object from an image
- Publication number
- WO2005057496A1 (application PCT/JP2004/018024, filed as JP 2004/018024 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- score
- pixel
- exclusive
- area
- image
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 25
- 210000004027 cell Anatomy 0.000 claims description 95
- 238000001514 detection method Methods 0.000 claims description 35
- 210000003855 cell nucleus Anatomy 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 11
- 210000000170 cell membrane Anatomy 0.000 claims description 6
- 238000004590 computer program Methods 0.000 claims description 5
- 210000003850 cellular structure Anatomy 0.000 claims description 2
- 238000003672 processing method Methods 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 11
- 210000004940 nucleus Anatomy 0.000 description 6
- 238000012545 processing Methods 0.000 description 6
- 238000012360 testing method Methods 0.000 description 6
- 229920000742 Cotton Polymers 0.000 description 3
- 241000255581 Drosophila <fruit fly, genus> Species 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 3
- 238000007796 conventional method Methods 0.000 description 3
- 239000007850 fluorescent dye Substances 0.000 description 3
- 230000005484 gravity Effects 0.000 description 3
- 210000005056 cell body Anatomy 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 230000014759 maintenance of location Effects 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 241001164374 Calyx Species 0.000 description 1
- 230000001174 ascending effect Effects 0.000 description 1
- 230000002238 attenuated effect Effects 0.000 description 1
- 210000003050 axon Anatomy 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 210000004958 brain cell Anatomy 0.000 description 1
- 230000005859 cell recognition Effects 0.000 description 1
- 238000002591 computed tomography Methods 0.000 description 1
- 239000000470 constituent Substances 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000007717 exclusion Effects 0.000 description 1
- 238000002073 fluorescence micrograph Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 210000002686 mushroom body Anatomy 0.000 description 1
- 210000005036 nerve Anatomy 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000003325 tomography Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
- G01N15/10—Investigating individual particles
- G01N15/14—Optical investigation techniques, e.g. flow cytometry
- G01N15/1429—Signal processing
- G01N15/1433—Signal processing using image recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
- G01N15/10—Investigating individual particles
- G01N15/14—Optical investigation techniques, e.g. flow cytometry
- G01N2015/1486—Counting the particles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
Definitions
- the present invention relates to an object detection method and apparatus for detecting an object from image data using image processing.
- The present invention will mainly be described based on a cell image (in particular, extraction of a nucleus region), but the cell image is only a preferred example of the object of the present invention; the objects to which the present invention applies are not limited to cells or their components.
- the conventional method employs an algorithm that emphasizes the outline of a cell image and then removes noise using a threshold value, thereby recognizing a portion having a certain brightness or higher as a “cell”.
- FIG. 10 shows a diagram in which the conventional method is applied to the cell image shown in FIG. 9.
- Patent Document 1 JP 2001-269195A
- the present invention provides a robust object detection method that is not easily affected by the quality and type of an image.
- An object of the present invention is to detect a target object from an image well and to obtain the position and/or the number of the target objects.
- In particular, an object of the present invention is to determine the position and/or the number of cells in a cell image.
- The technical means adopted by the present invention is a method for detecting an object from an image that can be expressed as a set of a plurality of pixels. The method comprises: a step of calculating, for each pixel in the image, a score of a pixel of interest, the score being a feature value of a predetermined region including that pixel, computed from the pixel values of the plurality of pixels in the region; a step of selecting pixels in order of the magnitude of the calculated score; a step of arranging in the image, in order from the selected pixels, regions identical or similar to the predetermined region as exclusive regions; and a step of detecting at least part of the one or more arranged exclusive regions as the object.
- Images to which the present invention is applied include one-dimensional images, two-dimensional images, three-dimensional images, and high-dimensional images of four or more dimensions.
- When the present invention is applied to a two-dimensional image, "image" in claim 1 can be read as "two-dimensional image" and "region" as "area region".
- When applied to a three-dimensional image, "image" can be read as "three-dimensional image", "region" as "volume region", and "pixel" as "voxel".
- One-dimensional images can be treated as a special case of two-dimensional images (pixels are continuous only in a certain direction).
- images to which the present invention is applied include moving images.
- moving images include three-dimensional image data composed of a time-series stack of two-dimensional images and four-dimensional image data as a time-series stack of three-dimensional images.
- the score is a value representing a feature of a predetermined area including a certain pixel of interest, and in one preferred embodiment, the score is an average value of a plurality of pixels included in the predetermined area.
- The calculation of the average may include a weighted average. For example, in a cell image such as that shown in FIG. 9, the pixel values of the nucleus regions (the bright regions) are larger than those of the background, so using the average pixel value as the feature value of a region is useful.
- Alternatively, the median, maximum, or minimum of the pixel values of the plurality of pixels in the region may be calculated and used as the score of the pixel of interest.
- a threshold may be provided when obtaining the maximum value or the minimum value.
- the texture feature amount of the region may be calculated based on the pixel values of a plurality of pixels in the region, and may be used as the score of the pixel of interest.
- Texture features include the mean, variance, skewness, and kurtosis calculated from the density histogram, and the contrast, uniformity, correlation, and entropy calculated from the gray-level co-occurrence matrix.
- The score also includes values obtained by normalizing the calculated value or by inverting its sign.
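As a concrete illustration of the score described above, the following sketch computes, for every voxel, the mean intensity over a spherical window. The function names and the brute-force loop are illustrative, not the patent's code; they assume a 3-D NumPy array of intensities:

```python
import numpy as np

def sphere_offsets(radius):
    """Integer offsets of all voxels inside a sphere of the given radius."""
    r = int(np.ceil(radius))
    ax = np.arange(-r, r + 1)
    dz, dy, dx = np.meshgrid(ax, ax, ax, indexing="ij")
    inside = dz**2 + dy**2 + dx**2 <= radius**2
    return np.stack([dz[inside], dy[inside], dx[inside]], axis=1)

def score_volume(volume, radius):
    """Score of every voxel: mean intensity over a spherical window
    centred on that voxel (the window is clipped at the volume border)."""
    offsets = sphere_offsets(radius)
    scores = np.zeros_like(volume, dtype=float)
    shape = volume.shape
    for idx in np.ndindex(shape):
        pts = offsets + np.array(idx)
        ok = np.all((pts >= 0) & (pts < shape), axis=1)
        pts = pts[ok]
        scores[idx] = volume[pts[:, 0], pts[:, 1], pts[:, 2]].mean()
    return scores
```

In practice a separable or FFT-based filter would be used instead of the explicit loop; the loop is kept here because it mirrors the description directly.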
- In one preferred embodiment, a so-called stationarity condition is imposed on the score of a pixel (including a voxel) when selecting pixels for arranging the exclusive regions.
- By imposing the stationarity condition, automatic recognition of the object (for example, a cell nucleus) can be performed well.
- That is, pixels satisfying the stationarity condition are selected as candidate pixels for arranging exclusive regions, and pixels are then selected from the candidates based on their scores to arrange the exclusive regions.
- The pixels are selected in order of score (for example, in descending or ascending order).
- In one preferred embodiment, the stationarity condition is that the score of the pixel of interest is not smaller than the scores of its neighboring pixels, i.e., the score is a local maximum (including the case where the score of the pixel of interest equals that of an adjacent pixel).
- In another embodiment, the stationarity condition is that the score of the pixel of interest is not greater than the scores of its neighboring pixels, i.e., the score is a local minimum (again including the case where the score of the pixel of interest equals that of an adjacent pixel).
- Preferably, the predetermined region used for score calculation and/or the exclusive region to be arranged have the same or an approximate shape as the object, and/or the same or approximate dimensions. For example, if the object is spherical (e.g., a cell nucleus), it is advantageous to use a spherical region.
- In one preferred embodiment, the positional relationship between the pixel of interest and the predetermined region, and between the selected pixel and the exclusive region, is determined such that the pixel is located at the center or the center of gravity of the respective region.
- In that case, the position (coordinates) of the pixel at the center or center of gravity can be regarded as the position of the object.
- In one preferred embodiment, the predetermined region / exclusive region is an n-dimensional hypersphere centered on the pixel of interest or the selected pixel.
- the n-dimensional hypersphere includes a circle, a sphere, and a four-dimensional hypersphere.
- The present invention also provides a computer program, i.e., a program that causes a computer for detecting an object from an image to execute: a step of calculating, for each pixel in the image, a score of a pixel of interest, the score being a feature value of a predetermined region including the pixel of interest computed from the pixel values of a plurality of pixels in that region; a step of selecting pixels in order of the magnitude of the calculated score; a step of arranging in the image, in order from the selected pixels, regions identical or similar to the predetermined region as exclusive regions; and a step of detecting at least part of the one or more arranged exclusive regions as the object.
- the present invention may be configured as a computer-readable recording medium on which such a computer program is recorded.
- The present invention may also be configured as an object detection system or as an object detection apparatus.
- The object detection system or apparatus comprises storage means for storing image data, display means for displaying an image based on the image data, score calculation means for each pixel forming the image, means for arranging exclusive regions, and means for detecting the object from the arranged exclusive regions.
- The score calculation means is configured to calculate a feature value of the predetermined region based on the pixel values of a plurality of pixels in the predetermined region including the pixel of interest, and to use that feature value as the score of the pixel of interest.
- The exclusive region arranging means is configured to select pixels in order of the calculated score and to arrange in the image, in order from the selected pixels, regions identical or similar to the predetermined region as exclusive regions.
- the object detection means is configured to detect at least a part of the arranged one or more exclusive regions as the object.
- As the effect of the invention, the position and/or the number of objects can be satisfactorily measured from an image containing the objects, such as a two-dimensional or three-dimensional image.
- FIG. 1 shows an algorithm for automatically measuring the position and number of cells using the object detection method according to the present invention.
- the image data to be detected is, in one preferred example, a three-dimensional image of a cell.
- the concept of the present invention can be applied across dimensions. For example, the present invention can be applied to not only a three-dimensional image but also a two-dimensional image.
- The object detection according to the present invention is performed by an object detection device comprising a computer with image processing means. The object detection device includes a processing device (such as a CPU), storage devices (including memory and external storage), input devices (mouse, keyboard, etc.), output devices (display, etc.), and a control program for operating the computer.
- The three-dimensional image is composed of a plurality of voxels, each having a voxel value (an intensity, which may also be read as a density value).
- A voxel is a small unit region that constitutes the three-dimensional region of an object (see Fig. 3(A)) and corresponds to a pixel in a two-dimensional image.
- In this specification, the term "pixel" includes voxels, and the term "voxel" can be read as "pixel" when interpreted in the context of a two-dimensional image.
- the three-dimensional image data is obtained by stacking two-dimensional tomographic image data (so-called slice images) along a direction perpendicular to the tomographic plane (interpolation is performed if necessary).
- Figure 2 is a schematic diagram showing a three-dimensional image of the cell nucleus for convenience.
- In FIG. 2, the three-dimensional image is shown as five two-dimensional images at different coordinates (positions) along the z-axis.
- In practice, a three-dimensional image is preferably composed of a large number of two-dimensional images densely stacked in the z-axis direction; typically, the spacing between adjacent two-dimensional images corresponds to two or three pixels in the xy plane.
- In this embodiment, three-dimensional image data based on fluorescence microscope images is handled.
- The three-dimensional image data is composed of a plurality of voxels, and the means by which it is acquired is not limited.
- Images targeted by the present invention include two-dimensional images, three-dimensional continuous tomographic images, and the like, acquired by devices such as microscopes, cameras, scanners, and computed tomography.
- The three-dimensional image data is stored in the storage unit of the object detection device as the position data and voxel value of each voxel, and is displayed on the display unit as a three-dimensional image.
- First, a score is calculated for each voxel constituting the three-dimensional image data.
- In this embodiment, image data obtained by fluorescently labeling the cell nuclei in a cell image is used.
- Because of the labeling, the values of the pixels (voxels) inside a cell nucleus are large; that is, bright areas in the image are expected to be nucleus regions.
- However, since the fluorescence appears larger than the actual nucleus region, neighboring bright parts merge in the image, and counting the number of cells by conventional methods is difficult. It has been found that the number of cells can be measured satisfactorily by the method according to the present invention.
- a sphere having a predetermined radius r is prepared as a preferred embodiment of a three-dimensional region having a predetermined size.
- Since the cell nucleus to be detected is spherical, the sphere is adapted to the shape of the cell nucleus. It is also desirable that the size (volume) of the sphere be the same as or similar to that of the cell nucleus to be detected.
- the score may be calculated using a plurality of spheres having different diameters as described in an experimental example described later. By calculating a score using a plurality of regions having different sizes, it is possible to determine the region having the optimal size. An area of the same size as the object is not always the optimal size.
- The sphere is arranged so that the voxel of interest (x, y, z) lies at its center; that is, a sphere of radius r is set around the voxel of interest (Fig. 3(B)).
- The average of the voxel values of the multiple voxels included in the sphere region (in practice a jagged approximation of a sphere built from cubic voxels) is calculated, and the obtained value is used as the score of the voxel of interest.
- In this way, a score is calculated for every voxel in the three-dimensional image data.
- The calculated score is stored in the storage unit in association with each voxel. Equation (1) shows the formula for calculating the score, where V, w, and I represent the volume of the region, the convolution weight, and the voxel value (intensity), respectively.
- In this embodiment, the score is an ordinary average of the voxel values of the plurality of voxels in the region; the average may also be a weighted average, for example one in which voxels close to the voxel of interest are weighted heavily and distant voxels lightly.
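Equation (1) is not reproduced legibly in this text; from the description of V, w, and I it can plausibly be reconstructed as follows (the exact notation is an assumption; w ≡ 1 recovers the plain average):

```latex
\mathrm{Score}(p) \;=\; \frac{1}{V} \sum_{q \in S_r(p)} w(q)\, I(q)
```

Here \(S_r(p)\) is the sphere of radius \(r\) centered on the voxel of interest \(p\), \(V\) is the number of voxels in \(S_r(p)\) (its volume), \(w(q)\) is the convolution weight of voxel \(q\), and \(I(q)\) is its voxel value (intensity).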
- the average of the pixel values of a plurality of pixels in the region including the target pixel is used as a score for evaluating the arrangement of the target object to be detected.
- the score calculation method according to the present invention is not limited to the average of the pixel values.
- the median of a plurality of pixels in the region may be used as the score.
- the texture feature of the region may be adopted as the score.
- Next, the stationarity condition is evaluated for each voxel based on its score.
- In this embodiment, the stationarity condition is that the score of the voxel of interest is not smaller than the scores of its neighboring voxels; this condition is shown in Equation (2).
- The neighboring voxels in the stationarity condition are not limited to the nearest neighbors: voxels in a region obtained by adding the next-nearest neighbors, or successively larger neighborhoods, may also be treated as neighboring voxels.
- However, the neighboring pixels do not include pixels located farther from the pixel of interest than the radius r.
- In a two-dimensional image, the neighbors are, for example, the 4-neighborhood (above, below, left, and right of the pixel of interest) or the 8-neighborhood (adding the diagonal pixels).
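A minimal sketch of the stationarity check, assuming the 6-neighborhood (the 3-D analogue of the 4-neighborhood mentioned above); the names are hypothetical and the 26-neighborhood variant would simply enlarge the offset list:

```python
import numpy as np

# 6-neighbourhood in 3-D (face-adjacent voxels only); a 26-neighbourhood
# would add the edge- and corner-adjacent offsets as well.
NEIGHBOURS_6 = [(dz, dy, dx)
                for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if abs(dz) + abs(dy) + abs(dx) == 1]

def is_stationary(scores, idx, neighbours=NEIGHBOURS_6):
    """Stationarity condition (Equation (2)): the voxel idx qualifies when
    its score is not smaller than the score of any neighbouring voxel."""
    z, y, x = idx
    s = scores[z, y, x]
    for dz, dy, dx in neighbours:
        nz, ny, nx = z + dz, y + dy, x + dx
        if (0 <= nz < scores.shape[0] and 0 <= ny < scores.shape[1]
                and 0 <= nx < scores.shape[2]):
            if scores[nz, ny, nx] > s:
                return False
    return True
```

Note that ties pass the test, matching the text's "not smaller than" (a plateau voxel still counts as stationary).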
- The score calculated for each voxel is stored in the storage unit in association with that voxel, and the voxels are then sorted by score (for example, in descending order). If no stationarity condition were imposed, sphere regions would simply be arranged at the voxels in order of decreasing score; in that case, the only requirements are that placement proceed "in order of score magnitude" and that "the arranged regions are exclusive volume regions".
- In this embodiment, among the voxels satisfying the stationarity condition, the voxel with the highest score is selected first, and a sphere region is arranged in the three-dimensional image data with the selected voxel as its center. That is, the condition that the score of a voxel is a local maximum (including the case where it is not smaller than its surroundings) is required for placement.
- The exclusive volume region to be arranged is the same as the sphere region used for the score calculation, or a region of similar shape and size.
- The sphere regions to be arranged are exclusive volume regions; in principle, a plurality of sphere regions are not arranged so as to partially overlap one another.
- However, the sphere regions to be placed need not be completely exclusive: an overlapping arrangement may be permitted at the cost of a reduced score, as exemplified by an image in which elastic spheres are arranged so as to allow deformation. In this specification such a region is referred to as a "quasi-exclusive region", and the term "exclusive region" includes quasi-exclusive regions.
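The greedy, score-ordered placement of exclusive sphere regions can be sketched as follows. A center-distance test is used as a simple stand-in for voxel-level overlap checking (an assumption made for brevity; the patent describes exclusion at the level of individual voxels):

```python
import numpy as np

def place_exclusive_spheres(scores, candidates, radius):
    """Greedy placement: visit candidate voxels in descending score order
    and keep a candidate only if its sphere would not overlap an already
    placed sphere (centre distance >= 2 * radius for equal radii)."""
    order = sorted(candidates, key=lambda p: scores[p], reverse=True)
    placed = []
    for p in order:
        if all(np.linalg.norm(np.subtract(p, q)) >= 2 * radius
               for q in placed):
            placed.append(p)
    return placed  # centres of the exclusive regions = detected positions
```

Here `candidates` would be the voxels passing the stationarity condition, and the returned centers are the detected object positions.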
- Although FIG. 4 shows a two-dimensional image for convenience, the arrangement of exclusive regions is substantially the same whether the image is two-dimensional or three-dimensional.
- Exclusive regions are arranged in order starting from the pixel (voxel) that has the largest score among those satisfying the stationarity condition.
- In FIG. 4, four exclusive sphere regions 2 are set around selected pixels (voxels) 1.
- Each exclusive sphere region 2 is obtained by setting a virtual sphere of a predetermined radius around the selected pixel (voxel) 1; the pixels (voxels) lying within the spherical surface of the virtual sphere form the exclusive region 2.
- The boundary surface of an exclusive sphere region 2 is therefore a jagged surface defined by the individual voxels (see the cells in FIG. 4).
- The exclusive sphere regions 2 do not overlap one another: once a region is placed, the pixels it contains are excluded as candidate center pixels 1 when the next exclusive sphere region 2 is set.
- The four exclusive sphere regions 2 are detected as cell-nucleus candidates, and the coordinates of the center (selected pixel 1) of each exclusive sphere region are used as the position data of the corresponding cell.
- The exclusive volume sphere regions are arranged one by one in this manner, in descending order of score, at voxels satisfying the stationarity condition.
- Whether to continue arranging exclusive volume sphere regions is determined by comparing the placement score with a provisionally set (sufficiently small) cutoff value; when the placement score falls below the cutoff value, the arrangement of exclusive volume sphere regions stops.
- The provisional cutoff value may be changed depending on the size of the exclusive volume region used.
- The final cutoff score is determined from the distribution of the placement scores.
- The distribution of placement scores is computed by the processing unit, and the result is displayed on the display unit as a graph of score versus cell count.
- The count up to the cutoff score, determined from the point of maximum slope or the inflection point of this curve, agrees well with the actual number of cells; this is a new finding.
- Each exclusive volume sphere region whose score is larger than the cutoff score is treated as a cell nucleus, and counting these regions gives the number of cells. The position (coordinates) of the voxel at the center of each such region is taken as the position of the corresponding cell. In this way, the number of cells can be measured and the positions of the cells specified automatically.
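One simple way to realize the cutoff just described is to stop at the steepest drop in the (descending) placement-score sequence. Using the single largest score drop is a simplification of the patent's "maximum slope / inflection point" criterion, so treat this as an illustrative assumption:

```python
import numpy as np

def count_by_cutoff(placement_scores):
    """Estimate the object count as the number of placements made before
    the steepest drop in the descending placement-score sequence."""
    drops = np.diff(placement_scores)  # successive differences (negative)
    return int(np.argmin(drops)) + 1   # keep placements before the drop
```

For a score sequence that falls gently over the real objects and then plummets into the noise floor, this returns the count at the knee of the curve.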
- The method of determining the score to be adopted has been described above.
- If the cutoff value can be determined in advance from conditions such as the type of the target cell and the size of the exclusive region to be used, the cutoff value may be preset so that the number and positions of the cells are measured automatically.
- Alternatively, a human may inspect the score graph or a three-dimensional display of the result, determine the cutoff value, and count the number of cells by a predetermined input operation on the input means.
- While the present invention has been described based on image data in which the nucleus in a cell image is fluorescently labeled so that its pixel values are larger than those of other regions, the invention can likewise be applied when the fluorescent label is applied to the cell membrane.
- In that case, the stationarity condition is that the score of the voxel of interest is not larger than the scores of the neighboring voxels, i.e., a local-minimum condition (including the case where the score is not larger than the surroundings). If the score with its sign inverted is treated as the score, it can be processed as a local-maximum condition.
- the shape of the region employed in the present invention is not limited to a sphere.
- If the object is not spherical, it is advantageous to adopt a region whose shape approximates that of the object.
- In that case, the coordinates of the center of gravity of the region are taken as the center coordinates of the region.
- FIG. 5 shows the test results.
- The vertical axis is the score value, the horizontal axis is the count of arranged sphere regions, and the score values are normalized between 0 and 1.
- As the figure shows, the drop in score occurs at a count that matches the actual cell number (14); placements beyond 14 put sphere regions into the noise. If the radius of the sphere is too small (r < 5), the drop in score is small; this corresponds to multiple spheres being misplaced within the same cell. If the radius is too large (r > 9), the score falls too quickly and spheres cannot be placed in nearby cells. By computing scores with a plurality of sphere radii, a sphere region of optimal size can be obtained.
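The observation above suggests a selection rule for the radius when several are tried. The patent does not spell one out, so the following sketch, which prefers the radius whose placement-score curve shows the sharpest single drop, is purely an assumption:

```python
import numpy as np

def sharpest_radius(score_curves):
    """Given {radius: descending placement-score sequence}, pick the radius
    whose curve shows the largest single-step score drop, used here as a
    proxy for the clearest separation between real objects and noise."""
    def largest_drop(curve):
        return -min(np.diff(curve))
    return max(score_curves, key=lambda r: largest_drop(score_curves[r]))
```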
- An object detection test of the present invention was performed on a three-dimensional image partially containing the two-dimensional image shown in FIG. 9.
- the area for calculating the score of each pixel is a sphere, and the test was performed by changing the radius of the sphere.
- The radii of the spheres used were 5, 6, 6.5, 7, and 8 (pixels).
- Exclusive sphere regions were arranged in the three-dimensional image in descending order of the calculated score.
- Figure 6 shows the test results.
- The vertical axis is the score value, the horizontal axis is the count of arranged sphere regions, and the score values are normalized between 0 and 1. From Fig. 6, a drop in the score is observed as in the two-dimensional image, although the drop is less distinct.
- This is because, in continuous tomographic images from a fluorescence microscope, a three-dimensional image shows large differences in signal intensity (voxel value) and sharpness between the front and the back; toward the back in particular, the signal is attenuated and noise increases.
- The number of cells was then measured by actually applying the improved algorithm, which includes the stationarity condition.
- The subject was Johnston's organ of Drosophila (auditory organ cells).
- Fig. 8 shows the results.
- The vertical axis represents the placement score normalized between 0 and 1, and the horizontal axis represents the cell count (the number of placed sphere regions).
- In this test, the stationarity condition was imposed on the arrangement of the sphere regions, and only the nearest-neighbor voxels were compared when evaluating it. By counting placements up to the inflection point where the drop in the score becomes gentle, 570 cells were counted; the result was a recognition accuracy fully satisfactory to an expert in this field.
- In a further test, detection was performed on image data in which the cell membrane was fluorescently labeled (so that the cell nucleus appeared dark). By combining the method according to the present invention with a technique for distinguishing the regions enclosed by a cell membrane from the outside, the number of "sphere regions enclosed by a cell membrane and of cell-like size" was counted. As a result, only cell bodies were recognized accurately, without being misled by images of nerve axons.
- the position of the cell can be specified, and the number of cells can be automatically measured.
- FIG. 1 is a flowchart showing an object detection method according to the present invention.
- FIG. 2 is a diagram illustrating a three-dimensional image according to the present invention.
- FIG. 3(A) is a diagram illustrating the concept of a voxel, and FIG. 3(B) is a schematic diagram showing the arrangement of a sphere region of radius r centered on a certain voxel (x, y, z); the region is composed of a plurality of voxels.
- FIG. 4 is a diagram illustrating an arrangement of an exclusive volume sphere region.
- FIG. 5 is a diagram showing the result of applying the object detection method (without the stationarity condition) according to the present invention to a two-dimensional image.
- FIG. 6 is a diagram showing the result of applying the object detection method (without the stationarity condition) according to the present invention to a three-dimensional image.
- FIG. 7 is a diagram illustrating the stationarity condition.
- FIG. 8 is a diagram showing the result of applying the object detection method (with the stationarity condition) according to the present invention to a three-dimensional image.
- FIG. 9 is a diagram showing a two-dimensional image of Drosophila brain cells.
- FIG. 10 is a diagram showing cell recognition based on a conventional cell extraction method for the image in FIG. 9.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Chemical & Material Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Radiology & Medical Imaging (AREA)
- Analytical Chemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biochemistry (AREA)
- Quality & Reliability (AREA)
- Dispersion Chemistry (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Investigating Or Analysing Biological Materials (AREA)
- Apparatus Associated With Microorganisms And Enzymes (AREA)
- Measuring Or Testing Involving Enzymes Or Micro-Organisms (AREA)
Abstract
Description
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005516098A JP4623516B2 (ja) | 2003-12-09 | 2004-12-03 | Method and device for detecting an object from an image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-409873 | 2003-12-09 | ||
JP2003409873 | 2003-12-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005057496A1 true WO2005057496A1 (ja) | 2005-06-23 |
Family
ID=34674913
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/018024 WO2005057496A1 (ja) | 2003-12-09 | 2004-12-03 | Method and device for detecting an object from an image |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP4623516B2 (ja) |
WO (1) | WO2005057496A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012205169A (ja) * | 2011-03-28 | 2012-10-22 | Sony Corp | Image processing apparatus and image processing method |
DE102018215770A1 (de) | 2017-09-25 | 2019-03-28 | Olympus Corporation | Image processing device, cell recognition device, cell recognition method, and cell recognition program |
JP2019215766A (ja) * | 2018-06-14 | 2019-12-19 | Olympus Corporation | Image processing device, cell recognition device, cell recognition method, and cell recognition program |
KR20230136760A (ko) | 2021-03-23 | 2023-09-26 | SCREEN Holdings Co., Ltd. | Cell counting method, method for constructing a machine-learning model for cell counting, computer program, and recording medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10509976B2 (en) | 2012-06-22 | 2019-12-17 | Malvern Panalytical Limited | Heterogeneous fluid sample characterization |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0734230B2 (ja) * | 1987-07-20 | 1995-04-12 | Agency of Industrial Science and Technology | Pattern recognition device |
JPH07129770A (ja) * | 1993-10-28 | 1995-05-19 | Mitsubishi Electric Corp | Image processing device |
JP2001266143A (ja) * | 2000-03-17 | 2001-09-28 | Nippon Telegr & Teleph Corp <Ntt> | Figure extraction method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2903137B2 (ja) * | 1992-09-10 | 1999-06-07 | Sumitomo Metal Industries, Ltd. | Nucleus extraction method |
JP2003099776A (ja) * | 2001-09-25 | 2003-04-04 | Hitachi Ltd | Image processing device |
2004
- 2004-12-03 WO PCT/JP2004/018024 patent/WO2005057496A1/ja active Application Filing
- 2004-12-03 JP JP2005516098A patent/JP4623516B2/ja not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0734230B2 (ja) * | 1987-07-20 | 1995-04-12 | Agency of Industrial Science and Technology | Pattern recognition device |
JPH07129770A (ja) * | 1993-10-28 | 1995-05-19 | Mitsubishi Electric Corp | Image processing device |
JP2001266143A (ja) * | 2000-03-17 | 2001-09-28 | Nippon Telegr & Teleph Corp <Ntt> | Figure extraction method |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012205169A (ja) * | 2011-03-28 | 2012-10-22 | Sony Corp | Image processing apparatus and image processing method |
US9779539B2 (en) | 2011-03-28 | 2017-10-03 | Sony Corporation | Image processing apparatus and image processing method |
DE102018215770A1 (de) | 2017-09-25 | 2019-03-28 | Olympus Corporation | Image processing device, cell recognition device, cell recognition method, and cell recognition program |
US10860835B2 (en) | 2017-09-25 | 2020-12-08 | Olympus Corporation | Image processing device, cell recognition device, cell recognition method, and cell recognition program |
JP2019215766A (ja) * | 2018-06-14 | 2019-12-19 | Olympus Corporation | Image processing device, cell recognition device, cell recognition method, and cell recognition program |
US11181463B2 (en) | 2018-06-14 | 2021-11-23 | Olympus Corporation | Image processing device, cell recognition apparatus, cell recognition method, and cell recognition program |
JP7085909B2 (ja) | 2018-06-14 | 2022-06-17 | Olympus Corporation | Image processing device, cell recognition device, cell recognition method, and cell recognition program |
KR20230136760A (ko) | 2021-03-23 | 2023-09-26 | SCREEN Holdings Co., Ltd. | Cell counting method, method for constructing a machine-learning model for cell counting, computer program, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
JP4623516B2 (ja) | 2011-02-02 |
JPWO2005057496A1 (ja) | 2007-12-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI746674B (zh) | Type prediction method, apparatus, and electronic device for identifying objects in images | |
US8805077B2 (en) | Subject region detecting apparatus | |
JP5576782B2 (ja) | Image processing device, image processing method, and image processing program | |
JP6552613B2 (ja) | Image processing device, method for operating an image processing device, and image processing program | |
CN109815865B (zh) | Water level recognition method and system based on a virtual water gauge | |
JP6265588B2 (ja) | Image processing device, method for operating an image processing device, and image processing program | |
US10786227B2 (en) | System and method for ultrasound examination | |
JP2020507836A (ja) | Tracking surgical items with prediction of duplicate imaging | |
US9672610B2 (en) | Image processing apparatus, image processing method, and computer-readable recording medium | |
CN110070531B (zh) | Model training method for detecting fundus images, and fundus image detection method and device | |
EP2665406A1 (en) | Automated determination of arteriovenous ratio in images of blood vessels | |
JP2010244178A (ja) | Facial feature point detection device and program | |
US20100021035A1 (en) | Method for identifying a pathological region of a scan, such as an ischemic stroke region of an mri scan | |
AU2011250827B2 (en) | Image processing apparatus, image processing method, and program | |
US11170246B2 (en) | Recognition processing device, recognition processing method, and program | |
JP2012120799A (ja) | Image processing device, image processing method, and image processing program | |
CN105579847A (zh) | Disease analysis device, control method, and program | |
CN117333489B (zh) | Film damage detection device and detection system | |
WO2005057496A1 (ja) | Method and device for detecting an object from an image | |
CN116051541B (zh) | Method and device for detecting shallow abrasion on a bearing end face based on a stroboscopic light source | |
JP4530173B2 (ja) | Method and system for detecting positions of facial parts | |
CN110956623A (zh) | Wrinkle detection method, apparatus, device, and computer-readable storage medium | |
KR101509991B1 (ko) | Method and apparatus for measuring skin age | |
Zhang et al. | Retinal vessel segmentation using Gabor filter and textons | |
JP2011150626A (ja) | Image classification method, apparatus, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005516098 Country of ref document: JP |
122 | Ep: pct application non-entry in european phase |