WO2017056600A1 - Image processing method and control program - Google Patents

Image processing method and control program

Info

Publication number
WO2017056600A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
value
image
feature
processing method
Prior art date
Application number
PCT/JP2016/069136
Other languages
French (fr)
Japanese (ja)
Inventor
ジャリコ バンステーンベルグ
Original Assignee
SCREEN Holdings Co., Ltd.
Priority date
Filing date
Publication date
Application filed by SCREEN Holdings Co., Ltd.
Publication of WO2017056600A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • the present invention relates to an image processing method for dividing an image into a plurality of regions based on feature values.
  • the present invention has been made in view of the above problems; its purpose is to provide a technology for dividing an image into a plurality of regions based on feature values that does not require changing the processing method according to the type of defect and can suppress the influence of defects on the result of the region division.
  • one aspect of the image processing method comprises a first step of obtaining a feature value for each block obtained by dividing an image into a plurality of blocks, a second step of correcting the feature value of each block based on the feature values of a plurality of peripheral blocks around that block, and a third step of dividing the image into a plurality of regions according to the corrected feature values of the blocks.
  • the invention thus configured does not correct the image itself by adjusting its luminance values. Rather, correction is applied to the feature values obtained from the image, from the viewpoint of performing region division well based on those values. Specifically, the feature value of each block obtained by dividing the image into a plurality of blocks is corrected based on the feature values of the blocks around it.
  • the feature value calculated for each block may include a contribution from a defect contained in that block of the image. Such a contribution is expected to appear similarly in surrounding blocks that contain the same defect. Therefore, the influence of the defect can be reduced by correcting the feature value of the target block based on the feature values of its peripheral blocks. Even when the influence on the feature value varies with the type of defect, the correction uses feature values that already include that influence, so the correction method need not be changed according to the defect type.
  • another aspect of the invention is a control program that causes a computer to execute a first step of obtaining a feature value for each block obtained by dividing an image into a plurality of blocks, a second step of correcting the feature value of each block based on the feature values of a plurality of peripheral blocks around that block, and a third step of dividing the image into a plurality of regions according to the corrected feature values.
  • in this way, correction based on the feature values of the peripheral blocks is performed not on the luminance values of the image but on the feature value obtained for each block into which the image is divided. Therefore, the influence of a defect on the feature values can be reduced regardless of the defect type. As a result, the influence of defects contained in the image on the result of region division can be suppressed.
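The three steps just described can be sketched in code. This is a minimal illustration under stated assumptions, not the patent's implementation: the edge-strength feature, the N × N window size, the edge padding, and the final threshold are all choices made for the example. Here one pixel is treated as one block, and the correction subtracts the mean over the whole window (the "second average value", which includes the target pixel).

```python
import numpy as np

def block_features(img):
    """First step (sketch): a per-pixel edge-strength feature
    (gradient magnitude); one pixel is treated as one block."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def correct_features(F, N=5):
    """Second step (sketch): subtract from each block's feature value
    the mean over the N x N rectangular region Rr centred on it.
    Image borders are handled by replicating edge values."""
    assert N >= 3 and N % 2 == 1
    r = N // 2
    P = np.pad(F, r, mode="edge")
    # mean over each N x N window via a sliding-window view
    win = np.lib.stride_tricks.sliding_window_view(P, (N, N))
    Fa = win.mean(axis=(2, 3))
    return F - Fa  # Fc = Fr - Fa

def segment(Fc, threshold=0.0):
    """Third step (sketch): a minimal two-region division by threshold."""
    return (Fc > threshold).astype(np.uint8)

# Toy image: a bright square ("cell") on a dark background ("medium"),
# plus a smooth left-to-right shading gradient standing in for a defect.
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0
img += np.linspace(0.0, 0.5, 32)
labels = segment(correct_features(block_features(img), N=5), threshold=0.1)
```

With these illustrative settings, the square's contour stands out against its local neighbourhood and is labeled 1, while the uniformly shaded medium and the flat cell interior, whose features match their local averages, remain 0.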
  • FIG. 1 is a diagram showing a schematic configuration of an imaging apparatus to which an image processing method according to the present invention can be applied.
  • this imaging apparatus 1 images cells and the like cultured in a liquid injected into recesses, called wells W, formed on the upper surface of a well plate WP.
  • the well plate WP is generally used in the fields of drug discovery and biological science.
  • in the well plate WP, a plurality of wells W having a substantially circular cross section and a transparent bottom surface are provided.
  • the number of wells W in one well plate WP is arbitrary; for example, 96 wells (a 12 × 8 matrix arrangement) can be used.
  • the diameter and depth of each well W are typically about several millimeters.
  • the size of the well plate and the number of wells targeted by the imaging apparatus 1 are not limited to these and are arbitrary; for example, plates having 12 to 384 wells are generally used.
  • the imaging apparatus 1 can be used not only for imaging a well plate having a plurality of wells but also for imaging cells or the like cultured in a flat container called a dish.
  • a predetermined amount of a liquid as a medium is injected into each well W of the well plate WP, and cells or the like cultured under a predetermined culture condition in the liquid are imaging targets of the imaging apparatus 1.
  • the medium may be one to which an appropriate reagent has been added, or one that is placed into the well W in a liquid state and then gels.
  • cells cultured on the inner bottom surface of the well W can be targeted for imaging.
  • Commonly used liquid volume is about 50 to 200 microliters.
  • the imaging apparatus 1 includes a holder 11 that holds the well plate WP, an illumination unit 12 disposed above the holder 11, an imaging unit 13 disposed below the holder 11, and a control unit 14 having a CPU 141 that controls the operations of these units.
  • the holder 11 contacts the peripheral edge of the lower surface of the well plate WP, which holds the sample together with the liquid in each well W, and holds the well plate WP in a substantially horizontal posture.
  • the illumination unit 12 emits appropriate diffused light (for example, white light) toward the well plate WP held by the holder 11. More specifically, for example, a combination of a white LED (Light-Emitting-Diode) light source as a light source and a diffusion plate can be used as the illumination unit 12.
  • the illumination unit 12 illuminates cells and the like in the well W provided on the well plate WP from above.
  • An imaging unit 13 is provided below the well plate WP held by the holder 11.
  • the imaging unit 13 is provided with an imaging optical system (not shown) at a position directly below the well plate WP.
  • the optical axis of the imaging optical system is oriented in the vertical direction.
  • FIG. 1 is a side view, and the vertical direction in the figure represents the vertical direction.
  • the cell etc. in the well W are imaged by the imaging unit 13. Specifically, light emitted from the illumination unit 12 and incident on the liquid from above the well W illuminates the imaging target. Light transmitted downward from the bottom surface of the well W is incident on a light receiving surface of an image pickup device (not shown) through the image pickup optical system. An image of the imaging target imaged on the light receiving surface of the imaging device by the imaging optical system is captured by the imaging device.
  • the image sensor for example, a CCD sensor or a CMOS sensor can be used, and either a two-dimensional image sensor or a one-dimensional image sensor may be used.
  • the imaging unit 13 can be moved in the horizontal and vertical directions by a mechanical control unit 146 provided in the control unit 14. Specifically, the mechanical control unit 146 moves the imaging unit 13 in the horizontal direction based on a control command from the CPU 141, thereby positioning the imaging unit 13 with respect to the well W; the focus is adjusted by moving the unit in the vertical direction.
  • the mechanical control unit 146 positions the imaging unit 13 in the horizontal direction so that the optical axis coincides with the center of the well W.
  • when the imaging device of the imaging unit 13 is a one-dimensional image sensor, a two-dimensional image is captured by scanning the imaging unit 13 in the direction orthogonal to the longitudinal direction of the sensor.
  • the mechanical control unit 146 moves the illumination unit 12 integrally with the imaging unit 13 as indicated by a dotted arrow in the drawing. That is, the illuminating unit 12 is arranged so that the optical center thereof substantially coincides with the optical axis of the imaging unit 13, and moves in conjunction with the imaging unit 13 when moving in the horizontal direction. Thereby, no matter which well W is imaged, the center of the well W and the light center of the illumination unit 12 are always located on the optical axis of the imaging unit 13. Therefore, it is possible to maintain the imaging condition favorably while keeping the illumination condition for each well W constant.
  • the image signal output from the image sensor of the imaging unit 13 is sent to the control unit 14. That is, the image signal is input to an AD converter (A / D) 143 provided in the control unit 14 and converted into digital image data.
  • the CPU 141 executes image processing as appropriate based on the received image data.
  • the control unit 14 further includes an image memory 144 for temporarily storing image data and a memory 145 for storing programs to be executed by the CPU 141 and data generated by the CPU 141; these memories may be integrated.
  • the CPU 141 executes various control processes described later by executing a control program stored in the memory 145.
  • control unit 14 is provided with an interface (IF) unit 142.
  • the interface unit 142 has the functions of accepting operation input from the user, presenting information such as processing results to the user, and exchanging data with an external device connected via a communication line.
  • the interface unit 142 is connected to an input receiving unit 147 that receives an operation input from the user and a display unit 148 that displays and outputs a message to the user, a processing result, and the like.
  • the control unit 14 may be a dedicated device equipped with the hardware described above, or it may be a general-purpose processing device such as a personal computer or workstation into which a control program realizing the processing functions described later is incorporated. That is, a general-purpose computer device can be used as the control unit 14 of the imaging apparatus 1. When a general-purpose processing device is used, it suffices for the imaging apparatus 1 to have the minimum control functions needed to operate each unit such as the imaging unit 13.
  • FIG. 2A and FIG. 2B are diagrams schematically showing examples of images picked up by this image pickup apparatus.
  • FIG. 2A shows an example of an ideal image picked up by the image pickup apparatus 1
  • FIG. 2B shows an example of an image including a defect.
  • the horizontal direction of the image captured by the imaging unit 13 is referred to as “X direction”
  • the vertical direction is referred to as “Y direction”.
  • the image Ia is captured so that the whole well W fits within a rectangular region corresponding to the imaging field of view of the imaging unit 13.
  • in the substantially circular region Rw corresponding to the inside of the well W, cells and the like C are scattered in a medium M that is distributed substantially uniformly.
  • the well region Rw is divided into a region of the medium M and a region of the cells and the like C by an appropriate region division process using the feature value calculated for each pixel constituting the image.
  • the actually captured image Ib may include some image defect D. Image defects can occur due to various causes such as noise, dirt, distortion, and the like.
  • correction processing is required to reduce the influence of defects appearing in the image on the region division processing.
  • correction processing in this embodiment will be described.
  • FIG. 3 is a diagram for explaining the principle of correction processing in this embodiment.
  • the upper part of FIG. 3 schematically shows a part of an image including the defect D like the image Ib shown in FIG. 2B.
  • a feature amount F related to edge strength is obtained from this image Ic.
  • the lower part of FIG. 3 schematically shows how the value of the feature value F obtained for each pixel located on the line segment AA in the image Ic changes depending on the position.
  • the value of the feature amount F is remarkably increased in the pixel corresponding to the outline of the cell C or the like, and a value different from the other portion appears in the portion corresponding to the defect D.
  • the value of the feature amount F is smaller in the portion of the defect D than in the other portions. However, depending on the type of the defect D and the type of the feature amount F, the value may be larger than the other portions or there may be almost no difference.
  • in the correction processing of this embodiment, the feature value of the target pixel is corrected using the feature values of pixels in its vicinity (more specifically, an appropriate number of pixels selected in order of increasing distance from the target pixel).
  • FIG. 4A and 4B are diagrams for explaining specific contents of the correction processing. More specifically, FIG. 4A is a diagram showing a relationship between a pixel of interest and peripheral pixels used for correction. FIG. 4B is a diagram showing the value of the feature amount F used for correction.
  • in the correction, a rectangular region Rr of N × N pixels (where N is an integer of 3 or more) centered on the pixel of interest P(x, y) is set, and the feature value of the pixel of interest P(x, y) is corrected using the feature values of the pixels in the region Rr.
  • each pixel in the region Rr excluding the target pixel P (x, y) is referred to as a “peripheral pixel” for the target pixel P (x, y).
  • the pixel at the upper left corner of the rectangular area Rr is denoted by P (x1, y1)
  • the pixel at the upper right corner is denoted by P (x2, y1)
  • the pixel at the lower left corner is denoted by P (x1, y2)
  • the pixel at the lower right corner is denoted by P(x2, y2).
  • Equation (1) represents an average value of feature amounts of peripheral pixels other than the target pixel P (x, y) among the pixels in the rectangular region Rr.
  • Expression (2) represents the average value of the feature values of all the pixels in the rectangular region Rr including both the target pixel P (x, y) and the surrounding pixels.
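Expressions (1) and (2) are reproduced only as images in the original publication; based on the surrounding description (an N × N region Rr spanning columns x1..x2 and rows y1..y2), they presumably have the following form:

```latex
% (1) "first average value": mean of the peripheral pixels only
F_a(x, y) = \frac{1}{N^2 - 1}
  \sum_{\substack{x_1 \le i \le x_2 \\ y_1 \le j \le y_2 \\ (i, j) \ne (x, y)}} F_r(i, j)

% (2) "second average value": mean of all pixels in Rr, target included
F_a(x, y) = \frac{1}{N^2} \sum_{i = x_1}^{x_2} \sum_{j = y_1}^{y_2} F_r(i, j)
```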
  • as the average value Fa(x, y) of the feature values of the peripheral pixels, which serves as the reference when correcting the feature value Fr(x, y) of the target pixel P(x, y), either of the above formulas (1) and (2) may be used.
  • the difference between the feature value Fr(x, y) of the target pixel P(x, y) and the average value Fa(x, y) of the feature values of the surrounding pixels (and the target pixel) is used as the corrected feature value Fc(x, y).
  • a value obtained by scaling the difference value by adding an appropriate offset value or multiplying by a coefficient may be used as the corrected feature amount.
  • that is, the corrected feature value Fc(x, y) can be defined as a linear function whose variable is the difference between the pre-correction feature value Fr(x, y) and the average value of the feature values of the surrounding pixels (and the target pixel).
  • by appropriately scaling the corrected feature value as described above, it becomes possible to enhance, for example, the visualization effect.
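As a sketch of this linear-function formulation: the parameter names `gain` and `offset` below are illustrative assumptions, not values given in the source.

```python
def corrected_feature(Fr, Fa, gain=1.0, offset=0.0):
    """Corrected feature value Fc as a linear function of the difference
    between the pre-correction value Fr and the reference average Fa:
        Fc = gain * (Fr - Fa) + offset
    With gain=1.0 and offset=0.0 this reduces to the plain difference."""
    return gain * (Fr - Fa) + offset
```

For example, `corrected_feature(0.75, 0.25)` gives the plain difference 0.5, while a larger gain stretches the corrected values, which can make the result easier to visualize.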
  • the above correction method based on the average value of the feature values of the surrounding pixels functions effectively.
  • when the feature value is expressed as a convex function or a linear function with the pixel value as a variable, a good correction result can be obtained.
  • FIG. 5 is a flowchart showing the contents of image processing executed by the imaging apparatus.
  • this process distinguishes the regions of cells and the like from the region of the culture medium in an image of a well W carrying the cells.
  • This processing is realized by the CPU 141 executing a control program created in advance and stored in the memory 145 to cause each unit of the device to perform a predetermined operation.
  • the control program may be incorporated in the memory 145 in advance, or may be read from an appropriate storage medium as needed and read into the memory 145 via the interface unit 142 and executed.
  • imaging of the well W is performed by the imaging unit 13 (step S101), and the well region Rw (for example, FIG. 2A) is cut out from the obtained original image (step S102).
  • a well image representing the well region Rw is displayed on the display unit 148 (step S103).
  • the content of the image displayed at this time is the same as that of the captured original image.
  • the image is therefore displayed in a state that includes any defects; the user can grasp the presence or absence of a defect and its extent from the displayed image.
  • an operation input from the user regarding the correction size information is received by the input receiving unit 147 (step S104).
  • the above correction algorithm itself can be applied regardless of the type and degree of defect, but the size of the rectangular region Rr, which determines the range of peripheral pixels used for correction, must be selected appropriately according to how the defect appears. Specifically, the size of the rectangular region Rr must be larger than the size of useful textures in the image (for example, the contour lines of cells) so that information about those textures is not lost. On the other hand, for subtraction of the average feature value of the peripheral pixels to effectively remove the influence of the defect, the defect should appear roughly uniformly within the rectangular region Rr. Therefore, the size of the rectangular region Rr must be smaller than the size of the region in which the defect appears in the image.
  • the size of the rectangular region Rr is variable. For example, more effective correction can be performed by accepting information for determining the size as correction size information from the user. For example, the value of the number of pixels N that determines the length of one side of the rectangular region Rr can be received as the correction size information.
  • the well image is divided into a plurality of small blocks (step S105), and the feature amount of each block is calculated (step S106).
  • in this embodiment, the feature value is obtained in units of pixels, which corresponds to the case where one block is one pixel.
  • one block may be set to include several pixels.
  • the image can be divided into blocks of a size corresponding to the size of the texture of interest in the image. When correction is performed using the feature amount obtained in units of blocks, it is desirable that the blocks have the same size.
  • the feature value values of the block are corrected based on the feature value values of the surrounding blocks by the correction method described above (step S107).
  • An appropriate region division process is performed based on the feature value of each block after correction (step S108).
  • the well image is divided into a region corresponding to the cell C or the like and a region corresponding to the medium M. Since the influence of the image defect is reduced in the corrected feature amount, the influence of the image defect on the result of the area division process is suppressed.
  • the region corresponding to the cells and the like C may be classified into a plurality of types according to the type and shape of the cells (for example, normal cells and undifferentiated cells).
  • as the region division process, a known machine-learning method using feature values, for example the random forest method, can be used.
  • a method may be used in which a threshold value in several steps is set for the corrected feature value, and the region is divided according to the magnitude relationship with the threshold value.
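A minimal sketch of the threshold-based alternative just described; the threshold values and the meanings assigned to the labels are illustrative assumptions, not taken from the source.

```python
import numpy as np

def divide_by_thresholds(Fc, thresholds=(0.1, 0.5)):
    """Divide corrected feature values into regions by a small ladder of
    thresholds: label 0 below the first threshold, 1 between the two,
    2 at or above the second (np.digitize performs the comparison)."""
    return np.digitize(Fc, bins=np.asarray(thresholds))

# Toy 2 x 2 block grid of corrected feature values
Fc = np.array([[0.02, 0.30],
               [0.70, -0.10]])
labels = divide_by_thresholds(Fc)  # [[0, 1], [2, 0]]
```

Adding more threshold steps yields correspondingly more region classes without changing the procedure.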
  • steps S105 and S106 in FIG. 5 correspond to the “first step” of the present invention.
  • steps S107 and S108 correspond to the “second step” and “third step” of the present invention, respectively.
  • each of the pixels constituting the image corresponds to a “block” of the present invention.
  • the pixels other than the target pixel P (x, y) correspond to the “peripheral block” of the present invention.
  • the average represented by expression (1) corresponds to the “first average value” of the present invention.
  • the average represented by expression (2) corresponds to the “second average value” of the present invention.
  • the present invention is not limited to the above-described embodiment, and various modifications other than those described above can be made without departing from the spirit of the present invention.
  • the above embodiment is an imaging apparatus capable of executing the image processing method according to the present invention.
  • an apparatus that executes the image processing method of the present invention may not have an imaging function.
  • the image processing method according to the present invention can be executed using hardware resources of a personal computer or workstation having a general configuration. Therefore, the present invention may be realized by mounting a control program describing each processing step of the image processing method according to the present invention on such an apparatus.
  • an image captured by an external imaging device, an image previously stored in a database or a storage medium, and the like are processing targets of the present invention.
  • in the above embodiment, an input setting the correction size information is received from the user before the correction is executed.
  • alternatively, the configuration may be such that the degree of defect is estimated by appropriate image processing and the size of the rectangular region Rr is automatically determined according to the result.
  • there may be a mode in which several sizes of the rectangular region Rr are prepared in advance and a correction result using them is presented to the user.
  • in the above embodiment, each of the pixels constituting the image is a “block” of the present invention; a pixel is in principle the smallest possible “block”. In this case, it is possible to perform correction while maintaining image information of fine textures expressed in pixel units in the image.
  • the block size may be larger depending on the application. For example, for the purpose of counting cells of substantially the same size scattered in the wells, a block composed of a plurality of pixels corresponding to the size of such cells may be set.
  • region division processing is performed on an image obtained by imaging a well carrying cells or the like.
  • the image to be processed in the present invention is not limited to the image obtained by using such a raw sample as an imaging target, and various images can be used.
  • for example, in the second step, the feature value of each block may be corrected based on a first average value, which is the average of the feature values of the peripheral blocks of that block, or a second average value, which is the average of the feature values of that block and its peripheral blocks.
  • the feature value of the block after correction in the second step may be a value corresponding to the difference between the feature value of the block before correction and the first average value or the second average value.
  • a value obtained by subtracting a value due to the influence of a defect from a value of the feature value of the block can be used as a corrected feature value.
  • the value of the feature value of the block after correction is a value represented by a linear function whose variable is the difference value between the feature value of the block before correction and the first average value or the second average value. It may be.
  • the corrected feature value can be appropriately scaled, and the feature value in which the influence of the defect is reduced can be output in a mode according to the purpose.
  • a plurality of blocks may be the same size.
  • in this case, the feature values of the blocks carry equal weight, and no special weighting is needed in the correction calculation, so the processing can be simplified.
  • the peripheral blocks may be set as the blocks, other than the target block itself, contained in a rectangle of a predetermined size that is centered on the target block and contains a plurality of blocks.
  • in this configuration, the correction processing targets the feature values of the blocks within a rectangular window set around each block, and the same processing can be applied to any block in the image. The correction process can thereby be simplified.
  • the size of the rectangle can be changed.
  • the second step may be executed in response to an input for setting information for determining the size of the rectangle.
  • the feature amount may be represented by a continuous function with the pixel value of each pixel constituting the image as a variable. In the feature quantity that satisfies such a condition, correction based on the value of the feature quantity of the peripheral block functions particularly effectively.
  • each of the blocks may be one of the pixels constituting the image. According to such a configuration, it is possible to make the correction function effectively for an image including a fine texture expressed in pixel units.
  • the present invention can be applied to a technique for dividing an image into a plurality of regions according to the feature value, and an imaging target in an image to be processed is arbitrary.

Abstract

The purpose of the present invention is to provide a technology which segments an image into a plurality of regions on the basis of feature values and which can alleviate the effects that defects have on the segmentation results without changing the processing method depending on the type of defect. Provided is an image processing method comprising: a first step (S105-S106) in which an image is divided into a plurality of blocks and a feature value is derived for each block; a second step (S107) in which the feature value of each block is corrected on the basis of the feature values of a plurality of peripheral blocks in the periphery of that block; and a third step (S108) in which the image is segmented into a plurality of regions according to the corrected feature values of the blocks.

Description

Image processing method and control program

The present invention relates to an image processing method for dividing an image into a plurality of regions based on feature values.

Cross-reference to related applications
The disclosures in the specification, drawings, and claims of the following Japanese application are incorporated herein by reference in their entirety: Japanese Patent Application No. 2015-189844 (filed on September 28, 2015).
 画像から求められた特徴量に応じて当該画像を複数の領域に分割する領域分割処理では、画像に混入したノイズ、汚れ、歪み等の部分的な欠陥が処理結果に影響を及ぼすことがある。このため、画像からこのような欠陥を除去する必要がある。画像からノイズ等の欠陥を除去することを目的とした技術としては、例えば特許文献1に記載されたものがある。この技術においては、注目画素の輝度値に対しその周辺の画素の輝度値の平均値との差分に応じた係数を乗じるという補正処理が行われることにより、ハイライトにより生じた階調段差が解消されている。このように、画像を構成する画素の画素値(輝度値)を補正することで欠陥を解消しようとする試みが一般に行われている。 In the area division process in which the image is divided into a plurality of areas according to the feature amount obtained from the image, partial defects such as noise, dirt, and distortion mixed in the image may affect the processing result. For this reason, it is necessary to remove such defects from the image. As a technique aiming at removing defects such as noise from an image, there is one described in Patent Document 1, for example. In this technology, the gradation level difference caused by highlights is eliminated by performing a correction process that multiplies the luminance value of the target pixel by a coefficient corresponding to the difference between the average value of the luminance values of the surrounding pixels. Has been. As described above, an attempt is generally made to eliminate the defect by correcting the pixel value (luminance value) of the pixels constituting the image.
特開2011-022656号公報JP 2011-022656 A
 上記従来技術はハイライトによる階調段差の解消に特化したものであるが、このように画像そのものに対する補正を行う場合には、欠陥の種類に応じた補正方法が用意される必要がある。というのは、欠陥の種類によって特徴量の値に及ぼす影響が異なるからである。このため、例えば複数種類の欠陥が含まれる画像においては欠陥の種類ごとに個別の補正処理が必要となる。その結果、処理のために必要な時間が長くなったり、過剰な補正により特徴量の算出に必要な画像情報が失われたりするなどの問題が生じる。また、そもそも画像にどのような欠陥が現れるかを事前に予測することが困難である。 The above prior art is specialized in eliminating gradation steps due to highlights, but when correcting the image itself as described above, it is necessary to prepare a correction method according to the type of defect. This is because the influence on the feature value varies depending on the type of defect. For this reason, for example, in an image including a plurality of types of defects, individual correction processing is required for each type of defect. As a result, there arises a problem that the time required for processing becomes long, or image information necessary for calculating the feature amount is lost due to excessive correction. In addition, it is difficult to predict in advance what kind of defect will appear in the image.
 この発明は上記課題に鑑みなされたものであり、特徴量の値に基づき画像を複数の領域に分割する技術において、欠陥の種類による処理方法の変更を必要とせず、欠陥が領域分割の結果に与える影響を抑えることのできる技術を提供することを目的とする。 The present invention has been made in view of the above problems, and in the technology for dividing an image into a plurality of regions based on the value of the feature amount, it is not necessary to change the processing method according to the type of defect, and the defect is a result of the region division. The purpose is to provide a technology capable of suppressing the influence.
 この発明にかかる画像処理方法の一の態様は、上記課題を解決するため、画像が複数に分割されたブロックごとに特徴量を求める第1工程と、ブロックごとの特徴量を、当該ブロックの周辺にある複数の周辺ブロックの特徴量の値に基づき補正する第2工程と、画像を、補正後のブロックごとの特徴量の値に応じて複数の領域に領域分割する第3工程とを備えている。 In one aspect of the image processing method according to the present invention, in order to solve the above-described problem, a first step of obtaining a feature amount for each block obtained by dividing an image into a plurality of blocks, and a feature amount for each block And a second step of correcting the image based on the feature value values of the plurality of neighboring blocks, and a third step of dividing the image into a plurality of regions according to the feature value values of the corrected blocks. Yes.
 このように構成された発明は、画像の輝度値を補正することで画像自体に補正を施すものではない。すなわち、特徴量の値に基づく領域分割を良好に行うという観点から、画像から求められた特徴量に対して補正が行われる。具体的には、画像が複数に分割されたブロックごとの特徴量の値が、当該ブロックの周辺にあるブロックの特徴量の値に基づいて補正される。 The invention thus configured does not correct the image itself by correcting the luminance value of the image. That is, correction is performed on the feature amount obtained from the image from the viewpoint of favorably performing region division based on the feature amount value. Specifically, the feature value for each block obtained by dividing the image into a plurality of blocks is corrected based on the feature value of the blocks around the block.
 各ブロックについて求められる特徴量の値に、当該ブロック内の画像に含まれる欠陥による寄与分が含まれる場合がある。そのような寄与は同じ欠陥が含まれる周辺のブロックにも同様に現れると考えられる。したがって、補正したい注目ブロックの特徴量の値に対し、周辺ブロックの特徴量の値に基づく補正を行うことで、欠陥の影響を小さくすることが可能となる。欠陥の種類によって特徴量の値への影響が異なる場合でも、その影響が含まれた特徴量の値同士を用いて補正が行われるので、欠陥の種類に応じて補正方法を変える必要はない。 The feature value calculated for each block may include a contribution due to a defect included in the image in the block. Such a contribution is considered to appear in the surrounding blocks containing the same defect as well. Therefore, it is possible to reduce the influence of the defect by performing correction based on the feature value of the peripheral block with respect to the feature value of the target block to be corrected. Even when the influence on the value of the feature amount varies depending on the type of defect, the correction is performed using the feature amount values including the influence, so that it is not necessary to change the correction method according to the type of defect.
 Another aspect of the present invention is a control program that causes a computer to execute: a first step of obtaining a feature value for each of a plurality of blocks into which an image is divided; a second step of correcting the feature value of each block based on the feature values of a plurality of peripheral blocks surrounding that block; and a third step of dividing the image into a plurality of regions according to the corrected feature value of each block. The series of processes described above can be executed using the hardware resources of a computer of ordinary configuration. By providing the present invention as a control program executable by such a computer, the computer can be made to carry out the processing of the present invention.
 As described above, in the present invention the correction based on the feature values of peripheral blocks is applied not to the luminance values of the image, but to the feature values obtained for each of the blocks into which the image is divided. Therefore, the influence of a defect on the feature values can be reduced regardless of the type of defect. As a result, the influence of defects contained in the image on the result of region division can be suppressed.
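The three steps summarized above can be sketched as follows. This is a minimal illustration only, not the patented implementation: each block is a single pixel, the feature map `f` is assumed to be precomputed, the local average includes the pixel itself, the border is edge-padded, and a single threshold stands in for the region division; all function names are hypothetical.

```python
import numpy as np

def neighborhood_mean(f, n):
    # Mean of the feature map f over an n-by-n window centred on each
    # pixel; the image border is handled by edge padding.
    pad = n // 2
    fp = np.pad(f, pad, mode="edge")
    out = np.zeros_like(f, dtype=float)
    h, w = f.shape
    for dy in range(n):
        for dx in range(n):
            out += fp[dy:dy + h, dx:dx + w]
    return out / (n * n)

def correct_features(f, n=5):
    # Second step: subtract the local average from each feature value,
    # so a broad offset shared with the neighbourhood cancels out.
    return f - neighborhood_mean(f, n)

def segment(fc, thresh):
    # Third step: a single threshold stands in for the region division.
    return fc > thresh
```

A broad offset affecting a whole neighbourhood (mimicking a defect) is removed by the subtraction, while a small feature that differs from its surroundings survives it.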
 The above and other objects and novel features of the present invention will become more fully apparent from the following detailed description when read with reference to the accompanying drawings. The drawings, however, are for explanation only and do not limit the scope of the present invention.
FIG. 1 is a diagram showing the schematic configuration of an imaging apparatus to which the image processing method according to the present invention can be applied.
FIG. 2A is a first diagram schematically showing an example of an image captured by the imaging apparatus.
FIG. 2B is a second diagram schematically showing an example of an image captured by the imaging apparatus.
FIG. 3 is a diagram explaining the principle of the correction processing in this embodiment.
FIG. 4A is a first diagram for explaining the specific contents of the correction processing.
FIG. 4B is a second diagram for explaining the specific contents of the correction processing.
FIG. 5 is a flowchart showing the image processing executed by the imaging apparatus.
 FIG. 1 is a diagram showing the schematic configuration of an imaging apparatus to which the image processing method according to the present invention can be applied. This imaging apparatus 1 images live samples such as cells, cell colonies, and bacteria (hereinafter referred to as "cells etc.") cultured in liquid held in recesses, called wells W, formed in the upper surface of a well plate WP.
 The well plate WP is of a type commonly used in the fields of drug discovery and bioscience. In the upper surface of the flat well plate WP, a plurality of wells W are provided, each formed as a cylinder of substantially circular cross section with a transparent, flat bottom. The number of wells W in one well plate WP is arbitrary; for example, a plate with 96 wells (a 12 × 8 matrix arrangement) can be used. The diameter and depth of each well W are typically on the order of several millimeters. The size of the well plate and the number of wells handled by the imaging apparatus 1 are not limited to these values and are arbitrary; plates with 12 to 384 wells are in common use, for example. Moreover, the imaging apparatus 1 can be used not only with well plates having multiple wells but also, for example, for imaging cells etc. cultured in a flat container called a dish.
 A predetermined amount of liquid serving as a culture medium is poured into each well W of the well plate WP, and cells etc. cultured in this liquid under predetermined culture conditions are the imaging targets of the imaging apparatus 1. The medium may have appropriate reagents added to it, or may be poured into the well W in liquid form and then gelled. As will be described later, the imaging apparatus 1 can take as its imaging target, for example, cells etc. cultured on the inner bottom surface of the well W. A commonly used liquid volume is about 50 to 200 microliters.
 The imaging apparatus 1 comprises a holder 11 that holds the well plate WP, an illumination unit 12 arranged above the holder 11, an imaging unit 13 arranged below the holder 11, and a control unit 14 having a CPU 141 that controls the operation of these units. The holder 11 abuts the peripheral edge of the lower surface of the well plate WP, which carries the samples together with the liquid in each well W, and holds the well plate WP in a substantially horizontal posture.
 The illumination unit 12 emits appropriate diffused light (for example, white light) toward the well plate WP held by the holder 11. More specifically, a combination of, for example, a white LED (Light Emitting Diode) light source and a diffuser plate can be used as the illumination unit 12. The illumination unit 12 illuminates the cells etc. in the wells W of the well plate WP from above.
 The imaging unit 13 is provided below the well plate WP held by the holder 11. An imaging optical system, not shown, is arranged in the imaging unit 13 at a position directly below the well plate WP. The optical axis of the imaging optical system is oriented vertically. FIG. 1 is a side view, and the up-down direction in the figure represents the vertical direction.
 The imaging unit 13 images the cells etc. in the well W. Specifically, light emitted from the illumination unit 12 and entering the liquid from above the well W illuminates the imaging target. The light transmitted downward through the bottom surface of the well W enters the light-receiving surface of an imaging element, not shown, via the imaging optical system. The image of the imaging target formed on the light-receiving surface by the imaging optical system is captured by the imaging element. A CCD sensor or a CMOS sensor, for example, can be used as the imaging element, which may be either a two-dimensional or a one-dimensional image sensor.
 The imaging unit 13 can be moved horizontally and vertically by a mechanical control unit 146 provided in the control unit 14. Specifically, the mechanical control unit 146 moves the imaging unit 13 horizontally based on a control command from the CPU 141, whereby the imaging unit 13 moves horizontally relative to the well W. Focus adjustment is performed by movement in the vertical direction. When the imaging target in one well W is imaged, the mechanical control unit 146 positions the imaging unit 13 horizontally so that the optical axis coincides with the center of that well W. When the imaging element of the imaging unit 13 is a one-dimensional image sensor, a two-dimensional image is captured by scanning the imaging unit 13 in a direction orthogonal to the longitudinal direction of the sensor. With such an imaging method, the cells etc. being imaged can be captured in a non-contact, non-destructive, and non-invasive manner, and damage to them caused by imaging can be suppressed.
 Further, when moving the imaging unit 13 horizontally, the mechanical control unit 146 moves the illumination unit 12 together with the imaging unit 13, as indicated by the dotted arrow in the figure. That is, the illumination unit 12 is arranged so that its optical center substantially coincides with the optical axis of the imaging unit 13, and it moves in conjunction with the imaging unit 13 when the latter moves horizontally. As a result, whichever well W is imaged, the center of that well W and the optical center of the illumination unit 12 are always located on the optical axis of the imaging unit 13. The illumination conditions for each well W can therefore be kept constant, and good imaging conditions can be maintained.
 The image signal output from the imaging element of the imaging unit 13 is sent to the control unit 14. That is, the image signal is input to an AD converter (A/D) 143 provided in the control unit 14 and converted into digital image data. The CPU 141 executes appropriate image processing based on the received image data. The control unit 14 further has an image memory 144 for storing image data and a memory 145 for storing programs to be executed by the CPU 141 and data generated by the CPU 141; these may be integrated into a single memory. The CPU 141 performs the various kinds of arithmetic processing described later by executing a control program stored in the memory 145.
 In addition, the control unit 14 is provided with an interface (IF) unit 142. The interface unit 142 has a user interface function of accepting operation input from the user and presenting information such as processing results to the user, as well as a function of exchanging data with external devices connected via a communication line. To realize the user interface function, an input reception unit 147 that accepts operation input from the user and a display unit 148 that displays messages to the user, processing results, and the like are connected to the interface unit 142.
 The control unit 14 may be a dedicated device equipped with the hardware described above, or may be a general-purpose processing device such as a personal computer or workstation into which a control program for realizing the processing functions described later has been incorporated. That is, a general-purpose computer can be used as the control unit 14 of the imaging apparatus 1. When a general-purpose processing device is used, it suffices for the imaging apparatus 1 to have the minimum control functions necessary to operate its units such as the imaging unit 13.
 FIG. 2A and FIG. 2B are diagrams schematically showing examples of images captured by this imaging apparatus. Specifically, FIG. 2A shows an example of an ideal image captured by the imaging apparatus 1, and FIG. 2B shows an example of an image containing a defect. Hereinafter, the horizontal direction of an image captured by the imaging unit 13 is referred to as the "X direction" and the vertical direction as the "Y direction". As shown in FIG. 2A, the image Ia is captured so that the whole well W fits within a rectangular region corresponding to the imaging field of view of the imaging unit 13. Within the substantially circular region Rw of the image Ia corresponding to the interior of the well W, cells etc. C are scattered in the substantially uniformly distributed medium M.
 As will be described later, in the imaging apparatus 1 the well region Rw is divided into a region of the medium M and a region of the cells etc. C by appropriate region division processing using feature values calculated for each pixel constituting the image. However, as shown in FIG. 2B, an actually captured image Ib may contain some image defect D. Image defects can arise from various causes, such as noise, dirt, and distortion.
 Such a defect may affect the calculation of the feature values and, through it, the result of the region division processing. Correction processing is therefore needed to reduce the influence that defects appearing in the image have on the region division processing. The correction processing in this embodiment is described below.
 FIG. 3 is a diagram explaining the principle of the correction processing in this embodiment. The upper part of FIG. 3 schematically shows part of an image containing a defect D, like the image Ib shown in FIG. 2B. Consider the case where a feature value F relating, for example, to edge strength is obtained from this image Ic. The lower part of FIG. 3 schematically shows how the value of the feature F obtained for each pixel on the line segment A-A in the image Ic varies with position. As shown in the figure, the value of the feature F becomes markedly large at pixels corresponding to the outlines of the cells etc. C, and a value different from that of other portions appears in the portion corresponding to the defect D.
 Here it is assumed that the value of the feature F is smaller in the portion of the defect D than elsewhere. Depending on the type of defect D and the type of feature F, however, the value may instead be larger than elsewhere, or almost no difference may appear.
 The influence of the defect D on the feature values can be expected to appear in much the same way across the pixels corresponding to the defect D. Therefore, as with the values F1 and F2 shown in the figure, by expressing the feature value of a pixel of interest as a value relative to a reference derived from the feature values of its surrounding pixels (more specifically, an appropriate number of pixels selected in order of increasing distance from the pixel of interest), a feature value unaffected by the presence or absence of the defect D can be obtained. This amounts to correcting the feature value of the pixel of interest based on the feature values of the other pixels around it. Since an actual image has a two-dimensional extent, the specific contents of the correction processing are as follows.
 FIG. 4A and FIG. 4B are diagrams for explaining the specific contents of the correction processing. More specifically, FIG. 4A shows the relationship between a pixel of interest and the peripheral pixels used for the correction, and FIG. 4B shows the feature values F used for the correction. As shown in FIG. 4A, consider a rectangular region Rr measuring N pixels (N being an integer of 3 or more) in each of the X and Y directions, centered on the pixel of interest P(x,y) whose X coordinate is x and Y coordinate is y. The feature value of the pixel of interest P(x,y) is corrected using the feature values of the pixels in this region Rr. Hereinafter, each pixel in the region Rr other than the pixel of interest P(x,y) is referred to as a "peripheral pixel" of that pixel of interest P(x,y).
 The pixel at the upper left corner of the rectangular region Rr is denoted P(x1,y1), the pixel at the upper right corner P(x2,y1), the pixel at the lower left corner P(x1,y2), and the pixel at the lower right corner P(x2,y2), where:
  x1 = x - (N-1)/2
  x2 = x + (N-1)/2
  y1 = y - (N-1)/2
  y2 = y + (N-1)/2
 As shown in FIG. 4B, when the feature value of the pixel P(x,y) before correction is denoted Fr(x,y), the corrected feature value Fc(x,y) of that pixel is defined by:
  Fc(x,y) = Fr(x,y) - Fa(x,y)
Here, Fa(x,y) is the average of the feature values Fr(i,j) of the pixels P(i,j) in the rectangular region Rr, expressed by either of the following:
  Fa(x,y) = (1/(N² - 1)) · Σ Fr(i,j), summed over all (i,j) in Rr with (i,j) ≠ (x,y)   … (1)
  Fa(x,y) = (1/N²) · Σ Fr(i,j), summed over all (i,j) in Rr   … (2)
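Under the definitions above, the correction of a single pixel of interest can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function names are hypothetical, only interior pixels are handled for brevity, and the `include_centre` flag switches between the centre-excluding average and the centre-including one.

```python
import numpy as np

def local_mean(fr, x, y, n, include_centre):
    # Average Fa(x, y) over the N-by-N rectangular region Rr centred
    # on the pixel of interest P(x, y); interior pixels only.
    h = (n - 1) // 2
    win = fr[y - h:y + h + 1, x - h:x + h + 1]
    if include_centre:
        return win.sum() / (n * n)                  # all N^2 pixels
    return (win.sum() - fr[y, x]) / (n * n - 1)     # centre excluded

def corrected(fr, x, y, n, include_centre=True):
    # Fc(x, y) = Fr(x, y) - Fa(x, y)
    return fr[y, x] - local_mean(fr, x, y, n, include_centre)
```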
 Formula (1) represents the average of the feature values of the peripheral pixels in the rectangular region Rr, excluding the pixel of interest P(x,y). Formula (2) represents the average of the feature values of all pixels in the rectangular region Rr, including both the pixel of interest P(x,y) and its peripheral pixels. Either formula (1) or formula (2) may be used as the average value Fa(x,y) that serves as the reference when correcting the feature value Fr(x,y) of the pixel of interest P(x,y).
 In the above, the difference between the feature value Fr(x,y) of the pixel of interest P(x,y) and the average value Fa(x,y) of the feature values of its peripheral pixels (and, optionally, the pixel of interest itself) is taken as the corrected feature value Fc(x,y). Alternatively, a value obtained by adding an appropriate offset to this difference or scaling it by a coefficient may be used as the corrected feature value. In general, the corrected feature value Fc(x,y) can be expressed, using constants a, b, and c (where a ≠ 0), as:
  Fc(x,y) = a · {Fr(x,y) - Fa(x,y) + b} + c
That is, the corrected feature value Fc(x,y) can generally be defined as a linear function of the difference between the uncorrected feature value Fr(x,y) and the average of the feature values of the peripheral pixels (and, optionally, the pixel of interest).
 For example, when the feature values need to be visualized by mapping the corrected values onto the XY coordinate plane, appropriately scaling the corrected values in this way makes it possible to enhance the effect of the visualization.
 Various feature values can be considered for which such correction works effectively. In experiments by the present inventor, good correction results were confirmed for a feature value obtained as the standard deviation of the direction of the normal vector at each pixel position on the curved surface traced by the pixel-value profile, which is obtained by plotting the pixel value (luminance value) of each pixel in a three-dimensional coordinate space whose third coordinate axis, orthogonal to the X and Y axes, is the pixel value. This feature is often written as "NormalStDev".
 More generally, the above correction method based on the average of the peripheral pixels' feature values works effectively when the feature value can be expressed as a continuous function of the pixel values. In particular, good correction results can be obtained when the feature value is expressed as a convex or linear function of the pixel values.
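As a rough illustration only: the publication does not give the exact definition of "NormalStDev", so the construction below is an assumption — unit normals of the intensity surface z = I(x, y) are estimated from image gradients, and 1 − |mean unit normal| over an n-by-n window stands in for the directional standard deviation.

```python
import numpy as np

def normal_stdev(img, n=5):
    # Treat the image as a surface z = I(x, y); an (unnormalised)
    # normal at each pixel is (-dz/dx, -dz/dy, 1).
    gy, gx = np.gradient(img.astype(float))
    normals = np.dstack((-gx, -gy, np.ones_like(gx)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    pad = n // 2
    npad = np.pad(normals, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = npad[y:y + n, x:x + n].reshape(-1, 3)
            # 1 - |mean unit normal|: 0 when all normals agree (flat
            # surface), larger when their directions are spread out.
            out[y, x] = 1.0 - np.linalg.norm(win.mean(axis=0))
    return out
```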
 FIG. 5 is a flowchart showing the image processing executed by this imaging apparatus. This processing distinguishes the region of cells etc. from the region of the medium in an image of a well W carrying the cells etc. It is realized by the CPU 141 executing a control program, created in advance and stored in the memory 145, that causes the units of the apparatus to perform predetermined operations. The control program may be built into the memory 145 beforehand, or may be read from an appropriate storage medium as needed and loaded into the memory 145 via the interface unit 142 for execution.
 First, the imaging unit 13 images the well W (step S101), and the well region Rw (for example, FIG. 2A) is cut out from the obtained original image (step S102). A well image representing the well region Rw is displayed on the display unit 148 (step S103). The content of the image displayed at this point is the same as that of the captured original image; if the original image contains a defect, it is displayed with the defect included. From the displayed image, the user can grasp whether a defect is present and how severe it is.
 Next, an operation input from the user concerning correction size information is accepted by the input reception unit 147 (step S104). The correction algorithm described above is applicable regardless of the type or degree of the defect, but the size of the rectangular region Rr, which determines the range of peripheral pixels used for the correction, must be chosen appropriately according to how the defect appears. Specifically, the rectangular region Rr must be larger than the size of useful textures in the image (for example, the outlines of cells etc.) so that their information is not lost. On the other hand, for the subtraction of the average of the peripheral pixels' feature values to be effective in removing the influence of the defect, the defect should preferably be regarded as appearing uniformly within the rectangular region Rr. The rectangular region Rr must therefore be smaller than the size of the region of the image in which the defect appears.
 To cope with various defects, the size of the rectangular region Rr should preferably be variable. More effective correction becomes possible by, for example, accepting from the user information for determining this size as the correction size information. For instance, the value of the pixel count N, which determines the length of one side of the rectangular region Rr, can be accepted as the correction size information.
 Next, the well image is divided into a plurality of small blocks (step S105), and the feature value of each block is calculated (step S106). In the example above, the feature value is obtained per pixel, which corresponds to the case where each block consists of a single pixel. Alternatively, each block may be set to contain several pixels; for example, the image can be divided into blocks whose size matches the size of the texture of interest in the image. When the correction is performed using feature values obtained per block, all blocks should preferably be of the same size.
 For the feature value of each block obtained in this way, the correction method described above corrects the block's feature value based on the feature values of its peripheral blocks (step S107). Appropriate region division processing is then performed based on the corrected feature value of each block (step S108). The well image is thereby divided into a region corresponding to the cells etc. C and a region corresponding to the medium M. Since the influence of image defects is reduced in the corrected feature values, image defects are prevented from affecting the result of the region division processing. The region corresponding to the cells etc. C may be further classified into multiple types according to the kind and shape of the cells etc. C (for example, normal cells versus undifferentiated cells).
 As the region division method, a known machine learning method using feature values, such as the random forest method, can be used. Alternatively, for example, several threshold levels may be set on the corrected feature values and the region division performed according to the magnitude relationship with those thresholds.
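The threshold-based variant mentioned above can be sketched with `numpy.digitize`; the threshold values here are purely illustrative.

```python
import numpy as np

def divide_regions(fc, thresholds):
    # Label each block by where its corrected feature value falls among
    # an increasing sequence of thresholds: label k means
    # thresholds[k-1] <= value < thresholds[k].
    return np.digitize(fc, thresholds)
```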
 As explained above, in the above embodiment steps S105 and S106 of FIG. 5 correspond to the "first step" of the present invention, while steps S107 and S108 correspond to the "second step" and the "third step" of the present invention, respectively. In the above embodiment, each of the pixels constituting the image corresponds to a "block" of the present invention, and the pixels in the rectangular region Rr other than the pixel of interest P(x,y) correspond to the "peripheral blocks" of the present invention.
 Of the averages Fa(x,y) of the feature values, the one represented by formula (1) corresponds to the "first average value" of the present invention, and the one represented by formula (2) corresponds to the "second average value" of the present invention.
 The present invention is not limited to the embodiment described above, and various modifications other than those described can be made without departing from its spirit. For example, the above embodiment is an imaging apparatus capable of executing the image processing method according to the present invention, but a device that executes the image processing method of the present invention need not have an imaging function. As noted earlier, the image processing method according to the present invention can be executed using the hardware resources of a personal computer or workstation of ordinary configuration. The present invention may therefore also be realized by installing on such a device a control program describing the processing steps of the image processing method according to the present invention. In that case, images captured by an external imaging device, or images stored in advance in a database or storage medium, become the processing targets of the present invention.
 In the embodiment above, the correction is executed after receiving the correction-size setting input from the user, but the invention is not limited to this. For example, the degree of the defect may be estimated by suitable image processing and the size of the rectangular region Rr determined automatically from the result. Alternatively, several sizes of the rectangular region Rr may be prepared in advance and the correction results obtained with each presented to the user.
 In the embodiment above, each pixel of the image serves as a "block" of the present invention, which is in principle the smallest possible block. This allows correction while preserving image information down to fine, pixel-level texture. Depending on the application, however, larger blocks may be used; for example, when counting cells of roughly uniform size scattered in a well, blocks of several pixels matched to the cell size may be set.
 The embodiment above applies region-division processing to an image of a well carrying cells or the like. However, the images processed by the present invention are not limited to such biological samples as imaging targets; a wide variety of images may be used.
 As the specific embodiment above illustrates, in the image processing method of the present invention the second step may, for example, correct the feature value of each block based on either a first average value, which is the mean of the feature values of the block's peripheral blocks, or a second average value, which is the mean of the feature values of the block and its peripheral blocks. With this configuration, the contribution of a defect that affects the block and its peripheral blocks to a similar degree can be cancelled.
 Also, for example, the corrected feature value of a block in the second step may be a value corresponding to the difference between the block's pre-correction feature value and the first or second average value. The corrected feature value is then the block's feature value with the defect's contribution subtracted.
 In this case, the corrected feature value of a block may be a value given by a linear function whose variable is the difference between the block's pre-correction feature value and the first or second average value. This allows the corrected feature value to be scaled appropriately, so that the defect-reduced feature can be output in a form suited to the purpose.
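A minimal sketch of such a linear-function correction follows; the gain and offset coefficients are hypothetical, since the text does not fix the coefficients of the linear function:

```python
def correct_feature(f_value, avg, gain=1.0, offset=0.0):
    # Corrected value as a linear function of the difference between the
    # block's pre-correction feature value f_value and the first or second
    # average avg. gain and offset are illustrative scaling parameters.
    return gain * (f_value - avg) + offset
```

With gain = 1 and offset = 0 this reduces to plain subtraction of the neighborhood average, i.e. removing a defect contribution that is locally uniform across the window.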
 Also, for example, the blocks may all be of the same size. Each block's feature value then carries equal weight, and no special weighting is needed in the correction computation, which simplifies the processing.
 Also, for example, in the second step the peripheral blocks may be the blocks, other than the block of interest, lying within a rectangle of predetermined size that contains multiple blocks and is centered on the block of interest. The correction then operates on the feature values inside a rectangular window centered on each block, and the same processing can be applied to any block in the image, simplifying the correction.
 In this case, it is preferable that the size of the rectangle be changeable; more specifically, the second step may be executed after receiving a setting input of information for determining the rectangle size. Allowing the range of peripheral blocks used for correction to be adapted to how the defect manifests can further improve the correction.
 The feature value may also be expressed by a continuous function whose variables are the pixel values of the pixels constituting the image. For features satisfying this condition, correction based on the feature values of peripheral blocks works particularly effectively.
 Also, for example, each block may be a single pixel of the image, which lets the correction work effectively on images containing fine, pixel-level texture.
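Putting these pieces together, an end-to-end sketch for the pixel-as-block case might look as follows. This is a hypothetical illustration, not the patented implementation: the feature is taken to be the pixel value itself, the first average (peripheral pixels only) is used for the correction, and both the edge-padding border handling and the final thresholding rule are assumptions of this example:

```python
import numpy as np

def segment(image, half=2, threshold=0.0):
    # Treat each pixel as one block, subtract the mean of its peripheral
    # pixels inside a (2*half + 1)-wide window (the first average), and
    # split the image into two regions by thresholding the corrected value.
    img = image.astype(float)
    padded = np.pad(img, half, mode='edge')  # simplified border handling
    h, w = img.shape
    corrected = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * half + 1, x:x + 2 * half + 1]
            neigh_mean = (win.sum() - img[y, x]) / (win.size - 1)
            corrected[y, x] = img[y, x] - neigh_mean
    return corrected > threshold  # boolean mask: two regions
```

Because each pixel is compared only against its immediate surroundings, a slowly varying defect such as uneven illumination contributes almost equally to the pixel and to its neighborhood mean and is largely cancelled before thresholding.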
 While the invention has been described with reference to specific embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as other embodiments of the present invention, will become apparent to those skilled in the art upon reference to this description. The appended claims are therefore intended to cover any such modifications or embodiments that fall within the true scope of the invention.
 The present invention is applicable to techniques for dividing an image into a plurality of regions according to feature values; the object imaged in the image to be processed is arbitrary.
Description of Reference Signs
1 imaging apparatus
13 imaging unit
14 control unit
141 CPU
147 input reception unit
148 display unit
C cells etc.
M culture medium
S105–S106 first step
S107 second step
S108 third step
W well

Claims (11)

  1.  An image processing method comprising: a first step of obtaining a feature value for each of a plurality of blocks into which an image is divided; a second step of correcting the feature value of each block based on the feature values of a plurality of peripheral blocks surrounding that block; and a third step of dividing the image into a plurality of regions according to the corrected feature values of the blocks.
  2.  The image processing method according to claim 1, wherein in the second step the feature value of each block is corrected based on a first average value, which is the mean of the feature values of the block's peripheral blocks, or a second average value, which is the mean of the feature values of the block and its peripheral blocks.
  3.  The image processing method according to claim 2, wherein the corrected feature value of a block in the second step is a value corresponding to the difference between the block's pre-correction feature value and the first or second average value.
  4.  The image processing method according to claim 3, wherein the corrected feature value of a block in the second step is a value given by a linear function whose variable is the difference between the block's pre-correction feature value and the first or second average value.
  5.  The image processing method according to any one of claims 1 to 4, wherein the plurality of blocks are all of the same size.
  6.  The image processing method according to any one of claims 1 to 5, wherein in the second step the peripheral blocks are the blocks, other than the block of interest, lying within a rectangle of predetermined size that contains a plurality of blocks and is centered on the block of interest.
  7.  The image processing method according to claim 6, wherein the size of the rectangle is changeable.
  8.  The image processing method according to claim 7, wherein the second step is executed after receiving a setting input of information for determining the size of the rectangle.
  9.  The image processing method according to any one of claims 1 to 8, wherein the feature value is expressed by a continuous function whose variables are the pixel values of the pixels constituting the image.
  10.  The image processing method according to any one of claims 1 to 9, wherein each of the blocks is one of the pixels constituting the image.
  11.  A control program causing a computer to execute: a first step of obtaining a feature value for each of a plurality of blocks into which an image is divided; a second step of correcting the feature value of each block based on the feature values of a plurality of peripheral blocks surrounding that block; and a third step of dividing the image into a plurality of regions according to the corrected feature values of the blocks.
PCT/JP2016/069136 2015-09-28 2016-06-28 Image processing method and control program WO2017056600A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-189844 2015-09-28
JP2015189844A JP2017068349A (en) 2015-09-28 2015-09-28 Image processing method and control program

Publications (1)

Publication Number Publication Date
WO2017056600A1 true WO2017056600A1 (en) 2017-04-06

Family

ID=58423368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/069136 WO2017056600A1 (en) 2015-09-28 2016-06-28 Image processing method and control program

Country Status (2)

Country Link
JP (1) JP2017068349A (en)
WO (1) WO2017056600A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001155146A (en) * 1999-11-26 2001-06-08 Fujitsu Ltd Device and method for image processing
JP2001298615A (en) * 2000-04-13 2001-10-26 Matsushita Electric Ind Co Ltd Image processing method and image input device


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021009906A1 (en) * 2019-07-18 2021-01-21 株式会社島津製作所 Cell image analysis method and cell image analysis device
JPWO2021009906A1 (en) * 2019-07-18 2021-01-21
JP7342950B2 (en) 2019-07-18 2023-09-12 株式会社島津製作所 Cell image analysis method and cell image analysis device

Also Published As

Publication number Publication date
JP2017068349A (en) 2017-04-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16850777; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16850777; Country of ref document: EP; Kind code of ref document: A1)