WO2017183441A1 - Image processing method, image processing device, and image processing program - Google Patents


Info

Publication number
WO2017183441A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
black data
circle
shape
Application number
PCT/JP2017/014046
Other languages
French (fr)
Japanese (ja)
Inventor
治郎 津村
Original Assignee
株式会社Screenホールディングス
Application filed by 株式会社Screenホールディングス
Publication of WO2017183441A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17: Systems in which incident light is modified in accordance with the properties of the material investigated
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes

Definitions

  • The present invention relates to a method for processing a captured image obtained by imaging objects to be imaged (such as spheroids) held in a large number of depressions provided in a well.
  • Conventionally, in fields such as medicine and drug discovery, cells cultured in sample containers called "microplates" or "well plates" have been observed as samples.
  • In such a sample container, a plurality of recessed sample storage portions called wells are formed, and a sample is generally injected into a well together with a liquid medium.
  • In recent years, such samples have been imaged with an imaging device equipped with a CCD camera or the like and observed using the image data obtained by imaging.
  • In cancer drug discovery research, for example, cancer cells injected into a well together with a liquid culture medium are imaged with an imaging device and thereby observed and analyzed.
  • In recent years, sample containers called "microspheroid array plates", in which a large number of small depressions are provided at the bottom of each well so that a microspheroid array (a regular arrangement of minute spheroids, i.e. cell clusters) can be formed, have also come into use.
  • In the field of regenerative medicine, spheroids of uniform size must be produced in large quantities, so cells are cultured using such microspheroid array plates.
  • During spheroid production, quality control of the spheroids is performed by observing the growth state of each spheroid and quantifying the observations.
  • To that end, the captured image obtained by imaging the microspheroid array with an imaging device is analyzed to measure the size (diameter or area) and roundness of each spheroid, a histogram is created for the entire microspheroid array, and the growth state of the spheroids is judged from the characteristics of that histogram.
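The size and roundness measurements and the histogram mentioned above are not spelled out further in this extract. As a rough, non-authoritative illustration only, the following Python/OpenCV sketch (the function name and the choice of circularity metric are assumptions, not part of the patent) shows how such per-spheroid statistics might be computed from an already extracted binary spheroid mask:

```python
import cv2
import numpy as np

def spheroid_statistics(spheroid_mask):
    """Measure the size (area) and roundness of each spheroid in a binary mask.

    spheroid_mask: uint8 image, 255 where a spheroid was extracted, 0 elsewhere.
    Returns (areas, roundnesses) as NumPy arrays.
    """
    contours, _ = cv2.findContours(spheroid_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x return values
    areas, roundnesses = [], []
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        if perimeter == 0:
            continue
        areas.append(area)
        # Circularity: 1.0 for a perfect circle, smaller for irregular shapes.
        roundnesses.append(4.0 * np.pi * area / (perimeter ** 2))
    return np.asarray(areas), np.asarray(roundnesses)

# Example: histogram of spheroid areas over the whole array.
# areas, roundness = spheroid_statistics(extracted_mask)
# hist, bin_edges = np.histogram(areas, bins=20)
```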
  • FIG. 23 is a diagram illustrating an example of a captured image obtained by imaging a well including a large number of depressions. As can be seen from FIG. 23, the captured image includes a large number of black rings. These black rings represent the wall surfaces of the recesses of the well (hereinafter, "recess wall surfaces"). The recess wall surfaces appear as black rings because each wall surface is tapered so as to widen upward.
  • Because the recess wall surfaces are imaged as black rings in this way, it may be difficult to distinguish a spheroid from a recess wall surface in the captured image. This is explained below with reference to FIGS. 24 to 27.
  • FIG. 24 is a diagram schematically showing a part of a captured image (for example, a captured image such as that shown in FIG. 23) obtained by imaging a well including a large number of depressions.
  • The captured image shown in FIG. 24 corresponds to six recesses. Focusing on the captured image corresponding to one recess, as shown in FIG. 25, the captured image includes an image 7 representing the recess wall surface and an image 8 representing the spheroid. Depending on the brightness of the whole image and on the size and darkness of the image 8 representing the spheroid, it can be difficult to distinguish the spheroid from the recess wall surface in such an image.
  • Furthermore, as shown in FIG. 26, the image 7 representing the recess wall surface and the image 8 representing the spheroid may be in contact; in that case it is difficult to identify the outer edge of the spheroid, and hence to measure its size.
  • In addition, owing to uneven shading in the image, the image 7 representing the recess wall surface may not form a closed ring (black ring), as in the portion indicated by reference numeral 9 in FIG. 27. The presence of such images makes general-purpose image processing difficult.
  • a configuration example of the microspheroid array plate is disclosed in, for example, Japanese Unexamined Patent Publication No. 2014-79223 and International Publication No. 2013/042360.
  • Conventionally, when the image 7 representing the recess wall surface is sufficiently darker (blacker) than the image 8 representing the spheroid, the image 8 representing the spheroid held in each recess of the well has been extracted from the captured image by removing the data of the portions whose darkness (blackness) exceeds a certain level.
  • However, the image 7 representing the recess wall surface and the image 8 representing the spheroid may have a similar level of darkness (blackness).
  • In such a case, the above method removes from the captured image not only the image 7 representing the recess wall surface but also the image 8 representing the spheroid, so the image 8 representing the spheroid is not extracted.
  • Thus, when the image 7 representing the recess wall surface and the image 8 representing the spheroid have a similar darkness (blackness), it is difficult to separate the two and extract only the image 8 representing the spheroid from the captured image.
  • Accordingly, an object of the present invention is to provide an image processing method capable of extracting only the spheroid images from a captured image obtained by imaging a microspheroid array, regardless of the color density of the spheroids.
  • A first aspect of the present invention is an image processing method for processing a captured image obtained by imaging a well having a plurality of recesses for holding objects to be imaged, the method comprising: a binarization step of generating a binarized image composed of black data and white data by performing binarization processing on the captured image; a first image generation step of generating a first image by converting white data surrounded by black data in the binarized image into black data; a second image generation step of obtaining the exclusive OR of the binarized image and the first image and generating a second image in which data for which the result is true is associated with black data and data for which the result is false is associated with white data; a third image generation step of generating a third image by converting white data surrounded by black data in the second image into black data; and an image extraction step of extracting, from the captured image, only the image of the areas in which the black data constituting the third image exists.
  • In the third image generation step, before the white data surrounded by black data is converted into black data, a circle correction process may be performed that corrects the shape of the outer edge of the second image corresponding to each recess to a circle when that shape is not circular.
  • The circle correction process may extract a circle by applying a Hough transform to the boundary points between black data areas and white data areas, and correct the shape of the outer edge of the second image corresponding to each recess to the shape of the extracted circle.
  • Alternatively, the circle correction process may extract a circle by obtaining a degree of separation using a circular separability filter that separates an image area into an inner area and an outer area, and correct the shape of the outer edge of the second image corresponding to each recess to the shape of the extracted circle.
  • Alternatively, the circle correction process may extract, for each recess, the smallest circle containing all the black data constituting the second image, and correct the shape of the outer edge of the second image corresponding to that recess to the shape of the extracted circle.
  • In the first image generation step, too, before the white data surrounded by black data is converted into black data, a circle correction process may be performed that corrects the shape of the outer edge of the binarized image corresponding to each recess to a circle when that shape is not circular.
  • A seventh aspect of the present invention is an image processing apparatus that processes a captured image obtained by imaging a well having a plurality of recesses for holding objects to be imaged, the apparatus comprising: a binarization processing unit that generates a binarized image composed of black data and white data by performing binarization processing on the captured image; a first image generation unit that generates a first image by converting white data surrounded by black data in the binarized image into black data; a second image generation unit that obtains the exclusive OR of the binarized image and the first image and generates a second image in which data for which the result is true is associated with black data and data for which the result is false is associated with white data; a third image generation unit that generates a third image by converting white data surrounded by black data in the second image into black data; and an image extraction unit that extracts, from the captured image, only the image of the areas in which the black data constituting the third image exists.
  • An eighth aspect of the present invention is an image processing program for processing a captured image obtained by imaging a well having a plurality of recesses for holding objects to be imaged, the program causing a CPU of a computer to execute, using a memory, a binarization step, a first image generation step, a second image generation step, a third image generation step, and an image extraction step corresponding to those of the first aspect.
  • In a ninth aspect, in the third image generation step, before the white data surrounded by black data is converted into black data, a circle correction process is performed that corrects the shape of the outer edge of the second image corresponding to each recess to a circle when that shape is not circular.
  • The circle correction process may extract a circle by applying a Hough transform to the boundary points between black data areas and white data areas, may extract a circle by obtaining a degree of separation using a circular separability filter that separates an image area into an inner area and an outer area, or may extract, for each recess, the smallest circle containing all the black data constituting the second image; in each case the shape of the outer edge of the second image corresponding to each recess is corrected to the shape of the extracted circle.
  • In the first image generation step, too, before the white data surrounded by black data is converted into black data, a circle correction process may be performed that corrects the shape of the outer edge of the binarized image corresponding to each recess to a circle when that shape is not circular.
  • According to the first aspect of the present invention, for a captured image obtained by imaging a well having a plurality of recesses, a second image is generated that represents the exclusive OR of a binarized image based on the captured image and a first image obtained by filling the hollow portions of that binarized image (converting white data surrounded by black data into black data).
  • As a result, a second image is generated in which only the portion inside the recess wall surface of the well, excluding the portion where the object to be imaged (typically a spheroid) exists, is black data.
  • Object extraction from the captured image is then performed using, as an ROI mask, a third image obtained by filling the hollow portions of the second image. In this way, only the portion of the captured image inside the recess wall surface of the well is extracted.
  • Consequently, the image representing the recess wall surface is removed from the captured image, and an image in which only the image representing the object to be imaged has been extracted is obtained.
  • Even if the data of the portion where the object exists were converted into white data during the binarization, the third image used as the ROI mask has the entire portion inside the recess wall surface as black data and the remaining portion as white data. Therefore, even when the image representing the object in the captured image is light in color, only the image representing the object is extracted from the captured image.
  • In this way, only the image of the object to be imaged can be extracted from the captured image regardless of the color density of the object; for example, only the spheroid images can be extracted from a captured image obtained by imaging a microspheroid array, regardless of the color density of the spheroids.
  • When the shape of the outer edge of the second image corresponding to a recess is not circular, the shape of the outer edge is corrected to a circle before the hole-filling process is performed. For this reason, even when the image representing the recess wall surface and the image representing the object to be imaged are in contact in the captured image, only the image of the object can be extracted from the captured image regardless of its color density.
  • Likewise, when the shape of the outer edge of the binarized image corresponding to a recess is not circular, the shape of the outer edge is corrected to a circle before the hole-filling process (converting white data surrounded by black data into black data) is performed. Therefore, even when the image representing the recess wall surface in the captured image does not form a closed ring, only the image of the object can be extracted from the captured image regardless of its color density.
  • FIG. 1 is a block diagram showing a schematic configuration of an image processing system according to a first embodiment of the present invention. FIG. 2 is a diagram showing the configuration of the imaging device in the first embodiment. FIG. 3 is a plan view of one well in the first embodiment.
  • FIG. 4 is a sectional view taken along line A-A in FIG. 3.
  • FIG. 5 is a diagram showing a more detailed configuration of the imaging unit in the first embodiment.
  • FIG. 6 is a block diagram showing the hardware configuration of the image processing apparatus in the first embodiment.
  • FIG. 7 is a flowchart showing the procedure of the image processing in the first embodiment. FIG. 8 is a diagram for explaining the image processing in the first embodiment.
  • FIG. 1 is a block diagram showing a schematic configuration of an image processing system according to the first embodiment of the present invention.
  • This image processing system includes an imaging device 20 and an image processing device 10.
  • the imaging device 20 performs imaging of a microspheroid array.
  • the captured image DAT obtained by imaging by the imaging device 20 is sent to the image processing device 10.
  • the image processing apparatus 10 performs image processing described later on the captured image DAT.
  • FIG. 2 is a diagram illustrating a configuration of the imaging device 20 in the present embodiment.
  • the imaging device 20 is a device for imaging cells or the like cultured in various sample containers.
  • the imaging device 20 performs imaging of a spheroid cultured in a liquid injected into a well W formed on the upper surface of the microspheroid array plate MP (that is, imaging of a microspheroid array).
  • each well W has, for example, a circular shape.
  • FIG. 3 is a plan view of one well W, and FIG. 4 is a cross-sectional view taken along line A-A of FIG. 3.
  • each well W is provided with a large number of recesses (recessed spaces) 30.
  • the wall surface (recess wall surface) of the recess 30 is tapered.
  • the bottom surface of the recess 30 has a curved surface or a flat surface that protrudes downward.
  • Spheroid SF which is an imaging target is held in such a recess 30.
  • Each well W is injected with a predetermined amount of liquid (culture medium) M as a medium.
  • Note that illustration of the recesses is omitted in FIG. 2.
  • The imaging device 20 includes a light source 21 that emits illumination light for imaging, a holder 22 that holds a sample container such as the microspheroid array plate MP, an imaging unit 23 that images the spheroids and the like in the wells W, a camera drive mechanism 24 that moves the imaging unit 23 during imaging, and a control unit 25 that controls the operations of the light source 21, the imaging unit 23, and the camera drive mechanism 24.
  • the light source 21 is disposed on the upper part of the imaging device 20.
  • the holder 22 is disposed below the light source 21, and the imaging unit 23 is disposed below the holder 22.
  • the light source 21 irradiates the well W with light L from above the microspheroid array plate MP held by the holder 22 based on a control command given from the light source control unit 252 in the control unit 25.
  • the light L to be irradiated is visible light, typically white light.
  • the microspheroid array plate MP including a plurality of wells W holding the spheroid SF is held in the holder 22.
  • the holder 22 is in contact with the peripheral edge of the lower surface of the microspheroid array plate MP and holds the microspheroid array plate MP in a substantially horizontal posture.
  • the imaging unit 23 captures an image of the microspheroid array plate MP by receiving the transmitted light Lt emitted from the light source 21 and transmitted below the microspheroid array plate MP held by the holder 22.
  • the imaging unit 23 is connected to a camera driving mechanism 24, and the imaging unit 23 moves horizontally along the lower surface of the microspheroid array plate MP by the operation of the camera driving mechanism 24. That is, the imaging unit 23 can scan and move along the lower surface of the microspheroid array plate MP.
  • Note that it suffices to realize relative movement between the imaging unit 23 and the microspheroid array plate MP; the microspheroid array plate MP may instead be moved with respect to the imaging unit 23.
  • the camera drive mechanism 24 moves the imaging unit 23 in the horizontal direction based on a control command given from the imaging control unit 253 in the control unit 25.
  • the control unit 25 includes a CPU 251, a light source control unit 252, an imaging control unit 253, an AD converter (A / D) 254, a storage unit 255, and an interface unit 256.
  • the CPU 251 controls the operation of each component in the control unit 25 and performs various arithmetic processes.
  • the light source control unit 252 controls the lighting state of the light source 21.
  • the imaging control unit 253 controls the operations of the imaging unit 23 and the camera driving mechanism 24 so that the imaging object is imaged according to a predetermined scanning movement recipe.
  • the AD converter (A / D) 254 receives an image signal (analog data) obtained by imaging by the imaging unit 23 and converts it into digital image data.
  • the storage unit 255 holds the digital image data.
  • The interface unit 256 has a function of accepting operation input from the user, a function of displaying information such as processing results to the user, and a function of performing data communication with other devices via a communication line.
  • digital image data held in the storage unit 255 is transmitted to the image processing apparatus 10 as the captured image DAT via the interface unit 256.
  • the interface unit 256 is connected to an input receiving unit (such as a keyboard and a mouse) that receives operation inputs, a display unit that displays information, and a communication line.
  • FIG. 5 is a diagram showing a more detailed configuration of the imaging unit 23.
  • The imaging unit 23 includes a line sensor 231 that outputs an electrical signal corresponding to incident light (for example, a line sensor using a CCD) and an imaging optical system 232 that forms, on the light receiving surface of the line sensor 231, an image of the light emitted from the bottom surface of the microspheroid array plate MP held by the holder 22.
  • the imaging optical system 232 may include a plurality of optical components such as lenses, but in FIG. 5, the imaging optical system 232 is shown by a single lens.
  • the line sensor 231 is a one-dimensional array of a large number of fine image sensors 231a in a uniaxial direction in a horizontal plane.
  • the line sensor 231 is configured such that at least one entire well W (preferably, a plurality of wells W) can be included in the imaging range SR via the imaging optical system 232 in the longitudinal direction.
  • FIG. 6 is a block diagram illustrating a hardware configuration of the image processing apparatus 10.
  • the image processing apparatus 10 includes a CPU 11, a ROM 12, a RAM 13, an auxiliary storage device 14, an input unit 15, a display unit 16, an optical disk drive 17, and a network interface unit 18.
  • the CPU 11 performs various arithmetic processes according to the given command.
  • the ROM 12 is a read-only memory, and stores, for example, an initial program to be executed by the CPU 11 when the image processing apparatus 10 is activated.
  • the RAM 13 is a writable volatile memory, and temporarily stores an executing program, data, and the like.
  • the auxiliary storage device 14 stores various data.
  • the image processing program P and the captured image (digital image data) DAT transmitted from the imaging device 20 are stored in the auxiliary storage device 14.
  • the input unit 15 receives input from an operator using a mouse or a keyboard.
  • the display unit 16 displays, for example, various screens for an operator to perform work, a captured image DAT transmitted from the imaging device 20, an image obtained by performing image processing described later on the captured image DAT, and the like.
  • the optical disk drive 17 is a device for reading data from the optical disk 170 and writing data to the optical disk 170.
  • the network interface unit 18 has a function of performing data communication with other devices via a communication line.
  • the captured image DAT transmitted from the imaging device 20 is input to the inside of the image processing device 10 via the network interface unit 18.
  • the image processing program P is stored in the auxiliary storage device 14.
  • When image processing is to be performed, the image processing program P is read into the RAM 13 and executed by the CPU 11, whereby the image processing described later is carried out.
  • The image processing program P is provided stored on a computer-readable recording medium such as a CD-ROM or DVD-ROM. That is, the user purchases, for example, an optical disk (CD-ROM, DVD-ROM, etc.) 170 as the recording medium of the image processing program P, loads it into the optical disk drive 17, reads the image processing program P from the optical disk 170, and installs it in the auxiliary storage device 14.
  • the image processing program P sent via a LAN or the like may be received by the network interface unit 18 and installed in the auxiliary storage device 14.
  • In the first embodiment, the image to be processed is assumed to be an image as shown in FIG. 25. That is, in the captured image, the image 7 representing the recess wall surface forms a closed ring, and the image 7 representing the recess wall surface and the image 8 representing the spheroid are not in contact. It is further assumed that the image 7 representing the recess wall surface and the image 8 representing the spheroid have a similar level of darkness (blackness).
  • First, binarization processing is performed on the captured image DAT (step S110). The captured image DAT is digital image data expressed with, for example, 256 gradations. The binarization yields a binarized image IMGbi composed of black data (data corresponding to the value "1") and white data (data corresponding to the value "0").
  • In the binarized image IMGbi, the portion corresponding to the recess wall surface and the portion where the spheroid exists become black data. Note that the threshold for the binarization process must be set so that the binarization turns out this way.
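As a hedged sketch of this binarization step (the patent does not prescribe a particular thresholding method, so the use of Otsu's method and the 0/255 encoding of black/white data below are assumptions), one might write:

```python
import cv2

def binarize(captured_gray):
    """Binarize a grayscale captured image (e.g. 256 gradations).

    Dark pixels (recess walls, spheroids) become 255, standing in for the
    patent's "black data" (value 1); bright pixels become 0 ("white data").
    THRESH_OTSU picks the threshold automatically; a fixed threshold could be
    substituted if it does not separate the wall/spheroid pixels as required.
    """
    _, binarized = cv2.threshold(captured_gray, 0, 255,
                                 cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binarized  # binarized image IMGbi
```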
  • Next, a process of converting white data surrounded by black data in the binarized image IMGbi into black data (in other words, a process of filling hollow portions) is performed (step S120). The image generated in step S120 is referred to as the "first image" for convenience and is denoted by reference sign IMG1.
  • In step S120, the white data lying between the black data of the portion corresponding to the recess wall surface and the black data of the portion where the spheroid exists is converted into black data. As a result, in the first image IMG1, the entire portion inside the outer edge of the recess 30 becomes black data.
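One common way to implement this hole-filling operation is a flood fill from the image border. The following sketch is an assumption about the implementation (the patent only specifies the result, i.e. white data enclosed by black data becomes black data) and presumes that the corner pixel belongs to the outside background:

```python
import cv2
import numpy as np

def fill_holes(binary):
    """Convert white data surrounded by black data into black data.

    binary: uint8 image with "black data" encoded as 255 and "white data" as 0.
    Assumes the pixel at (0, 0) belongs to the outside background.
    """
    h, w = binary.shape
    flooded = binary.copy()
    flood_mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill needs a padded mask
    cv2.floodFill(flooded, flood_mask, (0, 0), 255)   # paint the outside background
    holes = cv2.bitwise_not(flooded)                  # pixels never reached = enclosed holes
    return cv2.bitwise_or(binary, holes)

# first_image = fill_holes(binarized_image)   # IMG1 of step S120
```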
  • Next, an image based on the exclusive OR of the binarized image IMGbi generated in step S110 and the first image IMG1 generated in step S120 is generated (step S130). The image generated in step S130 is referred to as the "second image" for convenience and is denoted by reference sign IMG2.
  • The second image IMG2 is generated in step S130 according to a truth table: of the binarized image IMGbi and the first image IMG1, the portions that are black data in exactly one of the two images become black data, and the other portions become white data. In other words, in step S130, the exclusive OR of the binarized image IMGbi and the first image IMG1 is obtained, and a second image IMG2 is generated in which data for which the result is true is associated with black data and data for which the result is false is associated with white data.
  • In the binarized image IMGbi, the portion corresponding to the recess wall surface and the portion where the spheroid exists are black data, and the other portions are white data; in the first image IMG1, the entire portion inside the outer edge of the recess 30 is black data. Therefore, in the second image IMG2 generated in step S130, as shown in FIG. 8, the portion inside the recess wall surface excluding the portion where the spheroid exists becomes black data, and the other portions become white data.
  • In FIG. 8, the portion corresponding to the outer edge of the recess 30 is indicated by a dotted line (the same applies to FIGS. 11 and 17).
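The exclusive OR can be computed directly on the two binary masks; the sketch below is one straightforward rendering of the truth table described above (again assuming the 0/255 encoding used in the earlier sketches):

```python
import cv2

def second_image(binarized, first_image):
    """Exclusive OR of the binarized image IMGbi and the first image IMG1.

    Truth table (1 = black data, 0 = white data):
        IMGbi  IMG1  ->  IMG2
          0      0        0
          0      1        1
          1      0        1
          1      1        0
    With 0/255 masks, a bitwise XOR implements exactly this mapping.
    """
    return cv2.bitwise_xor(binarized, first_image)  # second image IMG2
```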
  • Next, a process of converting white data surrounded by black data in the second image IMG2 into black data (in other words, a process of filling hollow portions) is performed (step S140). The image generated in step S140 is referred to as the "third image" for convenience and is denoted by reference sign IMG3.
  • In step S140, the white data of the portion where the spheroid exists is converted into black data. In the third image IMG3 generated by this conversion, as shown in FIG. 8, the entire portion inside the recess wall surface becomes black data.
  • Finally, object extraction from the captured image DAT is performed using the third image IMG3 generated in step S140 as an ROI mask (step S150). That is, in step S150, only the image of the areas in which the black data constituting the third image IMG3 exists is extracted from the captured image DAT. As a result, as shown in FIG. 8, an image IMGex in which only the image 8 representing the spheroid has been extracted from the captured image DAT is obtained.
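Applying the third image as an ROI mask amounts to keeping the captured pixels where the mask has black data and blanking everything else. A possible sketch (the white background fill value is an assumption):

```python
import numpy as np

def extract_with_roi_mask(captured_gray, third_image):
    """Extract from the captured image only the areas where IMG3 has black data.

    captured_gray: the original grayscale captured image DAT.
    third_image:   uint8 ROI mask with 255 ("black data") inside each recess wall.
    Pixels outside the mask are set to white (255) here; any other background
    value could be used instead.
    """
    extracted = np.full_like(captured_gray, 255)
    inside = third_image == 255
    extracted[inside] = captured_gray[inside]
    return extracted  # image IMGex containing only the spheroid images
```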
  • In the present embodiment, the binarization step is realized by step S110, the first image generation step by step S120, the second image generation step by step S130, the third image generation step by step S140, and the image extraction step by step S150.
  • As described above, according to the present embodiment, for the captured image DAT obtained by imaging the microspheroid array plate MP, whose wells W include a large number of recesses 30 holding spheroids SF, a second image IMG2 is generated based on the binarized image IMGbi derived from the captured image DAT and the first image IMG1 obtained by filling its hollow portions.
  • In this second image IMG2, only the portion inside the recess wall surface of the well W, excluding the portion where a spheroid exists, is black data.
  • Object extraction from the captured image DAT is then performed using, as the ROI mask, the third image IMG3 obtained by filling the hollow portions of the second image IMG2.
  • As a result, the image 7 representing the recess wall surface is removed from the captured image DAT, and an image in which only the image 8 representing the spheroid has been extracted is obtained.
  • Moreover, even if the data of the portion where a spheroid exists were converted into white data during the binarization, the third image IMG3 used as the ROI mask has the entire portion inside the recess wall surface as black data and the remaining portion as white data. Therefore, even when the image representing a spheroid in the captured image DAT is light in color, only the image 8 representing the spheroid is extracted from the captured image DAT.
  • <Second Embodiment> A second embodiment of the present invention will be described. Since the schematic configuration of the image processing system, the configuration of the imaging device 20, and the configuration of the image processing apparatus 10 are the same as in the first embodiment, their description is omitted (see FIGS. 1 to 6).
  • In the second embodiment, the image to be processed is assumed to be an image as shown in FIG. 26. That is, in the captured image, the image 7 representing the recess wall surface forms a closed ring, and the image 7 representing the recess wall surface and the image 8 representing the spheroid are in contact. It is assumed here that the image 7 representing the recess wall surface and the image 8 representing the spheroid have a similar level of darkness (blackness).
  • First, binarization processing is performed on the captured image DAT (step S210). As a result, a binarized image IMGbi is generated in which the portion corresponding to the recess wall surface and the portion where the spheroid exists are black data (see FIG. 11).
  • Next, a process of converting white data surrounded by black data in the binarized image IMGbi into black data (in other words, a process of filling hollow portions) is performed (step S220). As a result, a first image IMG1 in which the entire portion inside the outer edge of the recess 30 is black data is generated (see FIG. 11).
  • Next, an image based on the exclusive OR of the binarized image IMGbi generated in step S210 and the first image IMG1 generated in step S220 is generated (step S230). As a result, a second image IMG2 is generated in which the portion inside the recess wall surface, excluding the portion where the spheroid exists, is black data and the other portions are white data (see FIG. 11).
  • In the present embodiment, the white data in the second image IMG2 is not completely surrounded by black data. Therefore, before the hole-filling process used in the first embodiment (step S140 in FIG. 7) is executed, a circle correction process is performed to make the shape of the outer edge of the second image IMG2 (the outer edge of the black data) circular. Looking only at the outer edge of the second image IMG2, as shown in FIG. 12, a circle with a missing part is corrected into a complete circle by the circle correction process. As a result, as shown in FIG. 13, the second image IMG2 after the circle correction process has a completely circular outer edge, and inside that outer edge only the portion where the spheroid exists is white data.
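One of the circle correction methods named elsewhere in this publication is a circular Hough transform applied to the boundary between black data and white data. The sketch below is a non-authoritative illustration of that idea using OpenCV's HoughCircles; the radius range and accumulator parameters are placeholders that would have to be tuned to the actual recess size:

```python
import cv2
import numpy as np

def correct_outline_with_hough(second_image, min_radius, max_radius):
    """Close a partially missing outer edge using a circular Hough transform.

    second_image: uint8 mask for one recess, 255 = black data, 0 = white data.
    min_radius, max_radius: expected recess radius range in pixels (assumed known).
    Returns the mask with a closed circular outer edge drawn in, or the input
    unchanged if no circle is detected.
    """
    circles = cv2.HoughCircles(second_image, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=second_image.shape[0],  # at most one circle per crop
                               param1=100, param2=20,
                               minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return second_image
    x, y, r = [int(round(v)) for v in circles[0, 0]]
    corrected = second_image.copy()
    cv2.circle(corrected, (x, y), r, 255, thickness=2)  # redraw the outer edge as a full circle
    return corrected
```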
  • Then, the process of filling the hollow portions (the process of converting white data surrounded by black data into black data) is performed on the second image IMG2 after the circle correction process. Thus, in the present embodiment, the hole-filling process is performed after the circle correction process (step S240). As a result, a third image IMG3 in which the entire portion inside the recess wall surface is black data is generated.
  • object extraction from the captured image DAT is performed using the third image IMG3 generated in step S240 as the ROI mask (step S250).
  • an image IMGex obtained by extracting only the image 8 representing the spheroid from the captured image DAT is obtained (see FIG. 11).
  • In the present embodiment, the binarization step is realized by step S210, the first image generation step by step S220, the second image generation step by step S230, the third image generation step by step S240, and the image extraction step by step S250.
  • A convex hull is the smallest convex polygon that contains all of a given set of points. Using this concept of a convex hull, the smallest circle containing all the black data constituting the second image IMG2 can be obtained, and the obtained circle may be used as the outer edge of the second image IMG2 after the circle correction process.
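A sketch of this variant, using OpenCV's minimum enclosing circle (which operates on the set of black data points and is equivalent to enclosing their convex hull), might look as follows; the two-pixel edge thickness is an arbitrary choice:

```python
import cv2

def correct_outline_with_min_circle(second_image):
    """Correct the outer edge of IMG2 to the smallest circle containing all black data."""
    points = cv2.findNonZero(second_image)        # coordinates of all black data pixels
    if points is None:                            # no black data at all
        return second_image
    (x, y), radius = cv2.minEnclosingCircle(points)
    corrected = second_image.copy()
    cv2.circle(corrected, (int(round(x)), int(round(y))),
               int(round(radius)), 255, thickness=2)  # draw the closed circular outer edge
    return corrected
```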
  • In the third embodiment, the image to be processed is assumed to be an image as shown in FIG. 27. That is, in the captured image, the image 7 representing the recess wall surface does not form a closed ring, and the image 7 representing the recess wall surface and the image 8 representing the spheroid are in contact. It is assumed here that the image 7 representing the recess wall surface and the image 8 representing the spheroid have a similar level of darkness (blackness).
  • First, binarization processing is performed on the captured image DAT (step S310). As a result, a binarized image IMGbi is generated in which the portion corresponding to the recess wall surface (except for the missing part of the ring) and the portion where the spheroid exists are black data (see FIG. 17).
  • In the present embodiment, the image 7 representing the recess wall surface does not form a closed ring in the binarized image IMGbi. Therefore, before the hole-filling process used in the first embodiment (step S120 in FIG. 7) is executed, a circle correction process is performed to make the shape of the outer edge of the binarized image IMGbi (the outer edge of the black data) circular.
  • For this circle correction process, as in step S240 of the second embodiment, a method using a circular Hough transform, a method using a circular separability filter, a method using the concept of a convex hull, or the like may be employed.
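Of the three methods listed, the circular separability filter is the least standard. Under the usual interpretation (the separability is the ratio of between-class variance to total variance between the inside of a candidate circle and a surrounding ring), a simplified NumPy sketch could look like this, with the ring width and the search over candidate circles left as assumptions:

```python
import numpy as np

def circular_separability(gray, cx, cy, r, ring=1.5):
    """Separability score of one candidate circle (between-class / total variance).

    gray: 2-D grayscale image as a NumPy array; (cx, cy, r): candidate circle.
    The inner region is the disc of radius r, the outer region the surrounding
    ring out to ring * r. Scores near 1 indicate a strong circular boundary;
    the circle maximizing this score over (cx, cy, r) would be taken as the
    corrected outer edge.
    """
    yy, xx = np.indices(gray.shape)
    dist2 = (xx - cx) ** 2 + (yy - cy) ** 2
    inner = gray[dist2 <= r ** 2].astype(np.float64)
    outer = gray[(dist2 > r ** 2) & (dist2 <= (ring * r) ** 2)].astype(np.float64)
    if inner.size == 0 or outer.size == 0:
        return 0.0
    both = np.concatenate([inner, outer])
    total = both.var() * both.size              # total sum of squared deviations
    if total == 0:
        return 0.0
    between = (inner.size * (inner.mean() - both.mean()) ** 2
               + outer.size * (outer.mean() - both.mean()) ** 2)
    return between / total
```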
  • Next, an image based on the exclusive OR of the binarized image IMGbi generated in step S310 and the first image IMG1 generated in step S320 is generated (step S330). As a result, a second image IMG2 is generated in which the portion inside the recess wall surface excluding the portion where the spheroid exists, together with the part of the recess wall surface where the black ring is missing, is black data, and the other portions are white data (see FIG. 17).
  • In the present embodiment, the white data in the second image IMG2 is not completely surrounded by black data, and the second image IMG2 includes convex black data corresponding to the portion where the black ring is missing. Therefore, before the hole-filling process used in the first embodiment (step S140 in FIG. 7) is executed, a circle correction process is performed to make the shape of the outer edge of the second image IMG2 (the outer edge of the black data) circular. Looking only at the outer edge of the second image IMG2, as shown in FIG. 20, a circle that is partially missing and includes a convex portion is corrected into a complete circle by the circle correction process. As a result, as in the second embodiment, the second image IMG2 after the circle correction process has a completely circular outer edge, and inside that outer edge only the portion where the spheroid exists is white data.
  • Then, the process of filling the hollow portions (converting white data surrounded by black data into black data) is performed on the second image IMG2 after the circle correction process. Thus, in the present embodiment as well, the hole-filling process is performed after the circle correction process (step S340). As a result, a third image IMG3 in which the entire portion inside the recess wall surface is black data is generated.
  • object extraction from the captured image DAT is performed using the third image IMG3 generated in step S340 as the ROI mask (step S350).
  • an image IMGex obtained by extracting only the image 8 representing the spheroid from the captured image DAT is obtained (see FIG. 17).
  • In the present embodiment, the binarization step is realized by step S310, the first image generation step by step S320, the second image generation step by step S330, the third image generation step by step S340, and the image extraction step by step S350.
  • As described above, in the present embodiment, the process of correcting the outer edge shape to a circle is applied to the binarized image IMGbi whose outer edge is not circular, and the hole-filling process (converting white data surrounded by black data into black data) is then performed. Therefore, even when the image 7 representing the recess wall surface in the captured image DAT does not form a closed ring, the image 7 representing the recess wall surface can be removed from the captured image DAT and only the image 8 representing the spheroid can be extracted, regardless of the color density of the spheroid.
  • In the third embodiment above, the image shown in FIG. 27 is used as the processing target image. Even when the image to be processed is an image as shown in FIG. 21, that is, when the image 7 representing the recess wall surface in the captured image DAT does not form a closed ring and the image 7 representing the recess wall surface and the image 8 representing the spheroid are not in contact, only the image 8 representing the spheroid can be extracted from the captured image DAT by the same procedure as in the third embodiment.
  • In the above embodiments, the image processing for the captured image DAT is performed by the image processing apparatus 10, which is separate from the imaging device 20. However, the present invention is not limited to this; for example, the image processing for the captured image DAT may be performed in the control unit 25 of the imaging device 20.
  • The captured image DAT obtained by imaging the microspheroid array includes the images of a large number of recesses 30, so various image patterns such as those shown in FIGS. 25 to 27 can be mixed in the captured image DAT. For this reason, it is conceivable to prepare a plurality of processing procedures (the respective procedures of the first to third embodiments) in the image processing program to cope with these patterns. However, preparing only the processing procedure of the third embodiment suffices to extract only the image 8 representing the spheroid from images of any of the patterns shown in FIGS. 25 to 27.

Abstract

The purpose of the present invention is to extract only an image of a spheroid from a captured image which is obtained by capturing an image of a micro-spheroid array, regardless of the depth of color of the spheroid. First, a captured image which is obtained by capturing an image of a micro-spheroid array is binarized (step S110). Next, a process is carried out of filling blank portions of the image which is generated in step S110 (step S120). Next, an image is generated based on an exclusive OR operation which is carried out upon the image which is generated in step S110 and the image which is generated in step S120 (step S130). Next, a process is carried out of filling blank portions of the image which is generated in step S130 (step S140). Thereafter, an object extraction is carried out from the captured image, with the image which is generated in step S140 as a ROI mask.

Description

Image processing method, image processing apparatus, and image processing program

 The present invention relates to a method for processing a captured image obtained by imaging objects to be imaged (such as spheroids) held in a large number of depressions provided in a well.

 Conventionally, in fields such as medicine and drug discovery, cells cultured in sample containers called "microplates" or "well plates" have been observed as samples. In such a sample container, a plurality of recessed sample storage portions called wells are formed, and a sample is generally injected into a well together with a liquid medium. In recent years, such samples have been imaged with an imaging device equipped with a CCD camera or the like and observed using the image data obtained by imaging. In cancer drug discovery research, for example, cancer cells injected into a well together with a liquid culture medium are imaged with an imaging device and thereby observed and analyzed.

 In recent years, sample containers called "microspheroid array plates", in which a large number of small depressions are provided at the bottom of each well so that a microspheroid array (a regular arrangement of minute spheroids, i.e. cell clusters) can be formed, have also come into use. In the field of regenerative medicine, spheroids of uniform size must be produced in large quantities, so cells are cultured using such microspheroid array plates. During spheroid production, quality control of the spheroids is performed by observing the growth state of each spheroid and quantifying the observations. To that end, the captured image obtained by imaging the microspheroid array with an imaging device is analyzed on a computer to measure the size (diameter or area) and roundness of each spheroid, a histogram is created for the entire microspheroid array, and the growth state of the spheroids is judged from the characteristics of that histogram.

 FIG. 23 is a diagram illustrating an example of a captured image obtained by imaging a well including a large number of depressions. As can be seen from FIG. 23, the captured image includes a large number of black rings. These black rings represent the wall surfaces of the recesses of the well (hereinafter, "recess wall surfaces"). The recess wall surfaces appear as black rings because each wall surface is tapered so as to widen upward.

 Spheroids grow in the recesses of the wells of the microspheroid array plate. When the microspheroid array is imaged during spheroid growth, the recess wall surfaces are imaged as black rings as described above, so it may be difficult to distinguish a spheroid from a recess wall surface in the captured image. This is explained below with reference to FIGS. 24 to 27.

 FIG. 24 is a diagram schematically showing a part of a captured image (for example, a captured image such as that of FIG. 23) obtained by imaging a well including a large number of depressions; it corresponds to six recesses. Focusing on the captured image corresponding to one recess, as shown in FIG. 25, the captured image includes an image 7 representing the recess wall surface and an image 8 representing the spheroid. Depending on the brightness of the whole image and on the size and darkness of the image 8 representing the spheroid, it can be difficult to distinguish the spheroid from the recess wall surface in such an image. Furthermore, as shown in FIG. 26, the image 7 representing the recess wall surface and the image 8 representing the spheroid may be in contact; in that case it is difficult to identify the outer edge of the spheroid, and hence to measure its size. In addition, owing to uneven shading in the image, the image 7 representing the recess wall surface may not form a closed ring (black ring), as in the portion indicated by reference numeral 9 in FIG. 27. The presence of such images makes general-purpose image processing difficult.

 Conventionally, therefore, when the image 7 representing the recess wall surface is sufficiently darker (blacker) than the image 8 representing the spheroid, the image 8 representing the spheroid has been extracted from the captured image by removing the data of the portions whose darkness (blackness) exceeds a certain level.

 Configuration examples of the microspheroid array plate are disclosed in, for example, Japanese Unexamined Patent Publication No. 2014-79223 and International Publication No. WO 2013/042360.

 Japanese Unexamined Patent Publication No. 2014-79223; International Publication No. WO 2013/042360 pamphlet

 As described above, the image 8 representing the spheroid held in each recess of a well has conventionally been extracted from the captured image by removing the data of portions having more than a certain darkness (blackness). However, the image 7 representing the recess wall surface and the image 8 representing the spheroid may have a similar level of darkness (blackness). In such a case, the above method removes from the captured image not only the image 7 representing the recess wall surface but also the image 8 representing the spheroid, so the image 8 representing the spheroid is not extracted. Thus, when the image 7 representing the recess wall surface and the image 8 representing the spheroid have a similar darkness (blackness), it is difficult to separate the two and extract only the image 8 representing the spheroid from the captured image.

 Accordingly, an object of the present invention is to provide an image processing method capable of extracting only the spheroid images from a captured image obtained by imaging a microspheroid array, regardless of the color density of the spheroids.
 本発明の第1の局面は、撮像対象物を保持するための複数の窪部を有するウェルを撮像することによって得られた撮像画像を処理する画像処理方法であって、
 前記撮像画像に対して2値化処理を施すことによって黒データと白データとからなる2値化画像を生成する2値化ステップと、
 前記2値化画像のうち黒データによって囲まれた白データを黒データに変換することによって第1画像を生成する第1画像生成ステップと、
 前記2値化画像と前記第1画像との排他的論理和を求め、結果が真であるデータを黒データに対応付けるとともに結果が偽であるデータを白データに対応付けた第2画像を生成する第2画像生成ステップと、
 前記第2画像のうち黒データによって囲まれた白データを黒データに変換することによって第3画像を生成する第3画像生成ステップと、
 前記撮像画像のうち前記第3画像を構成する黒データが存在する領域の画像のみを抽出する画像抽出ステップと
を含むことを特徴とする。
A first aspect of the present invention is an image processing method for processing a captured image obtained by imaging a well having a plurality of recesses for holding an imaging target object,
A binarization step of generating a binarized image composed of black data and white data by performing binarization processing on the captured image;
A first image generation step of generating a first image by converting white data surrounded by black data in the binarized image into black data;
An exclusive OR of the binarized image and the first image is obtained, and a second image in which data with a true result is associated with black data and data with a false result is associated with white data is generated. A second image generation step;
A third image generation step of generating a third image by converting white data surrounded by black data in the second image into black data;
And an image extracting step of extracting only an image of an area where black data constituting the third image exists in the captured image.
 本発明の第2の局面は、本発明の第1の局面において、
 前記第3画像生成ステップでは、黒データによって囲まれた白データを黒データに変換する前に、各窪部に対応する第2画像の外縁の形状が円形でない場合に当該外縁の形状を円形に補正する円補正処理が行われることを特徴とする。
According to a second aspect of the present invention, in the first aspect of the present invention,
In the third image generation step, if the shape of the outer edge of the second image corresponding to each recess is not circular before the white data surrounded by the black data is converted to black data, the shape of the outer edge is made circular. A circle correction process for correction is performed.
 本発明の第3の局面は、本発明の第2の局面において、
 前記円補正処理は、黒データの領域と白データの領域との境界点にハフ変換を施すことによって円を抽出し、各窪部に対応する第2画像の外縁の形状をその抽出した円の形状に補正することを特徴とする。
According to a third aspect of the present invention, in the second aspect of the present invention,
The circle correction process extracts a circle by performing a Hough transform on a boundary point between a black data area and a white data area, and the shape of the outer edge of the second image corresponding to each recess is extracted from the extracted circle. The shape is corrected.
 本発明の第4の局面は、本発明の第2の局面において、
 前記円補正処理は、画像領域を内部領域と外部領域との分離する円形分離度フィルタを用いて分離度を求めることによって円を抽出し、各窪部に対応する第2画像の外縁の形状をその抽出した円の形状に補正することを特徴とする。
According to a fourth aspect of the present invention, in the second aspect of the present invention,
In the circle correction process, a circle is extracted by obtaining a separation degree using a circular separation degree filter that separates an image area into an inner area and an outer area, and the shape of the outer edge of the second image corresponding to each depression is obtained. The extracted circle shape is corrected.
 本発明の第5の局面は、本発明の第2の局面において、
 前記円補正処理は、各窪部につき第2画像を構成する全ての黒データを含む最小の円を抽出し、各窪部に対応する第2画像の外縁の形状をその抽出した円の形状に補正することを特徴とする。
According to a fifth aspect of the present invention, in the second aspect of the present invention,
The circle correction process extracts a minimum circle including all black data constituting the second image for each recess, and changes the shape of the outer edge of the second image corresponding to each recess to the shape of the extracted circle. It is characterized by correcting.
 本発明の第6の局面は、本発明の第2から第5までのいずれかの局面において、
 前記第1画像生成ステップでは、黒データによって囲まれた白データを黒データに変換する前に、各窪部に対応する2値化画像の外縁の形状が円形でない場合に当該外縁の形状を円形に補正する円補正処理が行われることを特徴とする。
According to a sixth aspect of the present invention, in any one of the second to fifth aspects of the present invention,
In the first image generation step, if the shape of the outer edge of the binarized image corresponding to each depression is not circular before the white data surrounded by the black data is converted to black data, the shape of the outer edge is circular. It is characterized in that a circle correction process is performed to correct the above.
 本発明の第7の局面は、撮像対象物を保持するための複数の窪部を有するウェルを撮像することによって得られた撮像画像を処理する画像処理装置であって、
 前記撮像画像に対して2値化処理を施すことによって黒データと白データとからなる2値化画像を生成する2値化処理部と、
 前記2値化画像のうち黒データによって囲まれた白データを黒データに変換することによって第1画像を生成する第1画像生成部と、
 前記2値化画像と前記第1画像との排他的論理和を求め、結果が真であるデータを黒データに対応付けるとともに結果が偽であるデータを白データに対応付けた第2画像を生成する第2画像生成部と、
 前記第2画像のうち黒データによって囲まれた白データを黒データに変換することによって第3画像を生成する第3画像生成部と、
 前記撮像画像のうち前記第3画像を構成する黒データが存在する領域の画像のみを抽出する画像抽出部と
を備えることを特徴とする。
A seventh aspect of the present invention is an image processing apparatus that processes a captured image obtained by capturing an image of a well having a plurality of recesses for holding an imaging target object.
A binarization processing unit that generates a binarized image composed of black data and white data by performing binarization processing on the captured image;
A first image generation unit that generates a first image by converting white data surrounded by black data among the binarized images into black data;
An exclusive OR of the binarized image and the first image is obtained, and a second image in which data with a true result is associated with black data and data with a false result is associated with white data is generated. A second image generation unit;
A third image generation unit for generating a third image by converting white data surrounded by black data in the second image into black data;
And an image extracting unit that extracts only an image of an area where black data constituting the third image exists in the captured image.
An eighth aspect of the present invention is an image processing program for processing a captured image obtained by imaging a well having a plurality of depressions for holding an imaging target, the program causing a CPU of a computer, using a memory, to execute: a binarization step of generating a binarized image composed of black data and white data by binarizing the captured image; a first image generation step of generating a first image by converting white data surrounded by black data in the binarized image into black data; a second image generation step of computing the exclusive OR of the binarized image and the first image and generating a second image in which data for which the result is true is assigned black data and data for which the result is false is assigned white data; a third image generation step of generating a third image by converting white data surrounded by black data in the second image into black data; and an image extraction step of extracting, from the captured image, only the image of the region where the black data constituting the third image is present.
According to a ninth aspect of the present invention, in the eighth aspect of the present invention, in the third image generation step, before white data surrounded by black data is converted into black data, a circle correction process is performed that, when the shape of the outer edge of the second image corresponding to a depression is not circular, corrects that outer edge to a circle.
According to a tenth aspect of the present invention, in the ninth aspect of the present invention, the circle correction process extracts a circle by applying a Hough transform to the boundary points between black data regions and white data regions, and corrects the shape of the outer edge of the second image corresponding to each depression to the shape of the extracted circle.
According to an eleventh aspect of the present invention, in the ninth aspect of the present invention, the circle correction process extracts a circle by computing, with a circular separability filter that divides an image region into an inner region and an outer region, the degree of separation between the two regions, and corrects the shape of the outer edge of the second image corresponding to each depression to the shape of the extracted circle.
According to a twelfth aspect of the present invention, in the ninth aspect of the present invention, the circle correction process extracts, for each depression, the smallest circle containing all of the black data constituting the second image, and corrects the shape of the outer edge of the second image corresponding to each depression to the shape of the extracted circle.
According to a thirteenth aspect of the present invention, in any one of the ninth to twelfth aspects of the present invention, in the first image generation step, before white data surrounded by black data is converted into black data, a circle correction process is performed that, when the shape of the outer edge of the binarized image corresponding to a depression is not circular, corrects that outer edge to a circle.
According to the first aspect of the present invention, for a captured image obtained by imaging a well having a plurality of depressions, a second image is generated that represents the exclusive OR of a binarized image based on the captured image and a first image obtained by filling the hollow portions of the binarized image (converting white data surrounded by black data into black data). In the resulting second image, only the portion inside the depression wall surface of the well excluding the portion where the imaging target (typically a spheroid) is present is black data. Object extraction from the captured image is then performed using, as an ROI mask, a third image obtained by filling the hollow portions of the second image, so that only the portion of the captured image inside the depression wall surface is extracted. As a result, the image representing the depression wall surface is removed from the captured image, and an image containing only the image of the imaging target is obtained. Moreover, even if the data of the portion where the imaging target is present happens to be converted into white data during binarization, the third image serving as the ROI mask for object extraction still has black data over the entire portion inside the depression wall surface and white data elsewhere. Therefore, even when the image representing the imaging target is faint in the captured image, only the image of the imaging target is extracted. In this way, the image of the imaging target can be extracted from the captured image regardless of how dark the imaging target appears; for example, only the spheroid images can be extracted from a captured image of a microspheroid array regardless of the color density of the spheroids.
According to the second aspect of the present invention, when the third image is generated, if the shape of the outer edge of the second image corresponding to a depression is not circular, the outer edge is first corrected to a circle and then the hollow portion is filled (white data surrounded by black data is converted into black data). Therefore, even when the image representing the depression wall surface and the image representing the imaging target are in contact in the captured image, only the image of the imaging target is extracted from the captured image regardless of the color density of the imaging target.
According to the third aspect of the present invention, the same effects as in the second aspect of the present invention are obtained.
According to the fourth aspect of the present invention, the same effects as in the second aspect of the present invention are obtained.
According to the fifth aspect of the present invention, the same effects as in the second aspect of the present invention are obtained.
According to the sixth aspect of the present invention, when the first image is generated, if the shape of the outer edge of the binarized image corresponding to a depression is not circular, the outer edge is first corrected to a circle and then the hollow portion is filled (white data surrounded by black data is converted into black data). Therefore, even when the image representing the depression wall surface does not form a closed ring in the captured image, only the image of the imaging target is extracted from the captured image regardless of the color density of the imaging target.
According to the seventh aspect of the present invention, the same effects as in the first aspect of the present invention are obtained.
According to the eighth to thirteenth aspects of the present invention, the same effects as in the first to sixth aspects of the present invention, respectively, are obtained.
FIG. 1 is a block diagram showing the schematic configuration of an image processing system according to a first embodiment of the present invention.
FIG. 2 is a diagram showing the configuration of the imaging device in the first embodiment.
FIG. 3 is a plan view of one well in the first embodiment.
FIG. 4 is a cross-sectional view taken along line A-A of FIG. 3.
FIG. 5 is a diagram showing a more detailed configuration of the imaging unit in the first embodiment.
FIG. 6 is a block diagram showing the hardware configuration of the image processing apparatus in the first embodiment.
FIG. 7 is a flowchart showing the image processing procedure in the first embodiment.
FIG. 8 is a diagram for explaining the image processing in the first embodiment.
FIG. 9 is a truth table of the exclusive OR used when generating the second image in the first embodiment.
FIG. 10 is a flowchart showing the image processing procedure in a second embodiment of the present invention.
FIG. 11 is a diagram for explaining the image processing in the second embodiment.
FIG. 12 is a diagram for explaining the circle correction process in the second embodiment.
FIG. 13 is a diagram showing the second image after the circle correction process in the second embodiment.
FIG. 14 is a diagram for explaining circle correction using the circular Hough transform in the second embodiment.
FIG. 15 is a diagram for explaining circle correction using a circular separability filter in the second embodiment.
FIG. 16 is a flowchart showing the image processing procedure in a third embodiment of the present invention.
FIG. 17 is a diagram for explaining the image processing in the third embodiment.
FIG. 18 is a diagram for explaining the circle correction process in the third embodiment.
FIG. 19 is a diagram showing the binarized image after the circle correction process in the third embodiment.
FIG. 20 is a diagram for explaining the circle correction process in the third embodiment.
FIG. 21 is a diagram showing an example of a captured image corresponding to one depression in a modification of the third embodiment.
FIG. 22 is a diagram for explaining that a circular image remains circular after the circle correction process.
FIG. 23 is a diagram showing an example of a captured image obtained by imaging a well including a large number of depressions.
FIG. 24 is a diagram schematically showing a part of a captured image obtained by imaging a well including a large number of depressions.
FIG. 25 is a diagram for explaining that it is difficult to distinguish a spheroid from a depression wall surface in a captured image.
FIG. 26 is a diagram for explaining that it is difficult to distinguish a spheroid from a depression wall surface in a captured image.
FIG. 27 is a diagram for explaining that it is difficult to distinguish a spheroid from a depression wall surface in a captured image.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.
<1. First Embodiment>
<1.1 Configuration>
<1.1.1 Schematic Configuration of the Image Processing System>
FIG. 1 is a block diagram showing the schematic configuration of an image processing system according to the first embodiment of the present invention. This image processing system includes an imaging device 20 and an image processing apparatus 10. The imaging device 20 images a microspheroid array, and the captured image DAT obtained by the imaging is sent to the image processing apparatus 10, which applies the image processing described below to the captured image DAT.
<1.1.2 Imaging Device>
FIG. 2 is a diagram showing the configuration of the imaging device 20 in this embodiment. The imaging device 20 is a device for imaging cells and the like cultured in various sample containers. In this embodiment, the imaging device 20 images spheroids cultured in a liquid injected into the wells W formed on the upper surface of a microspheroid array plate MP, that is, it images a microspheroid array.
In the microspheroid array plate MP, a plurality of (for example, six) wells W, each serving as a sample storage portion having an opening on the upper surface side and a transparent bottom surface on the lower surface side, are arranged. Each well W has, for example, a circular shape. FIG. 3 is a plan view of one well W, and FIG. 4 is a cross-sectional view taken along line A-A of FIG. 3. As shown in FIGS. 3 and 4, each well W is provided with a large number of depressions (recessed spaces) 30. The wall surface of each depression 30 (the depression wall surface) is tapered, and the bottom surface of each depression 30 is either a downwardly convex curved surface or a flat surface. A spheroid SF, which is the imaging target, is held in such a depression 30, and a predetermined amount of liquid (culture medium) M is injected into each well W. The depressions are not shown in FIG. 2.
As shown in FIG. 2, the imaging device 20 includes a light source 21 that emits imaging light, a holder 22 for holding a sample container such as the microspheroid array plate MP, an imaging unit 23 that images the spheroids and the like in the wells W, a camera drive mechanism 24 that moves the imaging unit 23 during imaging, and a control unit 25 that controls the operations of the light source 21, the imaging unit 23, and the camera drive mechanism 24. The light source 21 is disposed at the upper part of the imaging device 20, the holder 22 is disposed below the light source 21, and the imaging unit 23 is disposed below the holder 22.
Based on a control command given from a light source control unit 252 in the control unit 25, the light source 21 irradiates the wells W with light L from above the microspheroid array plate MP held by the holder 22. The emitted light L is visible light, typically white light.
When imaging is performed by the imaging device 20, the microspheroid array plate MP, comprising a plurality of wells W holding spheroids SF, is held in the holder 22. The holder 22 abuts the peripheral edge of the lower surface of the microspheroid array plate MP and holds the plate in a substantially horizontal posture.
The imaging unit 23 captures an image of the microspheroid array plate MP by receiving the transmitted light Lt that is emitted from the light source 21 and passes through to the underside of the microspheroid array plate MP held by the holder 22. The imaging unit 23 is connected to the camera drive mechanism 24, and by the operation of the camera drive mechanism 24 the imaging unit 23 moves horizontally along the lower surface of the microspheroid array plate MP. That is, the imaging unit 23 can scan along the lower surface of the microspheroid array plate MP. It suffices, however, that relative movement between the imaging unit 23 and the microspheroid array plate MP is achieved; the microspheroid array plate MP may instead be moved with respect to the imaging unit 23.
The camera drive mechanism 24 moves the imaging unit 23 in the horizontal direction based on a control command given from an imaging control unit 253 in the control unit 25.
The control unit 25 includes a CPU 251, the light source control unit 252, the imaging control unit 253, an AD converter (A/D) 254, a storage unit 255, and an interface unit 256. The CPU 251 controls the operation of each component in the control unit 25 and performs various arithmetic processes. The light source control unit 252 controls the lighting state of the light source 21. The imaging control unit 253 controls the operations of the imaging unit 23 and the camera drive mechanism 24 so that the imaging target is imaged according to a predetermined scanning movement recipe. The AD converter (A/D) 254 receives the image signal (analog data) obtained by imaging with the imaging unit 23 and converts it into digital image data. The storage unit 255 holds the digital image data. The interface unit 256 has a function of accepting operation input from the user, a function of displaying information such as processing results to the user, and a function of performing data communication with other devices via a communication line. Via this interface unit 256, for example, the digital image data held in the storage unit 255 is transmitted to the image processing apparatus 10 as the captured image DAT. An input receiving unit (such as a keyboard and a mouse) that receives operation input, a display unit that displays information, a communication line, and the like are connected to the interface unit 256.
FIG. 5 is a diagram showing a more detailed configuration of the imaging unit 23. As shown in FIG. 5, the imaging unit 23 includes a line sensor 231, for example a CCD, that outputs an electrical signal corresponding to incident light, and an imaging optical system 232 that forms an image of the light emitted from the bottom surface of the microspheroid array plate MP held by the holder 22 on the light receiving surface of the line sensor 231. The imaging optical system 232 may include a plurality of optical components such as lenses, but in FIG. 5 it is represented by a single lens.
The line sensor 231 is a one-dimensional array of a large number of fine imaging elements 231a arranged along one axis in a horizontal plane. The line sensor 231 is configured so that, in its longitudinal direction, at least one entire well W (preferably a plurality of wells W) can be included in the imaging range SR via the imaging optical system 232.
<1.1.3 Image Processing Apparatus>
FIG. 6 is a block diagram showing the hardware configuration of the image processing apparatus 10. The image processing apparatus 10 includes a CPU 11, a ROM 12, a RAM 13, an auxiliary storage device 14, an input unit 15, a display unit 16, an optical disk drive 17, and a network interface unit 18. The CPU 11 performs various arithmetic processes according to given instructions. The ROM 12 is a read-only memory that stores, for example, an initial program to be executed by the CPU 11 when the image processing apparatus 10 starts up. The RAM 13 is a writable volatile memory that temporarily stores programs being executed, data, and the like. The auxiliary storage device 14 stores various data; in relation to the present invention, the image processing program P and the captured image (digital image data) DAT transmitted from the imaging device 20 are stored in the auxiliary storage device 14. The input unit 15 receives input from an operator via a mouse or a keyboard. The display unit 16 displays, for example, various screens for the operator's work, the captured image DAT transmitted from the imaging device 20, and images obtained by applying the image processing described below to the captured image DAT. The optical disk drive 17 is a device for reading data from and writing data to an optical disk 170. The network interface unit 18 has a function of performing data communication with other devices via a communication line; the captured image DAT transmitted from the imaging device 20 is input into the image processing apparatus 10 via the network interface unit 18.
As described above, the image processing program P is stored in the auxiliary storage device 14. When the operator instructs execution of image processing on the captured image DAT, the image processing program P is read into the RAM 13, and the CPU 11 executes the program read into the RAM 13, whereby the image processing described below is carried out. The image processing program P is provided stored on a computer-readable recording medium such as a CD-ROM or a DVD-ROM. That is, the user may, for example, purchase an optical disk (CD-ROM, DVD-ROM, or the like) 170 as a recording medium of the image processing program P, mount it in the optical disk drive 17, read the image processing program P from the optical disk 170, and install it in the auxiliary storage device 14. Alternatively, the image processing program P may be received by the network interface unit 18 via a LAN or the like and installed in the auxiliary storage device 14.
<1.2 Image Processing Method>
Next, the image processing procedure in this embodiment will be described with reference to the flowchart shown in FIG. 7. Here, attention is focused on the processing of the captured image corresponding to one depression 30. The image to be processed is an image such as that shown in FIG. 25: in the captured image, the image 7 representing the depression wall surface forms a closed ring, and the image 7 representing the depression wall surface and the image 8 representing the spheroid are not in contact. It is also assumed that the image 7 representing the depression wall surface and the image 8 representing the spheroid have approximately the same darkness (blackness).
After the image processing starts, binarization is first applied to the captured image DAT (step S110). The captured image DAT is digital image data expressed with, for example, 256 gray levels. By binarizing such a captured image DAT, a binarized image IMGbi consisting of black data (data corresponding to the value "1") and white data (data corresponding to the value "0") is generated. In this binarized image IMGbi, as shown in FIG. 8, the portion corresponding to the depression wall surface and the portion where the spheroid is present become black data. The threshold of the binarization process must be set so that the binarization is performed in this way.
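As an illustration only, the binarization of step S110 could be sketched with OpenCV as follows; the file name and the fixed threshold value are assumptions made for this sketch, since the specification only requires that the threshold be chosen so that both the wall ring and the spheroid become black data.

```python
import cv2

# Load the captured image DAT as a grayscale image (the file name is hypothetical).
captured = cv2.imread("captured_well.png", cv2.IMREAD_GRAYSCALE)

# Step S110: pixels darker than the threshold (depression wall and spheroid)
# become black data (value 1); everything else becomes white data (value 0).
THRESHOLD = 128  # assumed example value
_, binarized = cv2.threshold(captured, THRESHOLD, 1, cv2.THRESH_BINARY_INV)
```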
Next, the white data surrounded by black data in the binarized image IMGbi is converted into black data (in other words, the hollow portions are filled) (step S120). For convenience, the image generated in step S120 is referred to as the "first image" and denoted IMG1. Step S120 converts into black data the white data lying between the black data corresponding to the depression wall surface and the black data of the portion where the spheroid is present. In the first image IMG1 generated by this conversion, as shown in FIG. 8, the entire portion inside the outer edge of the depression 30 becomes black data.
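The hole filling of step S120 (and likewise of the later fill steps) can be sketched as a flood fill from the image border, under the assumption that the corner pixel lies outside every depression; the helper name fill_holes is introduced for this sketch and is not from the specification.

```python
import numpy as np
import cv2

def fill_holes(binary: np.ndarray) -> np.ndarray:
    """Convert white data (0) enclosed by black data (1) into black data (1).

    Flood-fills the background starting from the top-left corner, which is
    assumed to be white data outside every depression; any white pixel the
    fill cannot reach is an enclosed hole and is set to black data.
    """
    flood = binary.copy()
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, mask, (0, 0), 1)   # paint the outside background with 1
    filled = binary.copy()
    filled[flood == 0] = 1                  # pixels the fill never reached are holes
    return filled

first_image = fill_holes(binarized)   # IMG1 of step S120
```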
Next, an image based on the exclusive OR of the binarized image IMGbi generated in step S110 and the first image IMG1 generated in step S120 is generated (step S130). For convenience, the image generated in step S130 is referred to as the "second image" and denoted IMG2. The second image IMG2 is generated in step S130 according to the truth table shown in FIG. 9: a pixel of the second image IMG2 becomes black data when it is black data in exactly one of the binarized image IMGbi and the first image IMG1, and white data otherwise. In other words, in step S130 the exclusive OR of the binarized image IMGbi and the first image IMG1 is computed, and the second image IMG2 is generated by assigning black data where the result is true and white data where the result is false.
In the binarized image IMGbi, the portion corresponding to the depression wall surface and the portion where the spheroid is present are black data and everything else is white data, while in the first image IMG1 the entire portion inside the outer edge of the depression 30 is black data. Consequently, in the second image IMG2 generated in step S130, as shown in FIG. 8, the portion inside the depression wall surface excluding the portion where the spheroid is present becomes black data and everything else becomes white data. In FIG. 8, the portion corresponding to the outer edge of the depression 30 is indicated by a dotted line (the same applies to FIGS. 11 and 17).
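Continuing the same sketch, the exclusive OR of step S130 is a single pixel-wise operation on the two binary arrays; the variable names carry over from the snippets above.

```python
import numpy as np

# Step S130: a pixel of IMG2 is black data (1) exactly when it is black data
# in one of IMGbi and IMG1 and white data in the other (truth table of FIG. 9).
second_image = np.bitwise_xor(binarized, first_image)
```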
Next, the white data surrounded by black data in the second image IMG2 is converted into black data (in other words, the hollow portions are filled) (step S140). For convenience, the image generated in step S140 is referred to as the "third image" and denoted IMG3. Step S140 converts the white data of the portion where the spheroid is present into black data. In the third image IMG3 generated by this conversion, as shown in FIG. 8, the entire portion inside the depression wall surface becomes black data.
Next, object extraction from the captured image DAT is performed using the third image IMG3 generated in step S140 as an ROI mask (step S150). That is, in step S150, only the image of the region where the black data constituting the third image IMG3 is present is extracted from the captured image DAT. As shown in FIG. 8, this yields an image IMGex in which only the image 8 representing the spheroid has been extracted from the captured image DAT.
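A sketch of steps S140 and S150 under the same assumptions, reusing fill_holes from above; filling the masked-out pixels with white (255) is an assumption about how IMGex is represented, not a requirement of the specification.

```python
# Step S140: fill the enclosed white data (the spheroid region) to obtain IMG3.
third_image = fill_holes(second_image)

# Step S150: use IMG3 as an ROI mask on the original grayscale image.
extracted = captured.copy()
extracted[third_image == 0] = 255   # IMGex: only the spheroid region keeps its gray values
```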
In this embodiment, step S110 realizes the binarization step, step S120 realizes the first image generation step, step S130 realizes the second image generation step, step S140 realizes the third image generation step, and step S150 realizes the image extraction step.
<1.3 Effects>
According to this embodiment, for a captured image DAT obtained by imaging a microspheroid array plate MP comprising a plurality of wells W each containing a large number of depressions 30 holding spheroids SF, a second image IMG2 is generated that represents the exclusive OR of the binarized image IMGbi based on the captured image DAT and the first image IMG1 obtained by filling the hollow portions of the binarized image IMGbi (converting white data surrounded by black data into black data). In the resulting second image IMG2, only the portion inside the depression wall surface of the well W excluding the portion where the spheroid is present is black data. Object extraction from the captured image DAT is then performed using, as an ROI mask, the third image IMG3 obtained by filling the hollow portions of the second image IMG2, so that only the portion of the captured image DAT inside the depression wall surface of the well W is extracted. As a result, the image 7 representing the depression wall surface is removed from the captured image DAT, and an image containing only the image 8 representing the spheroid is obtained.
Moreover, even if the data of the portion where the spheroid is present happens to be converted into white data during binarization, the third image IMG3 serving as the ROI mask for object extraction still has black data over the entire portion inside the depression wall surface of the well W and white data elsewhere. Therefore, even when the image representing the spheroid is faint in the captured image DAT, only the image 8 representing the spheroid is extracted from the captured image DAT.
As described above, according to this embodiment, only the spheroid images can be extracted from the captured image DAT obtained by imaging the microspheroid array, regardless of the color density of the spheroids.
<2. Second Embodiment>
A second embodiment of the present invention will now be described. The schematic configuration of the image processing system and the configurations of the imaging device 20 and the image processing apparatus 10 are the same as in the first embodiment, so their description is omitted (see FIGS. 1 to 6).
<2.1 Image Processing Method>
<2.1.1 Processing Procedure>
The image processing procedure in this embodiment will be described with reference to the flowchart shown in FIG. 10. In this embodiment, the image to be processed is an image such as that shown in FIG. 26: in the captured image, the image 7 representing the depression wall surface forms a closed ring, and the image 7 representing the depression wall surface and the image 8 representing the spheroid are in contact. Here too, it is assumed that the image 7 representing the depression wall surface and the image 8 representing the spheroid have approximately the same darkness (blackness).
After the image processing starts, binarization is first applied to the captured image DAT, as in the first embodiment (step S210). This generates a binarized image IMGbi in which the portion corresponding to the depression wall surface and the portion where the spheroid is present become black data (see FIG. 11). Next, as in the first embodiment, the white data surrounded by black data in the binarized image IMGbi is converted into black data (in other words, the hollow portions are filled) (step S220). This generates a first image IMG1 in which the entire portion inside the outer edge of the depression 30 is black data (see FIG. 11).
Next, as in the first embodiment, an image based on the exclusive OR of the binarized image IMGbi generated in step S210 and the first image IMG1 generated in step S220 is generated (step S230). This produces a second image IMG2 in which the portion inside the depression wall surface excluding the portion where the spheroid is present is black data and everything else is white data (see FIG. 11).
As can be seen from FIG. 11, in this embodiment the white data of the second image IMG2 is not completely surrounded by black data. Therefore, before performing the hole-filling process of the first embodiment (step S140 in FIG. 7), a circle correction process is performed that makes the shape of the outer edge of the second image IMG2 (the outer edge of the black data) circular. Focusing only on the outer edge of the second image IMG2, as shown in FIG. 12, a circle with a missing portion is corrected into a complete circle by the circle correction process. As a result, in the second image IMG2 after the circle correction, as shown in FIG. 13, the outer edge is a complete circle and, inside the outer edge, only the portion where the spheroid is present is white data. The hole-filling process (converting white data surrounded by black data into black data) is then applied to the corrected second image IMG2. Thus, in this embodiment, the hole-filling process is performed after the circle correction process (step S240). This generates a third image IMG3 in which the entire portion inside the depression wall surface is black data (see FIG. 11).
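A minimal sketch of how the circle correction of step S240 can be combined with the hole filling, assuming a circle (center and radius) has already been fitted by one of the techniques described in section 2.1.2 below; drawing the fitted circle as black data closes the gap in the ring so that the flood-fill-based hole filling behaves as in the first embodiment. The helper name correct_and_fill is an assumption for this sketch.

```python
import cv2

def correct_and_fill(binary, center, radius):
    """Close the outer edge with a fitted circle, then fill the hollow portion.

    `center` is an (x, y) tuple and `radius` an int, both assumed to come from
    a circle-fitting step (Hough transform, separability filter, or minimum
    enclosing circle).
    """
    corrected = binary.copy()
    cv2.circle(corrected, center, radius, color=1, thickness=2)  # redraw the ring as black data
    return fill_holes(corrected)

# Step S240 (center and radius taken from one of the fitting sketches below):
# third_image = correct_and_fill(second_image, center, radius)
```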
Thereafter, as in the first embodiment, object extraction from the captured image DAT is performed using the third image IMG3 generated in step S240 as an ROI mask (step S250). This yields an image IMGex in which only the image 8 representing the spheroid has been extracted from the captured image DAT (see FIG. 11).
In this embodiment, step S210 realizes the binarization step, step S220 realizes the first image generation step, step S230 realizes the second image generation step, step S240 realizes the third image generation step, and step S250 realizes the image extraction step.
<2.1.2 Circle Correction Process>
Specific techniques for the circle correction process described above will now be explained. Since these techniques themselves are well known, they are described only briefly below. Any technique other than those described below may be adopted as long as it can correct an incomplete circular shape into a substantially complete circle.
<2.1.2.1 Technique Using the Circular Hough Transform>
First, a technique using the circular Hough transform will be described. In this technique, using an XY plane as shown in FIG. 14, many circles passing through points on the outer edge of the second image IMG2 are considered. For example, for a circle 41 passing through a point Z(x, y), many circles can be drawn with various values of the center coordinates C(p, q) and the radius r; each individual circle is identified by the combination (p, q, r). Likewise, many circles with various center coordinates C(p, q) and radii r can be drawn through the points on the outer edge of the second image IMG2 other than the point Z(x, y). When many circles passing through the points on the outer edge of the second image IMG2 are considered in this way, the circle identified by the combination (p, q, r) that appears most frequently may be taken as the outer edge of the second image IMG2 after the circle correction.
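As an illustration only, a circular Hough transform is available in OpenCV as cv2.HoughCircles; the parameter values below are assumptions chosen for a small single-depression sub-image, not values taken from the specification.

```python
import cv2
import numpy as np

# Detect the dominant circle in the (0/1) second image with the circular
# Hough transform. HoughCircles expects an 8-bit grayscale image.
edges = (second_image * 255).astype(np.uint8)
circles = cv2.HoughCircles(
    edges,
    cv2.HOUGH_GRADIENT,
    dp=1,           # accumulator resolution equal to the image resolution
    minDist=100,    # assumed: only one depression per processed sub-image
    param1=100,     # Canny high threshold (assumed)
    param2=20,      # accumulator threshold (assumed)
    minRadius=10,   # assumed bounds on the depression radius
    maxRadius=200,
)
if circles is not None:
    x, y, r = circles[0, 0]                 # strongest circle: center (x, y), radius r
    center, radius = (int(x), int(y)), int(r)
```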
<2.1.2.2 Technique Using a Circular Separability Filter>
Next, a technique using a circular separability filter will be described. In this technique, as shown in FIG. 15, two regions R1 and R2 separated by a circle 42 having a certain center and a certain radius are considered, and the degree of separation between the region R1 and the region R2 is computed based on the proportion of black data (or the proportion of white data) in each of the regions R1 and R2. The degree of separation can be computed in this way while varying the center coordinates and the radius of the circle, and the circle giving the maximum degree of separation may be taken as the outer edge of the second image IMG2 after the circle correction.
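A sketch of the idea under assumed formulations: the separability is measured here as the between-region variance of the black-data values, normalized by the total variance, and the best circle is found by a brute-force grid search; the specification does not prescribe this particular score or search strategy.

```python
import numpy as np

def separability(binary, center, radius):
    """Separation degree between the inside (R1) and outside (R2) of a circle.

    Simplified formulation assumed for this sketch: between-region variance of
    the mean black-data value divided by the total variance, evaluated over
    the disk and an annular neighbourhood around the candidate circle.
    """
    h, w = binary.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - center[0], yy - center[1])
    inner = binary[dist <= radius].astype(float)                              # region R1
    outer = binary[(dist > radius) & (dist <= 1.5 * radius)].astype(float)    # region R2
    both = np.concatenate([inner, outer])
    if inner.size == 0 or outer.size == 0 or both.var() == 0:
        return 0.0
    n1, n2, m = inner.size, outer.size, both.mean()
    between = (n1 * (inner.mean() - m) ** 2 + n2 * (outer.mean() - m) ** 2) / (n1 + n2)
    return between / both.var()

def best_circle(binary, radii, step=4):
    """Grid search over candidate centers and radii for the maximum separability."""
    h, w = binary.shape
    best, best_score = None, -1.0
    for r in radii:
        for cy in range(r, h - r, step):
            for cx in range(r, w - r, step):
                score = separability(binary, (cx, cy), r)
                if score > best_score:
                    best_score, best = score, ((cx, cy), r)
    return best
```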
<2.1.2.3 Technique Using the Concept of the Convex Hull>
Finally, a technique using the concept of the convex hull will be described. The convex hull is the smallest convex polygon that contains all of the given points. Using this concept, the smallest circle containing all of the black data constituting the second image IMG2 can be obtained, and this circle may be taken as the outer edge of the second image IMG2 after the circle correction.
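For illustration, OpenCV provides cv2.minEnclosingCircle, which returns the smallest circle containing a given point set; applying it to the coordinates of the black data gives the corrected outer edge. This is a sketch of the idea, not the patent's reference implementation.

```python
import cv2
import numpy as np

# Coordinates (x, y) of all black-data pixels of the second image IMG2.
ys, xs = np.nonzero(second_image)
points = np.column_stack([xs, ys]).astype(np.float32)

# Smallest circle containing every black-data point (convex-hull idea).
(cx, cy), r = cv2.minEnclosingCircle(points)
center, radius = (int(round(cx)), int(round(cy))), int(round(r))
```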
<2.2 Effects>
According to this embodiment, when the third image is generated, the outer edge of a second image IMG2 whose outer edge is not circular is first corrected to a circle and then the hollow portion is filled (white data surrounded by black data is converted into black data). Therefore, even when the image 7 representing the depression wall surface and the image 8 representing the spheroid are in contact in the captured image DAT, the image 7 representing the depression wall surface can be removed from the captured image DAT and only the image 8 representing the spheroid can be extracted, regardless of the color density of the spheroid.
<3. Third Embodiment>
A third embodiment of the present invention will now be described. The schematic configuration of the image processing system and the configurations of the imaging device 20 and the image processing apparatus 10 are the same as in the first embodiment, so their description is omitted (see FIGS. 1 to 6).
<3.1 Image Processing Method>
The image processing procedure in this embodiment will be described with reference to the flowchart shown in FIG. 16. In this embodiment, the image to be processed is an image such as that shown in FIG. 27: in the captured image, the image 7 representing the depression wall surface does not form a closed ring, and the image 7 representing the depression wall surface and the image 8 representing the spheroid are in contact. Here too, it is assumed that the image 7 representing the depression wall surface and the image 8 representing the spheroid have approximately the same darkness (blackness).
After the image processing starts, binarization is first applied to the captured image DAT, as in the first embodiment (step S310). This generates a binarized image IMGbi in which the portion corresponding to the depression wall surface, except for a missing part, and the portion where the spheroid is present become black data (see FIG. 17).
As can be seen from FIG. 17, in this embodiment the image 7 representing the depression wall surface does not form a closed ring in the binarized image IMGbi. Therefore, before performing the hole-filling process of the first embodiment (step S120 in FIG. 7), a circle correction process is performed that makes the shape of the outer edge of the binarized image IMGbi (the outer edge of the black data) circular. As in step S240 of the second embodiment, this circle correction is performed using, for example, the technique based on the circular Hough transform, the technique based on the circular separability filter, or the technique based on the concept of the convex hull. Focusing only on the outer edge of the binarized image IMGbi, as shown in FIG. 18, a circle with a missing portion is corrected into a complete circle by the circle correction process, so that, as shown in FIG. 19, the outer edge of the corrected binarized image IMGbi is a complete circle. The hole-filling process (converting white data surrounded by black data into black data) is then applied to the corrected binarized image IMGbi. Thus, in this embodiment, the hole-filling process is performed after the circle correction process (step S320). This generates a first image IMG1 in which the entire portion inside the outer edge of the depression 30 is black data (see FIG. 17).
Next, as in the first embodiment, an image based on the exclusive OR of the binarized image IMGbi generated in step S310 and the first image IMG1 generated in step S320 is generated (step S330). This produces a second image IMG2 in which the portion inside the depression wall surface excluding the portion where the spheroid is present, together with the part of the depression wall surface where the black ring was missing, is black data and everything else is white data (see FIG. 17).
As can be seen from FIG. 17, in this embodiment the white data of the second image IMG2 is not completely surrounded by black data, and the second image IMG2 also contains convex black data corresponding to the part where the black ring was missing. Therefore, before performing the hole-filling process of the first embodiment (step S140 in FIG. 7), a circle correction process is performed that makes the shape of the outer edge of the second image IMG2 (the outer edge of the black data) circular. Focusing only on the outer edge of the second image IMG2, as shown in FIG. 20, a circle that has a convex portion and is partly missing is corrected into a complete circle by the circle correction process. As in the second embodiment, the corrected second image IMG2, as shown in FIG. 13, has a completely circular outer edge and, inside the outer edge, only the portion where the spheroid is present is white data. The hole-filling process (converting white data surrounded by black data into black data) is then applied to the corrected second image IMG2. Thus, in this embodiment, the hole-filling process is performed after the circle correction process (step S340). This generates a third image IMG3 in which the entire portion inside the depression wall surface is black data (see FIG. 17).
Thereafter, as in the first embodiment, object extraction from the captured image DAT is performed using the third image IMG3 generated in step S340 as an ROI mask (step S350). This yields an image IMGex in which only the image 8 representing the spheroid has been extracted from the captured image DAT (see FIG. 17).
In this embodiment, step S310 realizes the binarization step, step S320 realizes the first image generation step, step S330 realizes the second image generation step, step S340 realizes the third image generation step, and step S350 realizes the image extraction step.
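An end-to-end sketch of the third-embodiment procedure (steps S310 to S350), reusing fill_holes and correct_and_fill from the earlier sketches and the minimum enclosing circle as the circle-fitting step; the threshold, the fitting choice, and the white fill value are all assumptions for illustration.

```python
def extract_spheroid(captured, threshold=128):
    """Sketch of steps S310-S350; assumes cv2, np, fill_holes, correct_and_fill are available."""
    _, binarized = cv2.threshold(captured, threshold, 1, cv2.THRESH_BINARY_INV)

    # Step S320: circle-correct the binarized image, then fill its holes.
    ys, xs = np.nonzero(binarized)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    (cx, cy), r = cv2.minEnclosingCircle(pts)
    first_image = correct_and_fill(binarized, (int(cx), int(cy)), int(r))

    # Step S330: exclusive OR of IMGbi and IMG1.
    second_image = np.bitwise_xor(binarized, first_image)

    # Step S340: circle-correct IMG2 and fill its holes to obtain the ROI mask.
    ys, xs = np.nonzero(second_image)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    (cx, cy), r = cv2.minEnclosingCircle(pts)
    third_image = correct_and_fill(second_image, (int(cx), int(cy)), int(r))

    # Step S350: keep only the pixels under the mask.
    extracted = captured.copy()
    extracted[third_image == 0] = 255
    return extracted
```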
<3.2 Effects>
According to this embodiment, when the first image is generated, the outer edge of a binarized image IMGbi whose outer edge is not circular is first corrected to a circle and then the hollow portion is filled (white data surrounded by black data is converted into black data). Therefore, even when the image 7 representing the depression wall surface does not form a closed ring in the captured image DAT, the image 7 representing the depression wall surface can be removed from the captured image DAT and only the image 8 representing the spheroid can be extracted, regardless of the color density of the spheroid.
<3.3 Modification>
In the third embodiment described above, an image such as that shown in FIG. 27 was taken as the image to be processed. However, even when the image to be processed is an image such as that shown in FIG. 21, that is, when the image 7 representing the depression wall surface does not form a closed ring in the captured image DAT and the image 7 representing the depression wall surface and the image 8 representing the spheroid are not in contact, only the image 8 representing the spheroid can be extracted from the captured image DAT by the same procedure as in the third embodiment.
<4. Other>
In each of the embodiments described above, the image processing of the captured image DAT is performed by the image processing apparatus 10, which is separate from the imaging device 20. However, the present invention is not limited to this; for example, the image processing of the captured image DAT may be performed within the control unit 25 of the imaging device 20.
The captured image DAT obtained by imaging the microspheroid array contains the images of a large number of depressions 30, so the images of the individual depressions 30 in the captured image DAT may include a mixture of patterns such as those shown in FIGS. 25 to 27. One might therefore consider providing a plurality of processing procedures in the image processing program (the respective procedures of the first to third embodiments) so that all of these patterns can be handled. However, it suffices to provide only the processing procedure of the third embodiment: for any of the image patterns shown in FIGS. 25 to 27, only the image 8 representing the spheroid is extracted. This is because, even if the circle correction process is applied when the outer edge of the binarized image IMGbi or the second image IMG2 is already circular, the outer edge remains circular as shown in FIG. 22, and the hole-filling process proceeds as desired.
7 ... Image representing the depression wall surface
8 ... Image representing the spheroid
10 ... Image processing apparatus
20 ... Imaging apparatus
30 ... Depression (of the well)
MP ... Microspheroid array plate
W ... Well

Claims (13)

  1.  An image processing method for processing a captured image obtained by imaging a well having a plurality of depressions for holding an imaging target, the method comprising:
     a binarization step of generating a binarized image composed of black data and white data by performing binarization processing on the captured image;
     a first image generation step of generating a first image by converting white data surrounded by black data in the binarized image into black data;
     a second image generation step of obtaining an exclusive OR of the binarized image and the first image and generating a second image in which data for which the result is true is associated with black data and data for which the result is false is associated with white data;
     a third image generation step of generating a third image by converting white data surrounded by black data in the second image into black data; and
     an image extraction step of extracting, from the captured image, only the image of the region in which the black data constituting the third image exists.
  2.  The image processing method according to claim 1, wherein, in the third image generation step, before the white data surrounded by black data is converted into black data, a circle correction process is performed that corrects the shape of the outer edge of the second image corresponding to each depression into a circle when that shape is not circular.
  3.  The image processing method according to claim 2, wherein the circle correction process extracts a circle by applying a Hough transform to boundary points between black data regions and white data regions, and corrects the shape of the outer edge of the second image corresponding to each depression into the shape of the extracted circle.
  4.  The image processing method according to claim 2, wherein the circle correction process extracts a circle by obtaining a degree of separation using a circular separability filter that separates an image region into an inner region and an outer region, and corrects the shape of the outer edge of the second image corresponding to each depression into the shape of the extracted circle.
  5.  The image processing method according to claim 2, wherein the circle correction process extracts, for each depression, the minimum circle containing all of the black data constituting the second image, and corrects the shape of the outer edge of the second image corresponding to that depression into the shape of the extracted circle.
  6.  The image processing method according to any one of claims 2 to 5, wherein, in the first image generation step, before the white data surrounded by black data is converted into black data, a circle correction process is performed that corrects the shape of the outer edge of the binarized image corresponding to each depression into a circle when that shape is not circular.
  7.  An image processing apparatus that processes a captured image obtained by imaging a well having a plurality of depressions for holding an imaging target, the apparatus comprising:
     a binarization processing unit that generates a binarized image composed of black data and white data by performing binarization processing on the captured image;
     a first image generation unit that generates a first image by converting white data surrounded by black data in the binarized image into black data;
     a second image generation unit that obtains an exclusive OR of the binarized image and the first image and generates a second image in which data for which the result is true is associated with black data and data for which the result is false is associated with white data;
     a third image generation unit that generates a third image by converting white data surrounded by black data in the second image into black data; and
     an image extraction unit that extracts, from the captured image, only the image of the region in which the black data constituting the third image exists.
  8.  An image processing program for processing a captured image obtained by imaging a well having a plurality of depressions for holding an imaging target, the program causing a CPU of a computer to execute, using a memory:
     a binarization step of generating a binarized image composed of black data and white data by performing binarization processing on the captured image;
     a first image generation step of generating a first image by converting white data surrounded by black data in the binarized image into black data;
     a second image generation step of obtaining an exclusive OR of the binarized image and the first image and generating a second image in which data for which the result is true is associated with black data and data for which the result is false is associated with white data;
     a third image generation step of generating a third image by converting white data surrounded by black data in the second image into black data; and
     an image extraction step of extracting, from the captured image, only the image of the region in which the black data constituting the third image exists.
  9.  The image processing program according to claim 8, wherein, in the third image generation step, before the white data surrounded by black data is converted into black data, a circle correction process is performed that corrects the shape of the outer edge of the second image corresponding to each depression into a circle when that shape is not circular.
  10.  The image processing program according to claim 9, wherein the circle correction process extracts a circle by applying a Hough transform to boundary points between black data regions and white data regions, and corrects the shape of the outer edge of the second image corresponding to each depression into the shape of the extracted circle.
  11.  The image processing program according to claim 9, wherein the circle correction process extracts a circle by obtaining a degree of separation using a circular separability filter that separates an image region into an inner region and an outer region, and corrects the shape of the outer edge of the second image corresponding to each depression into the shape of the extracted circle.
  12.  The image processing program according to claim 9, wherein the circle correction process extracts, for each depression, the minimum circle containing all of the black data constituting the second image, and corrects the shape of the outer edge of the second image corresponding to that depression into the shape of the extracted circle.
  13.  The image processing program according to any one of claims 9 to 12, wherein, in the first image generation step, before the white data surrounded by black data is converted into black data, a circle correction process is performed that corrects the shape of the outer edge of the binarized image corresponding to each depression into a circle when that shape is not circular.
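The circle correction of claims 3 and 10 can be illustrated with OpenCV's Hough circle transform. The sketch below is an assumption-laden illustration rather than the embodiment's implementation: cv2.HoughCircles locates a circle from the boundary between the black data and white data regions (it performs its own edge detection internally), and that circle is drawn back onto the per-depression mask so that the subsequent hole filling can close the interior. All parameter values, and the function name correct_by_hough_circle, are assumptions.

```python
import cv2
import numpy as np

def correct_by_hough_circle(mask: np.ndarray) -> np.ndarray:
    """Redraw the outer edge of a per-depression black-data mask as the circle
    extracted by the Hough transform (black data modelled as True)."""
    img = mask.astype(np.uint8) * 255
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=img.shape[0],   # expect one circle per depression
                               param1=100, param2=10,
                               minRadius=5, maxRadius=0)
    if circles is None:                                # no circle found: leave mask unchanged
        return mask
    cx, cy, r = circles[0, 0]                          # strongest candidate circle
    out = mask.astype(np.uint8)
    cv2.circle(out, (int(cx), int(cy)), int(r), color=1, thickness=1)
    return out.astype(bool)
```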
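For the circular separability filter of claims 4 and 11, the sketch below scores one candidate circle under the usual definition of separability (between-class variance divided by total variance) for an inner disk versus a thin outer ring; the circle with the highest score over a scan of candidate centres and radii (the brute-force scan itself is omitted) would become the corrected outer edge. The band width, the scoring function name, and the search strategy are assumptions.

```python
import numpy as np

def circular_separability(gray: np.ndarray, cx: float, cy: float,
                          r: float, band: float = 4.0) -> float:
    """Separability of an inner disk (radius r) against a surrounding ring of
    width `band`, evaluated on a grayscale per-depression crop."""
    yy, xx = np.ogrid[:gray.shape[0], :gray.shape[1]]
    dist = np.hypot(xx - cx, yy - cy)
    inner = gray[dist <= r]                        # inner region of the filter
    outer = gray[(dist > r) & (dist <= r + band)]  # outer region of the filter
    if inner.size == 0 or outer.size == 0:
        return 0.0
    both = np.concatenate([inner, outer])
    total = ((both - both.mean()) ** 2).sum()      # total scatter
    if total == 0.0:
        return 0.0
    between = (inner.size * (inner.mean() - both.mean()) ** 2
               + outer.size * (outer.mean() - both.mean()) ** 2)
    return between / total                         # 1.0 means perfect separation
```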
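The correction of claims 5 and 12 maps naturally onto OpenCV's minimum enclosing circle. In the hedged sketch below, the smallest circle containing all black data of the per-depression second image is computed and drawn back onto the mask as the corrected outer edge; the function name is illustrative, not taken from the embodiment.

```python
import cv2
import numpy as np

def correct_by_min_enclosing_circle(mask: np.ndarray) -> np.ndarray:
    """Draw the minimum circle containing all black data onto a per-depression
    mask so that hole filling can then close the interior."""
    pts = cv2.findNonZero(mask.astype(np.uint8))   # coordinates of all black data
    if pts is None:                                # no black data in this depression
        return mask
    (cx, cy), r = cv2.minEnclosingCircle(pts)
    out = mask.astype(np.uint8)
    cv2.circle(out, (int(round(cx)), int(round(cy))), int(round(r)),
               color=1, thickness=1)
    return out.astype(bool)
```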
PCT/JP2017/014046 2016-04-22 2017-04-04 Image processing method, image processing device, and image processing program WO2017183441A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-085756 2016-04-22
JP2016085756A JP6326445B2 (en) 2016-04-22 2016-04-22 Image processing method, image processing apparatus, and image processing program

Publications (1)

Publication Number Publication Date
WO2017183441A1 true WO2017183441A1 (en) 2017-10-26

Family

ID=60115814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/014046 WO2017183441A1 (en) 2016-04-22 2017-04-04 Image processing method, image processing device, and image processing program

Country Status (2)

Country Link
JP (1) JP6326445B2 (en)
WO (1) WO2017183441A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110496304A (en) * 2018-05-16 2019-11-26 富士胶片株式会社 The manufacturing method of microneedle array

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111805894B (en) * 2020-06-15 2021-08-03 苏州大学 STL model slicing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05143733A (en) * 1991-11-18 1993-06-11 Seiko Epson Corp Contour extracting device
JP2010181402A (en) * 2009-01-09 2010-08-19 Dainippon Printing Co Ltd Embryo quality evaluation assistance system, embryo quality evaluation assistance apparatus and embryo quality evaluation assistance method
JP2013148441A (en) * 2012-01-19 2013-08-01 Dainippon Screen Mfg Co Ltd Image processing method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05143733A (en) * 1991-11-18 1993-06-11 Seiko Epson Corp Contour extracting device
JP2010181402A (en) * 2009-01-09 2010-08-19 Dainippon Printing Co Ltd Embryo quality evaluation assistance system, embryo quality evaluation assistance apparatus and embryo quality evaluation assistance method
JP2013148441A (en) * 2012-01-19 2013-08-01 Dainippon Screen Mfg Co Ltd Image processing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110496304A (en) * 2018-05-16 2019-11-26 富士胶片株式会社 The manufacturing method of microneedle array
US11452854B2 (en) 2018-05-16 2022-09-27 Fujifilm Corporation Method of manufacturing microneedle array

Also Published As

Publication number Publication date
JP2017194879A (en) 2017-10-26
JP6326445B2 (en) 2018-05-16

Similar Documents

Publication Publication Date Title
JP6791245B2 (en) Image processing device, image processing method and image processing program
JP5804220B1 (en) Image processing apparatus and image processing program
JP6122817B2 (en) Spheroid evaluation method and spheroid evaluation apparatus
JPWO2006080239A1 (en) Image processing apparatus, microscope system, and region specifying program
WO2017150194A1 (en) Image processing device, image processing method, and program
WO2013099045A1 (en) Image display device and image display method
US8064714B2 (en) Method for binarizing a digital gray value image to generate a binarized gray value image and arrangement for carrying out said method
WO2017183441A1 (en) Image processing method, image processing device, and image processing program
CN106971141B (en) Cell region specifying method, cell imaging system, and cell image processing device
JP5580267B2 (en) Detection method
JP2016014974A (en) Image processing method and image processor
JP5762315B2 (en) Image processing method
JP6132824B2 (en) Image processing method, control program, recording medium, and image processing apparatus
WO2001071663A1 (en) Cell lineage extracting method
JP2009222420A (en) Image processing method, image processing apparatus, and image processing program
WO2016027542A1 (en) Threshold value determination method, image processing method, and image processing device
JP2010158317A (en) Image processing method and computer program
JP5520908B2 (en) Image processing method and image processing apparatus
JPWO2016158719A1 (en) Image processing method, control program, and image processing apparatus
WO2017069035A1 (en) Image-processing method and method for creating shading reference data
JP5157963B2 (en) Object detection device
JP2001258599A (en) Method for extracting cell genealogy
JP2009277618A (en) Magnetic domain structural image acquisition method and scanning transmission electron microscope
US20160348057A1 (en) Cell Colony Area Specifying Apparatus, Cell Colony Area Specifying Method, and Recording Medium
EP4226314A1 (en) Method for detecting defects in a 3d printer

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17785783

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17785783

Country of ref document: EP

Kind code of ref document: A1