WO2017056599A1 - Image generation device and image generation method - Google Patents

Image generation device and image generation method

Info

Publication number
WO2017056599A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
image
pixel
target region
original
Prior art date
Application number
PCT/JP2016/069135
Other languages
French (fr)
Japanese (ja)
Inventor
拓矢 安田 (Takuya Yasuda)
Original Assignee
株式会社Screenホールディングス (SCREEN Holdings Co., Ltd.)
Application filed by 株式会社Screenホールディングス (SCREEN Holdings Co., Ltd.)
Publication of WO2017056599A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems

Definitions

  • the present invention relates to a technique for synthesizing a plurality of original images to create an image of an imaging target area that is larger than an imaging field of view of an imaging means.
  • when an image covering a wider area than the imaging field of view of the imaging means is required, a composite image may be created from original images captured by dividing the imaging target area into a plurality of areas.
  • how the plurality of original images are combined greatly affects the image quality of the composite image.
  • in general, adjacent original images are captured so as to partially overlap each other, and appropriate image processing is applied to the overlapping portions when joining the original images, so that the seams become less noticeable.
  • when the original images are captured under special imaging conditions, non-uniformities in image quality caused by the optical characteristics of the imaging optical system can be present in the individual original images. For example, image quality deterioration such as uneven brightness, blur, and distortion is particularly likely to appear at the periphery of an image.
  • examples of such special imaging include imaging with a wide-angle imaging field of view and imaging using an imaging optical system having a hypercentric characteristic.
  • the present invention has been made in view of the above problems, and its purpose is to provide, in a technique for joining and combining a plurality of original images, a technique capable of creating a composite image in which the influence of image quality degradation caused by the optical characteristics of the imaging system is suppressed.
  • one aspect of the image creation apparatus comprises: an imaging means that has a two-dimensional imaging field of view and that captures an imaging target region wider than the imaging field of view as a plurality of original images, by performing imaging a plurality of times while changing the position of the imaging field of view relative to the imaging target region and partially overlapping the imaging fields at adjacent positions; and an image composition means that combines the plurality of original images to create a single composite image representing the imaging target region. The image composition means weights each pixel in each original image based on a weighting rule that associates a position in the original image with the weight given to the pixel at that position according to the optical characteristics of the imaging means, and combines the plurality of weighted original images so that pixels corresponding to the same position in the imaging target region overlap each other. The imaging means performs imaging with the imaging fields overlapped in the imaging target region so that every position in the imaging target region is represented as an effective pixel in at least one of the original images.
  • one aspect of the image creation method comprises: an imaging step of capturing an imaging target region wider than the imaging field of view as a plurality of original images, using an imaging means having a two-dimensional imaging field of view, by performing imaging a plurality of times while changing the position of the imaging field of view relative to the imaging target region and partially overlapping the imaging fields at adjacent positions; and a composition step of combining the plurality of original images to create a single composite image representing the imaging target region. In the composition step, each pixel in each original image is weighted based on a weighting rule that associates a position in the original image with the weight given to the pixel at that position according to the optical characteristics of the imaging means, and the plurality of weighted original images are combined so that pixels corresponding to the same position in the imaging target region overlap each other. In the imaging step, imaging is performed with the imaging fields overlapped in the imaging target region so that every position in the imaging target region is represented as an effective pixel in at least one of the original images.
  • an “effective pixel” is defined as a pixel, among the pixels constituting an original image, that is given a weight equal to or greater than a predetermined threshold by the weighting rule described above.
  • a weight based on a weighting rule corresponding to the optical characteristics of the imaging means is given to each pixel of the plurality of original images captured so as to partially overlap. The weight can thus differ from pixel to pixel within an original image. Accordingly, if pixels in portions where image information is degraded or lost due to the optical characteristics of the imaging means are given smaller weights than other portions, those portions are less likely to be reflected in the composite image.
  • even when a single weighting rule is applied to all original images, the weights given to pixels corresponding to the same position in the imaging target region are not necessarily the same between original images captured at different positions; if the mutual overlap is not appropriate at the imaging stage, image information may be partially missing from the composite image. The weighting rule used at composition time is therefore taken into consideration at the imaging stage. That is, the entire imaging target area is covered, in at least one original image, by pixels having a weight equal to or greater than a predetermined threshold. More precisely, the imaging field of view for each imaging is set according to the weighting rule so that this relationship holds.
  • accordingly, a pixel given a weight below the threshold in one original image is given a weight equal to or greater than the threshold in at least one other original image. When the original images are combined, the image information of highly weighted pixels is therefore strongly reflected in the composite image at every pixel, and loss of image information is avoided.
  • each pixel constituting the plurality of original images is weighted according to the optical characteristics of the imaging means. Moreover, the way the imaging fields are overlapped when capturing the original images is determined so that the entire imaging target region is represented by pixels having a weight equal to or greater than the threshold in at least one original image. It is therefore possible to create a composite image in which the influence of image quality degradation caused by the optical characteristics of the imaging system is suppressed.
  • FIG. 1 is a diagram showing a schematic configuration of an imaging apparatus to which an image creating method according to the present invention can be applied.
  • This imaging apparatus 1 is an apparatus for imaging a live sample such as cells, cell colonies, and bacteria (hereinafter “cells etc.”) cultured in liquid injected into recesses, called wells W, formed in the top surface of a well plate WP.
  • the well plate WP is generally used in the fields of drug discovery and biological science.
  • each well W of the well plate WP is a recess of substantially circular cross section formed in the top surface of a flat plate, with a transparent, flat bottom surface.
  • the number of wells W in one well plate WP is arbitrary; for example, 96 wells (a 12 × 8 matrix arrangement) can be used.
  • the diameter and depth of each well W are typically several millimeters.
  • the size of the well plate and the number of wells handled by the imaging apparatus 1 are not limited to these and are arbitrary; for example, plates with 6 to 384 wells are in general use.
  • the imaging apparatus 1 can be used not only for imaging a well plate having a plurality of wells but also for imaging cells or the like cultured in a flat container called a dish.
  • a predetermined amount of liquid as the medium M is injected into each well W of the well plate WP, and cells or the like cultured under predetermined culture conditions in the liquid are imaging objects of the imaging apparatus 1.
  • the medium M may be one to which an appropriate reagent has been added, or it may be injected in a liquid state and then gel after being placed in the well W.
  • cells cultured on the inner bottom surface of the well W can be targeted for imaging.
  • Commonly used liquid volume is about 50 to 200 microliters.
  • the imaging apparatus 1 includes a holder 11 that holds the well plate WP, an illumination unit 12 disposed above the holder 11, an imaging unit 13 disposed below the holder 11, and a control unit 14 having a CPU 141 that controls the operation of these components.
  • the holder 11 contacts the peripheral edge of the lower surface of the well plate WP, which carries the sample together with the medium M in each well W, and holds the well plate WP in a substantially horizontal posture.
  • the illumination unit 12 emits illumination light toward the well plate WP held by the holder 11.
  • as the light source of the illumination light, for example, a white LED (Light Emitting Diode) can be used.
  • a combination of a light source and an appropriate illumination optical system is used as the illumination unit 12.
  • the illumination unit 12 illuminates cells and the like in the well W provided on the well plate WP from above.
  • An imaging unit 13 is provided below the well plate WP held by the holder 11.
  • the imaging unit 13 is provided with an imaging optical system (not shown) at a position directly below the well plate WP.
  • the optical axis of the imaging optical system is oriented in the vertical direction.
  • FIG. 1 is a side view, and the vertical direction in the figure represents the vertical direction.
  • the cells etc. in the well W are imaged by the imaging unit 13 as follows. Light emitted from the illumination unit 12 and incident on the liquid from above the well W illuminates the imaging target. Light transmitted downward from the bottom surface of the well W enters the light receiving surface of the image sensor 132 through the imaging optical system of the imaging unit 13, which includes the objective lens 131. An image of the imaging target formed on the light receiving surface by the imaging optical system is thus captured by the image sensor 132.
  • the image sensor 132 is an area image sensor having a two-dimensional light receiving surface, and for example, a CCD sensor or a CMOS sensor can be used.
  • the imaging unit 13 can be moved in the horizontal and vertical directions by a mechanical control unit 146 provided in the control unit 14. Specifically, the mechanical control unit 146 operates the drive mechanism 15 based on a control command from the CPU 141, moving the imaging unit 13 horizontally relative to the well W; movement in the vertical direction adjusts the focus.
  • the mechanical control unit 146 positions the imaging unit 13 in the horizontal direction so that the optical axis coincides with the center of the well W.
  • when the imaging unit 13 moves in the horizontal direction, the drive mechanism 15 moves the illumination unit 12 integrally with the imaging unit 13, as indicated by the dotted arrow in the figure. That is, the illumination unit 12 is arranged so that its optical center substantially coincides with the optical axis of the imaging unit 13, and it moves in conjunction with the imaging unit 13 during horizontal movement. Whichever well W is imaged, the center of that well W and the optical center of the illumination unit 12 therefore always lie on the optical axis of the imaging unit 13, keeping the illumination condition for each well W constant and the imaging conditions favorable.
  • the image signal output from the imaging device 132 of the imaging unit 13 is sent to the control unit 14. That is, the image signal is input to an AD converter (A / D) 143 provided in the control unit 14 and converted into digital image data.
  • the CPU 141 executes image processing as appropriate based on the received image data.
  • the control unit 14 further includes an image memory 144 for storing image data and a memory 145 for storing programs to be executed by the CPU 141 and data generated by the CPU 141; these may be integrated into a single memory.
  • the CPU 141 executes various control processes described later by executing a control program stored in the memory 145.
  • control unit 14 is provided with an interface (IF) unit 142.
  • the interface unit 142 has functions for accepting operation input from the user, presenting information such as processing results to the user, and exchanging data with external devices connected via a communication line.
  • the interface unit 142 is connected to an input receiving unit 147 that receives an operation input from the user and a display unit 148 that displays and outputs a message to the user, a processing result, and the like.
  • the control unit 14 may be a dedicated device equipped with the hardware described above, or may be a general-purpose processing device such as a personal computer or workstation into which a control program realizing the processing functions described later is incorporated. That is, a general-purpose computer can be used as the control unit 14 of the imaging apparatus 1. When a general-purpose processing device is used, it suffices for the imaging apparatus 1 itself to have the minimum control functions needed to operate the individual components such as the imaging unit 13.
  • FIG. 2A and 2B are diagrams showing well imaging by this imaging apparatus. More specifically, FIG. 2A is a diagram showing the path of light during well imaging, and FIG. 2B is a diagram showing the relationship between the well and the imaging field of view.
  • a well W that carries cells or the like as an imaging object contains a medium M injected in a liquid state. Accordingly, the illumination light L that enters from above the well W enters the imaging object via the liquid level of the culture medium M.
  • the liquid surface generally forms a downwardly convex meniscus, whereby the illumination light L is refracted and bent outward from the center of the well W.
  • except at the optical center, the principal rays of the illumination light are bent outward, away from the optical center.
  • near the center of the well W the refraction is small, and it becomes larger toward the peripheral edge of the well W.
  • the imaging optical system including the objective lens 131 in this embodiment is designed with the assumption that the inclination of its principal rays approximately matches the inclination of the principal rays of the illumination light bent by a typical meniscus. Such an optical characteristic is also called an object-side hypercentric characteristic.
  • with this imaging optical system, even light whose chief ray points obliquely outward at positions away from the optical axis of the objective lens 131 can be efficiently collected and imaged on the image sensor 132. This imaging optical system is therefore suitable for imaging the entire well W within the imaging field of view V, as shown in FIG. 2B. This point is also described, for example, in Japanese Patent Application Laid-Open No. 2015-118036, previously disclosed by the applicant of the present application.
  • the entire well W can be included in the imaging field of view V when a well W of relatively small diameter is imaged at low magnification.
  • when a larger well is imaged, or when imaging is performed at higher magnification, the size of the area to be imaged becomes relatively large with respect to the imaging field of view, and the entire well W serving as the imaging target region may no longer fit within the imaging field of view V.
  • FIG. 3 is a diagram showing imaging when the size of the imaging target area is larger than the size of the imaging visual field.
  • a case will be described in which the entire well W, having a larger diameter than those described so far, is the imaging target region and the imaging target inside it is imaged.
  • the same idea applies whenever the size of the imaging target region is large relative to the imaging field of view V, for example when imaging an object carried in a shallow large-diameter container called a dish, or when imaging even a small-diameter well at higher magnification.
  • when the imaging target area is wider than the imaging field of view V, the imaging target area can be imaged in a plurality of divided images, and an image representing the entire area can be created by combining them through image processing. In this case, in order to secure the desired image quality in the created image, the quality of the individual images before composition needs attention. Points to note for this purpose are described next.
  • when the imaging field of view V includes only a central region away from the peripheral portion Wp of the well W, the influence of the meniscus on the surface of the medium M on the optical path is sufficiently small. Light incident near the optical axis C of the objective lens 131 is then, as with telecentric illumination, collected and made incident on the image sensor 132. At positions away from the optical axis C, however, a mismatch arises between the principal-ray inclination of the incident light and that expected by the optical system.
  • that is, on the premise of refraction at the liquid surface, the objective lens 131 side is configured to receive light whose principal ray is inclined outward, as indicated by the dotted lines in the figure.
  • in this situation, however, the light passing through the well W travels straight, without being refracted by a meniscus.
  • the inclination of the principal ray of the incident light therefore does not match the inclination of the principal ray on the light receiving side.
  • as a result, image quality may deteriorate, particularly at the peripheral edge of the imaging field of view V; specifically, the image may become dark or uneven in brightness.
  • for this reason, the portion of the physical imaging field of view V usable as an effective imaging field is mainly its central portion, and deterioration of image quality can be a problem at the periphery by comparison. That is, in an image captured in this state, the image quality is good in the central region of the image, more strictly in the region around the position corresponding to the optical axis C of the objective lens 131, while the image quality may be degraded in the region outside it.
  • FIG. 4A and FIG. 4B are diagrams showing examples of image quality degradation at the peripheral edge of an image.
  • FIG. 4A shows an example of an image, with a partial enlargement, obtained by imaging an image-quality evaluation grid pattern in which small dots are regularly arranged on a glass plate placed instead of the well plate WP. In the central portion of the image, each dot appears clearly and at a constant arrangement pitch. At the periphery of the image, particularly at the corners of the rectangular image region, the image is dark, the outlines of the dots are unclear, and their shape and arrangement pitch are disturbed.
  • FIG. 4B illustrates the problem that arises when such images are simply stitched together.
  • suppose, for example, that four images, each including peripheral (especially corner) areas of reduced image quality as in FIG. 4A, are joined.
  • then areas of lowered image quality end up adjacent to one another, as in the circled area in FIG. 4B, and the desired image quality cannot be obtained in the composite image.
  • conventionally, a plurality of images are captured so as to partially overlap, and the overlapping parts are blended by appropriate image processing to make the joints inconspicuous. However, this does not compensate for image quality degradation within the original images themselves.
  • in the imaging apparatus 1 of this embodiment, when the imaging target area is larger than the imaging field of view V of the imaging unit 13, a plurality of original images are acquired by imaging a plurality of times at different positions within the imaging target area, and an image of the entire imaging target area is created by combining these original images. The original images are captured so as to partially overlap, and the degree of overlap reflects the processing performed in the later composition stage.
  • the principle of image composition and the actual processing contents in the present embodiment will be described.
  • the image information in the region near the optical axis C in the original image has high reliability, and the reliability of the image information is low in the peripheral portion away from the optical axis C.
  • a large weight is given to a pixel having reliable image information in an overlapping portion of a plurality of images, while a smaller weight is given to a pixel having unreliable image information.
  • the image information of highly reliable pixels is more strongly reflected in the combined pixels, and the image quality of the combined image can be improved.
  • FIG. 5A, FIG. 5B, and FIG. 5C are diagrams showing how to give weights to pixels.
  • in the following, the upper left of the image is the origin (0, 0), the horizontal direction is the X coordinate, the vertical direction is the Y coordinate, and a point on the image plane is represented by coordinates (x, y).
  • FIG. 5A is a diagram schematically showing a change in image quality.
  • indices representing image quality, such as image brightness, distortion, and blur, are generally constant up to a certain distance from the optical axis C and gradually decline as the distance increases further. Weighting that reflects this characteristic can therefore be used: as shown in FIG. 5B, a constant weight (for example, 1) is given to pixels within a certain distance D from the optical axis C, and the weighting factor Wt is set so that the weight gradually decreases with increasing distance beyond D, finally reaching zero.
  • the weighting factor Wt at each pixel position may be set so that its value is rotationally symmetric about the position corresponding to the optical axis C of the imaging optical system, as illustrated by the sketch below.
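For illustration only, such a rotationally symmetric, distance-based weighting map can be sketched as follows (a minimal sketch in Python with numpy; the linear taper and the two radii are assumptions, since the real profile would come from evaluating the actual optics, and it assumes d_zero > d_flat):

```python
import numpy as np

def radial_weight_map(height, width, cx, cy, d_flat, d_zero):
    # Weight 1 within distance d_flat of the optical-axis position (cx, cy),
    # tapering linearly to 0 at distance d_zero; rotationally symmetric
    # about the optical axis, as in FIG. 5B.
    ys, xs = np.mgrid[0:height, 0:width]
    r = np.hypot(xs - cx, ys - cy)
    return np.clip((d_zero - r) / (d_zero - d_flat), 0.0, 1.0)
```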
  • a weighting map in which the weighting coefficient Wt for each pixel position is mapped is created in advance.
  • the weighting map can be obtained by evaluating the imaging optical system in advance: for example, test imaging is performed using a test chart such as the grid pattern shown in FIG. 4A, the degree of image quality deterioration is quantitatively evaluated for each pixel, and the result is mapped in correspondence with the pixel positions.
  • the weighting factor Wt need not have rotational symmetry.
  • through the prior evaluation described above, the characteristics of the illumination optical system can also be incorporated into the weighting map.
  • in that case, the weighting map reflects the optical characteristics of both the illumination optical system and the imaging optical system.
  • as shown in FIG. 5C, the pixel value P(x, y) of each pixel of the original image Im is multiplied by the weighting factor Wt(x, y) obtained from the weighting map Mw according to the pixel position.
  • this determines the pixel value Pw(x, y) of each pixel of the weighted image Iw.
  • an image Iw weighted on the basis of the optical characteristics of the imaging optical system is thus obtained from the original image Im.
  • in the central portion near the optical axis C, the pixel value P(x, y) of the original image Im is maintained as it is, while toward the periphery the pixel values of the original image Im are reduced by the small weights.
  • the weighted pixel value Pw(x, y) of each pixel can be expressed by the following (Equation 1):
  Pw(x, y) = Wt(x, y) × P(x, y) … (Equation 1)
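In array terms, (Equation 1) is a single elementwise product; a sketch assuming numpy arrays of identical shape (the function name is illustrative):

```python
def apply_weight(original, weight_map):
    # Pw(x, y) = Wt(x, y) * P(x, y) for every pixel, per (Equation 1).
    return weight_map * original.astype(float)
```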
  • FIG. 6A and FIG. 6B are diagrams showing examples of expression of weighting maps.
  • FIG. 6A represents the value of the weighting factor Wt over the XY plane (the image plane) as a three-dimensional map.
  • such a map may be stored as a lookup table indexed by the X and Y coordinate values.
  • FIG. 6B shows an example in which the weighting factor Wt is quantized into multiple levels and expressed as a two-dimensional map.
  • the weighting factor Wt is set to 1 in a region within a predetermined distance centered on the optical axis C.
  • smaller values are set as the weighting factor Wt in regions farther from the optical axis C, for example 0.7 in the next region outward and 0.5 in the region beyond that.
  • in these representations, the weighting factor Wt corresponding to each pixel of the original image Im is set stepwise according to the distance of the pixel position from the optical axis C; it may also be expressed as a function of that distance. In such aspects, the amount of information to be stored in the memory 145 can be greatly reduced.
  • in general, the map can be expressed in an arbitrary format and stored in the memory 145.
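As one illustration of such a compact representation, a stepwise weight like that of FIG. 6B can be stored as a short table of band radii (a sketch; the radii and levels here are assumptions, not values from the embodiment):

```python
def stepwise_weight(r, steps=((100.0, 1.0), (160.0, 0.7), (220.0, 0.5))):
    # Quantized weighting factor by distance r from the optical axis:
    # each entry is (outer radius of a band, weight inside that band).
    for radius, wt in steps:
        if r <= radius:
            return wt
    return 0.0  # beyond the outermost band the weight falls to zero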
  • when a plurality of original images are captured under the same conditions, a single weighting map Mw may be set for all of them. However, when the original images are captured under different illumination or imaging conditions, it may be desirable to prepare a weighting map Mw for each condition.
  • based on the weighting map Mw determined in this way, the degree of overlap between images when the original images are captured and the calculation method at image composition are determined.
  • pixels with more reliable image information are given greater weight. Therefore, if the entire imaging target area is covered by pixels given a weight of at least a certain value, an image representing the entire imaging target area with stable image quality can be created by extracting those pixels and reconstructing the image.
  • for this purpose, a virtual mask is set by extracting the area of the weighting map Mw in which the weighting factor Wt is equal to or greater than a predetermined threshold. It then suffices to determine an arrangement of this mask such that the entire imaging target area can be covered by the mask.
  • the number and arrangement of the masks then represent the number and arrangement of the original images necessary to represent the entire imaging target region.
  • such a mask is obtained, for example, by binarizing the weighting map Mw with the above threshold, as in the sketch below.
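In array terms this binarization is a single comparison; a sketch (numpy assumed; the default 0.7 follows the Th2 example of FIG. 7B discussed next):

```python
import numpy as np

def coverage_mask(weight_map, threshold=0.7):
    # True where Wt >= threshold, i.e. the hatched effective-pixel region
    # of FIG. 7A/7B used to plan coverage of the imaging target area.
    return weight_map >= threshold
```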
  • FIG. 7A and 7B are diagrams illustrating a mask created from the weighting map.
  • the mask M1 illustrated in FIG. 7A is an example in which a relatively large threshold Th1 (for example, 0.9) is set for the weighting factor Wt.
  • FIG. 7B shows an example of the mask M2 obtained with a smaller threshold Th2 (for example, 0.7).
  • in each figure, the hatched portion indicates the area with a weight equal to or greater than the threshold; this portion masks the imaging target region.
  • with a larger threshold, the image quality in the masked area is guaranteed to be high, but the masked area is small, which means many images are required to cover the entire imaging area. For example, if the threshold is set to 1, image quality deterioration becomes negligible, but the number of images needed increases and the processing time becomes longer.
  • with a smaller threshold, the masked area is large and fewer images are needed, but portions with slightly lowered image quality are also used in the composite image, which is reflected in its quality.
  • the threshold may therefore be determined according to the required image quality. In the following description, it is assumed that the mask M2 shown in FIG. 7B is used.
  • FIG. 8 is a diagram showing an example of original image allocation.
  • it is preferable that the area of the overlapping portions between masks be as small as possible.
  • it is also preferable that the movement path of the imaging unit 13 be as simple and as short as possible.
  • furthermore, since the imaging apparatus 1 moves the imaging unit 13 in the X and Y directions, which are the directions of the sides of the mask M (or of the original image), the masks M are preferably arranged along these directions as well. From these viewpoints, an arrangement such as that shown in FIG. 8 can be considered, for example.
  • points P1 to P10 indicated by black circles indicate the barycentric positions of the plurality of masks M (or original images) arranged.
  • the position of the center of gravity of the original image coincides with the position of the optical axis C when the original image is captured.
  • a solid line arrow indicates a trajectory of scanning movement of the imaging unit 13, more precisely, a trajectory of the optical axis C of the imaging unit 13 on the imaging target region of the well W.
  • White circles Ps and Pe indicate the start point and the end point of the scanning movement of the imaging unit 13.
  • the entire inner region of the well W which is the imaging target region, is covered by ten masks M.
  • the 10 masks M are arranged in three columns in the X direction: the left, i.e., most (−X) side column consists of 3 masks, the center column of 4 masks, and the right, (+X) side column of 3 masks.
  • adjacent masks M partially overlap each other.
  • the mask shape is isotropic in the X direction and the Y direction, and the arrangement pitch of the mask M is the same in the X direction and the Y direction.
  • each mask M in the center column is shifted in the Y direction by half the arrangement pitch relative to the Y-direction arrangement of the masks M in the left and right columns. Therefore, connecting the centroid position of one mask in one column (for example, point P3) with the centroid positions of the two nearest masks in the adjacent column (for example, points P4 and P5) yields an equilateral triangle. In other words, the distance between the centroid of one mask and the centroid of each mask partially overlapping it is the same for all such masks.
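A sketch of such a staggered centroid layout (Python with numpy; the 3-4-3 column split follows FIG. 8, while the pitch value passed in and the placement of the origin are assumptions):

```python
import numpy as np

def staggered_centroids(pitch, counts=(3, 4, 3)):
    # Columns are pitch * sqrt(3) / 2 apart in X, and the shorter columns
    # are shifted by half a pitch in Y, so one centroid and the two nearest
    # centroids of the adjacent column form an equilateral triangle whose
    # side equals the Y-direction pitch.
    dx = pitch * np.sqrt(3.0) / 2.0
    points = []
    for col, n in enumerate(counts):
        y0 = 0.0 if n == max(counts) else pitch / 2.0  # stagger short columns
        points.extend((col * dx, y0 + row * pitch) for row in range(n))
    return points  # 3 + 4 + 3 = 10 centroids, matching points P1 to P10
```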
  • the mask arrangement in FIG. 8 shows the allocation of the original image at the time of imaging.
  • the scanning movement recipe of the imaging unit 13 may be set so that the optical axis C of the imaging optical system passes in turn through the points P1 to P10 corresponding to the centroids of the masks M. If an image is captured each time the optical axis C reaches one of the points P1 to P10 while the imaging unit 13 is scanned according to this recipe, original images covering the whole of the well W, i.e., the imaging target region, are acquired. In this sense, the position of the imaging unit 13 at which the optical axis C coincides with one of the points P1 to P10 is hereinafter referred to as an “imaging position”.
  • FIG. 9 is a flowchart showing the imaging process in this embodiment.
  • the CPU 141 causes each unit of the apparatus to perform a predetermined operation based on a control program created in advance, and executes an imaging process illustrated in FIG. 9, thereby acquiring a plurality of original images.
  • the imaging unit 13 is positioned at a predetermined start position by the drive mechanism 15 that operates according to a control command from the mechanical control unit 146 (step S101).
  • the start point Ps shown in FIG. 8 corresponds to the optical axis position of the objective lens 131 at this time.
  • the illumination unit 12 also moves along with the movement of the imaging unit 13 so that the optical center of the illumination light and the optical axis of the objective lens 131 always coincide.
  • in step S102, the scanning movement of the imaging unit 13 relative to the well W is started based on the preset scanning movement recipe. When the imaging unit 13 reaches the end position corresponding to the end point Pe, the process ends (step S103). Until the end position is reached, each time the imaging unit 13 arrives at one of the imaging positions corresponding to the points P1 to P10 (step S104), steps S105 and S106 are executed to perform imaging.
  • the position where the imaging unit 13 is located can be detected based on, for example, an output signal from a position sensor (not shown) mounted on the imaging unit 13.
  • when the imaging unit 13 reaches an imaging position, the light source of the illumination unit 12 is turned on for a predetermined time to illuminate the imaging target stroboscopically, and an image is acquired (step S105). Image data obtained by digitizing the image signal output from the image sensor 132 with the AD converter 143 is stored in the image memory 144 (step S106). By repeating this process until the imaging unit 13 reaches the end position, imaging is performed at the imaging positions corresponding to the points P1 to P10, and 10 original images are acquired.
  • it suffices for the mechanical control unit 146 to scan and move the imaging unit 13 at a constant speed according to the scanning movement recipe.
  • specifically, the imaging unit 13 moves a predetermined distance in the Y direction from the start position corresponding to the start point Ps, capturing the original images for one column (the left column), and then moves a fixed amount in the X direction.
  • it then captures the original images for the next column (the center column) while again moving in the Y direction, moves a fixed amount in the X direction once more, and captures one further column (the right column) of original images while moving in the Y direction, finally reaching the end position corresponding to the end point Pe.
  • in this way, the imaging unit 13 acquires the necessary number of original images by alternating movement in the Y direction with movement in the X direction.
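The back-and-forth visiting order itself can be sketched as follows (`columns` is assumed to be a list of per-column imaging-position lists, e.g. derived from the centroid layout sketched earlier):

```python
def serpentine_order(columns):
    # Visit the first column top to bottom, step in X, then the next column
    # bottom to top, as in the solid-line trajectory of FIG. 8 from Ps to Pe.
    path = []
    for i, col in enumerate(columns):
        path.extend(col if i % 2 == 0 else reversed(col))
    return path
```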
  • FIG. 10 is a flowchart showing image composition processing.
  • various correction processes that can be executed for each single original image, such as shading correction, distortion correction, and deconvolution correction, are performed on the acquired K original images (step S201). These corrections are performed as necessary and may be omitted.
  • the corrected original images Ik are superimposed on a virtual two-dimensional map plane so that pixels corresponding to the same position in the well W coincide (step S202), and a well region covering the entire well W is cut out from the result (step S203).
  • FIG. 11 is a diagram for explaining the extraction of the well region.
  • a rectangular region that includes the entire well W while containing as little area outside the well as possible is cut out as the well region Rw.
  • the subsequent processing is performed within the well region Rw; the other regions are not needed.
  • a new coordinate system is set for the well region Rw.
  • the new coordinate system is the X′Y ′ coordinate system, and points in the coordinate system are represented by coordinates (x ′, y ′).
  • next, the composition process will be described. Following the extraction of the well region, composition is executed pixel by pixel. First, the coordinates (x′, y′) of the pixel to be processed in the well region Rw are initialized to the origin (0, 0) at the upper left corner of the well region Rw (steps S204 and S205), and the pixel value of the pixel corresponding to the coordinate point (x′, y′) in the composite image is calculated (step S206). When only one original image has a pixel occupying the coordinate point (x′, y′), the (unweighted) pixel value of that pixel is used directly as the composite pixel value. When a plurality of original images have pixels occupying the coordinate point (x′, y′), the composite pixel value is obtained by calculation based on the pixel values of the pixels extracted from those original images.
  • each pixel of an original image Ik is given a weight according to the reliability of its image information. Therefore, when a plurality of original images overlap, the pixel value of the pixel with the larger weight should be reflected more strongly in the composite pixel value.
  • for example, the composite pixel value Po(x′, y′) can be expressed as the sum of the weighted pixel values of the overlapping pixels, specifically by the following (Equation 2):
  Po(x′, y′) = Σk Pwk(x′, y′) … (Equation 2)
  • here, the symbol Pwk(x′, y′) denotes the pixel value of the pixel of the original image Ik at the coordinate point (x′, y′) after being weighted by (Equation 1). For an original image Ik that has no pixel at that coordinate position, Pwk(x′, y′) is set to 0.
  • however, in a region where many original images overlap, (Equation 2) can make the pixel value too large compared with other regions. It is therefore more preferable to normalize by the total of the weighting factors Wt of the overlapping pixels, as in the following (Equation 3):
  Po(x′, y′) = Σk {Wtk(x′, y′) × Pk(x′, y′)} / Σk Wtk(x′, y′) … (Equation 3)
  • the symbol Pk(x′, y′) denotes the pixel value of the pixel extracted from the original image Ik as the pixel located at the coordinate point (x′, y′); if the original image Ik has no pixel at that position, the value is 0.
  • the symbol Wtk(x′, y′) is the weighting factor applied to the pixel value Pk(x′, y′) based on the weighting map Mw.
  • although the weighting map Mw itself is common to the original images, each original image Ik is mapped to a different position on the virtual map plane, so the weighting map Mw applied to each original image Ik must also be linked to the position of that original image on the map plane.
  • this can be done by applying to the weighting map Mw the same coordinate transformation used to map the original image Ik onto the X′Y′ map plane, aligning the map with the position of the original image.
  • consequently, the value of the weighting factor Wt at a coordinate point (x′, y′) is not necessarily unique: when a plurality of original images occupy the coordinate point, an individual value exists for each corresponding original image Ik.
  • the subscript k is therefore used to denote the weighting factor corresponding to the pixel value Pk(x′, y′) of the pixel occupying the coordinate point (x′, y′) in the original image Ik.
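A minimal sketch of this per-pixel composition per (Equation 3), assuming each original image and its weighting map have already been placed on the common X′Y′ map plane as full-size numpy arrays holding zeros wherever that image has no pixel (the function and variable names are illustrative):

```python
import numpy as np

def composite(placed_pixels, placed_weights):
    # placed_pixels[k] holds Pk(x', y') and placed_weights[k] holds
    # Wtk(x', y'), both zero where original image Ik has no pixel.
    num = np.zeros_like(placed_pixels[0], dtype=float)
    den = np.zeros_like(placed_pixels[0], dtype=float)
    for pk, wtk in zip(placed_pixels, placed_weights):
        num += wtk * pk  # numerator of (Equation 3)
        den += wtk       # total weight of the overlapping pixels
    # Normalize by the total weight; 0 where no original image has a pixel.
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0.0)

# Toy usage: two flat 4x4 "originals" overlapping on a 4x6 map plane.
pa = np.zeros((4, 6)); pa[:, 0:4] = 100.0
pb = np.zeros((4, 6)); pb[:, 2:6] = 200.0
wa = np.zeros((4, 6)); wa[:, 0:4] = 1.0
wb = np.zeros((4, 6)); wb[:, 2:6] = 1.0
out = composite([pa, pb], [wa, wb])  # overlap columns 2-3 average to 150.0
```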
  • the above calculation of pixel values for one line is repeated, incrementing the coordinate value y′ by one each time, until the lower end of the well region Rw is reached (steps S209 and S210). The pixel values of all pixels of the composite image within the well region Rw are thereby determined.
  • by reconstructing the image corresponding to the well region Rw from the pixel values thus calculated (step S211), a composite image representing the entire well W can be created from the plurality of original images each of which captures part of the well W.
  • imaging is performed by dividing the imaging target area into a plurality of parts.
  • images are taken so that adjacent original images partially overlap.
  • the overlap between the original images reflects the processing contents in the subsequent image composition.
  • in addition, a weighting map Mw is prepared that represents the weighting factor Wt giving a weight to each pixel of an original image, taking into account the image quality degradation that occurs at the periphery of each original image due to the optical characteristics of the imaging unit 13.
  • the combined pixel value is calculated according to the weight of the pixel extracted from each original image. Specifically, by multiplying the pixel value of each pixel by a weighting coefficient, the pixel value of a pixel to which a greater weight is given is reflected more strongly in the combined pixel value.
  • the allocation of the original image when the divided images are taken is determined so that the entire imaging region is occupied by pixels to which a weight equal to or higher than a predetermined threshold is given in at least one original image.
  • in this embodiment, the illumination unit 12, the imaging unit 13, and the drive mechanism 15 function as the “illumination unit”, the “imaging unit”, and the “moving unit” of the present invention, respectively.
  • the image sensor 132 corresponds to the “area image sensor” of the present invention, and the imaging optical system including the objective lens 131 corresponds to the “hypercentric optical system”.
  • the control unit 14 functions as the “image composition unit” of the present invention.
  • the movement direction of the imaging unit 13 corresponding to the Y direction of the image corresponds to the “first scanning direction” of the present invention, and the movement of the imaging unit 13 in this direction, together with the imaging performed during that movement, corresponds to the “main scanning process” of the invention.
  • the direction corresponding to the X direction corresponds to the “second scanning direction” of the present invention, and the movement of the imaging unit 13 in this direction corresponds to the “sub-scanning process” of the present invention.
  • the imaging unit 13 of the above embodiment has a hypercentric optical system.
  • in such an optical system, image quality tends to deteriorate at the peripheral portion of the image when there is no influence of the meniscus, so the effect of the present invention is particularly pronounced.
  • the problem that image quality is likely to deteriorate at the peripheral edge of an image due to, for example, lens aberration, vignetting, illumination light unevenness, etc. also occurs in various optical systems other than such an imaging optical system.
  • the present invention can be applied to an apparatus having such various imaging optical systems.
  • the above embodiment is an imaging device that images a living sample such as a cell carried in the well W.
  • what the imaging apparatus according to the present invention uses as an imaging target is not limited to these and is arbitrary. Even if the image is not picked up via the liquid surface, for example, an image picked up using a wide-angle lens tends to have a large distortion at the peripheral portion, so that it can be said that this is an example in which the present invention functions particularly effectively.
  • in the above embodiment, a single imaging unit 13 captures a plurality of original images while scanning relative to the imaging target region.
  • the plurality of original images may be captured by different imaging means.
  • the “weighting rule” according to the present invention is described as the weighting map Mw. However, it is arbitrary how the weighting rule is expressed and implemented in the apparatus. As described above, in addition to the lookup table and the function, a configuration in which the weight coefficient is derived by conditional branching according to case classification may be used.
  • for example, the weight given by the weighting rule may be set larger for pixels at the center of an original image than for pixels at its periphery.
  • this suits optical systems in which the image quality is best at the center of the image and lower at the periphery.
  • the weight given by the weighting rule may also be set to a constant value equal to or greater than the threshold for pixels within a predetermined distance from the position corresponding to the optical center of the imaging means in the original image, and to decrease with increasing distance for pixels farther than that distance.
  • the magnitude of the weight given by the weighting rule may be rotationally symmetric with respect to the optical center, for example.
  • when the imaging optical system is configured as a combination of lenses whose characteristics are rotationally symmetric about the optical axis, its optical characteristics also have rotational symmetry about the optical axis.
  • further, for each position in the imaging target region, a value obtained by summing the pixel values of the pixels corresponding to that position in the respective original images, each multiplied by a coefficient according to its weight, may be used as the pixel value of the pixel corresponding to that position in the composite image.
  • the imaging means may include an area image sensor that converts an optical image in the imaging field of view into an electrical signal.
  • in a configuration in which an area image sensor captures a two-dimensional imaging field at once, the imaging optical system must also form a two-dimensional optical image, and in such a configuration image quality degradation tends to appear at the periphery of the imaging field. In such a case, applying the technical idea of the present invention makes it possible to create a composite image in which the influence of the image quality degradation is suppressed.
  • the imaging means may have a hypercentric optical system that guides the light emitted from the imaging target region within the imaging field of view to the area image sensor.
  • in such an optical system, image quality degradation is particularly likely to occur at the peripheral portion of the image, so the effect of the present invention becomes pronounced.
  • the imaging unit may include an imaging unit that captures an image within the imaging field of view, and a moving unit that moves the imaging unit relative to the imaging target region.
  • with this configuration, a plurality of original images can be captured by a single imaging unit.
  • the imaging unit may further include an illuminating unit that illuminates the imaging target region intermittently in synchronization with the relative movement of the imaging unit with respect to the imaging target region by the moving unit.
  • the imaging means may alternately perform a main scanning process, in which a plurality of original images arranged along a first scanning direction are acquired by intermittent imaging while the imaging means scans at a constant speed relative to the imaging target region in the first scanning direction, and a sub-scanning process, in which the imaging means moves relative to the imaging target region in a second scanning direction orthogonal to the first scanning direction.
  • in this way, a plurality of original images are acquired in each of the first and second scanning directions, and the positions in the first scanning direction may differ between original images adjacent in the second scanning direction.
  • by offsetting, in the first scanning direction, the positions of original images adjacent in the second scanning direction, it is possible to prevent the corners of the original images, where image quality degradation is large, from overlapping one another.
  • the present invention can be applied to all techniques for imaging an area wider than the imaging field of view of the imaging means by dividing it into a plurality of images and creating an image obtained by synthesizing them, and the imaging object is not particularly limited.
  • 1 imaging apparatus (image creation apparatus)
  • 12 illumination unit (imaging means; illumination unit)
  • 13 imaging unit (imaging means; imaging unit)
  • 14 control unit (image composition means)
  • 15 drive mechanism (imaging means; moving unit)
  • 131 objective lens
  • 132 image sensor (area image sensor)
  • Mw weighting map
  • Rw well region
  • W well
  • WP well plate
  • Wt weighting factor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

A compositing technique for joining multiple original images together, with which a composite image is generated while suppressing the influence of image quality deterioration attributable to optical characteristics of an imaging system. Weights to be assigned to respective pixels of the images to be captured are set in advance in accordance with the optical characteristics of the imaging system. By using a mask M, that is, an extracted area having weights equal to or greater than a certain threshold value, a mask arrangement for entirely masking an imaging object region within a well W is determined. The multiple original images are captured using a layout corresponding to the mask arrangement, and the original images are composited after the pixels of the respective images are weighted.

Description

Image creating apparatus and image creating method
The present invention relates to a technique for synthesizing a plurality of original images to create an image of an imaging target area that is larger than an imaging field of view of an imaging means.
Cross-reference to related applications: the disclosures in the specification, drawings, and claims of the following Japanese application are incorporated herein by reference in their entirety:
Japanese Patent Application No. 2015-189251 (filed on Sep. 28, 2015).
When an image covering a wider area than the imaging field of view of the imaging means is required, a composite image may be created from original images captured by dividing the imaging target area into a plurality of areas. In this case, how the plurality of original images are combined greatly affects the image quality of the composite image. In general, adjacent original images are captured so as to partially overlap each other, and appropriate image processing is applied to the overlapping portions when joining the original images, so that the seams become less noticeable.
For example, in the technique described in Patent Document 1, when two images provided with an overlapping portion are combined, the pixel data of the pixels corresponding to the overlapping portion in each image are weighted and added, with the weight made smaller toward the edge of each image, so that the seam becomes inconspicuous. In the technique described in Patent Document 2, when a plurality of images captured at minute time intervals are aligned and combined, one image is taken as the base image and the other images as reference images, and the region used for alignment with the base image is varied for each reference image, thereby eliminating unnaturalness in the combined image.
Patent Document 1: Japanese Patent Application Laid-Open No. S62-140174
Patent Document 2: Japanese Patent Application Laid-Open No. 2011-176777
Although not specifically considered in the above prior art, when a wide imaging target area is divided into a plurality of original images and the images are combined, non-uniformities in image quality caused by the optical characteristics of the imaging optical system can be present in the individual original images, for example when the original images are captured under special imaging conditions. Image quality deterioration such as uneven brightness, blur, and distortion is particularly likely to appear at the periphery of an image. Examples of such special imaging include imaging with a wide-angle imaging field of view and imaging using an imaging optical system having a hypercentric characteristic. For this reason, unless the way the imaging fields of view are overlapped when capturing the original images is set appropriately, degraded image areas end up being stitched together, and a composite image of the intended image quality cannot always be obtained.
The present invention has been made in view of the above problems, and its object is to provide, in a technique for joining and combining a plurality of original images, a technique capable of creating a composite image in which the influence of image quality degradation caused by the optical characteristics of the imaging system is suppressed.
In order to solve the above problems, one aspect of the image creation apparatus according to the present invention comprises: an imaging means that has a two-dimensional imaging field of view and that captures an imaging target region wider than the imaging field of view as a plurality of original images, by performing imaging a plurality of times while changing the position of the imaging field of view relative to the imaging target region and partially overlapping the imaging fields at adjacent positions; and an image composition means that combines the plurality of original images to create a single composite image representing the imaging target region. The image composition means weights each pixel in each original image based on a weighting rule that associates a position in the original image with the weight given to the pixel at that position according to the optical characteristics of the imaging means, and combines the plurality of weighted original images so that pixels corresponding to the same position in the imaging target region overlap each other. The imaging means performs imaging with the imaging fields overlapped in the imaging target region so that every position in the imaging target region is represented as an effective pixel in at least one of the original images.
Likewise, one aspect of the image creation method according to the present invention comprises: an imaging step of capturing an imaging target region wider than the imaging field of view as a plurality of original images, using an imaging means having a two-dimensional imaging field of view, by performing imaging a plurality of times while changing the position of the imaging field of view relative to the imaging target region and partially overlapping the imaging fields at adjacent positions; and a composition step of combining the plurality of original images to create a single composite image representing the imaging target region. In the composition step, each pixel in each original image is weighted based on a weighting rule that associates a position in the original image with the weight given to the pixel at that position according to the optical characteristics of the imaging means, and the plurality of weighted original images are combined so that pixels corresponding to the same position in the imaging target region overlap each other. In the imaging step, imaging is performed with the imaging fields overlapped in the imaging target region so that every position in the imaging target region is represented as an effective pixel in at least one of the original images.
 In these aspects of the invention, an "effective pixel" is defined as a pixel, among the pixels constituting an original image, that is given a weight equal to or greater than a predetermined threshold by the above weighting rule. In the invention configured in this way, each pixel of the plurality of original images captured so as to partially overlap is given a weight based on a weighting rule that reflects the optical characteristics of the imaging means. The weight carried by each pixel within an original image can thus differ from pixel to pixel. Consequently, if the weights of pixels in portions where the optical characteristics of the imaging means cause degradation or loss of image information are made smaller than those of other portions, those portions are less likely to be reflected in the composite image.
 On the other hand, even when a single weighting rule is applied to all the original images, the weights given to pixels corresponding to the same position in the imaging target region are not necessarily the same across original images captured at different positions. For this reason, if the mutual overlap is not appropriate at the stage of capturing the original images, image information may be partially missing from the composite image: even if pixels that were given only small weights are superimposed on one another, no valid image information is reflected in the composite image. The present invention therefore takes the weighting rule used at composition time into account at the imaging stage. That is, the entire imaging target region is covered by pixels that have a weight equal to or greater than a predetermined threshold in at least one original image. More precisely, the imaging field of view for each imaging operation is set, according to the weighting rule, so that this relationship holds.
 Accordingly, a pixel that is given a weight below the threshold in one original image is given a weight equal to or greater than the threshold in some other original image. When the original images are combined, therefore, at every pixel the image information of pixels given large weights is reflected more strongly in the composite image, and loss of image information is avoided.
 As described above, according to the present invention, in a technique for stitching a plurality of original images together, each pixel constituting the plurality of original images is weighted according to the optical characteristics of the imaging means. Moreover, the way in which the imaging fields of view are overlapped when the original images are captured is determined so that the entire imaging target region is represented, in at least one original image, by pixels having a weight equal to or greater than the threshold. It is therefore possible to create a composite image in which the influence of image quality degradation caused by the optical characteristics of the imaging system is suppressed.
 The above and other objects and novel features of the present invention will become more fully apparent from the following detailed description read in conjunction with the accompanying drawings. The drawings, however, are for explanation only and do not limit the scope of the invention.
FIG. 1 shows the schematic configuration of an imaging apparatus to which the image generation method according to the present invention can be applied.
FIG. 2A is a first diagram showing well imaging by this imaging apparatus.
FIG. 2B is a second diagram showing well imaging by this imaging apparatus.
FIG. 3 shows imaging when the imaging target region is larger than the imaging field of view.
FIG. 4A is a first diagram showing an actual example of image quality degradation at the periphery of an image.
FIG. 4B is a second diagram showing an actual example of image quality degradation at the periphery of an image.
FIG. 5A is a first diagram showing how weights are given to pixels.
FIG. 5B is a second diagram showing how weights are given to pixels.
FIG. 5C is a third diagram showing how weights are given to pixels.
FIG. 6A is a first diagram showing an example representation of a weighting map.
FIG. 6B is a second diagram showing an example representation of a weighting map.
FIG. 7A is a first diagram illustrating a mask created from a weighting map.
FIG. 7B is a second diagram illustrating a mask created from a weighting map.
FIG. 8 shows an example allocation of original images.
FIG. 9 is a flowchart showing the imaging process in this embodiment.
FIG. 10 is a flowchart showing the image composition process.
FIG. 11 is a diagram for explaining the cropping of the well region.
 FIG. 1 shows the schematic configuration of an imaging apparatus to which the image generation method according to the present invention can be applied. This imaging apparatus 1 is an apparatus for imaging live samples such as cells, cell colonies and bacteria (hereinafter, "cells etc.") cultured in liquid injected into recesses, called wells W, formed in the top surface of a well plate WP.
 The well plate WP is of a type generally used in the fields of drug discovery and bioscience: a plurality of wells W, each formed as a tube of roughly circular cross section with a transparent, flat bottom, are provided in the top surface of a flat plate. The number of wells W in one well plate WP is arbitrary; for example, a plate with 96 wells (a 12 x 8 matrix arrangement) can be used. The diameter and depth of each well W are typically on the order of several millimeters. Note that the well plate size and number of wells handled by this imaging apparatus 1 are not limited to these values and are arbitrary; plates with 6 to 384 wells, for example, are in general use. Moreover, the imaging apparatus 1 can be used not only with well plates having a plurality of wells but also, for example, for imaging cells etc. cultured in a flat container called a dish.
 A predetermined amount of liquid serving as the culture medium M is injected into each well W of the well plate WP, and cells etc. cultured in this liquid under predetermined culture conditions are the imaging objects of the imaging apparatus 1. The medium may have appropriate reagents added to it, and it may be one that is put into the well W in liquid form and then gels. As described later, in this imaging apparatus 1, cells etc. cultured on the inner bottom surface of the well W, for example, can be the imaging target. A commonly used liquid volume is around 50 to 200 microliters.
 The imaging apparatus 1 comprises a holder 11 that holds the well plate WP, an illumination unit 12 arranged above the holder 11, an imaging unit 13 arranged below the holder 11, and a control unit 14 having a CPU 141 that controls the operation of these units. The holder 11 abuts the peripheral edge of the lower surface of the well plate WP, which carries the samples together with the medium M in each well W, and holds the well plate WP in a substantially horizontal posture.
 The illumination unit 12 emits illumination light toward the well plate WP held by the holder 11. A white LED (Light Emitting Diode), for example, can be used as the light source of the illumination light; the light source combined with an appropriate illumination optical system is used as the illumination unit 12. The illumination unit 12 illuminates the cells etc. in the wells W of the well plate WP from above.
 The imaging unit 13 is provided below the well plate WP held by the holder 11. In the imaging unit 13, an imaging optical system, not shown, is arranged at a position directly below the well plate WP; the optical axis of the imaging optical system is oriented in the vertical direction. FIG. 1 is a side view, and the up-down direction of the figure represents the vertical direction.
 The cells etc. in the well W are imaged by the imaging unit 13. Specifically, light emitted from the illumination unit 12 and entering the liquid from above the well W illuminates the imaging object. Light transmitted downward from the bottom surface of the well W enters the light receiving surface of an imaging element 132 via the imaging optical system of the imaging unit 13, which includes an objective lens 131. The image of the imaging object formed on the light receiving surface of the imaging element 132 by the imaging optical system is captured by the imaging element 132. The imaging element 132 is an area image sensor having a two-dimensional light receiving surface; a CCD sensor or a CMOS sensor, for example, can be used.
 The imaging unit 13 can be moved in the horizontal and vertical directions by a mechanical control unit 146 provided in the control unit 14. Specifically, the mechanical control unit 146 operates a drive mechanism 15 based on control commands from the CPU 141 to move the imaging unit 13 in the horizontal direction, whereby the imaging unit 13 moves horizontally relative to the well W. Focus adjustment is performed by movement in the vertical direction. When imaging is performed with an entire single well W contained in the imaging field of view, the mechanical control unit 146 positions the imaging unit 13 in the horizontal direction so that the optical axis coincides with the center of that well W.
 When moving the imaging unit 13 in the horizontal direction, the drive mechanism 15 moves the illumination unit 12 integrally with the imaging unit 13, as indicated by the dotted arrow in the figure. That is, the illumination unit 12 is arranged so that its optical center substantially coincides with the optical axis of the imaging unit 13, and it moves in concert with the imaging unit 13 when the latter moves in the horizontal direction. Whichever well W is imaged, the center of that well W and the optical center of the illumination unit 12 are thus always located on the optical axis of the imaging unit 13, so the illumination conditions for each well W are kept constant and good imaging conditions are maintained.
 The image signal output from the imaging element 132 of the imaging unit 13 is sent to the control unit 14. That is, the image signal is input to an AD converter (A/D) 143 provided in the control unit 14 and converted into digital image data. The CPU 141 executes appropriate image processing based on the received image data. The control unit 14 further has an image memory 144 for storing image data and a memory 145 for storing programs to be executed by the CPU 141 and data generated by the CPU 141; these may be integrated into one. By executing a control program stored in the memory 145, the CPU 141 performs the various kinds of arithmetic processing described later.
 In addition, the control unit 14 is provided with an interface (IF) unit 142. The interface unit 142 has user interface functions for receiving operation input from the user and presenting information such as processing results to the user, as well as a function of exchanging data with external devices connected via a communication line. To realize the user interface functions, an input reception unit 147 that receives operation input from the user and a display unit 148 that displays and outputs messages, processing results and the like to the user are connected to the interface unit 142.
 The control unit 14 may be a dedicated device equipped with the hardware described above, or it may be a general-purpose processing device such as a personal computer or workstation into which a control program for realizing the processing functions described later has been incorporated. That is, a general-purpose computer can be used as the control unit 14 of the imaging apparatus 1. When a general-purpose processing device is used, it suffices for the imaging apparatus 1 itself to have the minimum control functions needed to operate the imaging unit 13 and the other units.
 FIGS. 2A and 2B show well imaging by this imaging apparatus. More specifically, FIG. 2A shows the path of light during well imaging, and FIG. 2B shows the relationship between a well and the imaging field of view. The well W carrying the cells etc. to be imaged contains the medium M injected in liquid form. Illumination light L incident from above the well W therefore enters the imaging object through the liquid surface of the medium M. The liquid surface generally forms a downwardly convex meniscus, so the illumination light L is refracted and bent outward from the center of the well W. When the illumination light L is the telecentric illumination widely used in this type of imaging apparatus, everywhere except at the optical center the principal rays of the illumination light are bent outward, away from the optical center. Near the center of the well W the refraction is small, and it becomes larger closer to the periphery of the well W.
 To efficiently collect the light bent outward in this way and guide it to the imaging element 132, the optical characteristics of the imaging optical system including the objective lens 131 in this embodiment are such that its principal rays have roughly the same inclination as the principal rays of illumination light bent by a typical meniscus. Such an optical characteristic is also called an object-side hypercentric characteristic. This imaging optical system can efficiently collect, and image onto the imaging element 132, even light whose principal ray points obliquely outward at positions away from the optical axis of the objective lens 131. For this reason, this imaging optical system is well suited to imaging an entire single well W within the imaging field of view V, as shown in FIG. 2B. This point is also described, for example, in JP 2015-118036 A, previously disclosed by the applicant of the present application.
 An entire single well W can be included in the imaging field of view V, as shown in FIG. 2B, when a well W of relatively small diameter is imaged at low magnification. On the other hand, when imaging an object carried in a well of larger diameter (for example, a well of a 6-well plate), or when imaging at high magnification, the size of the region to be imaged becomes large relative to the size of the imaging field of view, and there may be cases where the entire well W constituting the imaging target region cannot be contained in the imaging field of view V.
 FIG. 3 shows imaging when the size of the imaging target region is larger than the size of the imaging field of view. Here, a case is described in which an entire well W of larger diameter than those described so far is taken as the imaging target region and the imaging object inside it is imaged. The same way of thinking can be applied whenever the size of the imaging target region becomes relatively larger than the size of the imaging field of view V; this applies, for example, when imaging an object carried in a shallow, large-diameter container called a dish, or when imaging even a small-diameter well at higher magnification.
 When the imaging target region is wider than the imaging field of view V, it is conceivable to divide the imaging target region into a plurality of images for imaging and to combine those images by image processing, thereby creating an image representing the entire region to be imaged. In this case, to secure the intended image quality in the created image, the quality of the individual images before composition needs to be made sufficiently good. Points to bear in mind for this purpose are described next.
 As shown in FIG. 3, when the imaging field of view V includes only a central region away from the peripheral portion Wp of the well W, the influence that the meniscus of the medium M surface exerts on the optical path is sufficiently small. Under telecentric illumination, light incident near the optical axis C of the objective lens 131 is therefore collected and enters the imaging element 132. At positions away from the optical axis C, by contrast, a mismatch arises from the difference in principal-ray inclination between the incident light and the optical system.
 That is, at positions away from the optical axis C, the objective lens 131 side is configured, on the premise of refraction at the liquid surface, to receive light whose principal rays are inclined outward, as indicated by the dotted lines in the figure. Light that has passed through the well W, however, travels straight without undergoing refraction by a meniscus. The inclination of the principal rays of the incident light therefore does not match the inclination of the principal rays expected on the receiving side. This causes degradation of image quality, particularly at the periphery of the imaging field of view V; specifically, the image may become dark or uneven in brightness.
 Consider the region of the physical imaging field of view V of the imaging unit 13 in which the required image quality can be secured as the effective imaging field of view. Then, as described above, it is mainly the central portion of the physical imaging field of view V that can be used as the effective imaging field of view, while degraded image quality can be a problem at the periphery by comparison. In other words, in an image captured in such a state, image quality is good in the central region of the image, or more precisely in the region spreading around the position in the image corresponding to the optical axis C of the objective lens 131, while image quality may be degraded in the region outside it.
 FIGS. 4A and 4B show actual examples of image quality degradation at the periphery of an image. FIG. 4A shows an example of an image obtained by imaging, in place of the well plate WP, a grid pattern for image quality evaluation consisting of small dots regularly arrayed on a glass plate, together with a partial enlargement of it. In the central portion of the image each dot appears clearly and at a constant array pitch. At the periphery of the image, by contrast, and particularly at the corners of the rectangular image region, the image is dark, the outlines of the dots are unclear, and their shape and array pitch are also disturbed.
 FIG. 4B shows the problem when such images are simply stitched together. Suppose that four images, each including regions of degraded image quality at the periphery and particularly at the corners as shown in FIG. 4A, are joined. In this case, regions of degraded image quality can end up adjacent to one another, as in the circled region in FIG. 4B, and the intended image quality cannot be obtained in the image combining them. As in the prior art described above, it is conceivable to capture a plurality of images so that they partially overlap and to combine the overlapping portions by appropriate image processing to make the seams inconspicuous; this, however, also does not compensate for the image quality degradation in the original images.
 In terms of post-composition image quality, one could simply extract and combine only the central portions, free of quality degradation, from a number of images. Doing so, however, narrows the valid region extracted from each image, so more images are needed; the time required for imaging and composition then becomes long, which is not practical. Moreover, there is no particular quantitative index of how much overlap between images should be secured at the time of imaging.
 Thus, on the premise of stitching a plurality of images together to create an image covering the entire imaging target region, there is a trade-off between the number of images to be captured and image quality. Until now, however, no method with clear criteria had been established for how much overlap between images should be secured, or for how the obtained images should be combined.
 In the imaging apparatus 1 of this embodiment, when the imaging target region is larger than the imaging field of view V of the imaging unit 13, a plurality of original images are acquired by imaging a plurality of times at mutually different positions in the imaging target region, and an image of the entire imaging target region is created by combining those original images. The original images are captured so as to partially overlap one another, and the degree of overlap reflects the processing to be performed in the later composition stage. The principle of image composition and the actual processing in this embodiment are described below.
 As noted above, in a captured original image a certain image quality is obtained in the central portion near the optical axis C of the imaging optical system, while degraded quality is seen at the periphery. In other words, the image information in the region of the original image near the optical axis C is highly reliable, while the reliability of the image information is low in the peripheral portion away from the optical axis C. Accordingly, when combining images, in the overlapping portions of a plurality of images a large weight is given to pixels carrying reliable image information while a smaller weight is given to pixels carrying unreliable image information. In this way, the image information of highly reliable pixels is reflected more strongly in the combined pixels, and the image quality of the composite image can be improved.
 FIGS. 5A, 5B and 5C show how weights are given to pixels. In the figures that follow, when expressing directions in an image, the general coordinate convention for two-dimensional images is followed: the top left of the image is the origin (0, 0), the horizontal direction is the X coordinate, and the vertical direction is the Y coordinate. The coordinates of a point in this image plane are written (x, y).
 FIG. 5A schematically shows how image quality varies. As shown in the figure, indices of image quality such as brightness, distortion and blur are roughly constant out to a certain distance from the optical axis C, but gradually decline beyond it. Weighting that reflects this characteristic can therefore be applied. That is, as shown in FIG. 5B, among the pixels in an original image, a constant weight (for example 1) is given to pixels within a certain distance D of the optical axis C, and beyond that a weighting coefficient Wt is set that decreases gradually with distance from the optical axis C, eventually reaching 0.
 Given the nature of imaging optical systems, it is generally considered sufficient to set the weighting coefficient Wt at each pixel position so that its value is rotationally symmetric about the optical axis C of the imaging optical system. A weighting map, in which the weighting coefficient Wt is mapped for each pixel position in this way, is created in advance. The weighting map is obtained by evaluating the imaging optical system beforehand: for example, test imaging is performed using a test chart such as the grid pattern shown in FIG. 4A, the degree of image quality degradation is quantitatively evaluated for each pixel, and the results, mapped to the corresponding pixel positions, can serve as the weighting map. Note that, depending on the configuration of the optical system, the weighting coefficient Wt may not have rotational symmetry.
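 As a concrete illustration, the following sketch builds a rotationally symmetric weighting map of the kind shown in FIG. 5B. The linear falloff beyond the distance D, the placement of the optical axis at the image center, and all function and parameter names are assumptions made for illustration; an actual map would be derived from test-chart measurements as described above.

    import numpy as np

    def make_weight_map(height, width, d_flat, d_zero, center=None):
        # Weight 1.0 within d_flat pixels of the optical-axis position,
        # falling linearly to 0.0 at d_zero pixels (illustrative falloff).
        if center is None:
            center = ((height - 1) / 2.0, (width - 1) / 2.0)  # assume axis at image center
        yy, xx = np.mgrid[0:height, 0:width]
        r = np.hypot(yy - center[0], xx - center[1])
        return np.clip((d_zero - r) / (d_zero - d_flat), 0.0, 1.0)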
 Even when there is non-uniformity of optical characteristics on the illumination optical system side, for example a non-uniform distribution of the amount of illumination light incident on the well W, the prior evaluation described above yields a weighting map that also incorporates the characteristics of the illumination optical system. In this case, the weighting map reflects the optical characteristics of both the illumination optical system and the imaging optical system.
 Then, as shown in FIG. 5C, the pixel value Pw(x, y) of each pixel in a weighted image Iw is determined by multiplying the pixel value P(x, y) of each pixel of the original image Im by the weighting coefficient Wt(x, y) obtained from the weighting map Mw according to the pixel position. This yields an image Iw in which the original image Im has been weighted based on the optical characteristics of the imaging optical system. In the image Iw weighted in this way, the pixel value P(x, y) of the original image Im is maintained as-is in the central portion near the optical axis C, while in the peripheral portion the weight of the pixel value of the original image Im is reduced to a small value. The weighted pixel value Pw(x, y) of each pixel can be expressed by the following (Equation 1):

    Pw(x, y) = Wt(x, y) × P(x, y)   ... (Equation 1)
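 In code, (Equation 1) amounts to a single element-wise multiplication; a minimal sketch, assuming the original image and the map Mw are arrays of the same shape:

    import numpy as np

    def apply_weighting(original, weight_map):
        # Equation 1: Pw(x, y) = Wt(x, y) * P(x, y), applied to every pixel at once.
        return weight_map * original.astype(np.float64)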
 FIGS. 6A and 6B show example representations of the weighting map. FIG. 6A represents the value of the weighting coefficient Wt over the XY plane (the image plane) as a three-dimensional map. In general, such a three-dimensional map makes it possible to express a weighting coefficient Wt with an arbitrary distribution shape as a curved surface in map space; even when the optical characteristics of the imaging optical system lack rotational symmetry, they can be expressed as an asymmetric surface. To store the map in the memory 145 and use it, it suffices to store it as a lookup table over the X and Y coordinate values.
 FIG. 6B shows an example in which the weighting coefficient Wt is quantized into multiple steps and expressed as a two-dimensional map. A value of 1 is set for the weighting coefficient Wt in the region within a predetermined distance of the optical axis C; smaller values are set for Wt in regions farther from the optical axis C, such as 0.7 in the region outside it and 0.5 in the region beyond that. In this case, the weighting coefficient Wt corresponding to each pixel of the original image Im is set stepwise according to the distance of the pixel position from the optical axis C; it may also be expressed as a function of distance. In such a form, the amount of information to be stored in the memory 145 can be greatly reduced.
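 A stepwise map of this kind can be stored very compactly, for example as a short table of radii and weight values. In the sketch below, the weight steps 1, 0.7 and 0.5 follow the example of FIG. 6B, while the radii are purely illustrative assumptions:

    def quantized_weight(r, steps=((100.0, 1.0), (160.0, 0.7), (220.0, 0.5))):
        # Stepwise weighting coefficient Wt as a function of the distance r
        # (in pixels) of a pixel position from the optical axis C.
        for radius, wt in steps:
            if r <= radius:
                return wt
        return 0.0  # beyond the outermost step the weight is 0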
 Beyond these, the map can be expressed in any format and stored in the memory 145, as long as the weight to be given to the pixel at a given coordinate position in the image can be derived from that position. When a plurality of original images are captured under the same imaging conditions, a single set of the weighting map Mw may serve for all of them. However, it may be desirable to prepare a weighting map Mw for each condition, for example when original images are captured under different illumination or imaging conditions.
 Based on this weighting concept, the degree of image overlap when the original images are captured and the calculation method used at image composition are determined. First, how the original images are overlapped at imaging time is described. Pixels whose image information is more reliable are given larger weights. Therefore, if the entire imaging target region comes to be occupied solely by pixels given weights of at least a certain value, an image representing the entire imaging target region at a given image quality can be created by extracting those pixels and reconstructing the image.
 That is, a virtual mask is defined by extracting the region of the weighting map Mw in which the weighting coefficient Wt is equal to or greater than a predetermined threshold, and an arrangement of such masks that can cover the entire imaging target region is then determined. The number and arrangement of masks at this point represent the number and arrangement of original images needed to represent the entire imaging target region. Such a mask is obtained, for example, by binarizing the weighting map Mw using the above threshold.
 FIGS. 7A and 7B illustrate masks created from the weighting map. The mask M1 illustrated in FIG. 7A is an example in which a relatively large threshold Th1 (for example 0.9) is set for the weighting coefficient Wt, while FIG. 7B shows an example mask M2 for which a smaller threshold Th2 (for example 0.7) is set. In these figures the hatched portions indicate the portions having weights equal to or greater than the threshold; it is these portions that mask the imaging target region.
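 Creating such a mask from the weighting map is then a one-line binarization; a sketch, reusing the array-based map of the earlier example (names and the default threshold are illustrative):

    import numpy as np

    def make_mask(weight_map, threshold=0.7):
        # Effective-pixel mask: True where the weighting coefficient Wt is at
        # or above the threshold (0.7 corresponds to the example mask M2).
        return np.asarray(weight_map) >= threshold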
 When the threshold is set high, a high level of image quality is guaranteed within the masked region, but the masked area is small. This means that many images are needed to cover the entire imaging region. For example, setting the threshold to 1 brings image quality degradation down to a negligible level, but correspondingly increases the number of images required and lengthens the processing time.
 When the threshold is set low, by contrast, the masked area is large, so fewer images are needed. However, portions with somewhat degraded image quality are then also used in the composite image, and this is reflected in the quality of the image after composition. The threshold can be determined according to the image quality required. In the following description, the mask M2 shown in FIG. 7B is used as the mask M.
 FIG. 8 shows an example allocation of original images. There are innumerable possible arrangements of the mask M created from the weighting map Mw for covering the entire well W constituting the imaging target region. It is desirable, however, that the area of overlap between masks be as small as possible. Considering also that the original images are captured in sequence while changing the position of the imaging unit 13 relative to the well W, it is desirable that the movement path of the imaging unit 13 be as simple and as short as possible.
 Given that the imaging apparatus 1 moves the imaging unit 13 in the X and Y directions, which are the side directions of the mask M (or of the original images), the masks M are preferably also arranged along these directions. From this point of view, an arrangement such as that shown in FIG. 8, for example, can be considered.
 In FIG. 8, the points P1 to P10 indicated by black circles show the centroid positions of the plurality of arranged masks M (or original images). In a typical imaging method, the centroid position of an original image coincides with the position of the optical axis C when that original image is captured. The solid arrows show the trajectory of the scanning movement of the imaging unit 13, or more precisely the trajectory of the optical axis C of the imaging unit 13 over the imaging target region of the well W, and the white circles Ps and Pe show the start and end points of the scanning movement of the imaging unit 13.
 In the example shown in FIG. 8, ten masks M cover the entire interior region of the well W, which is the imaging target region. The ten masks M are arranged in three columns in the X direction: the left, i.e. most (-X)-side, column consists of three masks, the center column of four, and the right, i.e. most (+X)-side, column of three. Within each column, adjacent masks M partially overlap. The mask shape is isotropic in the X and Y directions, and the array pitch of the masks M is the same in the X and Y directions.
 The Y-direction array of the masks M in the center column is shifted in the Y direction by half the array pitch relative to the Y-direction arrays of the masks M in the left and right columns. Consequently, connecting the centroid position of one mask in one column (for example point P3) to the centroid positions of the two nearest masks in the adjacent column (for example points P4 and P5) forms an equilateral triangle. Put another way, the distance between the centroid position of one mask and the centroid position of each of the plurality of masks partially overlapping it is the same for all of those masks.
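 Under these constraints the centroid positions can be generated mechanically. The sketch below reproduces the FIG. 8 layout (columns of 3, 4 and 3 centroids): if the pitch within a column is p and the half-pitch Y offset is applied between columns, the column spacing must be p × sqrt(3)/2 for nearest-neighbor centroids to form equilateral triangles. Function and parameter names are assumptions.

    import math

    def staggered_centers(pitch, col_counts=(3, 4, 3), origin=(0.0, 0.0)):
        # Centroid positions P1..P10 of FIG. 8. Columns run in the X direction;
        # shorter columns are shifted by half a pitch in Y so that each centroid
        # is equidistant from its nearest neighbors in the adjacent column.
        dx = pitch * math.sqrt(3) / 2.0  # column spacing for equilateral triangles
        centers = []
        for c, n in enumerate(col_counts):
            y0 = origin[1] + (0.0 if n == max(col_counts) else pitch / 2.0)
            centers.extend((origin[0] + c * dx, y0 + r * pitch) for r in range(n))
        return centers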
 With such an arrangement, the portions of degraded image quality that appear at the corners of an original image captured with a rotationally symmetric imaging optical system can be covered effectively by other original images. More generally, by offsetting the positions of the original images between adjacent columns along the direction in which the original images are lined up within each column, the corners of original images in adjacent columns can be kept from overlapping one another. Arranging the original images to match masks determined quantitatively from the results of weighting based on image quality evaluation avoids concentrations of quality-degraded portions in the overlay and makes it possible to secure image quality at or above a certain level.
 The mask arrangement of FIG. 8 shows the allocation of the original images at imaging time. At imaging time, therefore, the scanning movement recipe of the imaging unit 13 is set so that the optical axis C of the imaging optical system passes in sequence through the points P1 to P10 corresponding to the centroids of the masks M. If imaging is performed each time the optical axis C of the imaging optical system reaches one of the points P1 to P10 while the imaging unit 13 is scanned according to the set recipe, original images covering the entire well W constituting the imaging target region are acquired. In this sense, the position of the imaging unit 13 corresponding to each of the points P1 to P10, that is, when the optical axis C coincides with that point, is referred to below as an "imaging position".
 FIG. 9 is a flowchart showing the imaging process in this embodiment. The CPU 141 causes each unit of the apparatus to perform predetermined operations based on a control program created in advance and executes the imaging process shown in FIG. 9, whereby a plurality of original images are acquired. First, the imaging unit 13 is positioned at a predetermined start position by the drive mechanism 15, which operates in response to control commands from the mechanical control unit 146 (step S101). The start point Ps shown in FIG. 8 corresponds to the optical axis position of the objective lens 131 at this time. Only the scanning movement of the imaging unit 13 is described here, but as stated above, the illumination unit 12 moves together with the imaging unit 13 so that the optical center of the illumination light and the optical axis of the objective lens 131 always coincide.
 Next, scanning movement of the imaging unit 13 relative to the well W is started based on the preset scanning movement recipe (step S102). When the imaging unit 13 reaches the end position corresponding to the end point Pe, the process ends (step S103). Until the end position is reached, each time the imaging unit 13 arrives at one of the imaging positions corresponding to the points P1 to P10 (step S104), steps S105 and S106 are executed and imaging is performed. The position of the imaging unit 13 can be detected, for example, based on the output signal of a position sensor (not shown) attached to the imaging unit 13.
 When the imaging unit 13 reaches an imaging position, the light source of the illumination unit 12 is lit for a predetermined time to strobe-illuminate the imaging object, and the imaging element 132 performs imaging in synchronization with this, whereby one image is acquired (step S105). The image data obtained by digitizing the image signal output from the imaging element 132 with the AD converter 143 is stored in the image memory 144 (step S106). By repeating this process until the imaging unit 13 reaches the end position, imaging is performed at the imaging positions corresponding to each of the points P1 to P10 and ten original images are acquired.
 Because imaging is performed under strobe illumination, there is no need to stop the scanning movement of the imaging unit 13 temporarily for imaging; the mechanical control unit 146 can simply scan the imaging unit 13 at a constant speed according to the scanning movement recipe. By optimizing the scanning movement recipe so that the path connecting the points P1 to P10 is as short as possible, the time required for imaging can be shortened.
 In the scanning movement recipe of this embodiment, the imaging unit 13 moves a predetermined distance in the Y direction from the start position corresponding to the start point Ps to capture the original images of one column (the left column), then moves a fixed amount in the X direction and captures the original images of one column (the center column) while again moving in the Y direction. It then moves a further fixed amount in the X direction, captures the original images of one column (the right column) while moving in the Y direction, and finally reaches the end position corresponding to the end point Pe. In this way, in this embodiment the imaging unit 13 acquires the required number of original images by alternating Y-direction and X-direction movements.
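 The acquisition loop of FIG. 9 can be summarized in pseudocode form as below. The stage, camera and strobe interfaces are hypothetical stand-ins for the drive mechanism 15, imaging element 132 and illumination unit 12; the sketch only mirrors the control flow of steps S101 to S106.

    def run_imaging(stage, camera, strobe, imaging_positions, start_pos, end_pos):
        images = []
        stage.move_to(start_pos)                 # step S101: go to start position
        stage.start_scan(end_pos)                # step S102: begin constant-speed scan
        pending = list(imaging_positions)        # points P1..P10, in scan order
        while not stage.reached(end_pos):        # step S103: loop until end position
            if pending and stage.reached(pending[0]):   # step S104: at an imaging position?
                strobe.fire()                    # step S105: strobe illumination...
                images.append(camera.capture())  # ...with a synchronized exposure
                pending.pop(0)                   # step S106: store the acquired image
        return images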
 Next, the process of combining the plurality of original images obtained in this way to create an image corresponding to the entire imaging target region is described. In this allocation example ten original images are acquired; in the following, for greater generality, the number of original images is taken to be K (K being an integer of 2 or more), and each original image is denoted Ik (k = 1, 2, ..., K). In this example, K = 10.
 FIG. 10 is a flowchart showing the image composition process. First, various corrections that can be executed on each original image individually, such as shading correction, distortion correction and deconvolution correction, are applied to the K acquired original images (step S201). These corrections are performed as needed and may be omitted. Next, the corrected original images Ik are superimposed on a virtual two-dimensional map plane so that the positions of pixels corresponding to the same position in the well W coincide (step S202). A well region covering the entire well W is then cropped from the result (step S203).
 FIG. 11 is a diagram for explaining the cropping of the well region. Of the area occupied by the original images Ik (k = 1 to 10 in this example) superimposed on the virtual map plane, a rectangular region including the entire well W and a minimal region outside the well is cropped out as the well region Rw. For the purpose of creating an image representing the entire well W, subsequent processing is performed within the well region Rw and the other regions are unnecessary. For convenience of description, a new coordinate system is defined for the well region Rw: to distinguish it from the XY coordinate system, the new coordinate system is called the X'Y' coordinate system, and a point in it is written with coordinates (x', y').
 Returning to FIG. 10, the composition process is described. Following the cropping of the well region, composition is executed pixel by pixel. First, the coordinates (x', y') of the pixel to be processed in the well region Rw are initialized to the origin (0, 0) at the top-left corner of the well region Rw (steps S204, S205). The pixel value of the pixel corresponding to the coordinate point (x', y') in the composite image is then calculated (step S206). When only one original image has a pixel occupying the coordinate point (x', y'), the (unweighted) pixel value of that pixel in that original image is used as-is as the composite pixel value. When a plurality of original images have pixels occupying the coordinate point (x', y'), the composite pixel value is obtained by calculation based on the pixel values of the pixels extracted from those original images.
 Each pixel of an original image Ik is given a weight according to the reliability of its image information. When a plurality of original images overlap, therefore, the pixel values of pixels with larger weights should be reflected more strongly in the composite pixel value. In principle, the composite pixel value Po(x', y') can be expressed as the weighted sum of the pixel values of the overlapping pixels, specifically by the following (Equation 2):

    Po(x', y') = Σ[k=1..K] Pwk(x', y')   ... (Equation 2)
 In (Equation 2), Pwk(x', y') denotes the pixel value Pw(x', y') obtained by weighting, per (Equation 1), the pixel of the original image Ik located at the coordinate point (x', y'). The value Pwk(x', y') of an original image Ik that has no pixel at that coordinate position is taken to be 0.
 In this case, for example in a region where pixels given relatively large weights overlap one another, the pixel value can become too large compared with other regions. It is therefore more preferable for the pixel value to be normalized by the sum of the weighting coefficients Wt of the overlapping pixels, as in the following (Equation 3):

    Po(x', y') = ( Σ[k=1..K] Wtk(x', y') × Pk(x', y') ) / ( Σ[k=1..K] Wtk(x', y') )   ... (Equation 3)
 In (Equation 3), Pk(x', y') is the pixel value of the pixel extracted from the original image Ik as the pixel located at the coordinate point (x', y'); its value is taken to be 0 when the original image Ik has no pixel at that position. Wtk(x', y') is the weighting coefficient applied to the pixel value Pk(x', y') based on the weighting map Mw. The weighting map Mw itself is common to all the original images, but because each original image Ik is mapped to a different position on the virtual map plane, the weighting map Mw applied to each original image Ik must also track the position of that original image on the virtual map plane.
By applying to the weighting map Mw the same coordinate transformation that is needed to map the original image Ik from the XY map plane onto the X′Y′ map plane, the weighting map Mw can be aligned with the position of the original image Ik on the X′Y′ map plane. As a result, the value of the weighting factor Wt at a coordinate point (x′, y′) is not necessarily unique: when a plurality of original images occupy that coordinate point, an individual value exists for each corresponding original image Ik. To distinguish these, the subscript k is used, and the weighting factor corresponding to the pixel value Pk(x′, y′) of the pixel of the original image Ik occupying the coordinate point (x′, y′) is denoted Wtk(x′, y′). When the original image Ik has no pixel at the coordinate point (x′, y′), the corresponding weighting factor Wtk(x′, y′) is taken to be 0.
When normalization based on (Formula 3) is performed, in a case where pixels that have all been given small weights (for example, weights sufficiently small relative to the threshold) happen to overlap, the denominator of (Formula 3) becomes small and the pixel value could, on the contrary, be amplified. In this embodiment, however, the allocation of the original images is determined so that at least one of the mutually overlapping original images has a weight equal to or greater than the threshold, so this problem does not arise.
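As a sketch of the normalized combination of (Formula 3), assuming NumPy and that every original image shares the shape of the common weighting map Mw (both assumptions of this illustration):

import numpy as np

def composite_formula3(tiles, origins, Mw, mosaic_shape, eps=1e-12):
    # Accumulate the numerator sum(Wt_k * P_k) and the denominator
    # sum(Wt_k) of (Formula 3) over all original images Ik.
    num = np.zeros(mosaic_shape, dtype=np.float64)
    den = np.zeros(mosaic_shape, dtype=np.float64)
    h, w = Mw.shape
    for tile, (oy, ox) in zip(tiles, origins):
        # Wt_k is the common map Mw translated to tile k's position;
        # it is implicitly 0 wherever Ik has no pixel.
        num[oy:oy + h, ox:ox + w] += Mw * tile
        den[oy:oy + h, ox:ox + w] += Mw
    # eps only guards mosaic pixels covered by no tile; by the allocation
    # rule of this embodiment, covered pixels always have a denominator
    # of at least the threshold weight.
    return num / np.maximum(den, eps)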
Once the pixel value of the coordinate point (x′, y′) in the well region Rw has been obtained by either (Formula 2) or (Formula 3) in this way, the coordinate value x′ is incremented by one, moving the target pixel one position to the right. The calculation of the pixel value Po(x′, y′) is repeated until the coordinate position reaches the right end of the well region Rw (steps S207, S208). This yields the pixel values for one horizontal (X′-direction) row of the composite image.
The above calculation for one row is repeated, incrementing the coordinate value y′ one step at a time, until the lower end of the well region Rw is reached (steps S209, S210). The pixel values of all the pixels within the well region Rw of the composite image are thereby determined. By reconstructing the image corresponding to the well region Rw from the pixel values thus calculated (step S211), a composite image representing the entire well W can be created from the plurality of original images, each of which captures the well W only partially.
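The raster scan of steps S204 to S211 could be mirrored as follows; pixel_value stands for a caller-supplied evaluation of (Formula 2) or (Formula 3) and is a placeholder of this sketch.

def composite_well(width, height, pixel_value):
    # Steps S204/S205: start at the origin (0, 0) of the well region Rw.
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):        # steps S209/S210: advance to the next row
        for x in range(width):     # steps S207/S208: advance to the next column
            out[y][x] = pixel_value(x, y)  # step S206: evaluate Po(x', y')
    return out                     # step S211: reconstruct the image from out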
When compositing the images, methods other than the above are also conceivable for handling the portions where original images overlap. For example, one could adopt the pixel value of whichever overlapping pixel has been given the largest weight. In that case, however, the image content may connect unnaturally at the edges of the overlap, making the seam noticeable.
On the other hand, when, for example, the overlap contains a pixel with a weight of 1, it is also conceivable to adopt that pixel's value as-is, without processing. Since image quality degradation is not an issue for such a pixel, this method is effective in terms of preserving the image information to the maximum extent. Here too, however, the connection with the surrounding pixels can be a problem. The compositing method can be selected as appropriate depending on whether preservation of image information or smoothness of the composite image is given priority.
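The two alternative policies just described might look as follows for a single coordinate; the list-based interface is an assumption of this sketch.

def merge_max_weight(pixels, weights):
    # Alternative (a): adopt the value of the overlapping pixel
    # that was given the largest weight.
    k = max(range(len(pixels)), key=lambda i: weights[i])
    return pixels[k]

def merge_passthrough(pixels, weights):
    # Alternative (b): if any contributor has weight 1, use its
    # value unmodified; otherwise fall back to (a).
    for p, w in zip(pixels, weights):
        if w == 1.0:
            return p
    return merge_max_weight(pixels, weights)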
As described above, in this embodiment, when an image of an imaging target region wider than the imaging field of view V of the imaging apparatus 1 is required, the imaging target region is divided into a plurality of parts and imaged. To allow the plurality of original images to be stitched together smoothly, the images are captured so that adjacent original images partially overlap, and the overlap between the original images reflects the processing to be performed in the subsequent image compositing.
That is, in this embodiment, a weighting map Mw expressing the weighting factor Wt that assigns a weight to each pixel of an original image is prepared, taking into account the image quality degradation that arises at the periphery of each original image owing to the optical characteristics of the imaging unit 13. In a portion where a plurality of original images overlap, the composite pixel value is calculated according to the weights of the pixels extracted from the respective original images. Specifically, the pixel value of each pixel is multiplied by its weighting factor and the results are summed, so that the pixel value of a pixel given a larger weight is reflected more strongly in the composite pixel value.
The allocation of the original images for the divided imaging is then determined so that the entire imaging region is covered, in at least one original image, by pixels given a weight equal to or greater than a predetermined threshold. In this way, the image information of the entire imaging region can be determined using the image information of pixels that have at least a certain level of image quality in the original images. Consequently, no pixel is reconstructed solely from pixels with small weights, that is, pixels whose image information has low reliability, and the quality of the composite image can be kept high.
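A check of this allocation rule could be sketched as follows, assuming integer tile origins on the region grid and a NumPy weighting map; both are assumptions of the illustration.

import numpy as np

def allocation_is_valid(origins, Mw, region_shape, threshold):
    # Every position of the imaging target region must be an
    # "effective pixel" (weight >= threshold) in at least one tile.
    covered = np.zeros(region_shape, dtype=bool)
    effective = Mw >= threshold
    h, w = Mw.shape
    for oy, ox in origins:
        covered[oy:oy + h, ox:ox + w] |= effective
    return bool(covered.all())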
As described above, in the above embodiment, the illumination unit 12 functions as the "illumination unit" of the present invention, the imaging unit 13 as the "imaging unit" of the present invention, and the drive mechanism 15 as the "moving unit" of the present invention, and together they function integrally as the "imaging means" of the present invention. The imaging element 132 corresponds to the "area image sensor" of the present invention, and the imaging optical system including the objective lens 131 corresponds to the "hypercentric optical system". The control unit 14 functions as the "image compositing means" of the present invention.
In the above embodiment, the movement direction of the imaging unit 13 corresponding to the Y direction of the image corresponds to the "first scanning direction" of the present invention, and the movement of the imaging unit 13 in this direction, together with the imaging performed along the way, corresponds to the "main scanning process" of the present invention. The direction corresponding to the X direction corresponds to the "second scanning direction" of the present invention, and the movement of the imaging unit 13 in this direction corresponds to the "sub-scanning process" of the present invention.
The present invention is not limited to the embodiment described above, and various modifications other than those described can be made without departing from its spirit. For example, the imaging unit 13 of the above embodiment has a hypercentric optical system. Because image quality degradation readily occurs at the periphery of the image with this imaging unit 13 even in the absence of meniscus effects, the present invention is particularly effective there. However, the problem that image quality tends to degrade at the periphery of an image due to, for example, lens aberration, vignetting, or unevenness of the illumination light also arises in various optical systems other than such an imaging optical system, and the present invention is applicable to apparatuses having such various imaging optical systems.
The above embodiment is also an imaging apparatus that images a living sample such as cells carried in the wells W. However, the objects to be imaged by the imaging apparatus according to the present invention are not limited to these and are arbitrary. Even when imaging is not performed through a liquid surface, an image captured using, for example, a wide-angle lens tends to show large distortion at its periphery, which is another case in which the present invention functions particularly effectively.
Furthermore, the allocation of the original images described above is merely an example and should be set appropriately according to the purpose and in keeping with the spirit of the present invention.
In the above embodiment, a single imaging unit 13 captures the plurality of original images while scanning over the imaging target region. However, the plurality of original images may instead be captured by different imaging means. In that case, since the characteristics differ from one imaging means to another, it is desirable to prepare a weighting map for each imaging means.
In the description of the above embodiment, the "weighting rule" according to the present invention is described as the weighting map Mw, but the form in which the weighting rule is expressed and implemented in the apparatus is arbitrary. As described above, besides a lookup table or a function, a configuration in which the weighting factor is derived by case-by-case conditional branching may be used.
As illustrated by the specific embodiment described above, in the present invention the weights given by the weighting rule may be set so as to be larger for pixels in the central portion of the original image than for pixels in its peripheral portion. In a typical imaging optical system, image quality is best at the center of the image and degrades toward the periphery. Weighting that reflects this property makes it possible to effectively suppress the influence of the image quality degradation caused by the characteristics of the imaging optical system.
Also, for example, the weight given by the weighting rule may be set to a constant value equal to or greater than the threshold for pixels within a predetermined distance of the position in the original image corresponding to the optical center of the imaging means, while for pixels beyond that distance it may decrease with increasing distance from the optical center. Giving a constant weight to the central portion of the image suppresses needless processing of the image, so the image quality of the original image can be maintained.
In this case, the magnitudes of the weights given by the weighting rule may, for example, be rotationally symmetric about the optical center. When the imaging optical system is composed of a combination of lenses whose characteristics are rotationally symmetric about the optical axis, its optical characteristics are likewise rotationally symmetric about the optical axis. Weighting that reflects this allows the weighting rule to be described by a simpler expression, simplifying the weighting process.
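One hypothetical weighting rule with these properties — constant within a radius of the optical center, rotationally symmetric, and decaying outward — could be written as below; the linear falloff is an arbitrary choice of this sketch.

import numpy as np

def radial_weight_map(height, width, center_y, center_x,
                      flat_radius, falloff):
    # Distance of every pixel from the optical center.
    yy, xx = np.mgrid[0:height, 0:width]
    r = np.hypot(yy - center_y, xx - center_x)
    # Weight 1.0 within flat_radius, then a linear decrease to 0
    # over a further 'falloff' pixels.
    return np.clip(1.0 - (r - flat_radius) / falloff, 0.0, 1.0)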
Also, for example, in the image compositing, when there are a plurality of original images containing pixels corresponding to the same position in the imaging target region, the value obtained by multiplying the pixel value of the pixel corresponding to that position in each of those original images by a coefficient according to the magnitude of the weight under the weighting rule, and summing the results, may be used as the pixel value of the pixel corresponding to that position in the composite image. With this configuration, the larger the weight given to a pixel, the more strongly its pixel value is reflected in the result, which suppresses the influence of pixels with small weights, that is, pixels with large image quality degradation.
Also, for example, the imaging means may have an area image sensor that converts the optical image within the imaging field of view into an electrical signal. In a configuration in which a two-dimensional imaging field of view is captured at once by an area image sensor, the imaging optical system must also be capable of forming a two-dimensional optical image, and in such a configuration image quality degradation readily appears at the periphery of the imaging field of view. Applying the technical concept of the present invention to such a case makes it possible to create a composite image in which the influence of the image quality degradation is suppressed.
In this case, the imaging means may have a hypercentric optical system that guides the light emitted from the imaging target region within the imaging field of view to the area image sensor. Since image quality degradation is particularly likely to appear at the periphery of the image with such an imaging optical system, the effect of the present invention becomes pronounced.
Also, for example, the imaging means may have an imaging unit that captures the imaging field of view and a moving unit that moves the imaging unit relative to the imaging target region. With this configuration, a plurality of original images can be captured by a single imaging unit.
In this case, the imaging means may further have an illumination unit that illuminates the imaging target region by lighting intermittently in synchronization with the relative movement of the imaging unit with respect to the imaging target region by the moving unit. In such a configuration, the optical image is received by the imaging unit only while the illumination unit is lit, so a still image can be acquired without stopping the movement of the imaging unit by the moving unit. The time required for imaging can therefore be shortened compared with a configuration in which the imaging unit is temporarily stopped for each capture.
In the imaging method according to the present invention, the imaging step may alternate between a main scanning process, in which the imaging means acquires a plurality of original images lined up along a first scanning direction by imaging intermittently while scanning at a constant speed in the first scanning direction relative to the imaging target region, and a sub-scanning process, in which the imaging means moves relative to the imaging target region in a second scanning direction orthogonal to the first scanning direction, so that a plurality of original images are acquired in each of the first and second scanning directions; between original images adjacent in the second scanning direction, the positions in the first scanning direction may differ from each other. In such a configuration, offsetting the positions of original images adjacent in the second scanning direction along the first scanning direction prevents the corners of the original images, where image quality degradation is large, from overlapping one another. A sketch of such a staggered allocation follows.
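In this sketch the stagger offset, pitches, and grid sizes are hypothetical parameters, not values from the disclosure.

def staggered_origins(n_cols, n_rows, pitch_x, pitch_y, stagger):
    # Columns correspond to sub-scanning steps along X (the second
    # scanning direction); within a column, tiles follow the main
    # scan along Y (the first scanning direction). Alternate columns
    # are offset by 'stagger' in Y so that tile corners, where image
    # quality is worst, never coincide.
    origins = []
    for c in range(n_cols):
        y0 = (c % 2) * stagger
        for r in range(n_rows):
            origins.append((y0 + r * pitch_y, c * pitch_x))
    return origins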
While the invention has been described above with reference to specific embodiments, this description is not intended to be interpreted in a limiting sense. By referring to this description, various modifications of the disclosed embodiments, as well as other embodiments of the present invention, will become apparent to those skilled in the art. The appended claims are therefore intended to cover such modifications or embodiments insofar as they do not depart from the true scope of the invention.
The present invention is applicable to techniques in general that image a region wider than the imaging field of view of an imaging means by dividing it into a plurality of images and create an image by compositing them; the object to be imaged is not particularly limited.
1 Imaging apparatus (image creation apparatus)
12 Illumination unit (imaging means, illumination unit)
13 Imaging unit (imaging means, imaging unit)
14 Control unit (image compositing means)
15 Drive mechanism (imaging means, moving unit)
131 Objective lens
132 Imaging element (area image sensor)
Mw Weighting map
Rw Well region
W Well
WP Well plate
Wt Weighting factor

Claims (11)

1. An image creation apparatus comprising:
     an imaging means that has a two-dimensional imaging field of view and that images an imaging target region wider than the imaging field of view by dividing it into a plurality of original images, performing imaging a plurality of times while making the positions of the imaging field of view relative to the imaging target region differ from one another and partially overlapping imaging fields of view at adjacent positions; and
     an image compositing means that composites the plurality of original images to create a single composite image representing the imaging target region,
     wherein the image compositing means weights each pixel in each individual original image on the basis of a weighting rule in which positions within an original image are associated, according to the optical characteristics of the imaging means, with the weights given to the pixels at those positions, and composites the plurality of weighted original images so that pixels corresponding to the same position in the imaging target region overlap one another, and
     wherein, when a pixel in an original image that is given a weight equal to or greater than a predetermined threshold by the weighting rule is defined as an effective pixel,
     the imaging means performs the imaging with the imaging fields of view overlapped in the imaging target region so that every position in the imaging target region is represented as an effective pixel in at least one of the original images.
2. The image creation apparatus according to claim 1, wherein the weights given by the weighting rule are larger for pixels in a central portion of the original image than for pixels in a peripheral portion thereof.
3. The image creation apparatus according to claim 1 or 2, wherein the weight given by the weighting rule is a constant value equal to or greater than the threshold for pixels of the original image within a predetermined distance of the position corresponding to the optical center of the imaging means, while for pixels farther away than the predetermined distance it becomes smaller with increasing distance from the optical center.
4. The image creation apparatus according to claim 3, wherein the magnitudes of the weights given by the weighting rule are rotationally symmetric about the optical center.
5. The image creation apparatus according to any one of claims 1 to 4, wherein, when there are a plurality of the original images containing pixels corresponding to the same position in the imaging target region, the image compositing means takes, as the pixel value of the pixel corresponding to that position in the composite image, the value obtained by multiplying the pixel value of the pixel corresponding to that position in each of those original images by a coefficient according to the magnitude of the weight under the weighting rule and summing the results.
6. The image creation apparatus according to any one of claims 1 to 5, wherein the imaging means has an area image sensor that converts an optical image within the imaging field of view into an electrical signal.
7. The image creation apparatus according to claim 6, wherein the imaging means has a hypercentric optical system that guides light emitted from the imaging target region within the imaging field of view to the area image sensor.
8. The image creation apparatus according to any one of claims 1 to 7, wherein the imaging means has an imaging unit that images the imaging field of view and a moving unit that moves the imaging unit relative to the imaging target region.
9. The image creation apparatus according to claim 8, wherein the imaging means has an illumination unit that illuminates the imaging target region by lighting intermittently in synchronization with the relative movement of the imaging unit with respect to the imaging target region by the moving unit.
10. An image creation method comprising:
     an imaging step of imaging an imaging target region wider than a two-dimensional imaging field of view of an imaging means by dividing it into a plurality of original images, the imaging means performing imaging a plurality of times while making the positions of the imaging field of view relative to the imaging target region differ from one another and partially overlapping imaging fields of view at adjacent positions; and
     a compositing step of compositing the plurality of original images to create a single composite image representing the imaging target region,
     wherein, in the compositing step, each pixel in each individual original image is weighted on the basis of a weighting rule in which positions within an original image are associated, according to the optical characteristics of the imaging means, with the weights given to the pixels at those positions, and the plurality of weighted original images are composited so that pixels corresponding to the same position in the imaging target region overlap one another, and
     wherein, when a pixel in an original image that is given a weight equal to or greater than a predetermined threshold by the weighting rule is defined as an effective pixel,
     in the imaging step the imaging is performed with the imaging fields of view overlapped in the imaging target region so that every position in the imaging target region is represented as an effective pixel in at least one of the original images.
11. The image creation method according to claim 10, wherein, in the imaging step,
     a main scanning process, in which the imaging means acquires a plurality of the original images lined up along a first scanning direction by imaging intermittently while scanning at a constant speed in the first scanning direction relative to the imaging target region, and
     a sub-scanning process, in which the imaging means moves relative to the imaging target region in a second scanning direction orthogonal to the first scanning direction,
     are executed alternately so that a plurality of the original images are acquired in each of the first scanning direction and the second scanning direction, and
     the positions in the first scanning direction differ between the original images adjacent in the second scanning direction.
PCT/JP2016/069135 2015-09-28 2016-06-28 Image generation device and image generation method WO2017056599A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-189251 2015-09-28
JP2015189251A JP2017068302A (en) 2015-09-28 2015-09-28 Image creation device and image creation method

Publications (1)

Publication Number Publication Date
WO2017056599A1

Family

ID=58423328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/069135 WO2017056599A1 (en) 2015-09-28 2016-06-28 Image generation device and image generation method

Country Status (2)

Country Link
JP (1) JP2017068302A (en)
WO (1) WO2017056599A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2019044408A1 (en) * 2017-08-31 2020-02-27 富士フイルム株式会社 Image processing apparatus, method and program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62140174A (en) * 1985-12-13 1987-06-23 Canon Inc Image synthesizing method
JP2010134374A (en) * 2008-12-08 2010-06-17 Olympus Corp Microscope system and method of operation thereof
JP2012003214A (en) * 2010-05-19 2012-01-05 Sony Corp Information processor, information processing method, program, imaging device and imaging device having light microscope
JP2015055849A (en) * 2013-09-13 2015-03-23 オリンパス株式会社 Imaging device, microscope system, imaging method and imaging program

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114594107A (en) * 2022-05-09 2022-06-07 武汉精立电子技术有限公司 Optimization method and application of scanning path and detection method of surface of semiconductor material
CN114594107B (en) * 2022-05-09 2022-08-16 武汉精立电子技术有限公司 Optimization method and application of scanning path and detection method of surface of semiconductor material
CN116306764A (en) * 2023-03-22 2023-06-23 北京京瀚禹电子工程技术有限公司 Electronic component counting system based on machine vision
CN116306764B (en) * 2023-03-22 2023-11-14 北京京瀚禹电子工程技术有限公司 Electronic component counting system based on machine vision

Also Published As

Publication number Publication date
JP2017068302A (en) 2017-04-06

Similar Documents

Publication Publication Date Title
US9088729B2 (en) Imaging apparatus and method of controlling same
US20120147224A1 (en) Imaging apparatus
US10334216B2 (en) Imaging system including lens with longitudinal chromatic aberration, endoscope and imaging method
JP2016071117A (en) Imaging device and imaging method
JP5940383B2 (en) Microscope system
CN101241590B (en) Image processing apparatus and method, program, and recording medium
US9046680B2 (en) Scanning illumination microscope
JP2004101871A (en) Photographing apparatus for microscope image
WO2017056599A1 (en) Image generation device and image generation method
JP6739061B2 (en) Image generating apparatus, image generating method and program
JP5819897B2 (en) Imaging system and imaging method
JP2017156208A (en) Imaging apparatus
JP2013137635A (en) Picture display unit and picture display method
US10921577B2 (en) Endoscope device
US20200098094A1 (en) Image processing method and image processing apparatus
JP5224976B2 (en) Image correction apparatus, image correction method, program, and recording medium
TWI781490B (en) Focusing position detection method, focusing position detector, recording medium, and focusing position detection program
JP6215894B2 (en) Image processing method and shading reference data creation method
US9065960B2 (en) System for non-uniformly illuminating an original and capturing an image therof
US20130215251A1 (en) Imaging apparatus, imaging control program, and imaging method
JP2014086899A (en) Image processing device, image processing method, and program
WO2017056600A1 (en) Image processing method and control program
JP7053588B2 (en) A method for forming a preview image using a slope microscope and an image forming device for a slope microscope and a slope microscope.
CN1726430A (en) Device for recording and device for reproducing three-dimensional items of image information of an object
JP6985966B2 (en) Imaging method and imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16850776

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16850776

Country of ref document: EP

Kind code of ref document: A1