WO2023210185A1 - Microscope image information processing method, microscope image information processing system, and computer program - Google Patents

Microscope image information processing method, microscope image information processing system, and computer program

Info

Publication number
WO2023210185A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
information processing
microscope
processing method
Application number
PCT/JP2023/009276
Other languages
French (fr)
Japanese (ja)
Inventor
宣仁 森
泰之 木田
Original Assignee
National Institute of Advanced Industrial Science and Technology (AIST)
Application filed by National Institute of Advanced Industrial Science and Technology
Publication of WO2023210185A1

Classifications

    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00: Microscopes
    • G02B 21/36: Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting

Definitions

  • The present invention relates to a method of processing microscope image information.
  • Virtual slide technology creates high-definition, large-area digital images of glass slide specimens and the like observed under a microscope. Since virtual slides are image data, they are easier to handle than the slide glass specimens themselves. Virtual slides can be used, for example, for remote pathology diagnosis and digital storage of pathology samples.
  • Off-line stitching is a method in which multiple images of a slide glass specimen or the like are taken over a wide area, and the images are then stitched together offline to generate a single image.
  • Real-time stitching is a method of stitching multiple captured images into a single image while the glass slide specimen is being observed.
  • A slide scanner is a device that automatically scans a glass slide specimen.
  • Alessandro Gherardi and Alessandro Bevilacqua, "Real-time whole slide mosaicing for non-automated microscopes in histopathology analysis", [online], March 30, 2013, National Library of Medicine, [retrieved April 18, 2022], Internet <URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3678752/>
  • The present invention has been made in view of the above problems.
  • One aspect of the present invention is a microscope image information processing method executed by a computer system, comprising: a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area; a second step of calculating feature point information, which is information regarding the feature points of the captured image, when it is detected that the captured image has been saved in the storage area; a third step of executing, using the feature point information of the previous captured image and the feature point information of the new captured image, a matching process between the feature points of the two images; a fourth step of executing a joining process based on the result of the matching process and the captured images saved in the storage area so far, to generate a joined image; and a fifth step of displaying and outputting the joined image, the first through fifth steps being repeated until imaging of the portions of the sample is completed.
  • Another aspect of the present invention is a computer program that causes a computer system to execute the above-described microscope image information processing method.
  • FIG. 1 is a diagram illustrating an overview of the sequential joining processing in a microscope image information processing method according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an overview of the overall configuration processing in the microscope image information processing method according to the embodiment.
  • FIG. 3 is a diagram showing an example of the result of matching the feature points detected in the Nth image against those detected in the immediately preceding merged image.
  • FIGS. 4A and 4B are diagrams conceptually explaining the recalculation of the global transformation matrix using a connected undirected graph.
  • FIG. 5 is a diagram showing an example of a flowchart of the sequential joining processing.
  • FIG. 6A is a diagram showing an example of a spliced image (partial) generated by the sequential splicing processing without distortion correction.
  • FIG. 6B is a diagram showing an example of a spliced image (partial) generated by the sequential splicing processing with distortion correction.
  • FIG. 7 is a diagram showing an example of a flowchart of the processing in step S120 (processing of the second and subsequent image files).
  • FIG. 8 is a diagram showing an example of a flowchart of the overall configuration processing.
  • FIG. 9 is a diagram showing an example of a spliced image (partial) after bundle adjustment (distortion correction) in the overall configuration processing.
  • FIG. 10 is a diagram showing an example of an image generated by joining all the images.
  • FIG. 11 is a diagram showing an example in which a rectangular area is divided into multiple tiles.
  • FIGS. 12A and 12B are diagrams showing the effect of the additional matching of region-overlapping images in the sequential joining processing.
  • FIGS. 13A and 13B are diagrams showing the effect of reducing misalignment between images by Max Spanning Tree calculation and simple bundle adjustment in the sequential splicing processing.
  • FIGS. 14A and 14B are diagrams showing the effect of distortion correction in the overall configuration processing.
  • FIG. 15 is a diagram showing an example of the hardware configuration of the computer device 30.
  • FIGS. 1 and 2 are diagrams illustrating an overview of the microscope image information processing method according to the present embodiment.
  • The microscope image information processing method according to the present embodiment can be executed by a microscope image information processing system 1 that includes an existing general microscope 10, a camera 20, and a computer device 30.
  • The microscope image information processing method according to this embodiment is roughly divided into two phases: sequential joining processing and overall configuration processing.
  • The sequential joining process generates an up-to-date joined image every time an image of a part of the slide glass specimen is captured, ultimately producing a joined image of the entire specimen.
  • The overall configuration process is executed after photographing of the slide glass specimen is complete, and operates on the joined image produced by the sequential joining process.
  • The overall configuration process mainly performs various processing to improve the quality of the spliced image and saves the processed image.
  • The sequential joining phase runs with a small waiting time, while the processing that takes time or computational cost runs mainly in the overall configuration phase.
  • FIG. 1 is a diagram illustrating an overview of sequential joining processing.
  • While observing the glass slide specimen through the microscope 10, the user uses the camera software 40 to photograph each part of the specimen in turn with the camera 20.
  • Each captured image 50 of a part of the slide glass specimen is written by the camera software 40 to the hard disk drive (HDD) 32 of the computer device 30 at the time of capture (ST1).
  • The joining software 42 monitors the HDD 32 (ST2); each time it detects that an image file 50 has been written to the HDD 32, it reads the written file from the HDD 32 (ST3).
  • A joining process is then executed (ST4).
  • This cycle of reading the captured image from the HDD 32 and executing the joining process repeats every time an image is captured and saved, so the joined image 52 is updated with every capture. Each time the joined image 52 is updated, it is shown to the user as a preview on a display device such as a display.
  • Because the joined image 52 under construction is previewed continuously, the user can confirm at any time that an appropriate joined image 52 is being generated. This reduces the likelihood of re-photographing work arising after imaging is complete, as happens with virtual slide generation by conventional off-line stitching.
  • The sequential joining process can be executed as follows. Feature points are matched between the previously captured image and the newly captured image 50 stored in the HDD 32, and a transformation matrix between the two images is calculated. Based on this result, feature point matching is additionally performed against any other stored image whose movement destination area overlaps, and a transformation matrix is calculated for each such pair (a minimal matching sketch follows).
  • The user's waiting time is reduced by limiting this calculation to the images with overlapping destination areas rather than all stored captured images. Furthermore, calculating a Max Spanning Tree and performing simple bundle adjustment as necessary reduces the accumulation of misalignment between images.
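As a concrete illustration of the pairwise step, the following is a minimal sketch assuming OpenCV with ORB features; the method itself does not prescribe a particular detector, matcher, or estimator, so these choices are assumptions.

```python
# Hedged sketch of the pairwise matching step (ORB + RANSAC are assumed
# choices; the patent does not mandate them).
import cv2
import numpy as np

def match_and_estimate(prev_img, new_img):
    """Match feature points between the previous and new captured images
    and estimate the affine transformation between them."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(new_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Points in the new image (src) and their matches in the previous
    # image (dst); the estimated matrix maps src coordinates onto dst.
    src = np.float32([kp2[m.trainIdx].pt for m in matches])
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])

    # A "partial" 2D affine (rotation + translation + uniform scale)
    # corresponds to the movement/rotation matrix described above.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M, len(matches)
```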
  • FIG. 2 is a diagram illustrating an overview of the overall configuration processing.
  • After imaging is complete, the joining software 42 executes processing to improve the quality of the image (ST5) and saves the processed joined image 52 to the HDD 32 (ST6).
  • The overall configuration process may be executed as follows, for example:
  • (1) Bundle adjustment (calculation of transformation matrices that minimize the deviation across all feature point pairs, and distortion correction for the lens of the camera 20)
  • (2) Seam calculation (search for the best breaks between images)
  • (3) Exposure correction (correction of exposure time and vignetting between images)
  • (4) Blending (combining images so that the breaks between them are less noticeable)
  • (5) Writing the fully processed image to the HDD 32 of the computer device 30. Some of these steps are performed by dividing the entire image into small areas (tiles); in that case, steps (2) to (5) or (3) to (5) may be processed per tile. This reduces the memory and calculation time needed for any single pass.
  • The microscope image information processing method is similarly applicable to, for example, imaging cells cultured in a Petri dish.
  • FIG. 3 is a diagram showing an example of the result of matching a plurality of feature points detected in the Last (N) image 50' and a plurality of feature points detected in the New (N+1) image 50. Note that in FIG. 3, for convenience of explanation, only some of the matching results are shown by broken lines.
  • Based on the matching result, a transformation matrix R_{N,N+1} between the Last (N) image 50' and the New (N+1) image 50 is calculated.
  • The transformation matrix R_{N,N+1} is a matrix (an affine transformation matrix) indicating the movement distance and rotation amount between the Last (N) image 50' and the New (N+1) image 50. That is, the following relation holds.
  • Writing R_{N,N+1} as a 3x3 homogeneous affine matrix

        R_{N,N+1} = | a  b  c |
                    | d  e  f |
                    | 0  0  1 |

    the rotation components satisfy a = cos θ, b = sin θ, d = -sin θ, e = cos θ (θ: the rotation amount), and the remaining components c and f represent the translation.
  • the "global transformation matrix” is information representing how much the New (N+1) image 50 has moved and rotated from a specific image with respect to the specific image.
  • If the global transformation matrix of the New (N+1) image 50, taking the first captured image as the reference, is written R_{N+1}, the following relationship holds.
  • The global transformation matrix R_{N+1} of the New (N+1) image 50 can be calculated as the product R_{1,2} R_{2,3} ... R_{N,N+1} of the pairwise transformation matrices between each immediately preceding captured image and the next, computed in each joining step before the Nth sequential joining (a short composition sketch follows).
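A minimal sketch of this composition, assuming each pairwise matrix is the 2x3 affine returned by OpenCV-style estimators:

```python
# Composing pairwise matrices R_{1,2}, R_{2,3}, ... into the global
# transformation matrix R_{N+1}, per the product described above.
import numpy as np

def to_homogeneous(m2x3):
    """Lift a 2x3 affine matrix to 3x3 homogeneous form."""
    return np.vstack([m2x3, [0.0, 0.0, 1.0]])

def global_matrix(pair_matrices):
    """pair_matrices: [R_{1,2}, R_{2,3}, ..., R_{N,N+1}] in order."""
    g = np.eye(3)
    for m in pair_matrices:
        g = g @ to_homogeneous(m)
    return g
```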
  • Using this global transformation matrix, joining processing is performed on the image data of the specific reference image (here, the first captured image) and the image data of the New (N+1) image 50, generating a spliced image 52.
  • From the global transformation matrix, the approximate destination (hereinafter, the "movement destination area") of the New (N+1) image 50 relative to the reference image (the first image) can be found. By comparing the movement destination area of the New (N+1) image with the movement destination areas of each image calculated in the past, past merged images whose destination areas overlap with that of the New (N+1) image 50 are searched for, in addition to the Last (N) image 50'.
  • "The movement destination areas overlap" may be defined, for example, as the overlapping area between them exceeding 0, or a predetermined threshold value may be used.
  • That is, the destination areas may be judged to overlap when their overlapping area exceeds the threshold. If the K-th (0 < K < N) joined image 50' is determined to overlap the New (N+1) image 50 in its movement destination area, feature point matching is performed between the New (N+1) image 50 and the K-th joined image 50', and a transformation matrix R_{K,N+1} between them is calculated (a bounding-box sketch follows).
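The overlap test can be sketched as follows, under the assumption that a movement destination area is approximated by the axis-aligned bounding box of the transformed image corners (the patent leaves the exact representation open):

```python
# Movement destination area and overlap test (axis-aligned boxes assumed).
import numpy as np

def destination_area(global_h, w, h):
    """Bounding box (x0, y0, x1, y1) of an image of size w x h after
    applying its 3x3 global transformation matrix."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]],
                       dtype=np.float64).T
    moved = (global_h @ corners)[:2]  # affine: last row is (0, 0, 1)
    return moved[0].min(), moved[1].min(), moved[0].max(), moved[1].max()

def overlap_area(a, b):
    """Intersection area of two boxes; 0 if they are disjoint."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

# Images whose overlap_area(...) exceeds 0 (or a chosen threshold)
# become additional matching candidates for the new image.
```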
  • In this way, additional matching processing for region-overlapping images is performed. Based on this information, a connected undirected graph with each image as a vertex is constructed.
  • FIGS. 4A and 4B are diagrams conceptually explaining the recalculation process of the global transformation matrix using the graph.
  • As shown in FIG. 4A, before the additional matching, the path traced the images in order: the first image, the second image, the third image, ..., the K-th image, ..., the N-th image, and the (N+1)-th image 50.
  • Through the additional matching, the K-th image and the (N+1)-th image become linked, for example as shown in FIG. 4B.
  • That is, a transformation matrix R_{K,N+1} between the K-th image and the (N+1)-th image is calculated based on the result of matching the feature points of the K-th image against those of the (N+1)-th image.
  • Then, the optimal path for tracing all the images can be determined, for example, by calculating a Max Spanning Tree.
  • A Max Spanning Tree is the spanning tree with the maximum sum of edge weights in a weighted connected undirected graph; it is known to be computable with algorithms such as Kruskal's method. In the present invention, the number of feature points matched between images can be used as the edge weight, although another index may be used.
  • After calculating the Max Spanning Tree, the image at the center of the tree is used as the new reference. Rather than the path that sequentially follows from the first image to the (N+1)-th image as in FIG. 4A, using an image near the center of the tree as the reference (for example, the third image in FIG. 4B) shortens the overall paths, so fewer matrices are multiplied and errors are reduced; the third image can therefore be determined as the new reference image (a graph-based sketch follows).
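A sketch of this re-rooting step, assuming the networkx library; the edge weight is the number of matched feature points, as described above:

```python
# Maximum spanning tree over the image graph, then re-root at its center.
import networkx as nx

def recenter(num_images, pair_match_counts):
    """pair_match_counts: dict {(i, j): number of matched feature points}."""
    g = nx.Graph()
    g.add_nodes_from(range(num_images))
    for (i, j), count in pair_match_counts.items():
        g.add_edge(i, j, weight=count)
    tree = nx.maximum_spanning_tree(g)  # Kruskal's algorithm by default
    new_reference = nx.center(tree)[0]  # vertex with minimum eccentricity
    return tree, new_reference
```

The global matrices can then be recomputed by walking the tree outward from the new reference and multiplying the image pair matrices along the edges.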
  • The overall configuration process is executed after imaging of the slide glass specimen is complete, and improves the quality of the joined image. More specifically, the following processes are executed in the overall configuration process in this embodiment.
  • (1) Bundle adjustment (calculation of transformation matrices that minimize the deviation across all feature point pairs, and distortion correction for the lens of the camera 20); (2) seam calculation (search for the best breaks between images); (3) exposure correction (correction of exposure time and vignetting between images); (4) blending (combining images so that the breaks between them are less noticeable); and (5) writing the fully processed image to the HDD 32 of the computer device 30.
  • Some of these steps are performed by dividing the entire image into small areas (tiles); in that case, steps (2) to (5) or (3) to (5) may be processed per tile. This reduces the memory and calculation time needed for any single pass.
  • (Bundle adjustment) As described above, feature point matching is performed during the sequential joining process.
  • The total error can therefore be obtained by computing the amount of error between matched feature points (the reprojection error) over all feature points. Minimizing this total error with the least squares method turns the sequentially joined images into an image of better overall quality.
  • For the minimization, the Levenberg-Marquardt method, one of the methods for solving nonlinear least squares problems, can be used.
  • Equation (1), the standard Levenberg-Marquardt update, is

        x_{i+1} = x_i - (J_i^T J_i + λI)^{-1} J_i^T r_i   ...(1)

    where x_i denotes the components of the transformation matrices of the images at processing time i; J_i is the Jacobian at processing time i (the superscript T denotes transposition); r_i is the error between matched feature points at processing time i; λ is a real number greater than or equal to zero, adjusted according to the size of the error; and I is the identity matrix.
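The iteration of equation (1) can be sketched directly; `residual` and `jacobian` are user-supplied placeholder functions here. In practice an off-the-shelf solver such as scipy.optimize.least_squares implements the same class of method.

```python
# One Levenberg-Marquardt update, following equation (1).
import numpy as np

def lm_step(x, residual, jacobian, lam):
    """x' = x - (J^T J + lam * I)^(-1) J^T r, with lam >= 0."""
    r = residual(x)                      # errors between matched points
    J = jacobian(x)                      # Jacobian of r with respect to x
    A = J.T @ J + lam * np.eye(x.size)
    return x - np.linalg.solve(A, J.T @ r)
```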
  • FIG. 5 is a diagram showing an example of a flowchart of sequential joining processing.
  • First, input from the user of the microscope image information processing system 1 designating a folder on the HDD 32 to be monitored is received via an application such as the camera software 40 or the joining software 42 running on the computer device 30 (step S102).
  • Next, the joining software 42 checks the designated folder for updates at regular intervals (step S104). Unless the joining software 42 receives input indicating the end of the sequential joining process (step S106: No), the sequential joining process continues and the flow proceeds to step S108.
  • In step S108, if the joining software 42 determines that the image files in the folder checked in step S104 have not been updated (step S108: No), the process returns to step S104. If it determines that an image file has been updated, that is, a new image of the slide glass specimen has been captured and saved in the folder (step S108: Yes), it determines whether this is the first save of a captured image of the specimen (step S110). If it is not the first save (step S110: No), the processing for the second and subsequent saved image files is executed (step S120); this is described in detail later (a minimal polling sketch of steps S104 to S108 follows).
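Steps S104 to S108 amount to a polling loop over the watched folder; the sketch below is an assumption about one way to implement it (the interval and file handling are illustrative, and a file-event library such as watchdog could equally be used):

```python
# Polling sketch of steps S104-S108: detect newly saved image files.
import os
import time

def watch_folder(folder, handle_new_image, interval_sec=1.0,
                 should_stop=lambda: False):
    seen = set(os.listdir(folder))
    while not should_stop():                  # step S106
        time.sleep(interval_sec)              # step S104: periodic check
        current = set(os.listdir(folder))
        for name in sorted(current - seen):   # step S108: any new files?
            handle_new_image(os.path.join(folder, name))
        seen = current
```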
  • If the update is the first saving of an image file (step S110: Yes), the saved image file is read from the HDD 32, the feature points and feature amounts in the image are calculated, and the results are stored in an array holding the feature point and feature amount information of each image (step S112).
  • If the distortion parameters, vignetting parameters, and the like are known in advance, correction may be performed when the image file is read. Distortion can arise in the captured image because of the lens of the camera 20's optical system. It is generally known that the transformation from the distortion-free true coordinates (x, y, z) to the distorted post-capture coordinates (u, v) can be calculated by a standard lens distortion model (see, for example, Zhengyou Zhang).
  • Here, k_n, p_n, c_x, c_y, f_x, and f_y are distortion parameters, and their values are estimated before distortion correction is performed. More specifically, a grid pattern of equally sized squares is photographed with the same camera 20 used to photograph the slide glass specimen, and the values of the parameters k_n, p_n, c_x, c_y, f_x, and f_y are estimated from the distortion of that photographed image.
  • The inverse transformation can be performed with an algorithm that computes an approximate solution, such as Newton's method. Only some of the parameters may be considered (for example, only the values of k_1 and p_1, treating the other parameters as 0). Other distortion models may also be used (a correction sketch follows).
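A minimal correction sketch assuming OpenCV's Brown distortion model, which uses the same parameters k_n, p_n, c_x, c_y, f_x, f_y named above; the parameter values are what calibration against the grid pattern would produce (for example via cv2.calibrateCamera):

```python
# Undistorting a captured image with OpenCV (Brown model assumed).
import cv2
import numpy as np

def undistort(img, fx, fy, cx, cy, k1, k2, p1, p2, k3=0.0):
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    dist = np.array([k1, k2, p1, p2, k3])  # OpenCV coefficient order
    return cv2.undistort(img, K, dist)
```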
  • FIGS. 6A and 6B are diagrams showing an example of a joined image (partially) generated by the inventors of the present application using the microscope image information processing method according to the present embodiment.
  • FIG. 6A shows a merged image obtained by sequentially joining the captured images without distortion correction, and FIG. 6B shows a merged image obtained by sequentially joining the captured images after distortion correction.
  • The error for the stitched image in FIG. 6A was about 3.32e+05, while the error for the stitched image in FIG. 6B was about 1.57e+04: applying distortion correction to each captured image before sequential joining reduced the error significantly.
  • Here, the error was calculated by the following procedure: (1) the coordinates of the feature points of each image are transferred into the reference coordinate system (the coordinate system of the reference image) using the global transformation matrices; (2) the deviation between matched feature points is computed in that reference coordinate system. Ideally, matched feature points transferred into the reference coordinate system would coincide exactly; because of distortion and other effects they do not, and this deviation is the error (a sketch follows).
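A sketch of this error measure, assuming 3x3 global matrices and matched point pairs as inputs:

```python
# Total reprojection error: transfer matched points into the reference
# coordinate system and accumulate the squared deviations.
import numpy as np

def total_error(matched_pairs):
    """matched_pairs: iterable of (H_a, pt_a, H_b, pt_b), where H_* are
    3x3 global matrices and pt_* are matched (x, y) feature points."""
    err = 0.0
    for H_a, pt_a, H_b, pt_b in matched_pairs:
        a = H_a @ np.array([pt_a[0], pt_a[1], 1.0])
        b = H_b @ np.array([pt_b[0], pt_b[1], 1.0])
        err += np.sum((a[:2] / a[2] - b[:2] / b[2]) ** 2)
    return err
```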
  • Next, the image file is reduced at a predetermined ratio and stored in an array for image data (step S114).
  • Retaining the reduced image in the RAM 33, the HDD 32, or the like keeps memory and storage consumption lower than retaining the original-size image.
  • Alternatively, the full-size image data may be stored in the array without reduction, and the spliced image may thereafter be generated and handled at full size rather than reduced size.
  • Next, the global transformation matrix of the first captured image reduced in step S114 (here, the identity matrix) is stored in an array for global transformation matrices (step S116).
  • Next, a preview image is synthesized from the reduced image data saved in step S114 and the global transformation matrix (identity matrix) saved in step S116, and displayed. A preview of the reduced first captured image may simply be shown (step S118).
  • The above process repeats until the joining software 42 receives, in step S106, input from the user via an input device such as a keyboard or mouse indicating the end of the sequential joining process, for example because virtual slide generation is complete (step S106: Yes). Upon receiving that input, the sequential joining process ends and processing moves to the overall configuration process. (Processing flow: processing of step S120)
  • FIG. 7 is a diagram illustrating an example of a flowchart of the process in step S120 (processing for the image file from the second time onward).
  • First, image n, a partial image of the slide glass specimen stored in the HDD 32 as the n-th captured image (n: an integer of 2 or more), is read, and its feature points and feature amounts are calculated (step S1202).
  • As in step S112, if the distortion parameters are known in advance, correction may be performed when the image file is read.
  • Next, feature point matching is performed between image n and image n-1, the image captured the (n-1)-th time, and a transformation matrix between the two images (hereinafter, the "image pair matrix") is calculated (step S1204).
  • If the process in step S1204 succeeds (step S1206: Yes), the destination area of image n is calculated, and based on this result one or more images whose destination areas overlap (hereinafter also "paired images"; image K in the example of FIG. 4B corresponds to this) are selected (step S1208).
  • Next, feature point matching is performed between image n and all the paired images selected in step S1208, and each image pair matrix (the transformation matrix R_{K,N+1} in the example of FIG. 4B) is calculated (step S1210).
  • Feature point matching as executed in step S1210 is computationally expensive, and matching against all saved images would increase the user's waiting time. Restricting the matching to the images with overlapping destination areas selected in step S1208 therefore reduces the waiting time.
  • Next, the feature points and feature amounts of image n are stored in an array (step S1212).
  • Next, image n is reduced at a predetermined ratio, and the reduced image data is stored in an array (step S1214).
  • Next, the image pair matrices obtained in step S1210 are stored in an array (step S1216).
  • Next, a global transformation matrix is calculated from the image pair matrices stored in step S1216 and stored in an array (step S1218).
  • The global transformation matrix of image n can be calculated as the product of the image pair matrix between image n and its paired image (the image selected in step S1208; image K in the example of FIG. 4B) and the global transformation matrix of that paired image.
  • At the timing of step S1218, the global transformation matrices of all saved images may be recalculated as necessary.
  • "As necessary" may mean, for example, "every predetermined number of times", "when input is received from the user", "when the reprojection error across all matched feature points reaches a certain value", and the like.
  • For the recalculation, a Max Spanning Tree may be computed with the number of feature matches as edge weights. The image at the center of the tree (the third image in the example of FIG. 4B) is set as the new reference image (its global transformation matrix becomes the identity matrix), and the global matrix of each image is obtained by multiplying the image pair matrices in order along the edges of the tree.
  • In addition, bundle adjustment may be run for a small number of iterations (for example, two) to calculate the global transformation matrices. Bundle adjustment can be performed by minimizing the reprojection error over all matched feature points, for example by the Levenberg-Marquardt method.
  • Note that the calculated image pair matrices may be updated based on the results of recalculating the global transformation matrices of all images (for example, if the global transformation matrices of image A and image B are Ma and Mb respectively, the image pair matrix between image A and image B can be calculated as the product of Ma and the inverse of Mb). This lets the next recalculation start from a state with fewer errors (sketched below).
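The refresh described in the parenthesis is a one-liner in NumPy (3x3 homogeneous matrices assumed):

```python
# Image pair matrix between images A and B from their global matrices.
import numpy as np

def refreshed_pair_matrix(Ma, Mb):
    return Ma @ np.linalg.inv(Mb)
```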
  • Next, a preview image is generated using the reduced image data saved in step S1214 and displayed on a display device such as a display (step S1220).
  • Since the preview images up to image n-1 have already been combined, the preview image may be generated by joining the reduced image n onto the preview for image n-1. If the global transformation matrices of all saved images were recalculated in step S1218, the previews generated so far may be discarded and regenerated. After this, the process transitions to step S104 in FIG. 5.
  • On the other hand, if the process in step S1204 does not succeed (step S1206: No), feature point matching is performed between image n and all saved images, and image pair matrices are calculated (step S1222). If at least one image pair (a pair with one or more images whose destination areas overlap) exists as a result (step S1224: Yes), the process transitions to step S1212; if no such pair exists (step S1224: No), the process transitions to step S104 in FIG. 5.
  • Failure in step S1204 (step S1206: No), that is, failure of feature point matching between image n and image n-1, can occur, for example, when photographing of the slide glass specimen resumes from a field of view significantly different from the previous one (the entire area of the specimen is not necessarily photographed in a single continuous stroke). The process in step S1222 may also be terminated once at least one image pair matrix has been calculated; in that case, processing moves on to step S1208.
  • The arrays used in the flows of FIGS. 5 to 7 (the array of image pair matrices, the array of reduced images, the array of global transformation matrices, and the array of feature points and feature amounts of each image) are carried over to the overall configuration process described later. Furthermore, by saving all or part of this data as files in a storage area such as the HDD 32 of the computer device 30, the overall configuration process can be suspended and resumed at a later date, or executed on another computer device.
  • FIG. 8 is a diagram illustrating an example of a flowchart of the overall configuration process.
  • First, bundle adjustment is performed on all images that have undergone the sequential splicing process.
  • The bundle adjustment may be performed by the Levenberg-Marquardt method (equation (1) above) or the like (step S302).
  • If the distortion parameters of the lens of the camera 20 are not known in advance, distortion parameters common to all images are included, in addition to the global transformation matrices, among the parameters adjusted by the bundle adjustment, and the feature point coordinates are distortion-corrected accordingly.
  • That is, the parameter x in equation (1) above contains the transformation matrix components of all images plus the common distortion parameters.
  • In other words, x takes the form obtained by appending the distortion parameters of equation (2); these appended values are the distortion parameters shared by all images.
  • Next, distortion correction is performed on the reduced images (the reduced images stored in the arrays in steps S114 and S1214) (step S304).
  • If the distortion parameters were known in advance of the sequential splicing process and distortion correction was already performed on each captured image in step S112 of FIG. 5 and step S1202 of FIG. 7, the correction in this step is omitted.
  • FIG. 9 is an example of a spliced image obtained by including common distortion parameters in the parameter x of equation (1), performing distortion correction on the same captured images as in FIGS. 6A and 6B, and then splicing them.
  • The error for the stitched image in FIG. 6A was about 3.32e+05 and that for FIG. 6B about 1.57e+04, whereas the error for the spliced image in FIG. 9 was approximately 1.55e+03. The error is thus reduced even further than in FIG. 6B, where the captured images were distortion-corrected in advance and spliced sequentially.
  • Next, seams between the photographed images are calculated using the reduced images (step S306).
  • This allows the images to be joined at appropriate breaks.
  • As the seam calculation method, existing methods such as those based on Voronoi diagrams, dynamic programming, or graph cuts can be used. Seam calculation takes time, so in this embodiment it is performed in the overall configuration processing rather than in the sequential joining processing, where it would lengthen the user's waiting time. Seam calculation on full-size images is also slow; using reduced images, as described above, contributes to shortening the calculation time.
  • FIG. 10 is a diagram showing an example of an image generated by joining all the images; it is drawn in simplified form for convenience of explanation. Assuming six images (images 1 to 6), the image generated by joining them is image 60, which contains all of images 1 to 6. In this step, the size (width w, height h) of image 60 is calculated as follows: (1) using the global transformation matrices and distortion correction parameters obtained in step S302, calculate the movement destination area (in the reference coordinate system) of each of images 1 to 6; (2) calculate the rectangular area that encloses all of the destination areas obtained in (1) (see the sketch below).
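Steps (1) and (2) reduce to taking the bounding box of all destination areas; a minimal sketch, reusing destination_area from the earlier sketch:

```python
# Canvas size: the rectangle enclosing every image's destination area.
def canvas_size(dest_boxes):
    """dest_boxes: iterable of (x0, y0, x1, y1) destination areas."""
    x0 = min(b[0] for b in dest_boxes)
    y0 = min(b[1] for b in dest_boxes)
    x1 = max(b[2] for b in dest_boxes)
    y1 = max(b[3] for b in dest_boxes)
    return x1 - x0, y1 - y0  # width w, height h
```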
  • FIG. 11 is a diagram showing an example in which the rectangular area 60 is divided into a plurality of tiles 65 and 66 (two here, for simplicity of explanation). The images that intersect tiles 65 and 66 are defined as their tile image sets.
  • In this example, the tile image set of tile 65 is images 1, 2, 3, 4, and 5, and that of tile 66 is images 4, 5, and 6. When tile 65 is joined, an image larger than tile 65 is generated, and the protruding portion is cut off. The size and shape of the tiles can be determined as follows.
  • The memory required to process one tile can be estimated as "the number of images in the tile image set x the size of one full-size image", so the tile size may be determined, for example, so that this required memory does not exceed 10% of the memory available to the software.
  • The shape of the tiles may be a specific predetermined shape (such as a square) or a shape similar to each of images 1 to 6 (such as a rectangle with an aspect ratio of 3:4).
  • In steps S310 to S318, full-size images are used for joining. Processing all the full-size images at once would increase memory usage and computation time, so the entire image is divided into tiles and steps S310 to S318 are executed for each tile.
  • First, in step S310, the images whose movement destination areas intersect the tile being processed are identified.
  • In the example of FIG. 11, the tile image set of tile 65 is images 1, 2, 3, 4, and 5, and the tile image set of tile 66 is images 4, 5, and 6 (an intersection-test sketch follows).
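Identifying a tile image set is a rectangle intersection test; a self-contained sketch:

```python
# Tile image set: images whose destination area intersects the tile.
def tile_image_set(tile_box, image_boxes):
    """tile_box and the values of image_boxes are (x0, y0, x1, y1);
    returns the ids of intersecting images (cf. step S310)."""
    tx0, ty0, tx1, ty1 = tile_box
    return [i for i, (x0, y0, x1, y1) in image_boxes.items()
            if x0 < tx1 and x1 > tx0 and y0 < ty1 and y1 > ty0]
```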
  • Next, each image file in the tile image set identified in step S310 is read from the HDD 32 (step S312).
  • For the distortion parameters, either values known in advance or values estimated by including common distortion parameters in the parameter x of equation (1) above may be used.
  • Next, exposure compensation is performed (step S314). Exposure compensation methods are described, for example, in M. Uyttendaele, A. Eden and R. Szeliski (2001), "Eliminating ghosting and exposure artifacts in image mosaics," Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001) (https://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Uyttendaele01.pdf).
  • Next, a blending process may be performed at the time of joining (step S316).
  • Various known techniques such as linear blending and multiband blending may be used for the blending; see, for example, Richard Szeliski (2007), "Image Alignment and Stitching: A Tutorial", Foundations and Trends(R) in Computer Graphics and Vision, Vol. 2, No. 1, pp. 1-104 (https://www.nowpublishers.com/article/Details/CGV-009).
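As one concrete possibility, OpenCV's stitching "detail" module provides both variants; the sketch below uses multiband blending (the class names follow recent opencv-python builds, and the canvas size and warped tile inputs are hypothetical upstream values):

```python
# Multiband blending with OpenCV; feather (linear) blending would use
# cv2.detail_FeatherBlender instead.
import cv2
import numpy as np

def blend_tile(warped_images, canvas_w, canvas_h):
    """warped_images: list of (image, mask, (x, y)) already warped into
    the tile's coordinate system (hypothetical upstream step)."""
    blender = cv2.detail_MultiBandBlender()
    blender.prepare((0, 0, canvas_w, canvas_h))             # destination ROI
    for img, mask, top_left in warped_images:
        blender.feed(img.astype(np.int16), mask, top_left)  # CV_16S input
    result, result_mask = blender.blend(None, None)
    return result
```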
  • The image (tile) synthesized in step S316 is saved as a temporary file in a storage area such as the HDD 32 of the computer device 30 (step S318).
  • The above processing of steps S310 to S318 is executed for all tiles.
  • Finally, the temporary files saved in step S318 for each tile are read from the storage area such as the HDD 32 of the computer device 30 and written sequentially into the final file (step S320).
  • Through step S320, the final virtual slide image, having undergone the overall composition processing, is stored in a storage area such as the HDD 32 of the computer device 30.
  • FIGS. 12A and 12B are diagrams illustrating the effect of additional matching processing of region overlapping images in sequential joining processing.
  • In FIG. 12A, the horizontal axis represents the registration order of the images and the vertical axis represents the number of images for which matching was attempted.
  • In FIG. 12B, the horizontal axis represents the registration order of the images and the vertical axis represents the time required for matching.
  • The first method attempts matching against all images joined up to that point whenever a new image (the New (N+1) image) is detected.
  • The second method performs the additional matching processing for region-overlapping images described above.
  • The third method attempts matching only against the immediately preceding spliced image (the Last (N) image) whenever a new image is detected.
  • With the first method, the number of images for which matching is attempted grows linearly as more images are registered; the time required for matching therefore also grows, and the user's waiting time gradually increases.
  • With the second method, the number of images for which matching is attempted fluctuates but remains almost constant regardless of the registration order, so the time required for matching is also almost constant.
  • With the third method, the number of images for which matching is attempted is always 1, so the time required for matching is likewise constant.
  • FIGS. 13A and 13B are diagrams illustrating the effect of reducing misalignment between images by calculating Max Spanning Tree and simple bundle adjustment in sequential splicing processing.
  • FIG. 13A shows a graph for pancreatic cancer HE-stained slides (320 images taken)
  • FIG. 13B shows a graph for liver HE-stained slides (59 images taken).
  • Here, RMS (root mean square) indicates the deviation per pair of matched feature points, in pixel units.
  • FIGS. 14A and 14B are diagrams illustrating the effect of distortion correction in overall configuration processing. These particularly relate to the process of step S304 described above.
  • FIG. 14A shows a graph for pancreatic cancer HE-stained slides (320 images taken)
  • FIG. 14B shows a graph for liver HE-stained slides (59 images taken).
  • RMS is a numerical value indicating the deviation per pair of feature points in pixel units. It can be seen that when distortion correction is performed, the deviation is reduced to 0.5 pixel or less. This is a level of deviation that is invisible to the naked eye, indicating that the effect of distortion correction is extremely large.
  • As described above, the microscope image information processing system 1 that executes the microscope image information processing method according to the present embodiment may include a microscope 10, a camera 20, and a computer device 30.
  • Existing general-purpose equipment can be used for the microscope 10, the camera 20, and the computer device 30.
  • FIG. 15 is a diagram showing an example of the hardware configuration of the computer device 30.
  • The computer device 30 includes a processor 31, the HDD 32, a RAM (Random Access Memory) 33, a ROM (Read Only Memory) 34, removable memory 35 such as a CD, DVD, USB memory, memory stick, or SD card, and input/output user interfaces (a keyboard, etc.).
  • The computer device 30 reads the computer programs stored in the HDD 32 (the camera software 40, the joining software 42, and various others) and the data to be processed into memory such as the RAM 33 and executes them, thereby realizing each process of the microscope image information processing method according to the present embodiment described above.
  • Although the computer device 30 is illustrated as a single device in FIGS. 1 and 2, it may be configured as two or more computer devices.
  • For example, a first computer device may store the images of each part of the slide glass specimen in its HDD 32, and a second computer device, separate from the first, may execute the sequential joining process and the subsequent processing using the captured images stored on the first computer device.
  • In that case, each photographed image temporarily stored in the HDD 32 of the first computer device may be automatically transmitted to the second computer device by wired or wireless communication, with the transmission triggering the subsequent sequential joining and overall configuration processing.
  • Alternatively, the first computer device may execute the sequential joining process and transmit the completed sequentially joined images, together with the data necessary for the overall configuration processing, to the second computer device by wired or wireless communication.
  • The second computer device may then execute the overall configuration process using the transmitted joined images and data.
  • In the above description, the photographed images and joined images of each part of the slide glass specimen are stored in the HDD 32, but they may be stored on other recording media.
  • As described above, in the microscope image information processing method according to this embodiment, the joining process runs sequentially while the images of each part of the slide glass specimen are photographed, and the in-progress joined images are displayed to the user as previews.
  • The user can therefore proceed with photographing the specimen while checking in the preview, each time, whether the joining process has executed properly. This avoids the re-imaging of the specimen caused by joining that ultimately fails, as in conventional off-line stitching.
  • Further, the method requires neither an automated motorized stage nor a dedicated camera, and can be implemented with an existing imaging system (microscope, camera, and computer device). The installation cost is therefore low, and virtual slides can be generated from ordinary photographs taken with a microscope in daily use. Moreover, while slide scanners, devices that automatically scan slide glass specimens, exist as a conventional means of creating virtual slides, such devices are very expensive; the microscope image information processing method according to this embodiment requires no such dedicated device.
  • Because the sample images are photographed one at a time, the microscope image information processing method according to this embodiment is also applicable to fluorescent samples.
  • In addition, the preview image is generated as a reduced image, while the original-size spliced image is generated in the overall configuration processing; this reduces memory usage and contributes to faster processing.
  • The scope of the present invention is not limited to the exemplary embodiments shown and described, but includes all embodiments that provide effects equivalent to those aimed at by the present invention. Furthermore, the scope of the invention is not limited to the combinations of inventive features delineated by the claims, but may be defined by any desired combination of the disclosed features.
  • (1) A microscope image information processing method executed by a computer system, comprising: a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area; a second step of calculating feature point information, which is information regarding the feature points of the captured image, when it is detected that the captured image has been saved in the storage area; a third step of executing, using the feature point information of the previous captured image and the feature point information of the new captured image, a matching process between the feature points of the two images; a fourth step of executing a joining process based on the result of the matching process and the plurality of captured images saved in the storage area so far, to generate a joined image; and a fifth step of displaying and outputting the joined image, wherein the first step through the fifth step are repeated until imaging of the portions of the sample is completed.
  • (2) The microscope image information processing method according to (1) above, wherein a global transformation matrix of the captured image saved in the first step is calculated with reference to a specific image among the captured images saved in the storage area.
  • (3) The microscope image information processing method according to (1) or (2) above.
  • The microscope image information processing method according to any one of (1) to (4) above, wherein bundle adjustment is performed on the spliced image generated by repeating the first step through the fifth step until imaging of the portions of the sample is completed.
  • Here, equation (1) is

        x_{i+1} = x_i - (J_i^T J_i + λI)^{-1} J_i^T r_i   ...(1)

    where x_i denotes the components of the image transformation matrices and the distortion correction parameters at processing time i; J_i is the Jacobian at processing time i (the superscript T denotes transposition); r_i is the error between matched feature points at processing time i; λ is a real number greater than or equal to zero, adjusted according to the size of the error; and I is the identity matrix.
  • The microscope image information processing method according to any one of (1) to (8) above, wherein the computer system includes a first computer device and a second computer device separate from the first computer device, the first step is executed on the first computer device, and the second step through the fifth step are executed on the second computer device.
  • A microscope image information processing system that executes: a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area; a second step of calculating feature point information, which is information regarding the feature points of the captured image, when it is detected that the captured image has been saved in the storage area; a third step of executing, using the feature point information of the previous captured image and the feature point information of the new captured image, a matching process between the feature points of the two images; a fourth step of executing a joining process based on the result of the matching process and the captured images saved in the storage area so far, to generate a joined image; and a fifth step of displaying and outputting the joined image; the system repeating the first step through the fifth step until imaging of the portions of the sample is completed.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Microscopes, Condensers (AREA)
  • Image Processing (AREA)

Abstract

Provided is a microscope image information processing method that avoids re-imaging of a sample and can be implemented on an existing computer system. In the method, a captured image (50) of a portion of a sample observed using a microscope is acquired and stored in a storage region (32) (ST1). When it is detected that the captured image (50) has been stored in the storage region (32), feature point information, which is information relating to feature points in the captured image (50), is calculated. Using the feature point information of the previous captured image and that of the new captured image (50) (ST2), matching processing between the feature points of the two images is executed. Joining processing is then executed on the basis of the matching result and the plurality of captured images (50) stored in the storage region (32) up to the present time, generating a joined image (52) (ST4), which is output for display. The above processing is repeated until imaging of the portions of the sample is finished.

Description

Microscope image information processing method, microscope image information processing system, and computer program
The present invention relates to a method of processing microscope image information.
Conventionally, a technology called virtual slide exists for creating high-definition, large-area digital images of glass slide specimens and the like observed under a microscope. Since virtual slides are image data, they are easier to handle than the slide glass specimens themselves. Virtual slides can be used, for example, for remote pathology diagnosis and digital storage of pathology samples.
Several methods exist for generating virtual slides, for example off-line stitching and real-time stitching (see, e.g., Non-Patent Document 1). Off-line stitching is a method in which multiple images of a slide glass specimen or the like are taken over a wide area, and the images are then stitched together offline to generate a single image. Real-time stitching is a method of stitching multiple captured images into a single image while the glass slide specimen is being observed. As another way of creating virtual slides, there are also devices called slide scanners that automatically scan a glass slide specimen.
However, with off-line stitching it is not possible to preview and check the image being stitched during the photographing operation, so when the stitching is finally executed, re-photographing work often arises because the stitching fails, some images are missing, and so on. Real-time stitching requires dedicated software and a dedicated camera compatible with that software; a general digital camera cannot be used, so it is difficult to change the photographing conditions flexibly during imaging. And the automated dedicated devices called slide scanners are generally very expensive, so the environments into which they can be introduced are limited.
The present invention has been made in view of the above problems.
To solve the above problems, one aspect of the present invention is a microscope image information processing method executed by a computer system, comprising: a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area; a second step of calculating feature point information, which is information regarding the feature points of the captured image, when it is detected that the captured image has been saved in the storage area; a third step of executing, using the feature point information of the previous captured image and the feature point information of the new captured image, a matching process between the feature points of the two images; a fourth step of executing a joining process based on the result of the matching process and the plurality of captured images saved in the storage area so far, to generate a joined image; and a fifth step of displaying and outputting the joined image, wherein the first step through the fifth step are repeated until imaging of the portions of the sample is completed.
Another aspect of the present invention is a microscope image information processing system that executes the above first through fifth steps, repeating them until imaging of the portions of the sample is completed.
Another aspect of the present invention is a computer program that causes a computer system to execute the above-described microscope image information processing method.
FIG. 1 is a diagram illustrating an overview of the sequential joining processing in a microscope image information processing method according to an embodiment of the present invention. FIG. 2 is a diagram illustrating an overview of the overall configuration processing in the microscope image information processing method according to the embodiment. FIG. 3 is a diagram showing an example of the result of matching the feature points detected in the Nth image against those detected in the immediately preceding merged image. FIGS. 4A and 4B are diagrams conceptually explaining the recalculation of the global transformation matrix using a connected undirected graph. FIG. 5 is a diagram showing an example of a flowchart of the sequential joining processing. FIG. 6A is a diagram showing an example of a spliced image (partial) generated by the sequential splicing processing without distortion correction. FIG. 6B is a diagram showing an example of a spliced image (partial) generated by the sequential splicing processing with distortion correction. FIG. 7 is a diagram showing an example of a flowchart of the processing in step S120 (processing of the second and subsequent image files). FIG. 8 is a diagram showing an example of a flowchart of the overall configuration processing. FIG. 9 is a diagram showing an example of a spliced image (partial) after bundle adjustment (distortion correction) in the overall configuration processing. FIG. 10 is a diagram showing an example of an image generated by joining all the images. FIG. 11 is a diagram showing an example in which a rectangular area is divided into multiple tiles. FIGS. 12A and 12B are diagrams showing the effect of the additional matching of region-overlapping images in the sequential joining processing. FIGS. 13A and 13B are diagrams showing the effect of reducing misalignment between images by Max Spanning Tree calculation and simple bundle adjustment in the sequential splicing processing. FIGS. 14A and 14B are diagrams showing the effect of distortion correction in the overall configuration processing. FIG. 15 is a diagram showing an example of the hardware configuration of the computer device 30.
Embodiments of the present invention will be described in detail below with reference to the drawings.
(Overview of the microscope image information processing method)
FIGS. 1 and 2 are diagrams illustrating an overview of the microscope image information processing method according to the present embodiment. The method can be executed by a microscope image information processing system 1 that includes an existing general-purpose microscope 10, a camera 20, and a computer device 30. The method is roughly divided into two phases: a sequential joining process and an overall configuration process. The sequential joining process generates the latest joined image every time an image of a part of the slide glass specimen is captured, and finally generates a joined image of the entire slide glass specimen. The overall configuration process is executed after the photographing of the slide glass specimen is completed and operates on the joined image produced by the sequential joining process; it mainly performs various kinds of processing to improve the quality of the joined image and saves the processed image. The sequential joining process mainly executes processing with a short waiting time, whereas the overall configuration process mainly executes processing that is costly in time or computation.
FIG. 1 is a diagram illustrating an overview of the sequential joining process. While observing the slide glass specimen through the microscope 10, the user uses the camera software 40 to sequentially photograph each part of the specimen with the camera 20. Each time a photograph is taken, the captured image 50 of that part of the specimen is written by the camera software 40 to the hard disk drive (HDD) 32 of the computer device 30 (ST1). The joining software 42 monitors the HDD 32 (ST2), and every time it detects that an image file 50 has been written to the HDD 32, the written image file 50 is read from the HDD 32 (ST3) and the joining process is executed (ST4). This cycle of saving a captured image 50 to the HDD 32, reading it back, and joining it is repeated for each part of the specimen, so that the joined image 52 is updated with every shot. Each time the joined image 52 is updated, it is shown to the user as a preview on a display device. Because the joined image 52 being generated is previewed to the user at any time, the user can confirm at any time that an appropriate joined image 52 is being generated. This reduces the possibility of re-photographing work arising after photographing is finished, as can happen with conventional virtual slide generation by off-line stitching.
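The monitoring of ST2 to ST4 can be realized, for example, by polling the save folder at fixed intervals. The following is a minimal Python sketch under assumed names (watch_folder and handle_image are hypothetical); the actual joining software 42 may detect writes differently.

```python
import os
import time

def watch_folder(folder, handle_image, interval_sec=0.5):
    """Poll `folder` at a fixed interval and invoke `handle_image` for
    every newly written image file (corresponding to ST2 and ST3)."""
    seen = set(os.listdir(folder))
    while True:
        time.sleep(interval_sec)
        current = set(os.listdir(folder))
        for name in sorted(current - seen):
            if name.lower().endswith((".png", ".jpg", ".tif", ".tiff")):
                # ST4: run the joining process on the new file
                handle_image(os.path.join(folder, name))
        seen = current
```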
More specifically, the sequential joining process can be executed as follows. Feature point matching is performed between the previously captured image saved in the HDD 32 and the newly captured image 50 saved in the HDD 32, and the transformation matrix between the two images is calculated. Based on that result, feature point matching is additionally performed with any other image already saved in the HDD 32 whose destination region overlaps, and the corresponding transformation matrices are calculated. By limiting the calculation targets to the images with overlapping destination regions, rather than all saved captured images, the user's waiting time can be reduced. Furthermore, by calculating a Max Spanning Tree and performing a simple bundle adjustment as necessary, the accumulation of misalignment between images can be reduced.
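As one concrete possibility for the feature detection, matching, and matrix estimation described above, the following Python sketch uses ORB features and RANSAC-based similarity estimation from OpenCV; the specific detector and the helper name pair_matrix are illustrative assumptions, not a limitation of the method.

```python
import cv2
import numpy as np

def pair_matrix(new_img, last_img, max_features=2000):
    """Estimate the 3x3 matrix mapping coordinates of the new image into
    the coordinate system of the previous image, plus the inlier count."""
    orb = cv2.ORB_create(max_features)        # one possible detector choice
    kp_n, des_n = orb.detectAndCompute(new_img, None)
    kp_l, des_l = orb.detectAndCompute(last_img, None)
    if des_n is None or des_l is None:
        return None, 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_n, des_l)
    src = np.float32([kp_n[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_l[m.trainIdx].pt for m in matches])
    # Similarity transform (rotation + translation + scale); RANSAC rejects outliers.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None, 0
    return np.vstack([M, [0.0, 0.0, 1.0]]), int(inliers.sum())
```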
FIG. 2 is a diagram illustrating an overview of the overall configuration process. For example, when the user inputs that the sequential joining process has been completed, the joining software 42 executes processing for improving the quality of the image (ST5) and saves the processed joined image 52 to the HDD 32 (ST6).
More specifically, the overall configuration process can be executed, for example, as follows.
(1) Bundle adjustment (calculation of transformation matrices that minimize the misalignment of all feature point pairs, and of the lens distortion correction of the camera 20)
(2) Seam calculation (searching for the best cut lines between images)
(3) Exposure correction (correcting differences in exposure time and vignetting between images)
(4) Blending (compositing the images so that the seams between them are less noticeable)
(5) Writing the image processed by the overall configuration process to the HDD 32 of the computer device 30
Part of the above processing is performed by dividing the whole image into small regions (tiles). In this case, (2) to (5) or (3) to (5) may be processed tile by tile. This makes it possible to reduce the amount of memory and the computation time used in a single pass.
In the description of this embodiment, a virtual slide generated by imaging a slide glass specimen is described as an example, but this is merely one example. The microscope image information processing method according to the present embodiment is equally applicable to, for example, imaging cells cultured in a petri dish.
The sequential joining process and the overall configuration process will now be described in more detail.
(Sequential joining process)
Referring to FIG. 3, in order to perform the (N+1)-th sequential joining (N: an integer of 1 or more), a plurality of feature points are detected and their feature descriptors are computed, using a known method, for the image captured the N-th time (hereinafter, for convenience, the "Last(N) image") 50' and for the newly captured (N+1)-th image saved to the HDD 32 (hereinafter, for convenience, the "New(N+1) image") 50, and the detected feature points are matched. FIG. 3 shows an example of the result of matching the feature points detected in the Last(N) image 50' with those detected in the New(N+1) image 50; for convenience of explanation, only some of the matches are shown by broken lines. Next, based on the matching result, a transformation matrix R_{N,N+1} between the Last(N) image 50' and the New(N+1) image 50 is calculated. The transformation matrix R_{N,N+1} is a matrix (an affine transformation matrix) representing the translation and rotation between the Last(N) image 50' and the New(N+1) image 50. That is, the following relational expression holds.
$$
R_{N,N+1}
\begin{pmatrix} x_{N+1} \\ y_{N+1} \\ 1 \end{pmatrix}
=
\begin{pmatrix} x_{N} \\ y_{N} \\ 1 \end{pmatrix},
\qquad
R_{N,N+1} =
\begin{pmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{pmatrix}
$$

Here, the matrix $(x_{N+1}, y_{N+1}, 1)^{\mathrm{T}}$ represents a feature point of the New(N+1) image 50, and the matrix $(x_{N}, y_{N}, 1)^{\mathrm{T}}$ represents the corresponding feature point of the Last(N) image 50'.
The calculated transformation matrices R_{N,N+1} are sequentially stored, in a form such as an array, in a storage area such as the HDD 32 of the computer device 30. The degrees of freedom of the components a, b, c, d, e, f of the transformation matrix R_{N,N+1} may be reduced according to the performance of the microscope. For example, if the optical system and moving parts of the microscope are sufficiently precise, the relationship between captured images can be regarded as a pure translation, so that a = e = 1 and b = d = 0, with only c and f as variables (2 degrees of freedom). If rotation must be considered in addition to translation, for example when the precision is somewhat poor or the slide glass specimen is loosely fixed, then a = cos θ, b = sin θ, d = -sin θ, e = cos θ, with θ, c, and f as variables (3 degrees of freedom). If the precision is poorer still, or the lens magnification changes during photographing, then a = t·cos θ, b = t·sin θ, d = -t·sin θ, e = t·cos θ, with t, θ, c, and f as variables (4 degrees of freedom). By setting degrees of freedom appropriate to the microscope and the specimen, a reduction in the computation time and memory consumption required for joining, as well as improved joining accuracy, can be expected.
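These three settings can be illustrated by the following sketch, which builds R from the variable parameters at each degree-of-freedom level (the function name, and the similarity-transform form used for 4 degrees of freedom, are assumptions for illustration).

```python
import numpy as np

def affine_from_params(dof, t=1.0, theta=0.0, c=0.0, f=0.0):
    """Build the 3x3 matrix R for the degree-of-freedom settings in the
    text: dof=2 -> translation only, dof=3 -> rotation + translation,
    dof=4 -> scale + rotation + translation."""
    if dof == 2:
        a, b, d, e = 1.0, 0.0, 0.0, 1.0
    elif dof == 3:
        a, b = np.cos(theta), np.sin(theta)
        d, e = -np.sin(theta), np.cos(theta)
    else:  # dof == 4
        a, b = t * np.cos(theta), t * np.sin(theta)
        d, e = -t * np.sin(theta), t * np.cos(theta)
    return np.array([[a, b, c],
                     [d, e, f],
                     [0.0, 0.0, 1.0]])
```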
Next, the global transformation matrix of the New(N+1) image 50 is calculated. The "global transformation matrix" is information representing how far the New(N+1) image 50 has been translated and rotated relative to a specific reference image. Here, taking the first captured image as the reference and letting R_{N+1} denote the global transformation matrix of the New(N+1) image 50, the following relationship holds.
$$
R_{N+1} = R_{1,2}\, R_{2,3} \cdots R_{N,N+1}
$$
That is, the global transformation matrix R_{N+1} of the New(N+1) image 50 can be calculated as the product of the transformation matrices R_{1,2}, R_{2,3}, ..., R_{N,N+1}, each calculated between the immediately preceding captured image and the new captured image in the respective joining processes. Using the calculated global transformation matrix R_{N+1} of the New(N+1) image 50, the image data of the specific reference image (here, the first captured image) and the image data of the New(N+1) image 50 are joined, and the joined image 52 is generated.
Although the global transformation matrix R_{N+1} can be calculated by the method described above, calculating R_{N+1} involves multiplying together the matrices R_{1,2}, R_{2,3}, ..., R_{N,N+1}, so the errors arising in each matrix accumulate. In consideration of this point, the present embodiment further performs the following processing.
Calculating the global transformation matrix R_{N+1} gives the approximate destination of the New(N+1) image 50 relative to the reference image (the first image) (hereinafter, the "destination region"). By comparing the destination region of the New(N+1) image with the destination regions previously calculated for the other images, a search is made for past joined images, other than the Last(N) image 50', whose destination regions overlap with that of the New(N+1) image 50. Here, "the destination regions overlap" may mean, for example, that overlap is judged to exist when the overlapping area between destination regions exceeds 0, or that overlap is judged to exist when the overlapping area exceeds a predetermined threshold value. If the K-th joined image 50' (0 < K < N) is judged to overlap the New(N+1) image 50 in destination region, feature point matching is also performed between the New(N+1) image 50 and the K-th joined image 50', and a transformation matrix R_{K,N+1} between them is calculated. This series of processing is called the "additional matching process for region-overlapping images". Based on this information, a connected undirected graph with each image as a vertex is also computed.
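A minimal sketch of the destination-region computation and the overlap test, assuming 3x3 global matrices and axis-aligned bounding boxes (the helper names are hypothetical):

```python
import numpy as np

def dest_bbox(global_matrix, width, height):
    """Axis-aligned bounding box of an image warped by its global matrix."""
    corners = np.array([[0, 0, 1], [width, 0, 1],
                        [width, height, 1], [0, height, 1]], dtype=float).T
    warped = global_matrix @ corners
    xs, ys = warped[0], warped[1]
    return xs.min(), ys.min(), xs.max(), ys.max()

def overlap_area(b1, b2):
    """Overlap area of two bounding boxes; a value > 0 (or above a chosen
    threshold) means the destination regions are judged to overlap."""
    w = min(b1[2], b2[2]) - max(b1[0], b2[0])
    h = min(b1[3], b2[3]) - max(b1[1], b2[1])
    return max(w, 0.0) * max(h, 0.0)
```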
FIGS. 4A and 4B conceptually explain the recalculation of global transformation matrices using this graph. Before the recalculation, as shown in FIG. 4A, the path tracing the images had been determined in the order: first image, second image, third image, K-th image, N-th image, (N+1)-th image (the New(N+1) image 50). When the additional matching process for region-overlapping images is executed, the K-th image and the (N+1)-th image are linked, as shown for example in FIG. 4B. Then, based on the result of matching the feature points of the K-th image with those of the (N+1)-th image, the transformation matrix R_{K,N+1} between the K-th image and the (N+1)-th image is calculated. The optimal path tracing the whole set of images can be determined, for example, by computing a Max Spanning Tree. A Max Spanning Tree is the spanning tree of a weighted connected undirected graph whose total edge weight is maximal, and it is known that it can be computed with algorithms such as Kruskal's algorithm. In the present invention, the number of feature points matched between each pair of images can be used as the edge weight of the graph, although another index may be used instead. After computing the Max Spanning Tree, the center of the tree is taken as the new reference. As a result, rather than the path tracing the images in order from the first image to the (N+1)-th image as in FIG. 4A, taking an image near the center of the whole, for example the third image in FIG. 4B, as the reference makes the overall routes shorter (fewer matrices are multiplied, so errors are reduced), and the third image can therefore be determined as the new reference image. In this case, the following relationship holds.
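For illustration, a Max Spanning Tree and its center can be computed, for example, with the networkx library; the match counts below are hypothetical values.

```python
import networkx as nx

# Each vertex is an image; each edge weight is the number of matched
# feature points between the two images (hypothetical values).
G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 120), (2, 3, 150), (3, 4, 90),
    (4, 5, 80), (3, 6, 200),
])
tree = nx.maximum_spanning_tree(G, algorithm="kruskal")
new_reference = nx.center(tree)[0]  # the tree center becomes the new base image
```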
$$
R_{N+1} = R_{3,4}\, R_{4,5} \cdots R_{K-1,K}\, R_{K,N+1}
$$
By performing the sequential joining process for the (N+1)-th captured image using the global transformation matrix R_{N+1} calculated as described above, errors can be further reduced.
(Overall configuration process)
The overall configuration process is executed after all photographing of the slide glass specimen has been completed, and it improves the quality of the joined image. More specifically, the overall configuration process in this embodiment executes the following.
(1) Bundle adjustment (calculation of transformation matrices that minimize the misalignment of all feature point pairs, and of the lens distortion correction of the camera 20)
(2) Seam calculation (searching for the best cut lines between images)
(3) Exposure correction (correcting differences in exposure time and vignetting between images)
(4) Blending (compositing the images so that the seams between them are less noticeable)
(5) Writing the image processed by the overall configuration process to the HDD 32 of the computer device 30
Part of the above processing is performed by dividing the whole image into small regions (tiles). In this case, (2) to (5) or (3) to (5) may be processed tile by tile. This makes it possible to reduce the amount of memory and the computation time used in a single pass.
(Bundle adjustment)
As described above, feature point matching is performed in the sequential joining process. The overall error can be computed by calculating, for all feature points, how large the error between matched feature points becomes when the feature points are moved and rotated as indicated by the transformation matrices (the reprojection error). By minimizing this overall error with the least squares method, the sequentially joined image as a whole can be improved in quality. As an example, the Levenberg-Marquardt method, one of the solution methods for nonlinear least squares problems, can be used.
$$
x_{i+1} = x_i - \left(J_i^{\mathrm{T}} J_i + \lambda I\right)^{-1} J_i^{\mathrm{T}} r_i \tag{1}
$$
Here, in the present embodiment, equation (1) can be computed with:
x_i: the components of the image transformation matrices at processing step i
J_i: the Jacobian at processing step i (the superscript T denotes transposition)
r_i: the errors between matched feature points at processing step i
λ: a real number of zero or more, adjusted according to the magnitude of the error
I: the identity matrix
By equation (1), the parameters J and r are calculated for the parameter x at a given processing step. Using the calculated J and r, the parameters J and r are then calculated again for the next parameter x. By repeating this, a global transformation matrix that minimizes the reprojection error between all matched feature points is calculated, whereby the bundle adjustment can be executed.
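A minimal NumPy sketch of this iteration, under the assumption that residual and Jacobian callbacks are supplied externally (the function names are hypothetical), might look as follows.

```python
import numpy as np

def lm_step(x, residual_fn, jacobian_fn, lam):
    """One Levenberg-Marquardt update per equation (1):
    x_{i+1} = x_i - (J^T J + lambda * I)^{-1} J^T r."""
    r = residual_fn(x)            # errors between matched feature points
    J = jacobian_fn(x)            # Jacobian of the residuals w.r.t. x
    A = J.T @ J + lam * np.eye(x.size)
    return x - np.linalg.solve(A, J.T @ r)

def levenberg_marquardt(x0, residual_fn, jacobian_fn, lam=1e-3, iters=50):
    """Repeat the update, adjusting lambda according to whether the error drops."""
    x = x0
    for _ in range(iters):
        x_new = lm_step(x, residual_fn, jacobian_fn, lam)
        if np.sum(residual_fn(x_new) ** 2) < np.sum(residual_fn(x) ** 2):
            x, lam = x_new, lam * 0.5   # accept the step, trust the model more
        else:
            lam *= 2.0                  # reject the step, lean toward gradient descent
    return x
```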
(Processing flow: sequential joining process)
FIG. 5 is a diagram showing an example of a flowchart of the sequential joining process.
First, input designating a folder on the HDD 32 to be monitored is received from the user of the microscope image information processing system 1 via an application such as the camera software 40 or the joining software 42 running on the computer device 30 (step S102). The joining software 42 checks the designated folder for updates at fixed intervals (step S104). Unless the joining software 42 receives an input indicating the end of the sequential joining process (step S106: No), the sequential joining process continues and the flow proceeds to step S108.
If, in step S108, the joining software 42 determines that no image file has been updated in the folder checked in step S104 (step S108: No), the flow returns to step S104. If the joining software 42 determines that an image file has been updated, that is, a new image of the slide glass specimen has been captured and saved to the folder (step S108: Yes), it determines whether the update is the first save of an image file, that is, whether it results from saving the first captured image of the slide glass specimen (step S110). If the update is not the first save (step S110: No), processing for the second and subsequent saved image files is executed (step S120); this processing is described in detail later.
If the update is the first save of an image file (step S110: Yes), the saved image file is read from the HDD 32, the feature points and feature descriptors in the image are calculated, and the results are stored in an array for holding the feature point and feature descriptor information of each image (step S112). If the distortion parameters, vignetting parameters, and so on are known in advance, correction may be performed at the time the image file is read. The optical system lens of the camera 20 can introduce distortion into the captured image. It is generally known that the conversion from the "true, distortion-free coordinates x, y, z" to the "distorted post-capture coordinates u, v" can be calculated, for example, by the following equation (see, for example, Zhengyou Zhang. A flexible new technique for camera calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(11):1330-1334, 2000. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr98-71.pdf).
$$
\begin{aligned}
x' &= x/z, \qquad y' = y/z, \qquad r^2 = x'^2 + y'^2 \\
x'' &= x'\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + 2 p_1 x' y' + p_2\left(r^2 + 2 x'^2\right) \\
y'' &= y'\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) + p_1\left(r^2 + 2 y'^2\right) + 2 p_2 x' y' \\
u &= f_x\, x'' + c_x, \qquad v = f_y\, y'' + c_y
\end{aligned} \tag{2}
$$
Here, k_n, p_n, c_x, c_y, f_x, f_y are the distortion parameters, and their values are estimated before distortion correction is executed. More specifically, before distortion correction, a grid pattern of equally sized squares is photographed with the same camera 20 used to photograph the slide glass specimen, and the values of the parameters k_n, p_n, c_x, c_y, f_x, f_y can be estimated from the distortion of the photographed image. Using the estimated parameter values, the distortion of a captured image can be corrected by the inverse transformation (u, v) → (x, y) (the z coordinate is ignored here because the system is two-dimensional). Since no analytical inverse function exists, the inverse transformation can be performed with an algorithm that computes approximate solutions, such as Newton's method. Only some of the parameters may be considered (for example, only the values of the parameters k_1 and p_1 may be considered, with the values of the other parameters regarded as 0). Distortion correction may also be performed using another distortion model.
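As an illustrative sketch, the following functions implement the forward model of equation (2) restricted to k_1 and p_1 (with f_x = f_y = 1 and c_x = c_y = 0 assumed for simplicity) and an approximate Newton-style inverse, since no closed-form inverse exists.

```python
import numpy as np

def distort(x, y, k1, p1):
    """Forward model of equation (2) with only k1 and p1 considered
    (all other parameters treated as zero)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2
    u = x * radial + 2.0 * p1 * x * y
    v = y * radial + p1 * (r2 + 2.0 * y * y)
    return u, v

def undistort(u, v, k1, p1, iters=10):
    """Approximate inverse (u, v) -> (x, y) by iterative correction;
    for small distortion each step behaves like a Newton update."""
    x, y = u, v                      # initial guess: no distortion
    for _ in range(iters):
        du, dv = distort(x, y, k1, p1)
        x, y = x - (du - u), y - (dv - v)
    return x, y
```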
By correcting the distortion of the captured images as described above and generating the joined image from the distortion-corrected images, misalignment in the joined image can be reduced. FIGS. 6A and 6B show an example of a joined image (partial) generated by the inventors of the present application using the microscope image information processing method according to the present embodiment. FIG. 6A shows the joined image when sequential joining was performed without distortion correction of the captured images, and FIG. 6B shows the joined image when sequential joining was performed after distortion correction. The error of the joined image in FIG. 6A was about 3.32e+05, and that of FIG. 6B was about 1.57e+04; applying distortion correction to each captured image before sequential joining greatly reduced the error. The error (reprojection error) was calculated by the following procedure: (1) using the global transformation matrices, the coordinates of the feature points of each image in the reference coordinate system (the coordinate system of the reference image) are calculated; (2) the misalignment between matched feature points in the reference coordinate system is calculated. Ideally, matched feature points transferred to the reference coordinate system would coincide exactly, but because of distortion and other factors they do not; this deviation constitutes the error.
Returning to FIG. 5, the image file is next reduced by a predetermined ratio and stored in an array for holding image data (step S114). By holding reduced images in the RAM 33 or the HDD 32, memory and storage consumption can be reduced compared with holding the original-size images. However, full-size image data may instead be stored in the array without reduction, in which case the joined image is thereafter generated and handled as a full-size image rather than a reduced one.
The global transformation matrix of the first captured image reduced in step S114 (here, the identity matrix) is stored in an array for holding global transformation matrices (step S116). A preview image is composed using the reduced image data saved in step S114 and the global transformation matrix (identity matrix) saved in step S116 and is displayed as a preview; alternatively, the reduced version of the first captured image may simply be displayed as the preview (step S118).
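For illustration, composing the preview from a reduced image and its global transformation matrix could be sketched with OpenCV as follows; the function name and the overwrite-style compositing are assumptions (the embodiment may composite differently).

```python
import cv2
import numpy as np

def paste_preview(canvas, reduced_img, global_matrix, scale):
    """Warp a reduced image into the preview canvas using its global
    transformation matrix; `scale` is the reduction ratio, so the
    matrix is conjugated into the reduced coordinate system."""
    S = np.diag([scale, scale, 1.0])
    M = (S @ global_matrix @ np.linalg.inv(S))[:2, :]   # 2x3 for warpAffine
    h, w = canvas.shape[:2]
    warped = cv2.warpAffine(reduced_img, M, (w, h))
    mask = cv2.warpAffine(np.ones(reduced_img.shape[:2], np.uint8), M, (w, h))
    canvas[mask > 0] = warped[mask > 0]   # overwrite the covered region
    return canvas
```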
The above processing continues until, in step S106, the joining software 42 receives an input from the user, via an input device such as a keyboard or mouse, indicating the end of the sequential joining process (step S106: Yes), for example because the sequential joining process has been completed (that is, virtual slide generation has been completed). When such an input is received, the sequential joining process ends and the flow moves to the overall configuration process.
(Processing flow: processing in step S120)
FIG. 7 is a diagram showing an example of a flowchart of the processing in step S120 (processing for the second and subsequent image files).
First, image n, a partial image of the slide glass specimen saved to the HDD 32 for the n-th time (n: an integer of 2 or more), is read, and its feature points and feature descriptors are calculated (step S1202). As in step S112 of FIG. 5, if the distortion parameters, vignetting parameters, and so on are known in advance, correction may be performed when the image file is read. Next, feature point matching is performed between image n and image n-1, the image captured the (n-1)-th time, and the transformation matrix between the two images (hereinafter the "image pair matrix") is calculated (step S1204). If the processing of step S1204 succeeds (step S1206: Yes), the destination region of image n is calculated, and based on this result, one or more images whose destination regions overlap (hereinafter also called "pair images"; image K in the example of FIG. 4B corresponds to these) are selected from all the saved images of the slide glass specimen (step S1208). Next, feature point matching is performed between image n and all the pair images selected in step S1208, and the respective image pair matrices (the transformation matrix R_{K,N+1} in the example of FIG. 4B) are calculated (step S1210). The feature point matching executed in step S1210 is computationally expensive, and performing it against all saved images would lengthen the user's waiting time. The waiting time can therefore be reduced by performing feature point matching only on the images with overlapping destination regions selected in step S1208.
Next, the feature points and feature descriptors of image n are stored in an array (step S1212). Image n is reduced by a predetermined ratio, and the reduced image data is stored in an array (step S1214). The image pair matrices obtained in step S1210 are stored in an array (step S1216). A global transformation matrix is calculated from the image pair matrices stored in step S1216 and stored in an array (step S1218). The global matrix of image n can be calculated as the product of the image pair matrix between image n and its pair image (the image selected in step S1208; image K in the example of FIG. 4B) and the global transformation matrix of that pair image. In step S1218, the global transformation matrices of all saved images may also be recalculated as necessary; "as necessary" can mean, for example, "every predetermined number of times", "when an input is received from the user", or "when the reprojection error between all matched feature points reaches a certain value". When recalculating the global transformation matrices of all images, a Max Spanning Tree may be computed with the numbers of feature matches as the weights. The image at the center of the tree (the third image in the example of FIG. 4B) is taken as the new reference image (its global transformation matrix becomes the identity matrix), and the global matrix of each image can be calculated by taking the products of the image pair matrices along the edges of the tree. In addition, a small number of iterations of bundle adjustment (for example, two iterations) may be performed to calculate the global transformation matrices; the bundle adjustment can be performed by minimizing the reprojection error between all matched feature points, for example with the Levenberg-Marquardt method. The calculated image pair matrices may also be updated based on the recalculated global transformation matrices of all images (for example, letting M_a and M_b be the global transformation matrices of image A and image B respectively, the image pair matrix between image A and image B can be calculated as the product of the matrix M_a and the matrix M_b^{-1}). This allows the next recalculation to start from a state with smaller errors.
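The pair matrix refresh just described can be written directly from the recalculated global matrices; the following one-liner follows the text's formula M_a · M_b^{-1} (the function name is hypothetical).

```python
import numpy as np

def refreshed_pair_matrix(Ma, Mb):
    """Refresh the stored pair matrix for images A and B from their
    recalculated global matrices, as M_a @ M_b^{-1} in the text."""
    return Ma @ np.linalg.inv(Mb)
```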
A preview image is generated using the reduced image data saved in step S1214 and displayed on a display device (step S1220). Since the preview image up to image n-1 has already been composed, the new preview may be generated by joining the size-reduced image n onto the preview image of image n-1. If the global transformation matrices of all saved images were recalculated in step S1218, the preview images generated so far may be discarded and the preview image regenerated. The flow then moves to step S104 in FIG. 5.
If, in step S1206, the processing of step S1204 was not successful (step S1206: No), feature point matching is performed between image n and all saved images, and image pair matrices are calculated (step S1222). If, as a result, at least one image pair exists (a pair with one or more images whose destination regions overlap) (step S1224: Yes), the flow moves to step S1212. If no image pair exists (step S1224: No), the flow moves to step S104 in FIG. 5.
A case where the processing of step S1204 does not succeed (step S1206: No), that is, where the matching of feature points between image n and image n-1 fails, can occur, for example, when photographing of the slide glass specimen is restarted from a field of view very different from the one photographed so far (since the entire area of the specimen is not necessarily photographed in a single continuous stroke). The processing of step S1222 may be terminated as soon as even one image pair matrix has been calculated, in which case the flow moves on to step S1208.
The data in the arrays used in the flows of FIGS. 5 to 7, namely the array holding the image pair matrices, the array holding the reduced images, the array holding the global transformation matrices, and the array holding the feature points and feature descriptors of each image, is handed over to the overall configuration process described below. By saving all or part of this data as files in a storage area such as the HDD 32 of the computer device 30, it is also possible, for example, to postpone the overall configuration process and perform it at a later date, or to execute it on another computer device.
(Processing flow: overall configuration process)
FIG. 8 is a diagram showing an example of a flowchart of the overall configuration process.
Bundle adjustment is performed on all the images that have undergone the sequential joining process. As described above, the bundle adjustment can be performed by the Levenberg-Marquardt method (equation (1) above) or the like (step S302). If the distortion parameters of the optical system lens of the camera 20 are not known in advance, the optical distortion of the camera 20 can be corrected by including, among the parameters adjusted in the bundle adjustment, distortion parameters common to all images in addition to the global transformation matrices, and minimizing the reprojection error calculated after distortion-correcting the feature point coordinates. In this case, the parameter x in equation (1) contains the components of the transformation matrices of all images together with the common distortion parameters. That is, when distortion is not considered, the parameter x_i of equation (1) is the concatenation of the components of the global transformation matrix of each image, x_i = (R_1, R_2, ..., R_n), where R_1 = (a_1, b_1, c_1, d_1). When distortion is considered, the distortion parameters of equation (2) are appended, as in x_i = (R_1, R_2, ..., R_n, k_1, k_2, p_1, p_2, ...); these become the common distortion parameters. By computing the error r with this parameter x_i, an improved x_{i+1} can be computed based on equation (1), and by repeating this, progressively better global transformation matrices and distortion parameters are obtained. As initial values, the parameters are provisionally set to no distortion (k_1 = p_1 = ... = 0), and k_1, p_1, ... are determined by repeatedly computing equation (1). Although it is also possible to compute distortion parameters specific to each image, using common distortion parameters further reduces the computation time, and this is sufficiently practical when only the location where the slide glass specimen is observed changes and the lens of the camera 20 is not changed. Since step S302 calculates the distortion parameters using only the feature point coordinates, it is the subsequent processing that actually performs distortion correction on all pixels of each image.
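A joint optimization of this kind can be sketched, for example, with scipy.optimize.least_squares; the parameter packing (one similarity transform (t, θ, c_x, c_y) per image plus shared (k_1, p_1)), the first-order approximate undistortion, and the `matches` bookkeeping are all assumptions for illustration, not the embodiment's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def approx_undistort(pts, k1, p1):
    """First-order approximate undistortion of observed feature
    coordinates with the shared distortion parameters (k1, p1)."""
    x, y = pts[:, 0], pts[:, 1]
    r2 = x * x + y * y
    xu = x * (1.0 - k1 * r2) - 2.0 * p1 * x * y
    yu = y * (1.0 - k1 * r2) - p1 * (r2 + 2.0 * y * y)
    return np.stack([xu, yu], axis=1)

def residuals(x, n_images, matches):
    """Reprojection errors of all matched pairs; x packs one similarity
    transform (t, theta, cx, cy) per image plus shared (k1, p1)."""
    T = x[:4 * n_images].reshape(n_images, 4)
    k1, p1 = x[-2], x[-1]
    errs = []
    for i, j, pts_i, pts_j in matches:        # pts_*: (M, 2) arrays
        def to_ref(idx, pts):
            t, th, cx, cy = T[idx]
            p = approx_undistort(pts, k1, p1)
            R = np.array([[t * np.cos(th), t * np.sin(th)],
                          [-t * np.sin(th), t * np.cos(th)]])
            return p @ R.T + np.array([cx, cy])
        errs.append((to_ref(i, pts_i) - to_ref(j, pts_j)).ravel())
    return np.concatenate(errs)

# Initial guess: transforms from the sequential joining stage, no distortion
# (k1 = p1 = 0), refined jointly with the transforms, e.g.:
# x0 = np.concatenate([T0.ravel(), [0.0, 0.0]])
# fit = least_squares(residuals, x0, args=(n_images, matches))
```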
Next, distortion correction is performed on the reduced images (the reduced images stored in the arrays in steps S114 and S1214) (step S304). If the distortion parameters were known before the sequential joining process and distortion correction was already performed on each captured image in step S112 of FIG. 5 and step S1202 of FIG. 7, the correction in this step is omitted. FIG. 9 shows an example of a joined image for the same captured images as in FIGS. 6A and 6B, where distortion correction was performed by including the common distortion parameters in the parameter x of equation (1) before joining. The error of the joined image in FIG. 6A was about 3.32e+05 and that of FIG. 6B about 1.57e+04, whereas the error of the joined image in FIG. 9 was about 1.55e+03. The error is thus reduced even further than in the joined image of FIG. 6B, in which distortion correction was applied to the captured images in advance and sequential joining was then performed.
Next, the seams between the captured images are calculated using the reduced images (step S306). By calculating the seams, the images can be joined along appropriate cut lines. Existing techniques can be used for the seam calculation, for example methods using Voronoi diagrams, dynamic programming, or graph cuts. Since seam calculation takes time, in this embodiment it is executed in the overall configuration process, where a longer user waiting time is tolerated, rather than in the sequential joining process. Seam calculation on full-size images is also time-consuming, but this embodiment uses the reduced images as described above, which contributes to shortening the computation time.
Next, the size (width and height) of the image generated by joining all captured images is calculated. FIG. 10 shows an example of an image generated by joining all images; for convenience of explanation it is shown in simplified form. If there are six images (images 1 to 6), the image generated by joining them is the image 60 configured to contain all of images 1 to 6. In this step, the size (width w, height h) of the image 60 is calculated as follows: (1) using the global transformation matrices and distortion correction parameters obtained in step S302, the destination region (in the reference coordinate system) of each of images 1 to 6 is calculated (it can be computed from the size (width and height) of each image); (2) the rectangular region containing all the destination regions obtained in (1) is calculated. The calculated region is then divided into a plurality of small regions (hereinafter "tiles", "tile images", etc.) (step S308). FIG. 11 shows an example in which the rectangular region 60 is divided into a plurality of tiles 65 and 66 (two here, for simplicity of explanation). The set of images intersecting each tile 65, 66 is called a tile image set: in this example, the tile image set of tile 65 is images 1, 2, 3, 4, 5, and that of tile 66 is images 4, 5, 6. When the joining process is performed for tile 65, for example, an image larger than tile 65 is generated, and the protruding portions are cropped off. The size and shape of the tiles can be determined, for example, as follows: (1) a rectangle of predetermined size (such as 1024 pixels × 1024 pixels); (2) specified by the user, for example in a configuration file; or (3) determined automatically in consideration of the memory of the computer device 30. Regarding (3), the memory required to process one tile can be estimated, for example, as "the number of images in the tile image set × the size of one full-size image", so the tile size may be determined so that this required memory does not exceed, say, 10% of the memory available to the software. The tile shape may be a predetermined specific shape (such as a square) or similar in proportion to the individual images 1 to 6 (such as a rectangle with an aspect ratio of 3:4). In the processing of steps S310 to S318 below, full-size images are used to join the images. Since processing all full-size images at once would increase memory usage and computation time, the whole image is divided into tiles and steps S310 to S318 are executed for each tile.
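A minimal sketch of the tile division and the tile image set selection, assuming option (1) (a fixed tile size) and axis-aligned destination bounding boxes per image (the helper names are hypothetical):

```python
def make_tiles(total_w, total_h, tile=1024):
    """Split the full output rectangle into fixed-size tiles, e.g.
    1024 x 1024 pixels as in option (1)."""
    tiles = []
    for y in range(0, total_h, tile):
        for x in range(0, total_w, tile):
            tiles.append((x, y, min(x + tile, total_w), min(y + tile, total_h)))
    return tiles

def tile_image_set(tile_box, image_bboxes):
    """Indices of images whose destination bounding boxes intersect the
    tile; `image_bboxes` holds (xmin, ymin, xmax, ymax) per image."""
    tx0, ty0, tx1, ty1 = tile_box
    return [i for i, (x0, y0, x1, y1) in enumerate(image_bboxes)
            if x0 < tx1 and x1 > tx0 and y0 < ty1 and y1 > ty0]
```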
First, the images whose destination regions intersect the tile being processed in this step are identified (step S310); the identified set of images is the "tile image set". In the example of FIG. 11, the tile image set of tile 65 is images 1, 2, 3, 4, 5, and that of tile 66 is images 4, 5, 6. Next, each image file of the tile image set identified in step S310 is read from the HDD 32 (step S312). Distortion correction is applied to the tile image set read in step S312, based on the distortion parameters (either distortion parameters known in advance or those estimated by including the common distortion parameters in the parameter x of equation (1)); in addition, if the vignetting parameters are known in advance, image correction based on those vignetting parameters is performed. If the vignetting parameters are not known in advance, exposure correction is performed in the next step (step S314). As the exposure correction method, for example, the method disclosed in M. Uyttendaele, A. Eden and R. Szeliski (2001), "Eliminating ghosting and exposure artifacts in image mosaics", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), p. II (https://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Uyttendaele01.pdf) can be used.
All images of the tile image set are moved and placed according to the global transformation matrices calculated in step S302 (step S316). In doing so, the images are joined using the seam information calculated in step S306. Blending may also be performed at the time of joining; various known techniques may be used, such as linear blending or multi-band blending, for example the methods disclosed in Richard Szeliski (2007), "Image Alignment and Stitching: A Tutorial", Foundations and Trends (R) in Computer Graphics and Vision: Vol. 2: No. 1, pp. 1-104 (https://www.nowpublishers.com/article/Details/CGV-009).
The image (tile) composed in step S316 is saved as a temporary file in a storage area such as the HDD 32 of the computer device 30 (step S318). The processing of steps S310 to S318 above is executed for all tiles.
The temporary file saved in step S318 for each tile is read from the storage area such as the HDD 32 of the computer device 30 and written sequentially into the final file (step S320). As a result, the final virtual slide image that has undergone the overall configuration process is saved in a storage area such as the HDD 32 of the computer device 30.
FIGS. 12A and 12B show the effect of the additional matching process for region-overlapping images in the sequential joining process. In the graph of FIG. 12A, the horizontal axis is the registration order of the images and the vertical axis is the number of images against which matching was attempted; in FIG. 12B, the horizontal axis is the registration order and the vertical axis is the time required for matching. Three schemes were compared for the sequential joining process. In the first scheme, when a new image (the New(N+1) image) is detected, matching is attempted against all images joined so far. The second scheme performs the additional matching process for region-overlapping images. In the third scheme, when a new image (the New(N+1) image) is detected, matching is attempted only against the immediately preceding joined image (the Last(N) image). In the first scheme, the number of images targeted for matching increases linearly as registration proceeds, so the time required for matching also grows and the user's waiting time gradually increases. In the second scheme, the number of images against which matching is attempted fluctuates but remains nearly constant regardless of registration order, so the matching time is also nearly constant. In the third scheme, the number of images against which matching is attempted is always one, so the matching time is likewise constant; however, because the positional relationships between the new image (the New(N+1) image) and images other than the immediately preceding joined image (the Last(N) image) are unknown, misalignment caused by repeated joining or optical distortion cannot be corrected by Max Spanning Tree calculation or bundle adjustment. Thus, by using the second scheme (the additional matching process for region-overlapping images), misalignment between images can be corrected while keeping the matching time (the user's waiting time) constant.
 FIGS. 13A and 13B illustrate how the Max Spanning Tree calculation combined with a simple bundle adjustment reduces the misalignment between images during the sequential stitching process. FIG. 13A shows the graph for an HE-stained pancreatic cancer slide (320 captured images), and FIG. 13B shows the graph for an HE-stained liver slide (59 captured images). The graphs compare the magnitude of the misalignment (RMS: root mean square) at the end of the sequential stitching when the Max Spanning Tree calculation and the simple bundle adjustment were run after every ten stitches against the case where they were not run. In both graphs, the misalignment between images is greatly reduced when the Max Spanning Tree calculation and the simple bundle adjustment are performed. If the misalignment is too large, the user cannot readily confirm whether a proper stitched image is being generated, so reducing it as needed through the Max Spanning Tree calculation and a simple bundle adjustment is effective.
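 A minimal sketch of the periodic correction described above might look as follows; max_spanning_tree() and bundle_adjust() stand in for the document's own routines and are assumptions here, as is the reporting format.

    import math

    def rms_error(residuals):
        # RMS deviation per matched feature-point pair, as plotted in Figs. 13A/13B.
        return math.sqrt(sum(r * r for r in residuals) / len(residuals))

    def on_image_stitched(count, match_graph, transforms, residuals):
        # Every tenth stitch, re-derive the global transforms and relax the
        # accumulated drift with a light bundle adjustment.
        if count % 10 == 0:
            tree = max_spanning_tree(match_graph)         # hypothetical helper
            transforms = bundle_adjust(tree, transforms)  # hypothetical helper
            print(f"after {count} stitches: RMS = {rms_error(residuals):.2f} px")
        return transforms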
 FIGS. 14A and 14B illustrate the effect of distortion correction in the overall composition processing; they relate in particular to the processing of step S304 described above. FIG. 14A shows the graph for an HE-stained pancreatic cancer slide (320 captured images), and FIG. 14B shows the graph for an HE-stained liver slide (59 captured images). The figures compare the magnitude of the misalignment (RMS) when distortion correction is performed by including a common distortion parameter in the parameter x of equation (1) above against the case without distortion correction. Here, RMS is the deviation per matched feature-point pair, in pixels. With distortion correction, the deviation is reduced to 0.5 pixels or less. This level of deviation is invisible to the naked eye, showing that the distortion correction is highly effective.
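 The patent publishes equation (1) only as an image, so its exact form is not reproduced here; as one common way (an assumption, not the published formula) to give all images a shared distortion parameter, the optimization vector can be augmented with radial distortion coefficients applied to every feature coordinate before projection:

    % Radial model shared across all images (illustrative assumption):
    % (x_u, y_u) are undistorted and (x_d, y_d) distorted coordinates.
    \[
      \begin{pmatrix} x_u \\ y_u \end{pmatrix}
        = \bigl(1 + k_1 r^2 + k_2 r^4\bigr)
          \begin{pmatrix} x_d \\ y_d \end{pmatrix},
      \qquad r^2 = x_d^2 + y_d^2,
    \]
    \[
      x = (\, t_1, \dots, t_N, \; k_1, k_2 \,),
    \]
    % where t_1 ... t_N are the per-image transform parameters and k_1, k_2
    % are the distortion parameters common to all N images.

 Because the microscope optics are identical for every capture, fitting k_1 and k_2 once across the whole mosaic, rather than per image, keeps the parameter count small while still removing the systematic lens distortion.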
(Configuration of the microscope image information processing system)
 As shown in FIGS. 1 and 2, a microscope image information processing system 1 that executes the microscope image information processing method according to the present embodiment may comprise a microscope 10, a camera 20, and a computer device 30. Existing, general-purpose equipment can be used for the microscope 10, the camera 20, and the computer device 30. FIG. 15 shows an example of the hardware configuration of the computer device 30. The computer device 30 may have the same hardware configuration as a general computer: a processor 31; an HDD 32; a RAM (Random Access Memory) 33; a ROM (Read Only Memory) 34; removable memory 35 such as a CD, a DVD, a USB memory, a memory stick, or an SD card; an input/output user interface 36 (keyboard, mouse, touch panel, speaker, microphone, LED (light-emitting diode), and the like); a wired/wireless communication interface 37 capable of communicating with other computer devices; a display 38; and so on. The computer device 30 can realize each process of the microscope image information processing method according to the present embodiment by reading the computer programs stored in, for example, the HDD 32 (the camera software 40, the stitching software 42, and various other computer programs) and the data to be processed into a memory such as the RAM 33 and executing them.
 Although the computer device 30 is illustrated as a single device in FIGS. 1 and 2, it may be composed of two or more computer devices. For example, a first computer device may store the images of each part of the slide glass specimen in its HDD 32, and a separate second computer device may execute the processing from the sequential stitching onward using the plurality of captured images stored on the first computer device. In that case, each captured image of the slide glass specimen temporarily stored in the HDD 32 of the first computer device may, for example, be automatically transmitted to the second computer device by wired or wireless communication, and the second computer device may use this as the trigger to execute the subsequent sequential stitching and overall composition processing. Alternatively, the first computer device may execute the sequential stitching and transmit the completed sequentially stitched image, together with the data required for the overall composition processing, to the second computer device by wired or wireless communication; the second computer device may then execute the overall composition processing using the transmitted stitched image and that data.
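 As an illustration of the first variant (automatic transfer triggering the second computer), the following sketch polls a capture folder and forwards each new image over HTTP; the folder name, file extension, and upload endpoint are hypothetical, and any transport shared by the two machines would serve equally well.

    import time
    from pathlib import Path

    import requests

    WATCH_DIR = Path("captures")                      # where the camera software saves images
    SECOND_PC = "http://stitching-host:8000/upload"   # hypothetical endpoint on the 2nd computer

    def watch_and_forward(poll_seconds=0.5):
        seen = set()
        while True:
            for path in sorted(WATCH_DIR.glob("*.tif")):
                if path not in seen:
                    seen.add(path)
                    with path.open("rb") as f:
                        # Receipt of the file triggers the sequential
                        # stitching on the second computer.
                        requests.post(SECOND_PC, files={"image": f})
            time.sleep(poll_seconds)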
 Although the captured images and stitched images of each part of the slide glass specimen are described in this embodiment as being stored in the HDD 32, they may be stored on other recording media.
 According to the microscope image information processing method of the present embodiment, the stitching process runs while the images of each part of the slide glass specimen are being captured, and the partially stitched image is previewed to the user as it grows. The user can proceed with photographing the slide glass specimen while checking the preview to confirm, at each step, that the stitching is being performed properly. This avoids the situation in conventional off-line stitching where the stitching ultimately fails and the slide glass specimen has to be photographed all over again.
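 Condensed into Python, the interactive loop this paragraph describes is roughly the following; detect_features(), match(), stitch(), and show_preview() are placeholders for the method's second through fifth steps, not real APIs.

    def sequential_stitch(capture_stream):
        mosaic, prev_feats = None, None
        for image in capture_stream:              # first step: a new capture is saved
            feats = detect_features(image)        # second step: feature point information
            if prev_feats is None:
                mosaic = image
            else:
                pairs = match(prev_feats, feats)       # third step: matching
                mosaic = stitch(mosaic, image, pairs)  # fourth step: stitching
            show_preview(mosaic)                  # fifth step: the user checks the join
            prev_feats = feats
        return mosaic

 The point of the design is that a stitching failure surfaces in the preview immediately after the offending capture, while the specimen is still on the stage.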
 Furthermore, unlike conventional real-time stitching, the microscope image information processing method of the present embodiment requires neither an automatic motorized stage nor a dedicated camera, and it can be realized with an existing imaging system (microscope, camera, and computer device). The cost of introducing the system is therefore low, and virtual slides can be generated from familiar images taken with the microscope used in daily work. Slide scanners, devices that automatically scan slide glass specimens, also exist as a conventional way of creating virtual slides, but such devices are very expensive; the method of the present embodiment requires no such dedicated device.
 In addition, photographing a fluorescent sample requires an exposure time of roughly 500 ms to 1 s, and generating a virtual slide of a multiply stained sample requires manually switching filters. With conventional real-time stitching it is difficult to change the imaging conditions flexibly in this way. The microscope image information processing method of the present embodiment, by contrast, captures the sample images one at a time and can therefore also handle fluorescent samples.
 Furthermore, according to the microscope image information processing method of the present embodiment, the preview image is generated from reduced images while the stitched image at the original size is generated in the overall composition processing, which reduces the memory used and improves the processing speed.
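 A small sketch of that saving (the 1/4 ratio is an illustrative choice, and OpenCV is an assumed dependency): run the sequential preview on reduced copies and keep the originals untouched for the full-resolution overall composition.

    import cv2

    def preview_copy(image, ratio=0.25):
        # INTER_AREA is the usual interpolation choice when shrinking images.
        return cv2.resize(image, None, fx=ratio, fy=ratio,
                          interpolation=cv2.INTER_AREA)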
 Although one embodiment of the present invention has been described above, the present invention is of course not limited to that embodiment and may be implemented in various other forms within the scope of its technical idea.
 Furthermore, the scope of the present invention is not limited to the exemplary embodiments illustrated and described, but also encompasses all embodiments that provide effects equivalent to those the present invention aims at. Moreover, the scope of the invention is not limited to the combinations of features defined by the claims, but may be defined by any desired combination of particular features among all of the disclosed features.
 The following configurations also fall within the technical scope of the present invention.
(1)
 A microscope image information processing method executed by a computer system, comprising:
 a first step of acquiring a captured image of a part of a sample observed using a microscope and storing it in a storage area;
 a second step of, upon detecting that the captured image has been stored in the storage area, calculating feature point information, which is information on the feature points of the captured image;
 a third step of executing, using the feature point information of the previously captured image and the feature point information of the current captured image, a matching process between the feature points of the previously captured image and the feature points of the current captured image;
 a fourth step of executing a stitching process based on the result of the matching process and the plurality of captured images stored in the storage area so far, to generate a stitched image; and
 a fifth step of displaying the stitched image,
 wherein the first step through the fifth step are repeated until the imaging of the parts of the sample is finished.
(2)
 The microscope image information processing method according to (1) above, wherein the stitched image generated in the fourth step is generated using the captured images reduced at a predetermined ratio.
(3)
 The microscope image information processing method according to (1) or (2) above, wherein, in the third step, a global transformation matrix of the captured image stored in the first step is calculated with a specific image among the captured images stored in the storage area as the reference.
(4)
 The microscope image information processing method according to (3) above, wherein the global transformation matrix is calculated by a Max Spanning Tree.
(5)
 The microscope image information processing method according to (4) above, wherein the global transformation matrix is adjusted by minimizing, using the least squares method, the reprojection errors between all feature points subjected to the matching process in the third step.
(6)
 The microscope image information processing method according to (1) above, further comprising, after the fifth step, a step of arranging, according to a global transformation matrix, a region containing the stitched image generated by repeating the first step through the fifth step until the imaging of the parts of the sample is finished, together with all of the captured images.
(7)
 The microscope image information processing method according to any one of (1) to (4) above, further comprising:
 a sixth step of dividing a region containing the stitched image generated by repeating the first step through the fifth step until the imaging of the parts of the sample is finished into a plurality of subregions; and
 a seventh step of, for each of the plurality of subregions,
 identifying one or more captured images that intersect the subregion being processed, and
 arranging the subregion being processed and the identified one or more captured images according to a global transformation matrix.
(8)
 The microscope image information processing method according to (7) above, wherein, in the sixth step, bundle adjustment is executed on the stitched image generated by repeating the first step through the fifth step until the imaging of the parts of the sample is finished, and the stitched image is then divided into the plurality of subregions.
(9)
 The microscope image information processing method according to (8) above, wherein the bundle adjustment is executed by evaluating the following equation defined by the Levenberg-Marquardt method:
 \[ x_{i+1} = x_i - \left( J_i^{\mathrm{T}} J_i + \lambda I \right)^{-1} J_i^{\mathrm{T}} r_i \]
 where
 x_i: the components of the transformation matrices of the images at processing step i
 J_i: the Jacobian at processing step i (the superscript T denotes transposition)
 r_i: the errors between feature points at processing step i
 λ: a real number greater than or equal to zero, adjusted according to the magnitude of the error
 I: the identity matrix
(10)
 The microscope image information processing method according to (8) above, wherein the bundle adjustment is executed by evaluating the following equation defined by the Levenberg-Marquardt method:
 \[ x_{i+1} = x_i - \left( J_i^{\mathrm{T}} J_i + \lambda I \right)^{-1} J_i^{\mathrm{T}} r_i \]
 where
 x_i: the components of the transformation matrices of the images and the distortion correction parameters at processing step i
 J_i: the Jacobian at processing step i (the superscript T denotes transposition)
 r_i: the errors between feature points at processing step i
 λ: a real number greater than or equal to zero, adjusted according to the magnitude of the error
 I: the identity matrix
(11)
 The microscope image information processing method according to any one of (1) to (7) above, wherein the sample is a fluorescent sample.
(12)
 The microscope image information processing method according to any one of (1) to (8) above, wherein the computer system comprises a first computer device and a second computer device separate from the first computer device, the first step is executed on the first computer device, and the second step through the fifth step are executed on the second computer device.
(13)
 A microscope image information processing system that executes a first step of acquiring a captured image of a part of a sample observed using a microscope and storing it in a storage area; executes, upon detecting that the captured image has been stored in the storage area, a second step of calculating feature point information, which is information on the feature points of the captured image; executes a third step of performing, using the feature point information of the previously captured image and the feature point information of the current captured image, a matching process between the feature points of the previously captured image and the feature points of the current captured image; executes a fourth step of performing a stitching process based on the result of the matching process and the plurality of captured images stored in the storage area so far, to generate a stitched image; executes a fifth step of displaying the stitched image; and repeats the first step through the fifth step until the imaging of the parts of the sample is finished.
(14)
 A computer program that causes a computer system to execute the microscope image information processing method according to any one of (1) to (12) above.
1 ... Microscope image information processing system
10 ... Microscope
20 ... Camera
30 ... Computer device
31 ... Processor
32 ... Hard disk drive (HDD)
33 ... RAM
34 ... ROM
35 ... Removable memory
36 ... Input/output user interface
37 ... Communication interface
38 ... Display
40 ... Camera software
42 ... Stitching software
50, 50' ... Captured images of the slide glass specimen

Claims (14)

  1.  A microscope image information processing method executed by a computer system, comprising:
     a first step of acquiring a captured image of a part of a sample observed using a microscope and storing it in a storage area;
     a second step of, upon detecting that the captured image has been stored in the storage area, calculating feature point information, which is information on the feature points of the captured image;
     a third step of executing, using the feature point information of the previously captured image and the feature point information of the current captured image, a matching process between the feature points of the previously captured image and the feature points of the current captured image;
     a fourth step of executing a stitching process based on the result of the matching process and the plurality of captured images stored in the storage area so far, to generate a stitched image; and
     a fifth step of displaying the stitched image,
     wherein the first step through the fifth step are repeated until the imaging of the parts of the sample is finished.
  2.  The microscope image information processing method according to claim 1, wherein the stitched image generated in the fourth step is generated using the captured images reduced at a predetermined ratio.
  3.  The microscope image information processing method according to claim 1, wherein, in the third step, a global transformation matrix of the captured image stored in the first step is calculated with a specific image among the captured images stored in the storage area as the reference.
  4.  The microscope image information processing method according to claim 3, wherein the global transformation matrix is calculated by a Max Spanning Tree.
  5.  The microscope image information processing method according to claim 4, wherein the global transformation matrix is adjusted by minimizing, using the least squares method, the reprojection errors between all feature points subjected to the matching process in the third step.
  6.  The microscope image information processing method according to claim 1, further comprising, after the fifth step, a step of arranging, according to a global transformation matrix, a region containing the stitched image generated by repeating the first step through the fifth step until the imaging of the parts of the sample is finished, together with all of the captured images.
  7.  The microscope image information processing method according to claim 1, further comprising:
     a sixth step of dividing a region containing the stitched image generated by repeating the first step through the fifth step until the imaging of the parts of the sample is finished into a plurality of subregions; and
     a seventh step of, for each of the plurality of subregions,
     identifying one or more captured images that intersect the subregion being processed, and
     arranging the subregion being processed and the identified one or more captured images according to a global transformation matrix.
  8.  The microscope image information processing method according to claim 7, wherein, in the sixth step, bundle adjustment is executed on the stitched image generated by repeating the first step through the fifth step until the imaging of the parts of the sample is finished, and the stitched image is then divided into the plurality of subregions.
  9.  The microscope image information processing method according to claim 8, wherein the bundle adjustment is executed by evaluating the following equation defined by the Levenberg-Marquardt method:
     \[ x_{i+1} = x_i - \left( J_i^{\mathrm{T}} J_i + \lambda I \right)^{-1} J_i^{\mathrm{T}} r_i \]
     where
     x_i: the components of the transformation matrices of the images at processing step i
     J_i: the Jacobian at processing step i (the superscript T denotes transposition)
     r_i: the errors between feature points at processing step i
     λ: a real number greater than or equal to zero, adjusted according to the magnitude of the error
     I: the identity matrix
  10.  The microscope image information processing method according to claim 8, wherein the bundle adjustment is executed by evaluating the following equation defined by the Levenberg-Marquardt method:
     \[ x_{i+1} = x_i - \left( J_i^{\mathrm{T}} J_i + \lambda I \right)^{-1} J_i^{\mathrm{T}} r_i \]
     where
     x_i: the components of the transformation matrices of the images and the distortion correction parameters at processing step i
     J_i: the Jacobian at processing step i (the superscript T denotes transposition)
     r_i: the errors between feature points at processing step i
     λ: a real number greater than or equal to zero, adjusted according to the magnitude of the error
     I: the identity matrix
  11.  The microscope image information processing method according to claim 1, wherein the sample is a fluorescent sample.
  12.  The microscope image information processing method according to claim 1, wherein the computer system comprises a first computer device and a second computer device separate from the first computer device, the first step is executed on the first computer device, and the second step through the fifth step are executed on the second computer device.
  13.  A microscope image information processing system that executes a first step of acquiring a captured image of a part of a sample observed using a microscope and storing it in a storage area; executes, upon detecting that the captured image has been stored in the storage area, a second step of calculating feature point information, which is information on the feature points of the captured image; executes a third step of performing, using the feature point information of the previously captured image and the feature point information of the current captured image, a matching process between the feature points of the previously captured image and the feature points of the current captured image; executes a fourth step of performing a stitching process based on the result of the matching process and the plurality of captured images stored in the storage area so far, to generate a stitched image; executes a fifth step of displaying the stitched image; and repeats the first step through the fifth step until the imaging of the parts of the sample is finished.
  14.  A computer program that causes a computer system to execute the microscope image information processing method according to any one of claims 1 to 12.