WO2023210185A1 - Microscope image information processing method, microscope image information processing system, and computer program
- Publication number
- WO2023210185A1 (PCT/JP2023/009276)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/36—Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
Definitions
- The present invention relates to a method of processing microscope image information.
- A virtual slide is a technology for creating high-definition, large-area digital images of glass slide specimens and the like observed under a microscope. Since virtual slides are image data, they are easier to handle than the slide glass specimens themselves. Virtual slides can be used, for example, for remote pathology diagnosis and digital storage of pathology samples.
- Off-line stitching is a method in which multiple images of a slide glass specimen or the like are taken over a wide area, and the multiple images are then stitched offline to generate a single image.
- Real-time stitching is a method of stitching together multiple captured images to create a single image while observing a glass slide specimen.
- A slide scanner is a device that automatically scans a glass slide specimen.
- Alessandro Gherardi and Alessandro Bevilacqua, "Real-time whole slide mosaicing for non-automated microscopes in histopathology analysis", [online], March 30, 2013, National Library of Medicine, [Retrieved April 18, 2022], Internet <URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3678752/>
- The present invention has been made in view of the above problems.
- One aspect of the present invention is a microscope image information processing method executed by a computer system, in which a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area is performed, a second step of calculating feature point information, which is information regarding the feature points of the captured image, is executed, and the feature point information of the previously captured image is matched with the feature point information of the newly captured image.
- Another aspect of the present invention is a computer program that causes a computer system to execute the above-described microscope image information processing method.
- FIG. 3 is a diagram illustrating an overview of sequential joining processing in a microscope image information processing method according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating an overview of overall configuration processing in a microscope image information processing method according to an embodiment of the present invention.
- FIG. 7 is a diagram illustrating an example of the results of matching a plurality of feature points detected in the Nth image with a plurality of feature points detected in the immediately previous merged image.
- FIG. 7 is a diagram conceptually explaining a process of recalculating a global transformation matrix using a connected undirected graph.
- FIG. 7 is a diagram conceptually explaining a process of recalculating a global transformation matrix using a connected undirected graph.
- FIG. 7 is a diagram showing an example (partially) of a spliced image (without distortion correction) generated by sequential splicing processing.
- FIG. 7 is a diagram illustrating an example (with distortion correction) of a spliced image (part) generated by sequential splicing processing.
- FIG. 7 is a diagram illustrating an example of a flowchart of processing in step S120 (processing for the image file from the second time onwards).
- FIG. 3 is a diagram showing an example of a flowchart of overall configuration processing in a microscope image information processing method according to an embodiment of the present invention.
- FIG. 7 is a diagram illustrating an example of a spliced image (partial) that has been subjected to bundle adjustment (distortion correction) in the overall configuration process.
- FIG. 3 is a diagram illustrating an example in which a rectangular area is divided into multiple tiles.
- FIG. 7 is a diagram illustrating the effect of additional matching processing of region overlapping images in sequential joining processing.
- FIG. 7 is a diagram illustrating the effect of additional matching processing of region overlapping images in sequential joining processing.
- FIG. 7 is a diagram illustrating the effect of reducing misalignment between images by calculating Max Spanning Tree and simple bundle adjustment in sequential splicing processing.
- FIG. 7 is a diagram illustrating the effect of reducing misalignment between images by calculating Max Spanning Tree and simple bundle adjustment in sequential splicing processing.
- FIG. 7 is a diagram showing the effect of distortion correction in overall configuration processing.
- FIG. 7 is a diagram showing the effect of distortion correction in overall configuration processing.
- 3 is a diagram showing an example of a hardware configuration of a computer device 30.
- FIGS. 1 and 2 are diagrams illustrating an overview of the microscope image information processing method according to the present embodiment.
- the microscope image information processing method according to the present embodiment can be executed by a microscope image information processing system 1 that includes an existing general microscope 10, a camera 20, and a computer device 30.
- The microscope image information processing method according to this embodiment is roughly divided into two phases: sequential joining processing and overall configuration processing.
- The sequential joining process is a process that generates the latest joined image every time an image of a part of the slide glass specimen is taken, and finally generates a joined image of the entire slide glass specimen.
- The overall configuration process is executed after the photographing of the slide glass specimen is completed, and operates on the joined images produced by the sequential joining process.
- The overall configuration process mainly performs various processing to improve the quality of the spliced images and saves the processed images.
- In the sequential joining process, processing with a small waiting time is mainly executed, whereas in the overall configuration process, processing that requires more time or calculation cost is executed.
- FIG. 1 is a diagram illustrating an overview of sequential joining processing.
- the user uses the camera software 40 to sequentially photograph each part of the glass slide specimen with the camera 20.
- the photographed image 50 of each part of the slide glass specimen is written to the hard disk drive (HDD) 32 included in the computer device 30 by the camera software 40 each time the photograph is taken (ST1).
- the HDD 32 is monitored by the joining software 42 (ST2), and each time it is detected that an image file 50 has been written to the HDD 32, the written image file 50 is read from the HDD 32 (ST3).
- a joining process is executed (ST4).
- The photographed image is read out from the HDD 32 and the joining process is performed. This process is repeated, and each time an image is taken, the joined image 52 is updated. Then, each time the spliced image 52 is updated, the updated spliced image 52 is displayed as a preview to the user on a display device such as a display.
- Because the merged image 52 that is currently being generated can be previewed by the user at any time, the user can confirm at any time that an appropriate merged image 52 is being generated. As a result, it is possible to reduce the possibility that re-photographing work will be required after photographing is completed, as can happen in virtual slide generation using conventional off-line stitching.
- The sequential joining process can be executed as follows. That is, matching of feature points between the previously captured image stored in the HDD 32 and the newly captured image 50 stored in the HDD 32 is performed, and a transformation matrix between the two images is calculated. Furthermore, based on the result, feature point matching is executed with any other image already stored in the HDD 32 whose movement destination area overlaps, and a transformation matrix is calculated.
- The user's waiting time can be reduced by limiting the calculation targets to images with overlapping movement destination areas, rather than all stored captured images. Furthermore, by calculating the Max Spanning Tree and performing simple bundle adjustment as necessary, the accumulation of misalignment between images can be reduced.
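The patent discloses no source code; as an illustrative sketch only, the per-pair transformation matrix calculation can be expressed as below, assuming matched feature point pairs are already available (in practice a detector/matcher such as ORB with brute-force matching would produce them). The function name and the least-squares Kabsch/Procrustes approach are assumptions of this sketch, not the patent's specified method.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate a 2D rotation + translation mapping src points onto dst
    (least-squares Kabsch/Procrustes).  Returns a 3x3 homogeneous matrix."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # Covariance of the centered point sets
    h = (src - src_mean).T @ (dst - dst_mean)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T                      # 2x2 rotation
    if np.linalg.det(r) < 0:            # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dst_mean - r @ src_mean
    m = np.eye(3)
    m[:2, :2], m[:2, 2] = r, t
    return m
```

With exact rigid motion between the matched points, the matrix is recovered to floating-point precision; with noisy matches it is the least-squares best fit.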
- FIG. 2 is a diagram illustrating an overview of the overall configuration processing.
- In the overall configuration process, the joining software 42 executes processing to improve the quality of the images (ST5), and saves the joined image 52 after the processing to the HDD 32 (ST6).
- the overall configuration process may be executed as follows, for example.
- (1) Bundle adjustment (calculation of transformation matrices that minimize the deviation of all feature point pairs, and distortion correction for the lens of the camera 20)
- (2) Seam calculation (search for the best break between images)
- (3) Exposure correction (correction of exposure time and vignetting between images)
- (4) Blending processing (combining images so that the breaks between images are less noticeable)
- (5) Writing the overall configuration processed image to the HDD 32 of the computer device 30
- Part of the above processing is performed by dividing the entire image into small areas (tiles). In this case, (2) to (5) or (3) to (5) may be processed for each tile. This makes it possible to reduce the amount of memory and the calculation time used for a single processing pass.
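The tile division mentioned above can be sketched as follows (the function name and the edge-tile handling are illustrative assumptions; the patent does not specify a tiling scheme). Each yielded rectangle can then be processed independently with a bounded memory footprint.

```python
def iter_tiles(width, height, tile):
    """Yield (x, y, w, h) rectangles that exactly cover a width x height
    canvas, clipping the last row/column of tiles at the canvas edge."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield x, y, min(tile, width - x), min(tile, height - y)
```

For example, a 100 x 50 canvas with tile size 30 yields 8 tiles whose areas sum to the full canvas area.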
- The microscope image information processing method is similarly applicable to, for example, imaging cells cultured in a petri dish.
- FIG. 3 is a diagram showing an example of the result of matching a plurality of feature points detected in the Last (N) image 50' and a plurality of feature points detected in the New (N+1) image 50. Note that in FIG. 3, for convenience of explanation, only some of the matching results are shown by broken lines.
- a transformation matrix R N,N+1 is calculated between the Last (N) image 50' and the New (N+1) image 50 based on the matching result.
- The transformation matrix R N,N+1 is a matrix (affine transformation matrix) indicating the movement distance and rotation amount between the Last (N) image 50' and the New (N+1) image 50. That is, a relational expression of the following form holds true, with matrix components:
- a = cos θ
- b = sin θ
- d = −sin θ
- e = cos θ
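Written out in homogeneous form with the sign convention exactly as listed (a = cos θ, b = sin θ, d = −sin θ, e = cos θ), and assuming, as is standard for affine matrices, that the remaining components c and f are the translation amounts, a small numeric sketch is:

```python
import numpy as np

def affine_matrix(theta, tx, ty):
    """3x3 homogeneous affine matrix with the component convention of the
    text: a=cos t, b=sin t, d=-sin t, e=cos t; (c, f) = translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,    s, tx],
                     [-s,   c, ty],
                     [0.0, 0.0, 1.0]])

def apply(m, x, y):
    """Transform a point (x, y) by the homogeneous matrix m."""
    v = m @ np.array([x, y, 1.0])
    return v[0], v[1]
```

With this convention the product of two such matrices with zero translation is again a matrix of the same form with the angles summed, which is what allows the global transformation matrix to be built as a product of pairwise matrices.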
- the "global transformation matrix” is information representing how much the New (N+1) image 50 has moved and rotated from a specific image with respect to the specific image.
- If the global transformation matrix of the New (N+1) image 50, with the first captured image as the reference, is R N+1, the following relationship holds true:
- R N+1 = R 1,2 · R 2,3 · … · R N,N+1. That is, the global transformation matrix R N+1 of the New (N+1) image 50 can be calculated as the product of the transformation matrices between each immediately previous captured image and the new captured image, calculated in each joining process up to the Nth sequential joining.
- The image data of a specific reference image (here, the first captured image) and the image data of the New (N+1) image 50 are joined, and a spliced image 52 is generated.
- Using the global transformation matrix, the rough destination (hereinafter referred to as the "movement destination area") of the New (N+1) image 50 relative to the reference image (first image) can be found. Then, by comparing the movement destination area of the New (N+1) image with the movement destination areas of each previously processed image, past captured images (in addition to the Last (N) image 50') whose movement destination areas overlap with that of the New (N+1) image 50 are searched for.
- "Movement destination areas overlap" may be determined, for example, when the overlapping area between them exceeds "0", or a predetermined threshold value may be used: if the overlapping area between the destination areas exceeds the threshold value, it is determined that the destination areas overlap. If it is determined that the K-th (0 < K < N) joined image 50' overlaps the New (N+1) image 50 in the movement destination area, feature point matching is performed between the New (N+1) image 50 and the K-th joined image 50', and a transformation matrix R K,N+1 between the K-th spliced image 50' and the New (N+1) image 50 is calculated.
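A minimal sketch of the movement destination area comparison (representing each area as an axis-aligned bounding box and the threshold test as described above; the box representation and function names are assumptions of this sketch):

```python
def overlap_area(a, b):
    """a, b: movement-destination boxes as (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def overlapping_images(new_box, past_boxes, threshold=0):
    """Indices of past images whose destination area overlaps new_box
    by more than threshold (threshold=0 reproduces the 'exceeds 0' rule)."""
    return [k for k, box in enumerate(past_boxes)
            if overlap_area(new_box, box) > threshold]
```

Only the images returned here undergo the (expensive) additional feature point matching, which is what keeps the user's waiting time small.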
- In this way, additional matching processing for region-overlapping images is performed. Based on this information, a connected undirected graph with each image as a vertex is calculated.
- FIGS. 4A and 4B are diagrams conceptually explaining the recalculation process of the global transformation matrix using the graph.
- As shown in FIG. 4A, a path was determined that follows the first image, the second image, the third image, ..., the K-th image, ..., the N-th image, and the New (N+1) image 50 in order.
- the K-th image and the N+1-th image are linked, for example, as shown in FIG. 4B.
- A transformation matrix R K,N+1 between the K-th image and the (N+1)-th image is calculated based on the result of the matching process between the feature points of the K-th image and the feature points of the (N+1)-th image.
- the optimal path for tracing the entire image can be determined by calculating the Max Spanning Tree, for example.
- Max Spanning Tree is a spanning tree with the maximum sum of weights in a weighted connected undirected graph, and it is known that it can be calculated using algorithms such as Kruskal's method. In the present invention, the number of feature points matched between each image can be used as the weight of the graph, but another index may be used as the weight.
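The Max Spanning Tree computation and the subsequent center selection can be sketched as follows, using Kruskal's method on edges sorted by descending weight (the number of matched feature points), and taking the center as the vertex of minimum eccentricity, which matches the "shorter overall route" rationale in the text. The helper names and the union-find details are assumptions of this sketch.

```python
from collections import defaultdict, deque

def max_spanning_tree(n, edges):
    """edges: (weight, u, v) with weight = number of matched feature points.
    Kruskal's method on descending weights yields a maximum spanning tree."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep only tree edges
            parent[ru] = rv
            tree.append((u, v))
    return tree

def tree_center(n, tree):
    """Vertex whose longest path to any other vertex is minimal."""
    adj = defaultdict(list)
    for u, v in tree:
        adj[u].append(v)
        adj[v].append(u)
    def ecc(s):                             # eccentricity via BFS
        dist = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return max(dist.values())
    return min(range(n), key=ecc)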
- The image at the center of the tree is used as a new reference. For example, rather than the path that sequentially follows from the first image to the (N+1)-th image as shown in FIG. 4A, using an image near the center of the tree (the third image in FIG. 4B) as the reference makes the overall routes shorter (fewer matrices are multiplied, so errors are reduced); the third image can therefore be determined as the new reference image. In this case, the following relationship holds true.
- The overall configuration process is executed after all photographing of the slide glass specimen is completed, and improves the quality of the joined image. More specifically, the following processes are executed in the overall configuration process in this embodiment.
- (1) Bundle adjustment (calculation of transformation matrices that minimize the deviation of all feature point pairs, and distortion correction for the lens of the camera 20)
- (2) Seam calculation (search for the best break between images)
- (3) Exposure correction (correction of exposure time and vignetting between images)
- (4) Blending processing (combining images so that the breaks between images are less noticeable)
- (5) Writing the overall configuration processed image to the HDD 32 of the computer device 30
- Part of the above processing is performed by dividing the entire image into small areas (tiles). In this case, (2) to (5) or (3) to (5) may be processed for each tile. This makes it possible to reduce the amount of memory and the calculation time used for a single processing pass.
- (Bundle adjustment) As described above, matching of feature points is performed in the sequential joining process.
- The total error can be calculated by computing the amount of error between matched feature points (the reprojection error) for all feature points. By minimizing this overall error using the least squares method, the sequentially joined images can be made into an image with better quality as a whole.
- the Levenberg-Marquardt method can be used.
- the Levenberg-Marquardt method is one of the methods for solving nonlinear least squares problems.
- Equation (1): x_(i+1) = x_i − (J_i^T · J_i + λI)^(−1) · J_i^T · r_i
- x_i: components of the transformation matrices of the images at processing time i; J_i: Jacobian at processing time i (the superscript T indicates transposition); r_i: error between matched feature points at processing time i; λ: real number greater than or equal to zero that is adjusted according to the size of the error; I: identity matrix.
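Equation (1) can be exercised on a toy nonlinear least-squares problem as follows. The residual chosen here (finding x with x² = 2) and all names are purely illustrative; in the patent's setting, x would hold the transformation matrix components and r the reprojection errors of all matched feature points.

```python
import numpy as np

def lm_step(x, residual, jacobian, lam):
    """One Levenberg-Marquardt update: x' = x - (J^T J + lam*I)^-1 J^T r."""
    r = residual(x)
    j = jacobian(x)
    a = j.T @ j + lam * np.eye(len(x))   # damped normal equations
    return x - np.linalg.solve(a, j.T @ r)

# Toy problem: minimize r(x)^2 with r(x) = x^2 - 2, so x -> sqrt(2)
resid = lambda x: np.array([x[0] ** 2 - 2.0])
jac = lambda x: np.array([[2.0 * x[0]]])

x = np.array([1.0])
for _ in range(20):
    x = lm_step(x, resid, jac, lam=1e-3)
```

With λ → 0 the step reduces to Gauss-Newton; increasing λ shrinks the step toward gradient descent, which is how the damping is "adjusted according to the size of the error".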
- FIG. 5 is a diagram showing an example of a flowchart of sequential joining processing.
- First, input from the user of the microscope image information processing system 1 designating a folder to be monitored on the HDD 32 is received via an application such as the camera software 40 or the joining software 42 running on the computer device 30 (step S102).
- Next, the joining software 42 checks for updates to the designated folder at regular intervals (step S104). Unless the joining software 42 receives an input indicating the end of the sequential joining process (step S106: No), the sequential joining process is continued and the process proceeds to step S108.
- If the joining software 42 determines that the image file has not been updated in the folder confirmed in step S104 (step S108: No), the process returns to step S104. If the joining software 42 determines that an image file has been updated, that is, if a new image of the slide glass specimen has been taken and saved in the folder (step S108: Yes), it is determined whether this update is the first storage, that is, the first storage of a captured image of the slide glass specimen (step S110). If the image file update is not the first (step S110: No), processing for the second and subsequent saved image files is executed (step S120). This process will be described in detail later.
- If the update of the image file is the first saving of an image file (step S110: Yes), the saved image file is read from the HDD 32, the feature points and feature amounts in the image file are calculated, and the calculation results are stored in an array for storing information on the feature points and feature amounts of each image (step S112).
- If the distortion parameters, vignetting parameters, etc. are known in advance, correction may be performed when reading the image file. Distortion may occur in the captured image due to the optical system lens of the camera 20. It is generally known that the transformation from "true coordinates x, y, z without distortion" to "coordinates u, v after photographing with distortion" can be calculated, for example, by the following formula (see, for example, Zhengyou Zhang).
- k n , p n , c x , c y , f x , and f y are distortion parameters, and these values are estimated before performing distortion correction. More specifically, before performing the distortion correction, a grid pattern in which squares of the same size are lined up is photographed using the same camera 20 that photographs the slide glass specimen. Then, the values of the parameters k n , p n , c x , c y , f x , and f y can be estimated from the distortion of the photographed image.
- The inverse transformation can be performed using an algorithm that calculates an approximate solution, such as Newton's method. Further, only some parameters may be considered (for example, only the values of the parameters k 1 and p 1 are considered, and the values of the other parameters are regarded as "0"). Further, distortion correction may be performed using other distortion models.
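A sketch of the distortion model restricted to the k1 and p1 terms mentioned above, working on normalized image coordinates, together with a simple approximate inverse. Here a fixed-point iteration is used in place of Newton's method (both compute an approximate solution, as the text allows); the function names and the choice to set the remaining parameters to 0 are assumptions of this sketch.

```python
def distort(xn, yn, k1, p1):
    """Forward radial (k1) + tangential (p1) distortion on normalized
    coordinates, with all other distortion parameters taken as 0."""
    r2 = xn * xn + yn * yn
    xd = xn * (1 + k1 * r2) + 2 * p1 * xn * yn
    yd = yn * (1 + k1 * r2) + p1 * (r2 + 2 * yn * yn)
    return xd, yd

def undistort(xd, yd, k1, p1, iters=20):
    """Approximately invert distort() by fixed-point iteration:
    repeatedly correct (xn, yn) by the remaining forward-model error."""
    xn, yn = xd, yd                      # distorted coords as initial guess
    for _ in range(iters):
        dx, dy = distort(xn, yn, k1, p1)
        xn += xd - dx
        yn += yd - dy
    return xn, yn
```

For the small distortions typical of microscope optics the iteration contracts quickly, so a couple of dozen iterations recover the true coordinates to well below a pixel.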
- FIGS. 6A and 6B are diagrams showing an example of a joined image (partially) generated by the inventors of the present application using the microscope image information processing method according to the present embodiment.
- Figure 6A shows the merged image obtained by sequentially joining the captured images without performing distortion correction, and Figure 6B shows the merged image obtained by sequentially joining the captured images after performing distortion correction.
- the error for the stitched image in FIG. 6A was about 3.32e+05
- the error for the stitched image in FIG. 6B was about 1.57e+04.
- Errors were significantly reduced by applying distortion correction to each captured image and sequentially combining them.
- The error was calculated using the following procedure: (1) the coordinates of the feature points of each image in the reference coordinate system (the coordinate system of the reference image) are calculated using the global transformation matrices; (2) the deviation in the reference coordinate system between matched feature points is calculated. Ideally, matched feature points transferred to the reference coordinate system would overlap exactly, but due to distortion and the like they do not; this deviation becomes the error.
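The two-step error procedure above can be sketched as follows (the data layout — a list of matched pairs and a dict of global matrices — is an assumption of this sketch):

```python
import numpy as np

def total_reprojection_error(matches, globals_):
    """matches: list of (img_i, pt_i, img_j, pt_j) matched feature pairs,
    points as (x, y); globals_: dict image id -> 3x3 global matrix.
    Step (1): map both points into the reference frame.
    Step (2): accumulate the squared deviation between them."""
    err = 0.0
    for i, pi, j, pj in matches:
        a = globals_[i] @ np.array([*pi, 1.0])
        b = globals_[j] @ np.array([*pj, 1.0])
        err += float(np.sum((a[:2] - b[:2]) ** 2))
    return err
```

Perfectly aligned globals give an error of zero; a one-pixel residual shift between two images contributes 1.0 per matched pair.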
- the image file is reduced at a predetermined ratio and stored in an array for saving image data (step S114).
- By retaining the reduced image in the RAM 33, HDD 32, etc., memory consumption and storage capacity consumption can be reduced compared to retaining the original-size image.
- Note that full-size image data may be stored in the array without reducing the image file, in which case the spliced image may thereafter be generated and treated as a full-size image rather than a reduced image.
- the global transformation matrix (here, the unit matrix) of the first captured image reduced in step S114 is stored in an array for saving the global transformation matrix (step S116).
- a preview image is synthesized using the reduced image data saved in step S114 and the global transformation matrix (unit matrix) saved in step S116, and the preview image is displayed. Note that a preview of a reduced image of the first photographed image may be displayed (step S118).
- The above process continues until, in step S106, the joining software 42 receives an input indicating the end of the sequential joining process (that is, the completion of virtual slide generation) from the user via an input device such as a keyboard or mouse (step S106: Yes). When receiving an input from the user to end the sequential joining process, the sequential joining process is ended and the process proceeds to the overall configuration process. (Processing flow: Processing of step S120)
- FIG. 7 is a diagram illustrating an example of a flowchart of the process in step S120 (processing for the image file from the second time onward).
- Image n, which is a partial image of a slide glass specimen stored in the HDD 32 and captured the n-th time (n: an integer of 2 or more), is read, and its feature points and feature amounts are calculated (step S1202).
- correction may be performed when reading the image file.
- Feature point matching is performed between image n and image n−1, which is the image captured the (n−1)-th time, and a transformation matrix between the two images (hereinafter referred to as an "image pair matrix") is calculated (step S1204).
- If the process in step S1204 is successful (step S1206: Yes), the movement destination area of image n is calculated, and based on this calculation result, one or more images whose movement destination areas overlap (hereinafter also referred to as "paired images"; image K in the example of FIG. 4B corresponds to this) are selected (step S1208).
- Next, matching of feature points is performed between image n and all paired images selected in step S1208, and each image pair matrix (the transformation matrix R K,N+1 in the example of FIG. 4B) is calculated (step S1210).
- The matching of feature points executed in step S1210 is a process with high calculation cost, and if feature point matching were performed against all saved images, the user's waiting time would increase. Therefore, the user's waiting time can be reduced by performing feature point matching only on the images with overlapping movement destination areas selected in step S1208.
- the feature points and feature amounts of image n are stored in an array (step S1212).
- the image n is reduced by a predetermined ratio, and the image data of the reduced image is stored in an array (step S1214).
- the image pair matrix obtained in step S1210 is stored in an array (step S1216).
- a global transformation matrix is calculated from the image pair matrix stored in step S1216 and stored in an array (step S1218).
- The global transformation matrix of image n can be calculated as the product of the image pair matrix between image n and its paired image (the image selected in step S1208; image K in the example of FIG. 4B) and the global transformation matrix of that paired image.
- the global transformation matrix of all the saved images may be recalculated as necessary.
- "As needed” may include, for example, “every predetermined number of times,”"when input is received from the user,””when the reprojection error between all matched feature points reaches a certain value,” and the like.
- For the recalculation, a Max Spanning Tree may be calculated with the number of feature point matches as the weights. The image at the center of the tree (the third image in the example of FIG. 4B) is set as the new reference image (its global transformation matrix becomes the identity matrix), and the products of the image pair matrices are calculated in order along the edges of the tree. This allows the global transformation matrix of each image to be calculated.
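Composing the image pair matrices along the tree edges from the new reference can be sketched as follows. The direction convention assumed here — pair[(u, v)] maps coordinates of image v into the coordinates of image u, with the inverse used when an edge is walked the other way — is an assumption of this sketch, not specified by the patent.

```python
import numpy as np
from collections import deque

def globals_from_tree(ref, tree_edges, pair):
    """ref: id of the new reference image; tree_edges: undirected (u, v)
    pairs of the spanning tree; pair[(u, v)]: 3x3 matrix mapping image v's
    coordinates into image u's.  Walks the tree outward from ref, composing
    pair matrices into a global matrix per image."""
    adj = {}
    for u, v in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    g = {ref: np.eye(3)}                 # the reference gets the identity
    q = deque([ref])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v in g:
                continue
            m = pair[(u, v)] if (u, v) in pair else np.linalg.inv(pair[(v, u)])
            g[v] = g[u] @ m
            q.append(v)
    return g
```

Because the reference sits at the tree center, every image's global matrix is a product of only a few pair matrices, limiting accumulated error.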
- bundle adjustment may be performed a small number of times (for example, twice) to calculate the global transformation matrix. Bundle adjustment can be performed by minimizing the reprojection error between all matched feature points, for example by the Levenberg-Marquardt method.
- The calculated image pair matrices may be updated based on the recalculation results of the global transformation matrices of all images (for example, if the global transformation matrices of image A and image B are Ma and Mb respectively, the image pair matrix between image A and image B can be calculated as the product of matrix Ma and matrix Mb⁻¹). This allows the next recalculation to start from a state with fewer errors.
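The example in the text (the image pair matrix between images A and B as the product of Ma and Mb⁻¹) can be written directly; the function name is illustrative only.

```python
import numpy as np

def pair_matrix(ma, mb):
    """Image pair matrix between images A and B recovered from their
    global transformation matrices Ma and Mb, as the product Ma * Mb^-1."""
    return ma @ np.linalg.inv(mb)
```

As a sanity check, two images with identical global matrices yield the identity pair matrix, and two pure translations yield their relative translation.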
- a preview image is generated using the reduced image data saved in step S1214 and displayed on a display device such as a display (step S1220).
- Since the preview images up to image n−1 have already been combined, the preview image may be generated by joining the reduced image n to the preview image of image n−1. Furthermore, if the global transformation matrices of all saved images are recalculated in step S1218, the preview images generated so far may be discarded and regenerated. After this, the process transitions to step S104 in FIG. 5.
- If the process in step S1204 is not successful (step S1206: No), matching of feature points is performed between image n and all saved images, and image pair matrices are calculated (step S1222). As a result, if at least one image pair (a pair with one or more images with overlapping movement destination areas) exists (step S1224: Yes), the process transitions to step S1212. If no image pair exists (step S1224: No), the process transitions to step S104 in FIG. 5.
- The case where step S1204 is not successful (step S1206: No), that is, where the matching of feature points between image n and image n−1 fails, can occur, for example, when photographing of the slide glass specimen is restarted from a field of view significantly different from the one previously being photographed (the entire area of the slide glass specimen is not necessarily photographed by tracing it in a single stroke). Further, the process in step S1222 may be terminated once at least one image pair matrix has been calculated; in that case, the subsequent processing moves to step S1208.
- the arrays used in the flows of FIGS. 5 to 7 include an array in which image pair matrices are stored, an array in which reduced images are stored, an array in which global transformation matrices are stored, and an array in which the feature points and feature amounts of each image are stored.
- the data in these arrays is carried over to the overall configuration process described later. Furthermore, by saving all or part of this data as a file in a storage area such as the HDD 32 of the computer device 30, the overall configuration process can be suspended and performed at a later date, or executed on another computer device.
- FIG. 8 is a diagram illustrating an example of a flowchart of the overall configuration process.
- Bundle adjustment is performed on all images on which the sequential splicing process has been performed.
- the bundle adjustment may be performed by the Levenberg-Marquardt method (the above equation (1)) or the like (step S302).
- if the distortion parameters of the optical system lens of the camera 20 are not known in advance, then in addition to the global transformation matrices, distortion parameters common to all images are included as adjustment target parameters for bundle adjustment, and distortion correction of the feature point coordinates is included in the adjustment.
- the parameter x in the above equation (1) includes the components of the transformation matrix of all images and the common distortion parameter.
- equation (2) then takes a form obtained by adding the distortion parameters; these become the distortion parameters common to all images.
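A single Levenberg-Marquardt update of the kind referred to as equation (1) can be sketched as below. The parameter vector layout (transformation components of all images followed by the shared distortion parameters) follows the text; the residual and Jacobian callables are illustrative assumptions:

```python
import numpy as np

def lm_step(x: np.ndarray, residual_fn, jac_fn, lam: float) -> np.ndarray:
    """One Levenberg-Marquardt update:
        x_{i+1} = x_i - (J^T J + lam * I)^{-1} J^T r
    where x stacks the transformation-matrix components of all images
    followed by the distortion parameters common to all images."""
    r = residual_fn(x)   # stacked feature-point errors
    J = jac_fn(x)        # Jacobian of r with respect to x
    A = J.T @ J + lam * np.eye(x.size)
    return x - np.linalg.solve(A, J.T @ r)
```

With lam = 0 this reduces to a Gauss-Newton step; a larger lam (adjusted according to the size of the error, as the text notes) makes the step behave more like gradient descent.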
- distortion correction is performed on the reduced images (the reduced images stored in the array in steps S114 and S1214) (step S304).
- if the distortion parameters are known in advance prior to the sequential splicing process and distortion correction has already been performed for each captured image in step S112 in FIG. 5 and step S1202 in FIG. 7, the correction in this step is omitted.
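As a concrete illustration of applying estimated distortion parameters to coordinates, a one-parameter radial model is sketched below. This model is purely an assumed example; the text does not specify the distortion model of the camera 20:

```python
import numpy as np

def apply_radial_model(pts, k1: float, center) -> np.ndarray:
    """Remap points with a one-parameter radial distortion model:
        p' = center + (p - center) * (1 + k1 * r^2)
    where r is the distance of p from the distortion center.
    k1 would be one of the shared parameters estimated in step S302."""
    pts = np.asarray(pts, dtype=float)
    d = pts - center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2)
```

With k1 = 0 the mapping is the identity; a small positive or negative k1 pushes points outward or pulls them inward proportionally to their squared radius.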
- FIG. 9 is an example of a spliced image obtained by including a common distortion parameter in the parameter x in equation (1), performing distortion correction on the same captured images as in FIGS. 6A and 6B, and then splicing the images. The error for the stitched image in FIG. 6A was about 3.32e+05, whereas the error of the spliced image in FIG. 9 was approximately 1.55e+03. It can be seen that the error in the spliced image in FIG. 9 is reduced even compared to the spliced image shown in FIG. 6B, in which the captured images were subjected to distortion correction in advance and spliced sequentially.
- a seam between the photographed images is calculated using the reduced image (step S306).
- images can be joined at appropriate breaks.
- as a method for calculating the seam, existing methods such as a method using a Voronoi diagram, dynamic programming, or graph cut can be used. Note that seam calculation takes time, so in this embodiment it is performed in the overall configuration process rather than in the sequential joining process, so that the user's waiting time during sequential joining does not become long. Further, although seam calculation on full-size images is time-consuming, this embodiment uses reduced images as described above, which contributes to shortening the calculation time.
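Of the seam methods listed above, the Voronoi-diagram approach is the simplest to sketch: each pixel is assigned to whichever image's center is nearest, so the seam falls on the midline between image centers. The function below is an illustrative assumption, not the patent's implementation:

```python
import numpy as np

def voronoi_seam_labels(shape, centers) -> np.ndarray:
    """Assign each pixel to the image whose center (x, y) is nearest,
    producing a Voronoi-style label map whose region boundaries are
    the seams along which the images are cut and joined."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = [(xs - cx) ** 2 + (ys - cy) ** 2 for cx, cy in centers]
    return np.argmin(np.stack(dist2), axis=0)
```

Running this on the reduced images rather than the full-size ones, as the text describes, shrinks the label map and hence the seam computation by the square of the reduction factor.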
- FIG. 10 is a diagram showing an example of an image generated by joining all images. Note that FIG. 10 is shown in a simplified manner for convenience of explanation. Assuming that there are six images (images 1 to 6), the image generated by joining images 1 to 6 is image 60, which is configured to include all of images 1 to 6. In this step, the size (width w, height h) of the image 60 is calculated as follows: (1) using the global transformation matrices and distortion correction parameters obtained in step S302, together with the size of each of images 1 to 6, calculate the movement destination area (in the reference coordinate system) of each of images 1 to 6; (2) calculate a rectangular area that encloses all of the destination areas obtained in (1).
- FIG. 11 is a diagram showing an example in which the rectangular area 60 is divided into a plurality of (here, two for simplicity of explanation) tiles 65 and 66. Also, images that intersect with the tiles 65 and 66 are defined as a tile image set.
- the tile image set for tile 65 is images 1, 2, 3, 4, 5, and the tile image set for tile 66 is images 4, 5, 6. For example, when tile 65 is joined, an image larger than the size of tile 65 is generated, but the protruding portion is cut off. Further, the size and shape of the tile can be determined as follows.
- the memory required to process one tile can be estimated as "the number of images included in the tile image set × the size of one full-size image", so the tile size may be determined such that this required memory does not exceed, for example, 10% of the memory available to the software.
- the shape of the tile may be a specific predetermined shape (such as a square), or may be a shape similar to each of the images 1 to 6 (such as a rectangle with an aspect ratio of 3:4).
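The memory estimate above translates into a simple cap on how many images a tile may intersect; the helper below is an illustrative sketch (the budget and byte-per-pixel figures are assumptions):

```python
def max_images_per_tile(mem_budget_bytes: int, full_w: int, full_h: int,
                        bytes_per_pixel: int = 3) -> int:
    """Memory for one tile is roughly
        (#images in the tile image set) x (bytes of one full-size image),
    so the tile must be shaped so its image set stays within the budget
    (e.g. 10% of the memory available to the software, per the text)."""
    per_image = full_w * full_h * bytes_per_pixel
    return max(1, mem_budget_bytes // per_image)
```

For instance, with a 100 MB budget and 4000×3000 RGB captures (36 MB each), a tile should intersect at most two images, so the tiling would be made correspondingly fine.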
- in steps S310 to S318, full-size images are used to join the images. Since processing all full-size images at once would increase the amount of memory used and the calculation time, the entire image is divided into tiles and the processes of steps S310 to S318 are executed for each tile.
- in step S310, the images whose movement destination areas intersect the tile to be processed are identified.
- the tile image set of tile 65 is images 1, 2, 3, 4, and 5
- the tile image set of tile 66 is images 4, 5, and 6.
- each image file of the tile image set identified in step S310 is read from the HDD 32 (step S312).
- the distortion parameter either a distortion parameter known in advance or a distortion parameter estimated by including a common distortion parameter in the parameter x of the above equation (1) may be used.
- in step S314, exposure compensation is performed. Exposure compensation methods are described, for example, in M. Uyttendaele, A. Eden and R. Szeliski (2001), "Eliminating ghosting and exposure artifacts in image mosaics," Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. II (https://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Uyttendaele01.pdf).
- a blending process may be performed at the time of joining.
- Various known techniques such as linear blending and multiband blending may be used for the blending process. For example, Richard Szeliski (2007), “Image Alignment and Stitching: A tutorial", Foundations and Trends(R) in Computer Graphics and Vision: Vol. 2: No. 1, pp 1-104. (https://www. nowpublishers.com/article/Details/CGV-009) may be used.
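Of the techniques cited, linear blending is the simplest to sketch: overlapping pixels are combined as a weighted average. The function and weight scheme below are illustrative assumptions:

```python
import numpy as np

def linear_blend(img_a: np.ndarray, img_b: np.ndarray,
                 w_a: np.ndarray, w_b: np.ndarray) -> np.ndarray:
    """Linear (weighted) blending of two overlapping warped images.
    w_a / w_b are per-pixel weights (a common choice is distance to each
    image's border, so seams fade out smoothly); a pixel covered by only
    one image keeps that image's value because the other weight is 0."""
    wsum = w_a + w_b
    wsum = np.where(wsum == 0, 1.0, wsum)  # avoid 0/0 outside both images
    return (img_a * w_a + img_b * w_b) / wsum
```

Multiband blending refines this by performing the same weighted average separately per frequency band, which hides exposure steps without blurring detail.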
- the image (tile) synthesized in step S316 is saved as a temporary file in a storage area such as the HDD 32 of the computer device 30 (step S318).
- the above processing of steps S310 to S318 is executed for all tiles.
- the temporary file saved in step S318 for each tile is read from a storage area such as the HDD 32 of the computer device 30, and sequentially written into the final file (step S320).
- through step S320, the final virtual slide image that has undergone the overall composition processing is stored in a storage area such as the HDD 32 of the computer device 30.
- FIGS. 12A and 12B are diagrams illustrating the effect of additional matching processing of region overlapping images in sequential joining processing.
- in FIG. 12A, the horizontal axis represents the registration order of images and the vertical axis represents the number of images for which matching was attempted.
- in FIG. 12B, the horizontal axis represents the registration order of images and the vertical axis represents the time required for matching.
- the first method is a method in which when a new image (New (N+1) image) is detected, matching is attempted with respect to all images that have been joined up to that point.
- the second method is a method that performs additional matching processing for region-overlapping images.
- the third method is a method in which when a new image (New (N+1) image) is detected, matching is attempted only with respect to the immediately previous spliced image (Last (N) image).
- with the first method, the number of images for which matching is attempted increases linearly as more images are registered. For this reason, the time required for matching also increases, and the user's waiting time gradually grows.
- with the second method, the number of images for which matching is attempted varies but remains almost constant regardless of the order in which the images are registered. Therefore, the time required for matching is also almost constant.
- with the third method, the number of images for which matching is attempted is always one, and therefore the time required for matching is also constant.
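The second method's candidate selection (match only against images whose destination areas overlap the new image's predicted area) can be sketched with axis-aligned rectangles; the rectangle representation is an assumed simplification:

```python
def overlapping_candidates(new_rect, saved_rects) -> list:
    """Return indices of saved images whose destination rectangles
    overlap the new image's predicted rectangle, so matching is only
    attempted against those. Rectangles are (x, y, w, h) in the
    reference coordinate system."""
    x, y, w, h = new_rect
    hits = []
    for i, (sx, sy, sw, sh) in enumerate(saved_rects):
        # Standard axis-aligned overlap test.
        if x < sx + sw and sx < x + w and y < sy + sh and sy < y + h:
            hits.append(i)
    return hits
```

Because the candidate count depends only on local overlap and not on how many images have been registered so far, the matching time stays nearly constant, as the graphs show.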
- FIGS. 13A and 13B are diagrams illustrating the effect of reducing misalignment between images by calculating Max Spanning Tree and simple bundle adjustment in sequential splicing processing.
- FIG. 13A shows a graph for pancreatic cancer HE-stained slides (320 images taken)
- FIG. 13B shows a graph for liver HE-stained slides (59 images taken).
- RMS (root mean square) is a numerical value indicating the deviation per pair of feature points, in pixel units.
- FIGS. 14A and 14B are diagrams illustrating the effect of distortion correction in overall configuration processing. These particularly relate to the process of step S304 described above.
- FIG. 14A shows a graph for pancreatic cancer HE-stained slides (320 images taken)
- FIG. 14B shows a graph for liver HE-stained slides (59 images taken).
- RMS is a numerical value indicating the deviation per pair of feature points in pixel units. It can be seen that when distortion correction is performed, the deviation is reduced to 0.5 pixel or less. This is a level of deviation that is invisible to the naked eye, indicating that the effect of distortion correction is extremely large.
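The per-pair RMS deviation quoted in these comparisons can be computed as follows; the function name is an assumption:

```python
import numpy as np

def rms_deviation(pts_a, pts_b) -> float:
    """RMS deviation per pair of matched feature points, in pixels:
    sqrt(mean over pairs of squared Euclidean distance)."""
    d = np.asarray(pts_a, dtype=float) - np.asarray(pts_b, dtype=float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```

A value of 0.5 pixel or less, as reported after distortion correction, means matched feature points land within half a pixel of each other on average, below what the eye can detect in the stitched result.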
- a microscope image information processing system 1 that executes the microscope image information processing method according to the present embodiment may include a microscope 10, a camera 20, and a computer device 30.
- as the microscope 10, the camera 20, and the computer device 30, existing general-purpose devices can be used.
- FIG. 15 is a diagram showing an example of the hardware configuration of the computer device 30.
- the computer device 30 includes a processor 31, an HDD 32, a RAM (Random Access Memory) 33, a ROM (Read Only Memory) 34, a removable memory 35 such as a CD, DVD, USB memory, memory stick, or SD card, and input/output user interfaces (keyboard, etc.).
- the computer device 30 reads the computer programs stored in the HDD 32 (the camera software 40, the joining software 42, and various other computer programs) and the various data to be processed into a memory such as the RAM 33 and executes them, thereby realizing each process of the microscope image information processing method according to the present embodiment described above.
- although the computer device 30 is illustrated as one device in FIGS. 1 and 2, it may be configured by two or more computer devices.
- for example, a first computer device may store the captured images of each part of the slide glass specimen in the HDD 32, and a second computer device separate from the first may execute the sequential joining process and the subsequent processes using the plurality of captured images stored in the first computer device.
- for example, the photographed images of the slide glass specimen temporarily stored in the HDD 32 of the first computer device may be automatically transmitted to the second computer device by wired or wireless communication, and this may serve as a trigger for executing the subsequent sequential joining process and overall configuration process.
- alternatively, the first computer device may execute the sequential joining process and transmit the sequentially joined images for which that process has been completed, together with the data necessary for the overall configuration process, to the second computer device by wired or wireless communication.
- the second computer device may then execute the overall configuration process using the sent joined images and data necessary for the overall configuration process.
- the photographed images and joined images of each part of the slide glass specimen are described as being stored in the HDD 32, but they may be stored in other recording media.
- the joining process is performed sequentially while images of each part of the slide glass specimen are photographed, and images in the middle of joining are sequentially displayed to the user as previews.
- the user can proceed with photographing the slide glass specimen while viewing the preview display and confirming each time whether the joining process has been executed properly. This makes it possible to avoid re-imaging the slide glass specimen because joining ultimately failed, as can happen with conventional off-line stitching.
- an automatic motorized stage or a dedicated camera is not required, and the method can be implemented using an existing imaging system (microscope, camera, computer device). As a result, the installation cost of the system is low, and virtual slides can be generated from familiar photographic images using a microscope in daily use. Furthermore, although a conventional way to create virtual slides is a device called a slide scanner that automatically scans slide glass specimens, such devices are very expensive; the microscope image information processing method according to this embodiment requires no such dedicated device.
- the microscope image information processing method according to the present embodiment can also be applied to fluorescent samples because the sample images are photographed one by one.
- since the preview image is generated as a reduced image and a spliced image of the original size is generated only in the overall configuration process, the amount of memory used can be reduced, which contributes to improving processing speed.
- the scope of the present invention is not limited to the exemplary embodiments shown and described, but also includes all embodiments that provide effects equivalent to those aimed at by the present invention. Furthermore, the scope of the invention is not limited to the combinations of inventive features delineated by each claim, but may be defined by any desired combination of specific features among all of the disclosed features.
- a microscope image information processing method performed by a computer system, comprising: a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area; a second step of, when it is detected that the captured image is saved in the storage area, calculating feature point information, which is information regarding feature points of the captured image; a third step of performing a matching process between the feature points of the previously captured image and the feature points of the captured image, using the feature point information of the previously captured image and the feature point information of the captured image; a fourth step of executing a joining process based on the result of the matching process and the plurality of captured images stored in the storage area so far, to generate a joined image; and a fifth step of outputting the joined image for display.
- a global transformation matrix of the photographed image stored in the first step is calculated based on a specific image among the photographed images stored in the storage area.
- the microscope image information processing method according to (1) or (2) above.
- the microscope image information processing method according to any one of (1) to (4) above, further comprising:
- (8) the microscope image information processing method, wherein bundle adjustment is performed on the spliced image generated by repeating the first step to the fifth step until imaging of the portions of the sample is completed.
- x_i: components of the image transformation matrices and the distortion correction parameters at processing time i
- J_i: Jacobian at processing time i (the superscript T denotes transposition)
- r_i: error between feature points at processing time i
- λ: real number greater than or equal to zero, adjusted according to the size of the error
- I: identity matrix
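Given the definitions above, the equation (1) referenced throughout is presumably the standard Levenberg-Marquardt update (this reconstruction is an assumption based on the symbols defined here):

```latex
x_{i+1} = x_i - \left( J_i^{T} J_i + \lambda I \right)^{-1} J_i^{T} r_i
```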
- the computer system includes a first computer device and a second computer device that is separate from the first computer device, the first step is performed on the first computer device;
- the microscope image information processing method according to any one of (1) to (8) above, wherein the second step to the fifth step are executed in the second computer device.
- a computer program for causing a computer system to execute: a first step of acquiring a captured image of a portion of a sample observed using a microscope and storing it in a storage area; a second step of, when it is detected that the photographed image is stored in the storage area, calculating feature point information, which is information regarding the feature points of the photographed image; a third step of performing a matching process between the feature points of the previously captured image and the feature points of the captured image, using the feature point information of the previously captured image and the feature point information of the captured image; a fourth step of executing a joining process based on the result of the matching process and the plurality of captured images stored in the storage area so far, to generate a joined image; and a fifth step of outputting the joined image for display.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Optics & Photonics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Microscopes, Condenser (AREA)
Abstract
The invention relates to a microscope image information processing method that avoids re-imaging a sample and can be implemented with an existing computer system. In the microscope image information processing method, a captured image (50) of a portion of a sample observed using a microscope is acquired and stored in a storage area (32) (ST1); when it is detected that the captured image (50) is stored in the storage area (32), feature point information, which is information regarding a feature point in the captured image (50), is calculated; the feature point information of a previous captured image and the feature point information of the captured image (50) are used (ST2) to execute a matching process between the feature point of the previous captured image and the feature point of the captured image (50); a joining process is executed on the basis of the result of the matching process and a plurality of captured images (50) stored in the storage area (32) up to the current time to generate a joined image (52) (ST4); the joined image (52) is output for display; and the above processing is repeated until imaging of the portions of the sample is completed.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022072002 | 2022-04-26 | ||
JP2022-072002 | 2022-04-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023210185A1 true WO2023210185A1 (fr) | 2023-11-02 |
Family
ID=88518505
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/009276 WO2023210185A1 (fr) | 2022-04-26 | 2023-03-10 | Microscope image information processing method, microscope image information processing system, and computer program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023210185A1 (fr) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2008513164A (ja) * | 2004-09-22 | 2008-05-01 | Siemens Medical Solutions USA, Inc. | Image segmentation using isoperimetric trees |
- JP2008224626A (ja) * | 2007-03-15 | 2008-09-25 | Canon Inc. | Information processing apparatus, information processing method, and calibration jig |
- JP2013070212A (ja) * | 2011-09-22 | 2013-04-18 | Fuji Xerox Co., Ltd. | Image processing apparatus and image processing program |
- JP2014086925A (ja) * | 2012-10-24 | 2014-05-12 | Fuji Xerox Co., Ltd. | Information processing apparatus, information processing system, and program |
- JP2014529922A (ja) * | 2011-08-02 | 2014-11-13 | ViewsIQ Inc. | Apparatus and method for digital microscopy imaging |
- JP2015501471A (ja) * | 2011-10-10 | 2015-01-15 | Université Blaise Pascal - Clermont II | Method for calibrating a computer-based vision system onboard a vehicle |
- JP2015186016A (ja) * | 2014-03-24 | 2015-10-22 | JVC Kenwood Corp. | Image processing apparatus, image processing method, program, and camera |
- JP2016039390A (ja) * | 2014-08-05 | 2016-03-22 | Hitachi, Ltd. | Image generation method and apparatus |
- JP2016520894A (ja) * | 2013-03-18 | 2016-07-14 | General Electric Co. | Referencing in multi-acquisition slide images |
- JP2017102405A (ja) * | 2015-12-04 | 2017-06-08 | Olympus Corp. | Microscope, image stitching method, and program |
- JP2021043082A (ja) * | 2019-09-11 | 2021-03-18 | Toshiba Corp. | Position estimation device, mobile body control system, position estimation method, and program |
- 2023-03-10: WO PCT/JP2023/009276 patent WO2023210185A1 (status unknown)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- CN108932735B (zh) | Method for generating deep learning samples | |
- JP4937850B2 (ja) | Microscope system, VS image generation method therefor, and program | |
- US10809515B2 (en) | Observation method and specimen observation apparatus | |
- US6816187B1 (en) | Camera calibration apparatus and method, image processing apparatus and method, program providing medium, and camera | |
- Majumder et al. | Immersive teleconferencing: a new algorithm to generate seamless panoramic video imagery | |
- KR20160121798A (ko) | HMD calibration with direct geometric modeling | |
- JP2007323615A (ja) | Image processing apparatus and processing method therefor | |
- EP1533751B1 (fr) | Method for correcting distortions in a stack of multifocal images | |
- US11373366B2 (en) | Method for improving modeling speed of digital slide scanner | |
- JP2014029380A (ja) | Information processing apparatus, information processing method, program, and image display device | |
- CN113496503B (zh) | Point cloud data generation and real-time display method, apparatus, device, and medium | |
- WO2023210185A1 (fr) | Microscope image information processing method, microscope image information processing system, and computer program | |
- US11841494B2 (en) | Optical imaging device for a microscope | |
- CN102652321A (zh) | Image composition device and image composition program | |
- JP2017055916A (ja) | Image generation apparatus, image generation method, and program | |
- JP2007323616A (ja) | Image processing apparatus and processing method therefor | |
- JP2007322404A (ja) | Image processing apparatus and processing method therefor | |
- JP3377548B2 (ja) | Microscope image observation system | |
- US9721371B2 (en) | Systems and methods for stitching metallographic and stereoscopic images | |
- JP2006113001A (ja) | Three-dimensional measurement method and apparatus using photogrammetry | |
- CN112017123A (zh) | Imaging method for a virtual microscope | |
- TWI726105B (zh) | System, method and computer program product for automatically generating a wafer image to design coordinate mapping | |
- EP1394739A1 (fr) | Mosaicing of microscopic images of a specimen | |
- Averkin et al. | Using the method of depth reconstruction from focusing for microscope images | |
- CN118210138B (zh) | Slide glass detection and positioning system and method based on digital application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23795932 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2024517890 Country of ref document: JP Kind code of ref document: A |