WO2018082085A1 - Method for microscopic image acquisition based on sequential section (基于序列切片的显微镜图像采集方法) - Google Patents

Method for microscopic image acquisition based on sequential section (基于序列切片的显微镜图像采集方法) Download PDF

Info

Publication number
WO2018082085A1
WO2018082085A1 · PCT/CN2016/104852
Authority
WO
WIPO (PCT)
Prior art keywords
sample
image
microscope
navigation map
point
Prior art date
Application number
PCT/CN2016/104852
Other languages
English (en)
French (fr)
Inventor
李国庆
马宏图
韩华
魏利新
Original Assignee
中国科学院自动化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 filed Critical 中国科学院自动化研究所
Priority to PCT/CN2016/104852 priority Critical patent/WO2018082085A1/zh
Priority to US16/066,879 priority patent/US10699100B2/en
Publication of WO2018082085A1 publication Critical patent/WO2018082085A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/693Acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/754Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/248Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2504Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification

Definitions

  • Embodiments of the present invention relate to the field of imaging technologies, and in particular to a microscope image acquisition method based on sequential sections.
  • Ultra-thin serial-section three-dimensional imaging refers to cutting biological tissue into ultra-thin serial cross-sections or slices and imaging them one by one with an electron microscope; according to the sectioning technique, it is mainly divided into serial block-face imaging and serial-section imaging.
  • Serial sections can be collected automatically by an automatic collection device or by hand. Automatic collection technology is developing steadily and offers uniform section density and labor savings, but it suffers from unstable collection quality and low utilization of the substrate area. At present, most sections are collected by hand and placed on the carrier.
  • Manual collection has the advantages of speed, flexibility, and mature technique; however, it also produces densely distributed sections with inconsistent orientation angles, as shown in Figure 1. Automatic microscope image acquisition for serial sections is currently a difficult problem that remains to be solved.
  • Embodiments of the present invention provide a microscope image acquisition method based on sequential sections, to solve the technical problem of how to efficiently perform automatic image acquisition of sample regions of interest.
  • A microscope image acquisition method based on sequential sections, characterized in that the method comprises:
  • placing the sequential section samples in the microscope, establishing a coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigating any pixel point in the navigation map to the center of the microscope field of view;
  • performing continuous image acquisition of the sample points.
  • Identifying the sequential section samples in the navigation map using image processing and machine learning methods specifically includes:
  • segmenting the navigation map using a Mean-Shift algorithm to obtain foreground sample target regions and background regions;
  • y denotes the excitation function;
  • μ2 denotes the mean of the non-sample background region; σ2 denotes the variance of the non-sample background region;
  • μi denotes the mean within the sliding-window region; σi denotes the variance within the sliding-window region;
  • W denotes a weight;
  • detecting the navigation map using an SVM classifier to obtain the sample section positions;
  • the collecting of images of regions of the navigation map around the positive sample positions that contain no sections, to obtain negative training samples, specifically includes:
  • applying rotation, superimposed random noise, blurring, and gamma transformation to images of regions of the navigation map around the positive sample positions that contain no sections, to obtain the negative training samples.
  • Placing the sequential section samples in the microscope, establishing the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigating any pixel point in the navigation map to the center of the microscope field of view specifically includes:
  • establishing the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope according to a spatial-coordinate affine transformation method;
  • projecting any pixel point in the navigation map to the corresponding position in the microscope field of view, thereby navigating any pixel point of the navigation map to the center of the microscope field of view.
  • Locating the sequential section samples in the low-resolution field of view and binding the sample acquisition parameters specifically includes:
  • navigating the microscope to the sample points in sequence and, in combination with the sample database, binding the sample acquisition parameters.
  • Navigating the microscope to the sample points in sequence and, in combination with the sample database, binding the sample acquisition parameters includes:
  • binding the clear imaging parameters based on the sample database.
  • Determining the multi-scale multi-angle template-matching center point position specifically includes:
  • Obtaining clear imaging parameters near the current parameters specifically includes:
  • convolving the scanned image with a discretized Gaussian convolution kernel to obtain an output image;
  • computing an evaluation value of the image quality according to the following formula:
  • Value denotes the image quality evaluation value;
  • Imgth denotes the truncation result;
  • Sum(Imgth) denotes the sum of the pixels of the truncation result;
  • W(Imgth) denotes the width of the truncation result;
  • H(Imgth) denotes the height of the truncation result;
  • taking the image quality evaluation value as the relative sharpness value under the current scanning parameters, and using a local optimization algorithm to approach the local optimum step by step, obtaining the clear imaging parameters near the current parameters.
  • Combining the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points includes:
  • outputting continuous sample images based on the multi-region image acquisition of each sample point.
  • Embodiments of the present invention provide a microscope image acquisition method based on sequential sections.
  • The method comprises: acquiring sequential section samples and a navigation map thereof; identifying and marking the sequential section samples in the navigation map using image processing and machine learning methods; placing the sequential section samples in a microscope and navigating any pixel point in the navigation map to the center of the microscope field of view; locating the sequential section samples in a low-resolution field of view and binding the sample acquisition parameters; based on the binding of the sample acquisition parameters, recording the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template; and combining the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points.
  • The technical solution solves the technical problem of how to efficiently accomplish automatic image acquisition of sample regions of interest. It automatically identifies the pixel coordinates of every section sample in the complete navigation map, uses the complete navigation map to position the center of the microscope field of view at every sample section and to compute the rotation angle at which each sample was collected, and quickly judges the sharpness of microscope scan images.
  • It also realizes automated imaging of continuous regions of large-scale sequential samples under a microscope (electron or optical) equipped with an automatic control interface, and is suitable for three-dimensional microscope imaging of large-scale sequential (tomographic) samples after volumetric cutting; it is especially suitable for sequential samples collected on large-area planar carriers (semiconductor wafers), for example in ultra-thin serial-section three-dimensional imaging.
  • FIG. 1 is a schematic flow chart of a microscope image acquisition method based on sequential sections according to an embodiment of the invention.
  • An embodiment of the present invention provides a microscope image acquisition method based on sequential sections.
  • The method may include steps S100 to S150, wherein:
  • S100 Acquire sequential section samples and a navigation map thereof.
  • The navigation map can be captured in full by an optical camera, or obtained by capturing partial images with a high-resolution microscope and then stitching them into a complete navigation map.
  • S110 Identify and mark the sequential section samples in the navigation map using image processing and machine learning methods.
  • This step records the coordinate position of every sequential section sample in the navigation map, so that the exact position of each sequential section sample in the navigation map can be determined.
  • Specifically, this step may include:
  • S111 Segment the navigation map using the Mean-Shift algorithm to obtain foreground sample target regions and background regions.
  • An N×N sliding window can be slid over the entire navigation map to segment the image into foreground sample target regions and background regions.
  • N is in pixel units; the step size can be set flexibly according to the actual samples; preferably, the step size is N/9 (rounded).
  • S112 Compute the minimum bounding rectangles of the edge contours of the foreground sample target regions and the background regions.
  • The minimum bounding rectangle of a contour can be computed by first binarizing the region, then applying the Laplacian operator to the binarized image, and finally connecting the maxima in turn, thereby obtaining the minimum bounding rectangle.
  • S113 Compare the minimum bounding rectangles of the edge contours of the foreground sample target regions and the background regions with the minimum bounding rectangle of the sample template, and discard foreground sample target regions and background regions whose aspect ratio or area differs too much.
  • The error tolerance in this step is 10%.
  • S114 Determine the excitation function, whose symbols are defined as follows:
  • y denotes the excitation function;
  • Si denotes the Euclidean distance of the sliding-window region feature values,
  • Si = ‖(μi, σi) − (μ1, σ1)‖;
  • Bi denotes the Euclidean distance of the sample feature values,
  • Bi = ‖(μi, σi) − (μ2, σ2)‖;
  • μ1 denotes the mean of the sample target region;
  • σ1 denotes the variance of the sample target region;
  • μ2 denotes the mean of the non-sample background region;
  • σ2 denotes the variance of the non-sample background region;
  • μi denotes the mean within the sliding-window region;
  • σi denotes the variance within the sliding-window region;
  • W denotes a weight that adjusts the relative weight of the two terms of the sum; preferably, W is 500.
  • The means and variances (sample feature values) of the sample target region and the non-sample background region can be obtained by inputting the navigation map, selecting a sample template, and then sampling and computing statistics over the selected sample template sections.
  • S115 Threshold the result of the excitation function to determine the positions of positive samples in the navigation map.
  • The threshold segmentation can proceed as follows: set a threshold according to the result of the excitation function, then binarize at that threshold to implement the threshold segmentation.
  • The binarization threshold can be obtained by, for example, a k-means clustering method.
  • Positions above the threshold are selected as the positions of positive samples.
  • Image regions are cropped at the positions of the positive samples in the navigation map to form a training set.
  • S116 Crop the navigation map centered on the positions of the positive samples to obtain positive training samples, and collect images of regions of the navigation map around the positive sample positions that contain no sections to obtain negative training samples.
  • The step of obtaining the negative training samples may specifically include: applying rotation, superimposed random noise, blurring, and gamma transformation to images of regions around the positive sample positions that contain no sections, to obtain the negative training samples.
  • This step updates the training set obtained from the positions of the positive samples in the navigation map.
  • For example, N×N (pixel-unit) images can be cropped as positive training samples.
  • Rotation, superimposed random noise, blurring (e.g., Gaussian blur), and gamma transformation constitute a sample augmentation process intended to prevent overfitting during training.
  • S117 Normalize the positive and negative training samples, and extract HOG features.
  • This step can use the HOG features for SVM classifier training.
  • S118 Detect the navigation map using the SVM classifier to obtain the sample section positions.
  • The trained SVM classifier can automatically scan the complete navigation map with an N×N (pixel-unit) sliding window to obtain the sample section positions.
  • S119 Fuse the minimum bounding rectangles obtained in step S113 with the sample section positions to accomplish the identification and marking of the sequential section samples.
  • The fusion approach improves detection accuracy.
  • Step S119 may be followed by a further step S119a.
  • S120 Place the sequential section samples in the microscope, establish the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigate any pixel point in the navigation map to the center of the microscope field of view.
  • This step may further include:
  • S121 Select any three points in the navigation map that are not collinear, and record the pixel coordinate positions of the three points in the navigation map.
  • S122 Determine the positions of the microscope stage when each of the three points is at the imaging center.
  • This step can be carried out by observation under the microscope.
  • S123 Based on the results of steps S121 and S122, establish the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope according to the spatial-coordinate affine transformation method.
  • Suppose the three non-collinear points in the navigation map are P1, P2, and P3.
  • The pixel coordinates of the three points in the navigation map are written as the vector A(a1, a2, a3), where a1 = (w1, h1).
  • The positions of the microscope stage when the three points are at the imaging center are written as B(b1, b2, b3), where b1 = (x1, y1).
  • Step A Solve for the coordinate transformation matrix using the formula B = M · A,
  • where M denotes the coordinate transformation matrix.
  • Step B Solve the formula b = M · a,
  • where a denotes an arbitrary pixel point in the navigation map,
  • and b denotes the position in the microscope field of view corresponding to a.
  • Step C Using the displacement commands of the microscope's automatic control interface, complete click-to-position navigation of every sample section from the navigation map.
  • A database table can be established for each batch of samples, with the relevant parameters stored sequentially in collection order.
  • This step may further include:
  • S131 Set clear imaging parameters at high resolution, Gaussian-blur the clear low-resolution image to serve as the sample template, and establish a sample database based on the clear low-resolution imaging parameters and the sample template.
  • The first point of the continuous sequence of samples can be used as the basic initialization point; the clear imaging parameters at high resolution are tuned, set, and stored in the sample database table.
  • The high resolution can be set according to the imaging requirements, generally at a magnification of 2000× or more. "Clear" means that the edges in the image are sharp.
  • The low resolution can be a magnification of 70× to 150×.
  • The sample template can be regarded as a thumbnail of the sample at low resolution.
  • The method moves the sample to the same position at which the template was imaged.
  • The resolution of the sample template can be set to 512, 1024, or 2048.
  • The scan resolution can be set to twice the resolution of the sample template.
  • σ denotes the Gaussian distribution parameter;
  • x and y denote the horizontal and vertical pixel coordinates.
  • The (isotropic) Gaussian convolution kernel has a width of 27.
  • The microscope can be navigated to the corresponding sample points in the order of the sample database table, and parameter binding performed.
  • This step may further include:
  • S1321 Determine the multi-scale multi-angle template-matching center point position and store it in the sample database.
  • This step may further include:
  • S13211 Compute the image pyramids of the sample template and of the scanned image to be searched.
  • The scanned image to be searched is another image region of the navigation map that contains sections.
  • The embodiment of the present invention adopts an image pyramid strategy.
  • Each level is a downsampling of the level below it (preferably halving both width and height).
  • The embodiment of the present invention can build the image pyramids of the template image and of the scanned image to be searched by recursive downsampling.
  • The recursive downsampling can proceed according to the formulas given below, computing the pyramids of the template image and the scanned image to be searched until the smallest image side is below 64.
  • S13212 Set the rotation angle range, the coarse angle and its interval, and the fine angle and its interval.
  • This step initializes the rotation angle range Aglt, the coarse angle interval AglC, and the fine angle interval AglF.
  • Aglt denotes the absolute value of the preset angle range for matching against the section sample template; preferably, Aglt is 90°.
  • The coarse angle interval can partition an angular range of 180 degrees; preferably, AglC is 10°.
  • The fine angle interval is preferably 1°.
  • The partition of the rotation angle range is θi ± αj.
  • Template matching starts from the top of the image pyramid; the normalized cross-correlation operation yields the best matching position (best matching point) of each level.
  • This step may include:
  • SA2 Perform a normalized cross-correlation operation between the rotated search image and the sample template according to the NCC formula, obtaining a matching probability map:
  • x, y denote pixel coordinates in the search image;
  • R(x, y) denotes the value of the matrix R at the point (x, y), i.e., at the template image coordinate position;
  • x′, y′ denote pixel coordinates in the sample template;
  • ImgT denotes the sample template ImgMod;
  • ImgI denotes the rotated scanned image to be searched.
  • SA3 Select the matching point of the rotation-angle-θc image with the maximum value in the matching probability map as the best matching point.
  • θc denotes a coarse angle,
  • 1 ≤ c ≤ N, where N is a positive integer.
  • R(p(i, j)) denotes the R matrix of the matching point p(i, j);
  • R(xi, yj) denotes the R matrix of the pixel point (xi, yj); 0 ≤ i ≤ W(R), 0 ≤ j ≤ H(R), 1 ≤ c ≤ N.
  • W(R) denotes the width of the R matrix;
  • H(R) denotes the height of the R matrix, and N is a positive integer.
  • αF denotes a fine angle, 1 < F < M.
  • S13214 In a sub-region centered on the point of the current level corresponding to the best matching point of the previous level, perform a normalized cross-correlation operation between the rotated image and the sample template, and determine the best matching point at the bottom level of the image pyramid as the multi-scale multi-angle template-matching center point position.
  • If the boundary is exceeded, the sliding window is automatically translated back inside the boundary, and the normalized cross-correlation operation yields the best matching position of the current level.
  • The best matching position at the bottom level of the image pyramid is finally obtained.
  • S1322 Move the center of the microscope field of view to the position of the sequential section sample in the microscope coordinate system, quickly scan an image, and apply a three-point Newton bisection to the scanned image to approach the local optimum of image quality, obtaining clear imaging parameters near the current parameters.
  • This step may further include:
  • S13221 Convolve the scanned image with a discretized Gaussian convolution kernel to obtain an output image.
  • The scanned image can be any region of the navigation map that needs to be acquired, and it can be acquired in real time.
  • The convolution of the image under test can be computed with the discretized Gaussian convolution kernel of the formula below:
  • ln denotes the bidirectionally isotropic side length;
  • ln is odd; preferably, ln is 3;
  • x, y denote pixel coordinates;
  • σ denotes the Gaussian distribution parameter.
  • This step can perform the difference-of-Gaussians operation with the following operator:
  • ln is odd, preferably 3; dln is an even number greater than 0, preferably 64;
  • DoGln,dln(Img) denotes the difference-of-Gaussians operation; ABS(·) denotes taking the absolute value of the image;
  • Gln+dln(Img) denotes a Gaussian image with a different degree of blur;
  • Gln(Img) denotes the output image obtained in step S13221.
  • An nc × nc mean convolution kernel Enc (preferably, nc is 5) can be convolved with ImgDoG.
  • ImgDoG denotes the result of the difference-of-Gaussians operation.
  • Imgconv(i, j) = 0 if Imgconv(i, j) < Ththresh,
  • where Imgconv(i, j) denotes a pixel position of the convolution result (convolution image), and Ththresh denotes the truncation threshold.
  • The truncation threshold is obtained according to the following formula:
  • Ththresh = Sum(Imgconv)/num
  • Value denotes the image quality evaluation value;
  • Imgth denotes the truncation result;
  • Sum(Imgth) denotes the sum of the pixels of the truncation result;
  • W(Imgth) denotes the width of the truncation result;
  • H(Imgth) denotes the height of the truncation result.
  • S13226 Take the image quality evaluation value as the relative sharpness value under the current scanning parameters, and use a local optimization algorithm to approach the local optimum step by step, obtaining clear imaging parameters near the current parameters.
  • This step can use a Newton bisection for the stepwise approach.
  • Suppose the evaluation value of the image acquired with the microscope in state S is Value;
  • the relevant microscope parameters are adjusted so that the microscope is in state Si+.
  • The plus sign + denotes the direction of parameter adjustment.
  • The adjusted microscope parameter can be, for example, a focusing parameter.
  • The parameter depends on the particular microscope and can differ by model and manufacturer. After the adjustment, Valuei+ is obtained.
  • If Valuei+ > Value, Valuei+ is taken as the new Value, and the process repeats until the new measurement falls below Value; then (Value + Valuei+)/2 is taken as the new Value, until the interval between the new Value and the previous Value is smaller than a threshold θ, which is likewise determined by the specific microscope model.
  • The clear imaging parameters (sample acquisition parameters) are saved to the sample database table, and parameter binding is performed for all samples in the sample database.
  • S140 Based on the binding of the sample acquisition parameters, record the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template.
  • The high-resolution acquisition region can be initialized before this step.
  • During acquisition, the center of the sample template is found first, and the stage is then moved to the region of interest according to the above relative positional relationship.
  • S150 Combine the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points.
  • Combining the sample acquisition parameters, this step performs navigation and positioning of the sample points, fine positioning of the target regions, real-time adjustment of image sharpness during acquisition, and continuous acquisition of large-area sub-images.
  • This step may further include:
  • The sample acquisition parameters bound to the sample point can be read from the bound sample database table.
  • The step may include: based on the sample template, performing sample template matching with the multi-scale multi-angle normalized cross-correlation algorithm, and locating the position of the sample in the microscope coordinate system and the scan deflection angle between the sample and the template.
  • The scan deflection angle can be the scan deflection angle of the electron beam or of the light source.
  • The imaging parameters can be adjusted in the order of working focal length, bidirectional astigmatism, and working focal length.
  • The multi-region collection points are the regions of interest corresponding to each section.
  • An affine transformation matrix Mr with zero displacement and scan deflection angle θc0 + αF0 can be established, and each multi-region collection point pi is back-calculated through Mr to the actual microscope coordinate p′i.
  • θc0 denotes a coarse angle;
  • 1 < c0 < N, where N is a positive integer;
  • αF0 denotes a fine angle;
  • 1 < F0 < M, where M is a positive integer.
  • S155 Move the center of the microscope field of view to the actual microscope coordinate system and scan the preset high-resolution images point by point, completing multi-region image acquisition of the sample point.
  • Image quality assessment is needed after acquisition, and images of poor quality must be re-acquired.
  • S156 Output continuous sample images based on the multi-region image acquisition of each sample point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Microscopes, Condenser (AREA)
  • Image Processing (AREA)

Abstract

A microscope image acquisition method based on sequential sections, comprising: acquiring sequential section samples and a navigation map thereof (S100); identifying and marking the sequential section samples in the navigation map using image processing and machine learning methods (S110); placing the sequential section samples in a microscope and navigating any pixel point in the navigation map to the center of the microscope field of view (S120); locating the sequential section samples in a low-resolution field of view and binding the sample acquisition parameters (S130); based on the binding of the sample acquisition parameters, recording the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template (S140); and combining the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points (S150). This technical solution solves the technical problem of how to efficiently accomplish automatic image acquisition of sample regions of interest, and realizes automated imaging of continuous regions of large-scale sequential samples under a microscope equipped with an automatic control interface.

Description

Microscope image acquisition method based on sequential sections
Technical Field
Embodiments of the present invention relate to the field of imaging technologies, and in particular to a microscope image acquisition method based on sequential sections.
Background Art
Ultra-thin serial-section three-dimensional imaging refers to cutting biological tissue into ultra-thin serial cross-sections or slices and imaging them one by one with an electron microscope. According to the sectioning technique, it falls into two categories: (1) in serial block-face imaging, cutting and imaging are combined inside the electron microscope sample chamber; such methods achieve near-isotropic lateral and axial resolution, but the cutting is destructive, so once a defect is found in the data it cannot be repaired, which hinders large-scale reconstruction; (2) in serial-section imaging, the sample is cut into serial sections with an ultramicrotome, collected on substrates of suitable hydrophilicity, and then imaged section by section in the electron microscope.
Serial sections can be collected automatically by an automatic tape-collecting device or manually by hand. Automatic collection technology is developing steadily; it offers uniform section density and saves labor, but suffers from unstable collection quality and low utilization of the substrate area. At present, the vast majority of sections are collected by hand and placed on the carrier; manual collection is fast, flexible, and technically mature, but it also produces densely distributed sections with inconsistent orientation angles, as shown in Figure 1. Automatic microscope image acquisition for serial sections is currently a difficult problem that remains to be solved.
In view of this, the present invention is specially proposed.
Summary of the Invention
Embodiments of the present invention provide a microscope image acquisition method based on sequential sections, to solve the technical problem of how to efficiently accomplish automatic image acquisition of sample regions of interest.
To achieve the above object, the following technical solution is provided:
A microscope image acquisition method based on sequential sections, characterized in that the method comprises:
acquiring sequential section samples and a navigation map thereof;
identifying and marking the sequential section samples in the navigation map using image processing and machine learning methods;
placing the sequential section samples in the microscope, establishing a coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigating any pixel point in the navigation map to the center of the microscope field of view;
locating the sequential section samples in a low-resolution field of view, and binding the sample acquisition parameters;
based on the binding of the sample acquisition parameters, recording the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template;
combining the sample acquisition parameters and the relative positional relationship, performing continuous image acquisition of the sample points.
Preferably, the identifying and marking of the sequential section samples in the navigation map using image processing and machine learning methods specifically comprises:
segmenting the navigation map using the Mean-Shift algorithm to obtain foreground sample target regions and background regions;
computing the minimum bounding rectangle of the edge contours of the foreground sample target regions and the background regions;
comparing the minimum bounding rectangles of the edge contours of the foreground sample target regions and the background regions with the minimum bounding rectangle of the sample template, and discarding foreground sample target regions and background regions whose aspect ratio or area differs too much;
determining the following excitation function:
Figure PCTCN2016104852-appb-000001
where y denotes the excitation function; Si denotes the Euclidean distance of the sliding-window region feature values, Si = ‖(μi, σi) − (μ1, σ1)‖; Bi denotes the Euclidean distance of the sample feature values, Bi = ‖(μi, σi) − (μ2, σ2)‖; μ1 denotes the mean of the sample target region; σ1 denotes the variance of the sample target region; μ2 denotes the mean of the non-sample background region; σ2 denotes the variance of the non-sample background region; μi denotes the mean within the sliding-window region; σi denotes the variance within the sliding-window region; and W denotes a weight;
thresholding the result of the excitation function to determine the positions of positive samples in the navigation map;
cropping the navigation map centered on the positions of the positive samples in the navigation map to obtain positive training samples, and collecting images of regions of the navigation map around the positive sample positions that contain no sections to obtain negative training samples;
normalizing the positive training samples and the negative training samples, and extracting HOG features;
detecting the navigation map using an SVM classifier to obtain sample section positions;
fusing the minimum bounding rectangles obtained from the discarding operation with the sample section positions, to accomplish the identification and marking of the sequential section samples.
Preferably, the collecting of images of regions of the navigation map around the positive sample positions that contain no sections to obtain negative training samples specifically comprises:
applying rotation, superimposed random noise, blurring, and gamma transformation to the images of regions of the navigation map around the positive sample positions that contain no sections, to obtain the negative training samples.
Preferably, the placing of the sequential section samples in the microscope, establishing a coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigating any pixel point in the navigation map to the center of the microscope field of view specifically comprises:
selecting any three points in the navigation map that are not collinear, and recording the pixel coordinate positions of the three points in the navigation map;
determining the positions of the microscope stage when each of the three points is at the imaging center;
based on the pixel coordinate positions of the three points in the navigation map and the positions of the microscope stage when the three points are at the imaging center, establishing the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope according to a spatial-coordinate affine transformation method;
using the coordinate transformation matrix, projecting any pixel point in the navigation map to the corresponding position in the microscope field of view, thereby navigating any pixel point of the navigation map to the center of the microscope field of view.
Preferably, the locating of the sequential section samples in the low-resolution field of view and the binding of the sample acquisition parameters specifically comprises:
setting clear imaging parameters at high resolution, Gaussian-blurring the clear low-resolution image to serve as the sample template, and establishing a sample database based on the clear low-resolution imaging parameters and the sample template;
navigating the microscope to the sample points in sequence and, in combination with the sample database, binding the sample acquisition parameters.
Preferably, the navigating of the microscope to the sample points in sequence and, in combination with the sample database, binding the sample acquisition parameters specifically comprises:
determining the multi-scale multi-angle template-matching center point position and storing it in the sample database;
moving the center of the microscope field of view to the position of the sequential section sample in the microscope coordinate system, quickly scanning an image, and applying a three-point Newton bisection to the scanned image to approach the local optimum of image quality, obtaining clear imaging parameters near the current parameters;
binding the clear imaging parameters based on the sample database.
Preferably, the determining of the multi-scale multi-angle template-matching center point position specifically comprises:
computing image pyramids of the sample template and of the scanned image to be searched;
setting the rotation angle range, the coarse angle and its interval, and the fine angle and its interval;
rotating the scanned image to be searched by the coarse angle and the fine angle and, at the top level of the image pyramid, performing a normalized cross-correlation operation between the rotated image and the sample template to determine the best matching point;
in a sub-region centered on the point of the current level corresponding to the best matching point of the previous level, performing a normalized cross-correlation operation between the rotated image and the sample template, and determining the best matching point at the bottom level of the image pyramid as the multi-scale multi-angle template-matching center point position.
Preferably, the obtaining of clear imaging parameters near the current parameters specifically comprises:
convolving the scanned image with a discretized Gaussian convolution kernel to obtain an output image;
performing a difference-of-Gaussians operation on the output image;
convolving a mean convolution kernel with the result of the difference-of-Gaussians operation to obtain a convolution result;
applying threshold truncation to the convolution result to obtain a truncation result;
based on the truncation result, computing an image quality evaluation value according to the following formula:
Figure PCTCN2016104852-appb-000002
where Value denotes the image quality evaluation value; Imgth denotes the truncation result; Sum(Imgth) denotes the sum of the pixels of the truncation result; W(Imgth) denotes the width of the truncation result; and H(Imgth) denotes the height of the truncation result;
taking the image quality evaluation value as the relative sharpness value under the current scanning parameters, and using a local optimization algorithm to approach the local optimum step by step, obtaining the clear imaging parameters near the current parameters.
Preferably, the combining of the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points specifically comprises:
reading the sample acquisition parameters;
moving the center of the microscope field of view to the position where high-resolution imaging is required, and setting the scan deflection angle;
adjusting the imaging parameters;
back-calculating the multi-region collection points to the actual microscope coordinate system through an affine transformation matrix;
moving the center of the microscope field of view to the actual microscope coordinate system and scanning the preset high-resolution images point by point, completing multi-region image acquisition of the sample points;
outputting continuous sample images based on the multi-region image acquisition of each sample point.
Compared with the prior art, the above technical solution can have the following beneficial effects:
Embodiments of the present invention provide a microscope image acquisition method based on sequential sections. The method comprises: acquiring sequential section samples and a navigation map thereof; identifying and marking the sequential section samples in the navigation map using image processing and machine learning methods; placing the sequential section samples in a microscope and navigating any pixel point in the navigation map to the center of the microscope field of view; locating the sequential section samples in a low-resolution field of view and binding the sample acquisition parameters; based on the binding of the sample acquisition parameters, recording the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template; and combining the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points. This technical solution solves the technical problem of how to efficiently accomplish automatic image acquisition of sample regions of interest. It automatically identifies the pixel coordinates of every section sample in the complete navigation map, uses the complete navigation map to position the center of the microscope field of view at every sample section and to compute the rotation angle at which each sample was collected, and quickly judges the sharpness of microscope scan images. It also realizes automated imaging of continuous regions of large-scale sequential samples under a microscope (electron or optical) equipped with an automatic control interface, and is suitable for three-dimensional microscope imaging of large-scale sequential (tomographic) samples after volumetric cutting; it is particularly suitable for sequential samples collected on large-area planar carriers (semiconductor wafers), for example in ultra-thin serial-section three-dimensional imaging.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of a microscope image acquisition method based on sequential sections according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
For automatic microscope image acquisition of sequential sections, an embodiment of the present invention proposes a microscope image acquisition method based on sequential sections. As shown in FIG. 1, the method may include steps S100 to S150, wherein:
S100: Acquire sequential section samples and a navigation map thereof.
The navigation map can be captured in full by an optical camera, or obtained by capturing partial images with a high-resolution microscope and then stitching them into a complete navigation map.
S110: Identify and mark the sequential section samples in the navigation map using image processing and machine learning methods.
This step records the coordinate position of every sequential section sample in the navigation map, so that the exact position of every sequential section sample in the navigation map can be determined.
Specifically, this step may include:
S111: Segment the navigation map using the Mean-Shift algorithm to obtain foreground sample target regions and background regions.
For example, an N×N sliding window can be slid over the entire navigation map to segment the image into foreground sample target regions and background regions, where N is in pixel units; the step size can be set flexibly according to the actual samples and is preferably N/9 (rounded).
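The patent does not name a concrete Mean-Shift implementation, so the following is a minimal sketch assuming OpenCV's pyrMeanShiftFiltering followed by Otsu thresholding; the filter radii and the thresholding step are illustrative choices, not taken from the text.

```python
import cv2

def segment_navigation_map(nav_map_bgr, spatial_radius=21, color_radius=30):
    """Split a navigation map into foreground (sample) and background masks."""
    # Mean-Shift filtering flattens color inside homogeneous regions (8-bit, 3-channel input).
    filtered = cv2.pyrMeanShiftFiltering(nav_map_bgr, spatial_radius, color_radius)
    gray = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding then separates the flattened foreground from the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    foreground = cv2.bitwise_and(nav_map_bgr, nav_map_bgr, mask=mask)
    background = cv2.bitwise_and(nav_map_bgr, nav_map_bgr, mask=cv2.bitwise_not(mask))
    return mask, foreground, background
```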
S112: Compute the minimum bounding rectangles of the edge contours of the foreground sample target regions and the background regions.
For example, the minimum bounding rectangle of a contour can be computed by first binarizing the region, then applying the Laplacian operator to the binarized image, and finally connecting the maxima in turn, thereby obtaining the minimum bounding rectangle.
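The binarize / Laplacian / connect-maxima procedure above can be realized equivalently with OpenCV's contour tools; the sketch below, using findContours and minAreaRect, is an assumed implementation rather than the literal procedure of the text.

```python
import cv2

def min_bounding_rects(binary_mask):
    """Return the rotated minimum bounding rectangle of every region contour."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for cnt in contours:
        rect = cv2.minAreaRect(cnt)            # ((cx, cy), (w, h), angle)
        box = cv2.boxPoints(rect).astype(int)  # the 4 corner points
        rects.append((rect, box))
    return rects
```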
S113: Compare the minimum bounding rectangles of the edge contours of the foreground sample target regions and the background regions with the minimum bounding rectangle of the sample template, and discard foreground sample target regions and background regions whose aspect ratio or area differs too much.
The error tolerance in this step is 10%.
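A sketch of the comparison against the template's minimum bounding rectangle with the stated 10% tolerance; applying the tolerance as a relative bound on both aspect ratio and area is an interpretation of the text.

```python
def filter_by_template(rects, template_rect, tol=0.10):
    """Keep regions whose aspect ratio and area are within `tol` of the template's."""
    tw, th = sorted(template_rect[1])          # template (short, long) side lengths
    t_ratio, t_area = tw / th, tw * th
    kept = []
    for rect, box in rects:
        w, h = sorted(rect[1])
        if w == 0 or h == 0:
            continue
        ratio_ok = abs(w / h - t_ratio) / t_ratio <= tol
        area_ok = abs(w * h - t_area) / t_area <= tol
        if ratio_ok and area_ok:
            kept.append((rect, box))
    return kept
```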
S114: Determine the following excitation function:
Figure PCTCN2016104852-appb-000003
where y denotes the excitation function; Si denotes the Euclidean distance of the sliding-window region feature values, Si = ‖(μi, σi) − (μ1, σ1)‖; Bi denotes the Euclidean distance of the sample feature values, Bi = ‖(μi, σi) − (μ2, σ2)‖; μ1 denotes the mean of the sample target region; σ1 denotes the variance of the sample target region; μ2 denotes the mean of the non-sample background region; σ2 denotes the variance of the non-sample background region; μi denotes the mean within the sliding-window region; σi denotes the variance within the sliding-window region; and W is a weight that adjusts the relative weight of the two terms of the sum, preferably W = 500.
The means and variances (sample feature values) of the sample target region and the non-sample background region can be obtained by inputting the navigation map, selecting a sample template, and then sampling and computing statistics over the selected sample template sections.
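The excitation function itself is reproduced only as an image in the published text, so its exact form cannot be recovered here. The sketch below therefore assumes the combination y = Bi + W/(Si + eps), which grows when a window's (mean, variance) lies close to the sample statistics (small Si) and far from the background statistics (large Bi); only Si, Bi, and the weight W are defined by the patent, and the combination is a labeled guess.

```python
import numpy as np

def excitation_map(gray, mu1, var1, mu2, var2, N=64, step=7, W=500.0, eps=1e-6):
    """Slide an NxN window over the navigation map and score each position.

    (mu1, var1): statistics of the sample target region;
    (mu2, var2): statistics of the non-sample background region.
    The combination y = B_i + W / (S_i + eps) is an ASSUMPTION; the patent
    publishes the true formula only as an image.
    """
    H, Wd = gray.shape
    scores = []
    for top in range(0, H - N + 1, step):
        row = []
        for left in range(0, Wd - N + 1, step):
            win = gray[top:top + N, left:left + N].astype(np.float64)
            mu_i, var_i = win.mean(), win.var()
            S_i = np.hypot(mu_i - mu1, var_i - var1)  # distance to sample stats
            B_i = np.hypot(mu_i - mu2, var_i - var2)  # distance to background stats
            row.append(B_i + W / (S_i + eps))
        scores.append(row)
    return np.array(scores)
```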
S115: Threshold the result of the excitation function to determine the positions of positive samples in the navigation map.
As an example, the thresholding can proceed as follows: set a threshold according to the result of the excitation function, then binarize at that threshold to implement the threshold segmentation. The binarization threshold can be obtained, for example, by a k-means clustering method.
When determining the positions of positive samples in the navigation map, for example, positions (image regions) above the threshold can be selected as the positive sample positions based on the binarization result.
Through the positions of the positive samples in the navigation map, this step crops image regions that constitute a training set.
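One way to derive the binarization threshold by k-means, as the text suggests: cluster the excitation values into two groups and place the threshold midway between the cluster centers. Using cv2.kmeans and the midpoint rule are assumed implementation choices.

```python
import cv2
import numpy as np

def kmeans_threshold(values):
    """Binarization threshold from 2-cluster k-means over the excitation values."""
    data = np.asarray(values, dtype=np.float32).reshape(-1, 1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-4)
    _, _, centers = cv2.kmeans(data, 2, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return float(centers.mean())   # midpoint of the two cluster centers
```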
S116: Crop the navigation map centered on the positions of the positive samples to obtain positive training samples, and collect images of regions of the navigation map around the positive sample positions that contain no sections to obtain negative training samples.
Further, the step of obtaining the negative training samples may specifically include: applying rotation, superimposed random noise, blurring, and gamma transformation to images of regions around the positive sample positions that contain no sections, to obtain the negative training samples, as sketched below.
This step updates the training set obtained from the positions of the positive samples in the navigation map.
For example, N×N (pixel-unit) images can be cropped as positive training samples.
Rotation, superimposed random noise, blurring (e.g., Gaussian blur), and gamma transformation constitute a sample augmentation process intended to prevent overfitting during training.
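A sketch of the augmentation chain named above (rotation, superimposed random noise, blur, gamma transformation); all parameter values are illustrative.

```python
import cv2
import numpy as np

def augment(patch, angle=15.0, noise_sigma=8.0, blur_ksize=5, gamma=1.4):
    """Rotate, add Gaussian noise, blur, and gamma-transform one training patch."""
    h, w = patch.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    out = cv2.warpAffine(patch, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    noise = np.random.normal(0.0, noise_sigma, out.shape)
    out = np.clip(out.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    out = cv2.GaussianBlur(out, (blur_ksize, blur_ksize), 0)
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(out, lut)   # gamma transform via lookup table
```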
S117: Normalize the positive training samples and the negative training samples, and extract HOG features.
In practice, an Nv-dimensional HOG feature vector can be used; preferably, Nv = 8100.
This step can use the HOG features for SVM classifier training.
S118: Detect the navigation map using the SVM classifier to obtain the sample section positions.
For example, the trained SVM classifier can automatically scan the complete navigation map with an N×N (pixel-unit) sliding window to obtain the sample section positions. A sketch of this training and detection stage follows.
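A sketch of the training and detection stage with scikit-image and scikit-learn. The text names only HOG features, an SVM classifier, and an N×N sliding window (with an Nv-dimensional feature vector, preferably Nv = 8100); the HOG parameters, the 64-pixel window, and the linear kernel below are assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_vec(patch):
    # Block-normalized HOG descriptor of one grayscale patch.
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_classifier(pos_patches, neg_patches):
    X = np.array([hog_vec(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC(C=1.0).fit(X, y)

def detect(clf, nav_gray, N=64, step=7):
    """Slide an NxN window over the navigation map; return detected positions."""
    hits = []
    H, W = nav_gray.shape
    for top in range(0, H - N + 1, step):
        for left in range(0, W - N + 1, step):
            feat = hog_vec(nav_gray[top:top + N, left:left + N])
            if clf.predict([feat])[0] == 1:
                hits.append((left, top))
    return hits
```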
S119: Fuse the minimum bounding rectangles obtained in step S113 with the sample section positions to accomplish the identification and marking of the sequential section samples.
In practice there will be missed detections, but the sections missed by the two approaches will not coincide exactly; fusing the two results therefore improves detection accuracy.
Preferably, step S119 may be followed by step S119a.
S119a: Manually confirm the correctness of the fusion result, and output the automatic recognition and detection result for the navigation map.
S120: Place the sequential section samples in the microscope, establish the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigate any pixel point in the navigation map to the center of the microscope field of view.
Specifically, this step may further include:
S121: Select any three points in the navigation map that are not collinear, and record the pixel coordinate positions of the three points in the navigation map.
S122: Determine the positions of the microscope stage when each of the three points is at the imaging center.
This step can be carried out by observation under the microscope.
S123: Based on the results of steps S121 and S122, establish the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope according to the spatial-coordinate affine transformation method.
S124: Using the coordinate transformation matrix, project any pixel point in the navigation map to the corresponding position in the microscope field of view, thereby navigating any pixel point of the navigation map to the center of the microscope field of view.
As an example, suppose the three non-collinear points selected in the navigation map are P1, P2, and P3. Their pixel coordinate positions in the navigation map are written as the vector A(a1, a2, a3), where a1 = (w1, h1). The positions of the microscope stage when the three points are at the imaging center are written as B(b1, b2, b3), where b1 = (x1, y1).
Step A: Solve for the coordinate transformation matrix using the following formula:
B = M · A
where M denotes the coordinate transformation matrix.
Step B: Solve the following formula:
b = M · a
where a denotes an arbitrary pixel point in the navigation map, and b denotes the position in the microscope field of view corresponding to a.
Since B = M · A is a system of linear equations in three unknowns (per row of M), at least three coordinate points that are not collinear are required to solve for the coefficients of M.
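A sketch of Steps A and B: with three non-collinear point pairs, the 2×3 affine matrix M can be obtained directly with cv2.getAffineTransform (a least-squares solve generalizes to more points). The coordinate values are illustrative, not from the patent.

```python
import cv2
import numpy as np

# Pixel coordinates (w, h) of P1, P2, P3 in the navigation map (illustrative).
A = np.float32([[120, 340], [980, 260], [510, 1190]])
# Stage positions (x, y) when each point sits at the imaging center (illustrative).
B = np.float32([[1.52, 4.81], [12.07, 3.95], [6.33, 15.20]])

M = cv2.getAffineTransform(A, B)     # 2x3 coordinate transformation matrix (B = M.A)

def nav_to_stage(a, M):
    """Step B: map a navigation-map pixel a = (w, h) to the stage position b."""
    return M @ np.array([a[0], a[1], 1.0])

print(nav_to_stage((510, 1190), M))  # ~ [6.33, 15.20]
```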
Step C: Using the displacement commands of the microscope's automatic control interface, complete click-to-position navigation of every sample section from the navigation map.
S130: Locate the sequential section samples in the low-resolution field of view and bind the sample acquisition parameters.
To guarantee the order of automatic image acquisition, a database table can be established for each batch of samples, with the relevant parameters stored sequentially in collection order. A fast scan at lower resolution passes over all samples completely and in order, binds the sample acquisition parameters, and saves the acquired parameters, so that the position, azimuth angle, and clear-imaging parameters of every sample are bound.
Specifically, this step may further include:
S131: Set clear imaging parameters at high resolution, Gaussian-blur the clear low-resolution image to serve as the sample template, and establish a sample database based on the clear low-resolution imaging parameters and the sample template.
Automatic acquisition of serial sections is a long process that may last days or weeks. The first sample serves as the basic initialization sample, and a high-resolution image of it is recorded. Because the states of consecutive sections are continuous, the state changes little from section to section, but small differences remain; therefore, when acquiring subsequent samples, the electron-microscope state parameters of the previous sample need to be fine-tuned.
In a specific implementation, the first point of the continuous sequence of samples can be used as the basic initialization point: the clear imaging parameters at high resolution are tuned and set, and stored in the sample database table. Here, high resolution can be set according to the imaging requirements, generally at a magnification of 2000× or more. "Clear" means that the edges in the image are sharp.
A complete sample is chosen arbitrarily as the matching template; the clear low-resolution image is Gaussian-blurred and saved as the sample template (i.e., the section sample template) together with the imaging parameters in the sample database table.
Here, low resolution can mean a magnification of 70× to 150×. The sample template can be regarded as a thumbnail of the sample at low resolution.
Because the previously recorded sample positions were identified in the navigation map, they will still deviate in the electron microscope, so a sample will not sit exactly at the center of the microscope field of view. A rotated template-matching method is therefore used to move the sample to the position at which the template was imaged. The resolution of the sample template can be set to 512, 1024, or 2048, and the scan resolution can be set to twice the resolution of the sample template.
In a specific implementation, the following expression can be discretized to implement the Gaussian blur:
Figure PCTCN2016104852-appb-000004
where σ denotes the Gaussian distribution parameter and x, y denote the horizontal and vertical pixel coordinates.
Preferably, the (isotropic) Gaussian convolution kernel has a width of 27.
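A one-line sketch of building the sample template: Gaussian-blur the clear low-resolution image with the isotropic kernel of width 27 stated above. Passing sigma = 0 lets OpenCV derive the Gaussian parameter from the kernel size, which is an assumption.

```python
import cv2

def make_sample_template(low_res_clear_img):
    # 27x27 isotropic Gaussian kernel; sigma derived from the kernel size.
    return cv2.GaussianBlur(low_res_clear_img, (27, 27), 0)
```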
S132: Navigate the microscope to the sample points in sequence and, in combination with the sample database, bind the sample acquisition parameters.
This step can navigate the microscope to the corresponding sample points in the order of the sample database table and bind the parameters.
Specifically, this step may further include:
S1321: Determine the multi-scale multi-angle template-matching center point position and store it in the sample database.
Specifically, this step may further include:
S13211: Compute the image pyramids of the sample template and of the scanned image to be searched.
Here, the scanned image to be searched is another image region of the navigation map that contains sections.
Because microscope images are large, directly computing the multi-angle best matching point would be very time-consuming, so the embodiment of the present invention adopts an image pyramid strategy. In the image pyramid, each level is a downsampling of the level below it (preferably halving both width and height).
The embodiment of the present invention can build the image pyramids of the template image and of the scanned image to be searched by recursive downsampling.
In a specific implementation, the recursive downsampling can proceed according to the following formulas, computing the image pyramids of the template image and the scanned image to be searched until the smallest image side is below 64:
Imgn+1 = fdownsample(Imgn), n > 0
l = Min(m, n) | Max(W(ImgModm), H(ImgModm)) ≤ 64, Max(W(ImgObsn), H(ImgObsn)) ≤ 64
where fdownsample(Imgn) denotes the recursive downsampling; ImgMod denotes the sample template; ImgObs denotes the scanned image to be searched; W(ImgModm) and H(ImgModm) denote the width and height of the sample template; W(ImgObsn) and H(ImgObsn) denote the width and height of the scanned image to be searched; m and n denote the pyramid levels of the sample template and of the scanned image to be searched; and l denotes the final number of pyramid levels generated for the two images. n = 0 denotes the original image at the bottom of the pyramid.
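A sketch of the recursive downsampling: halve each side per level (cv2.pyrDown) until the stop criterion on size 64 is met. Using pyrDown's Gaussian pyramid as fdownsample is an assumption.

```python
import cv2

def build_pyramid(img, max_side=64):
    """Image pyramid: levels[0] is the original (n = 0), the last is the top."""
    levels = [img]
    while max(levels[-1].shape[:2]) > max_side:
        levels.append(cv2.pyrDown(levels[-1]))   # halves width and height
    return levels
```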
S13212: Set the rotation angle range, the coarse angle and its interval, and the fine angle and its interval.
This step initializes the rotation angle range Aglt, the coarse angle interval AglC, and the fine angle interval AglF. Here Aglt is the absolute value of the preset angle range for matching against the section sample template; preferably, Aglt is 90°. The coarse angle interval can partition an angular range of 180 degrees; preferably, AglC is 10°. Partitioning the coarse angle interval yields the coarse angles θi, i = 1, 2 … N, where N is a positive integer. The fine angle interval is preferably 1°. Partitioning the fine angle interval yields the fine angles αj, j = 1, 2 … M, where M is a positive integer. The partition of the rotation angle range is θi ± αj.
S13213: Rotate the scanned image to be searched by the coarse angles and the fine angles and, at the top level of the image pyramid, perform a normalized cross-correlation operation between the rotated image and the sample template to determine the best matching point.
Template matching in this step starts from the top level of the image pyramid; the normalized cross-correlation operation yields the best matching position (best matching point) of that level.
Specifically, this step may include:
SA1: Rotate the scanned image to be searched, ImgObs, by each coarse angle θi in turn, obtaining
Figure PCTCN2016104852-appb-000005
SA2: According to the formula below, perform a normalized cross-correlation operation between
Figure PCTCN2016104852-appb-000006
and the sample template, obtaining a matching probability map:
Figure PCTCN2016104852-appb-000007
where x, y denote pixel coordinates in the search image; R(x, y) denotes the value of the matrix R at the point (x, y), i.e., at the template image coordinate position; x′, y′ denote pixel coordinates in the sample template; ImgT denotes the sample template ImgMod; and ImgI denotes
Figure PCTCN2016104852-appb-000008
SA3: Select the matching point of the rotation-angle-θc image with the maximum value in the matching probability map as the best matching point, where θc denotes a coarse angle, 1 ≤ c ≤ N, and N is a positive integer.
The process of obtaining the best matching point is described in detail below with a preferred embodiment.
SB1: Determine the coarse matching position result according to the following formula:
Figure PCTCN2016104852-appb-000009
where
Figure PCTCN2016104852-appb-000010
denotes the best matching point in the topmost pyramid level; R(p(i, j)) denotes the R matrix of the matching point p(i, j); R(xi, yj) denotes the R matrix of the pixel point (xi, yj); 0 ≤ i ≤ W(R), 0 ≤ j ≤ H(R), 1 ≤ c ≤ N; W(R) denotes the width of the R matrix, H(R) its height, and N is a positive integer.
SB2: Rotate the scanned image to be searched by the angles θc ± αj, j = 1, 2 … M, in turn.
SB3: Perform a normalized cross-correlation operation between the rotated scanned image and the sample template.
SB4: Take the matching point of the rotation-angle-(θc + αF) image with the maximum value among the best single-image matching results as the best matching point
Figure PCTCN2016104852-appb-000011
where αF denotes a fine angle, 1 < F < M.
S13214: In a sub-region centered on the point of the current level corresponding to the best matching point of the previous level, perform a normalized cross-correlation operation between the rotated image and the sample template, and determine the best matching point at the bottom level of the image pyramid as the multi-scale multi-angle template-matching center point position.
During template matching, starting from the top level, the search at each level is carried out in a sub-region centered on the point of the current level corresponding to the best matching point of the previous level (it can be a rectangular sub-image of side length length = 2 × Max(W(ImgModlp), H(ImgModlp)), where W(ImgModlp) and H(ImgModlp) denote the width and height of the template image and l−1 ≥ lp ≥ 0; if the boundary is exceeded, the sliding window is automatically translated back inside the boundary). The normalized cross-correlation operation then yields the best matching position of the current level. Proceeding level by level finally yields the best matching position at the bottom of the image pyramid. Building the image pyramid accelerates template matching; the efficiency can improve by one to two orders of magnitude. A compact sketch of the per-level, multi-angle matching follows.
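This sketch rotates the search image by each candidate angle, runs normalized cross-correlation, and keeps the best match over all angles. cv2.matchTemplate with TM_CCORR_NORMED is an assumed concrete choice, since the exact NCC formula is published only as an image; the full method repeats this from the pyramid top down, restricting each level's search to the sub-region described above.

```python
import cv2
import numpy as np

def rotate(img, angle_deg):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))

def best_match(search_img, template, angles):
    """Best (score, angle, top-left location) over all rotation angles."""
    best = (-1.0, None, None)
    for ang in angles:
        R = cv2.matchTemplate(rotate(search_img, ang), template,
                              cv2.TM_CCORR_NORMED)   # matching probability map
        _, max_val, _, max_loc = cv2.minMaxLoc(R)
        if max_val > best[0]:
            best = (max_val, ang, max_loc)
    return best

# Coarse pass: theta_i every 10 degrees over +/-90 degrees; then a fine pass at
# 1-degree steps around the coarse winner (theta_c +/- alpha_j).
coarse_angles = np.arange(-90, 91, 10)
# score, theta_c, loc = best_match(search_level, template_level, coarse_angles)
# fine_angles = theta_c + np.arange(-9, 10, 1)
```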
S1322: Move the center of the microscope field of view to the position of the sequential section sample in the microscope coordinate system, quickly scan an image, and apply a three-point Newton bisection to the scanned image to approach the local optimum of image quality, obtaining clear imaging parameters near the current parameters.
Specifically, this step may further include:
S13221: Convolve the scanned image with a discretized Gaussian convolution kernel to obtain an output image.
The scanned image can be any region of the navigation map that needs to be acquired, and it can be acquired in real time.
The convolution of the image under test can be computed with the discretized Gaussian convolution kernel of the following formula:
Figure PCTCN2016104852-appb-000012
where ln denotes the bidirectionally isotropic side length; ln is odd, and preferably ln is 3; x, y denote pixel coordinates; and σ denotes the Gaussian distribution parameter.
S13222: Perform a difference-of-Gaussians operation on the output image.
Specifically, this step can perform the difference-of-Gaussians operation with the following operator:
DoGln,dln(Img) = ABS(Gln(Img) − Gln+dln(Img))
where ln is odd, preferably 3; dln is an even number greater than 0, preferably 64; DoGln,dln(Img) denotes the difference-of-Gaussians operation; ABS(·) denotes taking the absolute value of the image; Gln+dln(Img) denotes a Gaussian image with a different degree of blur; and Gln(Img) denotes the output image obtained in step S13221.
S13223: Convolve a mean convolution kernel with the result of the difference-of-Gaussians operation to obtain a convolution result.
For example, an nc × nc mean convolution kernel Enc (preferably, nc is 5) can be convolved with ImgDoG, where ImgDoG denotes the result of the difference-of-Gaussians operation.
S13224: Apply threshold truncation to the convolution result to obtain a truncation result.
As an example, truncation can proceed according to:
Imgconv(i, j) = 0 if Imgconv(i, j) < Ththresh
where Imgconv(i, j) denotes a pixel position of the convolution result (convolution image) and Ththresh denotes the truncation threshold.
The truncation threshold is obtained according to the following formula:
Ththresh = Sum(Imgconv)/num
where Sum(Imgconv) denotes the sum of the image pixel values; num denotes the count of non-zero points; Imgconv denotes the convolution result of the mean convolution kernel with the difference-of-Gaussians result; and Ththresh denotes the truncation threshold.
S13225: Based on the truncation result, compute the image quality evaluation value according to the following formula:
Figure PCTCN2016104852-appb-000013
where Value denotes the image quality evaluation value; Imgth denotes the truncation result; Sum(Imgth) denotes the sum of the pixels of the truncation result; W(Imgth) denotes the width of the truncation result; and H(Imgth) denotes its height.
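A sketch of the sharpness score assembled from steps S13221 to S13225: Gaussian smoothing with side ln, difference of Gaussians against a blurrier version (ln + dln), mean filtering, truncation below Ththresh = Sum/num, and averaging. The final formula is published only as an image, so the last line assumes Value = Sum(Imgth) / (W(Imgth) × H(Imgth)), i.e. the mean of the truncated image, consistent with the symbol definitions above.

```python
import cv2
import numpy as np

def sharpness_value(img_gray, ln=3, dln=64, nc=5):
    """DoG-based image quality score; ln odd and dln even keep ln + dln odd,
    as cv2.GaussianBlur requires odd kernel sizes."""
    img = img_gray.astype(np.float64)
    g_small = cv2.GaussianBlur(img, (ln, ln), 0)
    g_big = cv2.GaussianBlur(img, (ln + dln, ln + dln), 0)
    dog = np.abs(g_small - g_big)            # DoG_{ln,dln} = ABS(G_ln - G_{ln+dln})
    conv = cv2.blur(dog, (nc, nc))           # nc x nc mean convolution kernel E_nc
    num = np.count_nonzero(conv)
    if num == 0:
        return 0.0
    th = conv.sum() / num                    # Th_thresh = Sum(Img_conv) / num
    img_th = np.where(conv < th, 0.0, conv)  # threshold truncation
    # ASSUMED final formula: mean of the truncated image.
    return img_th.sum() / (img_th.shape[0] * img_th.shape[1])
```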
S13226: Take the image quality evaluation value as the relative sharpness value under the current scanning parameters, and use a local optimization algorithm to approach the local optimum step by step, obtaining clear imaging parameters near the current parameters.
This step can use a Newton bisection for the stepwise approach. For example, suppose the evaluation value of the image acquired with the microscope in state S is Value; the relevant microscope parameters are then adjusted so that the microscope is in state Si+, where the plus sign + denotes the direction of parameter adjustment. The adjusted parameter can be, for example, a focusing parameter; in a specific implementation it depends on the particular microscope and can differ by model and manufacturer. After the adjustment, Valuei+ is obtained. If Valuei+ > Value, Valuei+ is taken as the new Value, and the process repeats until the new measurement falls below Value; then (Value + Valuei+)/2 is taken as the new Value, until the interval between the new Value and the previous Value is smaller than a threshold θ, which is likewise determined by the specific microscope model. A hedged sketch follows.
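This sketch steps a focus-like parameter while the sharpness Value improves, then bisects the step until the change falls below the microscope-specific threshold θ. acquire_value is a hypothetical callback (set the parameter, grab a fast scan, return its Value); the control flow is one plausible reading of the text, not a verbatim transcription.

```python
def refine_focus(acquire_value, focus, step=1.0, theta=1e-3, max_iter=50):
    """Local optimization of a single imaging parameter by step-then-bisect."""
    value = acquire_value(focus)
    for _ in range(max_iter):
        candidate = acquire_value(focus + step)
        if candidate > value:            # Value_{i+} > Value: accept and continue
            focus, value = focus + step, candidate
        else:                            # overshot the optimum: bisect the step
            step /= 2.0
            if abs(step) < theta:        # interval below the threshold theta
                break
    return focus, value
```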
S1323: Bind the clear imaging parameters based on the multi-scale multi-angle template-matching center point position.
The clear imaging parameters (sample acquisition parameters) are saved to the sample database table, and parameter binding is performed for all samples in the sample database.
S140: Based on the binding of the sample acquisition parameters, record the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template.
The high-resolution acquisition region can be initialized before this step. During acquisition, the center of the sample template is found first, and the stage is then moved to the region of interest according to the above relative positional relationship.
S150: Combine the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points.
Combining the sample acquisition parameters, this step performs navigation and positioning of the sample points, fine positioning of the target regions, real-time adjustment of image sharpness during acquisition, and continuous acquisition of large-area sub-images.
Specifically, this step may further include:
S151: Read the sample acquisition parameters.
This step can read the sample acquisition parameters bound to the sample point from the bound sample database table.
S152: Move the center of the microscope field of view to the position where high-resolution imaging is required, and set the scan deflection angle.
Specifically, this step may include: based on the sample template, performing sample template matching with the multi-scale multi-angle normalized cross-correlation algorithm, and locating the position of the sample in the microscope coordinate system and the scan deflection angle between the sample and the template. The scan deflection angle can be the scan deflection angle of the electron beam or of the light source.
S153: Adjust the imaging parameters.
In practice, the imaging parameters can be adjusted in the order of working focal length, bidirectional astigmatism, and working focal length.
S154: Back-calculate the multi-region collection points to the actual microscope coordinate system through an affine transformation matrix.
Here the multi-region collection points are the regions of interest corresponding to each section.
In a specific implementation, an affine transformation matrix Mr with zero displacement and scan deflection angle θc0 + αF0 can be established, and each multi-region collection point pi is back-calculated through Mr to the actual microscope coordinate p′i, where θc0 denotes a coarse angle, 1 < c0 < N, N a positive integer; and αF0 denotes a fine angle, 1 < F0 < M, M a positive integer.
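A sketch of step S154 under the assumption that, with zero displacement, Mr reduces to a pure rotation by θc0 + αF0 about the origin; each region-of-interest point pi is mapped through the inverse of Mr to the actual microscope coordinate p′i.

```python
import numpy as np

def back_project(points, angle_deg):
    """Inverse-transform multi-region collection points to microscope coordinates."""
    t = np.deg2rad(angle_deg)                   # angle_deg = theta_c0 + alpha_F0
    M_r = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])   # zero-displacement rotation
    M_inv = np.linalg.inv(M_r)
    return [M_inv @ np.asarray(p, dtype=float) for p in points]
```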
S155: Move the center of the microscope field of view to the actual microscope coordinate system and scan the preset high-resolution images point by point, completing multi-region image acquisition of the sample point.
For example: move the center of the microscope field of view to p′i, set the scan deflection angle to θc0 + αF0, and scan the preset high-resolution images point by point.
To acquire high-quality images, image quality assessment is needed after acquisition, and images of excessively poor quality must be re-acquired.
S156: Output continuous sample images based on the multi-region image acquisition of each sample point.
Although the steps in the above embodiments are described in the above order, those skilled in the art will understand that, to achieve the effects of the embodiments, different steps need not be executed in this order; they may be executed simultaneously (in parallel) or in reverse order, and such simple variations all fall within the protection scope of the present invention.
It should also be noted that the language used in this specification has been chosen mainly for readability and teaching purposes, not to delimit or define the subject matter of the present invention.
The present invention is not limited to the above embodiments; any variation, improvement, or substitution conceivable by those of ordinary skill in the art without departing from the essence of the present invention falls within the protection scope of the present invention.

Claims (9)

  1. A microscope image acquisition method based on sequential sections, characterized in that the method comprises:
    acquiring sequential section samples and a navigation map thereof;
    identifying and marking the sequential section samples in the navigation map using image processing and machine learning methods;
    placing the sequential section samples in the microscope, establishing a coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigating any pixel point in the navigation map to the center of the microscope field of view;
    locating the sequential section samples in a low-resolution field of view, and binding the sample acquisition parameters;
    based on the binding of the sample acquisition parameters, recording the relative positional relationship between the center point of the high-resolution acquisition region and the matched center point of the sample template;
    combining the sample acquisition parameters and the relative positional relationship, performing continuous image acquisition of the sample points.
  2. The method according to claim 1, characterized in that the identifying and marking of the sequential section samples in the navigation map using image processing and machine learning methods specifically comprises:
    segmenting the navigation map using the Mean-Shift algorithm to obtain foreground sample target regions and background regions;
    computing the minimum bounding rectangle of the edge contours of the foreground sample target regions and the background regions;
    comparing the minimum bounding rectangles of the edge contours of the foreground sample target regions and the background regions with the minimum bounding rectangle of the sample template, and discarding foreground sample target regions and background regions whose aspect ratio or area differs too much;
    determining the following excitation function:
    Figure PCTCN2016104852-appb-100001
    wherein y denotes the excitation function; Si denotes the Euclidean distance of the sliding-window region feature values, Si = ||(μi, σi) − (μ1, σ1)||; Bi denotes the Euclidean distance of the sample feature values, Bi = ||(μi, σi) − (μ2, σ2)||; μ1 denotes the mean of the sample target region; σ1 denotes the variance of the sample target region; μ2 denotes the mean of the non-sample background region; σ2 denotes the variance of the non-sample background region; μi denotes the mean within the sliding-window region; σi denotes the variance within the sliding-window region; and W denotes a weight;
    thresholding the result of the excitation function to determine the positions of positive samples in the navigation map;
    cropping the navigation map centered on the positions of the positive samples in the navigation map to obtain positive training samples, and collecting images of regions of the navigation map around the positive sample positions that contain no sections to obtain negative training samples;
    normalizing the positive training samples and the negative training samples, and extracting HOG features;
    detecting the navigation map using an SVM classifier to obtain sample section positions;
    fusing the minimum bounding rectangles obtained from the discarding operation with the sample section positions, to accomplish the identification and marking of the sequential section samples.
  3. The method according to claim 2, characterized in that the collecting of images of regions of the navigation map around the positive sample positions that contain no sections to obtain negative training samples specifically comprises:
    applying rotation, superimposed random noise, blurring, and gamma transformation to the images of regions of the navigation map around the positive sample positions that contain no sections, to obtain the negative training samples.
  4. The method according to claim 1, characterized in that the placing of the sequential section samples in the microscope, establishing a coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope, and navigating any pixel point in the navigation map to the center of the microscope field of view specifically comprises:
    selecting any three points in the navigation map that are not collinear, and recording the pixel coordinate positions of the three points in the navigation map;
    determining the positions of the microscope stage when each of the three points is at the imaging center;
    based on the pixel coordinate positions of the three points in the navigation map and the positions of the microscope stage when the three points are at the imaging center, establishing the coordinate transformation matrix between the navigation map and the actual sampling space coordinates of the microscope according to a spatial-coordinate affine transformation method;
    using the coordinate transformation matrix, projecting any pixel point in the navigation map to the corresponding position in the microscope field of view, thereby navigating any pixel point of the navigation map to the center of the microscope field of view.
  5. The method according to claim 1, characterized in that the locating of the sequential section samples in the low-resolution field of view and the binding of the sample acquisition parameters specifically comprises:
    setting clear imaging parameters at high resolution, Gaussian-blurring the clear low-resolution image to serve as the sample template, and establishing a sample database based on the clear low-resolution imaging parameters and the sample template;
    navigating the microscope to the sample points in sequence and, in combination with the sample database, binding the sample acquisition parameters.
  6. The method according to claim 5, characterized in that the navigating of the microscope to the sample points in sequence and, in combination with the sample database, binding the sample acquisition parameters specifically comprises:
    determining the multi-scale multi-angle template-matching center point position and storing it in the sample database;
    moving the center of the microscope field of view to the position of the sequential section sample in the microscope coordinate system, quickly scanning an image, and applying a three-point Newton bisection to the scanned image to approach the local optimum of image quality, obtaining clear imaging parameters near the current parameters;
    binding the clear imaging parameters based on the sample database.
  7. The method according to claim 6, characterized in that the determining of the multi-scale multi-angle template-matching center point position specifically comprises:
    computing image pyramids of the sample template and of the scanned image to be searched;
    setting the rotation angle range, the coarse angle and its interval, and the fine angle and its interval;
    rotating the scanned image to be searched by the coarse angle and the fine angle and, at the top level of the image pyramid, performing a normalized cross-correlation operation between the rotated image and the sample template to determine the best matching point;
    in a sub-region centered on the point of the current level corresponding to the best matching point of the previous level, performing a normalized cross-correlation operation between the rotated image and the sample template, and determining the best matching point at the bottom level of the image pyramid as the multi-scale multi-angle template-matching center point position.
  8. The method according to claim 6, characterized in that the obtaining of clear imaging parameters near the current parameters specifically comprises:
    convolving the scanned image with a discretized Gaussian convolution kernel to obtain an output image;
    performing a difference-of-Gaussians operation on the output image;
    convolving a mean convolution kernel with the result of the difference-of-Gaussians operation to obtain a convolution result;
    applying threshold truncation to the convolution result to obtain a truncation result;
    based on the truncation result, computing an image quality evaluation value according to the following formula:
    Figure PCTCN2016104852-appb-100002
    wherein Value denotes the image quality evaluation value; Imgth denotes the truncation result; Sum(Imgth) denotes the sum of the pixels of the truncation result; W(Imgth) denotes the width of the truncation result; and H(Imgth) denotes the height of the truncation result;
    taking the image quality evaluation value as the relative sharpness value under the current scanning parameters, and using a local optimization algorithm to approach the local optimum step by step, obtaining the clear imaging parameters near the current parameters.
  9. The method according to claim 1, characterized in that the combining of the sample acquisition parameters and the relative positional relationship to perform continuous image acquisition of the sample points specifically comprises:
    reading the sample acquisition parameters;
    moving the center of the microscope field of view to the position where high-resolution imaging is required, and setting the scan deflection angle;
    adjusting the imaging parameters;
    back-calculating the multi-region collection points to the actual microscope coordinate system through an affine transformation matrix;
    moving the center of the microscope field of view to the actual microscope coordinate system and scanning the preset high-resolution images point by point, completing multi-region image acquisition of the sample points;
    outputting continuous sample images based on the multi-region image acquisition of each sample point.
PCT/CN2016/104852 2016-11-07 2016-11-07 Microscope image acquisition method based on sequential sections WO2018082085A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/104852 WO2018082085A1 (zh) 2016-11-07 2016-11-07 Microscope image acquisition method based on sequential sections
US16/066,879 US10699100B2 (en) 2016-11-07 2016-11-07 Method for microscopic image acquisition based on sequential section

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/104852 WO2018082085A1 (zh) 2016-11-07 2016-11-07 Microscope image acquisition method based on sequential sections

Publications (1)

Publication Number Publication Date
WO2018082085A1 true WO2018082085A1 (zh) 2018-05-11

Family

ID=62075385

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/104852 WO2018082085A1 (zh) 2016-11-07 2016-11-07 Microscope image acquisition method based on sequential sections

Country Status (2)

Country Link
US (1) US10699100B2 (zh)
WO (1) WO2018082085A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619318A (zh) * 2019-09-27 2019-12-27 腾讯科技(深圳)有限公司 Artificial-intelligence-based image processing method, microscope, system, and medium
CN112435218A (zh) * 2020-11-04 2021-03-02 南京火眼锐视信息科技有限公司 Method and device for evaluating and screening the degree of deformation of document images

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018001099A1 (zh) * 2016-06-30 2018-01-04 上海联影医疗科技有限公司 Blood vessel extraction method and system
KR102237696B1 (ko) * 2019-04-29 2021-04-08 계명대학교 산학협력단 Artificial-intelligence computer-aided diagnosis system for automatic analysis of cervical cancer cell images and control method thereof
CN110427993B (zh) * 2019-07-24 2023-04-21 中南大学 High-speed train navigation blind-area positioning method based on meteorological parameters
CN110672608B (zh) * 2019-10-15 2022-04-12 南京泰立瑞信息科技有限公司 Whole-slide scanning path dynamic planning method and system
CN110807766B (zh) * 2019-10-17 2023-03-31 西安工程大学 Method for identifying displacement of double vibration dampers based on visible-light images
CN111310748B (zh) * 2020-03-17 2023-04-07 郑州航空工业管理学院 Device for collecting and comparing sports images on Han dynasty stone reliefs
CN111862109B (zh) * 2020-06-28 2024-02-23 国网山东省电力公司德州供电公司 System and device for multi-target acquisition, image recognition, and automatic labeling of recognition results
CN112630242B (zh) * 2020-12-03 2023-01-10 成都先进金属材料产业技术研究院股份有限公司 Scanning electron microscope sample navigation method
CN113628247A (zh) * 2021-07-29 2021-11-09 华中科技大学 Automated search imaging method and system for freely moving samples
CN113643181B (zh) * 2021-08-10 2022-04-19 南京农业大学 In-situ array-type root-system phenotype monitoring system and working method thereof
CN114155444B (zh) * 2021-10-22 2024-04-26 中国科学院长春光学精密机械与物理研究所 On-orbit target detection method based on a spaceborne whisk-broom imaging system
CN113920437B (zh) * 2021-12-14 2022-04-12 成都考拉悠然科技有限公司 Conductive particle identification method and system, storage medium, and computer device
CN114511559B (zh) * 2022-04-18 2022-10-11 杭州迪英加科技有限公司 Multi-dimensional quality evaluation method, system, and medium for stained nasal polyp pathological sections
CN115524343B (zh) * 2022-09-29 2023-06-20 哈尔滨工业大学 Mesoscopic characterization method for the physical structure of ice crystals
CN116311243B (zh) * 2023-03-22 2023-10-24 生态环境部长江流域生态环境监督管理局生态环境监测与科学研究中心 Algae detection method and system based on microscope images

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6606413B1 (en) * 1998-06-01 2003-08-12 Trestle Acquisition Corp. Compression packaged image transmission for telemicroscopy
CN101706458A (zh) * 2009-11-30 2010-05-12 中北大学 High-resolution printed circuit board automatic inspection system and inspection method
CN101788709A (zh) * 2010-03-10 2010-07-28 广西大学 Optical-section acquisition driver for a digital confocal microscope
CN102436551A (zh) * 2011-11-10 2012-05-02 西安电子科技大学 Computer-aided gastric cancer diagnosis method based on target tracking
CN103577038A (zh) * 2012-07-19 2014-02-12 索尼公司 Method and apparatus for navigating stacked microscopy images
CN103020631A (zh) * 2012-11-23 2013-04-03 西安电子科技大学 Human motion recognition method based on a star model
CN106570484A (zh) * 2016-11-07 2017-04-19 中国科学院自动化研究所 Microscope image acquisition method based on sequential sections

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NIE, XIONG ET AL.: "Research on the Method of Auto-Collecting Serial Optical Section for Digital Confocal Microscope", CHINESE JOURNAL OF SCIENTIFIC INSTRUMENT, vol. 31, no. 9, 30 September 2010 (2010-09-30), pages 2149 - 2152 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619318A (zh) * 2019-09-27 2019-12-27 腾讯科技(深圳)有限公司 Artificial-intelligence-based image processing method, microscope, system, and medium
US11790672B2 (en) 2019-09-27 2023-10-17 Tencent Technology (Shenzhen) Company Limited Image processing method, microscope, image processing system, and medium based on artificial intelligence
CN112435218A (zh) * 2020-11-04 2021-03-02 南京火眼锐视信息科技有限公司 Method and device for evaluating and screening the degree of deformation of document images

Also Published As

Publication number Publication date
US20190012520A1 (en) 2019-01-10
US10699100B2 (en) 2020-06-30

Similar Documents

Publication Publication Date Title
WO2018082085A1 (zh) 基于序列切片的显微镜图像采集方法 (Microscope image acquisition method based on sequential sections)
CN106570484B (zh) 基于序列切片的显微镜图像采集方法 (Microscope image acquisition method based on sequential sections)
EP3382644B1 (en) Method for 3d modelling based on structure from motion processing of sparse 2d images
CN109903313B (zh) 一种基于目标三维模型的实时位姿跟踪方法
CN109978839B (zh) 晶圆低纹理缺陷的检测方法
CN110736747B (zh) 一种细胞液基涂片镜下定位的方法及系统
CN108981672A (zh) 基于单目机器人与测距传感器结合的舱门实时定位方法
CN106529587B (zh) 基于目标点识别的视觉航向识别方法
CN106611416B (zh) 一种医学图像中肺分割的方法及装置
US9418421B1 (en) Automation of biopsy specimen handling
CA3136674C (en) Methods and systems for crack detection using a fully convolutional network
CN111127613B (zh) 基于扫描电子显微镜的图像序列三维重构方法及系统
CN114565675A (zh) 一种在视觉slam前端去除动态特征点的方法
Zou et al. Path voting based pavement crack detection from laser range images
JP2024012432A (ja) 検査システム、及び非一時的コンピュータ可読媒体
EP1410004A2 (en) Evaluation of microscope slides
Han et al. Shape context based object recognition and tracking in structured underwater environment
Li et al. Perspective-consistent multifocus multiview 3D reconstruction of small objects
CN113125434A (zh) 图像分析系统和控制拍摄样本图像的方法
RU2647645C1 (ru) Способ устранения швов при создании панорамных изображений из видеопотока кадров в режиме реального времени
CN109919969B (zh) 一种利用深度卷积神经网络实现视觉运动控制的方法
Sarkar et al. A robust method for inter-marker whole slide registration of digital pathology images using lines based features
EP2966594A1 (en) Method and device for image extraction from a video
He et al. Inward-region-growing-based accurate partitioning of closely stacked objects for bin-picking
McCarthy et al. Automated internode length measurement of cotton plants under field conditions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16920922

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16920922

Country of ref document: EP

Kind code of ref document: A1