CN115060367B - Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform - Google Patents


Info

Publication number
CN115060367B
CN115060367B (application CN202210659906.9A)
Authority
CN
China
Prior art keywords
focusing
image
acquisition
blank
hyperspectral
Prior art date
Legal status
Active
Application number
CN202210659906.9A
Other languages
Chinese (zh)
Other versions
CN115060367A (en)
Inventor
Li Qingli (李庆利)
Liu Wei (刘伟)
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN202210659906.9A
Publication of CN115060367A
Application granted
Publication of CN115060367B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/2823 Imaging spectrometer
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J2003/283 Investigating the spectrum computer-interfaced
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Microscopes, Condensers (AREA)

Abstract

The invention discloses a whole-slide data cube acquisition method based on a microscopic hyperspectral imaging platform, which comprises the following steps: constructing a control computer that provides a software interface; setting a spectral range and band number through the software interface and acquiring a preview image; preprocessing the preview image to obtain a preprocessing result; generating an acquisition task sequence based on the preprocessing result; acquiring data based on the acquisition task sequence to obtain hyperspectral images; and stitching and registering the hyperspectral images to obtain the hyperspectral whole-slide data cube. Addressing the shortcoming that the information provided by existing large-area image acquisition methods in the medical field does not allow doctors to make a comprehensive diagnosis of a whole slide, the invention provides a whole-slide data cube acquisition method for a hyperspectral imaging platform that ensures the accuracy and efficiency of the entire system and makes pathological diagnosis more convenient and efficient.

Description

Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform
Technical Field
The invention belongs to the technical field of image stitching, and particularly relates to a full-slide data cube acquisition method based on a microscopic hyperspectral imaging platform.
Background
Since its advent, hyperspectral technology has been widely used in the field of remote sensing, playing an important role in geological exploration, precision agriculture, ecological environment monitoring, urban remote sensing, and military target detection. In recent years, hyperspectral imaging is no longer restricted to macroscopic scenes: hyperspectral techniques have been applied to microscopic imaging scenarios such as microorganism detection and crop analysis. The hyperspectral image breaks through the two-dimensional restriction of the traditional color image, adding a third dimension of spectral information, and provides more information and new ideas for the identification and classification of microscopic images. In the medical field, although hyperspectral technology has been applied, no comprehensive hyperspectral data acquisition method exists for a whole slide, making doctors' operations cumbersome and diagnosis inefficient.
In addition, to address the small field of view of a single image, a number of large-area image acquisition methods, mainly large-area hyperspectral image acquisition methods, have appeared in the medical field. However, in pathological diagnostic procedures it is often necessary to study all the sample areas present on a slide, so hyperspectral whole-slide data cube acquisition is required. Compared with large-area image acquisition, whole-slide image acquisition places higher demands on the degree of automation and accuracy of the acquisition imaging platform. For example, during a whole-slide scan, unevenness of the tissue sample often causes imaging defocus, which requires the imaging platform to focus the current acquisition field of view in real time. For another example, after the whole acquisition process finishes, the imaging platform outputs a whole-slide high-definition large image, which means that the small images of each acquired field of view must be stitched accurately. At the same time, the total time consumed by the scanning process must also be kept low. Therefore, how to meet the above requirements and provide doctors with hyperspectral whole-slide high-definition large images is a problem to be solved.
Disclosure of Invention
The invention aims to provide a full-slide data cube acquisition method based on a microscopic hyperspectral imaging platform, so as to solve the problems in the prior art.
In order to achieve the above object, the present invention provides a full slide data cube collection method based on a microscopic hyperspectral imaging platform, comprising:
constructing a control computer for providing a software interface;
setting a spectrum range and a band number based on the software interface, and acquiring a preview image;
preprocessing the preview image to obtain a preprocessing result;
generating an acquisition task sequence based on the preprocessing result;
acquiring data based on the acquisition task sequence to acquire a hyperspectral image;
and splicing and registering the hyperspectral images to obtain a hyperspectral full-slide data cube.
Optionally, the preprocessing the preview image includes:
identifying a sample area in the preview image and marking the sample area;
establishing a mapping relation between the software coordinates of the preview image and the physical coordinates of the microscopic field of view;
calculating the number of steps required to move transversely and longitudinally and the coordinates of all acquisition points in the sample area based on the sample area;
selecting a pre-focusing point based on the sample area, and acquiring a pre-focusing point set;
automatically focusing the pre-focusing point set to obtain a focusing result;
and realizing focal plane modeling based on the focusing result.
Optionally, the process of establishing the mapping relation between the software coordinates of the preview image and the physical coordinates of the microscopic field of view includes:
calculating a sample presence region of a corresponding stage based on the sample region;
acquiring shooting parameters of a preview camera, and calculating physical offset vectors of pixels of the preview camera and a field of view;
acquiring physical coordinates of a field of view of a preview camera, and calculating physical coordinates of pixels through a coordinate rotation transformation matrix from the field of view of the preview camera to an objective table based on the physical offset vector;
and based on the pixel physical coordinates, combining the mapping vector of the physical coordinates of the field of view and the physical coordinates of the microscopic field of view to obtain the corresponding physical coordinates of the microscopic field of view.
Optionally, the process of selecting the pre-focusing point based on the sample region includes:
downsampling the preview image;
performing gradient calculation on the downsampled image sample area to obtain a calculation result;
performing preliminary screening of a pre-focusing point set based on the pixel point with the highest gradient in the calculation result;
the set of pre-focus points is supplemented based on a spatially uniform distribution.
Optionally, the process of automatically focusing the pre-focusing point includes:
generating an automatic focusing task sequence based on the pre-focusing point set;
acquiring a current focusing task based on the automatic focusing task sequence;
acquiring two images of the same field of view with a fixed height difference, and obtaining a rough focusing height from the two images;
and shooting and searching the image gradient in the real-time field of view to obtain the accurate focusing height.
Optionally, the process of generating the auto-focus task sequence based on the pre-focus point set includes:
adding each pre-focusing point in the pre-focusing point set to a pre-focusing point sequence using a Z-type search; the Z-type search comprises: dividing the sample area into several columns of equal width; starting from the uppermost pre-focusing point of the leftmost column, stepping forward through the subsequent pre-focusing points and adding them to the pre-focusing point sequence; after reaching a column boundary, stepping one column width to the right and searching the pre-focusing points in the reverse direction, adding them to the sequence; repeating these steps until the rightmost column has been searched;
and generating an automatic focusing task sequence based on the pre-focusing sequence and combining shooting parameters.
Optionally, the process of implementing focal plane modeling based on the focusing result includes:
calculating four pre-focusing points closest to the horizontal two-dimensional distance of each acquisition point in the sample area;
among the four pre-focusing points, fitting two focal planes: the two nearest pre-focusing points are combined with the three-dimensional coordinates of the third and of the fourth pre-focusing point, respectively;
interpolating the two focal planes to calculate the heights of the acquisition points in the two planes;
and calculating the distance between each acquisition point and the two fitted focal planes, using these distances as weights to weight the Z-axis coordinates in the two corresponding planes, obtaining the focusing height and completing the focal plane modeling.
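A sketch of the two-plane fit and weighted interpolation described above. The inverse-distance weighting against the third and fourth points is one assumed reading of the weighting in the claim, and all function names are illustrative:

```python
import numpy as np

def plane_z(p1, p2, p3, x, y):
    """Z height at (x, y) on the plane through three 3-D points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)  # plane normal
    # n . (P - p1) = 0  =>  z = z1 - (nx*(x-x1) + ny*(y-y1)) / nz
    return p1[2] - (n[0] * (x - p1[0]) + n[1] * (y - p1[1])) / n[2]

def focus_height(p1, p2, p3, p4, x, y):
    """Blend the heights given by two focal planes fitted from the
    four nearest pre-focusing points: {p1, p2, p3} and {p1, p2, p4}.
    Inverse-distance weights (to p3 and p4) are an assumption."""
    za = plane_z(p1, p2, p3, x, y)
    zb = plane_z(p1, p2, p4, x, y)
    da = np.hypot(x - p3[0], y - p3[1])
    db = np.hypot(x - p4[0], y - p4[1])
    wa, wb = db / (da + db), da / (da + db)  # nearer plane weighs more
    return wa * za + wb * zb
```

With coplanar pre-focusing points, both planes coincide and the blend reproduces the common plane, which is a quick sanity check of the fit.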
Optionally, the process of generating the acquisition task sequence based on the preprocessing result includes:
adding the coordinates of all the acquisition points into a scanning sequence in a Z-type searching mode with the column width of 1;
and acquiring shooting parameters of the microscopic hyperspectral image and focusing heights of all acquisition points, and generating an acquisition task sequence by combining the scanning sequence.
Optionally, the process of data acquisition based on the acquisition task sequence includes:
acquiring a current acquisition task based on the acquisition task sequence; and carrying out data acquisition based on the current acquisition task to respectively obtain gray original data and color original data.
Optionally, the process of stitching and registering the hyperspectral image includes:
the splicing process comprises the following steps: uniformly cutting an image to be detected into small blocks, classifying the small blocks, and obtaining a blank detection result;
the blank detection result comprises: if all the small blocks are classified as blank, the current image to be detected is blank; otherwise, it is non-blank;
the blank detection is to identify a blank region image;
the blank area image is an image of an area which is transparent and has no tissue on the glass slide;
acquiring a first blank view field image in an acquisition area based on the blank detection result;
replacing the content of the blank area image of the gray original data with a white gray value;
carrying out image enhancement on the gray original data non-blank area image;
processing the non-blank parts of the acquired gray original image based on the blank field-of-view image and the Beer-Lambert law, eliminating the influence of slide and light-source differences, to obtain the images to be stitched;
the registration process includes: acquiring prior information and a reference wave band required by registration; the prior information is an offset vector of adjacent fields of view; and splicing the hyperspectral images based on the offset vector.
The invention has the technical effects that:
the invention provides a full-slide data cube acquisition method of a hyperspectral imaging platform for the defect that information provided by a large-area image acquisition method in the medical field cannot enable doctors to have comprehensive diagnosis on the whole Zhang Bopian. The invention provides a set of method required by a whole glass slide scanning process, which comprises the following steps: the pre-focusing point distribution method, the automatic focusing method, the focal plane establishing method, the hyperspectral splicing method and the like in the area to be scanned ensure the precision and the efficiency of the whole system and enable pathological diagnosis to be more convenient and efficient.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a flow chart of a full slide data cube acquisition method based on a microscopic hyperspectral imaging platform in an embodiment of the invention;
FIG. 2 is a flow chart of establishing a mapping relationship between software coordinates and physical coordinates of a microscope field of view and selecting a pre-focusing point in an embodiment of the present invention;
FIG. 3 is a schematic flow chart of auto-focusing and acquiring the focusing height of each acquisition point according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of stitching microscopic hyperspectral images in an embodiment of the present invention;
FIG. 5 is a system block diagram of the microscopic hyperspectral imaging platform in an embodiment of the present invention;
reference numerals: 1-optical microscope, 2-gray level camera, 3-color camera, 4-preview camera, 5-reflection light source, 6-transmission light source, 7-precision triaxial electric stage, 8-integrated controller, 9-acousto-optic coordinated controller, 10-driver, 11-control computer, 12-beam splitter.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Example 1
As shown in fig. 1-5, a full-slide data cube collection method based on a microscopic hyperspectral imaging platform is provided in this embodiment.
As shown in fig. 5, the hyperspectral imaging platform provided in this embodiment includes: an optical microscope 1, a scientific-grade complementary metal oxide semiconductor grayscale industrial camera (grayscale camera) 2, a scientific-grade complementary metal oxide semiconductor color industrial camera (color camera) 3, a universal serial bus preview camera (preview camera) 4, a reflection light source 5, a transmission light source 6, a precision triaxial electric stage 7 and its integrated controller 8, an acousto-optic tunable controller 9 and its driver 10, a control computer 11, and a beam splitter 12. The optical microscope 1 provides different magnifications of the sample. The scientific-grade CMOS grayscale industrial camera 2 collects hyperspectral data. The scientific-grade CMOS color industrial camera 3 obtains a color image of the current field of view, which is used for blank detection and other steps. The USB preview camera 4 provides a preview of the slide to be scanned before the whole-slide scan starts, which is used to identify the sample presence area, select pre-focusing points, and so on. The reflection light source 5 and transmission light source 6 provide excitation light that reaches the sample from different directions. The precision triaxial electric stage 7 achieves accurate displacement and records position. The integrated controller 8 of the precision triaxial electric stage interacts with the precision triaxial electric stage 7 and the transmission light source 6, interfaces with the control computer 11, and provides a memory module storing XYZ triaxial coordinate information. The acousto-optic tunable controller 9 controls the wavelength range of the light that passes through.
The acousto-optic tunable controller driver 10 outputs an electric tuning signal to control the wavelength range passed by the acousto-optic tunable controller 9, and interfaces with the control computer 11. The control computer 11 provides a software interface through which an operator can interact with the hardware, and supplies computing power and storage for each image processing method. The beam splitter 12 transmits light to both the grayscale camera and the color camera.
As shown in fig. 1, the present embodiment provides a large-area data cube collection method based on a microscopic hyperspectral imaging platform, including:
101: the microscopic hyperspectral imaging platform provides a software interface, and an operator can set the spectral range and the band number through the software interface.
102: preprocessing is carried out according to the preview image.
After the preview image is obtained, the control computer computing system preprocesses the preview image to obtain the data required subsequently.
The preprocessing process mainly comprises the following steps:
1. establishing a mapping relation between software coordinates and microscope field-of-view physical coordinates, and selecting pre-focusing points on that basis;
2. automatically focusing the pre-focusing points and modeling to obtain the focusing height of each acquisition point.
103: a collection task sequence is generated.
The control computer computing system generates a scanning sequence from the acquisition point coordinates obtained by preprocessing.
1031: the control computer adds the coordinate information of each acquisition point into the scanning sequence in a Z type with the column width of 1.
The Z-shaped adding process comprises the following steps:
1. dividing a rectangular frame of a sample existence area into a plurality of columns according to a fixed column width;
2. searching each point from the uppermost point of the leftmost column in the forward direction;
3. stepping to the right one column width after reaching the edge of the column;
4. gradually searching each point in the opposite direction of the current column;
5. stepping to the right one column width after reaching the column edge;
6. returning to 2 until the rightmost column completes the search.
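Assuming a regular grid of acquisition points, the Z-type (serpentine) addition in steps 1-6 above can be sketched as follows; the function name and grid representation are illustrative, not from the patent:

```python
def z_scan_order(n_cols, n_rows, col_width=1):
    """Order grid coordinates (col, row) in a Z-type scan.

    Columns are visited left to right in groups of `col_width` (the
    acquisition sequence uses a column width of 1); within each group
    the rows are traversed forward, then the direction reverses for
    the next group, minimizing stage travel between points.
    """
    order = []
    forward = True
    for c0 in range(0, n_cols, col_width):
        cols = range(c0, min(c0 + col_width, n_cols))
        rows = range(n_rows) if forward else range(n_rows - 1, -1, -1)
        for r in rows:
            for c in cols:
                order.append((c, r))
        forward = not forward  # reverse direction for the next column group
    return order
```

For a 2-column by 3-row grid this yields the boustrophedon path (0,0), (0,1), (0,2), (1,2), (1,1), (1,0).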
1032: a collection task sequence is generated.
And the control computer computing system acquires shooting parameters of the acquired microscopic hyperspectral image and the focusing height of each acquisition point obtained by a focal plane modeling result, and combines a scanning sequence to generate an acquisition task sequence.
104: and (5) data acquisition.
And the microscopic hyperspectral imaging platform performs data acquisition according to the acquisition task sequence.
1041: and acquiring a current acquisition task.
The control computer computing system acquires a current acquisition task from the task sequence, and sets the coordinate and the focusing height of a next point to be acquired according to the acquisition task.
1042: moving to the collection point.
The control computer computing system controls the precise triaxial electric object stage to move to the corresponding position by the set acquisition point coordinates and the focusing height.
1043: the computer computing system is controlled to control the camera to collect.
After the precise triaxial electric object stage moves to the corresponding position, the control computer computing system controls the hyperspectral camera and the color camera to acquire data, and gray original data and color original data are obtained.
105: and splicing microscopic hyperspectral images.
After data acquisition is completed, the control computer computing system stitches the acquired hyperspectral image data.
106: a whole slide data cube is generated.
After stitching is completed, the control computer computing system assembles the hyperspectral whole-slide large-image information from each sub-image and each band, completing the acquisition of the hyperspectral whole-slide data cube.
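The final cube assembly can be sketched as placing each registered field-of-view tile into a band-indexed mosaic; overwriting the overlap region with the later tile is an illustrative simplification, and the offsets are taken as given by the registration stage:

```python
import numpy as np

def assemble_cube(tiles, offsets, bands, tile_h, tile_w, out_h, out_w):
    """Place registered field-of-view tiles into a full-slide data cube.

    `tiles` maps a tile index to a (bands, tile_h, tile_w) array and
    `offsets` gives that tile's top-left pixel in the mosaic; later
    tiles simply overwrite the overlap region.  This is only the
    assembly step after registration has produced the offsets.
    """
    cube = np.zeros((bands, out_h, out_w), dtype=float)
    for key, tile in tiles.items():
        y0, x0 = offsets[key]
        cube[:, y0:y0 + tile_h, x0:x0 + tile_w] = tile
    return cube
```

Two adjacent 2x2 single-band tiles with offsets (0, 0) and (0, 2), for example, fill a 2x4 mosaic side by side.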
As shown in fig. 2, the present embodiment provides a large-area data cube collection method based on a microscopic hyperspectral imaging platform, including:
201: the presence of a sample area in the preview image is identified and marked.
The control computer computing system takes the preview image as the input of a neural network, which outputs a rectangular box for the sample presence area.
202: and establishing a mapping relation between the software coordinates of the image of the preview camera and the physical coordinates of the microscopic field.
The control computer computing system obtains stage coordinates, i.e. microscopic field-of-view coordinates, from the software coordinates of the preview-image sample area; during acquisition, the software coordinates must be converted into microscopic field-of-view coordinates for the precision triaxial electric stage.
2021: and calculating the existence area of the corresponding object stage sample.
The physical length of the object stage corresponding to one pixel in the preview image is measured in advance, and the control computer computing system calculates the actual corresponding object stage range according to the rectangular frame range of the sample existence area.
2022: a physical offset vector of a pixel of the preview camera relative to the field of view of the preview camera is calculated.
The physical coordinates of the preview camera field of view correspond to the center position of the preview image. The control computer computing system obtains the physical offset of a given preview-image pixel by multiplying its offset vector from the center position by the physical length corresponding to one pixel.
2023: and calculating physical coordinates of pixels in the preview image.
The physical coordinates of the field of view of the preview camera are set in advance, and the physical offset is multiplied by the corresponding rotation transformation matrix to obtain the actual offset because the preview camera has a 90-degree rotation relationship with the object stage coordinates. The control computer computing system obtains the physical coordinates of the pixels by adding the physical coordinates of the field of view of the preview camera to the actual offset.
2024: and calculating physical coordinates of the pixels corresponding to the microscope field of view.
The mapping relation between the preview camera view field and the microscopic view field physical coordinate is measured in advance, and the mapping relation refers to the offset vector from the center of the preview image to the microscopic view field. The control computer computing system adds the offset vector to the physical coordinate of a pixel to complete the conversion of the coordinate of the microscopic field.
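Steps 2021-2024 amount to an affine mapping from preview pixels to microscope-field coordinates. The sketch below follows that description; the 90-degree rotation matrix matches the stated rotation relationship, while the numeric calibration values (microns per pixel, field coordinates, offset vector) would come from the pre-measured platform and are assumptions here:

```python
import numpy as np

def pixel_to_microscope(px, py, img_center, um_per_px,
                        preview_field_xy, preview_to_micro_offset):
    """Map a preview-image pixel to microscope-field physical coordinates.

    1. Offset of the pixel from the image center, converted to microns.
    2. Rotate by 90 degrees (preview camera vs. stage axes).
    3. Add the known physical coordinates of the preview field of view.
    4. Add the calibrated preview-to-microscope offset vector.
    """
    offset = (np.array([px, py], dtype=float)
              - np.array(img_center, dtype=float)) * um_per_px
    rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation
    physical = rot90 @ offset + np.array(preview_field_xy, dtype=float)
    return physical + np.array(preview_to_micro_offset, dtype=float)
```

With zero calibration offsets, a pixel 10 px right of center at 2 um/px maps to a 20 um displacement rotated onto the stage's other axis, illustrating the rotation step.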
203: the number of steps of the required movement in the transverse and longitudinal directions is calculated, and the coordinates of all the acquisition points in the sample area are calculated.
The characteristic point matching is needed according to the overlapping part of the adjacent images during the subsequent registration of the acquired images, so that the moving step length of the object stage is required to meet the condition that the proper overlapping width exists between the adjacent images. The size of the range of the objective table corresponding to the single field of view of the microscope is measured in advance, and the single-step transverse and longitudinal moving distance of the objective table can be set according to requirements. According to the single-step moving distance, the control computer computing system can calculate the number of steps required to move horizontally and longitudinally and the coordinates of all the acquisition points by combining the size of the rectangular frame range of the sample existence area.
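A minimal sketch of the step-count and acquisition-point computation just described, assuming a rectangular sample region and a fractional overlap parameter (the 10% default is an illustrative choice, not a value from the patent):

```python
import math

def acquisition_grid(region_w_um, region_h_um, fov_w_um, fov_h_um,
                     overlap=0.1):
    """Lateral/longitudinal step counts and acquisition-point coordinates.

    The stage step is one field of view minus the overlap needed for
    feature matching between adjacent images during registration.
    """
    step_x = fov_w_um * (1.0 - overlap)
    step_y = fov_h_um * (1.0 - overlap)
    nx = max(1, math.ceil((region_w_um - fov_w_um) / step_x) + 1)
    ny = max(1, math.ceil((region_h_um - fov_h_um) / step_y) + 1)
    points = [(i * step_x, j * step_y)
              for i in range(nx) for j in range(ny)]
    return nx, ny, points
```

For a 190 um wide region with a 100 um field and 10% overlap, two lateral positions suffice, their fields overlapping by 10 um.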
204: and selecting a pre-focusing point.
The whole-slide collection process needs to keep the stage at the focusing height throughout. If a focusing operation were performed at every acquisition point, the total time consumed would increase greatly, failing the requirement of capturing and reading on the fly. Therefore, before whole-slide acquisition, the control computer computing system selects a subset of points representative of the region's acquisition points as pre-focusing points.
2041: and (5) downsampling.
The control computer computing system downsamples the preview image, which reduces the time consumed by the selection process while preserving the quality of the pre-focusing point selection.
2042: image gradients are calculated.
The control computer computing system uses the Laplace operator to compute the image gradient of each pixel point in the rectangular frame of the sample existence area.
2043: and (5) preliminary screening of the pre-focusing point set.
The control computer computing system first sorts all pixel points within the rectangular box of the sample presence area in descending order of gradient to form a gradient-pixel sequence, and puts the pixel with the highest gradient into the pre-focusing point sequence. It then takes points from the gradient-pixel sequence in order and checks whether the distance between the candidate and every point already in the pre-focusing sequence exceeds a set threshold. If so, the candidate is added to the pre-focusing sequence; otherwise the subsequent points are examined in turn. This finally forms the preliminary pre-focusing point set.
2044: the pre-focus point set is supplemented.
The preliminarily generated pre-focusing point set exploits the spatial characteristics of the image, but the resulting distribution of pre-focusing points is not uniform. The control computer computing system therefore supplements the pre-focusing point set using a uniform-distribution method to ensure that the pre-focusing points cover the slide evenly.
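Steps 2042-2043 can be sketched as a greedy, gradient-ranked selection. The Laplacian here is built from NumPy second differences rather than the Laplace operator of an image library, and `k` and `min_dist` are assumed parameters, not values from the patent:

```python
import numpy as np

def select_prefocus_points(image, k=5, min_dist=10.0):
    """Greedy gradient-based pre-focusing point selection (sketch).

    Pixels are ranked by Laplacian magnitude (a sharpness proxy) and
    accepted in descending order, skipping any candidate closer than
    `min_dist` to an already chosen point, until `k` points are found.
    """
    # Discrete Laplacian: sum of second differences along each axis.
    lap = np.abs(np.gradient(np.gradient(image, axis=0), axis=0)
                 + np.gradient(np.gradient(image, axis=1), axis=1))
    ys, xs = np.unravel_index(np.argsort(lap, axis=None)[::-1], lap.shape)
    chosen = []
    for y, x in zip(ys, xs):
        if all(np.hypot(y - cy, x - cx) >= min_dist for cy, cx in chosen):
            chosen.append((int(y), int(x)))
        if len(chosen) == k:
            break
    return chosen
```

The distance threshold is what keeps the preliminary set from clustering on a single high-contrast structure; the uniform-distribution supplement of step 2044 would then fill remaining empty regions.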
As shown in fig. 3, the present embodiment provides a large-area data cube collection method based on a microscopic hyperspectral imaging platform, including:
301: and (5) automatic focusing.
Because there are many pre-focusing points, manually focusing each one would take the operator a long time. Therefore, the control computer automatically determines the focal height of each pre-focusing point by an algorithm.
3011: an auto-focus task sequence is generated.
The control computer computing system generates a prefocusing order sequence according to the sample existence region and the prefocusing point set.
The generation steps mainly comprise:
1. the control computer computing system divides the rectangular frame of the sample existence area into a plurality of columns according to the equal width, and adds the coordinate information of all the pre-focusing points into the pre-focusing point sequence in a Z-shaped adding mode based on the column width.
2. And controlling a computer computing system to acquire microscopic hyperspectral image shooting parameters, and combining the pre-focusing sequence to generate an automatic focusing task sequence.
3012: and acquiring the current focusing task.
And the control computer computing system acquires the pre-focusing point coordinates to be focused currently according to the automatic focusing execution process.
3013: moving to the current pre-focusing point.
The control computer computing system controls the precise triaxial electric object stage to move to the current pre-focusing point coordinate.
3014: two images of the same field of view with a fixed height difference are acquired.
The control computer computing system controls the precise triaxial electric object stage to move by a fixed step length in the vertical direction of the current pre-focusing point acquisition position, and controls the color camera to acquire an image before and after the movement.
3015: a coarse focus height is acquired.
And the control computer computing system preliminarily obtains a rough value of the focusing height through the neural network according to the two images with the height difference of a fixed value.
3016: moving to a coarse focus height.
The control computer computing system controls the precise triaxial motorized stage to move to the rough focus height.
3017: a precise focal height is obtained.
The control computer computing system performs precise focusing according to the sharpness of the real-time image captured by the color camera, where sharpness is measured by image gradients.
The accurate focusing process comprises the following steps:
1. Set the number of search stages, k;
2. In stage T0, the precise triaxial motorized stage moves a small distance S0 in the vertical direction; the subsequent search direction is determined from the sharpness change between the initial image frame and the frame after the move, and the search enters stage T1;
3. In stage T1, the stage moves step by step in the search direction with the larger step length S0; when the sharpness decreases, it stops, the step length is halved, and the search enters the next stage T2;
4. The stage reverses the search direction and moves step by step with the current step length until the sharpness decreases, then stops, halves the step length, and enters the next stage;
5. Return to step 4 until the stage-Tk search is completed.
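The staged search above can be sketched as follows. This is a hedged illustration: `sharpness_at(z)` is assumed to capture a frame at stage height z and score it, for example with the image-gradient measure shown; both function names and the test profile are illustrative, not from the patent.

```python
def gradient_sharpness(img):
    """Sharpness as the sum of squared horizontal and vertical pixel
    differences -- one common image-gradient definition of sharpness."""
    h, w = len(img), len(img[0])
    horiz = sum((img[y][x + 1] - img[y][x]) ** 2
                for y in range(h) for x in range(w - 1))
    vert = sum((img[y + 1][x] - img[y][x]) ** 2
               for y in range(h - 1) for x in range(w))
    return horiz + vert

def focus_search(sharpness_at, z0, step, k):
    """k-stage search: T0 probes one small move to pick a direction; each
    later stage steps until sharpness drops, then reverses and halves step."""
    z = z0
    best = sharpness_at(z)
    direction = 1
    if sharpness_at(z + step) < best:   # T0: pick the search direction
        direction = -1
    for _ in range(k):                  # stages T1 .. Tk
        while True:
            candidate = z + direction * step
            score = sharpness_at(candidate)
            if score <= best:           # sharpness dropped: overshoot
                break
            z, best = candidate, score
        direction = -direction          # back up with a finer step
        step /= 2.0
    return z
```

Each halving doubles the height resolution, so k stages locate the focus to roughly step / 2^k.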
302: focal plane modeling.
3021: four pre-focusing points closest to each acquisition point of the sample area are calculated.
The control computer computing system calculates, according to the coordinates of each acquisition point, the four pre-focusing points in the pre-focusing point set with the smallest horizontal two-dimensional distance to that point, sorted by distance in ascending order as P1, P2, P3, P4.
3022: two focal planes are fitted.
For each acquisition point, the control computer groups its four nearest pre-focusing points into two triples, P1, P2, P3 and P1, P2, P4, and constructs a three-dimensional spatial plane from each triple by the three-point method.
3023: the heights of the acquisition points in the two planes are calculated respectively.
The control computer computing system interpolates, using the two fitted planes and the horizontal coordinates of each acquisition point, to obtain that point's height in each of the two planes.
3024: weighting to obtain the focusing height of each acquisition point.
The control computer computing system calculates the distance from each acquisition point, taking its Z coordinate as 0, to each of the two planes, then uses these distances as weights to combine the point's heights in the two planes, yielding the focus height.
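Steps 3022-3024 can be sketched as below. The exact weighting formula is not spelled out in the text, so the blend here (nearer plane weighted higher) is an assumption, as are the function names.

```python
import math

def plane_through(p1, p2, p3):
    """Coefficients (a, b, c, d) of the plane a*x + b*y + c*z + d = 0
    through three 3-D points (the three-point method of step 3022)."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    a = uy * vz - uz * vy          # normal = (p2 - p1) x (p3 - p1)
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * x1 + b * y1 + c * z1)
    return a, b, c, d

def focus_height(p1, p2, p3, p4, x, y):
    """Focus height at acquisition point (x, y): interpolate the height in
    each of the planes (p1,p2,p3) and (p1,p2,p4), then blend them using
    the perpendicular distance of (x, y, 0) to each plane as the weight."""
    heights, dists = [], []
    for third in (p3, p4):
        a, b, c, d = plane_through(p1, p2, third)
        heights.append(-(a * x + b * y + d) / c)   # z of the plane at (x, y)
        dists.append(abs(a * x + b * y + d) / math.sqrt(a * a + b * b + c * c))
    w0 = dists[1] / (dists[0] + dists[1])          # nearer plane -> larger weight
    return w0 * heights[0] + (1 - w0) * heights[1]
```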
As shown in fig. 4, the present embodiment provides a large-area data cube collecting method based on a microscopic hyperspectral imaging platform, including:
401: preprocessing the acquired image.
4011: and (5) blank detection.
The control computer computing system identifies, in the color raw data, images of regions that are transparent and free of tissue on the slide.
The identification process comprises the following steps:
1. Evenly cut the current field-of-view image into several small blocks;
2. Judge whether each small block is a blank image; if all blocks are blank, the current field of view is judged blank; otherwise it is non-blank.
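A minimal sketch of this tile-based blank test follows. The per-block classifier is not specified in the text, so the mean-gray threshold used here is an assumption, as are the function name and parameters.

```python
def is_blank_field(image, tile=4, threshold=230):
    """Cut a grayscale field image (list of pixel rows) into tile x tile
    blocks and call the field blank only if every block is blank. A block
    counts as 'blank' when its mean gray level exceeds `threshold`
    (assumed classifier; the patent leaves the per-block test open)."""
    h, w = len(image), len(image[0])
    th, tw = h // tile, w // tile
    for i in range(tile):
        for j in range(tile):
            block = [image[r][c]
                     for r in range(i * th, (i + 1) * th)
                     for c in range(j * tw, (j + 1) * tw)]
            if sum(block) / len(block) <= threshold:
                return False   # one non-blank block -> field is not blank
    return True
```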
4012: and acquiring a blank field image.
The control computer computing system controls the hyperspectral camera to acquire the first blank field image.
4013: blank image processing.
The control computer computing system replaces the hyperspectral data image content determined to be blank with a gray value that is nearly white.
4014: and (5) enhancing the image.
At a fixed stage height, the grayscale images at different wavelengths are not all necessarily sharp, and refocusing for every wavelength would greatly increase the time spent focusing. Therefore, after acquisition is complete, the control computer computing system applies an image enhancement algorithm to deblur the acquired images of all wavelengths.
4015: the image is processed using beer-lambert law.
Non-uniform distribution of the light source across the camera sensor degrades the imaging quality of the hyperspectral camera. Therefore, the control computer computing system applies the Beer-Lambert law, performing spectral correction with the blank field image, which effectively weakens or removes the influence of the imaging light source.
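A common way to apply the Beer-Lambert law with a blank reference is to convert each pixel to absorbance, A = -log10(I / I0), where I0 comes from the blank field image; dividing by I0 cancels the illumination term at each pixel. The sketch below assumes this standard formulation (the patent does not give the exact formula).

```python
import math

def beer_lambert_correct(sample, blank, eps=1e-6):
    """Per-pixel absorbance A = -log10(I / I0), with the blank field image
    as I0. `eps` guards against division by zero and log of zero."""
    return [[-math.log10(max(s, eps) / max(b, eps))
             for s, b in zip(srow, brow)]
            for srow, brow in zip(sample, blank)]
```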
402: image registration is performed.
After hyperspectral image preprocessing is complete, the control computer computing system stitches the hyperspectral images of all wavelengths.
4021: the prior information required for registration is acquired.
Full-spectrum scanning places high demands on algorithm speed, and performing feature-point matching between every pair of adjacent-field images would take a long time. Studying adjacent-field images in the horizontal and vertical directions shows that the offset vector tends to be constant, so the method obtains it by computing it several times and averaging. Offset vectors obtained with different wavelength bands as reference are compared by the sharpness of the resulting image, and the offset vector of the band with the highest sharpness is selected as the stitching reference vector.
4022: and splicing the hyperspectral images.
The control computer computing system stitches the hyperspectral images of adjacent fields of view according to the offset vector, finally forming a high-definition hyperspectral mosaic of the whole slide.
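With a single constant offset vector, stitching reduces to pasting tiles at regular positions on one canvas, as sketched below. Overlap handling (here, later tiles simply overwrite) and the function signature are assumptions for illustration.

```python
def stitch(tiles, rows, cols, dx, dy, tile_h, tile_w):
    """Paste field-of-view tiles onto one canvas using a constant offset
    vector (dx between columns, dy between rows), exploiting the
    observation above that adjacent-field offsets are nearly constant.
    Tiles are listed row-major; later tiles overwrite any overlap."""
    H = tile_h + (rows - 1) * dy
    W = tile_w + (cols - 1) * dx
    canvas = [[0] * W for _ in range(H)]
    for r in range(rows):
        for c in range(cols):
            tile = tiles[r * cols + c]
            oy, ox = r * dy, c * dx           # top-left corner of this tile
            for y in range(tile_h):
                for x in range(tile_w):
                    canvas[oy + y][ox + x] = tile[y][x]
    return canvas
```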
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (4)

1. The full-slide data cube acquisition method based on the microscopic hyperspectral imaging platform is characterized by comprising the following steps of:
constructing a control computer for providing a software interface;
setting a spectrum range and a band number based on the software interface, and acquiring a preview image;
preprocessing the preview image to obtain a preprocessing result;
generating an acquisition task sequence based on the preprocessing result;
acquiring data based on the acquisition task sequence to acquire a hyperspectral image;
splicing and registering the hyperspectral images to obtain a hyperspectral full-slide data cube;
the process of preprocessing the preview image comprises the following steps:
identifying a sample area in the preview image and marking the sample area;
establishing a mapping relation between the software coordinates of the preview image and the physical coordinates of the microscopic field of view;
calculating the number of steps required to move transversely and longitudinally and the coordinates of all acquisition points in the sample area based on the sample area;
selecting a pre-focusing point based on the sample area, and acquiring a pre-focusing point set;
automatically focusing the pre-focusing point set to obtain a focusing result;
realizing focal plane modeling based on the focusing result;
the process for establishing the mapping relation between the software coordinates of the preview image and the physical coordinates of the microscopic field of view comprises the following steps:
calculating a sample presence region of a corresponding stage based on the sample region;
acquiring shooting parameters of a preview camera, and calculating physical offset vectors of pixels of the preview camera and a field of view;
acquiring physical coordinates of a field of view of a preview camera, and calculating physical coordinates of pixels through a coordinate rotation transformation matrix from the field of view of the preview camera to an objective table based on the physical offset vector;
based on the pixel physical coordinates, combining the mapping vector of the physical coordinates of the field of view and the physical coordinates of the microscopic field of view to obtain corresponding physical coordinates of the microscopic field of view;
the process of selecting the pre-focusing point based on the sample region includes:
downsampling the preview image;
performing gradient calculation on the downsampled image sample area to obtain a calculation result;
performing preliminary screening of a pre-focusing point set based on the pixel point with the highest gradient in the calculation result;
supplementing the pre-focusing point set based on spatially uniform distribution;
the process of automatically focusing the pre-focusing point comprises the following steps:
generating an automatic focusing task sequence based on the pre-focusing point set;
acquiring a current focusing task based on the automatic focusing task sequence;
acquiring two images with the same view field and fixed height difference, and obtaining rough focusing heights of the two images;
shooting and searching image gradients in a real-time view field to obtain an accurate focusing height;
the process of generating an auto-focus task sequence based on the set of pre-focus points includes:
adding each pre-focusing point in the pre-focusing point set to a pre-focusing point sequence based on a Z-type search; the Z-type search comprises: dividing the sample area into a plurality of columns of equal width; starting from the uppermost pre-focusing point of the leftmost column, stepping forward through the column, searching each subsequent pre-focusing point and adding it to the pre-focusing point sequence; after reaching the column boundary, stepping one column width to the right and searching the pre-focusing points in the reverse direction, adding them to the sequence; repeating until the rightmost column has been searched;
generating an automatic focusing task sequence based on the pre-focusing sequence and combining shooting parameters;
the process for realizing focal plane modeling based on the focusing result comprises the following steps:
calculating four pre-focusing points closest to the horizontal two-dimensional distance of each acquisition point in the sample area;
among the four pre-focusing points, fitting two focal planes using the three-dimensional coordinates of the two nearest pre-focusing points together with, respectively, the third and the fourth pre-focusing points;
interpolating the two focal planes to calculate the heights of the acquisition points in the two planes;
and calculating the distance between each acquisition point and the fitted two focal planes, taking the distance as a weight, and respectively weighting the Z-axis coordinates in the corresponding two planes to obtain the focusing height, thereby completing the focal plane modeling.
2. The microscopic hyperspectral imaging platform based whole-slide data cube collection method of claim 1, wherein the process of generating a collection task sequence based on the preprocessing result comprises:
adding the coordinates of all the acquisition points into a scanning sequence in a Z-type searching mode with the column width of 1;
and acquiring shooting parameters of the microscopic hyperspectral image and focusing heights of all acquisition points, and generating an acquisition task sequence by combining the scanning sequence.
3. The microscopic hyperspectral imaging platform based whole-slide data cube collection method according to claim 1, wherein the process of data collection based on the collection task sequence comprises:
acquiring a current acquisition task based on the acquisition task sequence; and carrying out data acquisition based on the current acquisition task to respectively obtain gray original data and color original data.
4. The method of whole-slide data cube collection based on a microscopic hyperspectral imaging platform of claim 1, wherein the process of stitching and registering the hyperspectral images comprises:
the splicing process comprises the following steps: uniformly cutting an image to be detected into small blocks, classifying the small blocks, and obtaining a blank detection result;
the blank detection result comprises: if all the small blocks are classified as blank, the current image to be detected is blank; otherwise, the current image to be detected is non-blank;
the blank detection is to identify a blank region image;
the blank area image is an image of an area which is transparent and has no tissue on the glass slide;
acquiring a first blank view field image in an acquisition area based on the blank detection result;
replacing the content of the blank area image of the gray original data with a white gray value;
carrying out image enhancement on the gray original data non-blank area image;
processing non-blank parts of the acquired gray original image based on the blank view field image and based on a beer-lambert law, eliminating the influence of difference of a slide and a light source, and obtaining an image to be spliced;
the registration process includes: acquiring prior information and a reference wave band required by registration; the prior information is an offset vector of adjacent fields of view; and splicing the hyperspectral images based on the offset vector.
CN202210659906.9A 2022-06-13 2022-06-13 Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform Active CN115060367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210659906.9A CN115060367B (en) 2022-06-13 2022-06-13 Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform


Publications (2)

Publication Number Publication Date
CN115060367A CN115060367A (en) 2022-09-16
CN115060367B true CN115060367B (en) 2023-04-21

Family

ID=83199745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210659906.9A Active CN115060367B (en) 2022-06-13 2022-06-13 Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118090713A (en) * 2024-02-22 2024-05-28 武汉中纪生物科技有限公司 ELISA (enzyme-linked immunosorbent assay) spot image acquisition method
CN117911245A (en) * 2024-02-28 2024-04-19 华东师范大学 Large-area data cube acquisition method based on microscopic hyperspectral imaging platform

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050270528A1 (en) * 1999-04-09 2005-12-08 Frank Geshwind Hyper-spectral imaging methods and devices
US7652765B1 (en) * 2004-03-06 2010-01-26 Plain Sight Systems, Inc. Hyper-spectral imaging methods and devices
US20150044098A1 (en) * 2012-01-30 2015-02-12 Scanadu Incorporated Hyperspectral imaging systems, units, and methods
US8797431B2 (en) * 2012-08-29 2014-08-05 General Electric Company Method of controlling the resolution of a hyperspectral image
CN109489816B (en) * 2018-10-23 2021-02-26 华东师范大学 Microscopic hyperspectral imaging platform and large-area data cube acquisition method
US11151736B1 (en) * 2020-05-30 2021-10-19 Center For Quantitative Cytometry Apparatus and method to obtain unprocessed intrinsic data cubes for generating intrinsic hyper-spectral data cubes



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant