CN116823655A - Scanned image correction method, device and equipment

Publication number: CN116823655A
Application number: CN202310758687.4A
Document language: Chinese (zh)
Legal status: Pending
Inventors: 郑洪坤, 刘敏, 欧阳锋, 秦志远, 张梦龙
Applicant/Assignee: Qingdao Baichuang Intelligent Manufacturing Technology Co., Ltd.
Classification: Image Processing (AREA)
Abstract

The application provides a scanned image correction method, device and equipment, and relates to the technical field of spatial transcriptome sequencing. The method comprises the following steps: collecting a plurality of visual field scanning images of a biochip in a state in which a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and an overlapping area exists between any two adjacent visual field scanning images; performing fusion processing on the plurality of visual field scanning images according to the overlapping area between any two adjacent visual field scanning images to obtain a full-frame scanning image of the biochip; determining first position information of the plurality of key points in the full-frame scanning image according to the substrate pattern; and correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image. The scheme of the application achieves accurate alignment between the gene expression information captured by the biochip and the region represented by the full-frame scanning image.

Description

Scanned image correction method, device and equipment
Technical Field
The application relates to the technical field of spatial transcriptome sequencing, and in particular to a scanned image correction method, device and equipment.
Background
In multicellular organisms, gene expression in individual cells occurs strictly in a specific temporal and spatial order, i.e., the gene expression is time-specific and space-specific.
For spatial specificity, gene expression information is currently localized to the original spatial position of genes mainly by spatial transcriptome sequencing techniques, which perform in situ expression analysis and histological analysis of tissue sections on a biochip. After the gene expression information of each spatial position is acquired, further analysis is often required against high-resolution tissue staining images or fluorescence images.
However, the full-frame scanned images acquired by current scanning instruments are more or less misaligned or overlapping, so the gene expression information captured by the biochip cannot be aligned with the region represented by the full-frame scanned image, which adversely affects the accuracy of the spatial position data.
Disclosure of Invention
The application provides a scanned image correction method, device and equipment, which are used to solve the problem that the gene expression information captured by a biochip cannot be aligned with the region represented by the full-frame scanning image.
In a first aspect, the present application provides a scanned image correction method, comprising:
Collecting a plurality of visual field scanning images of a biochip in a state that a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images;
according to the overlapping area between any adjacent visual field scanning images, carrying out fusion processing on the multiple visual field scanning images to obtain a full-frame scanning image of the biochip;
determining first position information of the plurality of key points in the full-width scanning image according to the substrate pattern;
and correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
In one possible implementation manner, the fusing processing is performed on the multiple view scan images according to the overlapping area between any adjacent view scan images to obtain a full scan image of the biochip, including:
determining the relative position between any adjacent visual field scanning images according to the overlapping area between any adjacent visual field scanning images;
and according to the relative positions between any adjacent visual field scanning images, carrying out fusion processing on the multiple visual field scanning images to obtain the full-frame scanning image.
In one possible embodiment, the determining the relative position between any adjacent field of view scanned images according to the overlapping region between any adjacent field of view scanned images includes:
determining a second field-of-view scan image adjacent to any first field-of-view scan image of the plurality of field-of-view scan images;
performing two-dimensional window sliding processing on the first visual field scanning image and the second visual field scanning image according to the overlapping area between the first visual field scanning image and the second visual field scanning image, and determining matching parameters between the first visual field scanning image and the second visual field scanning image;
and determining the relative position between the first visual field scanning image and the second visual field scanning image according to the matching parameters.
In one possible implementation manner, the fusing processing is performed on the multiple view scan images according to the relative position between any adjacent view scan images to obtain the full scan image, including:
determining the position of a reference visual field scanning image in the visual field scanning images, wherein the reference visual field scanning image is any one of the visual field scanning images;
Determining the positions of other visual field scanning images according to the positions of the reference visual field scanning images and the relative positions between any adjacent visual field scanning images, wherein the other visual field scanning images are visual field scanning images except the reference visual field scanning images in the plurality of visual field scanning images;
and carrying out fusion processing on the multiple visual field scanning images according to the positions of the reference visual field scanning images and the positions of the other visual field scanning images to obtain the full-frame scanning image.
In one possible implementation manner, the determining, according to the substrate pattern, first position information of the plurality of keypoints in the full-scan image includes:
determining a key point identification model corresponding to the substrate pattern;
inputting the full-width scanning image into the key point identification model to obtain the first position information;
the key point identification model is obtained by training based on a plurality of groups of training samples, any one group of training samples comprises a sample full-width scanning image and label information corresponding to the sample full-width image, the sample full-width scanning image comprises the substrate pattern, and the label information comprises position information of a plurality of key points in the sample full-width scanning image.
In one possible implementation manner, the correcting the full-scan image according to the first position information and the substrate pattern to obtain a target full-scan image includes:
determining second position information of the plurality of key points in the full-width scanning image according to the substrate pattern and the size of the full-width scanning image;
and correcting the full-frame scanned image according to the first position information and the second position information to obtain the target full-frame scanned image.
In one possible implementation manner, the correcting the full-scan image according to the first position information and the second position information to obtain the target full-scan image includes:
determining a correction conversion matrix according to the first position information and the second position information;
and carrying out mapping transformation processing on the full-frame scanning image according to the correction transformation matrix to obtain the target full-frame scanning image.
In a second aspect, the present application provides a scanned image correction device comprising:
the acquisition module is used for acquiring a plurality of visual field scanning images of the biochip in a state that the tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images;
The processing module is used for carrying out fusion processing on the multiple visual field scanning images according to the overlapping area between any adjacent visual field scanning images to obtain a full-width scanning image of the biochip;
a determining module, configured to determine first position information of the plurality of key points in the full-scan image according to the substrate pattern;
and the correction module is used for correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
In a possible implementation manner, the processing module is specifically configured to:
determining the relative position between any adjacent visual field scanning images according to the overlapping area between any adjacent visual field scanning images;
and according to the relative positions between any adjacent visual field scanning images, carrying out fusion processing on the multiple visual field scanning images to obtain the full-frame scanning image.
In a possible implementation manner, the processing module is specifically configured to:
determining a second field-of-view scan image adjacent to any first field-of-view scan image of the plurality of field-of-view scan images;
Performing two-dimensional window sliding processing on the first visual field scanning image and the second visual field scanning image according to the overlapping area between the first visual field scanning image and the second visual field scanning image, and determining matching parameters between the first visual field scanning image and the second visual field scanning image;
and determining the relative position between the first visual field scanning image and the second visual field scanning image according to the matching parameters.
In a possible implementation manner, the processing module is specifically configured to:
determining the position of a reference visual field scanning image in the visual field scanning images, wherein the reference visual field scanning image is any one of the visual field scanning images;
determining the positions of other visual field scanning images according to the positions of the reference visual field scanning images and the relative positions between any adjacent visual field scanning images, wherein the other visual field scanning images are visual field scanning images except the reference visual field scanning images in the plurality of visual field scanning images;
and carrying out fusion processing on the multiple visual field scanning images according to the positions of the reference visual field scanning images and the positions of the other visual field scanning images to obtain the full-frame scanning image.
In one possible implementation manner, the determining module is specifically configured to:
determining a key point identification model corresponding to the substrate pattern;
inputting the full-width scanning image into the key point identification model to obtain the first position information;
the key point identification model is obtained by training based on a plurality of groups of training samples, any one group of training samples comprises a sample full-width scanning image and label information corresponding to the sample full-width scanning image, the sample full-width scanning image comprises the substrate pattern, and the label information comprises position information of the plurality of key points in the sample full-width scanning image.
In one possible embodiment, the correction module is specifically configured to:
determining second position information of the plurality of key points in the full-width scanning image according to the substrate pattern and the size of the full-width scanning image;
and correcting the full-frame scanned image according to the first position information and the second position information to obtain the target full-frame scanned image.
In one possible embodiment, the correction module is specifically configured to:
Determining a correction conversion matrix according to the first position information and the second position information;
and carrying out mapping transformation processing on the full-frame scanning image according to the correction transformation matrix to obtain the target full-frame scanning image.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the scanned image correction method as in any one of the first aspects when executing the program.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the scanned image correction method as in any one of the first aspects.
According to the scanned image correction method, device and equipment provided by the embodiments of the application, a plurality of visual field scanning images of a biochip are collected in a state in which a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and an overlapping area exists between any two adjacent visual field scanning images; then, fusion processing is performed on the plurality of visual field scanning images according to the overlapping area between any two adjacent visual field scanning images to obtain a full-frame scanning image of the biochip, thereby stitching the visual field scanning images; further, first position information of the plurality of key points in the full-frame scanning image is determined according to the substrate pattern, and the full-frame scanning image is corrected according to the first position information and the substrate pattern to obtain a target full-frame scanning image. Image errors introduced during acquisition and fusion can be corrected through the first position information and the second position information of the plurality of key points in the full-frame scanning image, so that the gene expression information captured by the biochip is accurately aligned with the region represented by the full-frame scanning image.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a scan image correction method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a substrate pattern according to an embodiment of the present application;
FIG. 3 is a schematic diagram of acquiring a field scan image according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a field-of-view scanned image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a fusion process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a plurality of key points according to an embodiment of the present application;
fig. 7 is a schematic flow chart of a fusion process of a field scan image according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of a correction process according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a full-frame scanned image of a target provided by an embodiment of the present application;
FIG. 10 is an enlarged schematic view of a target full-frame scanned image after superimposing a base pattern according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a scanned image correction device according to an embodiment of the present application;
fig. 12 is a schematic entity structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
For single cells, if heterogeneity among cells is to be explored, the cells are usually dissociated into a single-cell suspension, and single-cell library construction is then achieved using single-cell separation techniques such as microwells, microplates and droplets.
Single-cell library construction by single-cell separation techniques and similar methods causes the cells to lose their original spatial information within the tissue, yet the spatial information of cells in tissue is very important in practical research, especially in studies of cell fate mechanisms and of spatial information related to cell lineages. Therefore, developing spatial transcriptome techniques that preserve the spatial information of cells is particularly important for the study of such cell states.
In multicellular organisms, gene expression in individual cells occurs strictly in a specific temporal and spatial order, i.e., the gene expression is time-specific and space-specific.
For temporal specificity, cell types and gene expression patterns in the time dimension can be resolved by collecting samples at different time points and applying single-cell transcriptome sequencing techniques.
For spatial specificity, it is relatively difficult to obtain the spatial information corresponding to cells. Conventional transcriptome sequencing and single-cell transcriptome sequencing can hardly restore the original spatial information of cells, and in situ hybridization technology can hardly achieve high-throughput detection; spatial transcriptome sequencing technology therefore emerged. Spatial transcriptome sequencing localizes gene expression information to the original spatial position of a gene through in situ expression analysis and histological analysis of tissue sections on a biochip.
After the gene expression information of each spatial position is acquired, further analysis is often required against high-resolution tissue staining images or fluorescence images. Specifically, a scanning instrument first scans the tissue on the biochip to obtain a corresponding full-frame scanned image, and the full-frame scanned image is then aligned with the gene expression information captured on the biochip.
However, the full-frame scanned images acquired by current scanning instruments are more or less misaligned or overlapping, so the gene expression information captured by the biochip cannot be aligned with the region represented by the full-frame scanned image. Especially for high-resolution biochips, even slight misalignment greatly affects the accuracy of the overall data.
Based on this, the embodiment of the present application provides a scan image correction method to achieve alignment between gene expression information captured by a biochip and a region represented by a full-scan image. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a scan image correction method according to an embodiment of the present application, as shown in fig. 1, including:
s11, collecting a plurality of visual field scanning images of the biochip in a state that the tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images.
A tissue slice is a section prepared from biological tissue, and may be, for example, a pathological tissue section. After the tissue slice is obtained, it is attached to the biochip according to a professional sectioning procedure, then treated with the corresponding reagents and fixed so that it can be scanned.
The biochip is a device for capturing the gene expression information of cells. It carries a specific substrate pattern, and the substrate patterns on different biochips may be the same or different. The substrate pattern includes, but is not limited to, a circular array, a square array, and the like.
Fig. 2 is a schematic view of a substrate pattern provided in an embodiment of the present application. As shown in fig. 2, region 21 of the biochip is shown enlarged as region 22, and sub-region 23 within region 22 is shown enlarged as sub-region 24. It can be seen in fig. 2 that the biochip comprises lines arranged vertically and horizontally, which together form the substrate pattern on the biochip.
The substrate pattern comprises a plurality of key points; the number of key points can be set as required, and the positions of the key points in the substrate pattern can also be set as required. For example, in fig. 2, the intersection points of the criss-crossing horizontal and vertical lines are taken as key points in the substrate pattern; as shown in fig. 2, point A, point B, point C and point D are all key points in the substrate pattern.
The biochip carries a plurality of biological barcodes, which may also be referred to as barcode spots, DNA barcodes, and so on. In fig. 2, the individual dots illustrated in sub-region 24 are the biological barcodes on the biochip.
The function of a biological barcode is to capture the gene expression information of a cell. In the embodiments of the present application, the biological barcodes on the biochip are used to capture the gene expression information of individual cells in the tissue slice. The biological barcodes on the biochip have completed capturing the gene expression information of each cell in the tissue slice before the tissue slice is attached to the biochip.
After the tissue slice is attached to the biochip and fixed, a plurality of visual field scanning images of the biochip can be acquired, wherein the visual fields of the different visual field scanning images are different, the visual field of any visual field scanning image is a local area of the biochip, the visual fields of the plurality of visual field scanning images jointly form a global area of the whole biochip, and an overlapping area exists between any two adjacent visual field scanning images.
Specifically, after the tissue slice is attached to the biochip and fixed, it is scanned by a dedicated scanner to obtain a plurality of visual field scanning images. During scanning, the position of the scanner changes accordingly, and as the scanner moves, the field of view of the scanned image changes with it. Optionally, the visual field scanning images are bright-field digital microscope scan images.
An exemplary implementation of a scanner scanning tissue slices is described below in connection with fig. 3.
Fig. 3 is a schematic diagram of acquiring a field scan image according to an embodiment of the present application, as shown in fig. 3, including a biochip 31 and a scanner (not shown).
The scanner first scans local region 311 of the biochip 31 to obtain the visual field scanning image corresponding to local region 311, then moves to scan local region 312 of the biochip 31 to obtain the visual field scanning image corresponding to local region 312, and so on. Local region 311 is adjacent to local region 312, and there is an overlapping region 300 between the visual field scanning image corresponding to local region 311 and the visual field scanning image corresponding to local region 312.
The scanning order of the scanner can be preset; the scanner scans the different local regions of the biochip 31 according to the preset scanning order to obtain the corresponding visual field scanning images.
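As an illustration of how such a preset scanning order with overlapping fields of view might be planned (this is not part of the patent; the serpentine ordering, field size and 10% overlap are assumptions made for the sketch):

# Minimal sketch of planning a preset scan order with overlapping fields of
# view. Field size, overlap ratio and the serpentine ordering are assumptions
# made for illustration, not details taken from the patent.
def plan_scan_positions(chip_w, chip_h, fov_w, fov_h, overlap=0.10):
    """Return the top-left (x, y) origin of each field of view in scan order."""
    step_x = int(fov_w * (1.0 - overlap))  # horizontal stride between fields
    step_y = int(fov_h * (1.0 - overlap))  # vertical stride between fields
    xs = list(range(0, max(chip_w - fov_w, 0) + 1, step_x))
    ys = list(range(0, max(chip_h - fov_h, 0) + 1, step_y))
    positions = []
    for row, y in enumerate(ys):
        row_xs = xs if row % 2 == 0 else list(reversed(xs))  # serpentine path
        positions.extend((x, y) for x in row_xs)
    return positions

if __name__ == "__main__":
    # e.g. a 10 mm x 10 mm chip imaged with 1 mm x 1 mm fields and 10% overlap
    print(plan_scan_positions(10_000, 10_000, 1_000, 1_000)[:5])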
S12, according to the overlapping area between any adjacent visual field scanning images, fusion processing is carried out on the visual field scanning images, and the full-frame scanning image of the biochip is obtained.
Fig. 4 is a schematic diagram of field-of-view scanned images provided in an embodiment of the present application. As shown in fig. 4, after scanning by the scanner, a plurality of field-of-view scanned images are obtained; these images can then be arranged according to the scanner's scanning order and position numbers, so that the position of each field-of-view scanned image corresponds to its actual spatial position on the biochip.
For any two adjacent view scan images in the plurality of view scan images, since there is an overlapping area between the two view scan images, the two view scan images can be fused according to the overlapping area, and a fused image can be obtained.
The process of fusion can be seen in fig. 5.
Fig. 5 is a schematic diagram of a fusion process provided in an embodiment of the present application, as shown in fig. 5, illustrating a fusion process between 4 field-of-view scanned images, which are image 51, image 52, image 53, and image 54, respectively.
Wherein the image 51 and the image 52 are adjacent, and there is an overlapping area to the right of the image 51 and the left of the image 52; image 51 is adjacent to image 53, and there is an overlapping area between the lower side of image 51 and the upper side of image 53; image 53 is adjacent to image 54, with an overlapping region to the right of image 53 and to the left of image 54; image 52 is adjacent to image 54, with an overlap region between the lower edge of image 52 and the upper edge of image 54. The respective overlapping areas are illustrated in fig. 5.
The fusion process is a process of fusing each view scan image into one image according to the overlapping region. After fusion, the original overlapping regions are merged into one region. For example, in fig. 5, image 51, image 52, image 53, and image 54 are fused to obtain image 55.
The above implementation can be used to fuse every pair of adjacent field-of-view scanned images. After all the field-of-view scanned images have been fused, the corresponding full-frame scanned image is obtained; its field of view is the global region of the biochip and includes the complete tissue section.
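For two adjacent fields whose overlap is known, merging the shared strip can be as simple as averaging it. The sketch below is an illustration under that assumption, not the patent's fusion algorithm (which is not specified beyond merging the overlapping regions); it fuses two horizontally adjacent grayscale tiles.

import numpy as np

def fuse_horizontal(left, right, overlap_px):
    """Fuse two horizontally adjacent grayscale field images whose shared
    vertical strip is `overlap_px` columns wide, averaging the overlap."""
    h = min(left.shape[0], right.shape[0])          # crop to a common height
    left = left[:h].astype(np.float32)
    right = right[:h].astype(np.float32)
    blended = (left[:, -overlap_px:] + right[:, :overlap_px]) / 2.0
    fused = np.hstack([left[:, :-overlap_px], blended, right[:, overlap_px:]])
    return np.clip(fused, 0, 255).astype(np.uint8)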
S13, determining first position information of a plurality of key points in the full-scan image according to the substrate pattern.
In one possible implementation, for the obtained full scan image, the position information of the substrate pattern in the full scan image may be identified, and then the positions of the plurality of key points relative to the substrate pattern are combined to determine the first position information of the plurality of key points in the full scan image.
In one possible implementation, the first location information of the plurality of keypoints in the full scan image may also be determined by a keypoint identification model.
Specifically, first, a key point recognition model corresponding to a substrate pattern is determined. The key point recognition model is obtained by training a plurality of groups of training samples in advance, and any group of training samples comprises a sample full-scan image and label information corresponding to the sample full-scan image.
The sample full-width scanning image is an image obtained by fusing the field-of-view scanned images of a sample tissue slice scanned by the scanner. The sample tissue slice is attached to a sample biochip, and the sample biochip includes the substrate pattern, so the sample full-width scanning image also includes the substrate pattern. The substrate pattern includes a plurality of key points, and the label information includes the position information of the plurality of key points in the sample full-width scanning image. The label information may be in the form of an image, of data, or both: the image form refers to an image obtained by labeling the key points in the sample full-width scanning image, and the data form refers to the positions of the plurality of key points in the sample full-width scanning image recorded as coordinate data.
Fig. 6 is a schematic diagram of a plurality of key points provided in an embodiment of the present application, and as shown in fig. 6, a plurality of key points on a substrate pattern are illustrated. In fig. 6, the key points are the intersections of the lines, i.e., the black points in fig. 6.
Fig. 6 shows, as an example, an image obtained by labeling (part of) a sample full-width scanning image. This labeled image may be used directly as the label information of the sample full-width scanning image, or the coordinate information of the plurality of key points may be determined from the labeled image and used as the label information.
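The patent does not specify the architecture of the key point identification model or the form of its output. Assuming, purely for illustration, that the model outputs a per-pixel keypoint heatmap, the first position information could be read off with a post-processing step such as the following sketch:

import numpy as np
from scipy.ndimage import maximum_filter

def heatmap_to_keypoints(heatmap, threshold=0.5, window=5):
    """Turn a predicted keypoint heatmap (H x W, values in [0, 1]) into
    (row, col) coordinates by keeping local maxima above `threshold`.
    The heatmap output format is an assumption of this sketch; the patent
    only states that the model returns the first position information."""
    local_max = maximum_filter(heatmap, size=window) == heatmap
    peaks = local_max & (heatmap > threshold)
    rows, cols = np.nonzero(peaks)
    return np.stack([rows, cols], axis=1)  # one (row, col) pair per key point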
S14, correcting the full-frame scanned image according to the first position information and the substrate pattern to obtain a target full-frame scanned image.
The first position information is the position information of the plurality of key points actually determined on the full-frame scanned image. The substrate pattern comprises a plurality of key points, and the positions at which key points appear, the number of key points and other information can all be determined from the design of the substrate pattern. The full-frame scanned image is corrected based on the key point information given by the design of the substrate pattern and the first position information of the plurality of key points in the full-frame scanned image, so as to obtain the target full-frame scanned image. Since the target full-frame scanned image is obtained by correcting the full-frame scanned image based on the first position information and the substrate pattern, the target full-frame scanned image is aligned with the biochip, and accurate alignment between the gene expression information captured by the biochip and the region represented by the full-frame scanned image is thereby achieved.
On the basis of any one of the above embodiments, the following describes the scheme of the embodiment of the application in detail with reference to the accompanying drawings.
Fig. 7 is a schematic flow chart of a fusion process of a field-of-view scanned image according to an embodiment of the present application, as shown in fig. 7, including:
S71, determining the relative position between any adjacent visual field scanning images according to the overlapping area between any adjacent visual field scanning images.
For any first field-of-view scan image of the plurality of field-of-view scan images, first determining a second field-of-view scan image adjacent to the first field-of-view scan image, wherein the number of second field-of-view scan images adjacent to the first field-of-view scan image may be one or more. In one possible implementation, the second view scan image may be all view scan images adjacent to the first view scan image, or may be a partial view scan image adjacent to the first view scan image, for example, including view scan images located to the left and/or above the first view scan image, which is not limited by the embodiments of the present application.
After the second visual field scanning image is determined, performing two-dimensional window sliding processing on the first visual field scanning image and the second visual field scanning image according to the overlapping area between the first visual field scanning image and the second visual field scanning image, and determining matching parameters between the first visual field scanning image and the second visual field scanning image. Wherein the matching parameter is used to indicate a best matching position between the first field of view scanned image and the second field of view scanned image.
The two-dimensional window sliding process may be, for example, a process of overlapping the first view scan image and the second view scan image first, and then sliding the second view scan image according to a preset step length, so that the relative positions of the first view scan image and the second view scan image are changed. Then, matching is performed on the portion of the first field-of-view scanned image and the second field-of-view scanned image in the same region (i.e., the overlapping region of the first field-of-view scanned image and the second field-of-view scanned image), and the similarity between the portion of the first field-of-view scanned image in the region and the portion of the second field-of-view scanned image in the region is determined. In the case where the similarity exceeds a preset threshold, the matching position at this time is determined as the best matching position between the first field-of-view scanned image and the second field-of-view scanned image. The relative position between the first field-of-view scanned image and the second field-of-view scanned image can be determined from the position of the first field-of-view scanned image and the position of the second field-of-view scanned image at this time.
The relative position may be expressed as relative coordinates. For example, taking the upper left corner of the first field-of-view scanned image as the reference, the relative coordinates of the upper left corner of the second field-of-view scanned image with respect to that corner can be determined from the relative position between the two images; once the position of the first field-of-view scanned image is known, the position of the second can then be determined from these relative coordinates. Alternatively, the upper right corner of the first field-of-view scanned image may be used as the reference and the relative coordinates of the upper left corner of the second image determined with respect to it; or the lower left corner of the first image may be used as the reference and the relative coordinates of the lower left corner of the second image determined with respect to it, and so on, which is not limited by the embodiments of the present application.
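A minimal sketch of the two-dimensional window sliding described above, for a pair of horizontally adjacent grayscale fields of the same size: the second image is slid around a nominal offset, the overlapping strips are compared, and the offset with the highest similarity is taken as the relative position. The nominal offset, the search radius and the use of normalized cross-correlation as the matching parameter are assumptions made for this sketch.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_offset(first, second, nominal_dx, search=10):
    """Slide `second` around its nominal horizontal offset `nominal_dx`
    relative to `first` and return (best_similarity, dx, dy)."""
    h, w = first.shape
    best = (-1.0, nominal_dx, 0)
    for dy in range(-search, search + 1):
        for dx in range(max(1, nominal_dx - search),
                        min(w - 1, nominal_dx + search) + 1):
            ov_w = w - dx                       # width of the overlapping strip
            y0, y1 = max(0, dy), min(h, h + dy)
            a = first[y0:y1, dx:]               # right edge of the first image
            b = second[y0 - dy:y1 - dy, :ov_w]  # left edge of the second image
            score = ncc(a, b)
            if score > best[0]:
                best = (score, dx, dy)
    return best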
S72, according to the relative positions of any adjacent visual field scanning images, fusion processing is carried out on the visual field scanning images, and a full-frame scanning image is obtained.
Among the plurality of field-of-view scanned images, the position of a reference field-of-view scanned image is determined first. The reference field-of-view scanned image is any one of the plurality of field-of-view scanned images, for example the 1st, 2nd or 3rd. The position of the reference field-of-view scanned image may be represented by its coordinates, which in turn may be represented by the coordinates of a point on the image, for example its upper left, upper right, lower left or lower right corner, or its center point.
After the position of the reference field-of-view scan image is determined, since the relative position between any adjacent field-of-view scan images has been previously determined, the positions of other field-of-view scan images can be determined from the position of the reference field-of-view scan image and the relative position between any adjacent field-of-view scan images, and the positions of other field-of-view scan images can be represented by the coordinates of the other field-of-view scan images.
Take the case in which the reference field-of-view scanned image is the first field-of-view scanned image, the images adjacent to the first are the second and the fifth, the second is adjacent to the third, and the fifth is adjacent to the fourth and the sixth. After the absolute coordinates of the first field-of-view scanned image are determined, the absolute coordinates of the second can be determined from the relative positional relationship between the first and the second, and the absolute coordinates of the fifth can be determined from the relative positional relationship between the first and the fifth.
Further, after the absolute coordinates of the second field-of-view scanned image are determined, the absolute coordinates of the third can be determined from the relative positional relationship between the second and the third; after the absolute coordinates of the fifth are determined, the absolute coordinates of the fourth and the sixth can be determined from their relative positional relationships with the fifth. In this way, the absolute coordinates of all the field-of-view scanned images can be determined.
In summary, after the position of the reference field-of-view scanned image is determined, the positions of the other field-of-view scanned images can be determined by combining it with the relative positions between adjacent field-of-view scanned images. The plurality of field-of-view scanned images are then fused according to the position of the reference field-of-view scanned image and the positions of the other field-of-view scanned images to obtain the full-frame scanned image.
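A sketch of propagating absolute positions from the reference field, given the pairwise relative offsets determined above. The dictionary representation and the breadth-first traversal are implementation choices of this sketch, not prescribed by the patent.

from collections import deque

def absolute_positions(relative_offsets, reference=0):
    """Propagate absolute top-left coordinates to every field image.

    `relative_offsets[(i, j)] = (dx, dy)` is the offset of field j's top-left
    corner with respect to field i's; the reference field is placed at (0, 0)
    and positions spread outwards over the adjacency graph."""
    adjacency = {}
    for (i, j), (dx, dy) in relative_offsets.items():
        adjacency.setdefault(i, []).append((j, dx, dy))
        adjacency.setdefault(j, []).append((i, -dx, -dy))
    positions = {reference: (0, 0)}
    queue = deque([reference])
    while queue:
        i = queue.popleft()
        xi, yi = positions[i]
        for j, dx, dy in adjacency.get(i, []):
            if j not in positions:
                positions[j] = (xi + dx, yi + dy)
                queue.append(j)
    return positions

# e.g. field 1 lies 900 px to the right of field 0, field 2 lies 900 px below it
print(absolute_positions({(0, 1): (900, 0), (0, 2): (0, 900)}))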
After the full scan image is obtained, determining the first position information of the plurality of key points in the full scan image according to the substrate pattern, and the specific implementation process may refer to the description related to S13, which is not repeated herein.
After the first position information of the plurality of key points in the full-scan image is obtained, the full-scan image is corrected according to the first position information and the substrate pattern on the biochip to obtain a target full-scan image, which is described below with reference to fig. 8.
Fig. 8 is a schematic flow chart of a correction process according to an embodiment of the present application, as shown in fig. 8, including:
s81, determining second position information of the plurality of key points in the full-scan image according to the substrate pattern and the size of the full-scan image.
The first position information is the position information of the plurality of key points obtained by recognition on the actually acquired full-frame scanned image. Correction is required because the obtained full-frame scanned image deviates to some extent from the theoretical full-frame scanned image of the biochip for various reasons, such as a certain degree of distortion during scanning and errors introduced in the fusion process.
According to the substrate pattern on the biochip and the size of the full-frame scanned image, the second position information of the plurality of key points in the full-frame scanned image can be determined; the second position information gives the theoretical positions of the key points in the full-frame scanned image. Specifically, the full-frame scanned image is scaled according to its size and the size of the biochip so that the two sizes match, and the second position information is then determined from the design of the substrate pattern and the scaling transformation applied to the image (which can be represented by a matrix). For example, suppose the full-frame scanned image is scaled to obtain an image matching the biochip, recorded as image A. If, according to the design of the substrate pattern, a key point exists at point B of the biochip, the corresponding point C on image A can be determined from the position of point B on the biochip; the position of point C on image A is the same as the position of point B on the biochip. Since image A is obtained by scaling the full-frame scanned image, applying the inverse transformation to image A recovers the original full-frame scanned image, and the position of point C on the full-frame scanned image, i.e. the second position information of this key point, can thus be determined.
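The second position information can be sketched as a simple scaling from chip-design coordinates to image pixels. The units and the assumption of a uniform, axis-aligned scaling are illustrative; the patent only requires that the transformation be representable by a matrix.

import numpy as np

def theoretical_keypoints(design_points, chip_size, image_size_px):
    """Map keypoint coordinates from the substrate-pattern design (in chip
    units, e.g. micrometres) to pixel coordinates in the full-frame scanned
    image by scaling chip size to image size."""
    scale = np.array([image_size_px[0] / chip_size[0],
                      image_size_px[1] / chip_size[1]], dtype=np.float64)
    return np.asarray(design_points, dtype=np.float64) * scale

# e.g. a key point designed at (5000 um, 5000 um) on a 10 mm x 10 mm chip
# imaged at 20000 x 20000 px maps to pixel (10000, 10000)
print(theoretical_keypoints([(5000, 5000)], (10000, 10000), (20000, 20000)))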
S82, correcting the full-frame scanned image according to the first position information and the second position information to obtain a target full-frame scanned image.
After the first position information and the second position information of the plurality of key points are obtained, a correction conversion matrix for realizing conversion between the first position information and the second position information can be determined based on the first position information and the second position information.
Then, mapping transformation processing is performed on the full-frame scanned image according to the correction transformation matrix to obtain a final, error-free full-frame scanned image. The correction transformation matrix describes the transformation from the first position information to the second position information: the first position information is the positions of the plurality of key points determined from the fused full-frame scanned image, the second position information is their theoretical positions in the full-frame scanned image, and moving the key points from the first positions to the second positions corrects them. The correction transformation matrix determined from the first and second position information therefore characterizes the transformation parameters from the first position information to the second position information, so applying the corresponding mapping transformation to the full-frame scanned image corrects the image and yields the corrected target full-frame scanned image. The mapping transformation performed on the full-frame scanned image based on the correction transformation matrix multiplies the pixel matrix of the full-frame scanned image by the correction transformation matrix, and may include scaling the image, translating the image and the like, which is not limited by the embodiments of the present application.
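A minimal sketch of this correction step using OpenCV: a transform is estimated from the detected (first) keypoint positions to the theoretical (second) positions and applied to the full-frame scanned image. The choice of a RANSAC-estimated homography is an assumption of this sketch; the patent only speaks of a correction transformation matrix and a mapping transformation.

import cv2
import numpy as np

def correct_full_scan(full_scan, first_positions, second_positions):
    """Estimate the correction transformation matrix from the first (detected)
    to the second (theoretical) keypoint positions and warp the image."""
    src = np.asarray(first_positions, dtype=np.float32)
    dst = np.asarray(second_positions, dtype=np.float32)
    # RANSAC makes the estimate robust to a few mis-detected key points.
    matrix, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = full_scan.shape[:2]
    corrected = cv2.warpPerspective(full_scan, matrix, (w, h))
    return corrected, matrix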
Fig. 9 is a schematic diagram of a target full-frame scanned image provided by an embodiment of the present application, and fig. 10 is an enlarged schematic view of a target full-frame scanned image after superimposing the substrate pattern, provided by an embodiment of the present application. As shown in fig. 9, after the correction processing, a complete target full-frame scanned image is obtained. As shown in fig. 10, to evaluate the accuracy of the correction, the substrate pattern of the biochip is superimposed on the obtained target full-frame scanned image. The left example in fig. 10 is an image obtained without the scanned image correction method of the embodiments of the present application; it can be seen that without correction there is ghosting or error in the image, and the accuracy is low. The right example in fig. 10 is an image obtained with the scanned image correction method of the embodiments of the present application; with correction, the patterns overlap completely and no ghosting is present, so the accuracy is high.
According to the scanned image correction method provided by the embodiments of the application, a plurality of field-of-view scanned images of a biochip are collected in a state in which a tissue slice is attached to the biochip, the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and an overlapping area exists between any two adjacent field-of-view scanned images; then, fusion processing is performed on the plurality of field-of-view scanned images according to the overlapping area between any two adjacent field-of-view scanned images to obtain a full-frame scanned image of the biochip, thereby stitching the field-of-view scanned images; further, first position information of the plurality of key points in the full-frame scanned image is determined according to the substrate pattern, and the full-frame scanned image is corrected according to the first position information and the substrate pattern to obtain a target full-frame scanned image. Image errors introduced during acquisition and fusion can be corrected through the first position information and the second position information of the plurality of key points in the full-frame scanned image, so that the gene expression information captured by the biochip is accurately aligned with the region represented by the full-frame scanned image, meeting the high-precision alignment requirements of subsequent analysis.
Fig. 11 is a schematic structural diagram of a scanned image correction device according to an embodiment of the present application, as shown in fig. 11, including:
an acquisition module 111, configured to acquire a plurality of view scan images of a biochip in a state where a tissue slice is attached to the biochip, where the biochip includes a substrate pattern, the substrate pattern includes a plurality of key points, and an overlapping area exists between any adjacent view scan images;
a processing module 112, configured to perform fusion processing on the multiple view scan images according to an overlapping region between any adjacent view scan images, so as to obtain a full-frame scan image of the biochip;
a determining module 113, configured to determine first position information of the plurality of key points in the full-scan image according to the substrate pattern;
and the correction module 114 is configured to perform correction processing on the full-frame scanned image according to the first position information and the substrate pattern, so as to obtain a target full-frame scanned image.
In one possible implementation, the processing module 112 is specifically configured to:
determining the relative position between any adjacent visual field scanning images according to the overlapping area between any adjacent visual field scanning images;
And according to the relative positions between any adjacent visual field scanning images, carrying out fusion processing on the multiple visual field scanning images to obtain the full-frame scanning image.
In one possible implementation, the processing module 112 is specifically configured to:
determining a second field-of-view scan image adjacent to any first field-of-view scan image of the plurality of field-of-view scan images;
performing two-dimensional window sliding processing on the first visual field scanning image and the second visual field scanning image according to the overlapping area between the first visual field scanning image and the second visual field scanning image, and determining matching parameters between the first visual field scanning image and the second visual field scanning image;
and determining the relative position between the first visual field scanning image and the second visual field scanning image according to the matching parameters.
In one possible implementation, the processing module 112 is specifically configured to:
determining the position of a reference visual field scanning image in the visual field scanning images, wherein the reference visual field scanning image is any one of the visual field scanning images;
determining the positions of other visual field scanning images according to the positions of the reference visual field scanning images and the relative positions between any adjacent visual field scanning images, wherein the other visual field scanning images are visual field scanning images except the reference visual field scanning images in the plurality of visual field scanning images;
And carrying out fusion processing on the multiple visual field scanning images according to the positions of the reference visual field scanning images and the positions of the other visual field scanning images to obtain the full-frame scanning image.
In one possible implementation, the determining module 113 is specifically configured to:
determining a key point identification model corresponding to the substrate pattern;
inputting the full-width scanning image into the key point identification model to obtain the first position information;
the key point identification model is obtained by training based on a plurality of groups of training samples, any one group of training samples comprises a sample full-width scanning image and label information corresponding to the sample full-width scanning image, the sample full-width scanning image comprises the substrate pattern, and the label information comprises position information of the plurality of key points in the sample full-width scanning image.
In one possible implementation, the correction module 114 is specifically configured to:
determining second position information of the plurality of key points in the full-width scanning image according to the substrate pattern and the size of the full-width scanning image;
and correcting the full-frame scanned image according to the first position information and the second position information to obtain the target full-frame scanned image.
In one possible implementation, the correction module 114 is specifically configured to:
determining a correction conversion matrix according to the first position information and the second position information;
and carrying out mapping transformation processing on the full-frame scanning image according to the correction transformation matrix to obtain the target full-frame scanning image.
The scan image correction apparatus provided in the embodiment of the present application is used for executing the above method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein.
Fig. 12 illustrates a physical structure diagram of an electronic device, as shown in fig. 12, which may include: processor 1210, communication interface (Communications Interface) 1220, memory 1230 and communication bus 1240, wherein processor 1210, communication interface 1220 and memory 1230 communicate with each other via communication bus 1240. Processor 1210 may invoke logic instructions in memory 1230 to perform a scanned image correction method comprising: collecting a plurality of visual field scanning images of a biochip in a state that a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images; according to the overlapping area between any adjacent visual field scanning images, carrying out fusion processing on the multiple visual field scanning images to obtain a full-frame scanning image of the biochip; determining first position information of the plurality of key points in the full-width scanning image according to the substrate pattern; and correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
In addition, the logic instructions in the memory 1230 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present application further provides a computer program product, the computer program product comprising a computer program, the computer program being storable on a non-transitory computer-readable storage medium; when the computer program is executed by a processor, the computer can perform the scanned image correction method provided in the above embodiments, the method comprising: collecting a plurality of visual field scanning images of a biochip in a state that a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images; carrying out fusion processing on the plurality of visual field scanning images according to the overlapping area between any adjacent visual field scanning images to obtain a full-frame scanning image of the biochip; determining first position information of the plurality of key points in the full-frame scanning image according to the substrate pattern; and correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
In yet another aspect, the present application further provides a non-transitory computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the scanned image correction method provided by the above embodiments, the method comprising: collecting a plurality of visual field scanning images of a biochip in a state that a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images; carrying out fusion processing on the plurality of visual field scanning images according to the overlapping area between any adjacent visual field scanning images to obtain a full-frame scanning image of the biochip; determining first position information of the plurality of key points in the full-frame scanning image according to the substrate pattern; and correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which comprises several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A scanned image correction method, comprising:
collecting a plurality of visual field scanning images of a biochip in a state that a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images;
according to the overlapping area between any adjacent visual field scanning images, carrying out fusion processing on the plurality of visual field scanning images to obtain a full-frame scanning image of the biochip;
determining first position information of the plurality of key points in the full-frame scanning image according to the substrate pattern;
and correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
2. The method according to claim 1, wherein the carrying out fusion processing on the plurality of visual field scanning images according to the overlapping area between any adjacent visual field scanning images to obtain the full-frame scanning image of the biochip comprises:
determining the relative position between any adjacent visual field scanning images according to the overlapping area between any adjacent visual field scanning images;
and carrying out fusion processing on the plurality of visual field scanning images according to the relative positions between any adjacent visual field scanning images to obtain the full-frame scanning image.
3. The method according to claim 2, wherein the determining the relative position between any adjacent visual field scanning images according to the overlapping area between any adjacent visual field scanning images comprises:
determining a second visual field scanning image adjacent to any first visual field scanning image among the plurality of visual field scanning images;
performing two-dimensional window sliding processing on the first visual field scanning image and the second visual field scanning image according to the overlapping area between the first visual field scanning image and the second visual field scanning image, and determining matching parameters between the first visual field scanning image and the second visual field scanning image;
and determining the relative position between the first visual field scanning image and the second visual field scanning image according to the matching parameters.
4. The method according to claim 2, wherein the carrying out fusion processing on the plurality of visual field scanning images according to the relative positions between any adjacent visual field scanning images to obtain the full-frame scanning image comprises:
determining the position of a reference visual field scanning image among the plurality of visual field scanning images, wherein the reference visual field scanning image is any one of the plurality of visual field scanning images;
determining the positions of other visual field scanning images according to the position of the reference visual field scanning image and the relative positions between any adjacent visual field scanning images, wherein the other visual field scanning images are the visual field scanning images other than the reference visual field scanning image among the plurality of visual field scanning images;
and carrying out fusion processing on the plurality of visual field scanning images according to the position of the reference visual field scanning image and the positions of the other visual field scanning images to obtain the full-frame scanning image.
5. The method according to any one of claims 1 to 4, wherein the determining first position information of the plurality of key points in the full-frame scanning image according to the substrate pattern comprises:
determining a key point identification model corresponding to the substrate pattern;
inputting the full-frame scanning image into the key point identification model to obtain the first position information;
wherein the key point identification model is trained on a plurality of groups of training samples, any one group of training samples comprises a sample full-frame scanning image and label information corresponding to the sample full-frame scanning image, the sample full-frame scanning image comprises the substrate pattern, and the label information comprises position information of a plurality of key points in the sample full-frame scanning image.
6. The method according to any one of claims 1 to 4, wherein the correcting the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image comprises:
determining second position information of the plurality of key points in the full-frame scanning image according to the substrate pattern and the size of the full-frame scanning image;
and correcting the full-frame scanning image according to the first position information and the second position information to obtain the target full-frame scanning image.
7. The method according to claim 6, wherein the correcting the full-frame scanning image according to the first position information and the second position information to obtain the target full-frame scanning image comprises:
determining a correction transformation matrix according to the first position information and the second position information;
and carrying out mapping transformation processing on the full-frame scanning image according to the correction transformation matrix to obtain the target full-frame scanning image.
8. A scanned image correction device, comprising:
an acquisition module, configured to collect a plurality of visual field scanning images of a biochip in a state that a tissue slice is attached to the biochip, wherein the biochip comprises a substrate pattern, the substrate pattern comprises a plurality of key points, and overlapping areas exist between any two adjacent visual field scanning images;
a processing module, configured to carry out fusion processing on the plurality of visual field scanning images according to the overlapping area between any adjacent visual field scanning images to obtain a full-frame scanning image of the biochip;
a determining module, configured to determine first position information of the plurality of key points in the full-frame scanning image according to the substrate pattern;
and a correction module, configured to correct the full-frame scanning image according to the first position information and the substrate pattern to obtain a target full-frame scanning image.
9. An electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the scanned image correction method according to any one of claims 1 to 7.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the scanned image correction method according to any one of claims 1 to 7.
CN202310758687.4A 2023-06-26 2023-06-26 Scanned image correction method, device and equipment Pending CN116823655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310758687.4A CN116823655A (en) 2023-06-26 2023-06-26 Scanned image correction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310758687.4A CN116823655A (en) 2023-06-26 2023-06-26 Scanned image correction method, device and equipment

Publications (1)

Publication Number Publication Date
CN116823655A (en) 2023-09-29

Family

ID=88123568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310758687.4A Pending CN116823655A (en) 2023-06-26 2023-06-26 Scanned image correction method, device and equipment

Country Status (1)

Country Link
CN (1) CN116823655A (en)

Similar Documents

Publication Publication Date Title
US7940998B2 (en) System and method for re-locating an object in a sample on a slide with a microscope imaging device
EP2095332B1 (en) Feature-based registration of sectional images
US6980677B2 (en) Method, system, and computer code for finding spots defined in biological microarrays
US6587586B1 (en) Extracting textual information from a video sequence
CN103543277B (en) A kind of blood group result recognizer based on gray analysis and category identification
EP3779868B1 (en) Fluorescence image registration method, gene sequencing instrument and system, and storage medium
EP3341890B1 (en) Tissue microarray analysis
CN104395759B (en) The method and system recognized for graphical analysis
US7551762B2 (en) Method and system for automatic vision inspection and classification of microarray slides
EP2089831B1 (en) Method to automatically decode microarray images
CN101840499A (en) Bar code decoding method and binarization method thereof
JP5246201B2 (en) Image processing apparatus and image processing program
CN116823655A (en) Scanned image correction method, device and equipment
KR20110053416A (en) Method and apparatus for imaging of features on a substrate
CN102369553A (en) Microscopy
CN116823607A (en) Scanned image processing method, device and equipment
Bowman et al. Automated analysis of gene-microarray images
Paulik et al. Staining Independent Nonrigid Iterative Registration Method, for Microscopic Samples
CN115331735B (en) Chip decoding method and device
US20240257320A1 (en) Jitter correction image analysis
Venkataraman et al. Automated image analysis of fluorescence microscopic images to identify protein-protein interactions
Figueroa et al. Robust spots finding in microarray images with distortions
Memarian et al. Automated System for Image Analysis of Yeast Colonies: A Novel Application in Functional Genomics
Maji Image skew detection and correction in regular images and document images
KR20050048727A (en) System and method for bochip image analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination