CN112819735B - Real-time large-scale image synthesis algorithm of microscope system - Google Patents


Info

Publication number
CN112819735B
CN112819735B (application CN202011623367.0A)
Authority
CN
China
Prior art keywords
image
matching
image synthesis
matching pair
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011623367.0A
Other languages
Chinese (zh)
Other versions
CN112819735A (en)
Inventor
李磊 (Li Lei)
刘淑斌 (Liu Shubin)
郑若鹏 (Zheng Ruopeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202011623367.0A priority Critical patent/CN112819735B/en
Publication of CN112819735A publication Critical patent/CN112819735A/en
Application granted granted Critical
Publication of CN112819735B publication Critical patent/CN112819735B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10056 - Microscopic image
    • G06T2207/10061 - Microscopic image from scanning electron microscope
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time large-scale image synthesis algorithm for a microscope system, relating to the technical fields of microscopic optical imaging and image processing. The algorithm adopts a convenient and fast multi-thread parallel image acquisition mode. Features are extracted only from the ROI using multithreading, which avoids computing feature points that contribute nothing to the composite image, greatly reduces computation time and steps, avoids the influence of erroneous feature points, and improves the accuracy and efficiency of feature extraction. In a GPU-accelerated environment, the matching pairs of all images are obtained in parallel in dense-to-sparse order, which greatly reduces computation in feature-sparse regions, gives priority to matching dense feature sets, and improves processing efficiency and accuracy. The optimal matching pairs are screened based on the length and slope of a reference line, simplifying the calculation. The algorithm readily achieves real-time large-scale image synthesis for a microscope system.

Description

Real-time large-scale image synthesis algorithm of microscope system
Technical Field
The invention relates to the technical field of micro-optical imaging and image processing, in particular to a real-time large-scale image synthesis algorithm for a microscope system.
Background
Due to the limitations of the optical imaging principle, the field of view of a conventional optical microscope is confined to a relatively small range: for most conventional optical microscopes (10×) it is limited to below 2.5 mm, and microscopic imaging relies on relatively clumsy scanning modes (lens scanning, turntable scanning) and moving-stage modes. This not only brings great inconvenience to observing a specific area of a sample, especially examining the local details of a region of interest, but also makes it impossible to rapidly observe tiny objects over a large range. Processing time in the microscopy domain is at a premium, and having to go back and process the data only after scanning is finished is highly inefficient. Although image synthesis algorithms exist that can be used for microscopic imaging, conventional image synthesis algorithms either cannot image in real time because they are slow, so a collect-store-then-post-process workflow is often adopted, or cannot form large-scale images because of heavy computation and low precision, which greatly hinders real-time large-scale microscopic imaging. Therefore, a fast, effective image synthesis algorithm capable of real-time large-scale imaging is urgently needed, i.e., a microscopic image synthesis algorithm with a high video frame rate and high synthesis precision that meets engineering requirements; such an algorithm can be widely applied in research fields such as life science, medicine, and micro-nano optics.
Disclosure of Invention
The present invention aims to provide a real-time large-scale image synthesis algorithm for a microscopy system that alleviates the above problems.
In order to alleviate the above problems, the technical scheme adopted by the invention is as follows:
the invention provides a real-time large-scale image synthesis algorithm of a microscope system, which comprises the following steps:
s1, parallelly collecting a plurality of images of different visual fields of the sample to form an image sequence S, wherein two adjacent images have an overlapping area, and W is any one image in S;
s2, determining the ROI of each image in S, and setting WROI to represent the ROI of W;
s3, extracting feature point sets of all image ROIs in S in parallel, extracting the feature point sets of the WROI according to the image to be matched of W, recording the image to be matched as WD, wherein the image to be matched refers to the image which has an overlapped area with W in S, and setting WDROI to represent the interested area of WD;
s4, under the GPU acceleration environment, M matching pair sets of each image in S are solved in parallel,
for W, the method for acquiring M matching pairs is as follows:
S41, calculating the distribution density of the feature points of the WROI based on a probability density function according to the feature point set of the WROI, dividing the WROI into N sub-regions in dense-to-sparse order according to this distribution density, taking the first M sub-regions as valid sub-regions, and placing the valid sub-regions of the WROI in one-to-one correspondence with the valid sub-regions of the WDROI in dense-to-sparse order;
S42, for each valid sub-region of the WROI, matching its feature points with the feature points of its corresponding valid sub-region to obtain a plurality of matching pairs, which form the matching pair set of W;
s5, connecting the center of each image in the S with the center of the image to be matched to obtain a reference line;
s6, screening out a plurality of optimal matching pairs from the matching pair set based on the length and the slope of the reference line;
S7, determining an image synthesis area, preprocessing it, and synthesizing it using the optimal matching pairs to obtain the final image.
The technical effect of the scheme is as follows:
1) a multi-thread parallel image acquisition mode is adopted, which is convenient and fast;
2) features are extracted only from the ROI using multithreading, which avoids computing feature points that contribute nothing to the composite image, greatly reduces computation time and steps, avoids the influence of erroneous feature points, and improves the accuracy and efficiency of feature extraction;
3) in a GPU-accelerated environment, the matching pairs of all images are obtained in parallel in dense-to-sparse order, which greatly reduces computation in feature-sparse regions, gives priority to matching dense feature sets, and improves processing efficiency and accuracy;
4) the optimal matching pairs are screened based on the length and slope of a reference line, simplifying the calculation;
5) the algorithm readily achieves real-time large-scale image synthesis for a microscope system.
In a preferred embodiment of the present invention, in the step S1, the overlapping ratio of two adjacent images is 10% to 20%.
The technical effect of the scheme is as follows:
Repeated experiments show that this overlap ratio keeps the number of feature points used in the calculation most suitable, avoiding redundant or insufficient feature points and improving calculation efficiency.
In a preferred embodiment of the present invention, in step S2, for a certain image in S, an image boundary expression is used to represent the ROI.
The technical effect of the scheme is as follows:
and the ROI is normalized by adopting a boundary expression, so that the calculation range is narrowed, the influence of an invalid feature point set is indirectly removed, and the search efficiency is accelerated.
In a preferred embodiment of the present invention, in step S3, the method for extracting the feature point set of the WROI includes: setting the pixel values of the non-ROI part of W to 0 by means of a mask, setting the pixel values of the non-ROI part of WD to 255, and then extracting a plurality of feature points of the WROI through the ORB algorithm to form a feature point set.
The technical effect of the scheme is as follows: the extracted feature points are guaranteed to be the feature sets actually participating in the operation, the calculation redundancy of invalid feature points is avoided, the extraction speed is increased, and the matching accuracy is guaranteed.
In a preferred embodiment of the present invention, in the step S4, N is greater than or equal to 3 and less than or equal to 9, M is greater than or equal to 1 and less than or equal to 3, and M is less than N.
The technical effect of the scheme is as follows:
These value ranges are the optimal result of multiple experimental comparisons: too large an N causes too many feature points per interval and increases the computation, while too small an N leaves too few valid feature points to meet the requirement of image synthesis.
In a preferred embodiment of the present invention, in the step S41, the formula for calculating the distribution density of the feature points is as follows:
ρ_i = ∫_a^b f(i) dx
J_sec = ∫_a^b f(i) dx − ∫_a^b f(M) dx
wherein a and b are the limits of the integration interval, f(i) is the probability distribution function of the feature points of the i-th subinterval, f(M) is the probability distribution function of the feature points of the M-th subinterval, and J_sec is a judging function that compares the feature-point distributions of the i-th and M-th sub-areas.
The technical effect of the scheme is as follows:
the method avoids the defect of directly solving threshold modes such as a mean value, a median value and the like (namely, the influence of a part of unsuitable characteristic points) in the traditional method, can be used for solving the pixel value of the characteristic point in each subinterval most nearly and truly, and provides guarantee for the accuracy of the subsequent calculation.
In a preferred embodiment of the present invention, in step S42, for each valid sub-region of the WROI, the method for matching its feature points with the feature points of its corresponding valid sub-region is: for each feature point of the valid sub-region, computing the Euclidean distance to each feature point of the corresponding valid sub-region, and matching it with the feature point at the shortest Euclidean distance to obtain a matching pair.
The technical effect of the scheme is as follows:
the Euclidean distance is adopted for preliminary matching, so that the method is simple and effective, and the rough matching process of the feature point set is easy to realize.
In a preferred embodiment of the present invention, the step S6 specifically includes the following steps:
s61, setting two end points of a reference line as O1 and O2 respectively, calculating the length LO12 and the slope kO12 of the reference line, and setting a matching pair set of two images corresponding to the reference line as PEDJ 1;
s62, initializing the number H of the optimal matching pairs to be 0, inputting a screening number threshold H1 of the matching pairs, and setting a difference threshold delta;
S63, if H < H1 and there is a matching pair in PEDJ1 that has not been selected for the first time, continuing to execute step S64;
if H < H1 and all matching pairs in PEDJ1 have been selected for the first time but not for the second time, combining the matching pairs in PEDJ1 that were not selected as optimal matching pairs into a set PEDJ2 and jumping to step S66;
if H < H1 and all matching pairs in PEDJ2 have been selected for the second time but not for the third time, combining the matching pairs in PEDJ2 that were not selected as optimal matching pairs into a set PEDJ3 and jumping to step S68;
if H = H1, or all matching pairs in PEDJ3 have been selected for the third time, jumping to step S70;
s64, selecting a matching pair which is not selected for the first time from PEDJ1, and setting two characteristic points of the matching pair as A1 and A2 respectively, wherein A1 and O1 are in one figure, A2 and O2 are in the other figure, and calculating the distance LA12 between A1 and A2, the slope kA12 of the connecting line of A1 and A2, the distance LAO1 between A1 and O1, and the distance LAO2 between A2 and O2;
S65, if LA12 + LAO1 + LAO2 = LO12 and kA12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise, directly jumping to step S63;
s66, selecting a matching pair which is not selected for the second time from PEDJ2, and setting two characteristic points of the matching pair as B1 and B2 respectively, wherein B1 and O1 are in one figure, B2 and O2 are in the other figure, and calculating the distance LB12 between B1 and B2, the slope kB12 of the connecting line of B1 and B2, the distance LBO1 between B1 and O1, and the distance LBO2 between B2 and O2;
S67, if |LB12 + LBO1 + LBO2 − LO12| < δ and kB12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise, directly jumping to step S63;
s68, selecting a matching pair which is not selected for the third time from PEDJ3, and setting two characteristic points of the matching pair as C1 and C2 respectively, wherein C1 and O1 are in one picture, C2 and O2 are in the other picture, and calculating the connecting line slope kC12 of C1 and C2;
S69, if kC12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise, directly jumping to step S63;
and S70, finishing the screening, and storing all the currently selected optimal matching pairs.
The technical effect of the scheme is as follows: three screening modes of different precision are used to screen the optimal matching pairs; if the high-precision mode already yields enough optimal matching pairs, the lower-precision screenings are skipped. This screening method guarantees that the selected matching pairs are optimal.
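A minimal sketch of the three-pass screening is given below, with the equality tests of S65/S67/S69 relaxed to tolerance tests, since exact floating-point equality rarely holds; all names are illustrative and vertical reference lines are not handled.

```python
import math

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def slope(p, q):
    """Slope of the line through p and q (assumes a non-vertical line)."""
    return (q[1] - p[1]) / (q[0] - p[0])

def screen_matches(pairs, o1, o2, h1, delta, tol=1e-9):
    """Three-pass screening in the spirit of S61-S70. `pairs` holds
    ((x, y) in one image, (x, y) in the other); o1, o2 are the
    reference-line end points; screening stops at h1 optimal pairs."""
    lo12, ko12 = dist(o1, o2), slope(o1, o2)

    def length_gap(p1, p2):
        # distance budget O1 -> A1 -> A2 -> O2 against the reference length
        return abs(dist(p1, o1) + dist(p1, p2) + dist(p2, o2) - lo12)

    passes = [
        lambda p1, p2: length_gap(p1, p2) <= tol and abs(slope(p1, p2) - ko12) <= tol,
        lambda p1, p2: length_gap(p1, p2) < delta and abs(slope(p1, p2) - ko12) <= tol,
        lambda p1, p2: abs(slope(p1, p2) - ko12) <= tol,
    ]
    best, remaining = [], list(pairs)
    for accept in passes:               # highest-precision pass first
        rejected = []
        for p1, p2 in remaining:
            if len(best) >= h1:
                return best
            (best if accept(p1, p2) else rejected).append((p1, p2))
        remaining = rejected
    return best

a = ((2, 2), (8, 8))      # collinear with the reference line: pass 1
b = ((2, 3), (8, 9))      # right slope, small length gap: pass 2
c = ((0, 5), (5, 10))     # right slope only: pass 3
d = ((0, 0), (5, 0))      # wrong slope: never selected
assert screen_matches([a, b, c, d], (0, 0), (10, 10), 2, 0.5) == [a, b]
assert screen_matches([a, b, c, d], (0, 0), (10, 10), 3, 0.5) == [a, b, c]
```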
In a preferred embodiment of the present invention, in step S7, the image synthesis area is determined using the boundary expressions of the images to be stitched; the method then includes: calculating the distribution rule of the pixel brightness values around the image synthesis area using a probability density function, calculating the mathematical expectation E(X) of the brightness values, assigning each pixel a weight according to the proportion of its brightness value, and normalizing the weights;
the mathematical expectation is:
E = Σ_{K∈(a,b)} I_K(x, y) · S_K(x, y)
wherein I_K(x, y) is the brightness value of the K-th pixel, S_K(x, y) is the feature distribution probability of the K-th pixel point, and the probabilities over the interval a to b are normalized so that they sum to 1;
the method for calculating the weight coefficient of each pixel value comprises the following steps:
w_K = (1 / (|I_K(x, y) − E| + δ)) / Σ_{j∈(a,b)} (1 / (|I_j(x, y) − E| + δ))
wherein δ is an error coefficient.
The technical effect of the scheme is as follows:
the problem of exposure difference caused by shooting reasons is solved by adopting a pixel brightness value redistribution mode, the situation of overexposure or underexposure of the image can not occur, the large-scale synthetic image keeps uniform and uniform resolution, and the target of an observer for overall observation of the specimen is met.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic diagram of a microscopic imaging system to which the algorithm of the present invention is applied;
FIG. 2 is a flow chart of a real-time large-scale image synthesis algorithm for the microscopy system of the present invention;
FIG. 3 is a schematic illustration of the overlapping of multiple sub-fields of view of the present invention;
FIG. 4 is a drawing of a microscopic sample to be synthesized in the present invention;
fig. 5 is a large-scale micrograph of the image after processing by the image synthesis algorithm of the present invention.
The reference numbers in the figures are as follows:
the system comprises a support 1, a microscope objective lens array 2, a CMOS camera array 3, a sample object stage 4, a base 5, a light source 6, a data transmission module 7, an image processing module 8 and an adjusting converter 9.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the algorithm provided by the present invention can be applied to the microscopic imaging system shown in the figure, and the system includes:
the bracket 1 is used for supporting the main structure of the microscope system and plays an auxiliary role;
the microscope objective array 2 is used for acquiring the acquired data of each zoom objective and transmitting the acquired data to the CMOS camera array 3;
the CMOS camera array 3 is used for carrying out microscopic imaging on the collected sample information;
a sample stage 4 for placing or carrying a microscopic sample;
the base 5 is used for balancing the gravity of the whole microscope system so as to keep the whole microscope in a balanced state;
the light source 6 is used for illuminating the microscopic sample and transmitting the sample information to the microscope objective array 2; the illumination source can be a coaxial light source or an LED surface light source;
the data transmission module 7 is used for transmitting the huge data volume, passing the imaging data of the CMOS camera array 3 to the image processing module 8; the data transmission device can be a switch, whose input is connected to the CMOS cameras through a plurality of network interfaces and whose output is connected to a computer through a network cable, realizing real-time data transmission over the network;
the image synthesis algorithm for the microscope system provided by the embodiment of the application is completed by a carrier (which may include a server carrying a display card, a desktop computer and a notebook computer) of the image processing module 8;
and the adjusting converter 9 is used for controlling the focusing capability of the microscope and ensuring the definition of the acquired image.
Fig. 2 presents a flow chart of the real-time large-scale image synthesis algorithm according to an embodiment of the present application, which specifically comprises the following steps:
and S1, acquiring nine images with different views of the pine cone sample in parallel to form an image sequence S { S1, S2 and S3 … … S9}, wherein two adjacent images have an overlapping area, and W is any image in S.
In this embodiment, the overlapping ratio of two adjacent images is 10% -20%.
Because the traditional serial mode must finish processing thread 1 before processing thread 2, which easily causes delays, in this embodiment the traditional serial acquisition and transmission is upgraded to 9-way parallel acquisition and transmission: multiple threads process the data simultaneously, which greatly improves efficiency.
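The 9-way parallel acquisition can be sketched with a thread pool; `acquire_field` is a hypothetical stand-in for a camera grab, not an API of the system described here.

```python
from concurrent.futures import ThreadPoolExecutor

def acquire_field(camera_id):
    """Stand-in for one CMOS camera grab; a real system would return the
    camera frame here. The name and return value are illustrative only."""
    return f"frame-from-camera-{camera_id}"

# Nine fields of view captured by nine worker threads at once instead of
# one after another, mirroring the 9-way parallel acquisition of S1.
with ThreadPoolExecutor(max_workers=9) as pool:
    image_sequence = list(pool.map(acquire_field, range(1, 10)))

assert len(image_sequence) == 9
assert image_sequence[0] == "frame-from-camera-1"
```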
And S2, determining the region of interest ROI of each image in S, and setting WROI to represent the region of interest of W.
In this embodiment, an expression for the boundary condition is obtained from point coordinates. Let a pixel have coordinates (x, f(x)); the expression is f(x) = ax + b. The region of interest ROI of the image is then determined as follows: according to all the obtained boundary formulas, the specific direction and position of the region of interest are judged, standardizing the direction and position in top-to-bottom and left-to-right increasing order. To better normalize the boundary conditions, the upper-left vertex of the image is defined as the origin of the coordinate system, with the downward and rightward directions taken as positive.
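The boundary expression and the top-left coordinate convention can be illustrated as follows; the boundary points and helper names are hypothetical.

```python
def boundary_line(p1, p2):
    """Derive the boundary expression f(x) = a*x + b through two boundary
    points, in the image coordinate system of this embodiment (origin at
    the top-left vertex, x rightward and y downward positive)."""
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)          # assumes a non-vertical boundary
    b = y1 - a * x1
    return a, b

def below_boundary(a, b, x, y):
    """A pixel lies below the boundary when y exceeds f(x), because the
    y axis grows downward in this coordinate system."""
    return y > a * x + b

# hypothetical boundary through (0, 10) and (100, 60)
a, b = boundary_line((0, 10), (100, 60))
assert (a, b) == (0.5, 10.0)
assert below_boundary(a, b, 10, 20)        # f(10) = 15 and 20 > 15
assert not below_boundary(a, b, 10, 12)
```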
S3, extracting the feature point sets of the ROIs of all images in S in parallel, wherein the feature point set of the WROI is extracted with reference to the image to be matched of W; the image to be matched refers to an image in S that has an overlapping area with W and is denoted WD, and WDROI denotes the region of interest of WD.
In this embodiment, multithreading is used to extract the feature points in the region of interest ROI. Unlike the traditional serial manner, in which the feature points of only one image can be extracted at a time, multithreading starts multiple threads to process multiple images in parallel, each thread processing one image, so the feature points of multiple images are extracted simultaneously. The method for extracting the feature point set of the WROI is specifically:
setting the pixel value of the W non-ROI as 0 in a mask mode, setting the pixel value of the WD non-ROI as 255, and then extracting a plurality of characteristic points of the WROI through an ORB algorithm to form a characteristic point set.
S4, in a GPU-accelerated environment, solving the matching pair sets of the images in S in parallel,
wherein for W the 2 matching pairs are acquired as follows:
s41, according to the WROI feature point set, based on the probability density function, calculating the WROI feature point distribution density, and calculating the formula as follows:
J_sec = ∫_a^b f(i) dx − ∫_a^b f(M) dx
wherein a and b are the limits of the integration interval, f(i) is the probability distribution function of the feature points of the i-th subinterval, f(M) is the probability distribution function of the feature points of the M-th subinterval, and J_sec is a judging function that compares the feature-point distributions of the i-th and M-th sub-areas,
According to the obtained feature-point distribution density, the WROI is divided into 7 sub-regions in dense-to-sparse order. The specific method is: using the spatial coordinate system established in step S2, the x coordinate range of the region of interest is divided into 7 equal parts; the first 2 sub-regions are taken as valid sub-regions (the error rate of feature sets in sparse regions is often high, so matching of dense feature sets is given priority), and in dense-to-sparse order the valid sub-regions of the WROI are placed in one-to-one correspondence with the valid sub-regions of the WDROI;
S42, for each valid sub-region of the WROI, matching its feature points with the feature points of its corresponding valid sub-region. The specific method is: for each feature point of the valid sub-region, computing the Euclidean distance to each feature point of the corresponding valid sub-region and matching it with the feature point at the shortest Euclidean distance to obtain a matching pair; the obtained matching pairs form the matching pair set of W.
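The equal-partition division and valid-sub-region selection of S41 in this embodiment can be sketched as follows; the helper name and toy coordinates are illustrative.

```python
import numpy as np

def valid_subregions(xs, x_min, x_max, n_sub=7, m_valid=2):
    """Split the ROI x-range into n_sub equal parts, rank the parts from
    dense to sparse by feature-point count, and keep the m_valid densest
    ones as valid sub-regions; indices are returned in dense-to-sparse
    order, mirroring the one-to-one correspondence of S41."""
    edges = np.linspace(x_min, x_max, n_sub + 1)
    counts, _ = np.histogram(xs, bins=edges)
    order = np.argsort(-counts, kind="stable")   # dense-to-sparse indices
    return [int(i) for i in order[:m_valid]]

# hypothetical feature-point x coordinates in a ROI spanning x in [0, 70]
xs = np.array([1, 2, 3, 4, 15, 65, 66, 67])
assert valid_subregions(xs, 0, 70) == [0, 6]     # sub-regions 0 and 6 densest
```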
And S5, connecting the center of each image in the S with the center of the image to be matched to obtain a reference line.
S6, screening out a plurality of optimal matching pairs from the matching pair set based on the length and the slope of the reference line, wherein the specific method comprises the following steps:
s61, setting two end points of a reference line as O1 and O2 respectively, calculating the length LO12 and the slope kO12 of the reference line, and setting a matching pair set of two images corresponding to the reference line as PEDJ 1;
s62, initializing the number H of the optimal matching pairs to be 0, inputting a screening number threshold H1 of the matching pairs, and setting a difference threshold delta;
S63, if H < H1 and there is a matching pair in PEDJ1 that has not been selected for the first time, continuing to execute step S64;
if H < H1 and all matching pairs in PEDJ1 have been selected for the first time but not for the second time, combining the matching pairs in PEDJ1 that were not selected as optimal matching pairs into a set PEDJ2 and jumping to step S66;
if H < H1 and all matching pairs in PEDJ2 have been selected for the second time but not for the third time, combining the matching pairs in PEDJ2 that were not selected as optimal matching pairs into a set PEDJ3 and jumping to step S68;
if H = H1, or all matching pairs in PEDJ3 have been selected for the third time, jumping to step S70;
s64, selecting a matching pair which is not selected for the first time from PEDJ1, and setting two characteristic points of the matching pair as A1 and A2 respectively, wherein A1 and O1 are in one figure, A2 and O2 are in the other figure, and calculating the distance LA12 between A1 and A2, the slope kA12 of the connecting line of A1 and A2, the distance LAO1 between A1 and O1, and the distance LAO2 between A2 and O2;
S65, if LA12 + LAO1 + LAO2 = LO12 and kA12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise, directly jumping to step S63;
s66, selecting a matching pair which is not selected for the second time from PEDJ2, and setting two characteristic points of the matching pair as B1 and B2 respectively, wherein B1 and O1 are in one figure, B2 and O2 are in the other figure, and calculating the distance LB12 between B1 and B2, the slope kB12 of the connecting line of B1 and B2, the distance LBO1 between B1 and O1, and the distance LBO2 between B2 and O2;
S67, if |LB12 + LBO1 + LBO2 − LO12| < δ and kB12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise, directly jumping to step S63;
s68, selecting a matching pair which is not selected for the third time from PEDJ3, and setting two characteristic points of the matching pair as C1 and C2 respectively, wherein C1 and O1 are in one picture, C2 and O2 are in the other picture, and calculating the connecting line slope kC12 of C1 and C2;
S69, if kC12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise, directly jumping to step S63;
and S70, finishing the screening, and storing all the currently selected optimal matching pairs.
In this embodiment, the distance between two points (x1, y1) and (x2, y2) is calculated using the Euclidean distance formula:

L = √((x1 - x2)² + (y1 - y2)²)

the slope is calculated using the following slope formula:

k = (y2 - y1) / (x2 - x1)

and the center point of an image is taken as the centroid of its feature set:

G(x, y) = ((1/N)∑xi, (1/N)∑yi)

wherein Ci(x, y) = (xi, yi) is the coordinate of the i-th feature point of the feature set, G(x, y) is the coordinate of the center point of the image, and N is the number of feature points.
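As an illustration, the distance, slope, and center-point computations can be written as below. This is a minimal Python sketch; the function names (distance, line_slope, center_point) are illustrative and not from the patent.

```python
import math

def distance(p1, p2):
    # Euclidean distance between two feature points.
    return math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)

def line_slope(p1, p2):
    # Slope of the line connecting two feature points (non-vertical).
    return (p2[1] - p1[1]) / (p2[0] - p1[0])

def center_point(points):
    # Centroid G(x, y) of a feature set of N points.
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)
```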
S7, determine the image synthesis area, preprocess it, and synthesize it using the optimal matching pairs to obtain the final image; apply uniform exposure to the final image and display it on the screen in real time.
In this embodiment, the image synthesis area is determined from the boundary expressions of the images to be stitched, as follows: a probability density function is used to obtain the distribution of the pixel brightness values I(x, y) around the image synthesis area, the mathematical expectation E(X) of the brightness values is calculated, and each pixel brightness value is assigned a weight in proportion to its contribution; the weights are normalized so that, in theory, brightness values closer to the mathematical expectation receive larger weights.
In this embodiment, the mathematical expectation formula is:

E = ∑K∈(a,b) IK(x, y) SK(x, y)

wherein IK(x, y) is the brightness value of the K-th pixel, SK(x, y) is the characteristic distribution probability of the K-th pixel point, and the probabilities over the interval from a to b are normalized so that their sum is 1;
the weight coefficient of each pixel value is calculated as:

WK(x, y) = (1 / (|IK(x, y) - E| + δ)) / ∑j (1 / (|Ij(x, y) - E| + δ))

wherein δ is an error coefficient with only a small influence, included so that the denominator never vanishes; δ = 0.02 is considered preferable.
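A small sketch of this preprocessing step: the expectation E and one plausible family of normalized weights in which brightness values nearer E receive larger weights. The inverse-distance form of the weighting and the name brightness_weights are assumptions for illustration; δ = 0.02 follows the embodiment.

```python
def brightness_weights(brightness, probs, delta=0.02):
    """Compute the expectation E of brightness values and normalized weights.

    brightness : brightness values I_K around the synthesis area
    probs      : distribution probabilities S_K (assumed to sum to 1)
    """
    # E = sum over K of I_K * S_K
    e = sum(i * s for i, s in zip(brightness, probs))
    # Weight each pixel by closeness to E; delta keeps the denominator nonzero.
    raw = [1.0 / (abs(i - e) + delta) for i in brightness]
    total = sum(raw)
    return e, [w / total for w in raw]   # weights normalized to sum to 1
```

With brightness values [100, 120, 140] and probabilities [0.25, 0.5, 0.25], the expectation is 120 and the middle pixel, whose brightness equals the expectation, receives by far the largest weight.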
A conventional microscope system typically relies on slow scanning, a complex setup, and post-processing, which makes operation inconvenient, causes excessive waiting time, prevents accurate on-the-spot judgment, and limits imaging precision. As shown in FIG. 3, the invention avoids the conventional scanning mode and instead adopts a multi-field-of-view integrated real-time observation strategy for the sample, which improves the space-bandwidth product; because the objective lenses are arranged convergently, spatial volume is saved to the greatest extent. FIG. 4 shows a set of sub-images acquired by a microscope system to be processed according to an embodiment of the invention: before processing, microscope images are obtained under a number of independent small fields of view and, owing to the complex environment, are expected to differ in exposure from field to field; the result after processing by the invention is shown in FIG. 5.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (9)

1. A real-time large-scale image synthesis algorithm of a microscope system is characterized by comprising the following steps:
S1, collecting in parallel a plurality of images of different fields of view of a sample to form an image sequence S, wherein every two adjacent images have an overlapping area, and W is any image in S;
S2, determining a region of interest (ROI) of each image in S, WROI denoting the ROI of W;
S3, extracting in parallel the feature point sets of the ROIs of all images in S, the feature point set of the WROI being extracted with respect to the image to be matched with W, denoted WD, wherein the image to be matched is the image in S that has an overlapping area with W, and WDROI denotes the region of interest of WD;
S4, solving in parallel, in a GPU-accelerated environment, the M matching-pair sets of each image in S,
wherein, for W, the M matching pairs are acquired as follows:
S41, solving, from the feature point set of the WROI, the distribution density of its feature points based on a probability density function, dividing the WROI into N sub-regions ordered from dense to sparse according to the feature point distribution density, taking the first M sub-regions as effective sub-regions, and putting the effective sub-regions of the WROI in one-to-one correspondence with the effective sub-regions of the WDROI in dense-to-sparse order;
S42, for each effective sub-region of the WROI, matching its feature points with the feature points of the corresponding effective sub-region to obtain a plurality of matching pairs, which form the matching-pair set of W;
S5, connecting the center of each image in S with the center of its image to be matched to obtain a reference line;
S6, screening out a plurality of optimal matching pairs from the matching-pair set based on the length and the slope of the reference line;
S7, determining an image synthesis area, preprocessing it, and synthesizing it using the optimal matching pairs to obtain a final image.
2. The real-time large-scale image synthesis algorithm of the microscope system according to claim 1, wherein in the step S1, the overlapping ratio of two adjacent images is 10% to 20%.
3. The real-time large-scale image synthesis algorithm of the microscope system as claimed in claim 1, wherein in step S2, the ROI of each image in S is represented by an image boundary expression.
4. The real-time large-scale image synthesis algorithm of the microscopy system as claimed in claim 1, wherein in step S3, the feature point set of the WROI is extracted as follows: the non-ROI pixel values of W are set to 0 and the non-ROI pixel values of WD are set to 255 by masking, and then a plurality of feature points of the WROI are extracted with the ORB algorithm to form the feature point set.
5. The real-time large-scale image synthesis algorithm of the microscope system as claimed in claim 1, wherein in step S4, 3 ≤ N ≤ 9, 1 ≤ M ≤ 3, and M < N.
6. The real-time large-scale image synthesis algorithm of the microscope system according to claim 1, wherein in step S41, the distribution density of the feature points is given by:

si = ∫[a,b] f(i) di

Jsec = si / sM = ∫[a,b] f(i) di / ∫[a,b] f(M) dM

wherein a and b are the bounds of the integration interval, f(i) is the probability distribution function of the feature points of the i-th sub-region, f(M) is the probability distribution function of the feature points of the M-th sub-region, and Jsec is a judging function for comparing the feature point distributions of the i-th and M-th sub-regions.
7. The real-time large-scale image synthesis algorithm of the microscope system as claimed in claim 1, wherein in step S42, for each effective sub-region of the WROI, its feature points are matched with the feature points of the corresponding effective sub-region as follows: for each feature point of the effective sub-region, the Euclidean distance to every feature point of the corresponding effective sub-region is calculated, and the feature point is matched with the feature point at the shortest Euclidean distance to obtain a matching pair.
8. The real-time large-scale image synthesis algorithm of the microscope system as claimed in claim 1, wherein the step S6 specifically comprises the following steps:
S61, denoting the two end points of a reference line by O1 and O2, calculating the length LO12 and the slope kO12 of the reference line, and denoting the matching-pair set of the two images corresponding to the reference line by PEDJ1;
S62, initializing the number H of optimal matching pairs to 0, inputting a screening-count threshold H1 for the matching pairs, and setting a difference threshold δ;
S63, if H < H1 and there is a matching pair in PEDJ1 that has not been selected in the first pass, continuing with step S64;
if H < H1 and all matching pairs in PEDJ1 have been selected in the first pass but not in the second pass, combining the matching pairs in PEDJ1 that were not selected as optimal matching pairs into a set PEDJ2 and jumping to step S66;
if H < H1 and all matching pairs in PEDJ2 have been selected in the second pass but not in the third pass, combining the matching pairs in PEDJ2 that were not selected as optimal matching pairs into a set PEDJ3 and jumping to step S68;
if H = H1, or all matching pairs in PEDJ3 have been selected in the third pass, jumping to step S70;
S64, selecting a matching pair from PEDJ1 that has not yet been selected in the first pass, and denoting its two feature points by A1 and A2, where A1 and O1 lie in one image and A2 and O2 in the other; calculating the distance LA12 between A1 and A2, the slope kA12 of the line connecting A1 and A2, the distance LAO1 between A1 and O1, and the distance LAO2 between A2 and O2;
S65, if LA12 + LAO1 + LAO2 = LO12 and kA12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise jumping directly to step S63;
S66, selecting a matching pair from PEDJ2 that has not yet been selected in the second pass, and denoting its two feature points by B1 and B2, where B1 and O1 lie in one image and B2 and O2 in the other; calculating the distance LB12 between B1 and B2, the slope kB12 of the line connecting B1 and B2, the distance LBO1 between B1 and O1, and the distance LBO2 between B2 and O2;
S67, if LB12 + LBO1 + LBO2 - LO12 < δ and kB12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise jumping directly to step S63;
S68, selecting a matching pair from PEDJ3 that has not yet been selected in the third pass, and denoting its two feature points by C1 and C2, where C1 and O1 lie in one image and C2 and O2 in the other; calculating the slope kC12 of the line connecting C1 and C2;
S69, if kC12 = kO12, taking the currently selected matching pair as an optimal matching pair, setting H = H + 1, and jumping to step S63; otherwise jumping directly to step S63;
S70, ending the screening and storing all currently selected optimal matching pairs.
9. The real-time large-scale image synthesis algorithm of the microscope system according to claim 1, wherein in step S7, the image synthesis area is determined by using the image boundary expression to be stitched, and the method for preprocessing the image synthesis area specifically includes: calculating the distribution rule of pixel brightness values around the image synthesis area by using a probability density function, calculating the mathematical expectation E (X) of the brightness values, assigning a weight according to the proportion of each pixel brightness value, and normalizing the weight;
the mathematical expectation is: e ═ Σi∈(a,b)IK(x,y)SK(x, y) wherein IK(x, y) is the brightness value of the Kth pixel, SK(x, y) is the characteristic distribution probability of the Kth pixel point, the probabilities in the interval from a to b are normalized, and the sum is 1;
the weight coefficient of each pixel value is calculated as:

WK(x, y) = (1 / (|IK(x, y) - E| + δ)) / ∑j (1 / (|Ij(x, y) - E| + δ))

wherein δ is an error coefficient.
CN202011623367.0A 2020-12-31 2020-12-31 Real-time large-scale image synthesis algorithm of microscope system Active CN112819735B (en)

Publications (2)

Publication Number Publication Date
CN112819735A (en) 2021-05-18
CN112819735B (en) 2022-02-01





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant