CN117593228A - Confocal endoscope image optimization method and related equipment - Google Patents

Confocal endoscope image optimization method and related equipment

Info

Publication number
CN117593228A
CN117593228A (application number CN202311601542.XA)
Authority
CN
China
Prior art keywords
data
data set
information
correction
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311601542.XA
Other languages
Chinese (zh)
Inventor
段西尧
马骁萧
冯宇
孟辰
叶欢
丁莽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingwei Shida Medical Technology Suzhou Co ltd
Original Assignee
Jingwei Shida Medical Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingwei Shida Medical Technology Suzhou Co ltd filed Critical Jingwei Shida Medical Technology Suzhou Co ltd
Priority to CN202311601542.XA priority Critical patent/CN117593228A/en
Publication of CN117593228A publication Critical patent/CN117593228A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G06T 7/38 Registration of image sequences
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a confocal endoscope image optimization method and related equipment, relating to the field of image processing. The method comprises the following steps: acquiring a data set to be calculated, wherein the data set to be calculated is obtained by performing a data denoising operation on data to be processed; acquiring an alignment parameter based on the data set to be calculated; acquiring a correction target value based on the data set to be calculated; performing an alignment operation on a sinusoidal sampling data point set based on the alignment parameter to obtain an aligned sinusoidal sampling data point set; and performing a correction operation on the aligned sinusoidal sampling data point set based on the correction target value to obtain an optimized data set.

Description

Confocal endoscope image optimization method and related equipment
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to a confocal endoscopic image optimization method and related apparatus.
Background
A confocal endoscope is a medical device that can be advanced into the human body through channels such as a gastroscope or colonoscope to acquire local histological images, enabling accurate diagnosis of tiny lesions, gastrointestinal lesions and early gastrointestinal cancerization. The scan control module of a confocal endoscope has two important components: a resonant mirror and a galvanometer mirror. The resonant mirror rapidly scans light in the horizontal direction (and is also referred to as the X-galvanometer mirror), while the galvanometer mirror scans light in the vertical direction (and is also referred to as the Y-galvanometer mirror); together they produce a two-dimensional planar image.
The resonant mirror works by rotating back and forth through a certain angle about its rotation axis, reversing direction when it reaches the edge of the scanning range. As a result, the scanning directions of two adjacent lines are opposite, and because the scanning speed is high, the scanning start points of two adjacent lines are difficult to keep consistent, so the two lines are shifted relative to each other. In addition, the sinusoidal angular velocity of the scan is low at the edges and high in the middle, so the image as a whole is stretched at both ends and compressed in the middle. The related art lacks an accurate image optimization method.
Disclosure of Invention
This summary introduces a series of concepts in simplified form that are described in further detail in the detailed description. The summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to determine the scope of the claimed subject matter.
In a first aspect, the present application proposes a confocal endoscopic image optimization method, the method comprising:
acquiring a data set to be calculated, wherein the data set to be calculated is acquired by performing data denoising operation on data to be processed;
acquiring an alignment parameter based on the data set to be calculated;
acquiring a correction target value based on the data set to be calculated;
performing alignment operation on the sinusoidal sampling data point set based on the alignment parameters so as to obtain an aligned sinusoidal sampling data point set;
and correcting the aligned sine sampling data point set based on the correction target value to obtain an optimized data set.
Optionally, the method further comprises:
selecting a first data set and two second data sets from the calculation data set, wherein each data in the second data sets is adjacent, in the calculation data set, to data in the first data set, and the second data of the same order in the two different second data sets are located, in the calculation data set, on the two sides of the corresponding first data in the first data set;
performing offset operation on the second data set by taking the first data set as a reference, and calculating an alignment cost value;
and determining a target alignment parameter based on the alignment cost value.
Optionally, the selecting a first data set and two second data sets from the calculated data sets includes:
determining a start line number and an end line number of a target area in the image in the calculation dataset;
performing segmentation processing based on the starting line number, the ending line number and a preset segment number to obtain the nearest odd line or even line at each segmentation position so as to form the first data set;
second data adjacent to the first data in the first data set is acquired based on the first data set to form two second data sets.
Optionally, the determining the start line number and the end line number of the target area in the image in the calculation dataset includes:
acquiring a pixel value histogram of the image data in the calculation dataset;
acquiring threshold information by using an OTSU algorithm based on the pixel value histogram;
and determining the starting line number and the ending line number based on the threshold information.
Optionally, the method further comprises:
the alignment cost value cost is calculated by:
$$\mathrm{cost}=\frac{1}{2P\left(eid-sid+1\right)}\sum_{p=0}^{P-1}\sum_{id=sid}^{eid}\left(\left|od_{p}(id)-ed_{p}^{1}(id)\right|+\left|od_{p}(id)-ed_{p}^{2}(id)\right|\right)$$

wherein P represents the number of lines used in the alignment operation, eid and sid represent the end line number and the start line number respectively, od_p(id) represents the id-th data point in odd row p, ed_p^1(id) and ed_p^2(id) represent the points corresponding to the id-th data point of odd row p in the two different even-row data sets at the offset D, the double summation indicates that all selected odd rows and every data point in those rows are iterated over, |od_p(id) − ed_p^1(id)| and |od_p(id) − ed_p^2(id)| are the absolute differences between the odd and even rows for a given data point at the offset D, and 1/(2P(eid − sid + 1)) is the normalization factor.
Optionally, the method further comprises:
acquiring a uniform scanning sampling point data set and a sinusoidal sampling data point set;
calculating the center difference information of the data in the uniform scanning sampling point data set and the sinusoidal sampling data point set;
calculating a reference variable based on the central value difference information, the stretching coefficient, the galvanometer scanning frequency and the sine sampling frequency;
determining a correction reference value based on the reference variable and the reference variable threshold information;
and performing approximate rounding operation according to the sum of the correction reference value and the central column information of the image in the sinusoidal sampling data point set so as to obtain a correction target value.
Optionally, the reference variable threshold information includes first threshold information and second threshold information, the second threshold information is larger than the first threshold information, the correction reference value includes a first correction reference value, a second correction reference value and a third correction reference value,
the determining a correction reference value based on the reference variable and the reference variable threshold information includes:
determining the first correction reference value as the correction reference value in the case where the reference variable is smaller than the first threshold information, determining the second correction reference value as the correction reference value in the case where the reference variable is larger than the second threshold information, or determining the third correction reference value as the correction reference value in the case where the reference variable is larger than or equal to the first threshold information and smaller than or equal to the second threshold information:
determining the first correction reference value dn_11 based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency;
determining the second correction reference value dn_12 based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency;
determining the third correction reference value dn_13 based on the following formula:
wherein f_scan is the galvanometer scanning frequency, f_sample is the sinusoidal sampling frequency, α is the stretching coefficient, dn_2 is the center difference information, and arcsin() is the arcsine function.
Optionally, the method further comprises:
performing binarization processing operation on the image data to be calculated in the data set to be calculated to obtain a binarized image set;
performing morphological closing operation on the binary image data in the binary image set to obtain a closed image set;
acquiring height information, width information and edge-most column information of an elliptic-like area from a closed image in the closed image set;
and calculating correction parameters based on the height information, the width information and the edge-most column information, wherein the correction parameters comprise stretching coefficient information and center column information.
Optionally, the method further comprises:
calculating the stretch coefficient information based on the height information and the width information;
The above center column information C is calculated according to the following formula:
wherein C is the column coordinate of the geometric center point of the image, C_0 is the edge-most column information, H_1 is the height information, W_1 is the width information, f_sample is the sampling frequency, f_scan is the scanning frequency, and arcsin() is the arcsine function.
In a second aspect, the present application also proposes a confocal endoscopic image optimization apparatus comprising:
the first acquisition unit is used for acquiring a data set to be calculated, wherein the data set to be calculated is acquired by performing data denoising operation on data to be processed;
the second acquisition unit is used for acquiring alignment parameters based on the data set to be calculated;
a third acquisition unit configured to acquire a correction target value based on the data set to be calculated;
a fourth obtaining unit, configured to perform an alignment operation on the sinusoidal sampling data point set based on the alignment parameter, so as to obtain an aligned sinusoidal sampling data point set;
and a fifth acquisition unit for performing a correction operation on the aligned sinusoidal sampling data point set based on the correction target value to acquire an optimized data set.
In summary, the confocal endoscope image optimization method of the embodiment of the application includes: acquiring a data set to be calculated, wherein the data set to be calculated is acquired by performing data denoising operation on data to be processed; acquiring an alignment parameter based on the data set to be calculated; acquiring a correction target value based on the data set to be calculated; performing alignment operation on the sinusoidal sampling data point set based on the alignment parameters so as to obtain an aligned sinusoidal sampling data point set; and correcting the aligned sine sampling data point set based on the correction target value to obtain an optimized data set. According to the confocal endoscope image optimization method, the alignment parameters and the correction target values are calculated in the preprocessing mode, the image data are aligned based on the alignment parameters, and the correction is performed based on the correction target values. By switching the preprocessing and normal working modes, the endoscope can adapt to different use scenes, and high-quality image data can be acquired under any condition. The method can align and correct the original data acquired by the confocal endoscope so as to eliminate the problems of misalignment and distortion of the data generated in the scanning process of the resonant mirror.
Additional advantages, objects, and features of the disclosure will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the disclosure and practice of the disclosure.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the specification. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a schematic flow chart of a method for optimizing a confocal endoscope image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a spatial position of a resonant mirror during scanning according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a resonant mirror reciprocating scanning sampling principle provided in an embodiment of the present application;
FIG. 4 is a schematic diagram showing the relationship between angular velocity and spatial position of a resonant mirror during scanning according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of determining a start line and an end line of an effective area of an image according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a pixel histogram according to an embodiment of the present application;
fig. 7 is a schematic diagram of an alignment scenario at different offset values according to an embodiment of the present application;
fig. 8 is a schematic diagram of an image to be calculated according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a binarized image according to an embodiment of the present disclosure;
FIG. 10 is a schematic view of a closed image according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a correction algorithm according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of an uncorrected image according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a corrected image according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a confocal endoscopic image optimization apparatus provided in an embodiment of the present application.
Detailed Description
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application.
FIG. 2 is a schematic diagram of the spatial position of a resonant mirror during scanning in the related art; FIG. 3 is a schematic diagram of a resonant mirror reciprocation scanning sampling principle in the related art; fig. 4 is a schematic diagram showing the relationship between angular velocity and spatial position during scanning of a resonant mirror in the related art. As shown in fig. 2 to 4, the resonant mirror operates on a principle of reciprocating rotation by a certain angle along a rotation axis, turning when reaching the edge of the scanning range, and rotating in the opposite direction. And the angular velocity during scanning varies sinusoidally with the spatial position.
Confocal endoscopy is typically performed using equally spaced samples. The original image obtained by sampling has the following problems due to the characteristics of reciprocation and sine in the scanning process of the resonant mirror: (1) the scanning directions of two adjacent rows are reversed; (2) Because the scanning speed is high, the scanning starting points of two adjacent lines are difficult to be consistent, so that the two adjacent lines are shifted; (3) The angular velocity of the sinusoidal features during scanning is slow at the edges and fast in the middle, resulting in distortion of the whole with stretching at both ends and compression in the middle.
Inversion, shifting and distortion mean that the obtained image does not conform to the actual shape of the object. If such images were used clinically, they could give the user erroneous information and in turn lead to erroneous diagnostic results, which is not acceptable. Therefore, the data acquired by the confocal endoscope must be aligned to remove the displacement and corrected to remove the distortion, so as to eliminate the problems caused by the scanning characteristics of the resonant mirror, provide the user with a correct image whose shape matches the actual shape, and thus provide accurate diagnostic information for the clinic. In order to solve at least some of the problems described above, the present application proposes a confocal endoscope image optimization method for performing a correction operation on an image.
Referring to fig. 1, a schematic flow chart of a method for optimizing a confocal endoscope image according to an embodiment of the present application may specifically include:
s110, acquiring a data set to be calculated, wherein the data set to be calculated is acquired by performing data denoising operation on data to be processed;
the data set to be calculated is obtained by denoising operation of the data set to be processed, and the data set to be processed is obtained by overturning operation of the original image data obtained by the confocal endoscope in the preprocessing mode, so that all images in the data set to be processed have the same acquisition direction.
S120, acquiring alignment parameters based on the data set to be calculated;
illustratively, alignment parameters are extracted from the denoised dataset, the alignment parameters being used to adjust the data points so that they are aligned in a particular arrangement.
S130, acquiring a correction target value based on the data set to be calculated;
illustratively, after the data alignment, a correction target value is further calculated, the correction target value indicating the ideal state that each data point should reach in the final optimized data set.
S140, aligning the sinusoidal sampling data point set based on the alignment parameters so as to obtain an aligned sinusoidal sampling data point set;
Illustratively, the acquired alignment parameters are used to operate on the sinusoidal sampled data point set to align it with a predetermined standard or model to obtain an aligned sinusoidal sampled data point set.
And S150, performing correction operation on the aligned sine sampling data point set based on the correction target value so as to acquire an optimized data set.
Finally, the aligned data point set is corrected according to the calculated correction target value, and an optimized data set is obtained.
It should be noted that, the confocal endoscope may set a preprocessing mode and a normal working mode, in the preprocessing mode, acquire a data set to be calculated, perform the work of calculating the alignment parameter and the correction target value through the data set to be calculated, and store the alignment parameter and the correction target value in the corresponding storage unit after calculating the alignment parameter and the correction target value. And in a normal working mode, carrying out alignment operation and correction operation on the acquired image in real time by calling the alignment parameter and the correction target value.
In summary, according to the method for optimizing a confocal endoscope image provided by the embodiment of the application, the alignment parameter and the correction target value are calculated in the preprocessing mode, the image data is aligned based on the alignment parameter, and the correction operation is performed based on the correction target value. By switching the preprocessing and normal working modes, the endoscope can adapt to different use scenes, and high-quality image data can be acquired under any condition. The method can align and correct the original data acquired by the confocal endoscope so as to eliminate the problems of misalignment and distortion of the data generated in the scanning process of the resonant mirror.
In some examples, the above method further comprises:
selecting a first data set and two second data sets from the calculation data set, wherein each data in the second data sets is adjacent, in the calculation data set, to data in the first data set, and the second data of the same order in the two different second data sets are located, in the calculation data set, on the two sides of the corresponding first data in the first data set;
performing offset operation on the second data set by taking the first data set as a reference, and calculating an alignment cost value;
and determining a target alignment parameter based on the alignment cost value.
Illustratively, one first data set and two second data sets are selected from the data set to be calculated. Each data point in a second data set is spatially adjacent to a data point in the first data set, and the same-order data points of the two second data sets flank a particular data point in the first data set. The second data sets are offset with respect to the first data set to achieve optimal alignment. By calculating the alignment cost value, the similarity or difference between the data sets can be evaluated, for example by cross-correlation or Euclidean distance; this alignment cost value reflects the accuracy and effect of the alignment. Based on the computed alignment cost values, an optimal alignment parameter, such as an offset, rotation angle or scaling factor, is determined: the alignment parameter that minimizes the difference between the first data set and the second data sets is found, so that the data sets have optimal consistency and correspondence.
The embodiment provides a specific calculation method of the alignment parameters, and the refinement and the high efficiency of data processing are ensured by selecting a proper data set and performing spatial adjacency analysis. The data sets can be precisely aligned by calculating the alignment cost values and adjusting the alignment parameters based on these values, thereby ensuring consistency and comparability of the data. The precisely aligned and optimized data set provides a solid basis for subsequent image analysis and interpretation, thereby enhancing the reliability and effectiveness of the analysis results. The method can at least partially eliminate the problems caused by the scanning characteristics of the resonant mirror, provide the user with the correct image with the same actual shape, and further provide the clinic with accurate diagnosis information.
In some examples, the selecting one first data set and two second data sets from the calculated data sets includes:
determining a start line number and an end line number of a target area in the image in the calculation dataset;
performing segmentation processing based on the starting line number, the ending line number and a preset segment number to obtain the nearest odd line or even line at the segmentation position so as to form the first data set;
second data adjacent to the first data in the first data set is acquired based on the first data set to form two second data sets.
The following specifically describes the selection of one first data set and two second data sets from the calculation data set, taking the first data set as a set of odd line numbers and the second data sets as sets of even line numbers as an example. The specific method for selecting P odd line numbers is to divide the image between the starting line (sl) and the ending line (el) into P+1 segments in the vertical direction and, at each of the P segment boundaries, to select the odd line number nearest to that boundary. As shown in fig. 5, the scanning range is equally divided into 4 segments, and the 3 selected odd rows are the ones nearest to the boundaries between segment 1 and segment 2, segment 2 and segment 3, and segment 3 and segment 4, respectively.
Correspondingly, two even-line-number sets SEL_1 and SEL_2 are selected, each containing P elements. The P even line numbers in SEL_1 are the corresponding P odd line numbers in SOL minus 1, and the P even line numbers in SEL_2 are the corresponding P odd line numbers in SOL plus 1, i.e.

SEL_1 = {sol_p − 1}, SEL_2 = {sol_p + 1}, p = 0, 1, …, P−1.

Denote the data in the P odd rows of the SOL set as

OD = {od_p}, p = 0, 1, …, P−1,

the data in the P even rows of the SEL_1 set as {ed_p^1}, and the data in the P even rows of the SEL_2 set as {ed_p^2}.
In conclusion, the scanning range is equally divided and the nearest odd line at each boundary is selected, which ensures that the data sampling is spatially uniform, helps to acquire representative data, and avoids sampling bias. Selecting the adjacent odd and even rows makes data comparison convenient. The method simplifies the data processing flow by selecting data rows systematically; the selection rule is clear and easy to implement, which is particularly important when processing large amounts of data. Because adjacent odd and even rows are selected, they are closely related in space, and this close spatial correlation facilitates the subsequent data alignment work, especially in applications requiring accurate alignment of image data.
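A minimal sketch of this row selection follows; how ties are broken when a boundary falls exactly between two odd rows is an assumption, since the text only states that the nearest odd row is taken:

```python
def select_row_sets(sl: int, el: int, P: int):
    """Split [sl, el] into P+1 equal segments and take the odd row nearest to
    each of the P internal boundaries (the SOL set), plus the adjacent even
    rows SEL1 = SOL - 1 and SEL2 = SOL + 1."""
    sol = []
    for k in range(1, P + 1):
        boundary = sl + (el - sl) * k / (P + 1)
        r = int(round(boundary))
        if r % 2 == 0:  # move to the nearer of the two surrounding odd rows
            r = r + 1 if (boundary - (r - 1)) > ((r + 1) - boundary) else r - 1
        sol.append(r)
    sel1 = [r - 1 for r in sol]
    sel2 = [r + 1 for r in sol]
    return sol, sel1, sel2
```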
In some examples, determining the start line number and the end line number of the target region in the image in the computing dataset includes:
acquiring a pixel value histogram of the image data in the calculation dataset;
acquiring threshold information by using an OTSU algorithm based on the pixel value histogram;
and determining the starting line number and the ending line number based on the threshold information.
Illustratively, the image data in the calculation dataset is analyzed to obtain a frequency distribution, i.e., a pixel value histogram, for each pixel value. The OTSU algorithm is an automatic thresholding method that selects the best threshold by maximizing the inter-class variance, which can divide the image into two parts, the active area and the inactive area. With the threshold value obtained by the OTSU algorithm, it is possible to determine which line numbers of image data are important. Determining the start line number and the end line number is important for focusing on a specific part of the image. In medical imaging, only a certain region of the image may be of interest, which can be determined by analyzing the pixel value histogram and applying the OTSU algorithm.
Specifically, the start line sl and the end line el are determined as follows: the pixel value histogram of the image is computed, with a result as shown in fig. 6, and the threshold T is obtained from this histogram using the OTSU algorithm. For the example histogram of fig. 6, the OTSU algorithm yields the threshold T = 88.
The sl is found by the following pseudocode:
the pseudo code traverses from the first line of the image to the last line. H is the total number of lines of the image. Within each row, the traversal starts from the second pixel until the penultimate pixel. N is the total number of columns of the image. For each pixel in the currently traversed row it is checked whether its value and its left and right neighboring pixels are both greater than a certain preset threshold T. This check is to identify a continuous change in luminance in the horizontal direction, which may represent an edge or a feature line. If a sequence of pixels satisfying the condition is found in a certain line, this line is recorded as the starting line S. After the initial line is recorded, the whole traversal process is finished, and the initial line number is returned.
The el is found by the following pseudocode:
traversing up from the last row (H-1) of the image to the first row, in each row, traversing from the second column to the penultimate column. For each pixel point D (i, j), it and the pixel values D (i, j-1) and D (i, j+1) on its left and right sides are checked whether both are greater than a certain threshold T. If the above condition is satisfied, the line number i of the line is recorded. Once a line number satisfying the condition is found, this line number is returned and the program is ended.
In summary, the method provided by the embodiment of the application determines the target area in the image by an automatic method, so that subjectivity and time cost of manual selection are reduced. The OTSU algorithm automatically determines the threshold by calculation, which is more efficient than manually adjusting the threshold. The OTSU algorithm provides an objective method to determine the optimal threshold based on statistical principles, thus maintaining consistency across different image sets.
In some examples, the above method further comprises:
the alignment cost value cost is calculated by:
$$\mathrm{cost}=\frac{1}{2P\left(eid-sid+1\right)}\sum_{p=0}^{P-1}\sum_{id=sid}^{eid}\left(\left|od_{p}(id)-ed_{p}^{1}(id)\right|+\left|od_{p}(id)-ed_{p}^{2}(id)\right|\right)$$

wherein P represents the number of lines used in the alignment operation, eid and sid represent the end line number and the start line number respectively, od_p(id) represents the id-th data point in odd row p, ed_p^1(id) and ed_p^2(id) represent the points corresponding to the id-th data point of odd row p in the two different even-row data sets at the offset D, the double summation indicates that all selected odd rows and every data point in those rows are iterated over, |od_p(id) − ed_p^1(id)| and |od_p(id) − ed_p^2(id)| are the absolute differences between the odd and even rows for a given data point at the offset D, and 1/(2P(eid − sid + 1)) is the normalization factor.
Illustratively, the offset is an integer, denoted by a, and cost represents the alignment cost at offset a. The cost is calculated for each integer value of a in the range [1−N_1, N_1−1]. The even-row data is offset by a with the odd-row data as the reference. Fig. 7 shows the alignment at the different offset values −4, 0 and 3, where od denotes odd-row data, ed denotes even-row data, and the numbers in the boxes represent the data element indices in each row of data.
The overlapping part of the odd-row data and the offset even-row data is then found. The overlap is indexed by the odd-row subscript, and its beginning and ending subscripts are denoted sid and eid, respectively:

sid = max(a, 0)
eid = min(N_1 − 1, N_1 − 1 + a)
The alignment cost formula above provides a quantitative method of evaluating the data alignment effect; by minimizing the cost function, the best alignment parameter can be found, improving the accuracy and reliability of the image processing task.
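For illustration, the cost evaluation and the exhaustive search over offsets can be sketched as follows; treating the rows as floating-point NumPy arrays, returning infinity for an empty overlap, and the sign convention of the even-row shift are assumptions of this sketch:

```python
def alignment_cost(od, ed1, ed2, a):
    """Average absolute difference between each selected odd row and its two
    neighbouring even rows over the overlap produced by offset a.
    od, ed1, ed2 are 2-D NumPy arrays with P rows of length N1."""
    P, N1 = od.shape
    sid, eid = max(a, 0), min(N1 - 1, N1 - 1 + a)
    if eid < sid:
        return float("inf")  # no overlap at this offset (assumed handling)
    total = 0.0
    for p in range(P):
        for idx in range(sid, eid + 1):
            total += abs(float(od[p, idx]) - float(ed1[p, idx - a]))
            total += abs(float(od[p, idx]) - float(ed2[p, idx - a]))
    return total / (2.0 * P * (eid - sid + 1))

def best_offset(od, ed1, ed2):
    """Search a in [1 - N1, N1 - 1] for the offset with minimum cost."""
    N1 = od.shape[1]
    return min(range(1 - N1, N1), key=lambda a: alignment_cost(od, ed1, ed2, a))
```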
In some examples, the above method further comprises:
acquiring a uniform scanning sampling point data set and a sinusoidal sampling data point set;
calculating the center difference information of the data in the uniform scanning sampling point data set and the sinusoidal sampling data point set;
calculating a reference variable based on the central value difference information, the stretching coefficient, the galvanometer scanning frequency and the sine sampling frequency;
Determining a correction reference value based on the reference variable and the reference variable threshold information;
and performing approximate rounding operation according to the sum of the correction reference value and the central column information of the image in the sinusoidal sampling data point set so as to obtain a correction target value.
Illustratively, two types of sampled data are collected: a uniform scanning sample point data set and a sinusoidal sampling data point set. The uniform scanning sample point data set corresponds to an ideal endoscopic image sampled uniformly and free of scanning distortion, while the sinusoidal sampling data set is the set of sample points distorted by the mirror motion during scanning, for the reasons described above. Comparing the uniformly scanned data set with the sinusoidally sampled data set, in particular their central positions, yields the difference information between the center points, i.e. the center difference information; these differences may reflect systematic deviations that occur during scanning. One or more reference variables are then calculated from the center difference information together with the stretching coefficient, the galvanometer scanning frequency and the sinusoidal sampling frequency. The reference variables describe the scanning distortion pattern in the data set and provide the information needed for the subsequent correction. The sum of the correction reference value and the previously calculated image center column information is then formed and rounded to the nearest integer to obtain the correction target value. The correction target value is used to adjust the image data in the sinusoidal sample data set; the correction operation may include translation, rotation or other transformations of the data points in the image to compensate for distortion or stretching caused by the scan.
An object of embodiments of the present application is to correct image distortion so that the image more accurately reflects actual visual information. In this way, each sampling point of the image is adjusted according to the calculated target value, thereby improving the overall image quality and reducing the influence caused by distortion. Such a correction procedure is typically automated, and can improve processing efficiency, ensuring consistency and repeatability of results.
In some examples, the reference variable threshold information includes first threshold information and second threshold information, the second threshold information is greater than the first threshold information, the correction reference value includes a first correction reference value, a second correction reference value, and a third correction reference value,
the determining a correction reference value based on the reference variable and the reference variable threshold information includes:
determining the first correction reference value as the correction reference value in the case where the reference variable is smaller than the first threshold information, determining the second correction reference value as the correction reference value in the case where the reference variable is larger than the second threshold information, or determining the third correction reference value as the correction reference value in the case where the reference variable is larger than or equal to the first threshold information and smaller than or equal to the second threshold information:
determining the first correction reference value dn_11 based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency;
determining the second correction reference value dn_12 based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency;
determining the third correction reference value dn_13 based on the following formula:
wherein f_scan is the galvanometer scanning frequency, f_sample is the sinusoidal sampling frequency, α is the stretching coefficient, dn_2 is the center difference information, and arcsin() is the arcsine function.
Illustratively, the first and second correction reference values are both calculated from the galvanometer scanning frequency and the sinusoidal sampling frequency and are opposite numbers of each other, while the third correction reference value is determined from the galvanometer scanning frequency, the sinusoidal sampling frequency, and an arcsine value related to the product of the stretching coefficient, the center difference information, the galvanometer scanning frequency and the sinusoidal sampling frequency.
In some examples, the above method further comprises:
performing binarization processing operation on the image data to be calculated in the data set to be calculated to obtain a binarized image set;
performing morphological closing operation on the binary image data in the binary image set to obtain a closed image set;
acquiring height information, width information and edge-most column information of an elliptic-like area from a closed image in the closed image set;
And calculating correction parameters based on the height information, the width information and the edge-most column information, wherein the correction parameters comprise stretching coefficient information and center column information.
Illustratively, fig. 8 is a schematic diagram of an image to be calculated provided in an embodiment of the present application. Binarization reduces the pixel values in the image to two possible values, typically 0 and 1, to separate the foreground object from the background, i.e. the effective image area from the ineffective image area, thereby forming a binarized image as shown in fig. 9. By selecting an appropriate threshold, pixels above the threshold are set to 1 and represent the foreground, while pixels below the threshold are set to 0 and represent the background. A morphological closing operation then improves the continuity of the object in the binarized image, fills holes and cracks, and makes the boundary clearer: the image is first dilated, expanding the foreground object, and then eroded to restore the original size of the object while preserving the filling effect. This yields the closed image set; fig. 10 shows a closed image provided by this embodiment, in which the foreground object boundary is closed and free of holes and breaks. As shown in fig. 10, an ellipse-like object is extracted from the closed image and its size information is acquired: the object is analyzed, its height and width are measured, and its position in the image, i.e. the edge-most column information, is determined. The collected height, width and edge-most column information are used in the subsequent steps to calculate the correction parameters, which include the stretching coefficient and the image center column information. The correction parameters, such as the stretching coefficient, are calculated by a mathematical model based on the physical dimensions of the ellipse-like region and the pixel dimensions in the image, so as to correct scale distortion. The resulting stretching coefficient and center column information can then be used to adjust the entire image set and improve image quality.
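As a sketch under stated assumptions (8-bit grayscale input, an assumed closing kernel size, and taking the largest external contour as the ellipse-like region), this preparation step can be written with OpenCV as follows:

```python
import cv2

def measure_elliptic_region(img, kernel_size=15):
    """Binarize with Otsu, close holes, and measure the largest foreground
    region, returning its height H1, width W1 and left-most column C0
    (taking the left-most column as C0 is an assumption)."""
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return h, w, x  # height H1, width W1, edge-most column C0
```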
In some examples, the above method further comprises:
calculating the stretch coefficient information based on the height information and the width information;
the above center column information C is calculated according to the following formula:
wherein C is the column coordinate of the geometric center point of the image, C_0 is the edge-most column information, H_1 is the height information, W_1 is the width information, f_sample is the sampling frequency, f_scan is the scanning frequency, and arcsin() is the arcsine function.
The stretching coefficient information α may, for example, be calculated from the height information H_1 and the width information W_1, specifically according to the following formula:
by way of example, by correcting the parameter C, it is possible to correct distortions possibly introduced during the image scanning, such as distortions caused by the curvature of the scan line or by the non-linear scanning speed. When the corrected image is subjected to subsequent processing such as feature extraction and edge detection, a more accurate result can be obtained. This correction method is not limited to a specific image type, and is applicable to various image data acquired by the scanning apparatus.
In some examples, one row of the input data is denoted vals1[N_1] and the corresponding corrected data is denoted vals2[N_2]. As shown in fig. 11, the algorithm maps each uniform scanning sample point index n_2 to a sinusoidal sampling point index n_1, and the brightness value at the uniform scanning index n_2 is set equal to the brightness value at the sinusoidal sampling index n_1. Fig. 11 is a schematic diagram of the correction algorithm provided in an embodiment of the present application, fig. 12 is a schematic diagram of an uncorrected image provided in an embodiment of the present application, and fig. 13 is a schematic diagram of a corrected image provided in an embodiment of the present application.
This can be achieved by the following correction algorithm 1 and correction algorithm 2:
the pseudo code of correction algorithm 1 is as follows:
Input:
vals1[N_1] - sinusoidal scan signal data;
f_scan - galvanometer scanning frequency, in Hz;
f_sample - sinusoidal sampling frequency, in Hz;
C - center column;
α - stretching coefficient.
Output:
vals2[N_2] - uniform scan signal data.
The steps are as follows:
The round function in line 12 of the pseudocode is a nearest-integer rounding function.
The main steps of the correction algorithm 1 include the following:
1. Initialize variables:
vals1[N]: array storing the initial data set.
f_scan: scanning frequency, in Hz.
f_sample: sampling frequency, in Hz.
C: center column.
α: stretching coefficient, used to adjust the correction amplitude.
2. Calculate intermediate variables:
vals2[N]: array storing the corrected data.
C_2: half the length of the vals1 array, used to determine the center position of the data set.
3. Process each data point in a loop:
For n from 0 to N−1 (the length of the data set), calculate the offset d_n2 of each point.
d_n2 is calculated as n minus C_2 and characterizes the offset of the point relative to the center of the data set.
4. Select the correction method according to the value of d_n2:
The first threshold information may be −1 and the second threshold information may be 1, and the reference variable is calculated from d_n2 together with the stretching coefficient, the galvanometer scanning frequency and the sinusoidal sampling frequency. The first correction reference value is determined as the correction reference value when the reference variable is smaller than the first threshold information; the second correction reference value is determined as the correction reference value when the reference variable is greater than the second threshold information; and the third correction reference value is determined as the correction reference value when the reference variable is greater than or equal to the first threshold information and less than or equal to the second threshold information.
The first correction reference value dn_11 is determined based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency.
The second correction reference value dn_12 is determined based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency.
The third correction reference value dn_13 is determined based on the following formula:
wherein f_scan is the galvanometer scanning frequency, f_sample is the sinusoidal sampling frequency, α is the stretching coefficient, dn_2 is the center difference information, and arcsin() is the arcsine function.
5. Calculate the corrected index:
The round function is used to round C + d_n1 to obtain the corrected index n_1.
If n_1 is smaller than 0 or greater than N−1, the value of the first or last element of the array vals1 is used instead, respectively, preventing the index from going out of range.
If n_1 lies within the valid range, the corresponding value is taken directly from vals1. The algorithm thus effectively assigns a corrected value to each data point, generating a new data set vals2 that reflects the corrected sampled data.
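For illustration only, the overall structure of correction algorithm 1 can be sketched as follows; the reference variable and the correction reference values dn_11, dn_12 and dn_13 are computed by the formulas referred to above, so here they are passed in as hypothetical callables supplied by the caller, and the output length is assumed equal to the input length:

```python
def correct_row(vals1, f_scan, f_sample, C, alpha,
                ref_var, dn_11, dn_12, dn_13):
    """Map each uniform-scan index to a sinusoidal-scan index and copy its value.
    ref_var, dn_11, dn_12 and dn_13 are placeholder callables standing in for the
    formulas referenced in steps 4 and 5 above."""
    N1 = len(vals1)
    C2 = N1 / 2.0                        # centre of the input data set (step 2)
    vals2 = [0.0] * N1                   # corrected, uniformly scanned output
    for n in range(N1):                  # step 3: loop over each data point
        dn2 = n - C2                     # offset relative to the data set centre
        r = ref_var(dn2, alpha, f_scan, f_sample)
        if r < -1:                       # step 4: choose the correction reference value
            dn1 = dn_11(f_scan, f_sample)
        elif r > 1:
            dn1 = dn_12(f_scan, f_sample)
        else:
            dn1 = dn_13(dn2, alpha, f_scan, f_sample)
        n1 = int(round(C + dn1))         # step 5: corrected sinusoidal index
        n1 = max(0, min(N1 - 1, n1))     # clamp to prevent index out of range
        vals2[n] = vals1[n1]
    return vals2
```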
The pseudo code of correction algorithm 2 is as follows:
Input:
vals1[N_1] - sinusoidal scan signal data;
f_scan - galvanometer scanning frequency, in Hz;
f_sample - sinusoidal sampling frequency, in Hz;
C - center column;
α - stretching coefficient.
Output:
vals2[N_2] - uniform scan signal data.
The steps are as follows:
The main steps of correction algorithm 2 include the following:
1. Initialize variables:
vals1[N]: array storing the initial data set.
f_scan: scanning frequency, in Hz.
f_sample: sampling frequency, in Hz.
C: center column.
α: stretching coefficient, used to adjust the correction amplitude.
2. Calculate intermediate variables:
vals2[N]: array storing the corrected data.
C_2: half the length of the vals1 array, used to determine the center position of the data set.
3. Process each data point in a loop:
For n from 0 to N−1 (the length of the data set), calculate the offset d_n2 of each point.
d_n2 is calculated as n minus C_2 and characterizes the offset of the point relative to the center of the data set.
4. Select the correction method according to the value of d_n2:
The first threshold information may be −1 and the second threshold information may be 1. The first correction reference value is determined as the correction reference value when the reference variable is smaller than the first threshold information; the second correction reference value is determined as the correction reference value when the reference variable is greater than the second threshold information; and the third correction reference value is determined as the correction reference value when the reference variable is greater than or equal to the first threshold information and less than or equal to the second threshold information.
Referring to fig. 14, an embodiment of a confocal endoscopic image optimization apparatus in an embodiment of the present application may include:
A first obtaining unit 21, configured to obtain a data set to be calculated, where the data set to be calculated is obtained by performing a data denoising operation on data to be processed;
a second obtaining unit 22, configured to obtain an alignment parameter based on the data set to be calculated;
a third acquisition unit 23 for acquiring a correction target value based on the data set to be calculated;
a fourth obtaining unit 24, configured to perform an alignment operation on the sinusoidal sampling data point set based on the alignment parameter, so as to obtain an aligned sinusoidal sampling data point set;
a fifth acquisition unit 25 for performing a correction operation on the aligned sinusoidal sample data point set based on the correction target value to acquire an optimized data set.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Embodiments of the present application also provide a computer program product comprising computer software instructions that, when run on a processing device, cause the processing device to perform the confocal endoscope image optimization procedure in the corresponding embodiment.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. Usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., Solid State Disks (SSDs)), among others.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for optimizing a confocal endoscopic image, comprising:
acquiring a data set to be calculated, wherein the data set to be calculated is acquired by performing data denoising operation on data to be processed;
acquiring an alignment parameter based on the data set to be calculated;
acquiring a correction target value based on the data set to be calculated;
performing alignment operation on the sinusoidal sampling data point set based on the alignment parameters to obtain an aligned sinusoidal sampling data point set;
and correcting the aligned sine sampling data point set based on the correction target value to obtain an optimized data set.
2. The confocal endoscopic image optimization method of claim 1, further comprising:
selecting a first data set and two second data sets from the calculation data set, wherein each datum in the second data sets is adjacent, within the calculation data set, to a datum in the first data set, and second data at the same position in the two different second data sets are located on the two sides of the corresponding first datum of the first data set within the calculation data set;
performing an offset operation on the second data sets with the first data set as a reference, and calculating an alignment cost value;
and determining a target alignment parameter based on the alignment cost value.
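As a non-limiting sketch of the offset search recited in claim 2 (evaluated with a cost of the kind in claim 5), the Python fragment below shifts the two second data sets against the first data set over a range of candidate offsets and keeps the offset with the lowest normalized sum of absolute differences. The function name best_row_offset, the max_offset parameter, the (P, N) array layout and the normalization choice are illustrative assumptions, not part of the claimed method.

import numpy as np

def best_row_offset(first_rows, second_rows_a, second_rows_b, max_offset=8):
    # first_rows: the first data set, shape (P, N); second_rows_a/b: the two second data sets
    first = np.asarray(first_rows, dtype=float)
    a = np.asarray(second_rows_a, dtype=float)
    b = np.asarray(second_rows_b, dtype=float)
    costs = []
    for d in range(-max_offset, max_offset + 1):
        # offset operation on the second data sets, with the first data set as reference
        shifted_a = np.roll(a, d, axis=1)
        shifted_b = np.roll(b, d, axis=1)
        # alignment cost value: normalized sum of absolute differences
        cost = (np.abs(first - shifted_a).sum()
                + np.abs(first - shifted_b).sum()) / first.size
        costs.append(cost)
    # target alignment parameter: the offset with the minimal cost
    return int(np.argmin(costs)) - max_offset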
3. The confocal endoscopic image optimization method of claim 2, wherein the selecting a first data set and two second data sets from the calculation data set comprises:
determining a start line number and an end line number of a target area in the image in the calculation dataset;
performing segmentation processing based on the start line number, the end line number, and a preset number of segments, and acquiring the nearest odd line or even line at each segmentation position to form the first data set;
acquiring, based on the first data set, second data adjacent to the first data in the first data set to form the two second data sets.
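The row selection of claim 3 could be sketched as follows, with numpy.linspace providing the segmentation positions; taking the nearest odd line for the first data set (rather than the even line) and the helper name select_row_sets are assumptions made only for illustration.

import numpy as np

def select_row_sets(image, start_row, end_row, n_segments=8):
    # segmentation positions between the start and end line numbers of the target area
    positions = np.linspace(start_row + 1, end_row - 2, n_segments).astype(int)
    # nearest odd line at each segmentation position forms the first data set
    odd_rows = np.where(positions % 2 == 1, positions, positions + 1)
    first = image[odd_rows]
    # the adjacent lines on both sides form the two second data sets
    second_above = image[odd_rows - 1]
    second_below = image[odd_rows + 1]
    return first, second_above, second_below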
4. A confocal endoscopic image optimization method according to claim 3, wherein said determining a start line number and an end line number of a target area in an image in said calculation dataset comprises:
acquiring a pixel value histogram of image data in the calculation dataset;
acquiring threshold information by using an OTSU algorithm based on the pixel value histogram;
the starting line number and the ending line number are determined based on the threshold information.
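A minimal illustration of claim 4: compute an Otsu threshold from the pixel-value histogram, then take the first and last rows containing above-threshold pixels as the start and end line numbers. The 8-bit value range and the "any pixel above threshold" criterion for a foreground row are assumptions of this sketch.

import numpy as np

def target_row_range(image):
    # pixel value histogram of the image data (8-bit values assumed)
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class probability up to each gray level
    mu = np.cumsum(p * np.arange(256))       # cumulative mean up to each gray level
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        # OTSU criterion: between-class variance for every candidate threshold
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    threshold = int(np.nanargmax(sigma_b))
    # rows containing foreground pixels delimit the target area
    rows = np.where((image > threshold).any(axis=1))[0]
    return int(rows[0]), int(rows[-1])       # start line number, end line number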
5. The confocal endoscopic image optimization method of claim 2, further comprising:
the alignment cost value cost is calculated by the following formula:
cost(D) = \frac{1}{N} \sum_{p=1}^{P} \sum_{id} \left( \left| od_p(id) - ed_p^{1}(id) \right| + \left| od_p(id) - ed_p^{2}(id) \right| \right)
wherein P represents the number of lines used in the alignment operation, eid and sid represent the end line number and the start line number respectively, od_p(id) represents the id-th data point of the odd row p at the offset D, ed_p^{1}(id) and ed_p^{2}(id) represent the corresponding points of the id-th data point of the odd row p in the two different even-row data sets, the double summation indicates that all selected odd rows and every data point in those rows are traversed, |od_p(id) - ed_p^{1}(id)| and |od_p(id) - ed_p^{2}(id)| are the absolute differences between the odd and even rows for the given data point at the offset D, and N is a normalization factor.
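Read as code, the cost of claim 5 for a single candidate offset D reduces to a double summation of absolute odd/even differences; dividing by the total number of compared samples is one plausible realization of the normalization factor N and is stated here only as an assumption.

import numpy as np

def alignment_cost(odd_rows, even_rows_1, even_rows_2, d):
    # odd_rows, even_rows_1, even_rows_2: arrays of shape (P, N) built from the selected lines
    odd = np.asarray(odd_rows, dtype=float)
    e1 = np.asarray(even_rows_1, dtype=float)
    e2 = np.asarray(even_rows_2, dtype=float)
    shifted = np.roll(odd, d, axis=1)               # od_p(id) at offset D
    # |od_p(id) - ed1_p(id)| + |od_p(id) - ed2_p(id)|, summed over all rows and data points
    diff = np.abs(shifted - e1) + np.abs(shifted - e2)
    return diff.sum() / diff.size                   # normalization factor N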
6. The confocal endoscopic image optimization method of claim 1, further comprising:
acquiring a uniform scanning sampling point data set and a sinusoidal sampling data point set;
calculating center difference information of the data in the uniform scanning sampling point data set and the sinusoidal sampling data point set;
calculating a reference variable based on the center difference information, the stretching coefficient, the galvanometer scanning frequency and the sinusoidal sampling frequency;
determining a correction reference value based on the reference variable and the reference variable threshold information;
and performing an approximate rounding operation on the sum of the correction reference value and the center column information of the image in the sinusoidal sampling data point set, so as to obtain the correction target value.
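Because the formula images for the reference variable and the correction reference values are not reproduced in the text of claims 6 and 7, the fragment below only mirrors the control flow: a reference variable is compared against two thresholds to choose one of three correction reference values, and the chosen value is summed with the image center column and rounded. Every expression shown (the reference variable and the three dn values) is a placeholder, not the claimed formula.

import numpy as np

def correction_target(dn2, alpha, f_scan, f_sample, center_col, t1, t2):
    # placeholder reference variable built from the quantities named in claim 6
    ref = alpha * dn2 * f_scan / f_sample
    if ref < t1:
        dn = f_sample / (4.0 * f_scan)                                      # stands in for dn_11
    elif ref > t2:
        dn = -f_sample / (4.0 * f_scan)                                     # stands in for dn_12
    else:
        dn = (f_sample / (2.0 * np.pi * f_scan)) * np.arcsin(alpha * dn2)   # stands in for dn_13
    # approximate rounding of the sum of the correction reference value and the center column
    return int(round(dn + center_col))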
7. The confocal endoscopic image optimization method of claim 6, wherein the reference variable threshold information includes first threshold information and second threshold information, the second threshold information being greater than the first threshold information, and the correction reference values include a first correction reference value, a second correction reference value, and a third correction reference value,
the determining a correction reference value based on the reference variable and the reference variable threshold information includes:
determining the first correction reference value as the correction reference value in the case where the reference variable is smaller than the first threshold information, determining the second correction reference value as the correction reference value in the case where the reference variable is larger than the second threshold information, or determining the third correction reference value as the correction reference value in the case where the reference variable is not smaller than the first threshold information and not larger than the second threshold information:
determining the first correction reference value dn_11 based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency;
determining the second correction reference value dn_12 based on the following formula:
wherein f_scan is the galvanometer scanning frequency and f_sample is the sinusoidal sampling frequency;
determining the third correction reference value dn_13 based on the following formula:
wherein f_scan is the galvanometer scanning frequency, f_sample is the sinusoidal sampling frequency, α is the stretching coefficient, dn_2 is the center difference information, and arcsin() is an arcsine function.
8. The confocal endoscopic image optimization method of claim 5, further comprising:
performing a binarization processing operation on the image data to be calculated in the data set to be calculated to obtain a binarized image set;
performing a morphological closing operation on the binarized image data in the binarized image set to obtain a closed image set;
acquiring height information, width information and edge-most column information of an ellipse-like area from a closed image in the closed image set;
and calculating correction parameters based on the height information, the width information and the edge-most column information, wherein the correction parameters include stretching coefficient information and center column information.
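A compact sketch of claim 8 using OpenCV: Otsu binarization, a morphological closing, and a bounding-box measurement of the ellipse-like bright region. The 9x9 elliptical structuring element and the assumption of an 8-bit single-channel frame are illustrative choices.

import cv2
import numpy as np

def ellipse_region_parameters(image):
    # binarization of the image data (8-bit, single-channel frame assumed)
    _, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # morphological closing to fill small gaps in the ellipse-like area
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # height, width and edge-most column of the closed region
    ys, xs = np.nonzero(closed)
    height = int(ys.max() - ys.min() + 1)    # height information H1
    width = int(xs.max() - xs.min() + 1)     # width information W1
    edge_col = int(xs.min())                 # edge-most column information C0
    return height, width, edge_col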
9. The confocal endoscopic image optimization method of claim 7, further comprising:
calculating the stretch coefficient information according to the height information and the width information;
calculating the center column information C according to the following formula:
wherein C is the column coordinate of the geometric center point of the image, C_0 is the edge-most column information, H_1 is the height information, W_1 is the width information, f_sample is the sampling frequency, f_scan is the scanning frequency, and arcsin() is an arcsine function.
10. A confocal endoscopic image optimization apparatus comprising:
the first acquisition unit is used for acquiring a data set to be calculated, wherein the data set to be calculated is acquired by performing data denoising operation on data to be processed;
a second acquisition unit for acquiring an alignment parameter based on the data set to be calculated;
a third acquisition unit configured to acquire a correction target value based on the data set to be calculated;
a fourth obtaining unit, configured to perform an alignment operation on the sinusoidal sampling data point set based on the alignment parameter, so as to obtain an aligned sinusoidal sampling data point set;
and a fifth acquisition unit, configured to perform a correction operation on the aligned sinusoidal sampling data point set based on the correction target value, so as to acquire an optimized data set.
CN202311601542.XA 2023-11-28 2023-11-28 Confocal endoscope image optimization method and related equipment Pending CN117593228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311601542.XA CN117593228A (en) 2023-11-28 2023-11-28 Confocal endoscope image optimization method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311601542.XA CN117593228A (en) 2023-11-28 2023-11-28 Confocal endoscope image optimization method and related equipment

Publications (1)

Publication Number Publication Date
CN117593228A true CN117593228A (en) 2024-02-23

Family

ID=89922643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311601542.XA Pending CN117593228A (en) 2023-11-28 2023-11-28 Confocal endoscope image optimization method and related equipment

Country Status (1)

Country Link
CN (1) CN117593228A (en)

Similar Documents

Publication Publication Date Title
US7903851B2 (en) Method and system for vertebrae and intervertebral disc localization in magnetic resonance images
Miri et al. Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction
US10366504B2 (en) Image processing apparatus and image processing method for performing three-dimensional reconstruction of plurality of images
JP6150583B2 (en) Image processing apparatus, endoscope apparatus, program, and operation method of image processing apparatus
JP4485947B2 (en) Method for processing an image acquired by a guide comprising a plurality of optical fibers
Ravì et al. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction
WO2007142682A2 (en) Method for detecting streaks in digital images
CN106981090B (en) Three-dimensional reconstruction method for in-tube stepping unidirectional beam scanning tomographic image
US8582839B2 (en) Ultrasound system and method of forming elastic images capable of preventing distortion
CN117291843B (en) Efficient management method for image database
CN108022220B (en) Ultrasonic image speckle noise removing method
CN117593228A (en) Confocal endoscope image optimization method and related equipment
CN111161852B (en) Endoscope image processing method, electronic equipment and endoscope system
CN107529962B (en) Image processing apparatus, image processing method, and recording medium
CN116740501A (en) Training method and application of image blurring region restoration compensation model
EP2693397B1 (en) Method and apparatus for noise reduction in an imaging system
CN117541633A (en) Confocal endoscope image alignment parameter calculation method and related equipment
WO2018104609A1 (en) Method for tracking a target in a sequence of medical images, associated device, terminal apparatus and computer programs
WO2021009804A1 (en) Method for learning threshold value
CN117635495A (en) Multi-index-value confocal endoscope image correction method and related equipment
CN117541569A (en) Confocal endoscope invalid image screening method and related equipment
CN117635494A (en) Multi-reference-value confocal endoscope image correction method and related equipment
CN116664560B (en) Gastrointestinal tract image data segmentation method
CN113298711B (en) Optical fiber bundle multi-frame image super-resolution reconstruction method and device
CN117710233B (en) Depth of field extension method and device for endoscopic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination