CN117710250A - Method for eliminating honeycomb structure imaged by fiberscope - Google Patents

Method for eliminating honeycomb structure imaged by fiberscope

Info

Publication number
CN117710250A
CN117710250A (application CN202410156831.1A)
Authority
CN
China
Prior art keywords
point
quadrant
value
points
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410156831.1A
Other languages
Chinese (zh)
Other versions
CN117710250B (en)
Inventor
任智强
孙明建
李圣波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Wuyou Microinvasive Medical Technology Co ltd
Original Assignee
Jiangsu Wuyou Microinvasive Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Wuyou Microinvasive Medical Technology Co ltd filed Critical Jiangsu Wuyou Microinvasive Medical Technology Co ltd
Priority to CN202410156831.1A priority Critical patent/CN117710250B/en
Publication of CN117710250A publication Critical patent/CN117710250A/en
Application granted granted Critical
Publication of CN117710250B publication Critical patent/CN117710250B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20032: Median filtering

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a method for eliminating the honeycomb structure in fiberscope imaging. The method obtains a complete image through interpolation, locates the foreground region, locates the optical fiber center region, fine-tunes the optical fiber center region, restores the image from the optical fiber center region, and sharpens the result to recover detail, yielding a processed image that better restores the image information.

Description

Method for eliminating honeycomb structure imaged by fiberscope
Technical Field
The invention relates to the technical field of image processing, in particular to a method for eliminating a honeycomb structure imaged by a fiberscope.
Background
Images captured with a fiberscope exhibit a honeycomb structure that severely affects the viewing experience. To alleviate this, three classes of algorithms are mainly used in the prior art to eliminate the honeycomb structure: (1) spatial domain filtering; (2) frequency domain filtering; and (3) interpolation reconstruction.
Spatial domain filtering mainly includes Gaussian filtering, median filtering, mean filtering and the like. On this basis, the honeycomb structure can be suppressed by first applying histogram equalization to the image and then Gaussian filtering it. However, spatial domain methods generally suffer from incomplete removal of the honeycomb pattern and from image blurring.
Frequency domain filtering: the frequency domain information of the image is first obtained with a two-dimensional fast Fourier transform. Next, a specially designed Gaussian low-pass filter suppresses the frequency components of the honeycomb pattern. Finally, a restored image is obtained by applying the inverse fast Fourier transform to the filtered frequency domain information. Because this approach relies on a designed band-stop filter, the honeycomb structure and the genuinely useful information are difficult to separate completely, and the processed image also shows severe halo artifacts.
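For reference, the frequency domain approach just described can be sketched in a few lines of Python; the function name, the Gaussian cutoff sigma and the use of NumPy's FFT routines below are illustrative assumptions rather than details taken from any cited method.

import numpy as np

def remove_honeycomb_frequency_domain(img, sigma=30.0):
    """Illustrative sketch of the prior-art frequency domain approach:
    2-D FFT, Gaussian low-pass, inverse FFT. sigma is an assumed cutoff."""
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))      # frequency domain information
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2
    lowpass = np.exp(-dist2 / (2.0 * sigma ** 2))      # Gaussian low-pass filter
    restored = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass))
    return np.real(restored)                           # restored (smoothed) image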
Interpolation reconstruction: the approximate optical fiber centers are first found, each candidate center is scored with a Gaussian distribution, the accurate fiber centers are obtained by combining the Gaussian scoring information with the fiber-bundle size information, and an image is then obtained by interpolating from the pixels at the fiber centers. A method that segments the fiber cladding and the fiber cores has also been proposed, in which an improved non-local means (NLM) algorithm denoises and repairs the fiber-bundle image, effectively removing the honeycomb structure. Compared with spatial domain and frequency domain filtering, the resulting image is clearer, but conventional interpolation reconstruction is designed for single images and is not suitable for an endoscope camera system.
Disclosure of Invention
In view of the above, the present invention is directed to a method for eliminating a honeycomb structure formed by a fiberscope, so as to solve the problem that the existing interpolation reconstruction algorithm is not suitable for an endoscopic camera system.
Based on the above objects, the present invention provides a method for eliminating a fiberscope imaging honeycomb structure, comprising the steps of:
s1, recovering an original raw image into a complete image through interpolation;
s2, positioning a foreground region in the complete image;
s3, obtaining an optical fiber center candidate region in the foreground region through a self-adaptive median filtering algorithm;
s4, fine tuning is carried out on the optical fiber center candidate region to obtain an optical fiber center region;
s5, expanding a window around each pixel point of the R, G and B channels of the original raw image, finding, within the expanded window, the points that belong to the same channel and lie in the optical fiber center region, and processing the pixel point according to a predetermined rule combined with bilinear interpolation, so as to obtain the complete R channel, G channel and B channel images and thereby the RGGB image of the processed raw domain;
s6, sharpening the RGGB image of the processed raw domain to obtain an image with the honeycomb structure eliminated.
Preferably, S1 further comprises: and carrying out bilinear interpolation by using the G channel of the original raw image to obtain a complete image.
Preferably, S2 further comprises:
s21, selecting pixel values of four corner areas of the complete image I to obtain a maximum pixel value I_MAX;
s22, adding an OFFSET value I_MAX_OFFSET on the basis of the I_MAX to obtain a foreground threshold value I_TH;
s23, judging whether I_TH is larger than a preset cutoff value I_CUTOFF; if so, setting I_TH to I_CUTOFF, otherwise keeping it unchanged;
and S24, locating the area with the pixel value larger than the foreground threshold I_TH in the complete image I as a foreground area.
Preferably, S3 further comprises:
s31, MEDIAN filtering is used in a foreground region to obtain an image I_MEDIAN;
s32, adding an OFFSET value I_MEDIAN_OFFSET on the basis of the I_MEDIAN to obtain a central threshold value I_M_TH;
and S33, judging the region with the pixel value larger than the CENTER threshold I_M_TH in the complete image I as an optical fiber CENTER candidate region center_P.
Preferably, in step S31, the median-filtered window size calculation step includes:
s311, calculating a variable x according to the following formula:
where r represents the radius of the foreground region and N represents the number of optical fibers in the bundle;
s312, judging whether x is smaller than 3 and, if so, setting it to 3; then rounding x to an integer value, which is used as the median-filter window size MEDIAN_SIZE.
Preferably, S4 further comprises:
s41, respectively processing R, G, B channels of an original raw image, and expanding each pixel point into a fine tuning window, wherein the size of the fine tuning window is equal to the size of a median filtered window;
s42, selecting points in the optical fiber center candidate area in the fine tuning window, and averaging pixel values of the points to obtain values of the fine-tuned pixel points, so that the optical fiber center area containing the fine-tuned pixel information is obtained.
Preferably, processing the pixel points according to the predetermined rule combined with bilinear interpolation comprises:
constructing a distance MATRIX DIS_MATRIX, wherein the window SIZE DIS_SIZE of the distance MATRIX is equal to the window SIZE of median filtering;
extending a distance window with the SIZE of DIS_SIZE multiplied by DIS_SIZE by taking the pixel point i as the center;
selecting, in the distance window, the points that both belong to the same channel as the pixel point i and lie in the optical fiber center region, dividing the obtained points into four quadrants, and finding the point nearest to i in each quadrant with the help of the distance matrix:
if no nearest point exists, i is left unprocessed and keeps its original pixel value;
if there is only one nearest point, the pixel value of i is set to the value of that point;
if points exist in only two adjacent quadrants (one-two, two-three, three-four or one-four), the value of the nearest of these points is taken as the pixel value of i;
if points exist only in the diagonal quadrants one and three, or only in quadrants two and four, the new pixel value of i is calculated by bilinear interpolation of those two points;
if points exist only in quadrants one, two and three, it is checked whether the sum of the distances of the quadrant-one and quadrant-three points exceeds twice the distance of the quadrant-two point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-one and quadrant-three points, otherwise the pixel value of i is set to the value of the quadrant-two point;
if points exist only in quadrants one, two and four, it is checked whether the sum of the distances of the quadrant-two and quadrant-four points exceeds twice the distance of the quadrant-one point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-two and quadrant-four points, otherwise the pixel value of i is set to the value of the quadrant-one point;
if points exist only in quadrants two, three and four, it is checked whether the sum of the distances of the quadrant-two and quadrant-four points exceeds twice the distance of the quadrant-three point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-two and quadrant-four points, otherwise the pixel value of i is set to the value of the quadrant-three point;
if points exist only in quadrants one, three and four, it is checked whether the sum of the distances of the quadrant-one and quadrant-three points exceeds twice the distance of the quadrant-four point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-one and quadrant-three points, otherwise the pixel value of i is set to the value of the quadrant-four point;
if all four quadrants contain points, the distance of the quadrant-one and quadrant-three pair is compared with that of the quadrant-two and quadrant-four pair, and bilinear interpolation is performed with the pair having the smaller distance; if the two distances differ by no more than 2, the gradients are compared instead and the pair with the smaller gradient is used for bilinear interpolation.
Preferably, the sharpening processing of the RGGB image of the processed raw domain includes:
selecting a corresponding area NEW_FG from the RGGB image NEW_RAW_IMG of the processed RAW domain according to the foreground area, and processing R, G, G, B channels by the following steps:
performing convolution on the corresponding channel of NEW_FG with an 11 x 11 Gaussian kernel to obtain NEW_FG_CONV, subtracting NEW_FG_CONV from NEW_FG to obtain the detail and edge information NEW_FG_DETAIL, amplifying NEW_FG_DETAIL by a preset factor M, and adding the amplified detail back to NEW_FG to obtain the enhanced result NEW_FG_RES, namely the pixel values of the corresponding channel after sharpening.
The invention has the beneficial effects that:
the invention is based on the way of interpolation reconstruction and therefore will be superior to spatial domain filtering and frequency domain filtering.
The invention starts from the imaging principle, and applies an algorithm to the raw image to restore the image information better.
The invention locates the optical fiber centers using an adaptive median approach, which has a small computational cost and works well, and combines it with a fiber-center fine-tuning algorithm so that the pixel values of the optical fiber center region are more accurate.
The interpolation method provided by the invention is simple and efficient, and the overall effect accords with the expected result.
The final image of the invention uses a sharpening mode to recover edge details, so that the picture effect is further enhanced.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only of the invention and that other drawings can be obtained from them without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for eliminating a fiber-imaged honeycomb structure according to an embodiment of the invention;
FIG. 2 is an image of a honeycomb structure prior to processing in accordance with an embodiment of the present invention;
FIG. 3 is a graph of the final result after processing according to an embodiment of the present invention.
Detailed Description
The present invention will be further described in detail with reference to specific embodiments in order to make the objects, technical solutions and advantages of the present invention more apparent.
It is to be noted that unless otherwise defined, technical or scientific terms used herein should be taken in a general sense as understood by one of ordinary skill in the art to which the present invention belongs. The terms "first," "second," and the like, as used herein, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
As shown in fig. 1, an embodiment of the present disclosure provides a method of eliminating a fiberscope-imaged honeycomb structure, comprising the steps of:
s1, recovering an original raw image into a complete image through interpolation;
Because the invention operates in the RAW domain, the original raw image RAW_IMG is in the Bayer RGGB format and cannot be used directly; it must first be restored to a complete image by interpolation to facilitate subsequent processing. Since the format is RGGB, the G channel carries the most information, so in this embodiment bilinear interpolation is performed using the G channel to obtain the complete image I.
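A minimal sketch of this interpolation step, assuming the 2x2 Bayer cell is laid out as [[R, G], [G, B]] and that missing G samples are filled by averaging their four G neighbours; the function name and the exact averaging kernel are assumptions of this illustration, not details stated in the patent.

import numpy as np
from scipy.ndimage import convolve

def interpolate_green_channel(raw_img):
    """Fill the complete image I from the G samples of an RGGB Bayer raw frame
    by bilinear (4-neighbour) interpolation. Layout assumption: in each 2x2
    cell the pattern is [[R, G], [G, B]], so G is absent at R and B sites."""
    h, w = raw_img.shape
    g_mask = np.zeros((h, w), dtype=bool)
    g_mask[0::2, 1::2] = True   # G sites next to R
    g_mask[1::2, 0::2] = True   # G sites next to B

    g_sparse = np.where(g_mask, raw_img, 0).astype(np.float64)
    kernel = np.array([[0, 1, 0],
                       [1, 0, 1],
                       [0, 1, 0]], dtype=np.float64)
    neighbour_sum = convolve(g_sparse, kernel, mode="mirror")
    neighbour_cnt = convolve(g_mask.astype(np.float64), kernel, mode="mirror")

    full = raw_img.astype(np.float64).copy()
    missing = ~g_mask
    full[missing] = neighbour_sum[missing] / np.maximum(neighbour_cnt[missing], 1)
    return full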
S2, positioning a foreground region in the complete image, wherein the method specifically comprises the following steps of:
s21, selecting pixel values of the four corner areas of the complete image I to obtain the maximum pixel value I_MAX, wherein the size and shape of the sampled corner areas can be user-defined, for example a 9 multiplied by 9 window placed flush against each of the four corners;
s22, adding an OFFSET value I_MAX_OFFSET on the basis of the I_MAX to obtain a foreground threshold I_TH, wherein the OFFSET value I_MAX_OFFSET is a settable value, and the default value is set to be 2 in the embodiment.
S23, judging whether I_TH is larger than a preset cutoff value I_CUTOFF; if so, setting I_TH to I_CUTOFF, otherwise keeping it unchanged;
and S24, locating the area with the pixel value larger than the foreground threshold I_TH in the complete image I as a foreground area.
Another common way to locate the foreground region is the Otsu (OTSU) thresholding algorithm; however, because changes in the picture size affect the selection of the foreground region, this embodiment locates the foreground region in the manner described above, which is more robust than the Otsu algorithm.
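Steps S21 to S24 reduce to a few array operations, sketched below; the 9x9 corner window and I_MAX_OFFSET = 2 follow the defaults given in this embodiment, whereas the I_CUTOFF value and the function name are assumptions of the sketch.

import numpy as np

def locate_foreground(I, corner=9, i_max_offset=2.0, i_cutoff=60.0):
    """Return a boolean foreground mask for the complete image I (steps S21-S24).
    corner=9 and i_max_offset=2 follow the embodiment; i_cutoff is an assumed
    upper bound on the threshold, not a value stated in the patent."""
    corners = [I[:corner, :corner], I[:corner, -corner:],
               I[-corner:, :corner], I[-corner:, -corner:]]
    i_max = max(c.max() for c in corners)    # S21: brightest corner pixel
    i_th = i_max + i_max_offset              # S22: add offset
    i_th = min(i_th, i_cutoff)               # S23: clamp to the cutoff
    return I > i_th                          # S24: foreground mask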
S3, obtaining an optical fiber center candidate region in the foreground region through an adaptive median filtering algorithm, wherein the method specifically comprises the following steps:
s31, MEDIAN filtering is used in a foreground region to obtain an image I_MEDIAN;
s32, adding an OFFSET value I_MEDIAN_OFFSET on the basis of the I_MEDIAN to obtain a central threshold value I_M_TH;
s33, judging the region with the pixel value larger than the CENTER threshold I_M_TH in the complete image I as an optical fiber CENTER candidate region center_P, wherein the value of the center_P in the candidate region is 1, and otherwise, the value of the center_P is 0.
The median-filter window size MEDIAN_SIZE depends on the size of the foreground region: it increases when the foreground region is large and decreases when it is small. This is because the fiber bundle of a typical medical fiberscope contains about N optical fibers (N is 20000 by default), which together correspond to the foreground region of the picture. However, because of the zoom of the optical bayonet, the size of the pixel area corresponding to each fiber varies with the zoom factor, so to make the algorithm more robust it is necessary to estimate how many pixels each fiber occupies. In this embodiment, the variable x is calculated according to the following formula, where r denotes the radius of the foreground region and N the number of optical fibers in the bundle; when x is smaller than 3 it is set to 3, and x is then rounded to an integer, which is taken as MEDIAN_SIZE.
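The formula for x is not reproduced in this text, so the sketch below assumes the natural estimate x = sqrt(pi * r^2 / N), i.e. the foreground area divided by the fiber count and converted to a linear pixel size; that expression, the default I_MEDIAN_OFFSET and the function names are assumptions of this illustration rather than the patent's stated formula.

import numpy as np
from scipy.ndimage import median_filter

def adaptive_median_size(r, n_fibers=20000):
    """Estimate MEDIAN_SIZE from the foreground radius r and fiber count N.
    Assumption: x = sqrt(pi * r**2 / N), the linear pixel size of the area
    covered by one fiber; the patent's exact formula is not shown here."""
    x = np.sqrt(np.pi * r * r / n_fibers)
    x = max(x, 3.0)                    # S312: lower bound of 3
    return int(round(x))

def fiber_center_candidates(I, fg_mask, r, i_median_offset=2.0):
    """S31-S33: median filter inside the foreground, add an offset, threshold.
    i_median_offset is an assumed default value."""
    size = adaptive_median_size(r)
    i_median = median_filter(I, size=size)
    center_p = fg_mask & (I > (i_median + i_median_offset))
    return center_p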
S4, fine tuning the optical fiber center candidate region. From the imaging principle of a fiberscope, only the pixels at the center of each optical fiber are the most reliable; the brightness of the imaged point light sources generally falls off gradually in the surrounding area. Moreover, the center candidate region CENTER_P is a region rather than a single point per fiber, so it needs to be refined. Fine tuning is therefore performed on the optical fiber center candidate region to obtain the optical fiber center region, which specifically comprises:
The R, G and B pixels of the original RAW_IMG are processed separately. For the current R channel pixel point, the R channel pixel values inside the window REFINE_SIZE are selected, restricted to points that lie in the candidate region CENTER_P (if the current R channel pixel point is not itself in CENTER_P, no calculation is performed), and the value of the current pixel point is obtained by averaging these pixels. The G and B channels are processed in the same way, yielding the fine-tuned optical fiber center region REFINE_P that contains the refined pixel information.
Here the window REFINE_SIZE is equal to MEDIAN_SIZE, but is required to be at least 9.
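A sketch of this fine-tuning for one channel; the use of a uniform (box) filter to realise the windowed mean and the channel_mask argument marking the Bayer sites of the current channel are implementation assumptions of this sketch.

import numpy as np
from scipy.ndimage import uniform_filter

def refine_centers(raw_img, center_p, channel_mask, refine_size):
    """Average, inside a refine_size window, the raw values that belong both
    to the current channel and to the candidate region CENTER_P (step S4).
    channel_mask marks the Bayer sites of the channel being processed."""
    refine_size = max(refine_size, 9)            # REFINE_SIZE is at least 9
    valid = (center_p & channel_mask).astype(np.float64)
    vals = raw_img.astype(np.float64) * valid
    win_sum = uniform_filter(vals, size=refine_size) * refine_size ** 2
    win_cnt = uniform_filter(valid, size=refine_size) * refine_size ** 2
    refined = np.where(win_cnt > 0, win_sum / np.maximum(win_cnt, 1e-9), raw_img)
    # only candidate pixels of this channel are updated; others keep their value
    return np.where(valid > 0, refined, raw_img)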
S5, recovering an image according to the central area of the optical fiber, wherein the method specifically comprises the following steps:
constructing a distance MATRIX DIS_MATRIX, wherein the window SIZE DIS_SIZE of the distance MATRIX is equal to the window SIZE of median filtering;
The R, G and B pixels of the original RAW_IMG are processed separately. A DIS_SIZE x DIS_SIZE window is expanded around the current R channel pixel point R_i;
pixels are selected in the distance window that both belong to the R channel and lie in the optical fiber center region; the points so obtained are divided into four quadrants, and the point nearest to R_i in each quadrant is found with the help of the distance matrix:
if no nearest point exists, R_i is left unprocessed and keeps its original pixel value;
if there is only one nearest point, the pixel value of R_i is set to the value of that point;
if points exist in only two adjacent quadrants (one-two, two-three, three-four or one-four), the value of the nearest of these points is taken as the pixel value of R_i;
if points exist only in the diagonal quadrants one and three, or only in quadrants two and four, the new pixel value of R_i is calculated by bilinear interpolation of those two points;
if points exist only in quadrants one, two and three, it is checked whether the sum of the distances of the quadrant-one and quadrant-three points exceeds twice the distance of the quadrant-two point; if so, the new pixel value of R_i is obtained by bilinear interpolation of the quadrant-one and quadrant-three points, otherwise the pixel value of R_i is set to the value of the quadrant-two point;
if points exist only in quadrants one, two and four, it is checked whether the sum of the distances of the quadrant-two and quadrant-four points exceeds twice the distance of the quadrant-one point; if so, the new pixel value of R_i is obtained by bilinear interpolation of the quadrant-two and quadrant-four points, otherwise the pixel value of R_i is set to the value of the quadrant-one point;
if points exist only in quadrants two, three and four, it is checked whether the sum of the distances of the quadrant-two and quadrant-four points exceeds twice the distance of the quadrant-three point; if so, the new pixel value of R_i is obtained by bilinear interpolation of the quadrant-two and quadrant-four points, otherwise the pixel value of R_i is set to the value of the quadrant-three point;
if points exist only in quadrants one, three and four, it is checked whether the sum of the distances of the quadrant-one and quadrant-three points exceeds twice the distance of the quadrant-four point; if so, the new pixel value of R_i is obtained by bilinear interpolation of the quadrant-one and quadrant-three points, otherwise the pixel value of R_i is set to the value of the quadrant-four point;
if all four quadrants contain points, the distance of the quadrant-one and quadrant-three pair is compared with that of the quadrant-two and quadrant-four pair, and bilinear interpolation is performed with the pair having the smaller distance; if the two distances differ by no more than 2, the gradients are compared instead and the pair with the smaller gradient is used for bilinear interpolation.
All R channel pixels can be interpolated as described above, and the G and B channel images are obtained in the same way.
The image produced by this interpolation is still an RGGB image in the RAW domain, and is referred to as NEW_RAW_IMG.
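A condensed sketch of the per-pixel decision used in S5 for one channel. It implements only the no-point, single-quadrant and diagonal-pair branches exactly and falls back to the overall nearest point for the remaining quadrant combinations, so it is a simplification of the full rule set above; the function name and quadrant bookkeeping are assumptions of the sketch.

import numpy as np

def interpolate_pixel(i_row, i_col, raw_img, channel_center_mask, dis_size):
    """Simplified per-pixel reconstruction for one channel (step S5).
    channel_center_mask marks pixels that are both of this channel and inside
    the refined fiber-center region."""
    half = dis_size // 2
    r0, r1 = max(i_row - half, 0), min(i_row + half + 1, raw_img.shape[0])
    c0, c1 = max(i_col - half, 0), min(i_col + half + 1, raw_img.shape[1])

    nearest = {}                                   # quadrant -> (distance, value)
    for r in range(r0, r1):
        for c in range(c0, c1):
            if not channel_center_mask[r, c] or (r == i_row and c == i_col):
                continue
            dy, dx = r - i_row, c - i_col
            quad = (1 if (dx > 0 and dy <= 0) else 2 if (dx <= 0 and dy < 0)
                    else 3 if (dx < 0 and dy >= 0) else 4)
            d = np.hypot(dy, dx)
            if quad not in nearest or d < nearest[quad][0]:
                nearest[quad] = (d, float(raw_img[r, c]))

    if not nearest:                                # no candidate point: keep value
        return float(raw_img[i_row, i_col])
    if len(nearest) == 1:                          # single quadrant: copy its value
        return next(iter(nearest.values()))[1]
    if set(nearest) in ({1, 3}, {2, 4}):           # diagonal pair: distance-weighted blend
        (d_a, v_a), (d_b, v_b) = nearest.values()
        return (v_a * d_b + v_b * d_a) / (d_a + d_b)
    # remaining combinations: fall back to the overall nearest point
    return min(nearest.values(), key=lambda t: t[0])[1]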
S6, sharpening the RGGB image NEW_RAW_IMG of the processed RAW domain to obtain an image for eliminating the honeycomb structure, wherein the method specifically comprises the following steps of:
A corresponding region NEW_FG is selected from the NEW_RAW_IMG image according to the foreground region; the R channel pixels of NEW_FG are convolved with an 11 x 11 Gaussian kernel to obtain NEW_FG_CONV; NEW_FG_CONV is subtracted from NEW_FG to obtain the detail and edge information NEW_FG_DETAIL; NEW_FG_DETAIL is amplified by M and added back to NEW_FG to obtain the enhanced result NEW_FG_RES. The default value of M is 4. The pixel values of the G, G and B channels are obtained in the same way, which completes the sharpening process.
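The sharpening of S6 is standard unsharp masking, sketched below for one channel; the 11 x 11 kernel size and the default M = 4 follow the embodiment, while the Gaussian standard deviation sigma = 2.0 and the function name are assumptions, since the patent does not state the kernel's sigma.

import numpy as np
from scipy.ndimage import convolve

def sharpen_channel(new_fg, m=4.0, sigma=2.0):
    """Step S6 for one channel: unsharp masking with an 11x11 Gaussian kernel.
    The 11x11 size and M = 4 follow the embodiment; sigma is an assumption."""
    ax = np.arange(11) - 5
    gauss_1d = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    kernel = np.outer(gauss_1d, gauss_1d)
    kernel /= kernel.sum()                                  # normalised 11x11 kernel

    new_fg = new_fg.astype(np.float64)
    new_fg_conv = convolve(new_fg, kernel, mode="mirror")   # NEW_FG_CONV
    new_fg_detail = new_fg - new_fg_conv                    # NEW_FG_DETAIL
    return new_fg + m * new_fg_detail                       # NEW_FG_RES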
The honeycomb-structure image shown in fig. 2 was processed with the method for eliminating the fiberscope-imaged honeycomb structure provided in this specification; the final result is shown in fig. 3, which shows that the method eliminates the honeycomb structure well and can be applied to an endoscope camera system.
The method starts from the imaging principle: a fiberscope is formed by bundling many optical fibers, and the gaps between the fibers cause the corresponding pixels captured by the CMOS sensor to be dark. The demosaicing step that converts the raw image into an RGB image therefore interpolates from these erroneous dark pixels, so the resulting RGB image is a degraded image obtained by interpolating wrong information. Many interpolation reconstruction methods operate on this RGB image, that is, on an already degraded image in which information has been lost, whereas the present method applies the algorithm to the raw image and therefore restores the image information better.
In many interpolation reconstruction methods, the algorithm used to locate the optical fiber center points is complex to implement. For example, some researchers score candidate fiber centers with a Gaussian distribution, some locate the centers with sub-pixel positioning, and some use information from multiple frames. The present method locates the fiber centers with an adaptive median, which has a small computational cost and works well, and combines it with the fiber-center fine-tuning algorithm so that the pixel values of the fiber center region are more accurate.
In interpolation reconstruction methods, the reconstruction stage sometimes uses Delaunay triangulation interpolation, which involves a large amount of computation and consumes more resources. The interpolation method proposed by the invention is simple and efficient, and the overall effect meets expectations.
And finally, the edge details of the image are restored by using a sharpening mode, and the picture effect is further enhanced.
Those of ordinary skill in the art will appreciate that: the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the invention (including the claims) is limited to these examples; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the invention, the steps may be implemented in any order and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity.
The present invention is intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method of eliminating a fiberscope imaging honeycomb structure, said method comprising the steps of:
s1, recovering an original raw image into a complete image through interpolation;
s2, positioning a foreground region in the complete image;
s3, obtaining an optical fiber center candidate region in the foreground region through a self-adaptive median filtering algorithm;
s4, fine tuning is carried out on the optical fiber center candidate region to obtain an optical fiber center region;
s5, expanding a window around each pixel point of the R, G and B channels of the original raw image, finding, within the expanded window, the points that belong to the same channel and lie in the optical fiber center region, and processing the pixel point according to a predetermined rule combined with bilinear interpolation, so as to obtain the complete R channel, G channel and B channel images and thereby the RGGB image of the processed raw domain;
s6, sharpening the RGGB image of the processed raw domain to obtain an image with the honeycomb structure eliminated.
2. The method of removing a fiberscope imaging honeycomb structure of claim 1, wherein S1 further comprises: and carrying out bilinear interpolation by using the G channel of the original raw image to obtain a complete image.
3. The method of removing a fiberscope imaging honeycomb structure of claim 1, wherein S2 further comprises:
s21, selecting pixel values of four corner areas of the complete image I to obtain a maximum pixel value I_MAX;
s22, adding an OFFSET value I_MAX_OFFSET on the basis of the I_MAX to obtain a foreground threshold value I_TH;
s23, judging whether I_TH is larger than a preset cutoff value I_CUTOFF; if so, setting I_TH to I_CUTOFF, otherwise keeping it unchanged;
and S24, locating the area with the pixel value larger than the foreground threshold I_TH in the complete image I as a foreground area.
4. The method of removing a fiberscope imaging honeycomb structure of claim 1, wherein S3 further comprises:
s31, MEDIAN filtering is used in a foreground region to obtain an image I_MEDIAN;
s32, adding an OFFSET value I_MEDIAN_OFFSET on the basis of the I_MEDIAN to obtain a central threshold value I_M_TH;
and S33, judging the region with the pixel value larger than the CENTER threshold I_M_TH in the complete image I as an optical fiber CENTER candidate region center_P.
5. The method of removing a fiberscope imaging honeycomb structure according to claim 4, wherein in S31, the median filtered window size calculating step comprises:
s311, calculating a variable x according to the following formula:
where r represents the radius of the foreground region and N represents the number of optical fibers in the bundle;
s312, judging whether x is smaller than 3 and, if so, setting it to 3; then rounding x to an integer value, which is used as the median-filter window size MEDIAN_SIZE.
6. The method of removing a fiberscope imaging honeycomb structure of claim 1, wherein S4 further comprises:
s41, respectively processing R, G, B channels of an original raw image, and expanding each pixel point into a fine tuning window, wherein the size of the fine tuning window is equal to the size of a median filtered window;
s42, selecting points in the optical fiber center candidate area in the fine tuning window, and averaging pixel values of the points to obtain values of the fine-tuned pixel points, so that the optical fiber center area containing the fine-tuned pixel information is obtained.
7. The method of removing a fiberscope imaging honeycomb structure according to claim 1, wherein processing the pixel points according to the predetermined rule combined with bilinear interpolation comprises:
constructing a distance MATRIX DIS_MATRIX, wherein the window SIZE DIS_SIZE of the distance MATRIX is equal to the window SIZE of median filtering;
extending a distance window with the SIZE of DIS_SIZE multiplied by DIS_SIZE by taking the pixel point i as the center;
selecting, in the distance window, the points that both belong to the same channel as the pixel point i and lie in the optical fiber center region, dividing the obtained points into four quadrants, and finding the point nearest to i in each quadrant with the help of the distance matrix:
if no nearest point exists, i is left unprocessed and keeps its original pixel value;
if there is only one nearest point, the pixel value of i is set to the value of that point;
if points exist in only two adjacent quadrants (one-two, two-three, three-four or one-four), the value of the nearest of these points is taken as the pixel value of i;
if points exist only in the diagonal quadrants one and three, or only in quadrants two and four, the new pixel value of i is calculated by bilinear interpolation of those two points;
if points exist only in quadrants one, two and three, it is checked whether the sum of the distances of the quadrant-one and quadrant-three points exceeds twice the distance of the quadrant-two point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-one and quadrant-three points, otherwise the pixel value of i is set to the value of the quadrant-two point;
if points exist only in quadrants one, two and four, it is checked whether the sum of the distances of the quadrant-two and quadrant-four points exceeds twice the distance of the quadrant-one point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-two and quadrant-four points, otherwise the pixel value of i is set to the value of the quadrant-one point;
if points exist only in quadrants two, three and four, it is checked whether the sum of the distances of the quadrant-two and quadrant-four points exceeds twice the distance of the quadrant-three point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-two and quadrant-four points, otherwise the pixel value of i is set to the value of the quadrant-three point;
if points exist only in quadrants one, three and four, it is checked whether the sum of the distances of the quadrant-one and quadrant-three points exceeds twice the distance of the quadrant-four point; if so, the new pixel value of i is obtained by bilinear interpolation of the quadrant-one and quadrant-three points, otherwise the pixel value of i is set to the value of the quadrant-four point;
if all four quadrants contain points, the distance of the quadrant-one and quadrant-three pair is compared with that of the quadrant-two and quadrant-four pair, and bilinear interpolation is performed with the pair having the smaller distance; if the two distances differ by no more than 2, the gradients are compared instead and the pair with the smaller gradient is used for bilinear interpolation.
8. The method of removing a fiberscope imaging honeycomb structure according to claim 1, wherein sharpening the RGGB image of the processed raw domain comprises:
selecting a corresponding area NEW_FG from the RGGB image NEW_RAW_IMG of the processed RAW domain according to the foreground area, and processing R, G, G, B channels by the following steps:
performing convolution on the corresponding channel of NEW_FG with an 11 x 11 Gaussian kernel to obtain NEW_FG_CONV, subtracting NEW_FG_CONV from NEW_FG to obtain the detail and edge information NEW_FG_DETAIL, amplifying NEW_FG_DETAIL by a preset factor M, and adding the amplified detail back to NEW_FG to obtain the enhanced result NEW_FG_RES, namely the pixel values of the corresponding channel after sharpening.
CN202410156831.1A 2024-02-04 2024-02-04 Method for eliminating honeycomb structure imaged by fiberscope Active CN117710250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410156831.1A CN117710250B (en) 2024-02-04 2024-02-04 Method for eliminating honeycomb structure imaged by fiberscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410156831.1A CN117710250B (en) 2024-02-04 2024-02-04 Method for eliminating honeycomb structure imaged by fiberscope

Publications (2)

Publication Number Publication Date
CN117710250A (en) 2024-03-15
CN117710250B (en) 2024-04-30

Family

ID=90159265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410156831.1A Active CN117710250B (en) 2024-02-04 2024-02-04 Method for eliminating honeycomb structure imaged by fiberscope

Country Status (1)

Country Link
CN (1) CN117710250B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090092363A1 (en) * 2006-03-14 2009-04-09 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Method and device for generating a structure-free fiberscopic picture
CN107622491A (en) * 2017-10-16 2018-01-23 南京亘瑞医疗科技有限公司 Fibre bundle image analysis method and device
CN111415312A (en) * 2020-04-08 2020-07-14 中国科学院苏州生物医学工程技术研究所 Confocal endoscope image processing method, system and computer equipment
KR20220164282A (en) * 2021-06-04 2022-12-13 한국과학기술연구원 Method for postprocessing fiberscope image processing not using calibration and fiberscope system performing the same
CN115953421A (en) * 2022-12-22 2023-04-11 郑州大学 Harris honeycomb vertex extraction method for detecting regularity of honeycomb structure
CN116452662A (en) * 2023-04-03 2023-07-18 无锡海斯凯尔医学技术有限公司 Method and device for extracting pixel coordinates of optical fiber center and electronic equipment
CN116597016A (en) * 2023-05-09 2023-08-15 中国工程物理研究院总体工程研究所 Optical fiber endoscope image calibration method

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
C. WINTER et al.: "Automatic Adaptive Enhancement for Images Obtained With Fiberscopic Endoscopes", IEEE Transactions on Biomedical Engineering, 30 September 2006 (2006-09-30)
CHRISTIAN MUNZENMAYER et al.: "Texture-based computer-assisted diagnosis for fiberscopic images", 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 13 November 2009 (2009-11-13)
JASON GENG et al.: "Review of 3-D Endoscopic Surface Imaging Techniques", IEEE Sensors Journal, 31 December 2013 (2013-12-31)
OMAR ZENTENO et al.: "Spatial and Spectral Calibration of a Multispectral-Augmented Endoscopic Prototype", SpringerLink, Computer Vision, Imaging and Computer Graphics Theory and Applications, 24 July 2019 (2019-07-24)
ZHANG Huan: "Research on blood flow and blood oxygen imaging technology based on a fiber-optic gastroscope", China Master's Theses Full-text Database, Medicine and Health Sciences, 28 February 2023 (2023-02-28)
CAO Ling: "Design of an image acquisition and noise reduction system for nuclear radiation environments", China Master's Theses Full-text Database, Engineering Science and Technology II, 31 January 2023 (2023-01-31)
WANG Jiafu; YANG Min; YANG Li; ZHANG Yun; YUAN Jing; LIU Qian; HOU Xiaohua; FU Ling: "Confocal endoscopy for cellular imaging", Engineering, 15 September 2015 (2015-09-15)
QIN Han: "Research on fiber-based recording and imaging of neural calcium signals in freely moving mice", China Doctoral Dissertations Full-text Database, Basic Sciences, 31 March 2022 (2022-03-31)

Also Published As

Publication number Publication date
CN117710250B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN111275626B (en) Video deblurring method, device and equipment based on ambiguity
CN109743473A (en) Video image 3 D noise-reduction method, computer installation and computer readable storage medium
WO2017100971A1 (en) Deblurring method and device for out-of-focus blurred image
Wang et al. A graph-based joint bilateral approach for depth enhancement
CN112070657B (en) Image processing method, device, system, equipment and computer storage medium
Jeong et al. Multi-frame example-based super-resolution using locally directional self-similarity
GB2547842A (en) Image processing device and method, image pickup device, program, and recording medium
CN113298761B (en) Image filtering method, device, terminal and computer readable storage medium
EP3438923B1 (en) Image processing apparatus and image processing method
CN108234826B (en) Image processing method and device
CN111031241B (en) Image processing method and device, terminal and computer readable storage medium
Teranishi et al. Improvement of robustness blind image restoration method using failing detection process
Motohashi et al. A study on blind image restoration of blurred images using R-map
Alam et al. Space-variant blur kernel estimation and image deblurring through kernel clustering
CN110689486A (en) Image processing method, device, equipment and computer storage medium
CN110852947B (en) Infrared image super-resolution method based on edge sharpening
CN117710250B (en) Method for eliminating honeycomb structure imaged by fiberscope
CN104504667B (en) image processing method and device
CN112446837B (en) Image filtering method, electronic device and storage medium
CN115631171A (en) Picture definition evaluation method, system and storage medium
CN113938578B (en) Image blurring method, storage medium and terminal equipment
Bareja et al. An improved iterative back projection based single image super resolution approach
CN114372938A (en) Image self-adaptive restoration method based on calibration
He et al. Joint motion deblurring and superresolution from single blurry image
Lal et al. A comparative study on CNN based low-light image enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant