CN117522719B - Bronchoscope image auxiliary optimization system based on machine learning - Google Patents


Info

Publication number
CN117522719B
CN117522719B (application CN202410017385.6A)
Authority
CN
China
Prior art keywords
pixel
pixels
image
window
expansion window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410017385.6A
Other languages
Chinese (zh)
Other versions
CN117522719A
Inventor
宗政
吴菊
张霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zigong First Peoples Hospital
Original Assignee
Zigong First Peoples Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zigong First Peoples Hospital filed Critical Zigong First Peoples Hospital
Priority to CN202410017385.6A priority Critical patent/CN117522719B/en
Publication of CN117522719A publication Critical patent/CN117522719A/en
Application granted granted Critical
Publication of CN117522719B publication Critical patent/CN117522719B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to the technical field of image processing, in particular to a bronchoscope image auxiliary optimization system based on machine learning, which comprises: an image acquisition module for acquiring a bronchus image; a reference region acquisition module for acquiring a reference region of each pixel in the bronchial image according to the bronchial image; a correction confusion degree acquisition module for obtaining the initial confusion degree of each pixel according to the reference area of each pixel and obtaining the correction confusion degree of each pixel according to the initial confusion degree; and an image enhancement module for obtaining corrected clustering distances of every two pixels according to the correction confusion degree, obtaining a plurality of independent texture areas according to the corrected clustering distances, and obtaining an enhanced bronchial image from each independent texture region. The system thereby enhances focus areas in the bronchial image by analyzing how well each area in the bronchial image accords with focus characteristics.

Description

Bronchoscope image auxiliary optimization system based on machine learning
Technical Field
The invention relates to the technical field of image processing, in particular to a bronchoscope image auxiliary optimization system based on machine learning.
Background
Bronchoscopes can directly observe the internal conditions of bronchi and part of lungs, and provide first-hand visual evidence for diseases or abnormalities. Because the bronchus is the structure inside the human body, natural light can not penetrate into the human body, a light supplementing lamp is arranged in the bronchoscope to light the local area of the bronchus, and then a built-in camera in the bronchoscope collects the bronchus image.
Since the bronchial image is acquired in the light-supplementing environment, there will be a highlight region in the bronchial image, and the gray scale value of the highlight region is generally larger. Meanwhile, the inflammation focus in the bronchus is generally white, so that the gray value of the inflammation focus area is also larger. Light in the bronchial image can interfere with the extraction of the focal region. Thus, in order to eliminate the interference of light rays, enhancement processing is required for the bronchial image.
Conventional enhancement methods compress smaller gray values and stretch larger gray values more strongly. Thus, a conventional enhancement method cannot distinguish the gray values of the focal region from those of the highlight region generated by the light. It is therefore desirable to devise an enhancement method that distinguishes the focal area from other areas by gray value so as to facilitate focal-area extraction.
Disclosure of Invention
The invention provides a bronchoscope image auxiliary optimization system based on machine learning, which aims to solve the existing problem: how to distinguish the gray values of a lesion area from those of other areas through image enhancement.
The bronchoscope image auxiliary optimization system based on machine learning adopts the following technical scheme:
one embodiment of the invention provides a machine learning-based bronchoscope image auxiliary optimization system, which comprises the following modules:
the image acquisition module is used for acquiring a bronchus image;
the reference region acquisition module is used for acquiring a reference region of each pixel in the bronchial image according to the gray value difference of the pixels in the bronchial image;
the correction confusion degree acquisition module is used for obtaining the initial confusion degree of each pixel according to the area of the reference area of each pixel and the complexity of the edge, and obtaining the correction confusion degree of each pixel according to the initial confusion degree of each pixel and the gradient of each pixel in the reference area;
the image enhancement module is used for obtaining corrected clustering distances of every two pixels according to the corrected confusion degree of each pixel, and carrying out cluster analysis on the pixels in the bronchial image according to the corrected clustering distances of every two pixels to obtain a plurality of independent texture areas; and carrying out enhancement processing on the bronchus image according to the correction confusion degree of each pixel in each independent texture region to obtain an enhanced bronchus image.
Preferably, the obtaining the reference area of each pixel in the bronchial image according to the gray value difference of the pixel in the bronchial image includes the following specific methods:
presetting a window side length L; taking each pixel as a center, acquiring an L×L window, and recording the window as the reference window of that pixel; acquiring the variance of the gray values of all pixels in the reference window of each pixel, marking the variance as the gray variance of the reference window of that pixel, and marking the mean value of the gray variances of all pixels in the bronchial image as the reference variance;
for any one pixel, marking a reference window of each pixel as a first expansion window, and marking the difference value between the gray level variance of the first expansion window and the reference variance as the deviation of the first expansion window; obtaining a second expansion window according to the deviation of the first expansion window, obtaining the gray variance of the second expansion window, recording the difference value between the gray variance of the second expansion window and the reference variance as the deviation of the second expansion window, and obtaining a third expansion window according to the deviation of the second expansion window; and the same is repeated until the number of pixels in the expansion window is greater than or equal to a preset maximum critical value B, less than or equal to a preset minimum critical value A or the deviation of the expansion window is equal to 0, so as to obtain a plurality of expansion windows;
and selecting the expansion window with the smallest deviation from all expansion windows of each pixel point as a reference area of each pixel.
Preferably, the method for obtaining the second expansion window according to the deviation of the first expansion window includes the following specific steps:
when the deviation of the first expansion window is larger than 0, acquiring the pixel of the outermost circle in the first expansion window as the peripheral pixel of the first expansion window, and removing one pixel from the peripheral pixel of the first expansion window to obtain a second expansion window;
when the deviation of the first expansion window is smaller than 0, peripheral pixels of the first expansion window are obtained, pixels which are adjacent to the peripheral pixels of the first expansion window and do not belong to the first expansion window are obtained and marked as outer adjacent pixels of the first expansion window, and optionally, the outer adjacent pixels of one first expansion window are added on the first expansion window to obtain a second expansion window.
Preferably, the obtaining the initial confusion degree of each pixel according to the area of the reference area and the complexity of the edge of each pixel includes the following specific methods:
acquiring edge complexity of a reference area of each pixel;
the calculation method for obtaining the initial confusion degree of each pixel according to the edge complexity and the area of the reference area of each pixel is:

H_i = norm(-S_i) × σ_i

wherein S_i represents the area of the reference area of the ith pixel, norm() represents a linear normalization process, σ_i represents the edge complexity of the reference area of the ith pixel, and H_i represents the initial confusion degree of the ith pixel.
Preferably, the acquiring the edge complexity of the reference area of each pixel includes the following specific methods:
acquiring peripheral pixels of a reference area of each pixel, and carrying out chain code analysis on all peripheral pixels of the reference area of each pixel to obtain a chain code sequence of the reference area of each pixel; the variance of all data in the chain code sequence of the reference region of each pixel is noted as the edge complexity of the reference region of each pixel.
Preferably, the obtaining the corrected clutter level of each pixel according to the initial clutter level of each pixel and the gradient of each pixel in the reference area includes the following specific methods:
acquiring a gradient decentration angle of a reference pixel of each pixel according to the gradient of each pixel in the reference area of each pixel;
the calculation method for obtaining the correction confusion degree of each pixel according to the gradient decentration angles and the initial confusion degree is:

H'_i = H_i × (1/N_i) × Σ_{q=1}^{N_i} e^{-d_{i,q}} × θ_{i,q}

wherein d_{i,q} represents the Euclidean distance between the ith pixel and the qth pixel of its reference area, e^{-d_{i,q}} represents the corresponding distance weight, θ_{i,q} represents the gradient decentration angle of the qth reference pixel of the ith pixel, N_i represents the number of pixels in the reference area of the ith pixel, H_i represents the initial confusion degree of the ith pixel, and H'_i represents the correction confusion degree of the ith pixel.
Preferably, the method for obtaining the gradient decentration angle of the reference pixel of each pixel according to the gradient of each pixel in the reference area of each pixel includes the following specific steps:
the pixels in the reference area of each pixel are marked as reference pixels of each pixel, and any one reference pixel of each pixel is marked as a target reference pixel of each pixel; acquiring a vector formed by each pixel and a target reference pixel, and recording an included angle between the vector and the gradient direction of the target reference pixel as a gradient eccentric angle between each pixel and the target reference pixel;
the gradient decentration angle of the reference pixel of each pixel is acquired.
Preferably, the corrected clustering distance of every two pixels is obtained from the correction confusion degree of each pixel as follows:

D'_{i,j} = (1 + |H'_i - H'_j|) × D_{i,j}

wherein H'_i represents the correction confusion degree of the ith pixel, H'_j represents the correction confusion degree of the jth pixel, D_{i,j} represents the Euclidean distance between the ith pixel and the jth pixel, and D'_{i,j} represents the corrected clustering distance between the ith pixel and the jth pixel.
Preferably, the clustering analysis is performed on the pixels in the bronchial image according to the corrected clustering distance of every two pixels to obtain a plurality of independent texture regions, and the specific method includes:
setting a clustering parameter of pixels, and carrying out clustering processing on the pixels by using an ISODATA algorithm based on the clustering parameter and a corrected clustering distance among the pixels to obtain a plurality of independent texture areas.
Preferably, the enhancing the bronchus image according to the correction confusion degree of each pixel in each independent texture area to obtain an enhanced bronchus image comprises the following specific steps:
acquiring the average value of the correction confusion degree of all pixels of each independent texture region, and marking the average value as the correction confusion degree of each independent texture region; and multiplying the gray value of each pixel of each independent texture region in the bronchial image by the correction confusion degree of the independent texture region to realize image enhancement and obtain the enhanced bronchial image.
The technical scheme of the invention has the beneficial effects that:
the method comprises the steps of obtaining a bronchial image, obtaining a reference area of each pixel according to gray value difference, obtaining initial confusion degree of each pixel according to edge complexity of the reference area of each pixel and area of the reference area because gray value difference of a focus area in the bronchial image is large and the edges of the focus area are irregular, describing distinguishing features of focuses according to the initial confusion degree, and analyzing the condition that gray value change of the reference area of each pixel accords with the focus area to obtain corrected confusion degree of each pixel, clustering and enhancing pixels in the bronchial image according to the corrected confusion degree of each pixel to obtain enhanced bronchial image.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a machine learning based bronchoscope image aided optimization system of the present invention;
fig. 2 is a view of a bronchus image including an inflammatory lesion provided by the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the bronchoscope image auxiliary optimization system based on machine learning according to the invention, which is provided by combining the attached drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the bronchoscope image auxiliary optimization system based on machine learning provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a block diagram of a bronchoscope image aided optimization system based on machine learning according to an embodiment of the present invention is shown, the system includes the following modules:
an image acquisition module 101 for acquiring bronchial images.
In order to realize the bronchoscope image auxiliary optimization system based on machine learning, which is provided by the embodiment, firstly, a bronchoscope image needs to be acquired.
The specific process for collecting the bronchus image is as follows: a bronchus image is collected by using a bronchoscope; fig. 2 shows such a bronchus image, in which the black rectangular frame marks an inflammation infiltration type focus and the black circle marks highlight points generated by light. The bronchus image is subjected to graying treatment to obtain a gray level image of the bronchus image. For convenience of description, the grayscale image of the bronchial image will still be referred to as the bronchial image hereinafter.
A reference region acquisition module 102, configured to obtain a reference region of each pixel according to the bronchial image.
It should be noted that, because there is a gray scale difference between the focal region and the non-focal region in the bronchial image, the region division may be performed according to the gray scale value of the pixel.
Specifically, a window side length L is preset; taking each pixel as a center, an L×L window is acquired and recorded as the reference window of that pixel. The variance of the gray values of all pixels in the reference window of each pixel is obtained and recorded as the gray variance of the reference window of that pixel, and the mean value of the gray variances of all pixels in the bronchial image is recorded as the reference variance.
In this embodiment, the description is given taking L as 7 as an example, and other values may be taken in other embodiments, and the embodiment is not particularly limited.
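The per-pixel window statistics above can be sketched as follows; the function names and the border-clipping behaviour are illustrative assumptions rather than part of the patent text (the embodiment later notes only that edge pixels use as large a window as fits).

```python
import numpy as np

def gray_variances(img, L=7):
    """Variance of gray values in the L x L reference window of every pixel.

    `img` is a 2-D uint8 gray image; windows are clipped at the image
    border.  L = 7 follows the embodiment's example value.
    """
    h, w = img.shape
    r = L // 2
    var = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            var[y, x] = win.astype(np.float64).var()
    return var

def reference_variance(img, L=7):
    """Mean of all per-pixel window gray variances (the 'reference variance')."""
    return gray_variances(img, L).mean()
```

A uniform image gives zero gray variance everywhere and therefore a zero reference variance.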
It should be noted that the above process only divides the image into fixed, artificial windows; at this stage dissimilar pixels have not been separated and similar pixels have not been grouped together, so adjustment based on the windows is required.
Further, for any one pixel, the reference window of each pixel is denoted as a first extended window, and the difference between the gray-scale variance and the reference variance of the first extended window is denoted as the deviation of the first extended window. Obtaining a second expansion window according to the deviation of the first expansion window, obtaining the gray variance of the second expansion window, recording the difference value between the gray variance of the second expansion window and the reference variance as the deviation of the second expansion window, and obtaining a third expansion window according to the deviation of the second expansion window. And the same is repeated until the number of pixels in the expansion window is greater than or equal to a preset maximum critical value B, less than or equal to a preset minimum critical value A or the deviation of the expansion window is equal to 0, so as to obtain a plurality of expansion windows.
In this embodiment, A is taken as 4 and B as 196 for description; other embodiments may take other values, which is not particularly limited.
Further, the method for obtaining the second expansion window according to the deviation of the first expansion window comprises the following steps:
when the deviation of the first expansion window is larger than 0, acquiring the pixel of the outermost circle in the first expansion window as the peripheral pixel of the first expansion window, and removing one pixel from the peripheral pixels of the first expansion window to obtain a second expansion window.
When the deviation of the first expansion window is smaller than 0, peripheral pixels of the first expansion window are acquired, and pixels which are adjacent to the peripheral pixels of the first expansion window and do not belong to the first expansion window are acquired and are marked as outer adjacent pixels of the first expansion window. Optionally, an outer adjacent pixel of the first expansion window is added on the first expansion window, so as to obtain a second expansion window.
Further, an expansion window with the smallest deviation is selected from all expansion windows of each pixel point to be used as a reference area of each pixel.
It should be noted that, for the pixels at the edge of the bronchial image, when the size of the window of the pixels does not meet the requirement, only the window as large as possible is acquired within the size range.
Thus, the reference area of each pixel is obtained, the gray value of the reference area of each pixel is similar, and the reference area of each pixel generally describes an object with the same attribute, wherein the object with the same attribute can be an independent focus area, an independent highlight area or an independent non-focus area and a non-highlight area.
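The expansion-window search described above can be sketched as follows. The patent does not fix which single peripheral or outer-adjacent pixel is removed or added at each step, so this sketch picks an arbitrary one; all names are illustrative.

```python
import numpy as np

def reference_region(img, y, x, ref_var, L=7, A=4, B=196):
    """Search for the reference region of pixel (y, x).

    Grows the window by one outer-adjacent pixel when its gray variance is
    below the reference variance and shrinks it by one peripheral pixel
    when above, stopping at the pixel-count limits A/B or at zero
    deviation; the candidate window with the smallest |deviation| wins.
    """
    h, w = img.shape
    r = L // 2
    win = {(i, j) for i in range(max(0, y - r), min(h, y + r + 1))
                  for j in range(max(0, x - r), min(w, x + r + 1))}

    def deviation(cells):
        vals = np.array([img[c] for c in cells], dtype=np.float64)
        return vals.var() - ref_var

    candidates = [(abs(deviation(win)), frozenset(win))]
    for _ in range(B):                    # bounded search
        if not (A < len(win) < B):
            break
        dev = deviation(win)
        if dev == 0:
            break
        if dev > 0:                       # too varied: drop a peripheral pixel
            peripheral = [(i, j) for (i, j) in win
                          if any((i + di, j + dj) not in win
                                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
            win.remove(peripheral[0])
        else:                             # too uniform: add an outer-adjacent pixel
            outer = {(i + di, j + dj) for (i, j) in win
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < h and 0 <= j + dj < w} - win
            if not outer:
                break
            win.add(next(iter(outer)))
        candidates.append((abs(deviation(win)), frozenset(win)))
    return min(candidates, key=lambda t: t[0])[1]
```

On a uniform image the initial window already has zero deviation, so it is returned unchanged.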
The corrected chaotic degree obtaining module 103 is configured to obtain an initial chaotic degree of each pixel according to the reference area of each pixel, and obtain a corrected chaotic degree of each pixel according to the initial chaotic degree of each pixel.
Specifically, peripheral pixels of a reference area of each pixel are obtained, and chain code analysis is performed on all peripheral pixels of the reference area of each pixel to obtain a chain code sequence of the reference area of each pixel. The variance of all data in the chain code sequence of the reference region of each pixel is noted as the edge complexity of the reference region of each pixel.
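The chain-code edge complexity can be sketched as follows, assuming the standard 8-direction Freeman chain code along an ordered boundary; the patent does not fix the chain-code convention, so this is one reasonable choice.

```python
import numpy as np

def edge_complexity(contour):
    """Edge complexity of a reference region: variance of the Freeman
    chain code along its peripheral pixels.

    `contour` is an ordered list of (y, x) boundary pixels, traversed as
    a closed loop.
    """
    # Map a step between consecutive boundary pixels to a Freeman code 0..7.
    steps = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
             (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}
    codes = []
    for (y0, x0), (y1, x1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(steps[(y1 - y0, x1 - x0)])
    return float(np.var(codes))
```

A small square boundary yields codes [0, 6, 4, 2], whose variance is 5.0; a more irregular boundary mixes diagonal and axial codes and raises the variance.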
The initial confusion degree of each pixel is:

H_i = norm(-S_i) × σ_i

wherein S_i represents the area of the reference area of the ith pixel; the larger its value, the more pixels were needed to reach the reference variance, which further indicates that the gray scale difference around the ith pixel is smaller, and thus the smaller the initial confusion degree of the ith pixel. norm() represents a linear normalization process. σ_i represents the edge complexity of the reference area of the ith pixel; the larger its value, the more variable the direction of the edge of the ith reference region and the greater the complexity of the outer edge of the texture in which the ith pixel is located, and thus the greater the initial confusion degree of the ith pixel. H_i represents the initial confusion degree of the ith pixel.
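A minimal sketch of this formula over all pixels, with illustrative names; min-max scaling is assumed for the linear normalization.

```python
import numpy as np

def initial_confusion(areas, complexities):
    """Initial confusion degree H_i = norm(-S_i) * sigma_i for all pixels.

    A larger reference-region area S_i means a smoother neighbourhood, so
    -S_i is linearly normalised to [0, 1] (min-max over the image) and
    weighted by the edge complexity sigma_i.
    """
    areas = np.asarray(areas, dtype=np.float64)
    complexities = np.asarray(complexities, dtype=np.float64)
    neg = -areas
    span = neg.max() - neg.min()
    norm = (neg - neg.min()) / span if span > 0 else np.zeros_like(neg)
    return norm * complexities
```

With equal edge complexities the pixel with the smallest reference area gets the highest initial confusion degree.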
It should be noted that, when calculating the initial clutter, the gray level difference of the surrounding pixels of each pixel and the complexity of the outer edge of the texture where each pixel is located are mainly considered. However, not only the lesion area but also the highlight area in the bronchial image has such a characteristic, and thus the highlight area and the lesion area cannot be distinguished by the initial degree of confusion.
To further distinguish between the highlighted and focal areas, the distinguishing features of the two areas need to be further analyzed. The highlight region generally has a phenomenon that the gray value decreases from the highlight center to the periphery, but the focus region does not have the phenomenon, so that the two regions can be distinguished based on the feature.
Further, pixels within the reference region of each pixel are noted as reference pixels for each pixel, and any one of the reference pixels for each pixel is noted as a target reference pixel for each pixel. And acquiring a vector formed by each pixel and the target reference pixel, and recording an included angle between the vector and the gradient direction of the target reference pixel as a gradient eccentric angle between each pixel and the target reference pixel. The gradient decentration angle of the reference pixel of each pixel is acquired.
The method for calculating the correction confusion degree of each pixel is:

H'_i = H_i × (1/N_i) × Σ_{q=1}^{N_i} e^{-d_{i,q}} × θ_{i,q}

wherein d_{i,q} represents the Euclidean distance between the ith pixel and the qth pixel of its reference area. θ_{i,q} represents the gradient decentration angle of the qth reference pixel of the ith pixel; the larger its value, the less centripetal the gradient direction of that reference pixel. Since the gray value of a highlight region tends to decrease from the highlight center to the periphery, the gradient direction in a highlight region is generally directed from the highlight center to the periphery; when the reference region of the ith pixel is a highlight region, the gradient decentration angles of its reference pixels should therefore be small, so a large angle indicates that the ith pixel is less likely to belong to a highlight region and more likely to belong to a focus region, and the corresponding correction confusion degree is larger. N_i represents the number of pixels in the reference area of the ith pixel. e^{-d_{i,q}} represents the distance weight; the larger its value, the smaller the distance between the ith pixel and the qth pixel, and thus the greater the influence of the qth pixel on the ith pixel. H_i represents the initial confusion degree of the ith pixel, and H'_i represents the correction confusion degree of the ith pixel.
Thus, the degree of confusion of correction for each pixel is obtained. The degree of confusion is an index obtained by describing the condition that each pixel meets the characteristics of the focus area, and the focus area can be distinguished from the non-focus area by the index.
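The gradient decentration angle and correction confusion degree can be sketched for a single pixel as follows. The exponential distance weight is an assumption consistent with the description (closer reference pixels weigh more), and all names are illustrative.

```python
import numpy as np

def corrected_confusion(center, ref_pixels, gradients, h0):
    """Correction confusion degree of one pixel: distance-weighted mean of
    the gradient decentration angles, scaled by the initial confusion h0.

    center      -- (y, x) of the pixel
    ref_pixels  -- list of (y, x) reference pixels
    gradients   -- matching list of gradient vectors (gy, gx)
    h0          -- initial confusion degree H_i of the pixel
    """
    cy, cx = center
    total = 0.0
    for (qy, qx), (gy, gx) in zip(ref_pixels, gradients):
        v = np.array([qy - cy, qx - cx], dtype=np.float64)  # center -> q vector
        g = np.array([gy, gx], dtype=np.float64)
        d = np.linalg.norm(v)
        if d == 0 or np.linalg.norm(g) == 0:
            continue                 # angle undefined at the centre itself
        cosang = np.clip(v @ g / (d * np.linalg.norm(g)), -1.0, 1.0)
        theta = np.arccos(cosang)    # gradient decentration angle
        total += np.exp(-d) * theta  # distance weight: closer pixels count more
    return h0 * total / len(ref_pixels)
```

A reference pixel whose gradient points straight away from the center (decentration angle 0, the highlight pattern) contributes nothing, while an opposing gradient contributes the maximum angle π.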
The image enhancement module 104 is configured to perform clustering processing according to the correction confusion degree of each pixel to obtain a plurality of independent texture regions, and perform enhancement processing on the bronchial image according to each independent texture region and the correction confusion degree to obtain an enhanced bronchial image.
It should be noted that, in order to separate the focal region from the non-focal region in the bronchial image, the bronchial image needs to be clustered, and the conventional method of the ISODATA algorithm sets the clustering distance by using the gray values between pixels and the distance between pixels, and the clustering method does not consider the characteristics of the focal region, so that the clustering method cannot separate the focal region from the non-focal region. Therefore, in order to realize better separation of the focus area and the non-focus area, the clustering distance in the algorithm needs to be corrected.
Specifically, the calculation method of the corrected clustering distance between pixels is as follows:
D'_{i,j} = (1 + |H'_i - H'_j|) × D_{i,j}

wherein H'_i represents the correction confusion degree of the ith pixel, H'_j represents the correction confusion degree of the jth pixel, D_{i,j} represents the Euclidean distance between the ith pixel and the jth pixel, and D'_{i,j} represents the corrected clustering distance between the ith pixel and the jth pixel.
Further, clustering parameters are set: the number of cluster centers is K = ⌈M/L²⌉, wherein M represents the number of pixels in the bronchial image, L represents the preset window side length, and K represents the number of cluster centers; the minimum number of elements in a category is 5, the internal difference of a category is 10, the category merging threshold is 8, the maximum number of pairs of cluster centers that can be merged in one iteration operation is preset, and the maximum number of iterations is 30.
It should be noted that, the clustering parameter is a parameter that needs to be set manually in the ISODATA algorithm, and because the ISODATA algorithm is an existing algorithm, the setting process is more conventional, and no detailed description is given here.
Further, based on the clustering parameters and the corrected clustering distance between pixels, the pixels are clustered by using an ISODATA algorithm to obtain a plurality of independent texture areas.
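One plausible form of the corrected clustering distance, Euclidean distance scaled up by the difference of correction confusion degrees so that pixels of similar lesion-likeness stay close, can be sketched as follows; the multiplicative form is an assumption consistent with the description, and the names are illustrative.

```python
import numpy as np

def corrected_cluster_distance(pi, pj, hi, hj):
    """Corrected clustering distance between two pixels.

    pi, pj -- (y, x) coordinates of the two pixels
    hi, hj -- their correction confusion degrees
    """
    d = float(np.linalg.norm(np.asarray(pi, float) - np.asarray(pj, float)))
    return (1.0 + abs(hi - hj)) * d
```

The resulting pairwise distances would then replace the plain Euclidean distance inside an ISODATA implementation configured with the parameters above.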
The segmentation of the focus area and the non-focus area is completed by using a clustering algorithm, and then image enhancement processing is needed based on the segmentation result.
Further, a mean value of the correction confusion degree of all pixels of each independent texture area is obtained and is recorded as the correction confusion degree of each independent texture area. And normalizing the correction confusion degree of each independent texture region by using a maximum value and minimum value normalization method to obtain the normalized correction confusion degree of each independent texture region. For ease of description, the normalized degree of correction confusion for each independent texture region is subsequently noted as the degree of correction confusion for each independent texture region. And multiplying the gray value of each pixel of each independent texture region in the bronchial image by the correction confusion degree of the independent texture region to realize image enhancement and obtain the enhanced bronchial image.
The gray value of each pixel of an independent texture region in the bronchial image is multiplied by the correction confusion degree of that independent texture region, and the obtained gray value is rounded down to an integer so that it remains a valid gray level.
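The region-wise enhancement step can be sketched as follows; `enhance` and its arguments are illustrative names, with the region labels assumed to come from the clustering step.

```python
import numpy as np

def enhance(img, labels, corrected):
    """Multiply each pixel's gray value by the min-max-normalised mean
    correction confusion degree of its independent texture region, then
    round down.

    img       -- 2-D uint8 gray image
    labels    -- 2-D int array, region id per pixel
    corrected -- 2-D float array, correction confusion degree per pixel
    """
    out = np.zeros_like(img, dtype=np.float64)
    ids = np.unique(labels)
    means = np.array([corrected[labels == k].mean() for k in ids])
    span = means.max() - means.min()
    norm = (means - means.min()) / span if span > 0 else np.zeros_like(means)
    for k, m in zip(ids, norm):
        out[labels == k] = np.floor(img[labels == k] * m)
    return out.astype(np.uint8)
```

Regions with low correction confusion (e.g. highlight regions) are suppressed toward zero, while the most lesion-like region keeps its gray values.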
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. Machine learning-based bronchoscope image auxiliary optimization system is characterized by comprising the following modules:
the image acquisition module is used for acquiring a bronchus image;
the reference region acquisition module is used for acquiring a reference region of each pixel in the bronchial image according to the gray value difference of the pixels in the bronchial image;
the correction confusion degree acquisition module is used for obtaining the initial confusion degree of each pixel according to the area of the reference area of each pixel and the complexity of the edge, and obtaining the correction confusion degree of each pixel according to the initial confusion degree of each pixel and the gradient of each pixel in the reference area;
the method for obtaining the initial confusion degree of each pixel according to the area of the reference area of each pixel and the complexity of the edge comprises the following specific steps: acquiring edge complexity of a reference area of each pixel;
the calculation method for obtaining the initial confusion degree of each pixel according to the edge complexity and the area of the reference area of each pixel comprises the following steps:
H i =norm(-S i )×σ i
wherein S i represents the area of the reference region of the ith pixel, norm() represents the linear normalization process, σ i represents the edge complexity of the reference region of the ith pixel, and H i represents the initial confusion degree of the ith pixel;
the method for acquiring the edge complexity of the reference area of each pixel comprises the following specific steps: acquiring peripheral pixels of a reference area of each pixel, and carrying out chain code analysis on all peripheral pixels of the reference area of each pixel to obtain a chain code sequence of the reference area of each pixel; the variance of all data in the chain code sequence of the reference region of each pixel is recorded as the edge complexity of the reference region of each pixel;
the image enhancement module is used for obtaining the corrected clustering distance of every two pixels according to the corrected confusion degree of each pixel, and performing cluster analysis on the pixels in the bronchial image according to the corrected clustering distances of every two pixels to obtain a number of independent texture regions; the bronchial image is then enhanced according to the corrected confusion degree of each pixel in each independent texture region to obtain the enhanced bronchial image.
2. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 1, wherein obtaining the reference region of each pixel in the bronchial image according to the gray-value differences of the pixels in the bronchial image comprises the following specific method:
presetting a window side length L; taking each pixel as the center, acquiring an L×L window, recorded as the reference window of that pixel; acquiring the variance of the gray values of all pixels in the reference window of each pixel, recorded as the gray variance of that reference window, and recording the mean of the gray variances over all pixels in the bronchial image as the reference variance;
for any one pixel, recording its reference window as the first expansion window, and recording the difference between the gray variance of the first expansion window and the reference variance as the deviation of the first expansion window; obtaining a second expansion window according to the deviation of the first expansion window, acquiring the gray variance of the second expansion window, recording the difference between that gray variance and the reference variance as the deviation of the second expansion window, and obtaining a third expansion window according to the deviation of the second expansion window; this process is repeated until the number of pixels in the expansion window is greater than or equal to a preset maximum critical value B, less than or equal to a preset minimum critical value A, or the deviation of the expansion window equals 0, yielding a number of expansion windows;
among all expansion windows of each pixel, the expansion window with the smallest deviation is selected as the reference region of that pixel.
3. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 2, wherein obtaining the second expansion window according to the deviation of the first expansion window comprises the following specific method:
when the deviation of the first expansion window is greater than 0, acquiring the outermost ring of pixels in the first expansion window as the peripheral pixels of the first expansion window, and removing one of these peripheral pixels to obtain the second expansion window;
when the deviation of the first expansion window is less than 0, acquiring the peripheral pixels of the first expansion window, acquiring the pixels that are adjacent to those peripheral pixels but do not belong to the first expansion window, recorded as the outer adjacent pixels of the first expansion window, and adding any one outer adjacent pixel to the first expansion window to obtain the second expansion window.
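Claims 2 and 3 together amount to one grow-or-shrink step per iteration. A minimal sketch with the window kept as a set of (x, y) pixels; which peripheral or outer adjacent pixel is taken is left open by the claims, so the first one found is used here, and all names are illustrative:

```python
def grow_or_shrink(window, deviation, shape):
    # window: set of (x, y) pixels; shape: (height, width) of the image.
    # Positive deviation -> remove one peripheral pixel;
    # negative deviation -> add one in-bounds outer adjacent pixel.
    h, w = shape

    def neighbours(p):
        x, y = p
        return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)}

    if deviation > 0:
        # peripheral pixel: a window pixel with at least one neighbour outside
        for p in window:
            if neighbours(p) - window:
                return window - {p}
        return window
    # outer adjacent pixel: outside the window but adjacent to it, in bounds
    for p in window:
        for q in neighbours(p):
            x, y = q
            if q not in window and 0 <= x < w and 0 <= y < h:
                return window | {q}
    return window
```

A driver loop would recompute the deviation after each step, stop at the size limits A and B or at zero deviation, and keep the window with the smallest deviation as the reference region.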
4. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 1, wherein obtaining the corrected confusion degree of each pixel according to its initial confusion degree and the gradients of the pixels in its reference region comprises the following specific method:
acquiring the gradient decentration angle of each reference pixel of each pixel according to the gradients of the pixels in the reference region of that pixel;
the corrected confusion degree of each pixel is then calculated from the gradient decentration angles of its reference pixels and its initial confusion degree;
wherein d_{i,q} represents the Euclidean distance between the i-th pixel and the q-th pixel of its reference region, θ_{i,q} represents the gradient decentration angle of the q-th reference pixel of the i-th pixel, Q_i represents the number of pixels in the reference region of the i-th pixel, H_i represents the initial confusion degree of the i-th pixel, and H'_i represents the corrected confusion degree of the i-th pixel.
5. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 4, wherein acquiring the gradient decentration angle of each reference pixel of each pixel according to the gradients of the pixels in its reference region comprises the following specific method:
recording the pixels in the reference region of each pixel as the reference pixels of that pixel, and any one reference pixel as a target reference pixel; acquiring the vector from each pixel to the target reference pixel, and recording the angle between this vector and the gradient direction of the target reference pixel as the gradient decentration angle between that pixel and the target reference pixel;
in this way, the gradient decentration angle of every reference pixel of each pixel is acquired.
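A minimal sketch of the angle in claim 5, assuming the gradient of the target reference pixel is available as a 2-D vector (names are illustrative):

```python
import numpy as np

def gradient_decentration_angle(center, ref_pixel, grad):
    # Vector from the center pixel to the target reference pixel, versus
    # that reference pixel's gradient direction; returns radians in [0, pi].
    v = np.asarray(ref_pixel, dtype=float) - np.asarray(center, dtype=float)
    g = np.asarray(grad, dtype=float)
    cos_a = np.dot(v, g) / (np.linalg.norm(v) * np.linalg.norm(g))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```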
6. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 1, wherein obtaining the corrected clustering distance of every two pixels according to the corrected confusion degree of each pixel comprises the following specific steps:
wherein H'_i represents the corrected confusion degree of the i-th pixel, H'_j represents the corrected confusion degree of the j-th pixel, d_{i,j} represents the Euclidean distance between the i-th pixel and the j-th pixel, and D'_{i,j} represents the corrected clustering distance between the i-th pixel and the j-th pixel.
7. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 1, wherein performing cluster analysis on the pixels in the bronchial image according to the corrected clustering distance of every two pixels to obtain a number of independent texture regions comprises the following specific method:
setting clustering parameters for the pixels, and clustering the pixels with the ISODATA algorithm based on the clustering parameters and the corrected clustering distances between pixels to obtain a number of independent texture regions.
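ISODATA itself involves split/merge bookkeeping; as a deliberately simplified stand-in that still shows how pairwise corrected clustering distances yield independent texture regions, the sketch below links any two pixels whose distance falls under a threshold and takes connected components (single-linkage flavour). This substitutes for, and does not reproduce, the claimed ISODATA step; all names and the threshold are illustrative:

```python
import numpy as np

def threshold_cluster(dist, thr):
    # dist: symmetric (n, n) matrix of corrected clustering distances.
    # Union-find over all pairs closer than thr; each connected component
    # becomes one independent texture region, labelled 0..k-1.
    n = dist.shape[0]
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] < thr:
                parent[find(i)] = find(j)
    labels = [find(i) for i in range(n)]
    remap = {root: k for k, root in enumerate(dict.fromkeys(labels))}
    return [remap[l] for l in labels]
```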
8. The machine-learning-based bronchoscope image auxiliary optimization system according to claim 1, wherein enhancing the bronchial image according to the corrected confusion degree of each pixel in each independent texture region to obtain the enhanced bronchial image comprises the following specific steps:
acquiring the mean of the corrected confusion degrees of all pixels in each independent texture region, recorded as the corrected confusion degree of that region; the gray value of each pixel of each independent texture region in the bronchial image is multiplied by the corrected confusion degree of that region to achieve image enhancement and obtain the enhanced bronchial image.
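A minimal sketch of the claim-8 enhancement; clipping the scaled gray values back to the 8-bit range is an added assumption, and names are illustrative:

```python
import numpy as np

def enhance(gray, labels, corrected_confusion):
    # Scale each pixel's gray value by the mean corrected confusion degree
    # of its independent texture region. labels assigns a region id to
    # every pixel; the result is clipped to the valid 8-bit range.
    out = gray.astype(float)
    for region in np.unique(labels):
        mask = labels == region
        out[mask] *= corrected_confusion[mask].mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

Regions with high corrected confusion (texture consistent with lesion characteristics) are brightened relative to the rest of the image.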
CN202410017385.6A 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning Active CN117522719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410017385.6A CN117522719B (en) 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning

Publications (2)

Publication Number Publication Date
CN117522719A CN117522719A (en) 2024-02-06
CN117522719B true CN117522719B (en) 2024-03-22

Family

ID=89764941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410017385.6A Active CN117522719B (en) 2024-01-05 2024-01-05 Bronchoscope image auxiliary optimization system based on machine learning

Country Status (1)

Country Link
CN (1) CN117522719B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002015113A2 (en) * 2000-08-14 2002-02-21 University Of Maryland, Baltimore County Mammography screening to detect and classify microcalcifications
CN114648530A (en) * 2022-05-20 2022-06-21 潍坊医学院 CT image processing method
CN114994102A (en) * 2022-08-04 2022-09-02 武汉钰品研生物科技有限公司 X-ray-based food foreign matter traceless rapid detection method
CN115330820A (en) * 2022-10-14 2022-11-11 江苏启灏医疗科技有限公司 Tooth image segmentation method based on X-ray film
CN116109644A (en) * 2023-04-14 2023-05-12 东莞市佳超五金科技有限公司 Surface defect detection method for copper-aluminum transfer bar
CN116269467A (en) * 2023-05-19 2023-06-23 中国人民解放军总医院第八医学中心 Information acquisition system before debridement of wounded patient
CN116310290A (en) * 2023-05-23 2023-06-23 山东中泳电子股份有限公司 Method for correcting swimming touch pad feedback time
CN116342583A (en) * 2023-05-15 2023-06-27 山东超越纺织有限公司 Anti-pilling performance detection method for spinning production and processing
CN116611748A (en) * 2023-07-20 2023-08-18 吴江市高瑞庭园金属制品有限公司 Titanium alloy furniture production quality monitoring system
CN116630314A (en) * 2023-07-24 2023-08-22 日照元鼎包装有限公司 Image processing-based preservation carton film coating detection method
CN116934755A (en) * 2023-09-18 2023-10-24 中国人民解放军总医院第八医学中心 Pulmonary tuberculosis CT image enhancement system based on histogram equalization
CN117218029A (en) * 2023-09-25 2023-12-12 南京邮电大学 Night dim light image intelligent processing method based on neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Differentiable Topology-Preserved Distance Transform for Pulmonary Airway Segmentation; Minghui Zhang et al.; Computer Vision and Pattern Recognition; 2022-09-17; pp. 1-10 *
Double-lumen tubes and bronchial blockers; M. Patel et al.; BJA Education; 2023-07-04; pp. 416-424 *
Research on Image Gray-Level Enhancement Algorithms; Gao Yun; China Masters' Theses Full-text Database (Information Science and Technology); 2007-06-15; I138-550 *
Research on Segmentation and Detection of Magnetic Tile Defect Images; Zhang Meng; China Masters' Theses Full-text Database (Engineering Science and Technology II); 2021-09-15; C042-132 *
Application Value of Fluorescence Bronchoscopy in Lung Cancer Diagnosis; Zhang Xia et al.; The Practical Journal of Cancer; 2013-09-25; Vol. 28, No. 5; pp. 507-509 *


Similar Documents

Publication Publication Date Title
CN109584209B (en) Vascular wall plaque recognition apparatus, system, method, and storage medium
US20190163950A1 (en) Large scale cell image analysis method and system
CN111524137A (en) Cell identification counting method and device based on image identification and computer equipment
WO2023137914A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111291701B (en) Sight tracking method based on image gradient and ellipse fitting algorithm
CN115661669B (en) Method and system for monitoring illegal farmland occupancy based on video monitoring
CN111489309B (en) Sparse unmixing pretreatment device and method
CN110555866A (en) Infrared target tracking method for improving KCF feature descriptor
CN112883824A (en) Finger vein feature recognition device for intelligent blood sampling and recognition method thereof
CN108898132A (en) A kind of terahertz image dangerous material recognition methods based on Shape context description
CN113450305A (en) Medical image processing method, system, equipment and readable storage medium
CN111881924B (en) Dark-light vehicle illumination identification method combining illumination invariance and short-exposure illumination enhancement
CN117522719B (en) Bronchoscope image auxiliary optimization system based on machine learning
CN115862121B (en) Face quick matching method based on multimedia resource library
CN116758336A (en) Medical image intelligent analysis system based on artificial intelligence
CN115830319A (en) Strabismus iris segmentation method based on attention mechanism and verification method
CN115909401A (en) Cattle face identification method and device integrating deep learning, electronic equipment and medium
CN110010228B (en) Face skin perspective algorithm based on image analysis
CN114399494A (en) Abnormal cell detection and segmentation method, device, equipment and storage medium
CN113139930A (en) Thyroid slice image classification method and device, computer equipment and storage medium
CN117557587B (en) Endoscope cold light source brightness automatic regulating system
Chehdi et al. A blind system to identify and filter degradations affecting an image
CN117541800B (en) Laryngoscope image-based laryngeal anomaly segmentation method
CN117575953B (en) Detail enhancement method for high-resolution forestry remote sensing image
CN114022473B (en) Horizon detection method based on infrared image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant