CN116485819B - Ear-nose-throat examination image segmentation method and system - Google Patents


Info

Publication number
CN116485819B
Authority
CN
China
Prior art keywords
window
characteristic
pixel point
sub
subdivided
Prior art date
Legal status
Active
Application number
CN202310735613.9A
Other languages
Chinese (zh)
Other versions
CN116485819A
Inventor
He Fuqin (何福芹)
Sun Ailian (孙爱莲)
Current Assignee
Affiliated Hospital of Qingdao University
Original Assignee
Affiliated Hospital of Qingdao University
Priority date
Filing date
Publication date
Application filed by Affiliated Hospital of Qingdao University
Priority to CN202310735613.9A
Publication of CN116485819A
Application granted
Publication of CN116485819B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V 10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention relates to the technical field of image segmentation, in particular to an ear-nose-throat examination image segmentation method and system, wherein the method comprises the following steps: acquiring a target otorhinolaryngological examination image, and dividing a region to be subdivided from the target otorhinolaryngological examination image; determining a fusion value corresponding to each pixel point to be subdivided; determining a first classification threshold and a second classification threshold; decomposing a preset window corresponding to each pixel point to be subdivided; performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set; performing element discrete analysis processing on each feature sub-window; determining a target weight corresponding to each pixel point to be subdivided; and carrying out self-adaptive fuzzy C-means clustering segmentation on the region to be subdivided. According to the method, data processing is performed on the target otorhinolaryngological examination image to realize its segmentation, which improves the accuracy and efficiency of segmenting otorhinolaryngological examination images; the method can be applied to the field of image segmentation.

Description

Ear-nose-throat examination image segmentation method and system
Technical Field
The invention relates to the technical field of image segmentation, in particular to an ear-nose-throat examination image segmentation method and system.
Background
With the development of technology, image segmentation techniques are becoming more and more widely used. For example, an otorhinolaryngological examination image may be segmented for ease of viewing or interpretation by medical staff. Currently, when segmenting an image, the following method is generally adopted: the image is cluster-segmented with a preset weight through a fuzzy C-means clustering algorithm, that is, image segmentation realized based on a clustering algorithm.
However, when the above-described method is adopted to segment the otorhinolaryngological examination image, there are often the following technical problems:
firstly, because the weight participating in fuzzy C-means clustering is often set based on subjective human experience, the setting result is often inaccurate, and thus the accuracy of segmenting the ear-nose-throat examination image is often low;
secondly, because the characteristics of each pixel point in the otorhinolaryngological examination image are often different, adopting only one weight and performing fuzzy C-means clustering segmentation on the whole otorhinolaryngological examination image leads to a slow clustering convergence rate, and thus the efficiency of segmenting the otorhinolaryngological examination image is low.
Disclosure of Invention
The summary of the invention is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. The summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the technical problem of low accuracy in segmentation of an ear-nose-throat examination image, the invention provides an ear-nose-throat examination image segmentation method and system.
In a first aspect, the present invention provides a method for segmenting an otorhinolaryngological examination image, the method comprising:
acquiring a target otorhinolaryngological examination image, and dividing a region to be subdivided from the target otorhinolaryngological examination image;
determining a fusion value corresponding to each pixel point to be subdivided according to the CT value and the gray value corresponding to each pixel point to be subdivided in the region to be subdivided;
determining a first classification threshold and a second classification threshold according to fusion values corresponding to each pixel point to be subdivided in the region to be subdivided;
decomposing a preset window corresponding to each pixel point to be subdivided according to the first classification threshold and the second classification threshold to obtain a characteristic sub-window set corresponding to the pixel point to be subdivided;
performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set to obtain a neighbor aggregation index corresponding to the feature sub-window;
performing element discrete analysis processing on each characteristic sub-window to obtain a target discrete index corresponding to the characteristic sub-window;
determining a target weight corresponding to each pixel point to be subdivided according to a target discrete index and a neighbor aggregation index corresponding to each characteristic sub-window in a characteristic sub-window set corresponding to each pixel point to be subdivided;
and performing self-adaptive fuzzy C-means clustering segmentation on the region to be subdivided according to the target weight corresponding to each pixel point to be subdivided in the region to be subdivided to obtain a target segmentation region set.
Optionally, the determining a first classification threshold and a second classification threshold according to the fusion value corresponding to each pixel point to be subdivided in the region to be subdivided includes:
determining a fusion histogram corresponding to the region to be subdivided according to fusion values corresponding to each pixel point to be subdivided in the region to be subdivided;
screening the three highest peaks from the peaks of the fusion histogram to serve as target peaks, and obtaining a target peak sequence;
determining fusion values corresponding to the first two target peaks in the target peak sequence as two endpoints of a first interval;
determining fusion values corresponding to the last two target peaks in the target peak sequence as two endpoints of a second interval;
determining a first classification threshold value through a threshold value calculation method according to pixel points of which fusion values belong to a first interval in the region to be subdivided;
and determining a second classification threshold value through a threshold value calculation method according to the pixel points of which the fusion value belongs to the second interval in the region to be subdivided.
Optionally, the decomposing the preset window corresponding to each pixel to be subdivided according to the first classification threshold and the second classification threshold to obtain a feature sub-window set corresponding to the pixel to be subdivided, including:
deleting the pixel points with fusion values larger than or equal to the first classification threshold value in the preset window corresponding to the pixel points to be subdivided to obtain a first sub-window;
deleting the pixels with fusion values smaller than the first classification threshold or larger than the second classification threshold in the preset window corresponding to the pixels to be subdivided to obtain a second sub-window;
deleting the pixel points whose fusion values are smaller than or equal to the second classification threshold in the preset window corresponding to the pixel points to be subdivided to obtain a third sub-window;
and determining the first sub-window, the second sub-window and the third sub-window as characteristic sub-windows to obtain a characteristic sub-window set corresponding to the pixel points to be subdivided.
Optionally, the performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set to obtain a neighbor aggregation index corresponding to the feature sub-window includes:
each pixel point in the characteristic sub-window is determined to be a characteristic pixel point, and a characteristic pixel point set corresponding to the characteristic sub-window is obtained;
determining the Euclidean distance between each characteristic pixel point in the characteristic pixel point set and the central position of the characteristic sub-window as a target distance index corresponding to the characteristic pixel point;
determining the number of pixel points in a preset neighborhood corresponding to each characteristic pixel point as the target number corresponding to the characteristic pixel point;
and determining a neighbor aggregation index corresponding to the characteristic sub-window according to the target distance index and the target number corresponding to each characteristic pixel point in the characteristic pixel point set, wherein the target distance index and the neighbor aggregation index are in negative correlation, and the target number and the neighbor aggregation index are in positive correlation.
Optionally, performing element discrete analysis processing on each feature sub-window to obtain a target discrete index corresponding to the feature sub-window, including:
each pixel point in the characteristic sub-window is determined to be a characteristic pixel point, and a characteristic pixel point set corresponding to the characteristic sub-window is obtained;
dividing the region where the feature pixel point set is located according to the position corresponding to the feature pixel point to obtain a connection region set corresponding to the feature sub-window;
performing differential discrete analysis processing on each characteristic pixel point in each connection region in the connection region set to obtain differential discrete corresponding to the characteristic pixel point;
determining a pixel point closest to the central position of a characteristic sub-window to which each connection region belongs in each connection region as a reference characteristic point corresponding to the connection region;
determining path dispersion corresponding to each characteristic pixel point according to a reference characteristic point corresponding to a connecting area to which the characteristic pixel point belongs;
determining a first discrete index corresponding to each connection region according to the difference discrete and the path discrete corresponding to each characteristic pixel point in each connection region, wherein the difference discrete and the path discrete are positively correlated with the first discrete index;
determining a second discrete index corresponding to each connection region according to the reference feature point corresponding to each connection region;
determining a third discrete index corresponding to each connection region according to a first discrete index and a second discrete index corresponding to each connection region, wherein the first discrete index and the second discrete index are positively correlated with the third discrete index;
and determining a target discrete index corresponding to the characteristic sub-window according to a third discrete index corresponding to each connecting region in the connecting region set corresponding to the characteristic sub-window, wherein the third discrete index and the target discrete index are positively correlated.
Optionally, the performing a difference discrete analysis process on each feature pixel point in each connection region in the connection region set to obtain a difference discrete corresponding to the feature pixel point includes:
if the number of the pixel points in the connection area to which the feature pixel points belong is larger than the preset number, determining the absolute value of the difference value of the fusion representative value corresponding to the feature pixel points and the fusion representative value corresponding to the connection area to which the feature pixel points belong as the difference dispersion corresponding to the feature pixel points, wherein the fusion representative value corresponding to the connection area is the average value of the fusion values corresponding to all the feature pixel points in the connection area;
If the number of the pixel points in the connecting area to which the characteristic pixel points belong is smaller than or equal to the preset number, determining a first preset factor as the difference dispersion corresponding to the characteristic pixel points.
Optionally, the determining, according to the reference feature point corresponding to the connection area to which each feature pixel point belongs, the path dispersion corresponding to the feature pixel point includes:
if the number of the pixel points in the connecting area to which the characteristic pixel points belong is larger than the preset number, determining the shortest path distance between the characteristic pixel points and the reference characteristic points as the path dispersion corresponding to the characteristic pixel points;
if the number of the pixel points in the connecting area to which the characteristic pixel points belong is smaller than or equal to the preset number, determining a second preset factor as the path dispersion corresponding to the characteristic pixel points.
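As an illustration, a minimal sketch of this path computation follows, assuming 4-connected paths inside the connection region and a placeholder value for the second preset factor (the function and parameter names are hypothetical):

```python
from collections import deque

def path_dispersion(region, start, reference, preset_number=1, second_factor=1.0):
    # region: set of (row, col) tuples forming one connection region;
    # start: the feature pixel point; reference: the reference feature point.
    if len(region) <= preset_number:
        return second_factor          # second preset factor (placeholder value)
    # Breadth-first search yields the shortest path distance inside the region.
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == reference:
            return dist               # shortest path distance = path dispersion
        for step in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if step in region and step not in seen:
                seen.add(step)
                queue.append((step, dist + 1))
    return second_factor              # unreachable; should not occur within one region
```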
Optionally, the determining, according to the reference feature point corresponding to each connection area, a second discrete index corresponding to the connection area includes:
determining the Euclidean distance between the center position of the preset window and the vertex position of the preset window as a first reference distance;
determining the Euclidean distance between the reference feature point corresponding to the connection region and the central position of the feature sub-window to which the connection region belongs as a second reference distance corresponding to the connection region;
And determining the ratio of the second reference distance corresponding to the connection region to the first reference distance as a second discrete index corresponding to the connection region.
Optionally, the determining, according to the target discrete index and the neighbor aggregation index corresponding to each feature sub-window in the feature sub-window set corresponding to each pixel to be subdivided, the target weight corresponding to the pixel to be subdivided includes:
determining a first weight factor corresponding to each characteristic sub-window according to a target discrete index and a neighbor aggregation index corresponding to each characteristic sub-window, wherein the target discrete index and the first weight factor are positively correlated, and the neighbor aggregation index and the first weight factor are negatively correlated;
determining the number of pixel points in each characteristic sub-window and the ratio of the number of pixel points in a preset window as a second weight factor corresponding to the characteristic sub-window;
determining a reference weight corresponding to the pixel point to be subdivided according to a first weight factor and a second weight factor corresponding to each characteristic sub-window in the characteristic sub-window set corresponding to the pixel point to be subdivided, wherein the first weight factor is positively correlated with the reference weight, and the second weight factor is negatively correlated with the reference weight;
and carrying out linear transformation on the reference weight corresponding to the pixel point to be subdivided to obtain a target weight corresponding to the pixel point to be subdivided.
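For illustration, the sketch below combines these quantities under assumed functional forms that respect only the stated correlations; the names, the 7×7 window assumption, and the final linear transformation are hypothetical, not the patent's exact formulas:

```python
import numpy as np

def target_weight(discrete_idx, aggregation_idx, pixel_counts,
                  window_pixels=49, eps=1e-6):
    # One entry per feature sub-window of a pixel point to be subdivided.
    discrete_idx = np.asarray(discrete_idx, dtype=float)
    aggregation_idx = np.asarray(aggregation_idx, dtype=float)
    pixel_counts = np.asarray(pixel_counts, dtype=float)
    # First weight factor: rises with the target discrete index and
    # falls with the neighbor aggregation index.
    first_factor = discrete_idx / (aggregation_idx + eps)
    # Second weight factor: ratio of sub-window pixel count to the pixel
    # count of the preset window (49 assumes a 7x7 preset window).
    second_factor = pixel_counts / window_pixels
    # Reference weight: positively correlated with the first factor,
    # negatively with the second.
    reference = float(np.sum(first_factor / (second_factor + eps)))
    # Placeholder linear transformation into a bounded target weight.
    return min(1.0, 0.01 * reference)
```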
In a second aspect, the present invention provides an otorhinolaryngological examination image segmentation system comprising a processor and a memory, said processor being adapted to process instructions stored in said memory to implement an otorhinolaryngological examination image segmentation method as described above.
The invention has the following beneficial effects:
according to the method for segmenting the ear, nose and throat examination image, disclosed by the invention, the segmentation of the target ear, nose and throat examination image is realized by carrying out data processing on the target ear, nose and throat examination image, the technical problems of low accuracy and efficiency in segmenting the ear, nose and throat examination image are solved, and the accuracy and efficiency in segmenting the ear, nose and throat examination image are improved. Firstly, the region to be subdivided is segmented from the acquired target otorhinolaryngological examination image, so that the region to be subdivided can be conveniently and finely segmented later. Then, because the sizes of the CT value and the gray value often reflect the densities of different tissues or organs, the CT value and the gray value corresponding to the pixel point to be subdivided are comprehensively considered, and the determined fusion value can represent the densities of different tissues or organs and can facilitate the subsequent segmentation of different tissue areas in the area to be subdivided. Then, the fusion value corresponding to each pixel point to be subdivided in the region to be subdivided is comprehensively considered, so that the accuracy of determining the first classification threshold value and the second classification threshold value can be improved. And continuing to comprehensively consider the first classification threshold and the second classification threshold, so that the accuracy of determining the characteristic sub-window set corresponding to each pixel point to be subdivided can be improved. And then, comprehensively considering the target discrete index and the neighbor aggregation index corresponding to each characteristic sub-window in the characteristic sub-window set corresponding to each pixel point to be subdivided, and improving the target weight corresponding to each pixel point to be subdivided. Finally, based on the target weight corresponding to each pixel point to be subdivided in the area to be subdivided, the adaptive fuzzy C-means clustering segmentation is carried out on the area to be subdivided, so that the fine segmentation of the area to be subdivided is realized. And secondly, the invention quantifies the target weight corresponding to each pixel to be subdivided, and compared with the whole image adopting the same weight, the invention adaptively sets a target weight for each pixel to be subdivided, thus compared with the existing fuzzy C-means clustering, the invention improves the clustering convergence rate, thereby improving the image segmentation efficiency.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an ear-nose-throat examination image segmentation method of the present invention;
FIG. 2 is a schematic diagram of the decomposition of a preset window according to the present invention;
FIG. 3 is a schematic diagram of a connection region generation of the present invention;
fig. 4 is a schematic diagram of a shortest path between a feature pixel point and a reference feature point according to the present invention.
Wherein, the reference numerals include: a feature sub-window 301, a reference window 302, and feature pixel points 401.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of the specific implementation, structure, features and effects of the technical solution according to the present invention with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides an ear-nose-throat examination image segmentation method, which comprises the following steps:
acquiring a target otorhinolaryngological examination image, and dividing a region to be subdivided from the target otorhinolaryngological examination image;
determining a fusion value corresponding to each pixel point to be subdivided according to the CT value and the gray value corresponding to each pixel point to be subdivided in the region to be subdivided;
determining a first classification threshold and a second classification threshold according to fusion values corresponding to each pixel point to be subdivided in the region to be subdivided;
decomposing a preset window corresponding to each pixel point to be subdivided according to the first classification threshold and the second classification threshold to obtain a characteristic sub-window set corresponding to the pixel point to be subdivided;
performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set to obtain a neighbor aggregation index corresponding to the feature sub-window;
performing element discrete analysis processing on each characteristic sub-window to obtain a target discrete index corresponding to the characteristic sub-window;
determining a target weight corresponding to the pixel points to be subdivided according to a target discrete index and a neighbor aggregation index corresponding to each characteristic sub-window in the characteristic sub-window set corresponding to each pixel point to be subdivided;
and carrying out self-adaptive fuzzy C-means clustering segmentation on the region to be subdivided according to the target weight corresponding to each pixel point to be subdivided in the region to be subdivided to obtain a target segmentation region set.
Each step above is developed in detail below:
referring to fig. 1, a flow of some embodiments of an otorhinolaryngological examination image segmentation method of the present invention is shown. The ear-nose-throat examination image segmentation method comprises the following steps:
step S1, acquiring a target otorhinolaryngological examination image, and dividing a region to be subdivided from the target otorhinolaryngological examination image.
In some embodiments, a target otorhinolaryngological examination image may be acquired and the region to be subdivided may be segmented from the target otorhinolaryngological examination image.
The target otorhinolaryngological examination image may be an image of the ear, nose and throat to be detected. For example, the target otorhinolaryngological examination image may be an ear-nose-throat CT (Computed Tomography) image to be detected after denoising processing. The ear, nose and throat to be detected may be the ear, nose and throat that needs to be examined. The region to be subdivided may be a region on which refined segmentation is to be performed. For example, the region to be subdivided may be a nose region to be detected. The nose region to be detected may be the nose region of the ear, nose and throat to be detected.
The region to be subdivided is segmented from the acquired target otorhinolaryngological examination image, so that the region to be subdivided can be finely segmented later.
As an example, this step may include the steps of:
first, acquiring an ear-nose-throat CT image corresponding to the ear-nose-throat to be detected through a CT device, and taking the ear-nose-throat CT image as the ear-nose-throat CT image to be detected.
Wherein the CT device may be a device for acquiring CT images. For example, the CT device may be a CT machine. The ear-nose-throat CT image to be detected may be a CT image of the ear, nose and throat to be detected.
And secondly, denoising the ear-nose-throat CT image to obtain a target ear-nose-throat examination image.
For example, a denoising technique may be used to denoise the CT image of the ear, nose and throat, and the image obtained after denoising is determined as the target ear, nose and throat examination image. Among these, denoising techniques may be, but are not limited to: mean filter denoising, median filter denoising and bilateral filter denoising.
It should be noted that the collected ear-nose-throat CT image may contain considerable noise, which affects the definition of the image and may in turn prevent doctors from obtaining correct examination results. Denoising the ear-nose-throat CT image can therefore reduce noise interference to a certain extent and improve the definition of the image, making it easier for doctors to obtain correct examination results. In order to preserve more detailed information in the ear-nose-throat CT image and enhance image characteristics, the bilateral filtering denoising technique can be used to denoise the ear-nose-throat CT image.
And thirdly, segmenting the region to be subdivided from the target otorhinolaryngological examination image through a Scale-Invariant Feature Transform (SIFT) feature matching algorithm.
For example, if the region to be subdivided is the nose region to be detected, segmenting the nose region to be detected from the target otorhinolaryngological examination image may include: matching the sample nose region with the target otorhinolaryngological examination image through the SIFT feature matching algorithm to obtain an ROI (Region of Interest), and determining the nose region extracted from the target otorhinolaryngological examination image as the nose region to be detected. The sample nose region may be the nose region of a pre-acquired sample ear, nose and throat. The sample ear, nose and throat may be a normal ear, nose and throat. The method for acquiring the sample nose region may be: marking the nose region manually in the CT image corresponding to the sample ear, nose and throat, and determining the marked nose region as the sample nose region.
It should be noted that the nose position and the face position are relatively fixed, and in the process of scanning the CT image corresponding to the nose, the region of the whole head is often collected. Therefore, the sample nose region and the target otorhinolaryngological examination image can be matched through the SIFT feature matching algorithm to obtain the ROI, the nose region can be extracted from the target otorhinolaryngological examination image, and the nose region to be detected, which only includes the nose, can be obtained and recorded as the region to be subdivided.
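As a rough illustration of this step, the sketch below chains OpenCV's bilateral filter and SIFT matching; the file names, matcher settings, and bounding-box ROI extraction are assumptions rather than details fixed by the method:

```python
import cv2
import numpy as np

# Denoise the ENT CT slice with an edge-preserving bilateral filter.
ct = cv2.imread("ent_ct_slice.png", cv2.IMREAD_GRAYSCALE)
target = cv2.bilateralFilter(ct, d=9, sigmaColor=75, sigmaSpace=75)

# Match a pre-acquired sample nose region against the target image.
sample_nose = cv2.imread("sample_nose_region.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(sample_nose, None)
kp2, des2 = sift.detectAndCompute(target, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]   # Lowe ratio test

# Bounding box of the matched keypoints serves as the ROI.
pts = np.float32([kp2[m.trainIdx].pt for m in good])
x, y, w, h = cv2.boundingRect(pts)
region_to_subdivide = target[y:y + h, x:x + w]
```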
And S2, determining a fusion value corresponding to each pixel point to be subdivided according to the CT value and the gray value corresponding to each pixel point to be subdivided in the area to be subdivided.
In some embodiments, the fusion value corresponding to each pixel to be subdivided may be determined according to the CT value and the gray value corresponding to each pixel to be subdivided in the region to be subdivided.
The pixel points to be subdivided may be pixel points in the region to be subdivided. The CT value corresponding to a pixel point to be subdivided may be the CT value of that pixel point. The gray value corresponding to a pixel point to be subdivided may be the gray value of that pixel point. The gray value may be in the range [0, 255]. The CT value may be in the range [-1000, 1000].
It should be noted that, because the sizes of the CT values and the gray values often reflect densities of different tissues or organs, the CT values and the gray values corresponding to the pixels to be subdivided are comprehensively considered, and the determined fusion values can represent densities of different tissues or organs, so that different tissue regions in the region to be subdivided can be conveniently segmented subsequently.
As an example, this step may include the steps of:
the first step, respectively normalizing CT values and gray values corresponding to pixel points to be subdivided to obtain a first standard index and a second standard index corresponding to the pixel points to be subdivided.
The value ranges of the first standard index and the second standard index may be the same. For example, the value range of both may be [0, 255]. The first standard index may be a normalized CT value. The second standard index may be a normalized gray value.
For example, normalizing the CT value and the gray value corresponding to the pixel to be subdivided respectively to obtain a first standard indicator and a second standard indicator corresponding to the pixel to be subdivided may include the following substeps:
and a first sub-step, normalizing CT values corresponding to each pixel point to be subdivided through linear function normalization to obtain a reference index corresponding to each pixel point to be subdivided.
The reference index may be a CT value normalized by a linear function. The value range of the reference index may be [0, 1]. Linear function normalization may be normalization realized by a linear transformation.
And a second sub-step, namely determining the product of the reference index corresponding to the pixel point to be subdivided and 255 as a first standard index corresponding to the pixel point to be subdivided.
And a third sub-step, determining the gray value corresponding to the pixel point to be subdivided as the second standard index corresponding to the pixel point to be subdivided.
And secondly, determining a product of a first standard index corresponding to the pixel points to be subdivided and a first preset weight as a first fusion index corresponding to the pixel points to be subdivided.
The first preset weight may be a weight of a first standard index set in advance. For example, the first preset weight may be 0.6.
And thirdly, determining the product of a second standard index corresponding to the pixel points to be subdivided and a second preset weight as a second fusion index corresponding to the pixel points to be subdivided.
The second preset weight may be a weight of a second standard index set in advance. The sum of the first preset weight and the second preset weight may be 1. For example, the second preset weight may be 0.4.
And fourthly, determining the sum of the first fusion index and the second fusion index corresponding to the pixel points to be subdivided as a fusion value corresponding to the pixel points to be subdivided.
For example, taking the region to be subdivided as the nose region to be detected, and taking the pixel point to be subdivided as a pixel point in the nose region to be detected, the formula corresponding to the fusion value of a pixel point in the nose region to be detected may be:

$$R_i = \omega_1 X_i + \omega_2 Y_i$$

where $R_i$ is the fusion value corresponding to the $i$-th pixel point in the nose region to be detected; $\omega_1$ is the first preset weight, which may, for example, be 0.6; $\omega_2$ is the second preset weight, which may, for example, be 0.4; $X_i$ is the first standard index, that is, the value obtained by normalizing the CT value corresponding to the $i$-th pixel point; $Y_i$ is the second standard index, that is, the value obtained by normalizing the gray value corresponding to the $i$-th pixel point; and $i$ is the serial number of the pixel point in the nose region to be detected.

It should be noted that the larger $X_i$ and $Y_i$ are, the greater the density of the local tissue or organ corresponding to the $i$-th pixel point in the nose region to be detected. Thus $R_i$ can characterize that density, and the larger $R_i$ is, the greater the density of the local tissue or organ corresponding to the $i$-th pixel point. Secondly, comprehensively considering $X_i$ and $Y_i$ makes the value of $R_i$ conform better to the actual situation. Furthermore, $X_i$ and $Y_i$ are both normalized values, which eliminates the influence of different dimensions, so that $X_i$ and $Y_i$ may be added.
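A minimal sketch of this step, assuming min-max normalization of the CT values over the region itself and the example weights 0.6 and 0.4:

```python
import numpy as np

def fusion_values(ct_values, gray_values, w1=0.6, w2=0.4):
    # CT values in [-1000, 1000]; gray values in [0, 255].
    ct = np.asarray(ct_values, dtype=float)
    gray = np.asarray(gray_values, dtype=float)
    # Reference index: linear (min-max) normalization of the CT value to [0, 1].
    ref = (ct - ct.min()) / (ct.max() - ct.min() + 1e-12)
    first_standard = 255.0 * ref      # first standard index, range [0, 255]
    second_standard = gray            # second standard index (gray value itself)
    # Fusion value: weighted sum of the two standard indexes.
    return w1 * first_standard + w2 * second_standard
```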
And S3, determining a first classification threshold and a second classification threshold according to fusion values corresponding to the pixel points to be subdivided in the region to be subdivided.
In some embodiments, the first classification threshold and the second classification threshold may be determined according to the fusion values corresponding to the pixel points to be subdivided in the region to be subdivided.
Wherein the first classification threshold and the second classification threshold may be thresholds for classification. The second classification threshold may be greater than the first classification threshold.
It should be noted that, by comprehensively considering the fusion values corresponding to the pixel points to be subdivided in the area to be subdivided, the accuracy of determining the first classification threshold and the second classification threshold can be improved.
As an example, this step may include the steps of:
and determining a fusion histogram corresponding to the region to be subdivided according to the fusion value corresponding to each pixel point to be subdivided in the region to be subdivided.
The fusion histogram may be a histogram with the fusion value as the abscissa and the frequency count or frequency as the ordinate. The fusion histogram may be obtained by counting all pixel points to be subdivided in the region to be subdivided according to the magnitude of their fusion values, so as to obtain the frequency count or frequency of occurrence of the pixel points to be subdivided.
And step two, screening out the highest three peaks from the peaks of the fusion histogram, and taking the highest three peaks as target peaks to obtain a target peak sequence.
For example, the highest three peaks can be screened out from all peaks of the fusion histogram, the screened highest three peaks are used as target peaks, three target peaks are obtained, and the three target peaks are sequenced according to fusion values corresponding to the target peaks, so that a target peak sequence is obtained. The fusion value corresponding to the target peak may be an abscissa included in the coordinate point where the target peak is located.
And thirdly, determining fusion values corresponding to the first two target peaks in the target peak sequence as two endpoints of the first interval.
For example, a smaller fusion value of fusion values corresponding to the first two target peaks in the target peak sequence may be determined as a left end point of the first interval, and a larger fusion value of fusion values corresponding to the first two target peaks in the target peak sequence may be determined as a right end point of the first interval.
And fourthly, determining fusion values corresponding to the last two target peaks in the target peak sequence as two endpoints of the second interval.
For example, a smaller fusion value of fusion values corresponding to the last two target peaks in the target peak sequence may be determined as a left end point of the second interval, and a larger fusion value of fusion values corresponding to the last two target peaks in the target peak sequence may be determined as a right end point of the second interval.
And fifthly, determining a first classification threshold value through a threshold value calculation method according to the pixel points of the fusion value belonging to the first interval in the region to be subdivided.
The threshold value calculation method may be a method for calculating a threshold value. For example, the threshold calculation method may be a maximum inter-class variance method.
For example, the threshold may be obtained by a maximum inter-class variance method based on the pixel points in the region to be subdivided, where the fusion value belongs to the first section, and the threshold obtained at this time may be determined as the first classification threshold.
And sixthly, determining a second classification threshold value through a threshold value calculation method according to the pixel points of which the fusion value belongs to the second interval in the region to be subdivided.
For example, the threshold may be obtained by a maximum inter-class variance method based on the pixel points in the region to be subdivided where the fusion value belongs to the second section, and the threshold obtained at this time may be determined as the second classification threshold.
It should be noted that the region to be subdivided is a region in the target otorhinolaryngological examination image, and the pixel points in the target otorhinolaryngological examination image can often be classified into three types: a black background portion, a white bone portion, and gray cellular tissue. For example, if the region to be subdivided is the nose region to be detected, the pixel points in the region to be subdivided can often be classified into these three types. The three highest peaks screened out can often represent the three categories of the region to be subdivided. Therefore, the first classification threshold and the second classification threshold can be determined in the first interval and the second interval through the threshold calculation method, and the region to be subdivided can conveniently be divided into three types through the first classification threshold and the second classification threshold.
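The following sketch shows one way this step could be realized with standard tooling; the 256-bin histogram and the SciPy/scikit-image helpers are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks
from skimage.filters import threshold_otsu

def classification_thresholds(fusion, bins=256):
    # Fusion histogram over all pixel points to be subdivided.
    hist, edges = np.histogram(fusion, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    # Three highest peaks, ordered by fusion value (the target peak sequence).
    peaks, _ = find_peaks(hist)
    top3 = np.sort(centers[peaks[np.argsort(hist[peaks])[-3:]]])
    # Otsu (maximum inter-class variance) inside each interval.
    first = fusion[(fusion >= top3[0]) & (fusion <= top3[1])]
    second = fusion[(fusion >= top3[1]) & (fusion <= top3[2])]
    return threshold_otsu(first), threshold_otsu(second)
```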
And S4, decomposing a preset window corresponding to each pixel to be subdivided according to the first classification threshold and the second classification threshold to obtain a characteristic sub-window set corresponding to the pixel to be subdivided.
In some embodiments, the preset window corresponding to each pixel to be subdivided may be decomposed according to the first classification threshold and the second classification threshold, so as to obtain the feature sub-window set corresponding to the pixel to be subdivided.
The preset window may be a window whose size is set in advance. For example, the preset window may be a 7×7 window. The pixel point to be subdivided may be located at the center of its corresponding preset window. Each feature sub-window may be the same size as the preset window. The total number of pixel points in the feature sub-window set may be equal to the number of pixel points in the preset window.
It should be noted that, by comprehensively considering the first classification threshold and the second classification threshold, the accuracy of determining the feature sub-window set corresponding to each pixel point to be subdivided can be improved.
As an example, this step may include the steps of:
and a first step of deleting the pixels with fusion values greater than or equal to the first classification threshold value in the preset window corresponding to the pixels to be subdivided to obtain a first sub-window.
And a second step of deleting the pixels with the fusion value smaller than the first classification threshold or larger than the second classification threshold in the preset window corresponding to the pixels to be subdivided to obtain a second sub-window.
And thirdly, deleting the pixels with fusion values smaller than or equal to the second classification threshold value in the preset window corresponding to the pixels to be subdivided to obtain a third sub-window.
And step four, determining the first sub-window, the second sub-window and the third sub-window as characteristic sub-windows to obtain a characteristic sub-window set corresponding to the pixel points to be subdivided.
For example, the preset window may be a 3×3 window, and the first classification threshold may be 95, the second classification threshold may be 115, as shown in fig. 2, the grid graph on the left of the equal sign may represent the preset window, and the three grid graphs on the right of the equal sign may represent the first sub-window, the second sub-window, and the third sub-window, respectively. The preset window may contain 9 pixels, and fusion values corresponding to the 9 pixels may be 75, 100, 97, 83, 121, 98, 89, 116 and 123 respectively. The first sub-window may contain 3 pixels, and fusion values corresponding to the 3 pixels may be 75, 83, and 89, respectively. The second sub-window may contain 3 pixels, and the fusion values corresponding to the 3 pixels may be 100, 97, and 98, respectively. The third sub-window may contain 3 pixels, and the fusion values corresponding to the 3 pixels may be 121, 116, and 123, respectively.
It should be noted that, through the first classification threshold and the second classification threshold, the pixels in the preset window corresponding to each pixel to be subdivided may be classified into three types, and each feature sub-window in the feature sub-window set corresponding to each pixel to be subdivided may be composed of different types of pixels.
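A minimal sketch of the decomposition, reusing the 3×3 example above; representing deleted pixels with NaN is an implementation choice, not something the method specifies:

```python
import numpy as np

def decompose_window(window, t1, t2):
    # NaN marks "deleted" pixels so every sub-window keeps the window shape.
    w = np.asarray(window, dtype=float)
    first = np.where(w < t1, w, np.nan)                   # fusion value < t1
    second = np.where((w >= t1) & (w <= t2), w, np.nan)   # t1 <= value <= t2
    third = np.where(w > t2, w, np.nan)                   # fusion value > t2
    return first, second, third

# The 3x3 example from the text, with t1 = 95 and t2 = 115:
window = [[75, 100, 97], [83, 121, 98], [89, 116, 123]]
first, second, third = decompose_window(window, 95, 115)
# first keeps 75, 83, 89; second keeps 100, 97, 98; third keeps 121, 116, 123.
```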
And S5, performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set to obtain neighbor aggregation indexes corresponding to the feature sub-windows.
In some embodiments, a neighbor aggregation analysis process may be performed on each feature sub-window in the feature sub-window set to obtain a neighbor aggregation indicator corresponding to the feature sub-window.
As an example, this step may include the steps of:
and determining each pixel point in the characteristic sub-window as a characteristic pixel point to obtain a characteristic pixel point set corresponding to the characteristic sub-window.
The feature pixel point set corresponding to the feature sub-window may include: all pixels within the feature sub-window.
And secondly, determining the Euclidean distance between each characteristic pixel point in the characteristic pixel point set and the central position of the characteristic sub-window as a target distance index corresponding to the characteristic pixel point.
For example, for any one feature pixel point in any one feature sub-window, the euclidean distance between the center position of the feature pixel point and the center position of the feature sub-window may be determined as the euclidean distance between the feature pixel point and the center position of the feature sub-window, and the euclidean distance is used as the target distance index corresponding to the feature pixel point.
And thirdly, determining the number of the pixel points in the preset neighborhood corresponding to each characteristic pixel point as the target number corresponding to the characteristic pixel points.
The preset neighborhood may be a neighborhood whose size is set in advance. For example, the preset neighborhood may be a 4-neighborhood. The target number corresponding to a feature pixel point may be equal to the number of pixel points in the preset neighborhood corresponding to that feature pixel point, that is, the number of feature pixel points in the preset neighborhood corresponding to that feature pixel point.
And step four, determining a neighbor aggregation index corresponding to the characteristic sub-window according to the target distance index and the target number corresponding to each characteristic pixel point in the characteristic pixel point set.
Wherein the target distance index may be inversely related to the neighbor aggregation index. The target number may be positively correlated with the neighbor aggregation indicator.
For example, taking the region to be subdivided as the nose region to be detected, and taking the pixel point to be subdivided as a pixel point in the nose region to be detected, the formula corresponding to the neighbor aggregation index of a feature sub-window may, for example, take the following form:

$$A_{ij} = \mathrm{Norm}\left(\frac{1}{T_{ij}}\sum_{t=1}^{T_{ij}} n_{ijt}\left(1 - \frac{d_{ijt}}{D}\right)\right)$$

where $A_{ij}$ is the neighbor aggregation index corresponding to the $j$-th feature sub-window in the feature sub-window set corresponding to the $i$-th pixel point in the nose region to be detected. $d_{ijt}$ is the target distance index corresponding to the $t$-th feature pixel point in the feature pixel point set corresponding to the $j$-th feature sub-window, that is, the Euclidean distance between the $t$-th feature pixel point and the center position of the $j$-th feature sub-window. $D$ is the Euclidean distance between the center position of the preset window and a vertex position of the preset window, which is the largest Euclidean distance between the center pixel point of the preset window and any pixel point in the preset window. $n_{ijt}$ is the target number corresponding to the $t$-th feature pixel point, that is, the number of feature pixel points in the preset neighborhood corresponding to the $t$-th feature pixel point. $T_{ij}$ is the number of pixel points in the $j$-th feature sub-window, that is, the number of feature pixel points in the feature pixel point set corresponding to the $j$-th feature sub-window. $n_{ijt}\left(1 - d_{ijt}/D\right)$ is the sub-neighbor aggregation factor corresponding to the $t$-th feature pixel point; it is negatively correlated with $d_{ijt}$ and positively correlated with $n_{ijt}$. $\mathrm{Norm}$ is a normalization function, which can realize normalization. $i$ is the serial number of the pixel point in the nose region to be detected; $j$ is the serial number of the feature sub-window in the feature sub-window set corresponding to the $i$-th pixel point; and $t$ is the serial number of the feature pixel point in the feature pixel point set corresponding to the $j$-th feature sub-window.

It should be noted that the smaller $d_{ijt}$ is, the closer the $t$-th feature pixel point is to the center position of the $j$-th feature sub-window, and therefore the closer it is to the $i$-th pixel point. The larger $n_{ijt}$ is, the more pixel points there are in the preset neighborhood corresponding to the $t$-th feature pixel point, and the denser the pixel points around the $t$-th feature pixel point. Thus, the larger $A_{ij}$ is, the closer the pixel points in the $j$-th feature sub-window are to the $i$-th pixel point and the denser the pixel points in the $j$-th feature sub-window are, so the more likely the $i$-th pixel point is a clustering center, and the smaller the weight required for the $i$-th pixel point.
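A sketch of this computation for one feature sub-window; the boolean-mask representation and the final scaling are assumptions, and the combination mirrors the hedged formula above:

```python
import numpy as np

def neighbor_aggregation(mask, D):
    # mask: boolean array over the preset window, True where a feature
    # pixel point of this sub-window lies; D: center-to-vertex distance.
    rows, cols = np.nonzero(mask)
    cy, cx = (mask.shape[0] - 1) / 2, (mask.shape[1] - 1) / 2
    dist = np.sqrt((rows - cy) ** 2 + (cols - cx) ** 2)   # target distance index
    # Target number: feature pixel points in each pixel's 4-neighborhood.
    p = np.pad(mask, 1, constant_values=False)
    count = (p[:-2, 1:-1].astype(int) + p[2:, 1:-1]
             + p[1:-1, :-2] + p[1:-1, 2:])[rows, cols]
    # Sub-neighbor aggregation factor: falls with distance, rises with count.
    factors = count * (1.0 - dist / D)
    return factors.mean() / 4.0       # crude normalization (max 4 neighbors)
```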
And S6, performing element discrete analysis processing on each characteristic sub-window to obtain a target discrete index corresponding to the characteristic sub-window.
In some embodiments, element discrete analysis processing may be performed on each feature sub-window to obtain the target discrete index corresponding to the feature sub-window.
As an example, this step may include the steps of:
and determining each pixel point in the characteristic sub-window as a characteristic pixel point to obtain a characteristic pixel point set corresponding to the characteristic sub-window.
And secondly, dividing the region where the feature pixel point set is located according to the position corresponding to the feature pixel point to obtain a connection region set corresponding to the feature sub-window.
The region where the feature pixel point set is located may be a region formed by all feature pixels in the feature pixel point set, that is, a region formed by all pixels in the feature sub-window.
For example, according to the position corresponding to the feature pixel point, dividing the region where the feature pixel point set is located to obtain the connection region set corresponding to the feature sub-window may include the following sub-steps:
a first sub-step of randomly selecting a feature pixel point from the feature pixel point set as a seed pixel point; when feature pixel points exist in the preset neighborhood corresponding to the seed pixel point, each such feature pixel point is also determined as a seed pixel point; this seed-pixel-point determining step is repeated whenever the preset neighborhood of a determined seed pixel point contains feature pixel points that are not yet seed pixel points, until every pixel point in the preset neighborhoods of all seed pixel points is a seed pixel point; the region composed of all seed pixel points is then determined as a first region.
The determining of the seed pixel points may include: determining each feature pixel point existing in the preset neighborhood corresponding to a seed pixel point as a seed pixel point.
And a second sub-step of, when there are feature pixel points outside the first region in the feature pixel point set, updating the feature pixel point set to all feature pixel points outside the first region and repeating the first sub-step to obtain yet another first region.
And a third sub-step of repeating the second sub-step while feature pixel points outside the obtained first regions still exist in the feature pixel point set; when no such feature pixel points remain, all the obtained first regions constitute the connection region set.
For example, the preset neighborhood may be a 4 neighborhood, as shown in fig. 3, an area formed by all pixel points in the feature sub-window 301 may be divided, and the obtained connection area set corresponding to the feature sub-window 301 may include 3 connection areas, where the 3 connection areas are respectively: the area filled by the horizontal lines within the reference window 302, the area filled by the vertical lines within the reference window 302, and the area filled by the diagonal lines within the reference window 302. The reference window 302 may be a feature sub-window 301 after region segmentation. The squares filled by lines within the feature sub-window 301 and the reference window 302 may characterize the pixel points.
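The seed-growing procedure above amounts to extracting 4-connected components; a compact sketch using SciPy (the library choice is an assumption):

```python
import numpy as np
from scipy import ndimage

def connection_regions(mask):
    # Cross-shaped structuring element = 4-neighborhood connectivity,
    # matching the seed-growing procedure described above.
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])
    labels, n = ndimage.label(mask, structure=structure)
    # Each label is one connection region (a "first region").
    return [np.argwhere(labels == k) for k in range(1, n + 1)]
```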
And thirdly, performing difference discrete analysis processing on each characteristic pixel point in each connection region in the connection region set to obtain the difference discrete corresponding to the characteristic pixel point.
For example, performing a differential discrete analysis process on each feature pixel point in each connection region in the connection region set to obtain a differential discrete corresponding to the feature pixel point may include the following substeps:
and a first sub-step of determining the absolute value of the difference between the fusion value corresponding to the feature pixel point and the fusion representative value corresponding to the connection region to which the feature pixel point belongs as the difference dispersion corresponding to the feature pixel point if the number of the pixel points in the connection region to which the feature pixel point belongs is larger than the preset number.
The preset number may be a number set in advance. For example, the preset number may be 1. The fusion representative value corresponding to a connection region may be the mean of the fusion values corresponding to all feature pixel points in the connection region.
And a second sub-step, if the number of the pixel points in the connection area to which the characteristic pixel points belong is smaller than or equal to the preset number, determining the first preset factor as the difference dispersion corresponding to the characteristic pixel points.
The first preset factor may be a factor set in advance. For example, the method for obtaining the first preset factor may include: determining the range of the fusion values corresponding to the pixel points in all the first sub-windows as a first range; determining the range of the fusion values corresponding to the pixel points in all the second sub-windows as a second range; determining the range of the fusion values corresponding to the pixel points in all the third sub-windows as a third range; and determining the maximum of the first range, the second range and the third range as the first preset factor.
As another example, the method for obtaining the first preset factor may include: screening out characteristic pixel points with the number of pixel points larger than the preset number in the affiliated connection area from all characteristic pixel point sets, and taking the characteristic pixel points as first characteristic points to obtain a first characteristic point set; determining the absolute value of the difference value between the fusion value corresponding to each first feature point in the first feature point set and the fusion representative value corresponding to the connection area to which the first feature point belongs as a first candidate factor corresponding to the first feature point to obtain a first candidate factor set; and screening the largest first candidate factor from the first candidate factor set to be used as a first preset factor.
It should be noted that the difference dispersion can represent the relative difference between a feature pixel point and its surrounding similar pixel points, and hence their degree of dispersion: the greater the relative difference, the more dissimilar the feature pixel point is from the surrounding similar pixel points, and the more scattered the distribution between them tends to be. When the preset number is 1, a feature pixel point whose connection region contains no more than the preset number of pixel points is usually an isolated pixel point of its category, i.e. its surroundings are pixel points of a category different from its own, so a relatively large difference dispersion can be assigned to such a feature pixel point.
For example, taking the nose region to be detected as the region to be subdivided, with the pixel points to be subdivided being the pixel points in the nose region to be detected, and with a preset number of 1, the difference dispersion corresponding to a feature pixel point may be determined as:

$$X_{i,j,t}=\begin{cases}\left|F_{i,j,t}-\overline{F}_{i,j,t}\right|, & n_{i,j,t}>1\\ R, & n_{i,j,t}\leq 1\end{cases}$$

wherein $X_{i,j,t}$ is the difference dispersion corresponding to the t-th feature pixel point in the feature pixel point set corresponding to the j-th feature sub-window in the feature sub-window set corresponding to the i-th pixel point in the nose region to be detected; $F_{i,j,t}$ is the fusion value corresponding to that feature pixel point; $\overline{F}_{i,j,t}$ is the fusion representative value corresponding to the connection region to which that feature pixel point belongs; $n_{i,j,t}$ is the number of feature pixel points in the connection region where that feature pixel point is located; $R$ is the first preset factor; i is the serial number of the pixel point in the nose region to be detected; j is the serial number of the feature sub-window in the feature sub-window set corresponding to the i-th pixel point; and t is the serial number of the feature pixel point in the feature pixel point set corresponding to the j-th feature sub-window.

When $n_{i,j,t}>1$, the larger $\left|F_{i,j,t}-\overline{F}_{i,j,t}\right|$ is, the larger the difference between the fusion value corresponding to the t-th feature pixel point and the fusion representative value of its connection region, so the larger the relative difference between the t-th feature pixel point and its surrounding similar pixel points, the more dissimilar they are, and the more scattered their distribution tends to be. When $n_{i,j,t}\leq 1$, the t-th feature pixel point is usually an isolated pixel point of its category, i.e. its surroundings are pixel points of a category different from its own; a fusion representative value for its connection region then need not be calculated, and the first preset factor, which may be set greater than any attainable $\left|F_{i,j,t}-\overline{F}_{i,j,t}\right|$, can be taken as its difference dispersion.
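As a sketch of this case split, under the same assumption that the preset number is 1 and reusing `connected_regions` from above (`fusion` is a 2-D array of fusion values for the sub-window; all names are illustrative):

```python
import numpy as np

def difference_dispersion(fusion: np.ndarray,
                          regions: list[list[tuple[int, int]]],
                          first_preset_factor: float,
                          preset_number: int = 1) -> dict[tuple[int, int], float]:
    """Difference dispersion per feature pixel, following the case split above."""
    dispersion = {}
    for region in regions:
        values = [float(fusion[p]) for p in region]
        representative = sum(values) / len(values)   # fusion representative value
        for p, v in zip(region, values):
            if len(region) > preset_number:
                dispersion[p] = abs(v - representative)
            else:                                    # isolated pixel of its category
                dispersion[p] = first_preset_factor
    return dispersion
```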
And fourthly, determining the pixel point closest to the central position of the characteristic sub-window to which the connection region belongs in each connection region as a reference characteristic point corresponding to the connection region.
Wherein, the characteristic sub-window to which a connection region belongs is the characteristic sub-window from which that connection region was divided.
For example, a pixel point closest to the center position of the feature sub-window to which the connection region belongs in a certain connection region may be determined as the reference feature point corresponding to the connection region.
Fifthly, determining path dispersion corresponding to the characteristic pixel points according to the reference characteristic points corresponding to the connecting areas to which the characteristic pixel points belong.
Wherein, the connection area to which a characteristic pixel point belongs is the connection area containing that characteristic pixel point.
For example, according to the reference feature point corresponding to the connection area to which each feature pixel point belongs, determining the path discrete corresponding to the feature pixel point may include the following substeps:
and a first sub-step of determining the shortest path distance between the feature pixel point and the reference feature point as the path dispersion corresponding to the feature pixel point if the number of the pixel points in the connection area to which the feature pixel point belongs is greater than a preset number.
The preset number may be a number set in advance. For example, the preset number may be 1. The shortest path distance between a feature pixel point and the reference feature point may be the distance corresponding to the shortest path between them within the connection region, i.e. the minimum number of pixel steps between the feature pixel point and the reference feature point.
For example, for a first feature pixel point in a certain connection area, if the number of feature pixel points in the connection area is greater than a preset number, the shortest path distance between the first feature pixel point and a reference feature point corresponding to the connection area may be determined as the path dispersion corresponding to the first feature pixel point.
As shown in fig. 4, the square filled with vertical lines may represent a reference feature point, the shortest path between the feature pixel point 401 and the reference feature point may be the path in which the arrow in fig. 4 is located, and the shortest path distance between the feature pixel point 401 and the reference feature point may be 3.
And a second sub-step, if the number of the pixel points in the connection area to which the characteristic pixel points belong is smaller than or equal to the preset number, determining a second preset factor as the path dispersion corresponding to the characteristic pixel points.
The second preset factor may be a factor set in advance. For example, the second preset factor may be 1.
As another example, the method for obtaining the second preset factor may include: screening out characteristic pixel points with the number of pixel points larger than the preset number in the affiliated connection area from all characteristic pixel point sets, and taking the characteristic pixel points as first characteristic points to obtain a first characteristic point set; determining the shortest path distance between each first feature point in the first feature point set and a reference feature point corresponding to a connection area to which the first feature point belongs as a second candidate factor corresponding to the first feature point to obtain a second candidate factor set; and screening the largest second candidate factor from the second candidate factor set to be used as a second preset factor.
It should be noted that the path dispersion can represent the shortest path distance between a feature pixel point and the reference feature point, and hence the degree of dispersion: the larger the shortest path distance, the more discrete the distribution between the feature pixel point and the reference feature point tends to be. When the preset number is 1, a feature pixel point whose connection region contains no more than the preset number of pixel points is usually an isolated pixel point of its category, i.e. its surroundings are pixel points of a category different from its own, so a relatively large path dispersion can be assigned to such a feature pixel point.
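A sketch of the path dispersion for one connection region follows. It uses breadth-first search so that the value returned for each pixel is the minimum number of 4-neighbor steps to the reference feature point (matching the distance of 3 in fig. 4); the BFS choice and the names are illustrative:

```python
from collections import deque

def path_dispersion(region: list[tuple[int, int]],
                    reference: tuple[int, int],
                    second_preset_factor: float) -> dict[tuple[int, int], float]:
    """Shortest 4-neighbor path length inside the region to the reference point."""
    if len(region) <= 1:                       # isolated pixel: preset number assumed 1
        return {p: second_preset_factor for p in region}
    cells = set(region)
    dist = {reference: 0}
    queue = deque([reference])
    while queue:                               # breadth-first search over region cells
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (y + dy, x + dx)
            if nxt in cells and nxt not in dist:
                dist[nxt] = dist[(y, x)] + 1
                queue.append(nxt)
    return dist
```

Because each connection region is 4-connected by construction, the search reaches every pixel of the region.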
And sixthly, determining a first discrete index corresponding to the connection region according to the difference discrete and the path discrete corresponding to each characteristic pixel point in each connection region.
Wherein the difference dispersion and the path dispersion may both be positively correlated with the first dispersion indicator.
And seventh, determining a second discrete index corresponding to the connection region according to the reference feature point corresponding to each connection region.
For example, determining the second discrete index corresponding to the connection region according to the reference feature point corresponding to each connection region may include the following substeps:
And a first sub-step of determining the Euclidean distance between the center position of the preset window and the vertex position of the preset window as a first reference distance.
And a second sub-step of determining the Euclidean distance between the reference feature point corresponding to the connection region and the central position of the feature sub-window to which the connection region belongs as a second reference distance corresponding to the connection region.
For example, the euclidean distance between the reference feature point corresponding to a certain connection region and the center position of the feature sub-window to which the connection region belongs may be determined as the second reference distance corresponding to the connection region.
And a third sub-step of determining a ratio of the second reference distance corresponding to the connection region to the first reference distance as a second discrete index corresponding to the connection region.
Eighth, determining a third discrete index corresponding to each connection area according to the first discrete index and the second discrete index corresponding to each connection area.
Wherein, the first discrete index and the second discrete index can be positively correlated with the third discrete index.
And a ninth step of determining a target discrete index corresponding to the feature sub-window according to a third discrete index corresponding to each connection region in the connection region set corresponding to the feature sub-window.
Wherein the third discrete index may be positively correlated with the target discrete index.
For example, taking the nose region to be detected as the region to be subdivided, with the pixel points to be subdivided being the pixel points in the nose region to be detected, the target discrete index corresponding to a feature sub-window may be determined as:

$$S_{i,j}=\sum_{a=1}^{A_{i,j}} P_{i,j,a}\cdot Q_{i,j,a},\qquad P_{i,j,a}=\frac{1}{n_{i,j,a}}\sum_{b=1}^{n_{i,j,a}} L_{i,j,a,b}\cdot X_{i,j,a,b},\qquad Q_{i,j,a}=\frac{D_{i,j,a}}{D_{0}}$$

wherein $S_{i,j}$ is the target discrete index corresponding to the j-th feature sub-window in the feature sub-window set corresponding to the i-th pixel point in the nose region to be detected. $P_{i,j,a}$ is the first discrete index corresponding to the a-th connection region in the connection region set corresponding to the j-th feature sub-window. $n_{i,j,a}$ is the number of feature pixel points in the a-th connection region. $L_{i,j,a,b}$ is the path dispersion corresponding to the b-th feature pixel point in the a-th connection region. $X_{i,j,a,b}$ is the difference dispersion corresponding to the b-th feature pixel point in the a-th connection region. $L_{i,j,a,b}$ and $X_{i,j,a,b}$ are both positively correlated with $P_{i,j,a}$. $Q_{i,j,a}$ is the second discrete index corresponding to the a-th connection region. $D_{0}$ is the Euclidean distance between the center position of the preset window and a vertex position of the preset window, i.e. the first reference distance. $D_{i,j,a}$ is the second reference distance corresponding to the a-th connection region, i.e. the Euclidean distance between the reference feature point corresponding to the a-th connection region and the center position of the feature sub-window to which it belongs. $A_{i,j}$ is the number of connection regions in the connection region set corresponding to the j-th feature sub-window. $P_{i,j,a}$ and $Q_{i,j,a}$ are both positively correlated with $S_{i,j}$. i is the serial number of the pixel point in the nose region to be detected; j is the serial number of the feature sub-window in the feature sub-window set corresponding to the i-th pixel point; a is the serial number of the connection region in the connection region set corresponding to the j-th feature sub-window; b is the serial number of the feature pixel point in the a-th connection region.

When $P_{i,j,a}$ is larger, the differences between the pixel points within the a-th connection region are larger and the shortest path distances between those pixel points and the reference feature point are longer, so the distribution of the pixel points within the a-th connection region tends to be more discrete. When $Q_{i,j,a}$ is larger, the a-th connection region lies farther from the i-th pixel point in the nose region to be detected, so the a-th connection region and the i-th pixel point are relatively more discrete. Thus, when $S_{i,j}$ is larger, the pixel point distribution within the j-th feature sub-window is more discrete and the connection regions lie farther from the i-th pixel point in the nose region to be detected, so the i-th pixel point is less likely to be a cluster center and needs a larger weight.
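Combining the pieces, below is a sketch of the target discrete index of one feature sub-window under the reconstruction given above (sum over regions of the mean path-times-difference dispersion, scaled by the distance ratio). `difference_dispersion` and `path_dispersion` are the helpers sketched earlier, and all names and the exact combination are illustrative readings of the stated positive correlations:

```python
import math

def target_discrete_index(fusion, regions, window_center, first_reference_distance,
                          first_preset_factor, second_preset_factor):
    """Target discrete index of one feature sub-window (one consistent reading)."""
    total = 0.0
    for region in regions:
        # Reference feature point: the region pixel closest to the window centre.
        reference = min(region, key=lambda p: math.dist(p, window_center))
        diff = difference_dispersion(fusion, [region], first_preset_factor)
        path = path_dispersion(region, reference, second_preset_factor)
        first_index = sum(path[p] * diff[p] for p in region) / len(region)
        # Second discrete index: second reference distance over first reference distance.
        second_index = math.dist(reference, window_center) / first_reference_distance
        total += first_index * second_index
    return total
```

For a 7×7 preset window, `window_center` would be (3, 3) and `first_reference_distance` the centre-to-vertex distance `math.dist((3, 3), (0, 0))`.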
And S7, determining a target weight corresponding to the pixel points to be subdivided according to the target discrete index and the neighbor aggregation index corresponding to each characteristic sub-window in the characteristic sub-window set corresponding to each pixel point to be subdivided.
In some embodiments, the target weight corresponding to the pixel to be subdivided may be determined according to a target discrete index and a neighbor aggregation index corresponding to each feature sub-window in the feature sub-window set corresponding to each pixel to be subdivided.
It should be noted that comprehensively considering the target discrete index and the neighbor aggregation index corresponding to each feature sub-window in the feature sub-window set corresponding to each pixel point to be subdivided improves the accuracy of the target weight determined for each pixel point to be subdivided.
As an example, this step may include the steps of:
the first step, determining a first weight factor corresponding to each characteristic sub-window according to a target discrete index and a neighbor aggregation index corresponding to each characteristic sub-window.
Wherein the target discrete index may be positively correlated with the first weight factor. The neighbor aggregation indicator may be inversely related to the first weight factor.
And secondly, determining the ratio of the number of pixel points in each characteristic sub-window to the number of pixel points in the preset window as a second weight factor corresponding to the characteristic sub-window.
And thirdly, determining the reference weight corresponding to the pixel point to be subdivided according to the first weight factor and the second weight factor corresponding to each characteristic sub-window in the characteristic sub-window set corresponding to the pixel point to be subdivided.
Wherein the first weight factor may be positively correlated with the reference weight. The second weight factor may be inversely related to the reference weight.
And fourthly, linearly transforming the reference weight corresponding to the pixel point to be subdivided to obtain the target weight corresponding to the pixel point to be subdivided.
For example, taking the nose region to be detected as the region to be subdivided, with the pixel points to be subdivided being the pixel points in the nose region to be detected, the target weight corresponding to a pixel point in the nose region to be detected may be determined as:

$$w_{i}=p\cdot\mathrm{Norm}\!\left(W_{i}\right)+q,\qquad W_{i}=\frac{1}{M_{i}}\sum_{j=1}^{M_{i}} G_{i,j},\qquad G_{i,j}=\frac{S_{i,j}}{K_{i,j}\cdot V_{i,j}},\qquad V_{i,j}=\frac{N_{i,j}}{N}$$

wherein $w_{i}$ is the target weight corresponding to the i-th pixel point in the nose region to be detected. p is a preset slope; for example, p may be 1.5. q is a preset intercept; for example, q may be 1, which may make the value of $w_{i}$ fall in [1, 2.5], a common weight range when fuzzy C-means clustering is adopted. $\mathrm{Norm}(\cdot)$ is a standard-deviation-based normalization function that maps the reference weights into [0, 1]. $W_{i}$ is the reference weight corresponding to the i-th pixel point in the nose region to be detected. $M_{i}$ is the number of feature sub-windows in the feature sub-window set corresponding to the i-th pixel point. $G_{i,j}$ is the third weight factor corresponding to the j-th feature sub-window, combining the first weight factor $S_{i,j}/K_{i,j}$ and the second weight factor $V_{i,j}$. $N_{i,j}$ is the number of pixel points in the j-th feature sub-window. N is the number of pixel points within the preset window; for example, if the preset window is a 7×7 window, N may be 49. $V_{i,j}$ is the second weight factor corresponding to the j-th feature sub-window. $S_{i,j}$ is the target discrete index corresponding to the j-th feature sub-window. $K_{i,j}$ is the neighbor aggregation index corresponding to the j-th feature sub-window. i is the serial number of the pixel point in the nose region to be detected; j is the serial number of the feature sub-window in the feature sub-window set corresponding to the i-th pixel point.

When $S_{i,j}$ is larger, the pixel point distribution within the j-th feature sub-window is more discrete and the connection regions lie farther from the i-th pixel point in the nose region to be detected, so the i-th pixel point is less likely to be a cluster center and needs a larger weight. When $K_{i,j}$ is larger, the pixel points in the j-th feature sub-window are closer to the i-th pixel point and more densely distributed, so the i-th pixel point is more likely to be a cluster center and needs a smaller weight. When $N_{i,j}/N$ is larger, the j-th feature sub-window contains more pixel points, which are then relatively concentrated, so the i-th pixel point is again more likely to be a cluster center and needs a smaller weight. Thus, when $S_{i,j}$ is larger and $K_{i,j}$ and $N_{i,j}$ are smaller, the pixel point positions in each feature sub-window corresponding to the i-th pixel point are more dispersed and the fusion value differences are larger; the i-th pixel point is then less likely to be a cluster center, its target weight needs to be set larger, and iteration needs to continue to find an optimal cluster center. When $S_{i,j}$ is smaller and $K_{i,j}$ and $N_{i,j}$ are larger, the pixel point positions are more concentrated and the fusion value differences smaller; the i-th pixel point is then more likely to be a cluster center and its target weight needs to be set smaller, achieving rapid convergence. The value range of the target weight can thus be set to a preset value range through the slope p and intercept q.
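A sketch of these four steps, assuming the per-sub-window indexes have already been computed; the patent's standard-deviation normalization is replaced here by min–max scaling, a named substitution that likewise maps the reference weights into [0, 1] so the linear transform lands in [q, q + p] = [1, 2.5]:

```python
def target_weights(discrete, aggregation, counts, window_pixels, p=1.5, q=1.0):
    """Turn per-sub-window statistics into per-pixel clustering weights.

    discrete[i][j], aggregation[i][j] and counts[i][j] are the target discrete
    index, neighbour aggregation index and pixel count of the j-th feature
    sub-window of the i-th pixel; window_pixels is N (49 for a 7x7 window).
    Assumes the aggregation indexes and counts are positive.
    """
    reference = []
    for s_i, k_i, n_i in zip(discrete, aggregation, counts):
        # First weight factor s/k grows with dispersion and shrinks with
        # aggregation; dividing by the second factor n/N penalises sub-windows
        # that already cover most of the preset window.
        third = [(s / k) / (n / window_pixels) for s, k, n in zip(s_i, k_i, n_i)]
        reference.append(sum(third) / len(third))
    lo, hi = min(reference), max(reference)
    span = (hi - lo) or 1.0                      # guard: all reference weights equal
    return [p * (w - lo) / span + q for w in reference]
```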
And S8, performing self-adaptive fuzzy C-means clustering segmentation on the region to be segmented according to the target weight corresponding to each pixel point to be segmented in the region to be segmented to obtain a target segmentation region set.
In some embodiments, the adaptive fuzzy C-means clustering segmentation may be performed on the to-be-segmented area according to the target weights corresponding to the to-be-segmented pixel points in the to-be-segmented area, so as to obtain a target segmented area set.
The target segmentation regions in the target segmentation region set may be the regions where the clusters obtained after adaptive fuzzy C-means clustering are located. Adaptive fuzzy C-means clustering segmentation may be image segmentation realized through adaptive fuzzy C-means clustering, i.e. a modified fuzzy C-means clustering whose main modifications are: first, the preset weight participating in clustering is replaced by the target weight corresponding to each pixel point, realizing self-adaptation of the weight and achieving rapid convergence; second, the target weight is quantified on the basis of multiple factors and is therefore more objective and accurate than a manually preset weight.
It should be noted that performing adaptive fuzzy C-means clustering segmentation on the region to be subdivided based on the target weight corresponding to each pixel point to be subdivided realizes fine segmentation of the region to be subdivided. Secondly, the invention quantifies the target weight corresponding to each pixel point to be subdivided; compared with adopting the same weight for the whole image, adaptively setting a target weight for each pixel point to be subdivided improves the clustering convergence rate relative to existing fuzzy C-means clustering, thereby improving image segmentation efficiency.
As an example, adaptive fuzzy C-means clustering may be performed on the region to be subdivided according to a preset number of clusters, the membership degree of each pixel point to be subdivided to each category of pixels, and the target weight corresponding to each pixel point to be subdivided; the region where each cluster obtained after clustering is located is determined as a target segmentation region, yielding the target segmentation region set. The membership degree of each pixel point to be subdivided to each category may be initialized to a random number in [0, 1]. The preset number of clusters may be 7.
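One way to realize the adaptation is to let the per-pixel target weight take the place of the single global fuzzifier m of classical fuzzy C-means. The patent does not spell out its objective function, so the following is a sketch under that interpretation, clustering the 1-D fusion values of the pixels to be subdivided:

```python
import numpy as np

def adaptive_fcm(values, weights, k=7, iters=100, eps=1e-5, seed=0):
    """Fuzzy C-means on 1-D fusion values with a per-pixel fuzzifier m_i."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float)[:, None]
    # Fuzzifier must exceed 1; clamping is a numerical safeguard, since the
    # target weights from step S7 are expected to lie in [1, 2.5].
    m = np.maximum(np.asarray(weights, dtype=float), 1.1)[:, None]
    u = rng.random((len(x), k))
    u /= u.sum(axis=1, keepdims=True)             # random memberships in [0, 1]
    for _ in range(iters):
        um = u ** m                               # per-pixel exponent replaces global m
        centers = (um * x).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(x - centers[None, :]) + 1e-12
        new_u = dist ** (-2.0 / (m - 1.0))        # standard FCM membership update
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < eps:
            u = new_u
            break
        u = new_u
    return u.argmax(axis=1), centers
```

A larger per-pixel weight keeps that pixel's memberships fuzzier, echoing the idea that pixels unlikely to be cluster centers should keep being adjusted, while weights near the lower bound harden memberships for quick convergence.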
Optionally, if the region to be subdivided is the nose region to be detected and adaptive fuzzy C-means clustering segmentation divides it into 7 target segmentation regions, the 7 target segmentation regions may be: the nasal septum region, the left nasal cavity region, the right nasal cavity region, the left lower nasal cavity region, the right lower nasal cavity region, the left middle nasal cavity region, and the right middle nasal cavity region. According to the gray value characteristics of the 7 target segmentation regions, the 2 black target segmentation regions can be screened out (these are often the left and right nasal cavity regions), and according to the shape characteristics of the nasal septum, the target segmentation region whose minimum circumscribed rectangle has the largest aspect ratio among the remaining 5 regions can be taken as the nasal septum region. An RBF (Radial Basis Function) kernel can then be used as the kernel function of an SVM (Support Vector Machine) to transform the low-dimensional, inseparable feature data into a high-dimensional space, and the support vector machine can be used to realize optimal classification of the nasal septum type. The marked classification results can be uploaded to an auxiliary diagnosis system, so that a diagnostician can refer to them and, combined with medical experience, better understand the patient's condition; this reduces medical risk, improves the doctor's working efficiency, and helps the doctor make more accurate judgments.
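For the septum-type classification step, a minimal scikit-learn sketch; the feature vectors and labels below are random stand-ins for the septum shape features and annotated septum types the text refers to, and scikit-learn itself is an assumed dependency:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-ins: 40 samples of 5 septum shape features with 3 types.
train_features = np.random.rand(40, 5)
train_labels = np.random.randint(0, 3, size=40)

# The RBF kernel implicitly lifts the low-dimensional, linearly inseparable
# features into a high-dimensional space where a separating boundary exists.
classifier = SVC(kernel="rbf")
classifier.fit(train_features, train_labels)
septum_type = classifier.predict(np.random.rand(1, 5))  # classify a new sample
```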
Based on the same inventive concept as the above-described method embodiments, the present invention provides an otorhinolaryngological examination image segmentation system comprising a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of an otorhinolaryngological examination image segmentation method.
In conclusion, the region to be subdivided is first segmented from the acquired target otorhinolaryngological examination image, which facilitates its subsequent fine segmentation. Then, because CT values and gray values often reflect the densities of different tissues or organs, the CT value and gray value corresponding to each pixel point to be subdivided are comprehensively considered, and the determined fusion value can characterize the densities of different tissues or organs and facilitate subsequent segmentation of different tissue areas within the region to be subdivided. Next, comprehensively considering the fusion value corresponding to each pixel point to be subdivided improves the accuracy of determining the first classification threshold and the second classification threshold. The pixel points in the preset window corresponding to each pixel point to be subdivided are then divided into three categories through the first classification threshold and the second classification threshold, so that each feature sub-window in the feature sub-window set corresponding to each pixel point to be subdivided can be composed of a different category of pixel points. Then, comprehensively considering the target discrete index and the neighbor aggregation index corresponding to each feature sub-window improves the accuracy of the target weight determined for each pixel point to be subdivided. Finally, based on the target weight corresponding to each pixel point to be subdivided, adaptive fuzzy C-means clustering segmentation is performed on the region to be subdivided, realizing fine segmentation of the region to be subdivided. Moreover, the invention quantifies the target weight corresponding to each pixel point to be subdivided; compared with adopting the same weight for the whole image, adaptively setting a target weight for each pixel point to be subdivided improves the clustering convergence rate relative to existing fuzzy C-means clustering, thereby improving image segmentation efficiency.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention and are intended to be included within the scope of the invention.

Claims (7)

1. An otorhinolaryngological examination image segmentation method, characterized by comprising the steps of:
acquiring a target otorhinolaryngological examination image, and dividing a region to be subdivided from the target otorhinolaryngological examination image;
determining a fusion value corresponding to each pixel point to be subdivided according to the CT value and the gray value corresponding to each pixel point to be subdivided in the region to be subdivided;
determining a first classification threshold and a second classification threshold according to fusion values corresponding to each pixel point to be subdivided in the region to be subdivided;
decomposing a preset window corresponding to each pixel point to be subdivided according to the first classification threshold and the second classification threshold to obtain a characteristic sub-window set corresponding to the pixel point to be subdivided;
Performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set to obtain a neighbor aggregation index corresponding to the feature sub-window;
performing element discrete analysis processing on each characteristic sub-window to obtain a target discrete index corresponding to the characteristic sub-window;
determining a target weight corresponding to each pixel point to be subdivided according to a target discrete index and a neighbor aggregation index corresponding to each characteristic sub-window in a characteristic sub-window set corresponding to each pixel point to be subdivided;
according to the target weight corresponding to each pixel point to be subdivided in the region to be subdivided, performing self-adaptive fuzzy C-means clustering segmentation on the region to be subdivided to obtain a target segmentation region set;
the determining a first classification threshold and a second classification threshold according to the fusion values corresponding to the pixel points to be subdivided in the region to be subdivided comprises:
determining a fusion histogram corresponding to the region to be subdivided according to fusion values corresponding to each pixel point to be subdivided in the region to be subdivided;
screening the highest three wave crests from the wave crests of the fusion histogram to serve as target wave crests, and obtaining a target wave crest sequence;
Determining fusion values corresponding to the first two target peaks in the target peak sequence as two endpoints of a first interval;
determining fusion values corresponding to the last two target peaks in the target peak sequence as two endpoints of a second interval;
determining a first classification threshold value through a threshold value calculation method according to pixel points of which fusion values belong to a first interval in the region to be subdivided;
determining a second classification threshold value through a threshold value calculation method according to pixel points of which fusion values belong to a second interval in the region to be subdivided;
performing neighbor aggregation analysis processing on each feature sub-window in the feature sub-window set to obtain a neighbor aggregation index corresponding to the feature sub-window, including:
each pixel point in the characteristic sub-window is determined to be a characteristic pixel point, and a characteristic pixel point set corresponding to the characteristic sub-window is obtained;
determining the Euclidean distance between each characteristic pixel point in the characteristic pixel point set and the central position of the characteristic sub-window as a target distance index corresponding to the characteristic pixel point;
determining the number of pixel points in a preset neighborhood corresponding to each characteristic pixel point as the target number corresponding to the characteristic pixel point;
Determining a neighbor aggregation index corresponding to the characteristic sub-window according to target distance indexes and target quantity corresponding to each characteristic pixel point in the characteristic pixel point set, wherein the target distance indexes are in negative correlation with the neighbor aggregation index, and the target quantity is in positive correlation with the neighbor aggregation index;
performing element discrete analysis processing on each characteristic sub-window to obtain a target discrete index corresponding to the characteristic sub-window, wherein the method comprises the following steps:
each pixel point in the characteristic sub-window is determined to be a characteristic pixel point, and a characteristic pixel point set corresponding to the characteristic sub-window is obtained;
dividing the region where the feature pixel point set is located according to the position corresponding to the feature pixel point to obtain a connection region set corresponding to the feature sub-window;
performing differential discrete analysis processing on each characteristic pixel point in each connection region in the connection region set to obtain differential discrete corresponding to the characteristic pixel point;
determining a pixel point closest to the central position of a characteristic sub-window to which each connection region belongs in each connection region as a reference characteristic point corresponding to the connection region;
determining path dispersion corresponding to each characteristic pixel point according to a reference characteristic point corresponding to a connecting area to which the characteristic pixel point belongs;
Determining a first discrete index corresponding to each connection region according to the difference discrete and the path discrete corresponding to each characteristic pixel point in each connection region, wherein the difference discrete and the path discrete are positively correlated with the first discrete index;
determining a second discrete index corresponding to each connection region according to the reference feature point corresponding to each connection region;
determining a third discrete index corresponding to each connection region according to a first discrete index and a second discrete index corresponding to each connection region, wherein the first discrete index and the second discrete index are positively correlated with the third discrete index;
and determining a target discrete index corresponding to the characteristic sub-window according to a third discrete index corresponding to each connecting region in the connecting region set corresponding to the characteristic sub-window, wherein the third discrete index and the target discrete index are positively correlated.
2. The method for segmenting the otorhinolaryngological examination image according to claim 1, wherein the decomposing the preset window corresponding to each pixel to be segmented according to the first classification threshold and the second classification threshold to obtain the characteristic sub-window set corresponding to the pixel to be segmented comprises:
Deleting the pixel points with fusion values larger than or equal to the first classification threshold value in the preset window corresponding to the pixel points to be subdivided to obtain a first sub-window;
deleting the pixels with fusion values smaller than the first classification threshold or larger than the second classification threshold in the preset window corresponding to the pixels to be subdivided to obtain a second sub-window;
deleting the pixel points, of which the fusion values in the preset window are smaller than or equal to the second classification threshold value, corresponding to the pixel points to be subdivided to obtain a third sub-window;
and determining the first sub-window, the second sub-window and the third sub-window as characteristic sub-windows to obtain a characteristic sub-window set corresponding to the pixel points to be subdivided.
3. The method for segmenting the ear-nose-throat examination image according to claim 1, wherein the performing a differential discrete analysis process on each characteristic pixel point in each connection region in the connection region set to obtain a differential discrete corresponding to the characteristic pixel point comprises:
if the number of the pixel points in the connection area to which the characteristic pixel point belongs is larger than the preset number, determining the absolute value of the difference between the fusion value corresponding to the characteristic pixel point and the fusion representative value corresponding to the connection area to which the characteristic pixel point belongs as the difference dispersion corresponding to the characteristic pixel point, wherein the fusion representative value corresponding to a connection area is the average value of the fusion values corresponding to all the characteristic pixel points in the connection area;
If the number of the pixel points in the connecting area to which the characteristic pixel points belong is smaller than or equal to the preset number, determining a first preset factor as the difference dispersion corresponding to the characteristic pixel points.
4. The method for segmenting an otorhinolaryngological examination image according to claim 1, wherein determining the path dispersion corresponding to each feature pixel point according to the reference feature point corresponding to the connection region to which the feature pixel point belongs comprises:
if the number of the pixel points in the connecting area to which the characteristic pixel points belong is larger than the preset number, determining the shortest path distance between the characteristic pixel points and the reference characteristic points as the path dispersion corresponding to the characteristic pixel points;
if the number of the pixel points in the connecting area to which the characteristic pixel points belong is smaller than or equal to the preset number, determining a second preset factor as the path dispersion corresponding to the characteristic pixel points.
5. The method for segmenting an otorhinolaryngological examination image as claimed in claim 1, wherein said determining a second discrete index corresponding to each connection region from the reference feature point corresponding to said connection region comprises:
determining the Euclidean distance between the center position of the preset window and the vertex position of the preset window as a first reference distance;
Determining the Euclidean distance between the reference feature point corresponding to the connection region and the central position of the feature sub-window to which the connection region belongs as a second reference distance corresponding to the connection region;
and determining the ratio of the second reference distance corresponding to the connection region to the first reference distance as a second discrete index corresponding to the connection region.
6. The method for segmenting the otorhinolaryngological examination image according to claim 1, wherein the determining the target weight corresponding to the pixel point to be subdivided according to the target discrete index and the neighbor aggregation index corresponding to each feature sub-window in the feature sub-window set corresponding to each pixel point to be subdivided comprises:
determining a first weight factor corresponding to each characteristic sub-window according to a target discrete index and a neighbor aggregation index corresponding to each characteristic sub-window, wherein the target discrete index and the first weight factor are positively correlated, and the neighbor aggregation index and the first weight factor are negatively correlated;
determining the ratio of the number of pixel points in each characteristic sub-window to the number of pixel points in the preset window as a second weight factor corresponding to the characteristic sub-window;
Determining a reference weight corresponding to the pixel point to be subdivided according to a first weight factor and a second weight factor corresponding to each characteristic sub-window in the characteristic sub-window set corresponding to the pixel point to be subdivided, wherein the first weight factor is positively correlated with the reference weight, and the second weight factor is negatively correlated with the reference weight;
and carrying out linear transformation on the reference weight corresponding to the pixel point to be subdivided to obtain a target weight corresponding to the pixel point to be subdivided.
7. An otorhinolaryngological examination image segmentation system comprising a processor and a memory, the processor for processing instructions stored in the memory to implement an otorhinolaryngological examination image segmentation method as claimed in any one of claims 1 to 6.