CN119478564A - Tumor partition recognition method and system based on multi-scale image analysis - Google Patents
Tumor partition recognition method and system based on multi-scale image analysis
- Publication number
- CN119478564A (application CN202510060008.5A)
- Authority
- CN
- China
- Prior art keywords
- boundary
- gray
- data
- tumor
- distribution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/52—Scale-space analysis, e.g. wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to the technical field of medical image analysis, in particular to a tumor partition identification method and system based on multi-scale image analysis. The method comprises the following steps: based on an input medical tumor partition image, defining the gray-value distribution range of pixel blocks, dividing pixel-block groups according to a fixed size, calculating neighborhood gray values of the pixel blocks, and determining a distribution change set by accumulating the gray difference values of the pixel-block groups to generate a multi-scale gray distribution map. By calculating the gray-value distribution and gray difference values of the pixel-block groups, the invention enhances the recognition of image detail in the tumor area and improves the analysis of tumor boundaries. It analyzes texture-feature differences in the boundary region and quantifies gray-gradient differences and connection characteristics, improving the precision of geometric-characteristic analysis. By combining spatial geometric expansion data with texture characteristics, it improves partition precision and image-analysis accuracy, effectively avoids false recognition in regions with blurred boundaries, and provides a finer basis for tumor heterogeneity research.
Description
Technical Field
The invention relates to the technical field of medical image analysis, in particular to a tumor partition identification method and system based on multi-scale image analysis.
Background
Medical image analysis involves using algorithms and vision-processing techniques to interpret image data generated by various medical imaging devices (such as CT, MRI, and ultrasound). Its central goal is to aid medical research by improving the interpretability of image data, and it is commonly used for image reconstruction, image enhancement, visual content extraction and classification, and automated labeling of image content. The technique not only increases the efficiency of extracting information from medical images but also improves the management and analysis of image data, supporting medical research and educational applications.
Tumor partition identification distinguishes and labels different regions in a tumor image by means of image analysis, regions that carry distinct biological characteristics and clinical significance. The method focuses on accurately separating the different parts of a tumor in a medical image, such as the tumor core, the proliferation area, and the necrosis area. This distinction is important for studying the growth pattern and developmental stage of a tumor, particularly in tumor-related biomedical research and education, where it can provide a deeper understanding of tumor heterogeneity. Its main purposes include supporting medical research and exploring the biological behavior and variability of tumors through accurate image analysis.
Prior-art tumor partition identification methods rely on coarse division of the tumor region and, to some extent, ignore subtle biological differences within it. Most existing methods are based on simple threshold segmentation and single-scale image processing, which limits the accuracy of tumor partitioning; oversimplified segmentation results are especially likely when processing blurred tumor boundaries or multiple heterogeneous regions. Lacking deep analysis of the complex relationships among gray-scale differences, texture features, and geometric characteristics, the prior art cannot effectively identify and distinguish detailed information such as proliferation and necrosis areas within tumors, a deficiency that directly affects research on tumor behavior and clinical decision support. For example, traditional methods may confuse the tumor core with the proliferation area, leading to misjudgment of the tumor's growth pattern. The prior art also struggles to make full use of the spatial information in the image, so the calibration of boundaries and partitions is insufficiently accurate, limiting both in-depth analysis of tumor partitions and the breadth of tumor heterogeneity research.
Disclosure of Invention
The invention aims to solve the defects in the prior art, and provides a tumor partition identification method and system based on multi-scale image analysis.
In order to achieve the purpose, the invention adopts the following technical scheme that the tumor partition identification method based on multi-scale image analysis comprises the following steps:
s1, defining a gray value distribution range of pixel blocks based on an input medical tumor partition image, dividing pixel block groups according to fixed sizes, calculating neighborhood gray values of the pixel blocks, and determining a distribution change set through accumulation of gray difference values of the pixel block groups to generate a multi-scale gray distribution map;
S2, extracting a boundary region of a pixel block based on the multi-scale gray level distribution map, analyzing texture feature differences of the boundary region, selecting a region of a connection relation in a neighborhood, acquiring boundary distribution characteristic data, quantifying the connection relation and a gray level gradient difference value, and analyzing the boundary region and the connection characteristic to generate a boundary connection weight table;
s3, based on the boundary connection weight table, extracting geometric characteristics in a weight region, analyzing perimeter, area and edge continuity of the boundary, comparing geometric characteristic values to generate candidate geometric characteristic matrixes, taking the candidate geometric characteristic matrixes as centers, and analyzing spatial expansion characteristics of a neighborhood to obtain spatial geometric expansion data;
S4, analyzing the characteristic distribution of the boundary region based on the space geometric expansion data and combining the texture characteristics of the expansion region and the candidate region, calibrating the boundary line of each partition, extracting pixel blocks in the tumor partition, and carrying out characteristic statistics on the shape and the distribution of the pixel blocks to obtain a tumor region partition identification characteristic numerical value table.
As a further aspect of the present invention, the step of obtaining the multi-scale gray-scale distribution map specifically includes:
S111, defining a pixel block gray value distribution range in a medical tumor partition image based on the input medical tumor partition image, dividing the image into a plurality of pixel block groups according to a fixed size, wherein each pixel block consists of continuous pixel points, and obtaining an initial pixel block group distribution array;
S112, carrying out gray value analysis on each pixel block group in the initial pixel block group distribution array, calculating and accumulating gray difference values between adjacent pixel block groups, and adopting a formula:
S = Σ_{i=1}^{N} |g_i − ḡ| / (g_max − g_min);
Carrying out normalization processing and measuring gray-level change to obtain a gray-difference accumulation array;
Wherein g_i represents the gray value of the i-th pixel block, ḡ represents the neighborhood average gray value, g_max and g_min represent the maximum and minimum gray values respectively, S represents the normalized sum of gray-scale variation in the pixel-block group, and N represents the total number of pixel blocks;
S113, analyzing the distribution change among pixel block groups by utilizing the information recorded in the gray difference value accumulation array, and determining a distribution change set according to the statistical data of the gray differences to obtain a multi-scale gray distribution map.
As a further aspect of the present invention, the step of obtaining the boundary distribution characteristic data specifically includes:
S211, analyzing the multi-scale gray distribution map, extracting a boundary region according to the gray gradient change rate and contrast change of pixel blocks, comparing gray gradient differences of adjacent pixel blocks, screening a high gradient region through a gradient difference threshold value, and obtaining preliminary boundary region characteristic data;
s212, extracting neighborhood connection features from the preliminary boundary region characteristic data, and adopting the formula:
w_{pq} = 1 / (|g_p − g_q| + √((x_p − x_q)² + (y_p − y_q)²) + ε), q ∈ Ω;
Analyzing the connection strength of each pixel pair, setting a threshold according to the statistical distribution of the connection strengths, and generating a local connection region in the neighborhood;
Wherein g_p and g_q represent the gray values of pixels in the neighborhood, (x_p, y_p) and (x_q, y_q) represent the coordinates of the corresponding pixel points, Ω represents the set of neighborhood pixels, ε is a fine-tuning factor, and w_{pq} represents the connection weight between pixels;
s213, performing texture feature analysis by utilizing the local connection region in the neighborhood, comparing texture feature differences of the differentiated boundary region, and evaluating the integrity of the boundary region to obtain boundary distribution characteristic data.
As a further aspect of the present invention, the step of obtaining the boundary connection weight table specifically includes:
S221, dividing image pixels based on the boundary distribution characteristic data, extracting boundary areas and non-boundary areas in the image, performing pixel gray value classification, distinguishing the differentiated areas through a threshold value, and marking the boundary areas to obtain boundary area data;
s222, carrying out difference analysis on gray values in the boundary region by pixels based on the boundary region data, comparing gray differences of adjacent pixels, and quantizing the differences to obtain quantized gray gradient difference data;
S223, based on the quantized gray gradient difference data, carrying out boundary region connection relation analysis, quantizing the gray gradient difference of each connection by traversing the connection points, and distributing weight for each connection according to the gray gradient difference, so as to generate a boundary connection weight table.
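The quantization in S221 to S223 can be sketched in a few lines. The gray threshold, the number of quantization levels, and the rule that a smaller gradient difference yields a stronger connection weight are illustrative assumptions rather than the patent's exact parameters:

```python
# Sketch of S221-S223 under assumed parameters: classify pixels into
# boundary / non-boundary by a gray threshold, then weight each adjacent
# boundary pair by its quantized gray-gradient difference (a smaller
# difference maps to a stronger connection).
def boundary_connection_weights(pixels, boundary_threshold, levels=8, g_range=256):
    # pixels: {(x, y): gray}; boundary pixels meet or exceed the threshold
    boundary = {p for p, g in pixels.items() if g >= boundary_threshold}
    table = {}
    for (x, y) in boundary:
        for nb in ((x + 1, y), (x, y + 1)):        # right and down neighbors
            if nb in boundary:
                diff = abs(pixels[(x, y)] - pixels[nb])
                q = min(diff * levels // g_range, levels - 1)  # quantized gradient
                table[((x, y), nb)] = (levels - q) / levels    # weight in (0, 1]
    return table

pixels = {(0, 0): 200, (1, 0): 210, (0, 1): 90, (1, 1): 160}
print(boundary_connection_weights(pixels, boundary_threshold=150))
```

With the toy values above, the pair with gray difference 10 falls in the lowest quantization level and receives weight 1.0, while the pair with difference 50 is one level higher and receives 0.875.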
As a further aspect of the present invention, the step of obtaining the candidate geometric feature matrix specifically includes:
s311, based on the boundary connection weight table, extracting each boundary point in the region, recording the connection weight between adjacent regions by traversing the connection condition of each pixel, judging the connection strength, and sequencing the connection points to obtain perimeter data of the tumor partition boundary;
S312, measuring the real-time length of the boundary in each boundary area based on the perimeter data of the tumor partition boundary, identifying the boundary position, and obtaining boundary area data through pixel superposition analysis;
S313, analyzing the continuity of edges in each region based on the boundary area data, detecting the breaking points in each region, comparing the boundary point connection states, performing edge continuity analysis, and comparing the boundary perimeter data, the boundary area data and the edge continuity data to generate candidate geometric feature matrixes.
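As one concrete reading of the perimeter and area measurements in S311 to S313, a discrete approximation on a binary region mask might look as follows. This is a sketch under the assumption that area is counted in cells and perimeter in exposed cell edges, not the patent's exact procedure:

```python
# Sketch of the S31x geometric measurements on a binary region mask:
# area = number of region cells, perimeter = cell edges exposed to the
# background or the image border.
def region_geometry(mask):
    rows, cols = len(mask), len(mask[0])
    area = perimeter = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                area += 1
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < rows and 0 <= cc < cols) or not mask[rr][cc]:
                        perimeter += 1
    return perimeter, area

mask = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
print(region_geometry(mask))  # (8, 4): a 2x2 square region
```

A perimeter-to-area ratio derived from these two values is one simple way to compare the compactness of candidate regions, in the spirit of the morphological comparison described later in the system modules.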
As a further scheme of the invention, the acquisition steps of the space geometry expansion data specifically comprise:
S321, calculating a neighborhood average value of each geometric feature according to the candidate geometric feature matrix, screening a neighborhood range associated with the central feature, judging whether each neighborhood feature deviates from the central feature, and generating a candidate neighborhood geometric feature set;
s322, analyzing the space expansion data of each neighborhood feature based on the candidate neighborhood geometric feature set, analyzing the neighborhood feature expansion and the center feature, and adopting a formula:
E = (ω / n) Σ_{i=1}^{n} |f_i − f_c|;
calculating a spatial expansion characteristic coefficient to generate a spatial expansion coefficient distribution result;
Wherein E represents the spatial expansion characteristic coefficient, f_i is the i-th neighborhood feature value, f_c is the central feature value, ω is a weight parameter, and n is the number of neighborhood features;
s323, extracting a space expansion region by setting a threshold according to the space expansion coefficient distribution result, carrying out geometric characteristic aggregation on the region, and analyzing the global relation with the candidate geometric characteristics to obtain space geometric expansion data.
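A minimal sketch of the S322 computation, assuming the expansion coefficient is the mean weighted deviation of the neighborhood feature values from the central feature value; the weight parameter and the sample values are illustrative:

```python
# Sketch of the S322 expansion coefficient, assuming the form
# E = (w / n) * sum(|f_i - f_c|): the mean weighted deviation of the
# neighborhood feature values from the central one.
def expansion_coefficient(neighborhood, center, w=1.0):
    n = len(neighborhood)
    return w * sum(abs(f - center) for f in neighborhood) / n

print(expansion_coefficient([1.0, 1.5, 2.0, 3.5], 2.0))  # (1 + 0.5 + 0 + 1.5) / 4 = 0.75
```

A threshold on this coefficient, as in S323, would then separate neighborhoods that deviate strongly from the central feature from those that track it closely.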
As a further aspect of the present invention, the step of obtaining the tumor area partition identification feature value table specifically includes:
S411, analyzing texture characteristics of an expansion area and a candidate area based on the space geometric expansion data, extracting characteristic distribution of a boundary area, using the data for demarcating tumor boundary lines, and establishing a boundary line characteristic data set;
s412, extracting pixel blocks corresponding to tumor boundaries from the image data set by utilizing the boundary line characteristic data set, and carrying out statistical analysis on the shapes and the distribution of the pixel blocks, wherein the statistical analysis result comprises the brightness mean value, the brightness variance and the shape parameters of the regional pixel blocks, so as to obtain the shape and the distribution statistical data;
S413, capturing tumor shape and distribution data according to the shape and distribution statistical data, analyzing the geometric attribute and the position of each data point, extracting the characteristics of boundary definition, compactness and direction, and adopting the formula:
T = α·μ + β·σ² + s;
generating a tumor area partition identification characteristic value table;
Wherein T represents the tumor region identification characteristic value, μ is the luminance mean of the region pixel block, σ² is the luminance variance, s is a shape parameter, and α and β are adjustment factors determined based on data sensitivity analysis.
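A minimal sketch of the S413 feature computation, assuming a simple linear combination of the luminance mean, luminance variance, and shape parameter; the coefficients a and b stand in for the sensitivity-tuned adjustment factors and are hypothetical:

```python
# Sketch of the S413 feature value, assuming the combination
# T = a*mean + b*variance + shape; a, b, and the shape parameter are
# placeholders for the patent's sensitivity-tuned factors.
def region_feature_value(grays, shape_param, a=0.5, b=0.1):
    mean = sum(grays) / len(grays)
    var = sum((g - mean) ** 2 for g in grays) / len(grays)
    return a * mean + b * var + shape_param

print(region_feature_value([100, 110, 120, 130], 2.0))  # 0.5*115 + 0.1*125 + 2 ≈ 72.0
```

One such value per partition would populate a row of the identification characteristic value table.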
A tumor partition identification system based on multi-scale image analysis, used for executing the above tumor partition identification method based on multi-scale image analysis, the system comprising:
the gray distribution analysis module divides pixel block groups according to preset sizes based on the input medical tumor partition image, extracts gray values in each pixel block, counts and classifies pixel blocks in a differential gray range, performs gray distribution statistical analysis, and generates a multi-scale gray distribution map;
The boundary texture analysis module extracts a boundary region in the tumor partition image based on the multi-scale gray distribution map, identifies texture features in the boundary region, calculates and compares gray differences of adjacent pixels by using gray gradients, carries out quantization classification according to gradient values, and generates a boundary connection weight table;
the geometric feature acquisition module extracts geometric features in the boundary region based on the boundary connection weight table, including perimeter, area and edge connectivity, adopts perimeter-to-area ratio and edge curvature index to perform morphological analysis on the region, compares the geometric features of the region, and acquires space geometric expansion data by combining space expansion information;
the space geometric expansion feature analysis module is used for analyzing expansion proportion and distribution features of each region based on the space geometric expansion data, identifying space expansion features of the differential region and carrying out space expansion feature analysis to obtain region expansion proportion data;
The space partition demarcation module analyzes morphological changes and space distribution characteristics in the region based on the region expansion proportion data, identifies the space distribution mode of the differentiated region by combining morphological characteristics of the boundary, performs region refinement division according to the identification result, and generates a tumor region partition identification characteristic numerical table.
Compared with the prior art, the invention has the advantages and positive effects that:
By performing fine gray-value distribution calculation on the medical tumor partition image and accumulating the gray difference values of pixel-block groups, the invention not only enhances the recognition of image detail in the tumor area but also improves the boundary analysis of different tumor regions. It further analyzes texture-feature differences in the boundary region and, combined with neighborhood connection relations, quantifies gray-gradient differences and connection characteristics, making the geometric-characteristic analysis of the boundary region more accurate, particularly for boundary perimeter, area, and edge continuity, and providing rich data support for the subsequent construction of the geometric feature matrix. By combining spatial geometric expansion data with texture characteristics, the boundary lines of tumor partitions are accurately calibrated and the pixel blocks within each partition are statistically analyzed, which enhances partition precision in the tumor region, improves the fineness and accuracy of image analysis, effectively avoids the misidentification and missed judgment of blurred boundary regions that occur in traditional methods, and provides a finer basis for tumor heterogeneity research.
Drawings
FIG. 1 is a schematic workflow diagram of the present invention;
FIG. 2 is a flow chart of a multi-scale gray scale distribution map according to the present invention;
FIG. 3 is a flow chart of boundary distribution characteristic data according to the present invention;
FIG. 4 is a flow chart of the boundary connecting weights table of the present invention;
FIG. 5 is a flow chart of a candidate geometric feature matrix according to the present invention;
FIG. 6 is a flow chart of the spatial geometry expansion data in the present invention;
FIG. 7 is a flow chart of the numerical table of the identification characteristics of the tumor area partition in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Furthermore, in the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Example 1:
Referring to fig. 1, the invention provides a method for identifying tumor partition based on multi-scale image analysis, which comprises the following steps:
s1, defining a gray value distribution range of pixel blocks based on an input medical tumor partition image, dividing pixel block groups according to fixed sizes, calculating neighborhood gray values of the pixel blocks, and determining a distribution change set through accumulation of gray difference values of the pixel block groups to generate a multi-scale gray distribution map;
S2, extracting boundary areas of pixel blocks based on a multi-scale gray level distribution map, analyzing texture feature differences of the boundary areas, selecting areas of connection relations in adjacent areas, obtaining boundary distribution characteristic data, quantifying the connection relations and gray level gradient differences, and analyzing the boundary areas and the connection characteristics to generate a boundary connection weight table;
S3, extracting geometric characteristics in a weight region based on a boundary connection weight table, analyzing perimeter, area and edge continuity of the boundary, comparing geometric characteristic values to generate candidate geometric characteristic matrixes, taking the candidate geometric characteristic matrixes as centers, and analyzing spatial expansion characteristics of a neighborhood to obtain spatial geometric expansion data;
S4, analyzing the characteristic distribution of the boundary region based on the space geometric expansion data and combining the texture characteristics of the expansion region and the candidate region, calibrating the boundary line of each partition, extracting pixel blocks in the tumor partition, and carrying out characteristic statistics on the shape and the distribution of the pixel blocks to obtain a tumor region partition identification characteristic numerical table.
The multi-scale gray distribution map comprises pixel block gray difference values, pixel block group gray difference value accumulation and distribution change sets, boundary distribution characteristic data comprise boundary distribution density, boundary distribution form and boundary distribution rules, a boundary connection weight table comprises boundary region texture feature differences, connection region gray gradient difference values and boundary connection relations, a candidate geometric feature matrix comprises boundary perimeter, boundary area and edge continuity, space geometric expansion data comprise expansion region texture characteristics, candidate region geometric forms and boundary feature distribution, and a tumor region partition identification feature numerical table comprises pixel block shapes, pixel block distribution and boundary line features in partitions.
Referring to fig. 2, the steps for acquiring the multi-scale gray scale distribution map specifically include:
S111, defining a pixel block gray value distribution range in a medical tumor partition image based on the input medical tumor partition image, dividing the image into a plurality of pixel block groups according to a fixed size, wherein each pixel block consists of continuous pixel points, and obtaining an initial pixel block group distribution array;
In the analysis of a medical tumor partition image, the gray-value distribution range of each pixel block is first defined; the key of this process is to extract effective information from the original image. The definition of the distribution range is based on the statistical characteristics of the image, such as the maximum and minimum pixel intensities. On this basis, the image is divided according to a preset pixel-block size, each pixel block containing a fixed number of pixel points, which together form a pixel-block group. Statistical analysis of the pixel-block groups yields the initial pixel-block-group distribution array, which is the basis of subsequent analysis. By analyzing each element of the array one by one, the processing can be controlled precisely, ensuring that every operation is based on actual image data and providing a solid foundation for subsequent image processing and analysis.
S112, carrying out gray value analysis on each pixel block group in the initial pixel block group distribution array, calculating and accumulating gray difference values between adjacent pixel block groups, and adopting a formula:
S = Σ_{i=1}^{N} |g_i − ḡ| / (g_max − g_min);
Carrying out normalization processing and measuring gray-level change to obtain a gray-difference accumulation array;
Wherein g_i represents the gray value of the i-th pixel block, ḡ represents the neighborhood average gray value, g_max and g_min represent the maximum and minimum gray values respectively, S represents the normalized sum of gray-scale variation in the pixel-block group, and N represents the total number of pixel blocks;
This formula effectively measures the gray-level change between different pixel blocks by normalizing the difference between each block's gray value and the neighborhood average gray value, which is particularly important for analyzing the texture and structure information of the image; the normalization ensures that, regardless of the overall brightness of the original image, the calculated result reflects the actual gray-level change;
Suppose a pixel-block group contains 10 pixel blocks with gray values {200, 180, 220, 210, 230, 240, 260, 250, 240, 230}, a neighborhood average gray value of 225, and maximum and minimum gray values of 260 and 180 respectively. According to the formula, the absolute difference between each block's gray value and the average gray value is computed first and then divided by the gray range (260 − 180 = 80):
S = (25 + 45 + 5 + 15 + 5 + 15 + 35 + 25 + 15 + 5) / 80 = 190 / 80 = 2.375;
The normalized sum of the gray-level changes in this pixel-block group is therefore 2.375, indicating that the gray level of this region varies considerably relative to the average gray level and that it is a feature region in the image, such as a tumor region or another salient medical-image feature.
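Assuming the normalization takes the form S = Σ|g_i − ḡ| / (g_max − g_min), the worked example can be checked with a short script:

```python
# Sketch of the S112 normalization, assuming the reconstructed form
# S = sum(|g_i - mean|) / (g_max - g_min); block values are the example's.
def normalized_gray_variation(blocks, neighborhood_mean):
    g_max, g_min = max(blocks), min(blocks)
    return sum(abs(g - neighborhood_mean) for g in blocks) / (g_max - g_min)

blocks = [200, 180, 220, 210, 230, 240, 260, 250, 240, 230]
print(normalized_gray_variation(blocks, 225))  # 190 / 80 = 2.375
```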
S113, analyzing the distribution change among pixel block groups by utilizing the information recorded in the gray difference value accumulation array, and determining a distribution change set according to the statistical data of the gray difference to obtain a multi-scale gray distribution map;
The gray-level variation data recorded in the gray-difference accumulation array are then analyzed, with the array as the basis of the analysis. These data include the comparison between each pixel block's gray value and that of its neighborhood, through which heterogeneous areas in the image, areas related to pathological changes such as tumor growth regions, can be effectively identified. By aggregating the statistics of the gray difference values, a multi-scale gray-distribution analysis of the image is performed and a multi-scale gray distribution map is generated, which displays the gray distribution of the image at different scales and provides more information for physicians.
Referring to fig. 3, the steps for obtaining the boundary distribution characteristic data specifically include:
S211, analyzing a multi-scale gray distribution map, extracting a boundary region according to gray gradient change rate and contrast change of pixel blocks, comparing gray gradient differences of adjacent pixel blocks, screening a high gradient region through a gradient difference threshold value, and acquiring preliminary boundary region characteristic data;
When analyzing the multi-scale gray distribution map, the image data are first preprocessed, including noise reduction and contrast enhancement, so that boundaries between different pixel blocks can be identified more clearly. Calculating the gray gradient change rate of each pixel block accurately locates the edge regions of the image, which exhibit high contrast variation. The boundary region is then extracted by comparing the gray gradient differences between adjacent pixel blocks: a gray gradient difference threshold is set, and when the difference between adjacent pixel blocks exceeds the threshold, the pixels are identified as boundary pixels. This improves the accuracy of boundary detection, reduces false-positive boundary identifications, and yields the preliminary boundary region characteristic data.
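The thresholding step of S211 can be sketched as follows (a simplified illustration assuming a plain gray-value array; the function name and the use of horizontal/vertical first differences as the gradient are assumptions, not the patent's exact operator):

```python
import numpy as np

def preliminary_boundary_mask(gray, grad_threshold):
    """Mark pixels whose gray-gradient magnitude (absolute difference
    with the horizontal/vertical neighbour) exceeds a threshold."""
    gray = np.asarray(gray, dtype=float)
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))  # horizontal gradient
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))  # vertical gradient
    return np.maximum(gx, gy) > grad_threshold

img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200]])
mask = preliminary_boundary_mask(img, 50)  # True only along the 10/200 edge
```
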
S212, extracting neighborhood connection features from the preliminary boundary region characteristic data, and adopting the formula:
W(p,q) = |g_p - g_q| / ((x_p - x_q)^2 + (y_p - y_q)^2 + \varepsilon), \quad (p,q) \in N;
Analyzing the connection strength of each pixel pair, setting a threshold according to the statistical distribution of the connection strength, and generating a local connection region in the neighborhood;
Wherein, g_p and g_q represent the gray values of the pixels in the neighborhood, (x_p, y_p) and (x_q, y_q) represent the coordinates of the corresponding pixel points, N represents the set of neighborhood pixel pairs, \varepsilon is a fine-tuning factor that prevents division-by-zero errors, and W(p,q) represents the connection weight between pixels;
The benefit of this formula is that it weights pixel pairs by their actual spatial distance as well as their gray-value difference, thereby reflecting the neighborhood connection strength in the image more accurately. This is particularly important in boundary detection and texture analysis, as it helps distinguish pixel pairs that are adjacent in gray level but spatially uncorrelated;
Consider a pair of adjacent pixel points p and q with gray values g_p and g_q and coordinates (x_p, y_p) and (x_q, y_q). The gray difference and the sum of squared coordinate distances are computed, a small \varepsilon is taken to prevent a division-by-zero error, and substitution into the formula yields the connection weight W(p,q), indicating a strong connection between pixel p and pixel q;
The calculated connection weight W(p,q) is close to 20, a comparatively high connection strength, indicating that the pixel pair is strongly connected visually. This helps to select, when generating local connection regions in the neighborhood, those pixel pairs that are significantly related both spatially and in gray scale, thereby improving the accuracy of boundary detection in image processing.
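Under the hedged reading of the formula above (gray difference scaled by inverse squared distance, with \varepsilon guarding the denominator — a reconstruction, since the original equation image is lost), the weight can be computed as:

```python
def connection_weight(gp, gq, p, q, eps=1e-6):
    """Hedged reconstruction of the neighborhood connection weight:
    gray difference divided by squared distance plus a small eps."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return abs(gp - gq) / (d2 + eps)

# Hypothetical values: adjacent pixels one unit apart, gray difference 20.
w = connection_weight(120, 140, (3, 4), (3, 5))  # close to 20
```

A spatially distant pair with the same gray difference, e.g. `connection_weight(120, 140, (0, 0), (3, 4))`, receives a far smaller weight, which is the distance-discounting behavior the text describes.
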
S213, performing texture feature analysis by utilizing the local connection region in the neighborhood, comparing texture feature differences of the differentiated boundary region, and evaluating the integrity of the boundary region to obtain boundary distribution characteristic data;
the process involves analyzing a large amount of pixel data, including calculating the gray mean, variance and higher-order statistics of the pixels in each region. These results provide a scientific basis for determining the effective boundary regions in the image, ensure through detail-level texture difference analysis that the boundary information is captured accurately, and yield the boundary distribution characteristic data, which directly affect the success rate of image-processing tasks such as boundary detection and image segmentation.
Referring to fig. 4, the step of obtaining the boundary connection weight table specifically includes:
S221, dividing image pixels based on boundary distribution characteristic data, extracting boundary areas and non-boundary areas in the image, performing pixel gray value classification, distinguishing differentiated areas through a threshold value, and marking the boundary areas to obtain boundary area data;
A fixed-size partition strategy is adopted: the whole image is divided at equal intervals into several rectangular areas, each containing the same number of pixels. The pixel gray-value distribution within each area is calculated one by one and sorted by the standard deviation of the gray-value variation, which determines the initial extent of the boundary area. To extract the boundary and non-boundary areas of the image, the pixel points in each area are traversed, the gray-value difference between each pixel and its surrounding pixels is computed, and pixels whose difference exceeds a preset threshold are marked as boundary pixels. For pixel gray-value classification, the image's gray range is divided into several intervals, the number of pixels falling in each interval is counted, and the distribution proportions are tallied; the differentiated areas are then distinguished by thresholding. Finally, a marking matrix is generated as a binary image, with boundary pixels marked as 1 and all other pixels marked as 0, and further processing of this matrix yields the boundary area data.
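The neighbor-difference marking step can be sketched as a small routine (an illustrative sketch using 4-neighbours; names are assumptions):

```python
import numpy as np

def boundary_marking_matrix(gray, diff_threshold):
    """Binary marking matrix: 1 where a pixel's gray value differs from
    any 4-neighbour by more than the threshold, 0 elsewhere."""
    g = np.asarray(gray, dtype=float)
    mark = np.zeros_like(g, dtype=np.uint8)
    h, w = g.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and abs(g[y, x] - g[ny, nx]) > diff_threshold:
                    mark[y, x] = 1  # boundary pixel
                    break
    return mark

img = np.array([[10, 10, 10],
                [10, 10, 200],
                [10, 200, 200]])
m = boundary_marking_matrix(img, 50)
```
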
S222, carrying out difference analysis on gray values in a boundary region by pixels based on the boundary region data, comparing gray differences of adjacent pixels, and quantizing the differences to obtain quantized gray gradient difference data;
Each pixel point in every boundary area and its adjacent pixel points are selected in turn, the gray-value differences between them are calculated, and a data matrix of difference values is generated. When comparing the gray differences of adjacent pixels, pixel pairing is performed sequentially in the horizontal, vertical and diagonal directions within a set step range, each pass producing a group of gray difference values. The distribution characteristics of all paired differences are then tallied, and the statistics are compared against the in-region means and standard deviations. When quantizing the differences, they are classified according to their actual value ranges to generate the quantized gray gradient difference data, and each gradient difference is assigned a specific weight through a mapping relation for subsequent analysis.
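The directional pairing and its distribution statistics can be sketched as follows (a simplified illustration with step 1; the function name is an assumption):

```python
import numpy as np

def paired_gray_differences(gray, step=1):
    """Absolute gray differences for horizontal, vertical and diagonal
    pixel pairs at the given step, plus simple distribution statistics."""
    g = np.asarray(gray, dtype=float)
    diffs = np.concatenate([
        np.abs(g[:, step:] - g[:, :-step]).ravel(),          # horizontal pairs
        np.abs(g[step:, :] - g[:-step, :]).ravel(),          # vertical pairs
        np.abs(g[step:, step:] - g[:-step, :-step]).ravel(), # diagonal pairs
    ])
    return diffs, diffs.mean(), diffs.std()

g = np.array([[0, 10], [10, 20]])
diffs, mean, std = paired_gray_differences(g)
```
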
S223, carrying out boundary region connection relation analysis based on the quantized gray gradient difference data, quantizing the gray gradient difference of each connection by traversing the connection points, and distributing weight for each connection according to the gray gradient difference to generate a boundary connection weight table;
A connection matrix is generated according to the geometric adjacency of the boundary-area pixel points. The connection points are traversed one by one, the pixel gray values associated with each connection point are tallied, and the gray difference between the pixels at the two ends of each connection is calculated. When the gray difference of each connection point is quantized, a graded weight value is generated according to the quantization rule for gray gradient differences. While traversing all connection points, the gray differences of adjacent connection points are calculated recursively, layer by layer, according to their geometric positions, and a weight is assigned to each connection. The boundary connection weight table is then generated: using the assigned weight values, a comprehensive table comprising connection-point coordinates, connection weights and gray gradient differences is constructed, providing data support for subsequent image boundary optimization and analysis.
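A minimal sketch of such a weight table (the graded-weight rule via `bins` and all names are illustrative assumptions; each row holds the two endpoint coordinates, the gray difference, and the graded weight):

```python
def boundary_connection_weight_table(points, gray, bins=(10, 30)):
    """For each pair of geometrically adjacent boundary points, quantize
    the gray difference into a graded weight (1 .. len(bins)+1)."""
    table = []
    pts = set(points)
    for (y, x) in points:
        for ny, nx in ((y, x + 1), (y + 1, x)):  # right/down only: no duplicate pairs
            if (ny, nx) in pts:
                diff = abs(gray[(y, x)] - gray[(ny, nx)])
                weight = 1 + sum(diff > b for b in bins)  # graded by thresholds
                table.append(((y, x), (ny, nx), diff, weight))
    return table

gray = {(0, 0): 100, (0, 1): 115, (1, 0): 160}
table = boundary_connection_weight_table([(0, 0), (0, 1), (1, 0)], gray)
```
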
Referring to fig. 5, the step of obtaining the candidate geometric feature matrix specifically includes:
S311, extracting each boundary point in the region based on the boundary connection weight table, recording the connection weight between adjacent regions by traversing the connection condition of each pixel, judging the connection strength, and sequencing the connection points to obtain perimeter data of the tumor partition boundary;
When extracting each boundary point of a tumor partition, the boundary connection weight table is scanned point by point and combined with the partition boundary characteristics of the tumor image to locate the specific pixel position of each boundary point and record the connection weights between it and adjacent partitions. When traversing the connection condition of each pixel, the connection relations of the boundary points within the tumor partition are represented by a two-dimensional adjacency matrix; the differences in connection weight between tumor partitions are tallied, the boundary connection points with high weights are extracted, and the areas of difference between partitions are analyzed with emphasis. When recording the connection weights between adjacent areas, the pixel connections crossing tumor partitions are calculated, attention is focused on areas with significant gray-value change, and key-point data reflecting the partition boundary characteristics are extracted. Connection strength is then judged and classified by weight value, marking the positions of strong and weak connection points. After the connection points are sorted, the shape characteristics of the tumor boundary are analyzed, finally yielding the perimeter data of the tumor partition boundary and laying a foundation for tumor morphological characteristic analysis.
S312, based on perimeter data of tumor partition boundaries, measuring real-time length of the boundaries in each boundary area, identifying boundary positions, and acquiring boundary area data through pixel superposition analysis;
When measuring the real-time length of the boundary within each tumor partition, the geometric path of the partition boundary is computed pixel by pixel and combined with the pixel scale of the tumor image to measure the total boundary length accurately. When identifying the boundary position, the boundary points of the tumor partition are mapped into a two-dimensional image coordinate system, their trajectory line is generated, and the variation of the boundary position is determined by calculating coordinate differences. When obtaining the boundary area data through pixel superposition analysis, all pixel points inside each tumor partition and on its boundary are counted, the partition area is computed from the pixel size, and the accuracy of the area data is verified against the real scale of the image. During the analysis, the gray differences between the internal tumor tissue and the surrounding area are further distinguished, so that the boundary area data accurately reflect the morphological characteristics of the tumor partitions and support subsequent lesion analysis.
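Perimeter and area from a binary partition mask can be sketched as follows (one common pixel-edge approximation, assumed here for illustration: area is the pixel count, perimeter counts exposed 4-neighbour edges):

```python
import numpy as np

def perimeter_and_area(mask):
    """Area = number of region pixels; perimeter = number of region-pixel
    edges whose 4-neighbour lies outside the region."""
    m = np.asarray(mask, dtype=bool)
    padded = np.pad(m, 1, constant_values=False)  # background border
    perimeter = 0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        shifted = np.roll(np.roll(padded, dy, axis=0), dx, axis=1)
        perimeter += int((padded & ~shifted).sum())  # exposed edges in this direction
    return perimeter, int(m.sum())

p, a = perimeter_and_area([[1, 1], [1, 1]])  # 2x2 square region
```
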
S313, analyzing the continuity of edges in each region based on the boundary area data, detecting the breaking points in each region, and comparing the boundary perimeter data, the boundary area data and the edge continuity data to generate candidate geometric feature matrixes by comparing the boundary point connection states;
The positions of break points along a tumor partition boundary are marked by inspecting, pixel by pixel, the connection condition of the partition boundary points, and the number and distribution characteristics of the break points are tallied. When detecting the break points within each tumor partition, the distances between adjacent boundary points are calculated to judge whether the gap between break points exceeds a specific threshold, and the coordinates of break areas exceeding the threshold are recorded. When analyzing tumor edge continuity by comparing boundary-point connection states, the break-point information of each boundary is analyzed together with the tumor partition connection state to quantify the integrity of the boundary. Finally, by comparing the boundary perimeter data, boundary area data and edge continuity data, candidate geometric feature matrices are generated, comprising partition boundary-point coordinates, connection weights, break-point positions and gray difference information, providing data support for further analysis of tumor lesion morphology and partition characteristics.
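The gap test can be sketched as a distance check over an ordered boundary trace (a sketch; assumes the boundary points are already ordered along the contour):

```python
import math

def break_points(ordered_boundary, gap_threshold):
    """Indices where the distance between consecutive boundary points
    exceeds the threshold, i.e. candidate break (discontinuity) points."""
    breaks = []
    for i in range(len(ordered_boundary) - 1):
        (y1, x1), (y2, x2) = ordered_boundary[i], ordered_boundary[i + 1]
        if math.hypot(y2 - y1, x2 - x1) > gap_threshold:
            breaks.append(i)
    return breaks

pts = [(0, 0), (0, 1), (0, 2), (0, 7), (0, 8)]  # one gap of width 5
b = break_points(pts, 1.5)
```
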
Referring to fig. 6, the steps for acquiring the space geometry expansion data specifically include:
S321, calculating a neighborhood average value of each geometric feature according to the candidate geometric feature matrix, screening a neighborhood range associated with the central feature, judging whether each neighborhood feature deviates from the central feature, and generating a candidate neighborhood geometric feature set;
When analyzing the candidate geometric feature matrix, the focus is on the variation between the central feature and its neighborhood. The position of the central feature is determined and marked through the matrix coordinate system; the neighborhood range is then defined via a preset spatial radius, which determines which neighborhood features are included. Each neighborhood feature is collected in turn and compared with the central feature: the relation between the two is evaluated by computing the absolute difference between each neighborhood feature and the central feature, and all neighborhood feature values are summed and divided by the feature count to obtain the average neighborhood feature value, which is compared with the central feature value. This comparison judges which neighborhood features differ significantly from the center; significance is defined by a set threshold, and if the difference exceeds this threshold, the neighborhood feature is considered to deviate from the central feature and is marked and rejected in subsequent analysis. The remaining neighborhood features are saved as the candidate neighborhood geometric feature set for further analysis.
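The screening step can be sketched with scalar feature values (names and the scalar simplification are assumptions):

```python
def candidate_neighborhood_features(center, neighbors, deviation_threshold):
    """Keep neighbourhood features whose absolute difference from the
    central feature does not exceed the threshold; also return the mean."""
    mean = sum(neighbors) / len(neighbors)  # average neighbourhood feature value
    kept = [f for f in neighbors if abs(f - center) <= deviation_threshold]
    return kept, mean

# Hypothetical feature values: 30.0 deviates far from the center and is rejected.
kept, mean = candidate_neighborhood_features(10.0, [9.0, 11.0, 30.0, 10.5], 3.0)
```
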
S322, analyzing the space expansion data of each neighborhood feature based on the candidate neighborhood geometric feature set, analyzing the neighborhood feature expansion and the center feature, and adopting the formula:
S = \sum_{i=1}^{n} w_i |F_i - F_c| / \sum_{i=1}^{n} w_i;
calculating a spatial expansion characteristic coefficient to generate a spatial expansion coefficient distribution result;
Wherein, S represents the spatial expansion characteristic coefficient, F_i is the i-th neighborhood feature value, F_c is the central feature value, w_i is a weight parameter, and n is the number of neighborhood features;
The benefit of the formula is that the weight parameters w_i apply a weighted analysis to the degrees of influence of the different neighborhood features, avoiding error offsets caused by uneven distribution of the feature values while improving the accuracy of the spatial expansion characteristic calculation;
The central feature value F_c and the neighborhood feature values F_i are obtained from the candidate neighborhood geometric feature set;
the weight parameters w_i are determined, and the weighted product of each weight and the corresponding absolute difference, w_i |F_i - F_c|, is calculated;
these products are summed to obtain the numerator, with the weight sum as the denominator;
the expansion coefficient S is then calculated; the result shows that the degree of expansion association between the central feature and its neighborhood is 0.4615, which can be used further in salient-region analysis.
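The weighted-ratio computation above can be sketched directly (the numeric inputs below are hypothetical, chosen only to illustrate the arithmetic):

```python
def spatial_expansion_coefficient(center, features, weights):
    """Weighted mean absolute deviation of neighbourhood features from
    the central feature: sum(w_i * |F_i - F_c|) / sum(w_i)."""
    num = sum(w * abs(f - center) for f, w in zip(features, weights))
    return num / sum(weights)

# Hypothetical: F_c = 10, neighbours [12, 9, 10], weights [1, 2, 1].
s = spatial_expansion_coefficient(10.0, [12.0, 9.0, 10.0], [1.0, 2.0, 1.0])
```
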
S323, extracting a space expansion region by setting a threshold value according to a space expansion coefficient distribution result, performing geometric characteristic aggregation on the region, and analyzing a global relation with candidate geometric characteristics to obtain space geometric expansion data;
A significance threshold is set to identify which regions exhibit significant spatial expansion: feature regions whose expansion coefficients are greater than or equal to the threshold are selected by comparison with this preset value. This involves iterating over the dataset, evaluating the expansion coefficient of each neighborhood feature and recording the feature regions that exceed the threshold; these regions are considered especially significant for understanding spatial geometric expansion. The significant feature regions are then grouped and aggregated according to their positions in space, a process involving both simple numerical comparison and analysis of the spatial relations of the features, such as their relative position to the central feature. By combining the aggregated data of the significant regions with the global geometric features, the overall geometric expansion characteristics are computed; this comprehensive analysis helps explain how the spatial features expand in a larger geometric context and yields the spatial geometric expansion data.
Referring to fig. 7, the steps for obtaining the tumor area partition identification feature value table specifically include:
s411, analyzing texture characteristics of an expansion area and a candidate area based on space geometric expansion data, extracting feature distribution of a boundary area, using the data for demarcating tumor boundary lines, and establishing a boundary line feature data set;
First, the expansion region and the candidate region are extracted from the high-resolution image and their texture characteristics are analyzed; in this process an image segmentation tool is invoked to mark the target regions, and a feature extraction module is used to obtain indexes such as texture parameters, gray means, gray variances and texture contrast within the regions. The texture parameters are then partitioned, and the texture characteristics of each partition are reduced to independent vector representations. By combining the geometric expansion characteristics and texture parameters of each region and comparing their similarity with a standard model, the boundary regions are identified; the feature distribution within the boundary regions is refined, and all partitions are marked as specific boundary-line data to support the accurate positioning of subsequent analysis, generating the boundary line feature data set.
S412, extracting pixel blocks corresponding to tumor boundaries from the image dataset by utilizing the boundary line feature dataset, and carrying out statistical analysis on the shapes and the distribution of the pixel blocks, wherein the statistical analysis result comprises brightness mean values, brightness variances and shape parameters of the regional pixel blocks, so as to obtain shape and distribution statistical data;
The corresponding pixel blocks are further extracted from the original image: the boundary-line data are invoked to cut the whole image spatially, and the pixels of each cut area are collected into a target object. Statistical analysis is performed on the target pixel blocks, calculating the luminance mean, luminance variance and shape parameters of each block; for example, a shape factor is fitted from the geometric boundary of a pixel block, and its aspect ratio and perimeter-to-area ratio are calculated to quantify its shape characteristics. An association matrix is established for all pixel-block data, combining the luminance and shape data into a unified feature set; together with the shape parameters of the pixel blocks these form composite features, yielding the shape and distribution statistical data and providing a data basis for subsequent feature extraction.
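The per-block statistics can be sketched as follows (a simplification assuming rectangular blocks, so the bounding box supplies the shape measures; names are illustrative):

```python
import numpy as np

def block_statistics(block):
    """Luminance mean/variance plus simple shape parameters
    (aspect ratio and perimeter-to-area ratio of the block)."""
    b = np.asarray(block, dtype=float)
    h, w = b.shape
    return {
        "mean": float(b.mean()),
        "variance": float(b.var()),
        "aspect_ratio": w / h,
        "perimeter_area_ratio": 2 * (h + w) / (h * w),
    }

stats = block_statistics([[10, 20], [30, 40]])
```
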
S413, capturing tumor shape and distribution data according to the shape and distribution statistical data, analyzing the geometric attribute and the position of each data point, extracting the boundary definition, the compactness degree and the direction characteristics, and adopting the formula:
T = \alpha \mu + \beta \sigma^2 / P;
generating a tumor area partition identification characteristic value table;
Wherein, T represents the numerical value of the tumor region identification characteristic, \mu is the luminance average of the region pixel block, \sigma^2 is the luminance variance, P is the shape parameter, and \alpha and \beta are adjustment coefficients determined by data sensitivity analysis;
the benefit of this formula is that it integrates the three characteristics of luminance mean, luminance variance and shape parameter; the flexibility afforded by the adjustment coefficients \alpha and \beta allows it to adapt to the feature-distribution requirements of different types of images and improves the comprehensiveness of the feature representation;
By statistical analysis, \mu is set from the extracted pixel-block luminance average data, \sigma^2 is the luminance variance obtained by global-map statistical calculation, P represents the shape parameter measuring the complexity of the pixel blocks, and the adjustment coefficients \alpha and \beta are determined by regional characteristic analysis;
The calculation process substitutes the statistics obtained above into the formula to compute the identification characteristic value;
the result shows that the calculated identification characteristic value of the tumor region is 124. This value, integrating luminance and shape characteristics, serves as the quantitative basis for tumor identification in subsequent partition diagnosis, and the tumor area partition identification characteristic value table is thereby obtained.
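Since the original equation image is lost, the functional form below is a hedged reconstruction (a weighted combination of the luminance mean and the variance-to-shape ratio), and the numeric inputs are hypothetical values chosen only so that the arithmetic reproduces an identification value of 124:

```python
def tumor_region_feature_value(mu, sigma2, shape, alpha, beta):
    """Hedged reconstruction: T = alpha*mu + beta*(sigma2/shape)."""
    return alpha * mu + beta * (sigma2 / shape)

# Hypothetical statistics: mean 110, variance 56, shape parameter 2,
# adjustment coefficients alpha = 1.0 and beta = 0.5.
t = tumor_region_feature_value(mu=110.0, sigma2=56.0, shape=2.0, alpha=1.0, beta=0.5)
```
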
The tumor partition identification system based on multi-scale image analysis is used for executing the above tumor partition identification method based on multi-scale image analysis, and the system comprises:
the gray distribution analysis module divides pixel block groups according to preset sizes based on the input medical tumor partition image, extracts gray values in each pixel block, counts and classifies pixel blocks in a differential gray range, performs gray distribution statistical analysis, and generates a multi-scale gray distribution map;
The boundary texture analysis module extracts a boundary region in the tumor partition image based on the multi-scale gray distribution map, identifies texture features in the boundary region, calculates and compares gray differences of adjacent pixels by using gray gradients, carries out quantization classification according to gradient values, and generates a boundary connection weight table;
The geometric feature acquisition module extracts geometric features in the boundary region based on the boundary connection weight table, including perimeter, area and edge connectivity, adopts perimeter-to-area ratio and edge curvature index to conduct morphological analysis on the region, compares the geometric features of the region, and acquires space geometric expansion data by combining space expansion information;
The space geometric expansion feature analysis module is used for analyzing the expansion proportion and distribution feature of each region based on the space geometric expansion data, identifying the space expansion feature of the differential region and carrying out space expansion feature analysis to obtain region expansion proportion data;
The space partition demarcation module analyzes morphological changes and space distribution characteristics in the region based on the region expansion proportion data, identifies the space distribution mode of the differentiated region by combining the morphological characteristics of the boundary, performs region refinement division according to the identification result, and generates a tumor region partition identification characteristic value table.
The present invention is not limited to the above embodiments. Any equivalent embodiment derived by changing or modifying the technical disclosure described above may be applied in other fields; any simple modification, equivalent change or adaptation of the above embodiments made in accordance with the technical substance of the present invention still falls within the scope of the technical disclosure.
Claims (8)
1. The tumor partition identification method based on the multi-scale image analysis is characterized by comprising the following steps of:
Defining a gray value distribution range of pixel blocks based on an input medical tumor partition image, dividing pixel block groups according to fixed sizes, calculating neighborhood gray values of the pixel blocks, and determining a distribution change set through accumulation of gray difference values of the pixel block groups to generate a multi-scale gray distribution map;
Extracting boundary areas of pixel blocks based on the multi-scale gray distribution map, analyzing texture feature differences of the boundary areas, selecting areas of connection relations in adjacent areas, acquiring boundary distribution characteristic data, quantifying the connection relations and gray gradient differences, and analyzing the boundary areas and the connection characteristics to generate a boundary connection weight table;
Based on the boundary connection weight table, extracting geometric characteristics in a weight region, analyzing perimeter, area and edge continuity of the boundary, comparing geometric characteristic values to generate candidate geometric feature matrixes, taking the candidate geometric feature matrixes as centers, and analyzing spatial expansion characteristics of a neighborhood to obtain spatial geometric expansion data;
Based on the space geometrical expansion data, the texture characteristics of the expansion area and the candidate area are combined, the characteristic distribution of the boundary area is analyzed, the boundary line of each partition is calibrated, the pixel blocks in the tumor partition are extracted, the characteristic statistics is carried out on the shape and the distribution of the pixel blocks, and the tumor area partition identification characteristic numerical table is obtained.
2. The method for identifying tumor regions based on multi-scale image analysis according to claim 1, wherein the step of obtaining the multi-scale gray-scale distribution map comprises the steps of:
Defining a pixel block gray value distribution range in the medical tumor partition image based on the input medical tumor partition image, dividing the image into a plurality of pixel block groups according to a fixed size, wherein each pixel block consists of continuous pixel points, and obtaining an initial pixel block group distribution array;
And carrying out gray value analysis on each pixel block group in the initial pixel block group distribution array, calculating and accumulating gray difference values between adjacent pixel block groups, and adopting a formula:
D = \sum_{i=1}^{N} |G_i - \bar{G}| / (G_{\max} - G_{\min});
Carrying out normalization processing and measuring gray level change to obtain a gray level difference value accumulation array;
Wherein, G_i represents the gray value of the i-th pixel block, \bar{G} represents the average gray value of the neighborhood, G_{\max} and G_{\min} represent the maximum and minimum gray values respectively, D represents the normalized sum of gray-scale variation in a pixel block group, and N represents the total number of pixel blocks;
and analyzing the distribution change among the pixel block groups by utilizing the information recorded in the gray difference value accumulation array, and determining a distribution change set according to the statistical data of the gray differences to obtain a multi-scale gray distribution map.
3. The method for identifying tumor partition based on multi-scale image analysis according to claim 2, wherein the step of obtaining the boundary distribution characteristic data specifically comprises the steps of:
Analyzing the multi-scale gray distribution map, extracting boundary areas according to gray gradient change rate and contrast change of pixel blocks, comparing gray gradient differences of adjacent pixel blocks, screening high gradient areas through gradient difference threshold values, and obtaining preliminary boundary area characteristic data;
Extracting neighborhood connection features from the preliminary boundary region characteristic data, and adopting the formula:
W(p,q) = |g_p - g_q| / ((x_p - x_q)^2 + (y_p - y_q)^2 + \varepsilon), \quad (p,q) \in N;
Analyzing the connection strength of each pixel pair, setting a threshold according to the statistical distribution of the connection strength, and generating a local connection region in the neighborhood;
Wherein, g_p and g_q represent the gray values of the pixels in the neighborhood, (x_p, y_p) and (x_q, y_q) represent the coordinates of the corresponding pixel points, N represents the set of neighborhood pixel pairs, \varepsilon is a fine-tuning factor that prevents division-by-zero errors, and W(p,q) represents the connection weight between pixels;
and carrying out texture feature analysis by utilizing the local connection region in the neighborhood, comparing texture feature differences of the differentiated boundary region, and evaluating the integrity of the boundary region to obtain boundary distribution characteristic data.
4. The method for identifying tumor partition based on multi-scale image analysis according to claim 3, wherein the step of obtaining the boundary connection weight table specifically comprises the steps of:
Based on the boundary distribution characteristic data, performing image pixel division, extracting boundary areas and non-boundary areas in an image, performing pixel gray value classification, distinguishing the differential areas through a threshold value, and performing boundary area marking to obtain boundary area data;
based on the boundary region data, carrying out difference analysis on gray values in the boundary region by the pixels, comparing gray differences of adjacent pixels, and quantizing the differences to obtain quantized gray gradient difference data;
and carrying out boundary region connection relation analysis based on the quantized gray gradient difference data, quantizing the gray gradient difference of each connection by traversing the connection points, and distributing weight for each connection according to the gray gradient difference to generate a boundary connection weight table.
5. The method for identifying tumor partitions based on multi-scale image analysis according to claim 4, wherein the step of obtaining the candidate geometric feature matrix specifically comprises:
Based on the boundary connection weight table, extracting each boundary point in the region, recording the connection weight between adjacent regions by traversing the connection condition of each pixel, judging the connection strength, and sequencing the connection points to obtain perimeter data of the tumor partition boundary;
Based on perimeter data of the tumor partition boundary, measuring the real-time length of the boundary in each boundary area, identifying the boundary position, and obtaining boundary area data through pixel superposition analysis;
And analyzing the continuity of the edges in each region based on the boundary area data, detecting the breaking points in each region, and comparing the boundary perimeter data, the boundary area data and the edge continuity data to generate candidate geometric feature matrixes by comparing the boundary point connection states.
6. The method for identifying tumor partitions based on multi-scale image analysis according to claim 5, wherein the step of obtaining the spatial geometric expansion data specifically comprises:
calculating a neighborhood average value of each geometric feature according to the candidate geometric feature matrix, screening the neighborhood range associated with the central feature, judging whether each neighborhood feature deviates from the central feature, and generating a candidate neighborhood geometric feature set;
based on the candidate neighborhood geometric feature set, analyzing the spatial expansion data of each neighborhood feature, comparing the expansion of each neighborhood feature against the central feature, and adopting the formula:
S = (1/n) · Σ_{i=1}^{n} w_i · |x_i − x_c|
calculating the spatial expansion characteristic coefficient to generate a spatial expansion coefficient distribution result;
wherein S represents the spatial expansion characteristic coefficient, x_i is the i-th neighborhood feature value, x_c is the central feature value, w_i is the weight parameter, and n is the number of neighborhood features;
and extracting spatial expansion regions by setting a threshold on the spatial expansion coefficient distribution result, aggregating the geometric features of these regions, and analyzing their global relation with the candidate geometric features to obtain spatial geometric expansion data.
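Assuming the spatial expansion characteristic coefficient takes a weighted mean-absolute-deviation form consistent with the terms listed in this claim (the original formula image is not reproduced in this text, so this exact form is an assumption), the calculation and thresholding step might look like:

```python
def spatial_expansion_coefficient(neighborhood, center, weights):
    """Assumed form: S = (1/n) * sum_i w_i * |x_i - x_c|, i.e. the weighted
    mean absolute deviation of neighborhood feature values from the center."""
    n = len(neighborhood)
    return sum(w * abs(x - center) for x, w in zip(neighborhood, weights)) / n

def extract_expansion_regions(coefficients, threshold):
    """Threshold the coefficient distribution to pick out expansion regions."""
    return [i for i, s in enumerate(coefficients) if s > threshold]

coeffs = [
    spatial_expansion_coefficient([4.0, 6.0, 5.0], 5.0, [1.0, 1.0, 1.0]),  # flat
    spatial_expansion_coefficient([1.0, 9.0, 2.0], 5.0, [1.0, 1.0, 1.0]),  # spread
]
regions = extract_expansion_regions(coeffs, threshold=1.0)
```

Features that sit close to their central value yield a small coefficient and are filtered out; widely spread neighborhoods pass the threshold and feed the geometric aggregation step.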
7. The method for identifying tumor partitions based on multi-scale image analysis according to claim 6, wherein the step of obtaining the tumor region partition identification characteristic value table specifically comprises:
analyzing texture characteristics of the expansion regions and the candidate regions based on the spatial geometric expansion data, extracting the characteristic distribution of the boundary regions, using these data to demarcate tumor boundary lines, and establishing a boundary line characteristic data set;
extracting pixel blocks corresponding to the tumor boundaries from the image data set by using the boundary line characteristic data set, and performing statistical analysis on the shapes and distribution of the pixel blocks, the statistical analysis results comprising the brightness mean, brightness variance and shape parameter of each region's pixel block, so as to obtain shape and distribution statistical data;
capturing tumor shape and distribution data according to the shape and distribution statistical data, analyzing the geometric attributes and position of each data point, extracting boundary definition, compactness and direction features, and adopting the formula:
T = α · (μ / σ²) + β · p
generating a tumor region partition identification characteristic value table;
wherein T represents the tumor region identification characteristic value, μ is the brightness mean of the region pixel block, σ² is the brightness variance, p is the shape parameter, and α and β are adjustment factors determined by data sensitivity analysis.
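Under the assumption that the characteristic value combines the brightness mean, brightness variance and shape parameter as T = α·(μ/σ²) + β·p (the original formula image is not reproduced in this text, so this combination is a guess constrained only by the listed terms), a minimal sketch is:

```python
def tumor_region_feature_value(mu, var, shape, alpha=0.7, beta=0.3):
    """Assumed form: T = alpha * (mu / var) + beta * shape.
    mu: brightness mean of the region pixel block; var: brightness variance
    (assumed nonzero for any textured region); shape: shape parameter.
    alpha and beta are hypothetical adjustment factors standing in for the
    values the claim says are set by data sensitivity analysis."""
    return alpha * (mu / var) + beta * shape

t = tumor_region_feature_value(mu=120.0, var=40.0, shape=0.8)
```

Computing T per region pixel block and collecting the results into a lookup keyed by region id would yield the characteristic value table the claim describes.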
8. A tumor partition identification system based on multi-scale image analysis, characterized in that the system performs the tumor partition identification method based on multi-scale image analysis according to any one of claims 1-7, the system comprising:
a gray distribution analysis module, which divides the input medical tumor partition image into pixel block groups of a preset size, extracts the gray values in each pixel block, counts and classifies the pixel blocks into different gray ranges, performs gray distribution statistical analysis, and generates a multi-scale gray distribution map;
a boundary texture analysis module, which extracts the boundary regions in the tumor partition image based on the multi-scale gray distribution map, identifies texture features in the boundary regions, calculates and compares the gray differences of adjacent pixels using gray gradients, performs quantization classification according to the gradient values, and generates a boundary connection weight table;
a geometric feature acquisition module, which extracts geometric features in the boundary regions, including perimeter, area and edge connectivity, based on the boundary connection weight table, performs morphological analysis on the regions using the perimeter-to-area ratio and an edge curvature index, compares the geometric features of the regions, and obtains spatial geometric expansion data in combination with spatial expansion information;
a spatial geometric expansion feature analysis module, which analyzes the expansion proportion and distribution features of each region based on the spatial geometric expansion data, identifies the spatial expansion features of differing regions, and performs spatial expansion feature analysis to obtain region expansion proportion data;
and a spatial partition demarcation module, which analyzes the morphological changes and spatial distribution characteristics within the regions based on the region expansion proportion data, identifies the spatial distribution pattern of differing regions in combination with the morphological features of the boundaries, performs region refinement division according to the identification result, and generates a tumor region partition identification characteristic value table.
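A hypothetical wiring of the five claimed modules as a sequential pipeline (each stage below is a stand-in callable that only threads data through; none of them is the patented implementation):

```python
def run_pipeline(image, stages):
    """Feed the image through the claimed modules in order; each stage
    consumes the previous stage's output and enriches it."""
    data = image
    for _name, stage in stages:
        data = stage(data)
    return data

# Stand-in stages mirroring the five modules of claim 8.
stages = [
    ("gray_distribution_analysis", lambda img: {"gray_map": img}),
    ("boundary_texture_analysis", lambda d: {**d, "weight_table": {}}),
    ("geometric_feature_acquisition", lambda d: {**d, "expansion_data": []}),
    ("expansion_feature_analysis", lambda d: {**d, "expansion_ratios": []}),
    ("spatial_partition_demarcation", lambda d: {**d, "feature_table": {}}),
]
result = run_pipeline([[0, 1], [1, 0]], stages)
```

The ordering matters: each module's output (gray map, weight table, expansion data, expansion ratios) is exactly the input named by the next claim step.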
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510060008.5A CN119478564B (en) | 2025-01-15 | 2025-01-15 | Tumor partition recognition method and system based on multi-scale image analysis |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119478564A true CN119478564A (en) | 2025-02-18 |
| CN119478564B CN119478564B (en) | 2025-03-25 |
Family
ID=94594755
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510060008.5A Active CN119478564B (en) | 2025-01-15 | 2025-01-15 | Tumor partition recognition method and system based on multi-scale image analysis |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119478564B (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119741615A (en) * | 2025-03-05 | 2025-04-01 | 中国环境科学研究院 | A method and system for identifying black and odorous water bodies based on image recognition processing |
| CN120125606A (en) * | 2025-05-15 | 2025-06-10 | 西安医学院第一附属医院 | A method and system for marking tumor boundary of digestive tract submucosal tumor |
| CN120411017A (en) * | 2025-04-21 | 2025-08-01 | 中瑞德泰生物科技集团有限公司 | CAR-T cell culture monitoring system based on image recognition |
| CN120472489A (en) * | 2025-04-27 | 2025-08-12 | 佛山道善智能机器人有限公司 | An intelligent recognition method for CAD primitives for line marking robots |
| CN120543559A (en) * | 2025-07-29 | 2025-08-26 | 杭州临安富盛装饰材料有限公司 | A method for detecting surface uniformity of impregnated paper based on texture image analysis |
| CN121169848A (en) * | 2025-09-04 | 2025-12-19 | 中国人民解放军陆军军医大学第一附属医院 | Image Processing-Based Medical Image Quality Assessment Methods and Systems |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102509273A (en) * | 2011-11-21 | 2012-06-20 | 电子科技大学 | Tumor segmentation method based on homogeneous pieces and fuzzy measure of breast ultrasound image |
| CN103985123A (en) * | 2014-05-17 | 2014-08-13 | 清华大学深圳研究生院 | Abdominal aortic aneurysm outer boundary segmentation method based on CTA images |
| WO2019148268A1 (en) * | 2018-02-02 | 2019-08-08 | University Health Network | Devices, systems, and methods for tumor visualization and removal |
| CN114359277A (en) * | 2022-03-18 | 2022-04-15 | 佛山科学技术学院 | Brain image processing method and system for stroke patient |
Non-Patent Citations (3)
| Title |
|---|
| SHIQIANG WU 等: "Bone tumor examination based on FCNN-4s and CRF fine segmentation fusion algorithm", 《JOURNAL OF BONE ONCOLOGY》, vol. 42, 31 October 2023 (2023-10-31), pages 1 - 10 * |
| SONG Jialin et al.: "Quantitative evaluation of liver cirrhosis degree by geometric features of the liver capsule in high-frequency ultrasound images", Chinese Journal of Medical Imaging Technology, vol. 31, no. 12, 20 December 2015 (2015-12-20), pages 1907 - 1910 * |
| LI Xiaochen: "Research on weak boundary tracking and lung tumor segmentation methods", China Master's Theses Full-text Database, Medicine & Health Sciences, no. 2021, 15 November 2021 (2021-11-15), pages 072 - 22 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119478564B (en) | 2025-03-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN119478564B (en) | | Tumor partition recognition method and system based on multi-scale image analysis |
| EP1593094B1 (en) | | Image analysis for the purpose of assessing cancer |
| CN101539991B (en) | | Efficient Image Region Detection and Segmentation Method for Iris Recognition |
| US8340372B2 (en) | | Image analysis |
| CN106295124B (en) | | The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts |
| CN119915837B (en) | | PVC floor surface coating defect detection method and system |
| CN119991671B (en) | | Automatic analysis system and method for medical images |
| US20240304007A1 (en) | | Method and system for the segmentation and clustering of nuclei based on single-cell pathological images |
| CN119559134A (en) | | Intelligent analysis and hierarchical diagnosis platform for breast cancer imaging data based on large language model |
| CN120032790A (en) | | Medical imaging report generation method based on multimodal large model preference alignment technology |
| CN118261869A (en) | | A method and system for monitoring the growth of lettuce |
| CN120564155B (en) | | A road subsidence automatic detection method and system |
| CN120510106B (en) | | Thyroid ultrasound auxiliary scanning method and system |
| CN117635615A (en) | | Defect detection method and system for punching molds based on deep learning |
| CN114742849B (en) | | Leveling instrument distance measuring method based on image enhancement |
| CN119540231B (en) | | Method, device, equipment and storage medium for detecting seed quality |
| CN120747217A (en) | | Robot navigation method based on vision |
| CN120526152A (en) | | A medical image processing method for tumor partition recognition |
| CN118799265B (en) | | Gastric Cancer Pathology Image Analysis System |
| CN118014443B (en) | | Medicine monitoring method and device |
| CN117635551A (en) | | Cervical cell sample age and health degree prediction method |
| CN115471494A (en) | | Wogan quality inspection method, device, equipment and storage medium based on image processing |
| SayedElahl et al. | | Robust segmentation model for unshaped microarray spots using fractal transformation |
| CN121353207A (en) | | Image feature recognition method and system applied to early kidney disease |
| CN120913082B (en) | | A remote sensing-driven land use monitoring system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |