CN116935133B - Cardiovascular disease classification method and system based on SPECT image recognition - Google Patents

Cardiovascular disease classification method and system based on SPECT image recognition

Info

Publication number
CN116935133B
CN116935133B (application CN202310950157.XA)
Authority
CN
China
Prior art keywords
image
individual
point
value
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310950157.XA
Other languages
Chinese (zh)
Other versions
CN116935133A (en)
Inventor
史冰
李艳平
甘卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Hospital of Hunan University of Chinese Medicine
Original Assignee
First Hospital of Hunan University of Chinese Medicine
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Hospital of Hunan University of Chinese Medicine
Priority to CN202310950157.XA
Publication of CN116935133A
Application granted
Publication of CN116935133B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10108Single photon emission computed tomography [SPECT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cardiovascular disease classification method and system based on SPECT image recognition. The method comprises the following steps: image acquisition, reflection-enhancement visual optimization, dynamic-pixel iterative optimization segmentation, cardiovascular lesion feature extraction, and chaotic-optimization auxiliary classification. The invention belongs to the technical field of cardiovascular disease classification. The scheme adopts a bilateral filter function that effectively preserves the edge information of the image, and applies gamma correction so that the transformed image appears more natural; a heuristic algorithm is adopted to analyze the images, improving the segmentation efficiency and accuracy of the classification system; the second individual positions are initialized with a skew tent chaotic map, a Cauchy operator is introduced into the global and local position updates to update the second individual positions, and the second individual positions with low fitness values are improved by a simplex method, raising the precision and global search ability of the classification system.

Description

Cardiovascular disease classification method and system based on SPECT image recognition
Technical Field
The invention belongs to the technical field of cardiovascular disease classification, and particularly relates to a cardiovascular disease classification method and system based on SPECT image recognition.
Background
Cardiovascular disease is a serious threat to human health, and includes myocardial ischemia, myocardial inflammation, myocardial infarction, and even sudden death. Existing cardiovascular lesion classification methods based on SPECT image recognition, however, suffer from several problems: a halo phenomenon arises when the illumination changes sharply in image transition regions, blurring region boundaries, making feature extraction difficult, and degrading the enhancement effect; in image segmentation there is a contradiction whereby analyzing too many pixels over-smooths the image boundary, loses detail, and increases computational complexity, while analyzing too few pixels loses information and reduces segmentation accuracy; and the classification system easily falls into local optima, which lowers classification accuracy, slows convergence, and increases the risk of missed diagnosis and misdiagnosis.
Disclosure of Invention
Aiming at the problem that a halo phenomenon arises when the illumination changes sharply in image transition regions, blurring region boundaries, hindering feature extraction, and degrading the enhancement effect, the invention adopts a bilateral filter function that smooths each pixel over its spatial neighborhood while weighting by pixel-value similarity, effectively preserving the edge information of the image so that the resulting image is more continuous and smooth, which alleviates the halo problem to a certain extent; gamma correction is then applied to the bilaterally filtered image so that the transformed image appears more natural. Aiming at the contradiction in image segmentation whereby analyzing too many pixels over-smooths the image boundary, loses detail, and increases computational complexity, while analyzing too few pixels loses information and reduces segmentation accuracy, the invention adopts a heuristic algorithm to analyze the image, which can segment the image in a shorter time, effectively reduces computational complexity, and improves the segmentation efficiency and accuracy of the classification system. Aiming at the problems that the classification system easily falls into local optima, lowering classification accuracy, slowing convergence, and increasing the risk of missed diagnosis and misdiagnosis, the skew tent chaotic map is adopted to initialize the second individual positions, effectively improving their diversity; a Cauchy operator is introduced into the global and local position updates, and a simplex method is used to improve the second individual positions with low fitness values, so that the classification system escapes local optima more easily and its precision and global search ability are improved.
The technical scheme adopted by the invention is as follows: the invention provides a cardiovascular disease classification method based on SPECT image recognition, which comprises the following steps:
step S1: collecting images, namely collecting a cardiovascular lesion SPECT image;
step S2: reflection-enhancement visual optimization, in which filtering and gamma correction are applied to the V-channel image, adaptive enhancement is applied to the S-channel image, and the reflection component is improved, achieving image enhancement;
Step S3: dynamic-pixel iterative optimization segmentation, in which the position of each first individual represents one pixel in the image, and the first individual positions are continuously updated through the iterative process of an optimization algorithm to find the most suitable pixel segmentation threshold and achieve accurate segmentation of the image;
Step S4: cardiovascular lesion feature extraction, in which cardiovascular lesion features are extracted from the image segmented in step S3 and the extracted features undergo mean-variance standardization;
Step S5: chaotic-optimization auxiliary classification, in which the second individual positions are initialized without repetition using a skew tent chaotic map, a Cauchy operator is introduced into the global and local position updates to update the second individual positions randomly, a simplex method improves the second individual positions with low fitness values, and the optimal values of the classification parameters are found, achieving accurate auxiliary classification.
Further, in step S2, the reflection enhancing visual optimization specifically includes the following steps:
Step S21: a first spatial conversion of converting the RGB color space into an HSV color space;
Step S22: V-channel image filtering, using the following bilateral filter:
t'(x, y) = Σ_{(m,n)∈Ω(b,x,y)} ω_d(m, n) × ω_r(m, n) × t(m, n) / Σ_{(m,n)∈Ω(b,x,y)} ω_d(m, n) × ω_r(m, n);
ω_d(m, n) = exp(−((m − x)² + (n − y)²) / (2σ_d²));
ω_r(m, n) = exp(−(t(m, n) − t(x, y))² / (2σ_r²));
where ω_d(m, n) is the spatial-domain kernel function, ω_r(m, n) is the value (range) kernel function, t'(x, y) is the filtered V-channel image, t(x, y) is the pixel value of the V-channel image to be processed at position (x, y), σ_d² is the variance of the spatial-distance scale parameter σ_d, σ_r² is the variance of the pixel-difference scale parameter σ_r, Ω(b, x, y) is the set of pixels of the input image in the window of side 2b+1 centered on (x, y), b is the filter radius, m and n are pixel indices within Ω(b, x, y), t(m, n) is the pixel value of the V-channel image to be processed at position (m, n), and x and y are pixel indices of the V-channel image;
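As a sketch, the bilateral filter above can be implemented directly in NumPy (the function name and the default values of σ_d, σ_r and the radius b are illustrative assumptions, not fixed by the text):

```python
import numpy as np

def bilateral_filter(t, b=2, sigma_d=2.0, sigma_r=0.1):
    """Bilateral filter of a single-channel (V-channel) image t.

    Combines the spatial-distance kernel w_d and the pixel-difference
    kernel w_r over a (2b+1) x (2b+1) window around each pixel."""
    h, w = t.shape
    out = np.zeros_like(t, dtype=float)
    # Precompute the spatial kernel w_d over the window offsets.
    offsets = np.arange(-b, b + 1)
    dy, dx = np.meshgrid(offsets, offsets, indexing="ij")
    w_d = np.exp(-(dx**2 + dy**2) / (2 * sigma_d**2))
    padded = np.pad(t, b, mode="edge")
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * b + 1, x:x + 2 * b + 1]
            # Pixel-difference (range) kernel w_r.
            w_r = np.exp(-(window - t[y, x])**2 / (2 * sigma_r**2))
            weights = w_d * w_r
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out
```

The range kernel w_r gives near-zero weight to pixels across a strong edge, which is why homogeneous regions are smoothed while edges survive.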
step S23: gamma correction, applied to the incident component on the basis of the filtered V-channel image, using the following formula:
L'(x, y) = L(x, y)^γ;
where L'(x, y) is the gamma-corrected incident component, γ is the gamma coefficient, and L(x, y) is the incident component of the filtered V-channel image;
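A minimal NumPy sketch of this correction on a component normalized to [0, 1] (the function name and the example γ value are assumptions; the patent does not fix a value):

```python
import numpy as np

def gamma_correct(L, gamma=0.6):
    """Apply gamma correction L' = L ** gamma to the incident
    component L, assumed normalized to [0, 1]. A gamma below 1
    brightens dark regions, compressing the dynamic range."""
    return np.clip(L, 0.0, 1.0) ** gamma
```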
Step S24: s channel self-adaptation enhancement, the formula is as follows:
where S' is the adaptively enhanced saturation, mean (R, G, B) is the average of the (R, G, B) color components of the corresponding pixel of the image, max (R, G, B) is the maximum of the (R, G, B) color components of the corresponding pixel of the image, min (R, G, B) is the minimum of the (R, G, B) color components of the corresponding pixel of the image, and S is the saturation of the original image;
Step S25: the second space conversion, the V channel image after gamma correction, the S channel image after self-adaption enhancement and the H channel image in the original image are synthesized into an HSV image again, and the HSV image is converted back into an RGB color space;
Step S26: the low-light image representation is represented by the following formula:
I(x,y)=O(x,y)×N(x,y);
Wherein I (x, y) is a low-illumination image, O (x, y) is an actual image after the second space conversion, and N (x, y) is an interference term;
Step S27: the reflection component is improved using the following formula:
where I_k(i, j) is the improved reflection component, f_k(i, j) is the k-th reflection component, i and j are pixel indices of the actual image, and O(i, j) is the pixel value of the actual image at position (i, j) after the second spatial conversion.
Further, in step S3, the iterative optimization segmentation of dynamic pixels specifically includes the following steps:
Step S31: parameter initialization, presetting the number of first individuals n1, the maximum iteration number T1, the pixel threshold α, the local movement threshold δ, the image width w and the image height h;
Step S32: first individual position initialization, randomly distributing each first individual over the pixel positions of the image enhanced in step S2 according to the sizes of w and h; the position of each first individual is denoted by the pair (c_a1,1, c_a1,2), where c_a1,1 ∈ [0, w], c_a1,2 ∈ [0, h], a1 ∈ [1, n1], and a1 is the first individual index;
Step S33: calculating a first individual fitness value using the following formula:
where p(c_a1) is the fitness value of the a1-th first individual, c_a1 = (c_a1,1, c_a1,2), and I(c_a1,1, c_a1,2) is the pixel value of the image at position (c_a1,1, c_a1,2);
Step S34: selecting an optimal position, sorting the positions of all the first individuals according to the magnitude of the fitness value, and selecting the position of the first individual with the highest fitness value as the optimal position cbest;
step S35: global motion, the formula used is as follows:
λ=rand(0,d(ca1,cbest));
ca1'=ca1±λ;
where c_a1' is the position of the a1-th first individual after the global motion, λ is a random step length, d(c_a1, cbest) is the Euclidean distance between the position c_a1 of the a1-th first individual and the optimal position cbest, the sign ± is chosen so that the new position does not move out of the image range, and (c_a1,1, c_a1,2) and (cbest1, cbest2) are the position coordinates in the image of the a1-th first individual c_a1 and of the optimal position cbest respectively, with c_a1 = (c_a1,1, c_a1,2) and cbest = (cbest1, cbest2);
Step S36: a local motion, wherein a parameter mu between 0 and 1 is randomly generated for each first individual, and if mu > delta, the first individual performs the local motion; otherwise, the first individual does not perform a local motion, using the formula:
where c_a1' is the position of the a1-th first individual after the local motion, ω is the moving speed, r is a random number in the range [−5, 5], σ_0 and σ_1 are the angles of the local motion with values in [0, 2π], and cz and cw are two different preset positions;
Step S37: updating the fitness value and the optimal position of the first individual, updating the fitness value of the first individual based on the position of the first individual after the local movement, and updating the optimal position;
Step S38: judging whether the maximum iteration number is reached; if the maximum iteration number T1 is reached, the optimal position is output, and the grabCut function of the OpenCV library imported in Python is called on the basis of the output optimal position to segment the image; if the maximum iteration number is not reached, go to step S35 to continue iterating.
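Steps S31 to S38 can be sketched as the following search loop. This is a hedged reconstruction: the exact fitness and local-motion formulas did not survive extraction, so the pixel value at an individual's position stands in for the fitness, the global step is applied as a scalar to both coordinates, and the local motion is a bounded random walk; all names and defaults are assumptions.

```python
import numpy as np

def optimize_threshold(img, n1=20, T1=50, delta=0.5, seed=0):
    """Sketch of the dynamic-pixel iterative search (steps S31-S38).
    Each first individual is a pixel position in img (shape h x w)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    pos = np.column_stack([rng.uniform(0, w, n1), rng.uniform(0, h, n1)])

    def fitness(p):
        # Stand-in fitness: the pixel value at the individual's position.
        x = int(np.clip(p[0], 0, w - 1))
        y = int(np.clip(p[1], 0, h - 1))
        return img[y, x]

    for _ in range(T1):
        fits = np.array([fitness(p) for p in pos])
        cbest = pos[np.argmax(fits)].copy()  # step S34: best position
        for a1 in range(n1):
            # Step S35, global motion: random step up to the distance
            # to cbest; sign chosen at random, result clipped in-range.
            lam = rng.uniform(0, np.linalg.norm(pos[a1] - cbest))
            cand = pos[a1] + rng.choice([-1.0, 1.0]) * lam
            pos[a1] = np.clip(cand, [0, 0], [w - 1, h - 1])
            # Step S36, local motion with probability tied to delta
            # (assumed form: a small bounded random walk).
            if rng.uniform() > delta:
                pos[a1] = np.clip(pos[a1] + rng.uniform(-5, 5, 2),
                                  [0, 0], [w - 1, h - 1])
    fits = np.array([fitness(p) for p in pos])
    return pos[np.argmax(fits)]  # step S38: output the optimal position
```

In the patent the returned position then seeds OpenCV's grabCut for the actual segmentation.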
Further, in step S4, the cardiovascular lesion feature extraction is based on the image segmented in step S3: cardiovascular lesion features such as blood-flow perfusion, myocardial metabolic activity, cardiac function parameters and degree of coronary artery stenosis are extracted from the image, and the extracted features undergo mean-variance standardization.
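The mean-variance standardization mentioned here is ordinary z-score scaling of the feature matrix; a minimal sketch (function name assumed):

```python
import numpy as np

def standardize(features):
    """Mean-variance (z-score) standardization of the extracted
    feature matrix: each feature column gets zero mean, unit variance."""
    features = np.asarray(features, dtype=float)
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return (features - mu) / sigma
```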
Further, in step S5, the chaotic-optimization auxiliary classification specifically includes the following steps:
step S51: constructing a training data set and a test data set: a public cardiovascular disease data set is collected as sample data, each sample comprising a cardiovascular lesion feature vector and a corresponding label, the labels covering healthy state, myocardial ischemia, myocardial infarction, myocardial inflammation and coronary artery stenosis; 70% of the sample data are randomly selected as the training data set and the remaining 30% as the test data set;
Step S52: initializing parameters, namely presetting the number n2 of the second individuals, the maximum iteration times T2 and the probability p, and setting the range of a penalty factor C and a Gaussian kernel function key parameter eta in the chaotic optimization auxiliary classification, wherein the parameters (C, eta) represent the positions of the second individuals;
Step S53: the second individual positions are initialized with the skew tent chaotic map:
x_{a2+1} = x_{a2} / ρ, if 0 ≤ x_{a2} < ρ;
x_{a2+1} = (1 − x_{a2}) / (1 − ρ), if ρ ≤ x_{a2} ≤ 1;
where x_{a2+1} is the initial position of the (a2+1)-th second individual, x_{a2} is the initial position of the a2-th second individual, ρ is a random number in (0, 1), a2 is the second individual index, and the chaotic sequence is seeded with a random value x_n in (0, 1) generated for each second individual;
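A sketch of this initialization, using the standard skew (tilted) tent map consistent with the glossary above (the exact piecewise form and the seeding are reconstructions, and the default ρ is an assumption):

```python
import numpy as np

def tent_init(n2, rho=0.7, seed=0):
    """Generate n2 chaotic values in [0, 1] with the skew tent map
    x_{k+1} = x_k / rho            if x_k < rho
            = (1 - x_k)/(1 - rho)  otherwise.
    The chaotic trajectory spreads initial positions more evenly
    than independent uniform draws, improving diversity."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.001, 0.999)  # seed of the chaotic sequence
    seq = []
    for _ in range(n2):
        x = x / rho if x < rho else (1.0 - x) / (1.0 - rho)
        seq.append(x)
    return np.array(seq)
```

Each value would then be scaled into the preset ranges of the parameters (C, η) to give a second individual position.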
Step S54: constructing the chaotic-optimization auxiliary classification model: the SVM function of the scikit-learn library imported in Python is called with the current parameters (C, η), the model is trained on the training data set, and the trained model is used to predict the sample data of the test data set;
Step S55: calculating the second individual fitness value using the following formula:
s_a2 = (1 / n3) Σ_{hy=1}^{n3} 1(y1_hy = y2_hy);
where s_a2 is the fitness value of the a2-th second individual, y1 is the true label, y2 is the predicted label, 1(·) is the indicator function, n3 is the number of sample data in the test data set, and hy is the index of the sample data of the test data set;
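The glossary implies the fitness is the test-set classification accuracy; a minimal sketch of that reconstruction (in the patent, the predictions y2 would come from the SVM of step S54, e.g. scikit-learn's SVC with C as the penalty factor and η mapped to the Gaussian-kernel parameter, a mapping that is an assumption here):

```python
def fitness(y_true, y_pred):
    """Fitness of a second individual (C, eta): the fraction of the
    n3 test samples whose predicted label y2 equals the true label y1."""
    n3 = len(y_true)
    return sum(1 for y1, y2 in zip(y_true, y_pred) if y1 == y2) / n3
```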
step S56: selecting an optimal position, sorting the positions of all the second individuals according to the magnitude of the fitness value, and selecting the position of the second individual with the highest fitness value as the optimal position xbest;
Step S57: the second individual weight is calculated using the following formula:
f_a2 = c × I_a2^g;
where f_a2 is the weight of the a2-th second individual, c is the sensory modality, I_a2 is the stimulation intensity of the a2-th second individual, and g is a power exponent in the range [0, 1];
Step S58: judging a position updating mode, randomly generating a parameter q between 0 and 1 for each second individual, and if q > p, turning to a step S59; otherwise, go to step S510;
step S59: global location update, the formula used is as follows:
x_a2^{t+1} = x_a2^t + (q² × xbest − x_a2^t) × f_a2 × Cauchy(0, 1);
where x_a2^{t+1} is the position of the a2-th second individual at the (t+1)-th iteration, x_a2^t is its position at the t-th iteration, Cauchy(0, 1) is a random number drawn from the standard Cauchy distribution, and xbest is the second individual position with the highest fitness value, i.e. the optimal position;
step S510: local location update, the formula used is as follows:
x_a2^{t+1} = x_a2^t + (q² × (x_j^t − x_k^t) − x_a2^t) × f_a2 × Cauchy(0, 1);
where x_a2^{t+1} is the position of the a2-th second individual at the (t+1)-th iteration, x_a2^t is its position at the t-th iteration, and x_j^t and x_k^t are the positions of the j-th and k-th second individuals at the t-th iteration;
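The two Cauchy-operator updates (steps S59 and S510) can be sketched as follows; the heavy tail of the standard Cauchy distribution occasionally produces large jumps, which is what helps the search escape local optima (function names are assumptions):

```python
import numpy as np

def global_update(x, xbest, f, q, rng):
    """Global position update of step S59:
    x^{t+1} = x^t + (q^2 * xbest - x^t) * f * Cauchy(0, 1)."""
    return x + (q**2 * xbest - x) * f * rng.standard_cauchy(size=x.shape)

def local_update(x, xj, xk, f, q, rng):
    """Local position update of step S510, using two other second
    individuals xj and xk in place of the global best."""
    return x + (q**2 * (xj - xk) - x) * f * rng.standard_cauchy(size=x.shape)
```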
Step S511: the second individual position with low fitness value is improved by the following specific steps:
Step S5111: calculating vertex fitness values: an (n2+1)-vertex polyhedron is constructed in the n2-dimensional space of the second individuals, the fitness value of every vertex is calculated and ranked, the best point x_1, the second-best point x_2 and the worst point x_3 are determined, and the center point x_4 of the best point x_1 and the second-best point x_2 is calculated:
x_4 = (x_1 + x_2) / 2;
where s'(x_i) is the fitness value of the i-th vertex, with a smaller value being better as implied by the conditions of step S5113;
step S5112: the reflection point of the worst point is calculated using the following formula:
x5=x4+u(x4-x3);
where x_5 is the reflection point of the worst point x_3, u is the reflection coefficient with value 1, and x_4 is the center point of the best point x_1 and the second-best point x_2;
Step S5113: if s'(x_5) < s'(x_1), the reflection direction is correct, and the process goes to step S5114; if s'(x_5) > s'(x_3), the reflection direction is incorrect, and the process goes to step S5115; if s'(x_1) < s'(x_5) < s'(x_3), the process goes to step S5116;
Step S5114: calculating the expansion point and replacing the worst point: an expansion operation is performed to obtain the expansion point x_6; if s'(x_6) < s'(x_1), the worst point x_3 is replaced by the expansion point x_6, otherwise the worst point x_3 is replaced by the reflection point x_5, using the following formula:
x6=x4+o×(x5-x4);
where x_6 is the expansion point, o is the expansion coefficient with value 1.5, and o × (x_5 − x_4) is the expansion coefficient o multiplied by the difference between the reflection point x_5 and the center point x_4 of the best point x_1 and the second-best point x_2;
Step S5115: calculating the compression point and replacing the worst point: a compression operation is performed to obtain the compression point x_7; if s'(x_7) < s'(x_3), the worst point x_3 is replaced by the compression point x_7, using the following formula:
x7=x4+z(x3-x4);
where x_7 is the compression point, z is the compression coefficient with value 0.5, and z(x_3 − x_4) is the compression coefficient z multiplied by the difference between the worst point x_3 and the center point x_4 of the best point x_1 and the second-best point x_2;
Step S5116: calculating the contraction point and replacing the worst point: if s'(x_1) < s'(x_5) < s'(x_3), a contraction operation is performed to obtain the contraction point x_8; if s'(x_8) < s'(x_3), the worst point x_3 is replaced by the contraction point x_8, otherwise the worst point x_3 is replaced by the reflection point x_5, using the following formula:
x8=x4-v(x3-x4);
where x_8 is the contraction point, v is the contraction coefficient with value 0.5, and v(x_3 − x_4) is the contraction coefficient v multiplied by the difference between the worst point x_3 and the center point x_4 of the best point x_1 and the second-best point x_2;
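Steps S5112 to S5116 together form one Nelder-Mead-style improvement of the worst vertex. A sketch using the coefficients stated in the text (reflection u = 1, expansion o = 1.5, compression z = 0.5, contraction v = 0.5), treating the fitness s' as a cost where smaller is better, as the conditions of step S5113 imply:

```python
def simplex_step(x1, x2, x3, s, u=1.0, o=1.5, z=0.5, v=0.5):
    """One simplex improvement of the worst point x3, given the best
    point x1, second-best point x2 and cost function s (smaller is
    better). Returns the point that replaces x3."""
    x4 = [(a + b) / 2 for a, b in zip(x1, x2)]        # center of x1, x2
    x5 = [c + u * (c - w) for c, w in zip(x4, x3)]    # reflection (S5112)
    if s(x5) < s(x1):                                 # direction correct
        x6 = [c + o * (r - c) for c, r in zip(x4, x5)]  # expansion (S5114)
        return x6 if s(x6) < s(x1) else x5
    if s(x5) > s(x3):                                 # direction incorrect
        x7 = [c + z * (w - c) for c, w in zip(x4, x3)]  # compression (S5115)
        return x7 if s(x7) < s(x3) else x3
    x8 = [c - v * (w - c) for c, w in zip(x4, x3)]    # contraction (S5116)
    return x8 if s(x8) < s(x3) else x5
```

With a simple quadratic cost, a single call already yields a point no worse than the old worst point.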
step S512: updating the fitness value and the optimal position of the second individual, updating the fitness value of the second individual based on the improved position of the second individual, and updating the optimal position;
Step S513: establishing the model: a maximum iteration number and a fitness threshold are preset; if the vertex fitness value is higher than the fitness threshold, the optimal position is output and the chaotic-optimization auxiliary classification model is constructed, the cardiovascular lesion features extracted in step S4 are input into the model, the data are classified, and the corresponding label is output; if the maximum iteration number is reached, go to step S52; otherwise, go to step S57.
The invention provides a cardiovascular disease classification system based on SPECT image recognition, which comprises an image acquisition module, a reflection enhancement visual optimization module, a dynamic pixel iteration optimization segmentation module, a cardiovascular disease feature extraction module and a chaotic optimization auxiliary classification module;
The image acquisition module acquires a cardiovascular lesion SPECT image and sends the cardiovascular lesion SPECT image to the reflection enhancement visual optimization module;
the reflection enhancement visual optimization module receives the cardiovascular lesion SPECT image sent by the image acquisition module, performs filtering and gamma correction processing on the V-channel image, performs self-adaptive enhancement processing on the S-channel image, improves reflection components, and sends the enhanced image to the dynamic pixel iterative optimization segmentation module;
The dynamic pixel iterative optimization segmentation module receives the enhanced image sent by the reflection enhancement visual optimization module, analyzes the image by adopting a heuristic algorithm, reduces the calculation complexity, and sends the segmented image to the cardiovascular lesion feature extraction module;
The cardiovascular lesion feature extraction module receives the segmented image sent by the dynamic pixel iterative optimization segmentation module, extracts cardiovascular lesion features such as blood flow perfusion volume, myocardial metabolic activity, cardiac function parameters, coronary artery stenosis degree and the like from the segmented image, performs mean variance standardization processing on the extracted cardiovascular lesion features, and sends the cardiovascular lesion features after standardization processing to the chaotic optimization auxiliary classification module;
The chaotic-optimization auxiliary classification module receives the standardized cardiovascular lesion features sent by the cardiovascular lesion feature extraction module, initializes the second individual positions with the skew tent chaotic map, introduces a Cauchy operator into the global and local position updates to update the second individual positions, and improves the second individual positions with low fitness values by the simplex method, so that the classification system escapes local optima more easily and its precision and global search ability are improved.
By adopting the scheme, the beneficial effects obtained by the invention are as follows:
(1) Aiming at the problem that a halo phenomenon arises when the illumination changes sharply in image transition regions, blurring region boundaries, hindering feature extraction, and degrading the enhancement effect, the bilateral filter function smooths each pixel over its spatial neighborhood while weighting by pixel-value similarity, effectively preserving the edge information of the image so that the resulting image is more continuous and smooth and the halo problem is alleviated to a certain extent; gamma correction is then applied to the bilaterally filtered image so that the transformed image appears more natural.
(2) Aiming at the contradiction in image segmentation whereby analyzing too many pixels over-smooths the image boundary, loses detail, and increases computational complexity, while analyzing too few pixels loses information and reduces segmentation accuracy, the invention adopts a heuristic algorithm to analyze the image, which can segment the image in a shorter time, effectively reduces computational complexity, and improves the segmentation efficiency and accuracy of the classification system.
(3) Aiming at the problems that the classification system easily falls into local optima, lowering classification accuracy, slowing convergence, and increasing the risk of missed diagnosis and misdiagnosis, the skew tent chaotic map is adopted to initialize the second individual positions, effectively improving their diversity; a Cauchy operator is introduced into the global and local position updates, and the simplex method is used to improve the second individual positions with low fitness values, so that the classification system escapes local optima more easily and its precision and global search ability are improved.
Drawings
FIG. 1 is a flow chart of a cardiovascular disease classification method based on SPECT image recognition provided by the invention;
FIG. 2 is a schematic diagram of a cardiovascular disease classification system based on SPECT image recognition provided by the present invention;
FIG. 3 is a flow chart of step S2;
FIG. 4 is a flow chart of step S3;
FIG. 5 is a flow chart of step S5;
fig. 6 is a second individual search graph.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
Referring to fig. 1, the method for classifying cardiovascular diseases based on SPECT image recognition provided by the present invention includes the following steps:
step S1: collecting images, namely collecting a cardiovascular lesion SPECT image;
step S2: reflection enhancement visual optimization, in which filtering and gamma correction are applied to the V-channel image, adaptive enhancement is applied to the S-channel image, and the reflection component is improved to achieve image enhancement;
Step S3: dynamic pixel iterative optimization segmentation, in which the position of each first individual represents one pixel in the image, and the positions of the first individuals are continuously updated through the iterative process of an optimization algorithm so as to find the most suitable pixel segmentation threshold and achieve accurate segmentation of the image;
Step S4: extracting cardiovascular lesion features, extracting the cardiovascular lesion features from the images based on the images segmented in the step S3, and carrying out mean variance standardization treatment on the extracted cardiovascular lesion features;
Step S5: chaotic optimization auxiliary classification, which comprises initializing the second individual positions in a non-repeatable mode by adopting the tilted tent chaotic map, introducing a Cauchy operator into the global and local position updates to randomly update the second individual positions, improving second individual positions with low fitness values by the simplex method, and finding the optimal values of the chaotic optimization auxiliary classification parameters to achieve accurate auxiliary classification.
In the second embodiment, referring to fig. 1 and 3, the reflection enhancing visual optimization in step S2 specifically includes the following steps:
Step S21: a first spatial conversion of converting the RGB color space into an HSV color space;
Step S22: v-channel image filtering, the formula used is as follows:
Where ω_d(m, n) is the spatial-domain kernel function, ω_r(m, n) is the value-domain (range) kernel function, t'(x, y) is the filtered V-channel image, t(x, y) is the pixel value of the V-channel image to be processed at position (x, y), σ_d² is the variance of the spatial distance difference scale parameter, σ_r² is the variance of the pixel difference scale parameter, Ω(b, x, y) is the set of pixels in the input image centered on (x, y) with window side length 2b+1, b is the filter radius, m and n are pixel indices within Ω(b, x, y), × is the multiplication operator, t(m, n) is the pixel value of the V-channel image to be processed at position (m, n), d is the spatial distance difference scale parameter, r is the pixel difference scale parameter, and x and y are pixel indices of the V-channel image;
step S23: gamma correction, which is based on the filtered V-channel image, of the incident component using the following formula:
L'(x, y) = L^γ(x, y);
where L'(x, y) is the gamma-corrected incident component, γ is the gamma coefficient, and L^γ(x, y) is the filtered incident component of the V-channel image raised to the power γ, i.e. corrected based on the gamma coefficient;
Step S24: s channel self-adaptation enhancement, the formula is as follows:
where S' is the adaptively enhanced saturation, mean (R, G, B) is the average of the (R, G, B) color components of the corresponding pixel of the image, max (R, G, B) is the maximum of the (R, G, B) color components of the corresponding pixel of the image, min (R, G, B) is the minimum of the (R, G, B) color components of the corresponding pixel of the image, and S is the saturation of the original image;
Step S25: the second space conversion, the V channel image after gamma correction, the S channel image after self-adaption enhancement and the H channel image in the original image are synthesized into an HSV image again, and the HSV image is converted back into an RGB color space;
Step S26: low-light image representation, the formula used is as follows:
I(x,y)=O(x,y)×N(x,y);
Wherein I (x, y) is a low-illumination image, O (x, y) is an actual image after the second space conversion, and N (x, y) is an interference term;
Step S27: the reflection component is improved using the following formula:
Where I_k(i, j) is the improved reflection component, f_k(i, j) is the k-th reflection component, i and j are pixel indices of the actual image, and O(i, j) is the pixel value of the actual image at position (i, j) after the second spatial conversion.
By executing the above operations, the invention addresses the problems that a large illuminance change in image transition areas produces a halo phenomenon, blurs region boundaries, makes feature extraction difficult and weakens the enhancement effect. A bilateral filtering function is adopted that smooths within the spatial domain while weighting by pixel-value similarity, effectively preserving the edge information of the image, so that the obtained image is more continuous and smooth and the halo problem is alleviated to a certain extent; gamma correction is then performed on the bilaterally filtered image, making the transformed image more natural.
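A minimal plain-NumPy sketch of the V-channel processing of steps S22 and S23 may look as follows; the Gaussian forms of the kernels ω_d and ω_r, and the parameter values sigma_d, sigma_r and gamma, are illustrative assumptions rather than values taken from the patent:

```python
import numpy as np

def bilateral_filter(v, b=2, sigma_d=2.0, sigma_r=30.0):
    # Step S22 sketch: w_d is the spatial-domain kernel, w_r the value-domain
    # (range) kernel; the Gaussian forms below are assumed.
    h, w = v.shape
    pad = np.pad(v.astype(np.float64), b, mode="edge")
    ax = np.arange(-b, b + 1)
    dy, dx = np.meshgrid(ax, ax, indexing="ij")
    w_d = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_d ** 2))  # spatial kernel
    out = np.empty_like(v, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * b + 1, x:x + 2 * b + 1]
            w_r = np.exp(-((win - v[y, x]) ** 2) / (2 * sigma_r ** 2))
            k = w_d * w_r
            out[y, x] = (k * win).sum() / k.sum()  # normalized weighted mean
    return out

def gamma_correct(v, gamma=0.6):
    # Step S23 sketch: L'(x, y) = L(x, y)^gamma on a [0, 255] image.
    return (np.asarray(v, dtype=np.float64) / 255.0) ** gamma * 255.0
```

A quick sanity check: on a constant image the bilateral filter leaves all values unchanged, and a gamma below 1 brightens mid-tones.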
In the third embodiment, referring to fig. 1 and fig. 4, the dynamic pixel iterative optimization segmentation specifically includes the following steps in step S3:
Step S31: initializing parameters: the number of first individuals n1, the maximum iteration number T1, the pixel threshold α, the local movement threshold δ, the image width w and the image height h are preset;
Step S32: initializing the first individual positions: each first individual is randomly distributed on a pixel position of the image enhanced in step S2 according to the sizes of w and h, and the position of each first individual is represented by the parameter (c_a1 1, c_a1 2), where c_a1 1 ∈ [0, w], c_a1 2 ∈ [0, h], a1 ∈ [1, n1], and a1 is the first individual index;
Step S33: calculating a first individual fitness value using the following formula:
Where p(c_a1) is the fitness value of the a1-th first individual, c_a1 = (c_a1 1, c_a1 2), and I(c_a1 1, c_a1 2) is the pixel value of the image at the position (c_a1 1, c_a1 2) of the a1-th first individual;
Step S34: selecting an optimal position, sorting the positions of all the first individuals according to the magnitude of the fitness value, and selecting the position of the first individual with the highest fitness value as the optimal position cbest;
step S35: global motion, the formula used is as follows:
λ=rand(0,d(ca1,cbest));
ca1'=ca1±λ;
Where c_a1' is the position of the a1-th first individual after the global motion, λ is a random vector, d(c_a1, cbest) is the Euclidean distance between the position c_a1 of the a1-th first individual and the optimal position cbest, ± indicates that the random step may be added or subtracted, with the result kept within the image range, and (c_a1 1, c_a1 2) and (cbest1, cbest2) are the position coordinates in the image of the a1-th first individual c_a1 and the optimal position cbest, respectively, with c_a1 = (c_a1 1, c_a1 2) and cbest = (cbest1, cbest2);
Step S36: a local motion, wherein a parameter mu between 0 and 1 is randomly generated for each first individual, and if mu > delta, the first individual performs the local motion; otherwise, the first individual does not perform a local motion, using the formula:
Wherein c_a1' is the position of the a1-th first individual after the local motion, ω is the moving speed, r is a random number in the range [-5, 5], σ_0 and σ_1 are the angles of the local motion with value range [0, 2π], and cz and cw are two different preset positions, respectively;
Step S37: updating the fitness value and the optimal position of the first individual, updating the fitness value of the first individual based on the position of the first individual after the local movement, and updating the optimal position;
Step S38: judging whether the maximum iteration number is reached: if the maximum iteration number T1 is reached, the optimal position is output, and the grabCut function of the OpenCV library imported in python is called based on the output optimal position to segment the image; if the maximum iteration number is not reached, the process goes to step S35 for another iteration.
By executing the above operations, the invention addresses the contradictory problems in image segmentation that analyzing too many pixels over-smooths the image boundary, loses detail and increases computational complexity, while analyzing too few pixels loses information and reduces segmentation accuracy. A heuristic algorithm is adopted to analyze the image, which can segment the image in a shorter time, effectively reduces the computational complexity, and improves the segmentation efficiency and accuracy of the classification system.
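As a hedged illustration of steps S31 to S38, the following sketch treats each first individual as a pixel position; its fitness is taken here as the local gradient magnitude, since the patent's fitness formula is not reproduced, so that choice and the simplified step sizes are assumptions. The best position found would then seed the grabCut call of step S38:

```python
import numpy as np

def optimal_threshold_position(img, n1=30, T1=50, delta=0.5, seed=0):
    # Assumed fitness surface: gradient magnitude (edges score highest).
    rng = np.random.default_rng(seed)
    h, w = img.shape
    gy, gx = np.gradient(img.astype(np.float64))
    fit_map = np.hypot(gx, gy)
    # Step S32: scatter n1 first individuals over pixel positions (x, y).
    pos = np.column_stack([rng.integers(0, w, n1), rng.integers(0, h, n1)])
    def fitness(p):
        return fit_map[p[:, 1], p[:, 0]]
    for _ in range(T1):
        f = fitness(pos)
        best = pos[np.argmax(f)]                      # step S34: optimal position
        # Step S35: global motion -- random step toward the best position,
        # clipped to the image range (a simplified form of lambda).
        lam = rng.uniform(0, 1, (n1, 1)) * (best - pos)
        pos = np.clip(pos + lam.round().astype(int), [0, 0], [w - 1, h - 1])
        # Step S36: local motion with probability governed by delta; the
        # jitter range [-5, 5] mirrors the random number r of the text.
        mu = rng.uniform(0, 1, n1)
        jitter = rng.integers(-5, 6, (n1, 2)) * (mu > delta)[:, None]
        pos = np.clip(pos + jitter, [0, 0], [w - 1, h - 1])
    f = fitness(pos)
    return tuple(pos[np.argmax(f)])                   # best (x, y) found
```

The returned coordinate is only a seed; in the patent's pipeline it would be handed to OpenCV's grabCut for the actual segmentation.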
In the fourth embodiment, referring to fig. 1, in step S4 cardiovascular lesion features such as blood perfusion, myocardial metabolic activity, cardiac function parameters and coronary artery stenosis degree are extracted from the image segmented in step S3, and mean-variance standardization processing is performed on the extracted cardiovascular lesion features.
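The mean-variance standardization of step S4 is the usual z-score transform; a minimal sketch, with each row a sample and each column one feature (e.g. perfusion, metabolic activity, a function parameter, stenosis degree):

```python
import numpy as np

def standardize(features):
    # Mean-variance (z-score) standardization, column-wise per feature.
    features = np.asarray(features, dtype=np.float64)
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard: leave constant features at zero
    return (features - mu) / sigma
```

After this step every feature has zero mean and unit variance, which keeps the SVM of step S5 from being dominated by features with large numeric ranges.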
In the fifth embodiment, referring to fig. 1 and 5, the chaotic optimization auxiliary classification in step S5 specifically includes the following steps:
step S51: constructing a training data set and a test data set, collecting a cardiovascular disease public data set as sample data, wherein the data comprises cardiovascular disease characteristics and corresponding labels, the cardiovascular disease characteristics are characteristic vectors, the corresponding labels comprise health states, myocardial ischemia, myocardial infarction, myocardial inflammation and coronary artery stenosis, 70% of sample data are randomly selected as the training data set, and the rest 30% of sample data are selected as the test data set;
Step S52: initializing parameters, namely presetting the number n2 of the second individuals, the maximum iteration times T2 and the probability p, and setting the range of a penalty factor C and a Gaussian kernel function key parameter eta in the chaotic optimization auxiliary classification, wherein the parameters (C, eta) represent the positions of the second individuals;
Step S53: the second individual positions are initialized using the following tilted tent chaotic map:
x_(a2+1) = x_a2 / ρ, when 0 ≤ x_a2 < ρ; x_(a2+1) = (1 - x_a2) / (1 - ρ), when ρ ≤ x_a2 ≤ 1;
Where x_(a2+1) is the initial position of the (a2+1)-th second individual, x_a2 is the initial position of the a2-th second individual, ρ is a random number in (0, 1), a2 is the second individual index, and x_n is a random value in (0, 1) generated randomly for each second individual;
Step S54: constructing a chaotic optimal auxiliary classification model, calling an SVM function based on current parameters (C, eta) by using a python import sklearn library, training the chaotic optimal auxiliary classification model based on a training data set, and predicting sample data of a test data set by using the trained chaotic optimal auxiliary classification model;
Step S55: calculating a second fitness value using the formula:
Wherein s_a2 is the fitness value of the a2-th second individual, y1 is the true label, y2 is the predicted label, n3 is the number of sample data in the test dataset, and hy is the index of sample data in the test dataset;
step S56: selecting an optimal position, sorting the positions of all the second individuals according to the magnitude of the fitness value, and selecting the position of the second individual with the highest fitness value as the optimal position xbest;
Step S57: the second individual weight is calculated using the formula:
f_a2 = c × I_a2^g;
Wherein f_a2 is the weight of the a2-th second individual, c is the sensory modality, I_a2 is the stimulation intensity of the a2-th second individual, and g is a power exponent with value range [0, 1];
Step S58: judging a position updating mode, randomly generating a parameter q between 0 and 1 for each second individual, and if q > p, turning to a step S59; otherwise, go to step S510;
step S59: global location update, the formula used is as follows:
x_a2^(t+1) = x_a2^t + (q² × xbest - x_a2^t) × f_a2 × Cauchy(0, 1);
Where x_a2^(t+1) is the position of the a2-th second individual at the (t+1)-th iteration, x_a2^t is the position of the a2-th second individual at the t-th iteration, Cauchy(0, 1) is a random number drawn from the standard Cauchy distribution, and xbest is the second individual position with the highest fitness value, i.e. the optimal position;
step S510: local location update, the formula used is as follows:
x_a2^(t+1) = x_a2^t + (q² × (x_j^t - x_k^t) - x_a2^t) × f_a2 × Cauchy(0, 1);
Where x_a2^(t+1) is the position of the a2-th second individual at the (t+1)-th iteration, x_a2^t is the position of the a2-th second individual at the t-th iteration, x_j^t is the position of the j-th second individual at the t-th iteration, and x_k^t is the position of the k-th second individual at the t-th iteration;
Step S511: the second individual position with low fitness value is improved by the following specific steps:
Step S5111: calculating vertex fitness values: a polyhedron with n2+1 vertices is constructed in the n2-dimensional space of the second individuals, the fitness values of all vertices are calculated and ranked, the best point x_1, the second-best point x_2 and the worst point x_3 are determined, and the center point x_4 of the best point x_1 and the second-best point x_2 is calculated, using the following formula:
where s'(a2) is the fitness value of the a2-th vertex;
step S5112: the reflection point of the worst point is calculated using the following formula:
x5=x4+u(x4-x3);
Wherein x_5 is the reflection point of the worst point x_3, u is the reflection coefficient with a value of 1, and x_4 is the center point of the best point x_1 and the second-best point x_2;
Step S5113: if s'(x_5) < s'(x_1), the reflection direction is correct, and the process goes to step S5114; if s'(x_5) > s'(x_3), the reflection direction is incorrect, and the process goes to step S5115; if s'(x_1) < s'(x_5) < s'(x_3), the process goes to step S5116;
Step S5114: calculating the expansion point and replacing the worst point: an expansion operation is performed to obtain the expansion point x_6; if s'(x_6) < s'(x_1), the worst point x_3 is replaced by the expansion point x_6; otherwise, the worst point x_3 is replaced by the reflection point x_5; the following formula is used:
x_6 = x_4 + o × (x_5 - x_4);
Where x_6 is the expansion point, o is the expansion coefficient with a value of 1.5, and o × (x_5 - x_4) is the expansion coefficient o multiplied by the difference between the reflection point x_5 and the center point x_4 of the best point x_1 and the second-best point x_2;
Step S5115: calculating the compression point and replacing the worst point: a compression operation is performed to obtain the compression point x_7; if s'(x_7) < s'(x_3), the worst point x_3 is replaced by the compression point x_7; the following formula is used:
x_7 = x_4 + z(x_3 - x_4);
Where x_7 is the compression point, z is the compression coefficient with a value of 0.5, and z(x_3 - x_4) is the compression coefficient z multiplied by the difference between the worst point x_3 and the center point x_4 of the best point x_1 and the second-best point x_2;
Step S5116: calculating the shrinkage point and replacing the worst point: if s'(x_1) < s'(x_5) < s'(x_3), a shrinkage operation is performed to obtain the shrinkage point x_8; if s'(x_8) < s'(x_3), the worst point x_3 is replaced by the shrinkage point x_8; otherwise, the worst point x_3 is replaced by the reflection point x_5; the following formula is used:
x_8 = x_4 - v(x_3 - x_4);
Where x_8 is the shrinkage point, v is the shrinkage coefficient with a value of 0.5, and v(x_3 - x_4) is the shrinkage coefficient v multiplied by the difference between the worst point x_3 and the center point x_4 of the best point x_1 and the second-best point x_2;
step S512: updating the fitness value and the optimal position of the second individual, updating the fitness value of the second individual based on the improved position of the second individual, and updating the optimal position;
Step S513: establishing the model: a maximum iteration number and a fitness value threshold are preset; if the vertex fitness value is higher than the fitness value threshold, the optimal position is output, the chaotic optimization auxiliary classification model is constructed, the cardiovascular lesion features extracted in step S4 are input into the model, the data are classified, and the corresponding label is output; if the maximum iteration number is reached, the process goes to step S52; otherwise, the process goes to step S57.
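The simplex refinement of steps S5111 to S5116 can be sketched as a single Nelder-Mead-style step. Here s' is treated as a cost to minimize, which matches the comparisons in step S5113; the coefficients u = 1, o = 1.5, z = 0.5 and v = 0.5 follow the text, while the fallback returns are assumptions:

```python
import numpy as np

def simplex_step(s, x1, x2, x3, u=1.0, o=1.5, z=0.5, v=0.5):
    # x1: best point, x2: second-best point, x3: worst point (s minimized).
    x4 = (x1 + x2) / 2.0                    # center point of the two better vertices
    x5 = x4 + u * (x4 - x3)                 # S5112: reflection of the worst point
    if s(x5) < s(x1):                       # S5114: reflection correct -> try expansion
        x6 = x4 + o * (x5 - x4)
        return x6 if s(x6) < s(x1) else x5
    if s(x5) > s(x3):                       # S5115: reflection worse than worst -> compress
        x7 = x4 + z * (x3 - x4)
        return x7 if s(x7) < s(x3) else x3  # keep x3 if compression fails (assumption)
    x8 = x4 - v * (x3 - x4)                 # S5116: shrink toward the better side
    return x8 if s(x8) < s(x3) else x5
```

Each call returns a replacement for the worst vertex; iterating this over the polyhedron pulls low-fitness second individual positions toward better regions.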
By executing the above operations, to address the problem that the classification system easily falls into local optima, which reduces classification accuracy, slows convergence, and increases the risks of missed diagnosis and misdiagnosis, the tilted tent chaotic map is adopted to initialize the second individual positions, effectively improving their diversity; a Cauchy operator is introduced into the global and local position updates to update the second individual positions; and the simplex method is adopted to improve second individual positions with low fitness values, so that the classification system escapes local optima more easily and the precision and global search capability of the classification system are improved.
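Steps S52 to S510 can be sketched as below. The tilted tent map form, the fixed 0.1 damping factor standing in for the weight f_a2 of step S57, and the greedy acceptance rule are all assumptions; the SVM training and accuracy evaluation of steps S54 and S55 are abstracted into a user-supplied fitness callable over the parameter vector (C, eta):

```python
import numpy as np

def tilted_tent_init(n2, rho=0.7, seed=0):
    # Step S53 sketch: assumed tilted tent map
    # x_{k+1} = x_k / rho if x_k < rho else (1 - x_k) / (1 - rho).
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99)
    seq = []
    for _ in range(n2):
        x = x / rho if x < rho else (1 - x) / (1 - rho)
        seq.append(x)
    return np.array(seq)

def chaotic_search(fitness, lo, hi, n2=20, T2=40, p=0.5, seed=0):
    # Maximizes `fitness` (e.g. SVM test accuracy) over the box [lo, hi].
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pos = lo + tilted_tent_init(n2, seed=seed)[:, None] * (hi - lo)
    fit = np.array([fitness(x) for x in pos])
    for _ in range(T2):
        best = pos[np.argmax(fit)].copy()
        for a2 in range(n2):
            q = rng.uniform()
            step = rng.standard_cauchy(dim)        # Cauchy(0, 1) perturbation
            if q > p:                              # S59: global position update
                cand = pos[a2] + (q ** 2 * best - pos[a2]) * 0.1 * step
            else:                                  # S510: local update via two peers
                j, k = rng.integers(0, n2, 2)
                cand = pos[a2] + (q ** 2 * (pos[j] - pos[k]) - pos[a2]) * 0.1 * step
            cand = np.clip(cand, lo, hi)
            f = fitness(cand)
            if f > fit[a2]:                        # greedy acceptance (assumption)
                pos[a2], fit[a2] = cand, f
    return pos[np.argmax(fit)], float(fit.max())
```

In the patent's setting, `fitness` would train an SVM with parameters (C, eta) on the training set and return the test-set accuracy; the simplex refinement of step S511 could then be applied to the lowest-fitness positions.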
In the sixth embodiment, referring to fig. 6, based on the above embodiments, the ordinate is the position of the optimal solution of the second individuals and the abscissa is the number of iterations; the figure shows how the second individual positions continuously approach the position of the optimal solution as the number of iterations increases, so that the second individuals move toward a better search region, the classification system escapes local optima more easily, and the precision and global search capability of the classification system are improved.
In the seventh embodiment, referring to fig. 2, based on the above embodiments, the cardiovascular disease classification system based on SPECT image recognition provided by the invention includes an image acquisition module, a reflection enhancement visual optimization module, a dynamic pixel iterative optimization segmentation module, a cardiovascular lesion feature extraction module and a chaotic optimization auxiliary classification module;
The image acquisition module acquires a cardiovascular lesion SPECT image and sends the cardiovascular lesion SPECT image to the reflection enhancement visual optimization module;
the reflection enhancement visual optimization module receives the cardiovascular lesion SPECT image sent by the image acquisition module, performs filtering and gamma correction processing on the V-channel image, performs self-adaptive enhancement processing on the S-channel image, improves reflection components, and sends the enhanced image to the dynamic pixel iterative optimization segmentation module;
The dynamic pixel iterative optimization segmentation module receives the enhanced image sent by the reflection enhancement visual optimization module, analyzes the image by adopting a heuristic algorithm, reduces the calculation complexity, and sends the segmented image to the cardiovascular lesion feature extraction module;
The cardiovascular lesion feature extraction module receives the segmented image sent by the dynamic pixel iterative optimization segmentation module, extracts cardiovascular lesion features such as blood flow perfusion volume, myocardial metabolic activity, cardiac function parameters, coronary artery stenosis degree and the like from the segmented image, performs mean variance standardization processing on the extracted cardiovascular lesion features, and sends the cardiovascular lesion features after standardization processing to the chaotic optimization auxiliary classification module;
The chaotic optimization auxiliary classification module receives the normalized cardiovascular lesion features sent by the cardiovascular lesion feature extraction module, adopts the tilted tent chaotic map to initialize the second individual positions, introduces a Cauchy operator into the global and local position updates to update the second individual positions, and adopts the simplex method to improve second individual positions with low fitness values, so that the classification system can escape local optima more easily, improving the precision and global search capability of the classification system.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The invention and its embodiments have been described above with no limitation, and the actual construction is not limited to the embodiments of the invention as shown in the drawings. In summary, if one of ordinary skill in the art is informed by this disclosure, a structural manner and an embodiment similar to the technical solution should not be creatively devised without departing from the gist of the present invention.

Claims (7)

1. The cardiovascular disease classification method based on SPECT image recognition is characterized by comprising the following steps of: the method comprises the following steps:
step S1: collecting images, namely collecting a cardiovascular lesion SPECT image;
step S2: reflection enhancement visual optimization, filtering and gamma correction processing are carried out on the V-channel image, self-adaptive enhancement processing is carried out on the S-channel image, reflection components are improved, and image enhancement is achieved;
Step S3: the dynamic pixel iterative optimization segmentation uses the position of each first body to represent one pixel in the image, and the positions of the first bodies are continuously updated through the iterative process of an optimization algorithm so as to find the most suitable pixel segmentation threshold value and realize the accurate segmentation of the image;
Step S4: extracting cardiovascular lesion features, extracting the cardiovascular lesion features from the images based on the images segmented in the step S3, and carrying out mean variance standardization treatment on the extracted cardiovascular lesion features;
Step S5: chaotic optimization auxiliary classification, which comprises initializing the second individual positions in a non-repeatable mode by adopting the tilted tent chaotic map, introducing a Cauchy operator into the global and local position updates to randomly update the second individual positions, improving second individual positions with low fitness values by the simplex method, and finding the optimal values of the chaotic optimization auxiliary classification parameters to achieve accurate auxiliary classification.
2. The method for classifying cardiovascular disease based on SPECT image recognition of claim 1 wherein: in step S2, the image enhancement specifically includes the following steps:
Step S21: a first spatial conversion of converting the RGB color space into an HSV color space;
Step S22: v-channel image filtering, the formula used is as follows:
where ω_d(m, n) is the spatial-domain kernel function, ω_r(m, n) is the value-domain (range) kernel function, t'(x, y) is the filtered V-channel image, t(x, y) is the pixel value of the V-channel image to be processed at position (x, y), σ_d² is the variance of the spatial distance difference scale parameter, σ_r² is the variance of the pixel difference scale parameter, Ω(b, x, y) is the set of pixels in the input image centered on (x, y) with window side length 2b+1, b is the filter radius, m and n are pixel indices within Ω(b, x, y), × is the multiplication operator, t(m, n) is the pixel value of the V-channel image to be processed at position (m, n), d is the spatial distance difference scale parameter, r is the pixel difference scale parameter, and x and y are pixel indices of the V-channel image;
step S23: gamma correction, which is based on the filtered V-channel image, of the incident component using the following formula:
L'(x, y) = L^γ(x, y);
where L'(x, y) is the gamma-corrected incident component, γ is the gamma coefficient, and L^γ(x, y) is the filtered incident component of the V-channel image raised to the power γ, i.e. corrected based on the gamma coefficient;
Step S24: s channel self-adaptation enhancement, the formula is as follows:
where S' is the adaptively enhanced saturation, mean (R, G, B) is the average of the (R, G, B) color components of the corresponding pixel of the image, max (R, G, B) is the maximum of the (R, G, B) color components of the corresponding pixel of the image, min (R, G, B) is the minimum of the (R, G, B) color components of the corresponding pixel of the image, and S is the saturation of the original image;
Step S25: the second space conversion, the V channel image after gamma correction, the S channel image after self-adaption enhancement and the H channel image in the original image are synthesized into an HSV image again, and the HSV image is converted back into an RGB color space;
Step S26: the low-light image representation is represented by the following formula:
I(x,y)=O(x,y)×N(x,y);
Wherein I (x, y) is a low-illumination image, O (x, y) is an actual image after the second space conversion, and N (x, y) is an interference term;
Step S27: the reflection component is improved using the following formula:
Where I_k(i, j) is the improved reflection component, f_k(i, j) is the k-th reflection component, i and j are pixel indices of the actual image, and O(i, j) is the pixel value of the actual image at position (i, j) after the second spatial conversion.
3. The method for classifying cardiovascular disease based on SPECT image recognition of claim 1 wherein: in step S3, the image segmentation specifically includes the following steps:
Step S31: initializing parameters: the number of first individuals n1, the maximum iteration number T1, the pixel threshold α, the local movement threshold δ, the image width w and the image height h are preset;
Step S32: initializing the first individual positions: each first individual is randomly distributed on a pixel position of the image enhanced in step S2 according to the sizes of w and h, and the position of each first individual is represented by the parameter (c_a1 1, c_a1 2), where c_a1 1 ∈ [0, w], c_a1 2 ∈ [0, h], a1 ∈ [1, n1], and a1 is the first individual index;
Step S33: calculating a first individual fitness value using the following formula:
Where p(c_a1) is the fitness value of the a1-th first individual, c_a1 = (c_a1 1, c_a1 2), and I(c_a1 1, c_a1 2) is the pixel value of the image at the position (c_a1 1, c_a1 2) of the a1-th first individual;
Step S34: selecting an optimal position, sorting the positions of all the first individuals according to the magnitude of the fitness value, and selecting the position of the first individual with the highest fitness value as the optimal position cbest;
step S35: global motion, the formula used is as follows:
λ=rand(0,d(ca1,cbest));
ca1'=ca1±λ;
Where c_a1' is the position of the a1-th first individual after the global motion, λ is a random vector, d(c_a1, cbest) is the Euclidean distance between the position c_a1 of the a1-th first individual and the optimal position cbest, ± indicates that the random step may be added or subtracted, with the result kept within the image range, and (c_a1 1, c_a1 2) and (cbest1, cbest2) are the position coordinates in the image of the a1-th first individual c_a1 and the optimal position cbest, respectively, with c_a1 = (c_a1 1, c_a1 2) and cbest = (cbest1, cbest2);
Step S36: a local motion, wherein a parameter mu between 0 and 1 is randomly generated for each first individual, and if mu > delta, the first individual performs the local motion; otherwise, the first individual does not perform a local motion, using the formula:
Wherein c_a1' is the position of the a1-th first individual after the local motion, ω is the moving speed, r is a random number in the range [-5, 5], σ_0 and σ_1 are the angles of the local motion with value range [0, 2π], and cz and cw are two different preset positions, respectively;
Step S37: updating the fitness value and the optimal position of the first individual, updating the fitness value of the first individual based on the position of the first individual after the local movement, and updating the optimal position;
Step S38: judging whether the maximum iteration times are reached, if the maximum iteration times T1 are reached, outputting the optimal position, and calling grabCut functions based on the output optimal position by using a python imported OpenCV library to divide images; if the maximum number of iterations is not reached, go to step S35 for iteration.
4. The method for classifying cardiovascular disease based on SPECT image recognition of claim 1 wherein: in step S5, the cardiovascular lesion classification specifically includes the following steps:
step S51: constructing a training data set and a test data set, collecting a cardiovascular disease public data set as sample data, wherein the data comprises cardiovascular disease characteristics and corresponding labels, the cardiovascular disease characteristics are characteristic vectors, the corresponding labels comprise health states, myocardial ischemia, myocardial infarction, myocardial inflammation and coronary artery stenosis, 70% of sample data are randomly selected as the training data set, and the rest 30% of sample data are selected as the test data set;
Step S52: initializing parameters, namely presetting the number n2 of the second individuals, the maximum iteration times T2 and the probability p, and setting the range of a penalty factor C and a Gaussian kernel function key parameter eta in the chaotic optimization auxiliary classification, wherein the parameters (C, eta) represent the positions of the second individuals;
Step S53: the second individual location is initialized using the following formula:
Where x_(a2+1) is the initial position of the (a2+1)-th second individual, x_a2 is the initial position of the a2-th second individual, ρ is a random number in (0, 1), a2 is the second individual index, and x_n is a random value in (0, 1) generated randomly for each second individual;
Step S54: constructing a chaotic optimal auxiliary classification model, calling an SVM function based on current parameters (C, eta) by using a python import sklearn library, training the chaotic optimal auxiliary classification model based on a training data set, and predicting sample data of a test data set by using the trained chaotic optimal auxiliary classification model;
Step S55: calculating a second fitness value using the formula:
wherein s_a2 is the fitness value of the a2-th second individual, y1 is the true label, y2 is the predicted label, n3 is the number of sample data in the test dataset, and hy is the index of sample data in the test dataset;
Step S56: select the optimal position: sort the positions of all second individuals by fitness value and take the position of the second individual with the highest fitness value as the optimal position xbest;
Step S57: calculate the second individual weight using the following formula:
f_{a2} = c × I_{a2}^g;
where f_{a2} is the weight of the a2-th second individual, c is the sensory modality, I_{a2} is the stimulus intensity of the a2-th second individual, and g is a power exponent in the range [0, 1];
Step S58: determine the position update mode: for each second individual, randomly generate a parameter q between 0 and 1; if q > p, go to step S59; otherwise, go to step S510;
Step S59: global position update, using the following formula:
x_{a2}^{t+1} = x_{a2}^t + (q^2 × xbest − x_{a2}^t) × f_{a2} × Cauchy(0, 1);
where x_{a2}^{t+1} is the position of the a2-th second individual at iteration t+1, x_{a2}^t is the position of the a2-th second individual at iteration t, Cauchy(0, 1) is the standard Cauchy probability distribution function, and xbest is the second individual position with the highest fitness value, i.e. the optimal position;
Step S510: local position update, using the following formula:
x_{a2}^{t+1} = x_{a2}^t + (q^2 × (x_j^t − x_k^t) − x_{a2}^t) × f_{a2} × Cauchy(0, 1);
where x_{a2}^{t+1} is the position of the a2-th second individual at iteration t+1, x_{a2}^t is the position of the a2-th second individual at iteration t, x_j^t is the position of the j-th second individual at iteration t, and x_k^t is the position of the k-th second individual at iteration t;
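Steps S57 to S510 can be sketched for a scalar position as follows. The weight f_{a2} = c × I_{a2}^g and the Cauchy-perturbed global/local moves follow the claim; reading "q2" as the square of the random parameter q is an assumption:

```python
import math
import random

def cauchy01():
    # Sample from the standard Cauchy(0, 1) distribution via the inverse CDF
    return math.tan(math.pi * (random.random() - 0.5))

def weight(c, intensity, g):
    # Step S57: f_a2 = c * I_a2 ** g
    return c * intensity ** g

def update_position(x, xbest, xj, xk, f, p):
    """One pass of steps S58-S510 for a scalar position x.

    q > p triggers the global move toward xbest (S59); otherwise a local
    move driven by two other individuals xj, xk (S510). Both moves are
    scaled by the weight f and perturbed by a Cauchy-distributed factor.
    """
    q = random.random()
    if q > p:   # Step S59: global position update
        step = (q * q * xbest - x) * f * cauchy01()
    else:       # Step S510: local position update
        step = (q * q * (xj - xk) - x) * f * cauchy01()
    return x + step
```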
Step S511: the second individual position with low fitness value is improved by the following specific steps:
Step S5111: calculating vertex fitness values, constructing n < 2+ > 1 vertex polyhedrons in an n < 2 > dimensional space of a second individual, calculating fitness values of all vertices, ranking single fitness values, determining an optimal point x 1, a secondary advantage x 2 and a worst point x 3, and calculating center points x 4 of the optimal point x 1 and the secondary advantage x 2, wherein the following formula is used:
where s'(a2) is the fitness value of the a2-th vertex;
Step S5112: calculate the reflection point of the worst point using the following formula:
x_5 = x_4 + u × (x_4 − x_3);
where x_5 is the reflection point of the worst point x_3, u is the reflection coefficient with a value of 1, and x_4 is the center point of the best point x_1 and the second-best point x_2;
Step S5113: if s'(x_5) < s'(x_1), the reflection direction is correct; go to step S5114. If s'(x_5) > s'(x_3), the reflection direction is incorrect; go to step S5115. If s'(x_1) < s'(x_5) < s'(x_3), go to step S5116;
Step S5114: calculate the expansion point and replace the worst point: perform the expansion operation to obtain the expansion point x_6; if s'(x_6) < s'(x_1), replace the worst point x_3 with the expansion point x_6; otherwise, replace the worst point x_3 with the reflection point x_5; the following formula is used:
x_6 = x_4 + o × (x_5 − x_4);
where x_6 is the expansion point, o is the expansion coefficient with a value of 1.5, and o × (x_5 − x_4) is the expansion coefficient o multiplied by the difference between the reflection point x_5 and the center point x_4 of the best point x_1 and the second-best point x_2;
Step S5115: calculate the compression point and replace the worst point: perform the compression operation to obtain the compression point x_7; if s'(x_7) < s'(x_3), replace the worst point x_3 with the compression point x_7; the following formula is used:
x_7 = x_4 + z × (x_3 − x_4);
where x_7 is the compression point, z is the compression coefficient with a value of 0.5, and z × (x_3 − x_4) is the compression coefficient z multiplied by the difference between the worst point x_3 and the center point x_4 of the best point x_1 and the second-best point x_2;
Step S5116: calculate the contraction point and replace the worst point: if s'(x_1) < s'(x_5) < s'(x_3), perform the contraction operation to obtain the contraction point x_8; if s'(x_8) < s'(x_3), replace the worst point x_3 with the contraction point x_8; otherwise, replace the worst point x_3 with the reflection point x_5; the following formula is used:
x_8 = x_4 − v × (x_3 − x_4);
where x_8 is the contraction point, v is the contraction coefficient with a value of 0.5, and v × (x_3 − x_4) is the contraction coefficient v multiplied by the difference between the worst point x_3 and the center point x_4 of the best point x_1 and the second-best point x_2;
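Steps S5111 to S5116 together form one simplex (Nelder-Mead style) pass. A sketch for one-dimensional points, treating a smaller s' as better (the convention implied by the comparisons in the claim), might be:

```python
def simplex_improve(points, fitness, u=1.0, o=1.5, z=0.5, v=0.5):
    """One simplex pass over steps S5111-S5116 for 1-D points.

    `fitness` plays the role of s'; a smaller value is treated as better.
    Returns the points with the worst one replaced where the claim says so.
    """
    pts = sorted(points, key=fitness)
    x1, x2, x3 = pts[0], pts[1], pts[-1]   # best, second-best, worst
    x4 = (x1 + x2) / 2.0                   # S5111: center of x1 and x2
    x5 = x4 + u * (x4 - x3)                # S5112: reflection point
    if fitness(x5) < fitness(x1):          # S5114: expansion
        x6 = x4 + o * (x5 - x4)
        pts[-1] = x6 if fitness(x6) < fitness(x1) else x5
    elif fitness(x5) > fitness(x3):        # S5115: compression
        x7 = x4 + z * (x3 - x4)
        if fitness(x7) < fitness(x3):
            pts[-1] = x7
    else:                                  # S5116: contraction
        x8 = x4 - v * (x3 - x4)
        pts[-1] = x8 if fitness(x8) < fitness(x3) else x5
    return pts
```

For the patent's two-dimensional (C, eta) positions the same operations would be applied componentwise to vectors.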
Step S512: update the fitness values and the optimal position of the second individuals: update the fitness value of each second individual based on its improved position, and update the optimal position;
Step S513: establish the model: preset a maximum number of iterations and a fitness value threshold; if the vertex fitness value is higher than the fitness value threshold, output the optimal position, construct the chaotic optimization auxiliary classification model, input the cardiovascular lesion features extracted in step S4 into the model, classify the data, and output the corresponding labels; if the maximum number of iterations is reached, go to step S52; otherwise, go to step S57.
5. The method for classifying cardiovascular disease based on SPECT image recognition of claim 1, wherein: in step S4, extracting cardiovascular lesion features means, based on the image segmented in step S3, extracting the cardiovascular lesion features of blood flow perfusion volume, myocardial metabolic activity, cardiac function parameters and coronary artery stenosis degree from the image, and performing mean-variance standardization on the extracted cardiovascular lesion features.
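The mean-variance standardization named in claim 5 is ordinary z-score scaling of each feature column; a minimal sketch:

```python
def mean_variance_standardize(features):
    """Mean-variance (z-score) standardization of one feature column, as
    applied in step S4: subtract the column mean and divide by the
    column standard deviation.
    """
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / n
    std = var ** 0.5 or 1.0   # guard against a constant feature column
    return [(x - mean) / std for x in features]
```

Each of the four lesion-feature columns would be standardized independently before being fed to the classifier.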
6. A cardiovascular disease classification system based on SPECT image recognition for implementing the cardiovascular disease classification method based on SPECT image recognition as claimed in any one of claims 1-5, characterized in that: the system comprises an image acquisition module, a reflection enhancement visual optimization module, a dynamic pixel iterative optimization segmentation module, a cardiovascular lesion feature extraction module and a chaotic optimization auxiliary classification module.
7. The cardiovascular disease classification system based on SPECT image recognition of claim 6, wherein: the image acquisition module acquires a cardiovascular lesion SPECT image and sends it to the reflection enhancement visual optimization module;
the reflection enhancement visual optimization module receives the cardiovascular lesion SPECT image sent by the image acquisition module, performs filtering and gamma correction on the V-channel image, performs adaptive enhancement on the S-channel image to improve the reflection component, and sends the enhanced image to the dynamic pixel iterative optimization segmentation module;
the dynamic pixel iterative optimization segmentation module receives the enhanced image sent by the reflection enhancement visual optimization module, analyzes the image with a heuristic algorithm to reduce the computational complexity, and sends the segmented image to the cardiovascular lesion feature extraction module;
the cardiovascular lesion feature extraction module receives the segmented image sent by the dynamic pixel iterative optimization segmentation module, extracts the cardiovascular lesion features of blood flow perfusion volume, myocardial metabolic activity, cardiac function parameters and coronary artery stenosis degree from the segmented image, performs mean-variance standardization on the extracted cardiovascular lesion features, and sends the standardized features to the chaotic optimization auxiliary classification module;
the chaotic optimization auxiliary classification module receives the standardized cardiovascular lesion features sent by the cardiovascular lesion feature extraction module, initializes the second individual positions with a tent chaotic map, introduces a Cauchy operator into the global position update and the local position update to update the second individual positions, and uses the simplex method to improve the second individual positions with low fitness values.
CN202310950157.XA 2023-07-31 2023-07-31 Cardiovascular disease classification method and system based on SPECT image recognition Active CN116935133B (en)

Publications (2)

Publication Number Publication Date
CN116935133A CN116935133A (en) 2023-10-24
CN116935133B true CN116935133B (en) 2024-04-16

Family

ID=88392284

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117314785A (en) * 2023-10-30 2023-12-29 阿尔麦德智慧医疗(湖州)有限公司 AI-based ultrasound contrast diagnosis auxiliary system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109544540A (en) * 2018-11-28 2019-03-29 东北大学 A kind of diabetic retina picture quality detection method based on image analysis technology
CN114819038A (en) * 2022-04-14 2022-07-29 中国人民解放军空军工程大学 Target clustering method for improving image cluster algorithm based on Gaussian mapping and mixed operator
WO2023005069A1 (en) * 2021-07-27 2023-02-02 深圳市赛禾医疗技术有限公司 Ultrasonic image processing method and apparatus, and electronic device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20070139054A1 (en) * 2005-12-21 2007-06-21 Tufillaro Nicholas B Stimulation-response measurement system and method using a chaotic lock-in amplifier

Non-Patent Citations (3)

Title
A modified Whale Optimization Algorithm based on Chaos Initialization and Regulation Operation; Jiang Ruiye et al.; 2019 Chinese Control Conference (CCC); pp. 1-6 *
Research on automatic classification and diagnosis technology for ECG signals; Zhao Ling; China Master's Theses Full-text Database, Medicine and Health Sciences; p. E062-10 *
Experience in diagnosing and treating cerebro-cardiac syndrome after severe cerebrovascular disease; Li Yanping et al.; China Practical Medicine; pp. 100-101 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant