CN114742801A - B-ultrasonic image intelligent identification method and device based on PSO-SVM algorithm - Google Patents


Info

Publication number
CN114742801A
CN114742801A (application number CN202210408175.0A)
Authority
CN
China
Prior art keywords
ultrasonic image
function
feature
target
image
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202210408175.0A
Other languages
Chinese (zh)
Inventor
高博
李雯玥
刘近近
胡鑫
王晓庆
周建群
季敏娴
Current Assignee (listed assignees may be inaccurate)
Guangdong No 2 Peoples Hospital
Original Assignee
Guangdong No 2 Peoples Hospital
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Guangdong No 2 Peoples Hospital filed Critical Guangdong No 2 Peoples Hospital
Priority to CN202210408175.0A
Publication of CN114742801A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence and discloses a B-mode ultrasonic image intelligent identification method and device based on a PSO-SVM algorithm, together with electronic equipment and a storage medium. The method comprises the following steps: acquiring a training B-mode ultrasonic image, identifying a target region of the training B-mode ultrasonic image, and extracting spatial-domain and frequency-domain texture features of the target region; performing feature normalization on the spatial-domain and frequency-domain texture features to obtain target texture features; constructing a classification hyperplane function of the training B-mode ultrasonic image from the target texture features, and optimizing the classification hyperplane function with a preset particle swarm algorithm to obtain an optimized classification function; and identifying the image category of a B-mode ultrasonic image to be detected with the optimized classification function to obtain its identification result. The invention can improve the identification accuracy of B-mode ultrasonic images.

Description

B-ultrasonic image intelligent identification method and device based on PSO-SVM algorithm
Technical Field
The invention relates to the field of artificial intelligence, in particular to a B-mode ultrasonic image intelligent identification method and device based on a PSO-SVM algorithm, electronic equipment and a computer readable storage medium.
Background
B-mode ultrasonic image recognition refers to the process of classifying images acquired by B-mode ultrasonic equipment in the medical field; recognizing these images can assist doctors in making better disease diagnoses.
At present, B-mode ultrasonic image recognition is usually realized with artificial neural networks. However, an artificial neural network generally achieves a good classification effect only when enough training samples exist, and in the medical field the training samples of B-mode ultrasonic images are usually limited, so the recognition accuracy of subsequent B-mode ultrasonic image recognition cannot be guaranteed.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a B-mode ultrasonic image intelligent identification method and device based on a PSO-SVM algorithm, electronic equipment, and a computer-readable storage medium, which can improve the identification accuracy of B-mode ultrasonic images.
In a first aspect, the invention provides a B-mode ultrasonic image intelligent identification method based on a PSO-SVM algorithm, which comprises the following steps:
acquiring a training B ultrasonic image, identifying a target region of the training B ultrasonic image, and extracting a space domain texture feature and a frequency domain texture feature of the target region;
performing feature normalization on the spatial domain texture features and the frequency domain texture features to obtain target texture features;
constructing a classification hyperplane function of the training B ultrasonic image according to the target texture characteristics, and optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function;
and identifying the image category of the B-ultrasonic image to be detected by using the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
In one possible implementation manner of the first aspect, the identifying a target region of the training B-mode ultrasound image includes:
inputting the training B ultrasonic image into a pre-trained region detection model, extracting the characteristics of the training B ultrasonic image through a convolution layer in the region detection model to obtain a characteristic image, and standardizing the characteristic image by using a standard layer in the region detection model to obtain a standard image;
performing pooling processing on the standard image by using a pooling layer in the region detection model to obtain a pooled image;
and identifying the region type of the pooled image by using a full-connected layer in the region detection model, and outputting the target region of the training B-mode ultrasonic image by using an output layer in the region detection model according to the region type.
In a possible implementation manner of the first aspect, the extracting the spatial domain texture feature and the frequency domain texture feature of the target region includes:
determining texture feature parameters of the target area, constructing a spatial domain co-occurrence matrix of the target area according to the texture feature parameters, and extracting spatial domain texture features of the target area according to the spatial domain co-occurrence matrix;
and performing wavelet decomposition on the target region to obtain a decomposed image, calculating the global features of the decomposed image, and deleting redundant feature information of the global features by using a preset dimension reduction algorithm to obtain the frequency domain texture features of the target region.
In a possible implementation manner of the first aspect, the performing feature normalization on the spatial domain texture feature and the frequency domain texture feature to obtain a target texture feature includes:
determining a normalized region of the spatial domain texture features and the frequency domain texture features;
respectively mapping the spatial domain texture features and the frequency domain texture features to the normalization region by using a preset normalization algorithm to obtain spatial domain normalization features and frequency domain normalization features;
and taking the spatial domain normalized feature and the frequency domain normalized feature as the target texture feature.
In a possible implementation manner of the first aspect, the constructing a classification hyperplane function of the training B-mode ultrasound image according to the target texture feature includes:
mapping the target texture features to preset vector coordinates to obtain feature vector coordinates;
calculating the coordinate distance between any two vector coordinates in the feature vector coordinates, and selecting the feature vector coordinate with the minimum coordinate distance as a target feature coordinate;
and constructing a boundary function of the training B-mode ultrasonic image according to the target characteristic coordinates, and constructing the classified hyperplane function according to the boundary function.
In a possible implementation manner of the first aspect, the optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function includes:
initializing a particle swarm of the classified hyperplane function, and calculating the current position and the current speed of the particle swarm in the classified hyperplane function by using the preset particle swarm algorithm;
and judging whether the particle swarm is in the global optimal position of the classification hyperplane function or not according to the current position and the current speed, and obtaining the optimized classification function when the particle swarm is in the global optimal position of the classification hyperplane function.
In a second aspect, the invention provides a B-mode ultrasonic image intelligent recognition device based on a PSO-SVM algorithm, comprising:
the texture feature extraction module is used for acquiring a training B ultrasonic image, identifying a target region of the training B ultrasonic image and extracting a space domain texture feature and a frequency domain texture feature of the target region;
the texture feature normalization module is used for performing feature normalization on the spatial domain texture features and the frequency domain texture features to obtain target texture features;
the classification function generation module is used for constructing a classification hyperplane function of the training B ultrasonic image according to the target texture characteristics and optimizing the classification hyperplane function by utilizing a preset particle swarm algorithm to obtain an optimized classification function;
and the B-ultrasonic image identification module is used for identifying the image category of the B-ultrasonic image to be detected by utilizing the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
In a third aspect, the present invention provides an electronic device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the B-mode ultrasonic image intelligent identification method based on the PSO-SVM algorithm as described in any one of the above first aspects.
In a fourth aspect, the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for intelligently recognizing a B-mode ultrasonic image based on a PSO-SVM algorithm as described in any one of the first aspects above is implemented.
Compared with the prior art, the technical principle and the beneficial effects of the scheme are as follows:
according to the scheme, firstly, the identification accuracy of the B ultrasonic image can be improved by identifying the target area of the training B ultrasonic image, the spatial domain texture characteristics and the frequency domain texture characteristics of the target area are extracted, the texture characteristics of the target area under different domain states can be obtained, the training capability of a subsequent classification hyperplane function is guaranteed, and therefore the identification capability of the B ultrasonic image can be improved; secondly, the spatial domain texture features and the frequency domain texture features are subjected to feature normalization to obtain target texture features, the texture features of the target area in different domain states can be obtained, the training capability of a subsequent classification hyperplane function is guaranteed, and therefore the recognition capability of the B-mode ultrasonic image can be improved; furthermore, according to the embodiment of the invention, the classified hyperplane function of the training B-mode ultrasound image is constructed according to the target texture features, the classified hyperplane function is optimized by using a preset particle swarm algorithm to obtain an optimized classification function, so that the image category of the B-mode ultrasound image to be detected is identified, the identification result of the B-mode ultrasound image to be detected is obtained, the image classification identification capability of the classified hyperplane function can be ensured, the training of a model through a large number of training samples is avoided, and the identification accuracy of the B-mode ultrasound image is ensured. 
Therefore, the B-ultrasonic image intelligent identification method, the B-ultrasonic image intelligent identification device, the electronic equipment and the storage medium based on the PSO-SVM algorithm can improve the identification accuracy of the B-ultrasonic image.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a B-mode ultrasonic image intelligent recognition method based on a PSO-SVM algorithm according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart illustrating a step of the intelligent B-mode ultrasonic image recognition method based on the PSO-SVM algorithm shown in FIG. 1 according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating another step of the intelligent B-mode ultrasonic image recognition method based on the PSO-SVM algorithm shown in FIG. 1 according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a B-mode ultrasonic image intelligent recognition apparatus based on a PSO-SVM algorithm according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of an electronic device for implementing a B-mode ultrasonic image intelligent recognition method based on a PSO-SVM algorithm according to an embodiment of the present invention.
Detailed Description
It should be understood that the detailed description and specific examples, while indicating the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the invention provides a B-mode ultrasonic image intelligent recognition method based on the PSO-SVM algorithm. The execution subject of the method includes, but is not limited to, at least one of a server, a terminal, or other electronic equipment that can be configured to execute the method provided by the embodiment of the invention. In other words, the method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server, or a cloud server cluster. The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, web services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Fig. 1 is a schematic flow chart of a B-mode ultrasonic image intelligent recognition method based on a PSO-SVM algorithm according to an embodiment of the present invention. The B-mode ultrasonic image intelligent identification method based on the PSO-SVM algorithm described in the figure 1 comprises the following steps:
s1, acquiring a training B-ultrasonic image, identifying a target area of the training B-ultrasonic image, and extracting the spatial domain texture feature and the frequency domain texture feature of the target area.
In the embodiment of the invention, the training B-mode ultrasound image refers to a historical B-mode ultrasound image set and is used for realizing the construction and training of a subsequent model, such as a common liver B-mode ultrasound image, an abdomen B-mode ultrasound image, a heart B-mode ultrasound image and the like.
Furthermore, the embodiment of the invention identifies the target region of the training B-mode ultrasonic image in order to acquire the region of interest, that is, the region that determines the category of the training B-mode ultrasonic image, thereby improving the efficiency of subsequent image processing.
As an embodiment of the present invention, the identifying a target region of the training B-mode ultrasound image includes: inputting the training B-mode ultrasonic image into a pre-trained region detection model, performing feature extraction on the training B-mode ultrasonic image through a convolution layer in the region detection model to obtain a feature image, standardizing the feature image by using a standard layer in the region detection model to obtain a standard image, performing pooling processing on the standard image by using a pooling layer in the region detection model to obtain a pooled image, identifying the region category of the pooled image by using a full-connection layer in the region detection model, and outputting the target region of the training B-mode ultrasonic image by using an output layer in the region detection model according to the region category.
The pre-trained region detection model is a model trained on training data that has good region detection capability; in the invention, the region detection model can be constructed with the YOLOv3 algorithm.
Further, in an optional embodiment of the present invention, feature extraction of the training B-mode ultrasonic image is implemented by the convolution kernels in the convolution layer; normalization of the feature image may be implemented by a cross-connection module in the standard layer; pooling of the standard image may be implemented by a pooling function in the pooling layer, such as a maximum or minimum pooling function; and identification of the region category may be implemented by an activation function in the fully-connected layer, such as the softmax function.
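To make the layer-by-layer flow concrete, the following numpy sketch walks a toy patch through the convolution, standardization, pooling, fully-connected, and output stages. It is only an illustration: the weights are random, the "standard layer" is modeled as plain zero-mean/unit-variance standardization, and the two region categories are hypothetical; the actual region detection model in the method is a trained network (e.g. a YOLOv3-scale detector), not this toy.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling."""
    h = (img.shape[0] // size) * size
    w = (img.shape[1] // size) * size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
patch = rng.random((16, 16))                       # stand-in for a B-mode ultrasound patch
feat = conv2d(patch, rng.random((3, 3)))           # convolution layer -> feature image (14x14)
std = (feat - feat.mean()) / (feat.std() + 1e-8)   # standard layer -> standardized image
pooled = max_pool(std)                             # pooling layer -> pooled image (7x7)
w_fc = rng.random((2, pooled.size))                # fully-connected layer, 2 region classes
scores = softmax(w_fc @ pooled.ravel())            # class probabilities
region_class = int(scores.argmax())                # output layer: predicted region category
```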
Furthermore, the spatial domain texture features and the frequency domain texture features of the target region are extracted to obtain the texture features of the target region in different domain states, so that the training capability of a subsequent classification hyperplane function is guaranteed, and the recognition capability of the B-mode ultrasound image can be improved.
As an embodiment of the present invention, referring to fig. 2, the extracting the spatial domain texture feature and the frequency domain texture feature of the target region includes:
s201, determining texture feature parameters of the target area, constructing a spatial domain co-occurrence matrix of the target area according to the texture feature parameters, and extracting spatial domain texture features of the target area according to the spatial domain co-occurrence matrix;
s202, carrying out wavelet decomposition on the target area to obtain a decomposed image, calculating the global feature of the decomposed image, and deleting redundant feature information of the global feature by using a preset dimension reduction algorithm to obtain the frequency domain texture feature of the target area.
The texture feature parameters refer to information dimensions used for representing subsequent spatial domain texture features, and include parameters such as contrast, correlation, contrast moment, entropy and the like, and the spatial domain co-occurrence matrix refers to a feature distribution matrix of a target region constructed on the basis of the texture feature parameters.
Further, in an optional embodiment of the present invention, the spatial domain co-occurrence matrix of the target region may be constructed by a gray level co-occurrence matrix algorithm, the spatial domain texture feature may be obtained by a variance and a mean of texture feature parameters at different angles in the spatial domain co-occurrence matrix, and optionally, the angles may be set to matrix directions of 0 degree, 45 degrees, 90 degrees, and 135 degrees.
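The spatial-domain feature extraction described above can be sketched in plain numpy: build a gray-level co-occurrence matrix for each of four angular offsets, compute per-matrix statistics, and take the mean and variance over the angles. The toy 4-level image, the offset-to-angle mapping, and the particular statistics (contrast, inverse difference moment, entropy, energy) are illustrative assumptions standing in for the patent's parameter list.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                p[img[y, x], img[y2, x2]] += 1
    return p / p.sum()

def glcm_features(p):
    """Contrast, inverse difference moment, entropy, energy of a GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    idm = np.sum(p / (1.0 + (i - j) ** 2))
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    energy = np.sum(p ** 2)
    return np.array([contrast, idm, entropy, energy])

# offsets approximating the 0, 45, 90 and 135 degree matrix directions
offsets = [(1, 0), (1, 1), (0, 1), (-1, 1)]
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])                     # toy 4-gray-level "target region"
per_angle = np.array([glcm_features(glcm(img, dx, dy, 4)) for dx, dy in offsets])
# spatial-domain texture feature: mean and variance of each statistic over the angles
spatial_feature = np.concatenate([per_angle.mean(axis=0), per_angle.var(axis=0)])
```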
Further, in an optional embodiment of the present invention, the wavelet decomposition of the target region may be set as a 3-layer decomposition, the global feature of the decomposed image may be obtained by calculating an energy variance of each image in the decomposed image, and the preset dimension reduction algorithm includes a principal component analysis method.
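A minimal sketch of the frequency-domain pipeline, assuming a Haar wavelet for the 3-layer decomposition (the patent does not fix the wavelet basis, so Haar is an assumption) and plain SVD-based principal component analysis for the redundancy-removing dimension reduction:

```python
import numpy as np

def haar_step(img):
    """One level of 2-D Haar decomposition -> LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2          # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def wavelet_energy_features(region, levels=3):
    """Energy (mean square) and variance of every sub-band of a 3-level decomposition."""
    bands, ll = [], region
    for _ in range(levels):
        ll, lh, hl, hh = haar_step(ll)
        bands += [lh, hl, hh]
    bands.append(ll)                               # 3 levels x 3 detail bands + final LL = 10 bands
    return np.array([[(b ** 2).mean(), b.var()] for b in bands]).ravel()

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components (redundancy removal)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

rng = np.random.default_rng(1)
regions = [rng.random((32, 32)) for _ in range(6)]              # stand-in target regions
raw = np.vstack([wavelet_energy_features(r) for r in regions])  # 6 samples x 20 raw features
freq_features = pca_reduce(raw, k=4)               # frequency-domain texture features
```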
And S2, carrying out feature normalization on the spatial domain texture features and the frequency domain texture features to obtain target texture features.
According to the embodiment of the invention, the spatial-domain and frequency-domain texture features are normalized so as to map them to the same value range, eliminating the impact on the accuracy of subsequent data processing that would otherwise be caused by texture features occupying different value ranges.
As an embodiment of the present invention, the performing feature normalization on the spatial domain texture feature and the frequency domain texture feature to obtain a target texture feature includes: determining normalization regions of the space domain texture features and the frequency domain texture features, mapping the space domain texture features and the frequency domain texture features to the normalization regions respectively by using a preset normalization algorithm to obtain space domain normalization features and frequency domain normalization features, and taking the space domain normalization features and the frequency domain normalization features as the target texture features.
The normalized region refers to the value range to which the spatial-domain and frequency-domain texture features finally need to be mapped, and may be set to [0, 1].
Further, in an optional embodiment of the present invention, the preset normalization algorithm includes:
x* = (x - min) / (max - min)
wherein x* represents the normalized feature, x represents the feature to be normalized, and min and max represent the minimum and maximum values of the features to be normalized.
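In code, the preset normalization algorithm is ordinary min-max scaling; the feature values below are made up for illustration:

```python
import numpy as np

def min_max_normalize(x):
    """x* = (x - min) / (max - min): map a feature vector into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

spatial = np.array([3.0, 7.0, 11.0])       # illustrative spatial-domain features
frequency = np.array([0.2, 0.5, 0.9])      # illustrative frequency-domain features
# target texture feature: both normalized feature groups, now on the same scale
target_texture = np.concatenate([min_max_normalize(spatial),
                                 min_max_normalize(frequency)])
print(min_max_normalize(spatial))  # -> [0.  0.5 1. ]
```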
S3, constructing a classification hyperplane function of the training B-mode ultrasonic image according to the target texture features, and optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function.
According to the embodiment of the invention, the classification hyperplane function of the training B-mode ultrasonic image is constructed according to the target texture characteristics, so that the classification premise of the subsequent B-mode ultrasonic image is ensured. As an embodiment of the present invention, referring to fig. 3, the constructing a classification hyperplane function of the training B-mode ultrasound image according to the target texture features includes:
s301, mapping the target texture features to preset vector coordinates to obtain feature vector coordinates;
s302, calculating the coordinate distance between any two vector coordinates in the feature vector coordinates, and selecting the feature vector coordinate with the minimum coordinate distance as a target feature coordinate;
s303, constructing a boundary function of the training B-mode ultrasonic image according to the target feature coordinates, and constructing the classification hyperplane function according to the boundary function.
The preset vector coordinates provide the classification basis for each of the target texture features; the coordinate distance is used for selecting associated elements among the feature vector coordinates so as to guarantee the basis for constructing the subsequent boundary functions; and the boundary functions comprise a left boundary function and a right boundary function, which are used for realizing the classification of images.
Further, in an optional embodiment of the present invention, the boundary function of the training B-mode ultrasound image is constructed by using the following formula:
w · x = 1,  w · x = -1
wherein x is the target feature coordinate and w is a fixed parameter.
Further, in an optional embodiment of the present invention, the classification hyperplane function is constructed by using the following formula:
f(x) = sign( Σ_{i=1..n} a_i · y_i · K(x_i, x) + b )
wherein f(x) represents the classification hyperplane function, sign represents the boundary (sign) function, n represents the number of target texture features, a_i represents the ith Lagrange multiplier, K represents the kernel function, b represents the bias weight, and x_i and y_i represent the vector coordinates of the target texture features.
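The decision function can be sketched directly from the formula above. The support vectors, labels, Lagrange multipliers, RBF kernel choice, and bias used below are illustrative values, not the result of any training:

```python
import numpy as np

def rbf_kernel(u, v, gamma=1.0):
    """Gaussian (RBF) kernel, one common choice for K."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def svm_decision(x, support_vectors, labels, alphas, b, gamma=1.0):
    """f(x) = sign( sum_i a_i * y_i * K(x_i, x) + b )."""
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return int(np.sign(s + b))

# illustrative (not trained) support vectors, labels, multipliers and bias
support_vectors = np.array([[0.0, 0.0], [2.0, 2.0]])
labels = np.array([-1.0, 1.0])
alphas = np.array([1.0, 1.0])
b = 0.0

print(svm_decision(np.array([1.9, 2.1]), support_vectors, labels, alphas, b))  # -> 1
```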
Further, the classification hyperplane function is optimized by utilizing a preset particle swarm algorithm, so that the image classification and identification capacity of the classification hyperplane function is guaranteed.
As an embodiment of the present invention, the optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function includes: initializing a particle swarm of the classified hyperplane function, calculating the current position and the current speed of the particle swarm in the classified hyperplane function by using the preset particle swarm algorithm, judging whether the particle swarm is in the global optimal position of the classified hyperplane function or not according to the current position and the current speed, and obtaining the optimized classified function when the particle swarm is in the global optimal position of the classified hyperplane function.
In the invention, the particle swarm is used for searching the optimal solution of the target texture feature in the classification hyperplane function, thereby realizing the optimization of the classification hyperplane function and ensuring the identification capability of a subsequent B ultrasonic image.
Further, in an optional embodiment of the present invention, the calculating, by using the preset particle swarm algorithm, a current position and a current speed of the particle swarm in the classification hyperplane function includes: and calculating the current position of the particle swarm in the classification hyperplane function by utilizing a position function in the preset particle swarm algorithm, and calculating the current speed of the particle swarm in the classification hyperplane function by utilizing a speed function in the preset particle swarm algorithm.
Further, in another optional embodiment of the present invention, the position function includes:
x_i(t+1) = x_i(t) + v_i(t+1)
the speed function includes:
v_i(t+1) = w · v_i(t) + c1 · r1 · (p_i(t) - x_i(t)) + c2 · r2 · (g(t) - x_i(t))
wherein t is the iteration number, w is the inertia factor, c1 and c2 are positive learning factors, r1 and r2 are random numbers in (0, 1), v_i(t) and v_i(t+1) respectively represent the velocity of the ith particle in the tth and (t+1)th iterations, x_i(t) and x_i(t+1) respectively represent the position of the ith particle in the tth and (t+1)th iterations, p_i(t) represents the best position found so far by the ith particle, and g(t) represents the optimal position of the particle swarm in the tth iteration.
Further, in an optional embodiment of the present invention, whether the particle swarm is at the global optimal position of the classification hyperplane function may be judged by determining whether the particle fitness of the particle swarm satisfies a preset fitness, or whether the iteration number of the particle swarm exceeds a maximum iteration number. That is, when the particle fitness of the particle swarm satisfies the preset fitness, or the iteration number of the particle swarm exceeds the maximum iteration number, the particle swarm is deemed to be at the global optimal position of the classification hyperplane function. Optionally, the preset fitness and the maximum iteration number may be set according to different service scenarios, which is not further limited herein.
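The position and speed update rules and the two stopping criteria described above can be sketched as follows. This is a minimal illustration only: the inertia factor, learning factors, search bounds, swarm size, and iteration budget shown here are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def pso_optimize(fitness, dim, n_particles=30, max_iter=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0),
                 target_fitness=None, seed=0):
    """Minimal particle swarm optimizer (minimization) following the
    update rules above. Stops when the best fitness reaches the preset
    `target_fitness`, or when `max_iter` iterations are exceeded."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions x_i(t)
    v = np.zeros((n_particles, dim))              # speeds v_i(t)
    p = x.copy()                                  # individual bests p_i(t)
    p_fit = np.array([fitness(xi) for xi in x])
    g = p[p_fit.argmin()].copy()                  # global best g(t)
    g_fit = float(p_fit.min())

    for t in range(max_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # v_i(t+1) = w*v_i(t) + c1*r1*(p_i(t)-x_i(t)) + c2*r2*(g(t)-x_i(t))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        # x_i(t+1) = x_i(t) + v_i(t+1), clipped to the search bounds
        x = np.clip(x + v, lo, hi)
        fit = np.array([fitness(xi) for xi in x])
        improved = fit < p_fit
        p[improved], p_fit[improved] = x[improved], fit[improved]
        if p_fit.min() < g_fit:
            g_fit = float(p_fit.min())
            g = p[p_fit.argmin()].copy()
        if target_fitness is not None and g_fit <= target_fitness:
            break                                 # preset fitness satisfied
    return g, g_fit
```

For example, minimizing the sphere function f(z) = sum(z^2) with `pso_optimize(lambda z: float(np.sum(z ** 2)), dim=2)` drives the best fitness close to zero.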
And S4, identifying the image category of the B-ultrasonic image to be detected by using the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
According to the embodiment of the invention, the image category of the B-mode ultrasound image to be detected is identified by using the optimized classification function, thereby improving the identification accuracy of the B-mode ultrasound image to be detected and assisting a doctor in making a better diagnosis, wherein the image category includes a normal category and an abnormal category.
According to the above scheme, the embodiment of the invention first identifies the target region of the training B-mode ultrasound image, which improves identification accuracy, and extracts the spatial domain and frequency domain texture features of that region, so that texture features of the target region in different domain states can be obtained and the training capability of the subsequent classification hyperplane function is guaranteed. Secondly, feature normalization is performed on the spatial domain and frequency domain texture features to obtain the target texture features, placing the features of both domains on a common scale and further guaranteeing the training capability of the subsequent classification hyperplane function, so that the recognition capability for B-mode ultrasound images can be improved. Furthermore, the embodiment of the invention constructs the classification hyperplane function of the training B-mode ultrasound image according to the target texture features, optimizes it with a preset particle swarm algorithm to obtain an optimized classification function, and uses that function to identify the image category of the B-mode ultrasound image to be detected, obtaining the identification result. This guarantees the image classification capability of the classification hyperplane function, avoids training a model on a large number of training samples, and ensures the identification accuracy of the B-mode ultrasound image. Therefore, the B-mode ultrasound image intelligent identification method based on the PSO-SVM algorithm provided by the embodiment of the invention can improve the identification accuracy of B-mode ultrasound images.
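To illustrate how the particle swarm optimization and the SVM classification fit together, the following sketch tunes an RBF support vector machine with the update rules described above. The patent does not specify which quantities the particle swarm optimizes, so treating the particle position as (log2 C, log2 gamma), the synthetic stand-in data, and all numeric settings here are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for the normalized target texture features;
# real use would substitute the spatial/frequency features described above.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def fitness(params):
    """Negative 3-fold CV accuracy of an RBF SVM at (log2 C, log2 gamma)."""
    clf = SVC(C=2.0 ** params[0], gamma=2.0 ** params[1], kernel="rbf")
    return -cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 10, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n, dim))        # particle positions: (log2 C, log2 gamma)
v = np.zeros((n, dim))
p, p_fit = x.copy(), np.array([fitness(xi) for xi in x])
g, g_fit = p[p_fit.argmin()].copy(), p_fit.min()

for t in range(10):                     # small iteration budget for illustration
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, -5, 5)
    fit = np.array([fitness(xi) for xi in x])
    better = fit < p_fit
    p[better], p_fit[better] = x[better], fit[better]
    if p_fit.min() < g_fit:
        g, g_fit = p[p_fit.argmin()].copy(), p_fit.min()

# Train the optimized classifier on the swarm's best hyperparameters.
best_C, best_gamma = 2.0 ** g[0], 2.0 ** g[1]
classifier = SVC(C=best_C, gamma=best_gamma).fit(X, y)
```

The trained `classifier` then plays the role of the optimized classification function that assigns an input feature vector to a normal or abnormal category.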
FIG. 4 is a functional block diagram of the B-mode ultrasonic image intelligent recognition device based on the PSO-SVM algorithm of the present invention.
The B-mode ultrasonic image intelligent recognition device 400 based on the PSO-SVM algorithm can be installed in electronic equipment. According to the functions realized, the B-mode ultrasonic image intelligent recognition device based on the PSO-SVM algorithm can comprise a texture feature extraction module 401, a texture feature normalization module 402, a classification function generation module 403 and a B-mode ultrasonic image recognition module 404. A module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device, can perform a fixed function, and are stored in a memory of the electronic device.
In the embodiment of the present invention, the functions of the modules/units are as follows:
the texture feature extraction module 401 is configured to acquire a training B-mode ultrasound image, identify a target region of the training B-mode ultrasound image, and extract a spatial domain texture feature and a frequency domain texture feature of the target region;
the texture feature normalization module 402 is configured to perform feature normalization on the spatial domain texture features and the frequency domain texture features to obtain target texture features;
the classification function generation module 403 is configured to construct a classification hyperplane function of the training B-mode ultrasound image according to the target texture features, and optimize the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function;
the B-mode ultrasound image recognition module 404 is configured to recognize the image category of the B-mode ultrasound image to be detected by using the optimized classification function, and obtain a recognition result of the B-mode ultrasound image to be detected.
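The feature normalization performed by the texture feature normalization module 402 can be sketched as follows. This is a minimal min-max mapping into [0, 1]; the patent leaves the "preset normalization algorithm" and the normalized region unspecified, so both choices here are assumptions.

```python
import numpy as np

def minmax(v, lo=0.0, hi=1.0):
    """Map a feature vector into the normalized region [lo, hi]."""
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    if span == 0.0:                     # constant features: map to lower bound
        return np.full_like(v, lo)
    return lo + (hi - lo) * (v - v.min()) / span

def target_texture_features(spatial, frequency):
    """Normalize the spatial- and frequency-domain texture features
    separately, then concatenate them as the target texture features."""
    return np.concatenate([minmax(spatial), minmax(frequency)])
```

For example, `target_texture_features([1.0, 2.0, 3.0], [10.0, 20.0])` yields a 5-element vector whose values all lie in [0, 1].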
In detail, when the modules in the PSO-SVM algorithm-based B-mode ultrasound image intelligent recognition apparatus 400 according to the embodiment of the present invention are used, the same technical means as the PSO-SVM algorithm-based B-mode ultrasound image intelligent recognition method described in fig. 1 to 3 are adopted, and the same technical effects can be produced, which is not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the B-mode ultrasonic image intelligent recognition method based on the PSO-SVM algorithm according to the present invention.
The electronic device may include a processor 50, a memory 51, a communication bus 52, and a communication interface 53, and may further include a computer program, such as a B-mode ultrasonic image intelligent recognition program based on a PSO-SVM algorithm, stored in the memory 51 and operable on the processor 50.
In some embodiments, the processor 50 may be composed of a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 50 is the control unit (Control Unit) of the electronic device; it connects the various components of the whole electronic device by using various interfaces and lines, and executes various functions of the electronic device and processes its data by running or executing programs or modules stored in the memory 51 (for example, executing the B-mode ultrasound image intelligent recognition program based on the PSO-SVM algorithm) and calling data stored in the memory 51.
The memory 51 includes at least one type of readable storage medium including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, and the like. The memory 51 may in some embodiments be an internal storage unit of the electronic device, e.g. a removable hard disk of the electronic device. The memory 51 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device. Further, the memory 51 may also include both an internal storage unit and an external storage device of the electronic device. The memory 51 may be used not only to store application software installed in the electronic device and various types of data, such as codes of a B-mode ultrasonic image smart recognition program based on a PSO-SVM algorithm, etc., but also to temporarily store data that has been output or will be output.
The communication bus 52 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 51 and at least one processor 50 or the like.
The communication interface 53 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., a WI-FI interface, a Bluetooth interface), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a display (Display) or an input unit such as a keyboard (Keyboard), and optionally a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 5 shows only an electronic device with components, and those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the electronic device, and may include fewer or more components than shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 50 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the embodiments described are for illustrative purposes only and that the scope of the claimed invention is not limited to this configuration.
The PSO-SVM algorithm-based B-mode ultrasound image intelligent recognition program stored in the memory 51 of the electronic device is a combination of a plurality of computer programs, which, when run in the processor 50, can realize:
acquiring a training B ultrasonic image, identifying a target region of the training B ultrasonic image, and extracting a space domain texture feature and a frequency domain texture feature of the target region;
performing characteristic normalization on the spatial domain texture characteristics and the frequency domain texture characteristics to obtain target texture characteristics;
constructing a classification hyperplane function of the training B ultrasonic image according to the target texture characteristics, and optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function;
and identifying the image category of the B-ultrasonic image to be detected by using the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
Specifically, the processor 50 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the computer program, which is not described herein again.
Further, the integrated module/unit of the electronic device, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring a training B ultrasonic image, identifying a target region of the training B ultrasonic image, and extracting a space domain texture feature and a frequency domain texture feature of the target region;
performing characteristic normalization on the spatial domain texture characteristics and the frequency domain texture characteristics to obtain target texture characteristics;
constructing a classification hyperplane function of the training B ultrasonic image according to the target texture characteristics, and optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function;
and identifying the image category of the B-ultrasonic image to be detected by using the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
It is noted that, in this document, relational terms such as "first" and "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A B-mode ultrasonic image intelligent recognition method based on a PSO-SVM algorithm is characterized by comprising the following steps:
acquiring a training B ultrasonic image, identifying a target region of the training B ultrasonic image, and extracting a space domain texture feature and a frequency domain texture feature of the target region;
performing characteristic normalization on the spatial domain texture characteristics and the frequency domain texture characteristics to obtain target texture characteristics;
constructing a classification hyperplane function of the training B ultrasonic image according to the target texture characteristics, and optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function;
and identifying the image category of the B-ultrasonic image to be detected by using the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
2. The method of claim 1, wherein the identifying the target region of the training B-mode ultrasound image comprises:
inputting the training B ultrasonic image into a pre-trained region detection model, performing feature extraction on the training B ultrasonic image through a convolution layer in the region detection model to obtain a feature image, and normalizing the feature image by using a standard layer in the region detection model to obtain a standard image;
performing pooling processing on the standard image by using a pooling layer in the region detection model to obtain a pooled image;
and identifying the region type of the pooled image by using a full-link layer in the region detection model, and outputting the target region of the training B-mode ultrasonic image by using an output layer in the region detection model according to the region type.
3. The method of claim 1, wherein extracting spatial and frequency texture features of the target region comprises:
determining texture feature parameters of the target area, constructing a spatial domain co-occurrence matrix of the target area according to the texture feature parameters, and extracting spatial domain texture features of the target area according to the spatial domain co-occurrence matrix;
and performing wavelet decomposition on the target region to obtain a decomposed image, calculating the global features of the decomposed image, and deleting redundant feature information of the global features by using a preset dimension reduction algorithm to obtain the frequency domain texture features of the target region.
4. The method according to claim 1, wherein the performing feature normalization on the spatial domain texture features and the frequency domain texture features to obtain a target texture feature comprises:
determining a normalized region of the spatial domain texture features and the frequency domain texture features;
respectively mapping the spatial domain texture features and the frequency domain texture features to the normalization region by using a preset normalization algorithm to obtain spatial domain normalization features and frequency domain normalization features;
and taking the spatial domain normalized feature and the frequency domain normalized feature as the target texture feature.
5. The method according to claim 1, wherein the constructing the classification hyperplane function of the training B-mode ultrasound image according to the target texture features comprises:
mapping the target texture features to preset vector coordinates to obtain feature vector coordinates;
calculating the coordinate distance between any two vector coordinates in the feature vector coordinates, and selecting the feature vector coordinate with the minimum coordinate distance as a target feature coordinate;
and constructing a boundary function of the training B-mode ultrasonic image according to the target characteristic coordinates, and constructing the classification hyperplane function according to the boundary function.
6. The method according to any one of claims 1 to 5, wherein the optimizing the classification hyperplane function by using a preset particle swarm algorithm to obtain an optimized classification function comprises:
initializing a particle swarm of the classified hyperplane function, and calculating the current position and the current speed of the particle swarm in the classified hyperplane function by using the preset particle swarm algorithm;
and judging whether the particle swarm is in the global optimal position of the classification hyperplane function or not according to the current position and the current speed, and obtaining the optimized classification function when the particle swarm is in the global optimal position of the classification hyperplane function.
7. The method as recited in claim 6, wherein said calculating a current position and a current velocity of said particle swarm in said classification hyperplane function using said predetermined particle swarm algorithm comprises:
calculating the current position of the particle swarm in the classification hyperplane function by utilizing a position function in the preset particle swarm algorithm;
and calculating the current speed of the particle swarm in the classification hyperplane function by utilizing a speed function in the preset particle swarm algorithm.
8. A B-mode ultrasonic image intelligent recognition device based on a PSO-SVM algorithm is characterized by comprising:
the texture feature extraction module is used for acquiring a training B ultrasonic image, identifying a target region of the training B ultrasonic image and extracting a space domain texture feature and a frequency domain texture feature of the target region;
the texture feature normalization module is used for performing feature normalization on the spatial domain texture features and the frequency domain texture features to obtain target texture features;
the classification function generation module is used for constructing a classification hyperplane function of the training B ultrasonic image according to the target texture characteristics and optimizing the classification hyperplane function by utilizing a preset particle swarm algorithm to obtain an optimized classification function;
and the B-ultrasonic image identification module is used for identifying the image category of the B-ultrasonic image to be detected by utilizing the optimized classification function to obtain the identification result of the B-ultrasonic image to be detected.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the B-mode ultrasonic image intelligent recognition method based on the PSO-SVM algorithm according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the PSO-SVM algorithm-based B-mode ultrasonic image intelligent recognition method according to any one of claims 1 to 7.
CN202210408175.0A 2022-04-19 2022-04-19 B-ultrasonic image intelligent identification method and device based on PSO-SVM algorithm Pending CN114742801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210408175.0A CN114742801A (en) 2022-04-19 2022-04-19 B-ultrasonic image intelligent identification method and device based on PSO-SVM algorithm


Publications (1)

Publication Number Publication Date
CN114742801A true CN114742801A (en) 2022-07-12

Family

ID=82281216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210408175.0A Pending CN114742801A (en) 2022-04-19 2022-04-19 B-ultrasonic image intelligent identification method and device based on PSO-SVM algorithm

Country Status (1)

Country Link
CN (1) CN114742801A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428748A (en) * 2020-02-20 2020-07-17 重庆大学 Infrared image insulator recognition and detection method based on HOG characteristics and SVM
CN111639704A (en) * 2020-05-28 2020-09-08 深圳壹账通智能科技有限公司 Target identification method, device and computer readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428748A (en) * 2020-02-20 2020-07-17 重庆大学 Infrared image insulator recognition and detection method based on HOG characteristics and SVM
CN111639704A (en) * 2020-05-28 2020-09-08 深圳壹账通智能科技有限公司 Target identification method, device and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FU Yan et al.: "Application of the PSO-SVM Algorithm in Liver B-Ultrasound Image Recognition", Computer Measurement & Control *
MAO Qinghua et al.: "Intelligent Monitoring and Protection Technology for Mine Belt Conveyors", 31 May 2020, Huazhong University of Science and Technology Press *
SHI Qing et al.: "Design and Control of Micro Biomimetic Robotic Rats", 31 December 2019, Beijing Institute of Technology Press *

Similar Documents

Publication Publication Date Title
CN113159147B (en) Image recognition method and device based on neural network and electronic equipment
US20230087526A1 (en) Neural network training method, image classification system, and related device
US10210418B2 (en) Object detection system and object detection method
CN112507934A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111950596A (en) Training method for neural network and related equipment
WO2021189913A1 (en) Method and apparatus for target object segmentation in image, and electronic device and storage medium
TW202207077A (en) Text area positioning method and device
CN103582884A (en) Robust feature matching for visual search
CN112419326B (en) Image segmentation data processing method, device, equipment and storage medium
CN111932534A (en) Medical image picture analysis method and device, electronic equipment and readable storage medium
CN113705462A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN114049568A (en) Object shape change detection method, device, equipment and medium based on image comparison
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
Xiao et al. Saliency detection via multi-view graph based saliency optimization
Peng et al. Pain intensity recognition via multi‐scale deep network
CN116563040A (en) Farm risk exploration method, device, equipment and storage medium based on livestock identification
CN114511569B (en) Tumor marker-based medical image identification method, device, equipment and medium
CN114742801A (en) B-ultrasonic image intelligent identification method and device based on PSO-SVM algorithm
CN115601684A (en) Emergency early warning method and device, electronic equipment and storage medium
CN114973374A (en) Expression-based risk evaluation method, device, equipment and storage medium
CN114943289A (en) User portrait classification method, device, equipment and medium based on deep learning
CN114743003A (en) Causal interpretation method, device and equipment based on image classification and storage medium
CN114267064A (en) Face recognition method and device, electronic equipment and storage medium
CN113192085A (en) Three-dimensional organ image segmentation method and device and computer equipment
Anggoro et al. Classification of Solo Batik patterns using deep learning convolutional neural networks algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination