CN117351018B - Hysteromyoma detects auxiliary system based on machine vision - Google Patents
- Publication number: CN117351018B
- Application number: CN202311650457.2A
- Authority: CN (China)
- Prior art keywords: data, module, model, ultrasonic image, image data
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Image analysis; inspection of images; biomedical image inspection
- G06N3/0464 — Neural networks; architecture; convolutional networks [CNN, ConvNet]
- G06N3/084 — Neural networks; learning methods; backpropagation, e.g. using gradient descent
- G06V10/454 — Local feature extraction with biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/764 — Image recognition using pattern recognition or machine learning; classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Image recognition using neural networks
- G06T2207/10132 — Image acquisition modality: ultrasound image
- G06T2207/20081 — Special algorithmic details: training; learning
- G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
- G06T2207/30096 — Subject of image: biomedical; tumor; lesion
- G06V2201/03 — Recognition of patterns in medical or anatomical images
- Y02T10/40 — Engine management systems
Abstract
The invention discloses a machine-vision-based uterine fibroid (hysteromyoma) detection auxiliary system comprising a data acquisition module, a data preprocessing module, a data enhancement module, a uterine fibroid detection model construction module and an auxiliary report generation module. The invention relates to the technical field of gynecological medicine. The system adopts a data enhancement method that alters fibroid images through rotation, scaling, horizontal flipping and vertical flipping to simulate fibroids of different shapes, which increases data diversity, mitigates data imbalance, and helps the model adapt to different cases during training. By adopting a deep convolutional neural network, the system automatically extracts representative features from image data and learns feature representations of fibroids of different shapes, improving the detection efficiency and generalization capability of the model.
Description
Technical Field
The invention relates to the technical field of gynecological medical treatment, in particular to a machine vision-based hysteromyoma detection auxiliary system.
Background
A uterine fibroid detection auxiliary system can analyze images automatically, mining latent features and patterns in fibroid image data; this facilitates early screening of uterine fibroids and helps doctors offer patients better treatment plans, reducing later treatment costs. Existing systems, however, face two technical problems. First, fibroid morphology differs greatly between patients, yet in practice the number of normal images far exceeds the number of fibroid images, causing data imbalance. Second, the lack of a detection method that accurately identifies fibroids of different shapes limits the practicality of such systems.
Disclosure of Invention
To solve the above problems, the invention provides a machine-vision-based uterine fibroid detection auxiliary system. Against the technical problem that fibroid morphology differs greatly between patients while the number of normal images far exceeds the number of fibroid images, causing data imbalance, the invention adopts a data enhancement method that simulates fibroids of different shapes through rotation, scaling, horizontal flipping and vertical flipping, thereby increasing data diversity, mitigating data imbalance, and helping the model adapt to different cases during training. Against the technical problem that the lack of a detection method that accurately identifies fibroids of different shapes limits the practicality of such a system, the scheme adopts a deep convolutional neural network that automatically extracts representative features from image data and learns feature representations of fibroids of different shapes, improving the detection efficiency and generalization capability of the model.
The invention provides a machine-vision-based uterine fibroid detection auxiliary system, which comprises a data acquisition module, a data preprocessing module, a data enhancement module, a uterine fibroid detection model construction module and an auxiliary report generation module;
the data acquisition module is used for acquiring uterine ultrasonic image data and a uterine fibroid label, sending the uterine ultrasonic image data to the data preprocessing module and sending the uterine fibroid label to the uterine fibroid detection model construction module;
the data preprocessing module is used for preprocessing the uterine ultrasound image data to obtain standard ultrasound image data, and sending the standard ultrasound image data to the data enhancement module;
the data enhancement module is used for carrying out data enhancement on standard ultrasonic image data through rotation, scaling, horizontal overturning and vertical overturning operations to obtain enhanced ultrasonic image data, and sending the enhanced ultrasonic image data to the uterine fibroid detection model construction module;
the uterine fibroid detection model construction module is used for carrying out model construction by adopting a deep convolutional neural network, designing a deep convolutional neural network structure, carrying out forward propagation and backward propagation through K times of model training to optimize a model, obtaining a uterine fibroid detection model, and sending the uterine fibroid detection model to the auxiliary report generation module;
the auxiliary report generation module is used for making predictions with the uterine fibroid detection model to obtain prediction data and generate an auxiliary report.
Further, in the data acquisition module, uterine ultrasonic image data are acquired through an ultrasonic probe, and uterine fibroid labels are obtained through label labeling, wherein the uterine fibroid labels comprise normal and abnormal.
Further, in the data preprocessing module, the uterine ultrasound image data is preprocessed through image denoising, contrast adjustment and intensity normalization to obtain standard ultrasound image data.
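The text does not specify which denoising or contrast algorithms are used; a minimal sketch of the preprocessing chain, assuming a 3×3 median filter for denoising and min–max stretching for contrast adjustment and intensity normalization, could be:

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Denoise with a 3x3 median filter, then stretch intensities to [0, 1]."""
    den = img.astype(float)
    # 3x3 median filter over interior pixels (borders kept as-is)
    for a in range(1, img.shape[0] - 1):
        for b in range(1, img.shape[1] - 1):
            den[a, b] = np.median(img[a - 1:a + 2, b - 1:b + 2])
    # contrast stretch / min-max intensity normalization
    lo, hi = den.min(), den.max()
    return (den - lo) / (hi - lo) if hi > lo else np.zeros_like(den)
```

A production system would more likely use speckle-aware filtering for ultrasound; the median filter here only stands in for the unspecified denoising step.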
Further, in the data enhancement module, a rotation unit, a scaling unit, a horizontal turning unit, a vertical turning unit and a data merging unit are provided, and specifically include the following contents:
the rotation unit rotates each standard ultrasonic image at random angles, and the rotation calculation formula is as follows:
mge1(a, b) = mge(a·cos β − b·sin β, a·sin β + b·cos β);
where a is the standard ultrasound image abscissa, b is the standard ultrasound image ordinate, mge1(a, b) is the pixel value at coordinates (a, b) after rotation of the standard ultrasound image, mge() is the pixel value function, cos() is the cosine function, sin() is the sine function, and β is a random angle whose value lies between −20 and 20 degrees;
the scaling unit performs random scaling operation on each standard ultrasonic image, and the calculation formula of random scaling is as follows:
mge2(a, b) = mge(a/rd, b/rd);
where mge2(a, b) is the pixel value at coordinates (a, b) after scaling of the standard ultrasound image, and rd is a random scale factor whose value lies between 0.8 and 1.2;
the horizontal overturning unit carries out horizontal overturning on each standard ultrasonic image with the probability of 50 percent, and the calculation formula of the horizontal overturning is as follows:
mge3(a, b) = mge(wd − a, b);
where mge3(a, b) is the pixel value at coordinates (a, b) after the standard ultrasound image is flipped horizontally, and wd is the standard ultrasound image width;
the vertical overturning unit is used for vertically overturning each standard ultrasonic image with the probability of 50 percent, and the calculation formula of the vertical overturning is as follows:
mge4(a, b) = mge(a, hg − b);
where mge4(a, b) is the pixel value at coordinates (a, b) after the standard ultrasound image is flipped vertically, and hg is the standard ultrasound image height;
and the data merging unit is used for obtaining new ultrasonic image data by performing rotation, scaling, horizontal overturning and vertical overturning on the standard ultrasonic image data, and merging the new ultrasonic image data with the standard ultrasonic image data to obtain enhanced ultrasonic image data.
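Assuming nearest-neighbour resampling about the image centre (details the text does not fix), the four enhancement operations above can be sketched in one pass as:

```python
import random
import numpy as np

def augment(img: np.ndarray, rng: random.Random) -> np.ndarray:
    """One enhanced copy of an image: random rotation (within ±20 degrees)
    and random scaling (0.8–1.2) about the image centre, then horizontal
    and vertical flips, each applied with 50% probability."""
    h, w = img.shape
    beta = np.deg2rad(rng.uniform(-20.0, 20.0))  # random angle
    rd = rng.uniform(0.8, 1.2)                   # random scale factor
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img)
    for a in range(h):
        for b in range(w):
            # inverse mapping: un-rotate and un-scale the target coordinate,
            # then sample the source with nearest-neighbour interpolation
            da, db = a - cy, b - cx
            sa = (da * np.cos(beta) + db * np.sin(beta)) / rd + cy
            sb = (-da * np.sin(beta) + db * np.cos(beta)) / rd + cx
            ia, ib = int(round(sa)), int(round(sb))
            if 0 <= ia < h and 0 <= ib < w:
                out[a, b] = img[ia, ib]
    if rng.random() < 0.5:
        out = out[:, ::-1]   # horizontal flip: pixel (a, b) taken from (a, w-1-b)
    if rng.random() < 0.5:
        out = out[::-1, :]   # vertical flip: pixel (a, b) taken from (h-1-a, b)
    return out
```

The merging unit would then simply concatenate such augmented copies with the original standard ultrasound images.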
Further, in the uterine fibroid detection model construction module, a model structure design unit, a forward propagation design unit, a backward propagation unit and a model training unit are provided, and specifically include the following contents:
the model structure design unit is used for designing a deep convolutional neural network structure, wherein the deep convolutional neural network structure comprises an input layer, three feature extraction layers, two full-connection layers and an output layer, and the feature extraction layers consist of a convolutional layer and a maximum pooling layer;
the forward propagation design unit designs a forward propagation process of a model according to the deep convolutional neural network structure, and comprises the following contents:
the input layer receives the enhanced ultrasonic image data and takes the enhanced ultrasonic image as an input image of the model;
the convolution layer is used for extracting the characteristics of the input image, and the calculation formula is as follows:
clr(x, y) = Σ_i Σ_j ig(x + i, y + j)·α(i, j) + u;
where x is the abscissa of the convolution feature map, y is the ordinate of the convolution feature map, clr(x, y) is the feature value at coordinates (x, y) of the convolution feature map, i is the input image abscissa index, j is the input image ordinate index, ig(i, j) is the pixel value at coordinates (i, j) of the input image, α is the convolution kernel weight, and u is the convolution layer bias term;
the pooling layer is used for downsampling the convolution feature map, and the calculation formula is as follows:
plr(x1, y1) = max_plr{ clr(x, y) : (x, y) in the pooling window corresponding to (x1, y1) };
where x1 is the pooled feature map abscissa, y1 is the pooled feature map ordinate, plr(x1, y1) is the feature value at the pooled feature map coordinates (x1, y1), and max_plr() is the maximum pooling operation;
the full-connection layer operation is used for learning and classifying and predicting the characteristics, and the calculation formula is as follows:
;
wherein alr is the output of the full-connection layer, reLu () is a nonlinear activation function, the nonlinear activation function adopts a linear rectification unit, mu is the weight of the full-connection layer, c is the input of the full-connection layer, and u1 is the bias term of the full-connection layer;
the output layer adopts a softmax activation function to conduct hysteromyoma label prediction, and outputs a prediction result;
the back propagation unit calculates the gradient of the loss function to the model parameters through a back propagation algorithm, so as to update the model parameters and minimize the loss function;
and the model training unit adopts a deep convolutional neural network to perform K times of model training, performs model tuning through forward propagation and backward propagation, and obtains a hysteromyoma detection model.
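As a rough, non-authoritative sketch of the forward propagation just described, the numpy code below implements one feature-extraction stage (convolution plus 2×2 max pooling), one ReLU fully connected layer, and a softmax output layer; the patented design uses three feature-extraction layers and two fully connected layers, and all shapes and weights here are illustrative:

```python
import numpy as np

def conv2d(ig, alpha, u):
    """clr(x, y) = sum_{i,j} ig(x+i, y+j) * alpha(i, j) + u  (valid convolution)."""
    kh, kw = alpha.shape
    H, W = ig.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(ig[x:x + kh, y:y + kw] * alpha) + u
    return out

def maxpool2(clr):
    """plr(x1, y1) = max over each non-overlapping 2x2 window of clr."""
    H, W = clr.shape[0] // 2 * 2, clr.shape[1] // 2 * 2
    c = clr[:H, :W]
    return c.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

def forward(img, alpha, u, mu, u1, mu_out, u_out):
    """One feature-extraction stage (convolution + max pooling), one fully
    connected layer alr = ReLU(mu @ c + u1), and a softmax output layer."""
    c = relu(maxpool2(conv2d(img, alpha, u))).ravel()
    alr = relu(mu @ c + u1)
    return softmax(mu_out @ alr + u_out)  # class probabilities (normal, abnormal)
```

A practical implementation would use a deep-learning framework rather than hand-rolled loops; the sketch only mirrors the formulas above.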
Further, in the auxiliary report generation module, a uterine fibroid detection model is adopted for prediction, prediction data are obtained, and an auxiliary report is generated.
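A minimal sketch of turning a model prediction into an auxiliary report (all field names and the 0.5 decision threshold are assumptions for illustration, not taken from the text):

```python
def generate_report(patient_id: str, probs) -> dict:
    """Turn model output (P(normal), P(abnormal)) into a simple auxiliary
    report. Field names and the 0.5 threshold are illustrative assumptions."""
    p_normal, p_abnormal = probs
    return {
        "patient_id": patient_id,
        "prediction": "abnormal" if p_abnormal >= 0.5 else "normal",
        "confidence": round(max(p_normal, p_abnormal), 3),
        "note": "Auxiliary result only; requires physician review.",
    }
```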
By adopting the above scheme, the invention obtains the following beneficial effects:
(1) Against the technical problem that fibroid morphology differs greatly between patients while the number of normal images far exceeds the number of fibroid images, causing data imbalance, the scheme adopts a data enhancement method: fibroid images are altered through rotation, scaling, horizontal flipping and vertical flipping to simulate fibroids of different shapes, increasing data diversity, mitigating data imbalance, and helping the model adapt to different cases during training.
(2) Against the technical problem that the lack of a detection method that accurately identifies fibroids of different shapes limits the practicality of the auxiliary system, the scheme adopts a deep convolutional neural network that automatically extracts representative features from image data and learns feature representations of fibroids of different shapes, improving the detection efficiency and generalization capability of the model.
Drawings
Fig. 1 is a block diagram of a machine vision-based hysteromyoma detection assistance system;
FIG. 2 is a flow chart of a data enhancement module;
fig. 3 is a schematic flow chart of a uterine fibroid detection model building module.
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention; all other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate orientation or positional relationships based on those shown in the drawings, merely to facilitate description of the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
Referring to fig. 1, the machine vision-based hysteromyoma detection auxiliary system provided by the invention comprises a data acquisition module, a data preprocessing module, a data enhancement module, a hysteromyoma detection model construction module and an auxiliary report generation module;
the data acquisition module is used for acquiring uterine ultrasonic image data and a uterine fibroid label, sending the uterine ultrasonic image data to the data preprocessing module and sending the uterine fibroid label to the uterine fibroid detection model construction module;
the data preprocessing module is used for preprocessing the uterine ultrasound image data to obtain standard ultrasound image data, and sending the standard ultrasound image data to the data enhancement module;
the data enhancement module is used for carrying out data enhancement on standard ultrasonic image data through rotation, scaling, horizontal overturning and vertical overturning operations to obtain enhanced ultrasonic image data, and sending the enhanced ultrasonic image data to the uterine fibroid detection model construction module;
the uterine fibroid detection model construction module is used for carrying out model construction by adopting a deep convolutional neural network, designing a deep convolutional neural network structure, carrying out forward propagation and backward propagation through K times of model training to optimize a model, obtaining a uterine fibroid detection model, and sending the uterine fibroid detection model to the auxiliary report generation module;
the auxiliary report generation module is used for predicting by adopting a hysteromyoma detection model to obtain prediction data and generate an auxiliary report.
In the second embodiment, referring to fig. 1, this embodiment is based on the above embodiment: in the data acquisition module, uterine ultrasound image data is obtained through an abdominal ultrasound probe with a frequency range of 2-7 MHz, and uterine fibroid labels are obtained through label annotation, where the uterine fibroid labels comprise normal and abnormal.
In a third embodiment, referring to fig. 1, the embodiment is based on the above embodiment, and in the data preprocessing module, the uterine ultrasound image data is preprocessed through image denoising, contrast adjustment and intensity normalization, so as to obtain standard ultrasound image data.
In the data enhancement module, referring to fig. 1 and 2, a rotation unit, a scaling unit, a horizontal flip unit, a vertical flip unit, and a data merging unit are provided, which specifically includes the following contents:
the rotation unit rotates each standard ultrasonic image at random angles, and the rotation calculation formula is as follows:
mge1(a, b) = mge(a·cos β − b·sin β, a·sin β + b·cos β);
where a is the standard ultrasound image abscissa, b is the standard ultrasound image ordinate, mge1(a, b) is the pixel value at coordinates (a, b) after rotation of the standard ultrasound image, mge() is the pixel value function, cos() is the cosine function, sin() is the sine function, and β is a random angle whose value lies between −20 and 20 degrees;
the scaling unit performs random scaling operation on each standard ultrasonic image, and the calculation formula of random scaling is as follows:
mge2(a, b) = mge(a/rd, b/rd);
where mge2(a, b) is the pixel value at coordinates (a, b) after scaling of the standard ultrasound image, and rd is a random scale factor whose value lies between 0.8 and 1.2;
the horizontal overturning unit carries out horizontal overturning on each standard ultrasonic image with the probability of 50 percent, and the calculation formula of the horizontal overturning is as follows:
mge3(a, b) = mge(wd − a, b);
where mge3(a, b) is the pixel value at coordinates (a, b) after the standard ultrasound image is flipped horizontally, and wd is the standard ultrasound image width;
the vertical overturning unit is used for vertically overturning each standard ultrasonic image with the probability of 50 percent, and the calculation formula of the vertical overturning is as follows:
mge4(a, b) = mge(a, hg − b);
where mge4(a, b) is the pixel value at coordinates (a, b) after the standard ultrasound image is flipped vertically, and hg is the standard ultrasound image height;
the data merging unit is used for obtaining new ultrasonic image data by rotating, zooming, horizontally overturning and vertically overturning the standard ultrasonic image data, and merging the new ultrasonic image data with the standard ultrasonic image data to obtain enhanced ultrasonic image data;
By executing the above operations, against the technical problem that fibroid morphology differs greatly between patients while the number of normal images far exceeds the number of fibroid images, causing data imbalance, the scheme adopts a data enhancement method: fibroid images are altered through rotation, scaling, horizontal flipping and vertical flipping to simulate fibroids of different shapes, increasing data diversity, mitigating data imbalance, and helping the model adapt to different cases during training.
Fifth embodiment, referring to fig. 1 and 3, the embodiment is based on the above embodiment, and in the uterine fibroid detection model building module, a model structure design unit, a forward propagation design unit, a backward propagation unit, and a model training unit are provided, and specifically includes the following:
the model structure design unit is used for designing a deep convolutional neural network structure, wherein the deep convolutional neural network structure comprises an input layer, three feature extraction layers, two full-connection layers and an output layer, and the feature extraction layers consist of a convolutional layer and a maximum pooling layer;
the forward propagation design unit designs a forward propagation process of a model according to the deep convolutional neural network structure, and comprises the following contents:
the input layer receives the enhanced ultrasonic image data and takes the enhanced ultrasonic image as an input image of the model;
the convolution layer is used for extracting the characteristics of the input image, and the calculation formula is as follows:
clr(x, y) = Σ_i Σ_j ig(x + i, y + j)·α(i, j) + u;
where x is the abscissa of the convolution feature map, y is the ordinate of the convolution feature map, clr(x, y) is the feature value at coordinates (x, y) of the convolution feature map, i is the input image abscissa index, j is the input image ordinate index, ig(i, j) is the pixel value at coordinates (i, j) of the input image, α is the convolution kernel weight, and u is the convolution layer bias term;
the pooling layer is used for downsampling the convolution feature map, and the calculation formula is as follows:
plr(x1, y1) = max_plr{ clr(x, y) : (x, y) in the pooling window corresponding to (x1, y1) };
where x1 is the pooled feature map abscissa, y1 is the pooled feature map ordinate, plr(x1, y1) is the feature value at the pooled feature map coordinates (x1, y1), and max_plr() is the maximum pooling operation;
the full-connection layer operation is used for learning and classifying and predicting the characteristics, and the calculation formula is as follows:
;
wherein alr is the output of the full-connection layer, reLu () is a nonlinear activation function, the nonlinear activation function adopts a linear rectification unit, mu is the weight of the full-connection layer, c is the input of the full-connection layer, and u1 is the bias term of the full-connection layer;
the output layer adopts a softmax activation function to conduct hysteromyoma label prediction, and outputs a prediction result;
the back propagation unit calculates the gradient of the loss function to the model parameters through a back propagation algorithm, so as to update the model parameters and minimize the loss function;
the model training unit adopts a deep convolutional neural network to perform K times of model training, performs model tuning through forward propagation and backward propagation, and obtains a hysteromyoma detection model;
By executing the above operations, against the technical problem that the lack of a detection method that accurately identifies fibroids of different shapes limits the practicality of the auxiliary system, the scheme adopts a deep convolutional neural network that automatically extracts representative features from image data and learns feature representations of fibroids of different shapes, improving the detection efficiency and generalization capability of the model.
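The back-propagation step described in this embodiment — computing the gradient of the loss with respect to the model parameters and updating them to minimize the loss — can be sketched for a single fully connected layer as follows (the squared-error loss and learning rate are illustrative assumptions, since the text does not name a specific loss function):

```python
import numpy as np

def train_step(mu, c, target, lr=0.1):
    """One gradient-descent update of fully connected weights mu for
    alr = ReLU(mu @ c) under an illustrative squared-error loss
    L = 0.5 * ||alr - target||^2. Returns (updated mu, loss before update)."""
    z = mu @ c
    alr = np.maximum(z, 0.0)   # forward pass through ReLU
    err = alr - target         # dL/d_alr
    dz = err * (z > 0)         # gradient back through ReLU
    grad_mu = np.outer(dz, c)  # dL/d_mu
    return mu - lr * grad_mu, 0.5 * float(err @ err)
```

Repeating such updates K times over the enhanced training data corresponds to the K rounds of model training described above.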
In a sixth embodiment, referring to fig. 1, the embodiment is based on the above embodiment, and in the auxiliary report generating module, a uterine fibroid detection model is used for prediction, so as to obtain prediction data and generate an auxiliary report.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The invention and its embodiments have been described above by way of illustration, without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, structural arrangements and embodiments that one of ordinary skill in the art, informed by this disclosure, devises without inventive effort and without departing from the gist of the present invention shall fall within its scope of protection.
Claims (6)
1. A machine vision-based uterine fibroid detection aid system, characterized in that: the system comprises a data acquisition module, a data preprocessing module, a data enhancement module, a uterine fibroid detection model construction module and an auxiliary report generation module;
the data acquisition module is used for acquiring uterine ultrasonic image data and a uterine fibroid label, sending the uterine ultrasonic image data to the data preprocessing module and sending the uterine fibroid label to the uterine fibroid detection model construction module;
the data preprocessing module is used for preprocessing the uterine ultrasound image data to obtain standard ultrasound image data, and sending the standard ultrasound image data to the data enhancement module;
the data enhancement module is used for performing data enhancement on the standard ultrasonic image data through rotation, scaling, horizontal flipping and vertical flipping operations to obtain enhanced ultrasonic image data, and sending the enhanced ultrasonic image data to the uterine fibroid detection model construction module;
the uterine fibroid detection model construction module is used for carrying out model construction by adopting a deep convolutional neural network, designing a deep convolutional neural network structure, carrying out forward propagation and backward propagation through K times of model training to optimize a model, obtaining a uterine fibroid detection model, and sending the uterine fibroid detection model to the auxiliary report generation module;
the auxiliary report generation module is used for performing prediction with the uterine fibroid detection model to obtain prediction data and generate an auxiliary report;
the data enhancement module is provided with a rotation unit, a scaling unit, a horizontal flipping unit, a vertical flipping unit and a data merging unit, and specifically comprises the following contents:
the rotation unit rotates each standard ultrasonic image by a random angle, and the rotation calculation formula is as follows:
mge1(a, b) = mge(a cos β - b sin β, a sin β + b cos β);
where a is the standard ultrasonic image abscissa, b is the standard ultrasonic image ordinate, mge1(a, b) is the pixel value at coordinates (a, b) after rotation of the standard ultrasonic image, mge() is the pixel value function, cos() is the cosine function, sin() is the sine function, and β is a random angle whose value lies between -20 and 20 degrees;
the scaling unit performs random scaling operation on each standard ultrasonic image, and the calculation formula of random scaling is as follows:
mge2(a, b) = mge(a / rd, b / rd);
where mge2(a, b) is the pixel value at coordinates (a, b) after scaling of the standard ultrasonic image, and rd is a random scale factor whose value lies between 0.8 and 1.2;
the horizontal flipping unit horizontally flips each standard ultrasonic image with a probability of 50%, and the calculation formula for horizontal flipping is as follows:
mge3(a, b) = mge(wd - a, b);
where mge3(a, b) is the pixel value at coordinates (a, b) after the standard ultrasonic image is flipped horizontally, and wd is the standard ultrasonic image width;
the vertical flipping unit vertically flips each standard ultrasonic image with a probability of 50%, and the calculation formula for vertical flipping is as follows:
mge4(a, b) = mge(a, hg - b);
where mge4(a, b) is the pixel value at coordinates (a, b) after the standard ultrasonic image is flipped vertically, and hg is the standard ultrasonic image height;
the data merging unit obtains new ultrasonic image data by rotating, scaling, horizontally flipping and vertically flipping the standard ultrasonic image data, and merges the new ultrasonic image data with the standard ultrasonic image data to obtain the enhanced ultrasonic image data;
the uterine fibroid detection model construction module is provided with a model structure design unit, a forward propagation design unit, a backward propagation unit and a model training unit;
the model structure design unit designs the deep convolutional neural network structure, wherein the deep convolutional neural network structure comprises an input layer, three feature extraction layers, two full-connection layers and an output layer, and each feature extraction layer consists of a convolutional layer and a maximum pooling layer.
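The data-enhancement operations recited above (random rotation in [-20°, 20°], random scaling in [0.8, 1.2], and 50%-probability horizontal/vertical flips, merged with the original data) can be sketched in numpy as follows. This is an illustrative sketch, not the patent's implementation; nearest-neighbour resampling with edge clipping is an assumption made for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """Return the original image merged with its augmented copies."""
    out = [img]
    h, w = img.shape
    b, a = np.indices((h, w))  # b: ordinate (row), a: abscissa (column)
    # rotation by a random angle beta in [-20, 20] degrees:
    # mge1(a, b) = mge(a cos(beta) - b sin(beta), a sin(beta) + b cos(beta))
    beta = np.deg2rad(rng.uniform(-20, 20))
    ar = np.clip(np.rint(a * np.cos(beta) - b * np.sin(beta)).astype(int), 0, w - 1)
    br = np.clip(np.rint(a * np.sin(beta) + b * np.cos(beta)).astype(int), 0, h - 1)
    out.append(img[br, ar])
    # random scaling with factor rd in [0.8, 1.2]: mge2(a, b) = mge(a/rd, b/rd)
    rd = rng.uniform(0.8, 1.2)
    a2 = np.clip(np.rint(a / rd).astype(int), 0, w - 1)
    b2 = np.clip(np.rint(b / rd).astype(int), 0, h - 1)
    out.append(img[b2, a2])
    # flips, each applied with probability 0.5
    if rng.random() < 0.5:
        out.append(img[:, ::-1])  # horizontal: mge3(a, b) = mge(wd - a, b)
    if rng.random() < 0.5:
        out.append(img[::-1, :])  # vertical:   mge4(a, b) = mge(a, hg - b)
    return out

imgs = augment(np.arange(64, dtype=float).reshape(8, 8))
```

The returned list corresponds to the data merging unit's output: the standard image plus its rotated, scaled, and (probabilistically) flipped variants.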
2. A machine vision based uterine fibroid detection aid system according to claim 1, characterized in that: the forward propagation design unit designs a forward propagation process of the model according to the deep convolutional neural network structure;
the back propagation unit calculates the gradient of the loss function with respect to the model parameters through a back propagation algorithm, thereby updating the model parameters and minimizing the loss function;
the model training unit adopts the deep convolutional neural network to perform K rounds of model training, performing model tuning through forward propagation and backward propagation to obtain the uterine fibroid detection model.
3. A machine vision based uterine fibroid detection aid system according to claim 2, characterized in that: the forward propagation design unit designs a forward propagation process of a model according to a deep convolutional neural network structure, and comprises the following contents:
the input layer receives the enhanced ultrasonic image data and takes the enhanced ultrasonic image as an input image of the model;
the convolution layer is used for extracting the characteristics of the input image, and the calculation formula is as follows:
clr(x, y) = Σi Σj ig(x + i, y + j) · α(i, j) + u;
where x is the abscissa of the convolution feature map, y is the ordinate of the convolution feature map, clr (x, y) is the feature value at the coordinate (x, y) of the convolution feature map, i is the input image abscissa index, j is the input image ordinate index, ig (i, j) is the pixel value at the coordinate (i, j) of the input image, α is the convolution kernel weight, and u is the convolution layer bias term;
the pooling layer is used for downsampling the convolution feature map, and the calculation formula is as follows:
plr(x1, y1) = max{ clr(x, y) : (x, y) in the pooling window for (x1, y1) };
where x1 is the pooled feature map abscissa, y1 is the pooled feature map ordinate, plr(x1, y1) is the feature value at coordinates (x1, y1) of the pooled feature map, and max() is the maximum pooling operation;
the full-connection layer is used for feature learning and classification prediction, and the calculation formula is as follows:
alr = ReLU(μ · c + u1);
wherein alr is the output of the full-connection layer, ReLU() is a nonlinear activation function implemented as a linear rectification unit, μ is the weight of the full-connection layer, c is the input of the full-connection layer, and u1 is the bias term of the full-connection layer;
the output layer adopts a softmax activation function to predict the uterine fibroid label and outputs the prediction result.
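The convolution and maximum-pooling formulas above can be sketched directly in numpy. This is an illustrative sketch under stated assumptions: "valid" padding, a 2×2 pooling window, and an averaging kernel are choices made for the example, not details from the claims.

```python
import numpy as np

def conv2d(ig, alpha, u=0.0):
    """Valid convolution (cross-correlation form):
    clr(x, y) = sum_{i,j} ig(x + i, y + j) * alpha(i, j) + u."""
    kh, kw = alpha.shape
    H, W = ig.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(ig[x:x + kh, y:y + kw] * alpha) + u
    return out

def maxpool2(clr):
    """2x2 max pooling: plr(x1, y1) is the maximum of clr over the window."""
    H, W = clr.shape
    return clr[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

ig = np.arange(36, dtype=float).reshape(6, 6)   # invented 6x6 input image
alpha = np.ones((3, 3)) / 9.0                   # illustrative averaging kernel
feat = maxpool2(conv2d(ig, alpha))              # one feature extraction layer
```

For this input, `conv2d` produces a 4×4 feature map and `maxpool2` downsamples it to 2×2, mirroring the feature extraction layer of claim 1.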
4. A machine vision-based uterine fibroid detection aid system according to claim 3, characterized in that: in the data acquisition module, uterine ultrasonic image data are acquired through an ultrasonic probe, and uterine fibroid labels are obtained through annotation, wherein the uterine fibroid labels comprise normal and abnormal.
5. The machine vision-based uterine fibroid detection aid system of claim 4, wherein: in the data preprocessing module, the uterine ultrasound image data is preprocessed through image denoising, contrast adjustment and intensity normalization to obtain standard ultrasound image data.
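The preprocessing pipeline of claim 5 (image denoising, contrast adjustment, intensity normalization) can be sketched as follows. The patent does not specify the exact operators, so the 3×3 mean filter, percentile-based contrast stretch, and [0, 1] normalization below are stand-in assumptions.

```python
import numpy as np

def preprocess(img):
    """Illustrative preprocessing: mean-filter denoising, contrast
    stretching, and intensity normalization to [0, 1]."""
    # denoising: 3x3 mean filter via edge padding and neighbourhood averaging
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    den = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # contrast adjustment: stretch over the 2nd-98th percentile range,
    # which simultaneously normalizes intensities into [0, 1]
    lo, hi = np.percentile(den, [2, 98])
    return np.clip((den - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

std = preprocess(np.arange(64, dtype=float).reshape(8, 8))  # invented input
```

The output plays the role of the "standard ultrasound image data" passed on to the data enhancement module.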
6. The machine vision-based uterine fibroid detection aid system of claim 5, wherein: in the auxiliary report generation module, a uterine fibroid detection model is adopted for prediction, prediction data are obtained, and an auxiliary report is generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311650457.2A CN117351018B (en) | 2023-12-05 | 2023-12-05 | Hysteromyoma detects auxiliary system based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117351018A CN117351018A (en) | 2024-01-05 |
CN117351018B true CN117351018B (en) | 2024-03-12 |
Family
ID=89367078
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117351018B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110084794A (en) * | 2019-04-22 | 2019-08-02 | 华南理工大学 | A kind of cutaneum carcinoma image identification method based on attention convolutional neural networks |
CN113627483A (en) * | 2021-07-09 | 2021-11-09 | 武汉大学 | Cervical OCT image classification method and device based on self-supervision texture contrast learning |
CN114399485A (en) * | 2022-01-11 | 2022-04-26 | 南方医科大学顺德医院(佛山市顺德区第一人民医院) | Hysteromyoma target image acquisition method based on residual error network structure |
CN115394432A (en) * | 2022-08-22 | 2022-11-25 | 广西医科大学附属武鸣医院(广西医科大学武鸣临床医学院、南宁市武鸣区人民医院) | Auxiliary examination and diagnosis system based on prostate ultrasound, electronic device and storage medium |
CN116721289A (en) * | 2023-06-05 | 2023-09-08 | 武汉大学 | Cervical OCT image classification method and system based on self-supervision cluster contrast learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3096688A1 (en) * | 2014-01-24 | 2016-11-30 | Koninklijke Philips N.V. | System and method for three-dimensional quantitative evaluation of uterine fibroids |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||