CN117541580A - Thyroid cancer image comparison model establishment method based on deep neural network - Google Patents
Thyroid cancer image comparison model establishment method based on deep neural network
- Publication number
- CN117541580A (application number CN202410022905.2A)
- Authority
- CN
- China
- Prior art keywords
- comparison
- gray
- value
- setting
- distribution quantity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a thyroid cancer image comparison model establishment method based on a deep neural network, and relates to the technical field of image recognition. The method comprises the following steps: acquiring a plurality of training images, each containing a thyroid cancer area, and marking the thyroid cancer area in each training image; extracting gray features of the thyroid region and performing gray feature training on the thyroid regions of the plurality of training images to obtain gray comparison parameters; dividing the thyroid region into an integral region and a punctiform region and performing shape feature training on the integral region and the punctiform region of the plurality of training images to obtain shape comparison parameters; and establishing an image comparison model based on the gray comparison parameters and the shape comparison parameters. The method solves the problem that existing thyroid cancer image recognition methods lack a specific feature extraction means and therefore cannot perform effective feature comparison and recognition.
Description
Technical Field
The invention relates to the technical field of image recognition, in particular to a thyroid cancer image comparison model establishment method based on a deep neural network.
Background
Image recognition is the technology of using a computer to process, analyze and understand images in order to identify targets and objects of various kinds, and it is a practical application of deep learning algorithms. The image recognition workflow is generally divided into four steps: image acquisition, image preprocessing, feature extraction and image recognition. A deep neural network is a technique from the machine learning field; extracting the features of training samples through a deep neural network during image recognition can improve the accuracy of recognition and comparison.
In the prior art, image recognition technology is used to extract and compare features when identifying thyroid cancer images. For example, Chinese patent application publication No. CN112233106A discloses a method for analyzing thyroid cancer ultrasonic images based on a residual capsule network, in which ultrasonic images of thyroid cancer are analyzed through the residual capsule network to obtain a classification result for the ultrasonic image of papillary thyroid cancer to be identified. However, that method only discloses analyzing and classifying images through a residual capsule network. In its step S1, each image in the original thyroid cancer ultrasonic image dataset is described as having one or more of the attributes irregular shape, unclear boundary, uneven echo, calcification and normal, but the method merely lists these features of thyroid cancer images without providing a specific recognition scheme for them, so it cannot effectively perform feature comparison and recognition on thyroid cancer images. A method that can effectively extract and compare the features in cancer images is therefore still needed.
Disclosure of Invention
The invention aims to solve, at least to some extent, one of the technical problems in the prior art. By extracting features from a plurality of thyroid cancer images and establishing an image comparison model based on the extracted features, the accuracy of feature comparison in image screening can be improved, thereby solving the problem that existing thyroid cancer image recognition methods lack a specific feature extraction means and therefore cannot perform effective feature comparison and recognition.
In order to achieve the above object, in a first aspect, the present application provides a method for establishing a thyroid cancer image comparison model based on a deep neural network, including: acquiring a plurality of training images, and marking thyroid cancer areas in the training images, wherein the training images comprise thyroid cancer areas;
extracting gray features of the thyroid region, and training gray features of the thyroid region of the training images to obtain gray comparison parameters;
dividing the thyroid region into an integral region and a punctiform region, and performing shape feature training on the integral region and the punctiform region of a plurality of training images to obtain shape comparison parameters;
and establishing an image comparison model based on the gray comparison parameter and the shape comparison parameter.
Further, acquiring a plurality of training images, and marking the thyroid cancer area in the training images includes: dividing the training image into pixels, and establishing a two-dimensional coordinate system based on the pixels;
and carrying out coordinate marking on the pixels of the thyroid cancer area in a two-dimensional coordinate system, and setting the pixels of the thyroid cancer area as comparison pixels.
Further, extracting gray features of the thyroid region, performing gray feature training on the thyroid region of the training images, and obtaining gray comparison parameters includes: setting the pixel points except the comparison pixel points as peripheral pixel points in a two-dimensional coordinate system;
setting the pixel points adjacent to the comparison pixel points and the peripheral pixel points as comparison contour pixel points, and setting the pixel points adjacent to the peripheral pixel points and the comparison contour pixel points as peripheral connecting pixel points;
calculating the average value of the gray values of the comparison contour pixel points in the training image and setting it as the comparison contour gray value; calculating the average value of the gray values of the peripheral connected pixel points in the training image and setting it as the peripheral connected gray value; calculating the absolute value of the difference between the comparison contour gray value and the peripheral connected gray value and setting it as the gray comparison value;
performing gray feature training on the plurality of training images one by one through the above steps to obtain a plurality of comparison contour gray values, a plurality of peripheral connected gray values and a plurality of gray comparison values;
and processing the plurality of comparison contour gray values, the plurality of peripheral connected gray values and the plurality of gray comparison values respectively through a comparison parameter extraction method to obtain the gray comparison parameters, wherein the gray comparison parameters comprise a comparison contour gray value range, a peripheral connected gray value range and a gray comparison value range.
Further, the comparison parameter extraction method comprises the following steps: obtaining the maximum value and the minimum value of a group of input gray values, and setting the maximum value and the minimum value as the gray maximum value and the gray minimum value respectively; the group of gray values comprises a group of a plurality of comparison contour gray values, a plurality of peripheral connected gray values or a plurality of gray comparison values;
subtracting the gray minimum value from the gray maximum value to obtain a gray difference, dividing the gray difference by a first preset number to obtain a first to-be-defined value, and taking the integer part of the first to-be-defined value plus one to obtain a second to-be-defined value;
dividing a first preset number of gray groups, taking the gray minimum value as the dividing start point and the second to-be-defined value as the dividing unit, assigning each value of the input group of gray values to its gray group, and setting the gray group into which the most gray values fall as the gray comparison parameter group;
the gray value range of the gray comparison parameter group is set as the gray comparison parameter.
Further, dividing the thyroid region into an integral region and a punctiform region comprises: obtaining the comparison pixel points in the two-dimensional coordinate system, and setting each region of mutually connected comparison pixel points as an integral pending region;
obtaining the number of integral pending regions in the two-dimensional coordinate system; when the number of integral pending regions is smaller than or equal to a first distribution quantity threshold, setting the integral pending regions as integral regions, and when the number of integral pending regions is greater than the first distribution quantity threshold, setting the integral pending regions as punctiform regions.
Further, performing shape feature training on the whole area of the plurality of training images to obtain shape comparison parameters comprises: setting a peripheral circle to frame the whole area, wherein the peripheral circle is a minimum circle capable of completely framing the whole area;
each time, the radius of the peripheral circle is reduced by a first unit length to obtain an updated circle, and the center of the updated circle is consistent with the center of the peripheral circle;
setting the whole area inside each obtained updating circle as an internal area to be divided, and setting the whole area between each obtained updating circle and an externally adjacent updating circle or peripheral circle as a cutting area;
setting the areas of the mutually connected comparison pixel points in the cutting areas as cutting independent areas, counting the number of the cutting independent areas of each group of cutting areas, and setting the number as the edge divergence number; stopping reducing the radius of the update circle or the peripheral circle when the edge divergence number is smaller than or equal to a first independent number threshold;
obtaining the maximum value of the edge divergence quantity of a plurality of groups, and setting the maximum value as the overall divergence distribution quantity;
and processing the corresponding overall divergence distribution quantity of the training images by a shape comparison extraction method to obtain overall comparison parameters, wherein the overall comparison parameters comprise an overall divergence distribution quantity range.
Further, performing shape feature training on the punctiform areas of the training images to obtain shape comparison parameters, wherein the shape comparison parameters comprise: acquiring the number of punctiform areas in a training image, and setting the number as the punctiform distribution number;
and processing the dot distribution quantity corresponding to the training images by a shape comparison extraction method to obtain dot comparison parameters, wherein the dot comparison parameters comprise a dot distribution quantity range.
Further, the shape comparison extraction method comprises the following steps: obtaining the maximum value and the minimum value of an input group of distribution quantities, and setting them as the distribution quantity maximum value and the distribution quantity minimum value respectively; the group of distribution quantities is either the group of overall divergence distribution quantities corresponding to the plurality of training images or the group of punctiform distribution quantities corresponding to the plurality of training images;
subtracting the distribution quantity minimum value from the distribution quantity maximum value to obtain a distribution quantity difference, dividing the distribution quantity difference by a second preset number to obtain a first distribution quantity dividing value, and extracting the integer part of the first distribution quantity dividing value to obtain a second distribution quantity dividing value;
dividing a second preset number of distribution quantity groups, taking the distribution quantity minimum value as the dividing start point and the second distribution quantity dividing value as the dividing unit, assigning each value of the input group of distribution quantities to its group, and setting the group into which the most distribution quantities fall as the distribution quantity comparison parameter group;
setting the range of the distribution quantity comparison parameter set as a shape comparison parameter, wherein the shape comparison parameter comprises an overall comparison parameter and a point comparison parameter.
The invention has the beneficial effects that: by acquiring a plurality of training images and marking the thyroid cancer area in each training image, the regions can be matched accurately during feature training, which improves the accuracy of data acquisition; gray feature extraction is performed on the thyroid region and gray feature training is performed on the thyroid regions of the plurality of training images to obtain gray comparison parameters, so that a feature comparison framework can be built into the image comparison model through the gray comparison parameters, improving the efficiency and effectiveness of the primary feature comparison and recognition;
the thyroid region is further divided into an integral region and a punctiform region, and shape feature training is performed on the integral region and the punctiform region of the plurality of training images to obtain shape comparison parameters; the feature comparison regions obtained through gray-level screening can then be further screened by the shape comparison parameters, which improves the effectiveness and accuracy of thyroid cancer image comparison and recognition. Finally, an image comparison model is established based on the gray comparison parameters and the shape comparison parameters, which improves the accuracy of image comparison in practical application.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
FIG. 1 is a flow chart of the steps of the method of the present invention;
FIG. 2 is a schematic view of the acquisition of the cut independent area of the present invention;
FIG. 3 is a schematic diagram of a training image of the present invention containing an integral region;
FIG. 4 is a schematic diagram of a training image of the present invention containing punctiform regions.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, in a method for establishing a thyroid cancer image comparison model based on a deep neural network, feature extraction is performed on a plurality of thyroid cancer images, and an image comparison model is established based on the extracted features, so that the accuracy of feature comparison of image screening can be improved, and the problem that effective feature comparison identification cannot be performed due to the fact that an existing thyroid cancer image identification method lacks a specific feature extraction means is solved.
Specifically, the thyroid cancer image comparison model establishment method based on the deep neural network comprises the following steps: step S1, acquiring a plurality of training images and marking the thyroid cancer area in each training image, wherein each training image contains a thyroid cancer area; step S1 further comprises the following sub-steps: step S101, dividing the training image into pixel points and establishing a two-dimensional coordinate system based on the pixel points; in a specific implementation, the image is divided into a grid 1280 pixels wide and 720 pixels high;
step S102, performing coordinate marking on the pixels of the thyroid cancer area in a two-dimensional coordinate system, and setting the pixels of the thyroid cancer area as comparison pixels.
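Although the patent contains no code, the pixel marking of steps S101 and S102 can be sketched as follows. This is a minimal illustration in Python, assuming the training image is stored as a 720x1280 grayscale NumPy array and that the annotated thyroid cancer area is supplied as a hypothetical boolean mask of the same shape; the array index grid plays the role of the two-dimensional coordinate system, and the function name is an assumption.

```python
import numpy as np

def mark_comparison_pixels(image: np.ndarray, cancer_mask: np.ndarray) -> np.ndarray:
    """Return the (row, col) coordinates of the comparison pixel points (step S102).

    `image` is a 720x1280 grayscale array; `cancer_mask` is a boolean array of
    the same shape in which True marks the annotated thyroid cancer area.
    """
    assert image.shape == cancer_mask.shape == (720, 1280)
    return np.argwhere(cancer_mask)  # coordinates of the comparison pixel points
```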
Step S2, gray feature extraction is carried out on the thyroid region, gray feature training is carried out on the thyroid region of a plurality of training images, and gray comparison parameters are obtained; step S2 further includes: step S2011, setting the pixel points other than the comparison pixel point as peripheral pixel points in the two-dimensional coordinate system;
step S2012, setting the pixel points adjacent to the comparison pixel points and the peripheral pixel points as comparison contour pixel points, and setting the pixel points adjacent to the peripheral pixel points and the comparison contour pixel points as peripheral connection pixel points;
step S2013, calculating the average value of the gray values of the comparison contour pixel points in the training image and setting it as the comparison contour gray value; calculating the average value of the gray values of the peripheral connected pixel points in the training image and setting it as the peripheral connected gray value; calculating the absolute value of the difference between the comparison contour gray value and the peripheral connected gray value and setting it as the gray comparison value;
step S2014, performing gray feature training on the plurality of training images one by one through steps S2011 to S2013 to obtain a plurality of comparison contour gray values, a plurality of peripheral connected gray values and a plurality of gray comparison values;
in step S2015, the plurality of comparison contour gray values, peripheral connected gray values and gray comparison values are processed by the comparison parameter extraction method to obtain the gray comparison parameters, which comprise a comparison contour gray value range, a peripheral connected gray value range and a gray comparison value range. These ranges make it convenient, during actual image comparison, to delimit the initial feature region using the gray comparison parameters, which improves the efficiency of the primary feature extraction; reducing the amount of data passed on to the subsequent shape comparison in this way also helps to improve the accuracy of the data comparison.
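A sketch of the gray feature extraction of steps S2011 to S2013, under the assumption of 4-connected adjacency (the patent does not fix the neighbourhood) and the same NumPy image and mask representation as above; SciPy's morphological operators are used here only as a convenient way to locate the comparison contour pixels and the peripheral connected pixels.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def gray_features(image: np.ndarray, cancer_mask: np.ndarray):
    """Return (comparison contour gray value, peripheral connected gray value,
    gray comparison value) for one training image."""
    cross = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)  # 4-connected neighbourhood
    # comparison pixels that touch a peripheral pixel -> comparison contour pixels
    contour = cancer_mask & ~binary_erosion(cancer_mask, structure=cross)
    # peripheral pixels that touch a contour pixel -> peripheral connected pixels
    peripheral_connected = binary_dilation(cancer_mask, structure=cross) & ~cancer_mask

    contour_gray = float(image[contour].mean())
    peripheral_gray = float(image[peripheral_connected].mean())
    return contour_gray, peripheral_gray, abs(contour_gray - peripheral_gray)
```

Running this over every training image yields the lists of gray values that feed the comparison parameter extraction method below.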
The comparison parameter extraction method comprises the following steps: step S2021, obtaining the maximum value and the minimum value of an input group of gray values, and setting them as the gray maximum value and the gray minimum value respectively; the group of gray values is one of the group of comparison contour gray values, the group of peripheral connected gray values or the group of gray comparison values;
step S2022, subtracting the gray minimum value from the gray maximum value to obtain a gray difference, dividing the gray difference by a first preset number to obtain a first to-be-defined value, and taking the integer part of the first to-be-defined value plus one as the second to-be-defined value; for example, when the first preset number is set to 10 and the gray difference is 55, the first to-be-defined value is 5.5 and the second to-be-defined value is 6;
step S2023, dividing a first preset number of gray groups, taking the gray minimum value as the dividing start point and the second to-be-defined value as the dividing unit, assigning each value of the input group of gray values to its gray group, and selecting the gray group into which the most gray values fall as the gray comparison parameter group;
in step S2024, the gray value range of the gray comparison parameter group is set as the gray comparison parameter.
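The comparison parameter extraction of steps S2021 to S2024 amounts to fixed-count binning of a list of gray values and keeping the range of the most populated bin. A sketch follows, with the first preset number of 10 taken from the embodiment; the function name and the (low, high) return format are assumptions.

```python
import math

def comparison_parameter(values, preset_count=10):
    """Return the range of the most populated gray group (steps S2021-S2024)."""
    g_max, g_min = max(values), min(values)
    bin_width = math.floor((g_max - g_min) / preset_count) + 1  # second to-be-defined value
    counts = [0] * preset_count
    for v in values:
        counts[int((v - g_min) // bin_width)] += 1              # assign value to its gray group
    best = counts.index(max(counts))                            # most populated gray group
    low = g_min + best * bin_width
    return low, low + bin_width                                 # gray comparison parameter range

# e.g. comparison_parameter([60, 62, 63, 70, 115]) -> (60, 66)
```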
Referring to fig. 2 to fig. 4, in step S3 the thyroid region is divided into an integral region and a punctiform region, wherein the region indicated by the gray arrow in fig. 3 is an integral region and the region indicated by the white arrow in fig. 4 is a punctiform region, and shape feature training is performed on the integral region and the punctiform region of the plurality of training images to obtain shape comparison parameters; step S3 further includes: step S3011, obtaining the comparison pixel points in the two-dimensional coordinate system, and setting each region of mutually connected comparison pixel points as an integral pending region;
step S3012, obtaining the number of integral pending regions in the two-dimensional coordinate system; when this number is smaller than or equal to the first distribution quantity threshold, the integral pending regions are set as integral regions, and when it is greater than the first distribution quantity threshold, they are set as punctiform regions; the first distribution quantity threshold is set to 3, and since an integral region is normally connected as a single whole, this threshold does not need to be set very large.
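Steps S3011 and S3012 reduce to counting connected components of the comparison pixels. A small sketch, assuming 4-connected labelling and the first distribution quantity threshold of 3 from the embodiment:

```python
import numpy as np
from scipy.ndimage import label

def classify_regions(cancer_mask: np.ndarray, threshold: int = 3):
    """Label the integral pending regions and decide whether the marked area is
    treated as integral regions or punctiform regions (steps S3011-S3012)."""
    labelled, n_pending = label(cancer_mask)  # connected comparison-pixel regions
    kind = "integral" if n_pending <= threshold else "punctiform"
    return labelled, n_pending, kind
```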
Step S3 further includes: step S3021, setting a peripheral circle to frame the entire area, wherein the peripheral circle is a minimum circle capable of completely framing the entire area;
step S3022, reducing the radius of the peripheral circle by a first unit length each time to obtain an updated circle, wherein the center of the updated circle is consistent with the center of the peripheral circle; the first unit length is set according to the side length of the pixel point, and referring to fig. 2, the radius of the update circle is different from that of the peripheral circle by one side length of the pixel point, and the first unit length is specifically set as the side length of one pixel point;
step S3023, setting the entire area inside the update circle obtained each time as an inner area to be divided, and setting the entire area between the update circle obtained each time and the update circle or the peripheral circle adjacent to the outside as a cutting area;
step S3024, setting the regions of mutually connected comparison pixel points in each cutting region as cut independent regions, counting the number of cut independent regions in each group of cutting regions, and setting this number as the edge divergence number; when the edge divergence number is smaller than or equal to a first independent number threshold, stopping reducing the radius of the update circle or the peripheral circle; the first independent number threshold is set to 3, so the shrinking of the update circle or the peripheral circle stops once the edge divergence number is smaller than or equal to 3;
step S3025, obtaining the maximum value among the edge divergence numbers of the groups and setting it as the overall divergence distribution quantity; the larger the edge divergence number, the more irregular the edge of the integral region and the more needle-like or similarly shaped edge structures it contains;
in step S3026, the overall divergence distribution quantities corresponding to the plurality of training images are processed by the shape comparison extraction method to obtain the overall comparison parameters, wherein the overall comparison parameters include an overall divergence distribution quantity range.
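Steps S3021 to S3025 can be approximated as follows: the peripheral circle is taken as the minimum enclosing circle of the integral region (obtained here with OpenCV), the radius is shrunk by one pixel side length per iteration, and the connected components of the region pixels falling inside each annular cutting region are counted as the edge divergence number. Rasterising the rings by pixel-centre distance and the use of cv2.minEnclosingCircle are implementation assumptions.

```python
import numpy as np
import cv2
from scipy.ndimage import label

def overall_divergence_quantity(region_mask: np.ndarray, stop_threshold: int = 3) -> int:
    """Return the overall divergence distribution quantity of one integral region."""
    ys, xs = np.nonzero(region_mask)
    points = np.stack([xs, ys], axis=1).astype(np.float32)
    (cx, cy), radius = cv2.minEnclosingCircle(points)          # peripheral circle

    yy, xx = np.indices(region_mask.shape)
    dist = np.hypot(xx - cx, yy - cy)                          # distance of each pixel centre

    divergence_numbers = []
    outer = float(radius)
    while outer > 1.0:
        inner = outer - 1.0                                    # shrink by one pixel side length
        ring = region_mask & (dist <= outer) & (dist > inner)  # cutting region
        _, n_cut_independent = label(ring)                     # cut independent regions
        divergence_numbers.append(n_cut_independent)           # edge divergence number
        if n_cut_independent <= stop_threshold:                # first independent number threshold
            break
        outer = inner

    return max(divergence_numbers) if divergence_numbers else 0
```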
Step S3 further includes: step S3031, obtaining the number of punctiform regions in the training image and setting it as the punctiform distribution quantity; a larger punctiform distribution quantity indicates more calcification points in the training image;
step S3032, processing the punctiform distribution quantities corresponding to the plurality of training images by the shape comparison extraction method to obtain punctiform comparison parameters, wherein the punctiform comparison parameters comprise a punctiform distribution quantity range.
The shape comparison extraction method comprises the following steps: step S3041, obtaining the maximum value and the minimum value of an input group of distribution quantities, and setting them as the distribution quantity maximum value and the distribution quantity minimum value respectively; the group of distribution quantities is either the group of overall divergence distribution quantities or the group of punctiform distribution quantities corresponding to the plurality of training images;
step S3042, subtracting the distribution quantity minimum value from the distribution quantity maximum value to obtain a distribution quantity difference, dividing the distribution quantity difference by a second preset number to obtain a first distribution quantity dividing value, and extracting the integer part of the first distribution quantity dividing value to obtain a second distribution quantity dividing value; the second preset number is specifically set to 5; for example, when the distribution quantity difference is 99, the first distribution quantity dividing value is 19.8 and the second distribution quantity dividing value is 20;
step S3043, dividing a second preset number of distribution quantity groups, taking the distribution quantity minimum value as the dividing start point and the second distribution quantity dividing value as the dividing unit, assigning each value of the input group of distribution quantities to its group, and setting the group into which the most distribution quantities fall as the distribution quantity comparison parameter group;
in step S3044, the range of the distribution quantity comparison parameter group is set as the shape comparison parameter, and the shape comparison parameters comprise the overall comparison parameters and the punctiform comparison parameters.
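The shape comparison extraction of steps S3041 to S3044 mirrors the gray-value binning above, only with the second preset number of 5. Following the worked example (difference 99, dividing value 20), the bin width is taken here as the integer part of the first dividing value plus one, which is an interpretive assumption; the function and variable names are likewise illustrative.

```python
import math

def distribution_parameter(quantities, preset_count=5):
    """Return the distribution quantity range of the most populated group
    (steps S3041-S3044), for either the overall divergence distribution
    quantities or the punctiform distribution quantities."""
    d_max, d_min = max(quantities), min(quantities)
    bin_width = math.floor((d_max - d_min) / preset_count) + 1  # e.g. 99 / 5 = 19.8 -> 20
    counts = [0] * preset_count
    for q in quantities:
        counts[int((q - d_min) // bin_width)] += 1
    best = counts.index(max(counts))
    low = d_min + best * bin_width
    return low, low + bin_width                                 # shape comparison parameter range
```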
Step S4, establishing the image comparison model based on the gray comparison parameters and the shape comparison parameters. In a specific implementation, the image to be compared and identified is input into the image comparison model, and the model processes it as follows. The input image is first compared against the comparison contour gray value range, the peripheral connected gray value range and the gray comparison value range in the gray comparison parameters, and the regions that preliminarily need identification are delimited and set as regions to be identified. If no region to be identified is extracted by the gray comparison parameters, a no-identification-feature signal is output, indicating that no feature similar to the parameters in the image comparison model was extracted from the input image. The regions to be identified are then compared in shape against the overall comparison parameters and the punctiform comparison parameters in the shape comparison parameters to obtain the shape category of each region to be identified. If a region to be identified shows no similarity to either the overall comparison parameters or the punctiform comparison parameters during the shape comparison, a pending-identification-feature signal is output, indicating that a risk region may exist in the image and further manual review is needed.
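As a rough sketch of how the established comparison model of step S4 might be applied to a new image: the dictionary layout of the model, the per-pixel thresholding used for the preliminary gray screening and the returned signal names are all illustrative assumptions rather than the patent's literal implementation.

```python
import numpy as np
from scipy.ndimage import label

def compare_image(image: np.ndarray, model: dict) -> str:
    """Apply the trained gray and shape comparison parameters to an input image.

    `model` is assumed to look like:
    {"contour_gray": (lo, hi), "overall_divergence": (lo, hi), "punctiform_count": (lo, hi)}
    """
    lo, hi = model["contour_gray"]
    candidates = (image >= lo) & (image <= hi)   # preliminary gray screening
    if not candidates.any():
        return "no-identification-feature"       # nothing resembles the trained gray ranges

    _, n_regions = label(candidates)             # shape screening on the regions to be identified
    o_lo, o_hi = model["overall_divergence"]
    p_lo, p_hi = model["punctiform_count"]
    if o_lo <= n_regions <= o_hi or p_lo <= n_regions <= p_hi:
        return "feature-matched"                 # shape category assigned (name is an assumption)
    return "pending-identification-feature"      # possible risk region, manual review needed
```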
In a second aspect of the embodiments, the present application provides an electronic device comprising a processor and a memory storing computer-readable instructions that, when executed by the processor, perform the steps of any of the methods described above. In this technical solution, the processor and the memory are interconnected and communicate with each other through a communication bus and/or another form of connection mechanism, and the memory stores a computer program executable by the processor; when the electronic device runs, the computer program is executed to perform the method in any of the optional implementations of the above embodiments and realize the following functions: firstly, acquiring a plurality of training images and marking the thyroid cancer area in each training image; then, extracting gray features of the thyroid region and performing gray feature training on the thyroid regions of the plurality of training images to obtain gray comparison parameters; dividing the thyroid region into an integral region and a punctiform region and performing shape feature training on the integral region and the punctiform region of the plurality of training images to obtain shape comparison parameters; and finally, establishing an image comparison model based on the gray comparison parameters and the shape comparison parameters.
In a third aspect of the embodiments, the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above. By the above technical solution, the computer program, when executed by the processor, performs the method in any of the alternative implementations of the above embodiments to implement the following functions: firstly, acquiring a plurality of training images, and marking thyroid cancer areas in the training images; then, extracting gray features of the thyroid region, and training gray features of the thyroid region of a plurality of training images to obtain gray comparison parameters; dividing the thyroid region into an integral region and a punctiform region, and performing shape feature training on the integral region and the punctiform region of a plurality of training images to obtain shape comparison parameters; and finally, establishing an image comparison model based on the gray comparison parameters and the shape comparison parameters.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The storage medium may be implemented by any type or combination of volatile or nonvolatile Memory devices, such as static random access Memory (Static Random Access Memory, SRAM), electrically erasable Programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), programmable Read-Only Memory (PROM), read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
Claims (8)
1. The thyroid cancer image comparison model building method based on the deep neural network is characterized by comprising the following steps of: acquiring a plurality of training images, and marking thyroid cancer areas in the training images, wherein the training images comprise thyroid cancer areas;
extracting gray features of the thyroid region, and training gray features of the thyroid region of the training images to obtain gray comparison parameters;
dividing the thyroid region into an integral region and a punctiform region, and performing shape feature training on the integral region and the punctiform region of a plurality of training images to obtain shape comparison parameters;
and establishing an image comparison model based on the gray comparison parameter and the shape comparison parameter.
2. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 1, wherein the steps of obtaining a plurality of training images and marking thyroid cancer areas in the training images comprise: dividing the training image into pixels, and establishing a two-dimensional coordinate system based on the pixels;
and carrying out coordinate marking on the pixels of the thyroid cancer area in a two-dimensional coordinate system, and setting the pixels of the thyroid cancer area as comparison pixels.
3. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 2, wherein the steps of extracting gray features of the thyroid region, training gray features of the thyroid region of the training images, and obtaining gray comparison parameters comprise: setting the pixel points except the comparison pixel points as peripheral pixel points in a two-dimensional coordinate system;
setting the pixel points adjacent to the comparison pixel points and the peripheral pixel points as comparison contour pixel points, and setting the pixel points adjacent to the peripheral pixel points and the comparison contour pixel points as peripheral connecting pixel points;
calculating the average value of the gray values of the comparison contour pixel points in the training image and setting it as the comparison contour gray value; calculating the average value of the gray values of the peripheral connected pixel points in the training image and setting it as the peripheral connected gray value; calculating the absolute value of the difference between the comparison contour gray value and the peripheral connected gray value and setting it as the gray comparison value;
performing gray feature training on the plurality of training images one by one through the above steps to obtain a plurality of comparison contour gray values, a plurality of peripheral connected gray values and a plurality of gray comparison values;
and processing the plurality of comparison contour gray values, the plurality of peripheral connected gray values and the plurality of gray comparison values respectively through a comparison parameter extraction method to obtain the gray comparison parameters, wherein the gray comparison parameters comprise a comparison contour gray value range, a peripheral connected gray value range and a gray comparison value range.
4. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 3, wherein the comparison parameter extraction method comprises the following steps: obtaining the maximum value and the minimum value of a group of input gray values, and setting the maximum value and the minimum value as the gray maximum value and the gray minimum value respectively; the group of gray values comprises a group of a plurality of comparison contour gray values, a plurality of peripheral connected gray values or a plurality of gray comparison values;
subtracting the gray minimum value from the gray maximum value to obtain a gray difference, dividing the gray difference by a first preset number to obtain a first to-be-defined value, and taking the integer part of the first to-be-defined value plus one to obtain a second to-be-defined value;
dividing a first preset number of gray groups, taking the gray minimum value as the dividing start point and the second to-be-defined value as the dividing unit, assigning each value of the input group of gray values to its gray group, and setting the gray group into which the most gray values fall as the gray comparison parameter group;
the gray scale range of the gray scale comparison parameter set is set as the gray scale comparison parameter.
5. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 2, wherein dividing the thyroid region into an integral region and a punctiform region comprises: obtaining the comparison pixel points in the two-dimensional coordinate system, and setting each region of mutually connected comparison pixel points as an integral pending region;
obtaining the number of integral pending regions in the two-dimensional coordinate system; when the number of integral pending regions is smaller than or equal to a first distribution quantity threshold, setting the integral pending regions as integral regions, and when the number of integral pending regions is greater than the first distribution quantity threshold, setting the integral pending regions as punctiform regions.
6. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 5, wherein the training of the shape characteristics of the whole area of the training images to obtain the shape comparison parameters comprises the following steps: setting a peripheral circle to frame the whole area, wherein the peripheral circle is a minimum circle capable of completely framing the whole area;
each time, the radius of the peripheral circle is reduced by a first unit length to obtain an updated circle, and the center of the updated circle is consistent with the center of the peripheral circle;
setting the whole area inside each obtained updating circle as an internal area to be divided, and setting the whole area between each obtained updating circle and an externally adjacent updating circle or peripheral circle as a cutting area;
setting the areas of the mutually connected comparison pixel points in the cutting areas as cutting independent areas, counting the number of the cutting independent areas of each group of cutting areas, and setting the number as the edge divergence number; stopping reducing the radius of the update circle or the peripheral circle when the edge divergence number is smaller than or equal to a first independent number threshold;
obtaining the maximum value of the edge divergence quantity of a plurality of groups, and setting the maximum value as the overall divergence distribution quantity;
and processing the corresponding overall divergence distribution quantity of the training images by a shape comparison extraction method to obtain overall comparison parameters, wherein the overall comparison parameters comprise an overall divergence distribution quantity range.
7. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 6, wherein the training of the shape characteristics of the punctiform areas of the training images to obtain the shape comparison parameters comprises: acquiring the number of punctiform areas in a training image, and setting the number as the punctiform distribution number;
and processing the dot distribution quantity corresponding to the training images by a shape comparison extraction method to obtain dot comparison parameters, wherein the dot comparison parameters comprise a dot distribution quantity range.
8. The method for establishing the thyroid cancer image comparison model based on the deep neural network according to claim 7, wherein the shape comparison extraction method comprises the following steps: obtaining the maximum value and the minimum value of an input group of distribution quantities, and setting them as the distribution quantity maximum value and the distribution quantity minimum value respectively; the group of distribution quantities is either the group of overall divergence distribution quantities corresponding to the plurality of training images or the group of punctiform distribution quantities corresponding to the plurality of training images;
subtracting the distribution quantity minimum value from the distribution quantity maximum value to obtain a distribution quantity difference, dividing the distribution quantity difference by a second preset number to obtain a first distribution quantity dividing value, and extracting the integer part of the first distribution quantity dividing value to obtain a second distribution quantity dividing value;
dividing a second preset number of distribution quantity groups, taking the distribution quantity minimum value as the dividing start point and the second distribution quantity dividing value as the dividing unit, assigning each value of the input group of distribution quantities to its group, and setting the group into which the most distribution quantities fall as the distribution quantity comparison parameter group;
setting the range of the distribution quantity comparison parameter set as a shape comparison parameter, wherein the shape comparison parameter comprises an overall comparison parameter and a point comparison parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410022905.2A CN117541580B (en) | 2024-01-08 | 2024-01-08 | Thyroid cancer image comparison model establishment method based on deep neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117541580A true CN117541580A (en) | 2024-02-09 |
CN117541580B CN117541580B (en) | 2024-03-19 |
Family
ID=89782644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410022905.2A Active CN117541580B (en) | 2024-01-08 | 2024-01-08 | Thyroid cancer image comparison model establishment method based on deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117541580B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110181614A1 (en) * | 2010-01-25 | 2011-07-28 | King Jen Chang | Quantification method of the feature of a tumor and an imaging method of the same |
CN113034426A (en) * | 2019-12-25 | 2021-06-25 | 飞依诺科技(苏州)有限公司 | Ultrasonic image focus description method, device, computer equipment and storage medium |
CN111598862A (en) * | 2020-05-13 | 2020-08-28 | 北京推想科技有限公司 | Breast molybdenum target image segmentation method, device, terminal and storage medium |
CN116452464A (en) * | 2023-06-09 | 2023-07-18 | 天津市肿瘤医院(天津医科大学肿瘤医院) | Chest image enhancement processing method based on deep learning |
CN116485623A (en) * | 2023-06-21 | 2023-07-25 | 齐鲁工业大学(山东省科学院) | Multispectral image gray feature watermarking method based on sixteen-element rapid accurate moment |
Non-Patent Citations (2)
Title |
---|
ZULFANAHRI ET AL.: "Classification of Thyroid Ultrasound Images Based on Shape Features Analysis", The 2017 Biomedical Engineering International Conference, 31 December 2017 (2017-12-31) *
ZHAO LINGKUN ET AL.: "Research progress on tracing and localization methods for cancer of unknown primary origin", Chinese Journal of Clinical Oncology (中国肿瘤临床), 31 December 2023 (2023-12-31) *
Also Published As
Publication number | Publication date |
---|---|
CN117541580B (en) | 2024-03-19 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |