CN115546147A - Superficial lesion detection system based on infrared thermal imaging - Google Patents
- Publication number
- CN115546147A (application number CN202211221996.XA)
- Authority
- CN
- China
- Prior art keywords
- lesion
- image
- area
- infrared
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Biomedical image inspection
- G06V10/20 — Image preprocessing
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/764 — Image or video recognition using classification, e.g. of video objects
- G06V10/766 — Image or video recognition using regression, e.g. by projecting features on hyperplanes
- G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06V10/82 — Image or video recognition using neural networks
- H04N5/33 — Transforming infrared radiation
- G06T2207/10048 — Infrared image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30096 — Tumor; Lesion
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Abstract
The invention relates to superficial lesion detection, and in particular to a superficial lesion detection system based on infrared thermal imaging, which comprises a server and an infrared image acquisition module. The server preprocesses the near-infrared lesion images acquired by the infrared image acquisition module through an infrared image preprocessing module, constructs a lesion area recognition model through a lesion area recognition model construction module, and trains that model with a lesion area recognition model training module. The server extracts a lesion area image from the preprocessed near-infrared lesion image through a lesion area extraction module combined with the lesion area recognition model, obtains a target lesion image from the lesion area image with a target area judgment module, and fades the target lesion image with a target image fading module. The technical solution provided by the invention overcomes two defects of the prior art: the inability to extract the lesion area image accurately, and the low accuracy of lesion type identification.
Description
Technical Field
The invention relates to superficial lesion detection, in particular to a superficial lesion detection system based on infrared thermal imaging.
Background
Ultrasound medicine is widely used in the diagnosis of superficial abnormal tissue, especially in the early diagnosis of breast cancer and of carotid artery plaque, and is currently one of the most common diagnostic means in hospitals. The biggest difference between ultrasound medical imaging and ordinary visual tasks (such as face detection) is that a lesion cannot be located and detected from the texture, shape, or size of the image alone; instead, the physician must mentally reconstruct the grayscale ultrasound image into an organ's three-dimensional anatomical structure before locating and detecting the lesion. Ultrasonic diagnosis therefore places high demands on the physician's professional expertise, and diagnostic accuracy is relatively low in primary hospitals that lack such high-level specialist knowledge.
Thermal imaging exploits the spatial distribution of the body's temperature field: when the temperature of a region is markedly higher or lower than that of its surroundings, the region may have abnormal metabolism, which in turn may indicate a pathological change.
However, when infrared thermal imaging is used to detect superficial lesions, the lesion area image must be extracted accurately and its edges must be processed naturally; if the edge transition of the lesion area image is unnatural or spans a large color difference, the subsequent accurate identification of the lesion type is impaired.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects in the prior art, the invention provides a superficial lesion detection system based on infrared thermal imaging, which can effectively overcome the defects that the images of lesion areas cannot be accurately extracted and the accuracy rate of lesion type identification is low in the prior art.
(II) technical scheme
To achieve the above object, the invention is realized by the following technical solution:
a superficial lesion detection system based on infrared thermal imaging comprises a server and an infrared image acquisition module, wherein the server preprocesses a near-infrared lesion image acquired by the infrared image acquisition module through an infrared image preprocessing module, constructs a lesion region identification model through a lesion region identification model construction module, and performs model training on the lesion region identification model through a lesion region identification model training module;
the server extracts a lesion area image from the preprocessed near-infrared lesion image through a lesion area extraction module combined with the lesion area recognition model, obtains a target lesion image from the lesion area image with a target area judgment module, and fades the target lesion image through a target image fading module to obtain a target lesion faded image;
the server constructs a lesion type recognition model through a lesion type recognition model construction module, and uses a lesion type output module to output the lesion type corresponding to the near-infrared lesion image based on the recognition result of the lesion type recognition model for the target lesion faded image.
Preferably, the lesion region extraction module extracts a lesion region image from the preprocessed near-infrared lesion image in combination with the lesion region recognition model, and includes:
inputting the preprocessed near-infrared lesion image into the trained lesion area recognition model to obtain a first lesion area image;
obtaining a second lesion area image based on an initial target area and an initial protection area of the preprocessed near-infrared lesion image;
and combining the first lesion area image and the second lesion area image to obtain a lesion area image in the preprocessed near-infrared lesion image.
Preferably, the inputting the preprocessed near-infrared lesion image into the trained lesion area recognition model to obtain a first lesion area image includes:
extracting multi-scale features of the image using the convolutional neural network, based on the residual network and the feature pyramid network in the lesion area recognition model;
based on the extracted multi-scale features, obtaining extraction boxes for all possible lesion areas in the image in combination with the region generation network in the lesion area recognition model;
and screening the extraction boxes with non-maximum suppression, and classifying the areas of the screened extraction boxes with the target-area detection convolutional neural network to obtain a first lesion area image.
Preferably, the loss function of the target-area detection convolutional neural network is:

L = L1 + L2 + L3

where L1 is the classification loss of the extraction boxes, L2 is the regression loss of the regression boxes, and L3 is the loss of the lesion area.
Preferably, the obtaining a second lesion area image based on the initial target area and the initial protection area of the preprocessed near-infrared lesion image includes:
determining, in the preprocessed near-infrared lesion image, an initial target area to be faded and an initial protection area to be protected, and reducing the initial protection area to its overlap with the initial target area to obtain an initial reduction image;
adjusting the initial protection area and the initial protection area image using the initial reduction image to obtain a protection area and a protection area image;
and adjusting the initial target area using the protection area to obtain a lesion area, and covering the initial protection area image with the protection area image to obtain a second lesion area image.
Preferably, the target region determination module acquires a target lesion image from the lesion region image, and includes:
and determining coordinates at four corners in the lesion region image, and generating a maximum circumscribed rectangular frame of the lesion region image by using the coordinates at the four corners, wherein the image in the maximum circumscribed rectangular frame is the target lesion image.
Preferably, the target image fading module fades the target lesion image to obtain a target lesion faded image, which includes:
determining a preset square area centred on a target pixel in the target lesion image, and setting a weight for each pixel in the preset square area according to its position relative to the target pixel;
and computing the weighted sum of the pixels' hue values with their corresponding weights to obtain a weighted hue value, replacing the hue value of the target pixel with the weighted hue value, and thereby fading the target lesion image to obtain the target lesion faded image.
Preferably, the lesion type output module outputs the lesion type corresponding to the near-infrared lesion image based on the recognition result of the lesion type recognition model for the target lesion faded image, which includes:
inputting the target lesion faded image into one convolutional neural network of a dual-stream convolutional neural network to obtain a local feature map;
inputting the preprocessed near-infrared lesion image corresponding to the target lesion faded image into the other convolutional neural network of the dual-stream convolutional neural network to obtain a global feature map;
and concatenating the local feature map and the global feature map, feeding the result into the fully connected layer of the dual-stream convolutional neural network as the final feature, and outputting the lesion type corresponding to the near-infrared lesion image.
Preferably, the infrared image preprocessing module preprocesses the near-infrared lesion image acquired by the infrared image acquisition module, which includes:
filtering noise from the near-infrared lesion image with a low-pass filter, and standardizing the size of the near-infrared lesion image by interpolation;
and obtaining the grayscale image corresponding to the size-standardized near-infrared lesion image, and normalizing the grayscale image.
Preferably, the normalization of the grayscale image includes:
normalizing the grayscale image with the following equation:

y = (x - p_min) / (p_max - p_min)

where y is the output pixel value, x is the input pixel value, p_max is the maximum pixel value in the grayscale image, and p_min is the minimum pixel value in the grayscale image.
(III) advantageous effects
Compared with the prior art, the superficial lesion detection system based on infrared thermal imaging provided by the invention has the following beneficial effects:
1) The infrared image preprocessing module preprocesses the near-infrared lesion image acquired by the infrared image acquisition module, the lesion area recognition model construction module constructs the lesion area recognition model, and the lesion area extraction module extracts the lesion area image from the preprocessed near-infrared lesion image in combination with that model. The lesion area image can therefore be extracted accurately by combining the recognition result of the lesion area recognition model with the manual annotation result, which supports accurate identification of the subsequent lesion type;
2) The target area judgment module obtains the target lesion image from the lesion area image, and the target image fading module fades the target lesion image to obtain the target lesion faded image. Fading naturally smooths the edges of the target lesion image and prevents large color-difference spans, which again supports accurate identification of the subsequent lesion type;
3) The target lesion faded image is input into one convolutional neural network of a dual-stream convolutional neural network to obtain a local feature map, and the preprocessed near-infrared lesion image corresponding to the target lesion faded image is input into the other convolutional neural network to obtain a global feature map. The two feature maps are concatenated and fed into the fully connected layer of the dual-stream convolutional neural network as the final feature, and the lesion type corresponding to the near-infrared lesion image is output. The dual-stream convolutional neural network thus extracts multi-modal features of the near-infrared lesion image, and the lesion type is identified accurately on the basis of those features.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
A superficial lesion detection system based on infrared thermal imaging is disclosed. As shown in figures 1 and 2, the system comprises a server and an infrared image acquisition module. The server preprocesses the near-infrared lesion images acquired by the infrared image acquisition module through the infrared image preprocessing module, constructs a lesion area recognition model through the lesion area recognition model construction module, and trains the model with the lesion area recognition model training module.
The infrared image preprocessing module preprocesses the near-infrared lesion image acquired by the infrared image acquisition module as follows:
filtering noise from the near-infrared lesion image with a low-pass filter, and standardizing the size of the near-infrared lesion image by interpolation;
and obtaining the grayscale image corresponding to the size-standardized near-infrared lesion image, and normalizing the grayscale image.
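The filtering and size-standardization steps can be sketched as below. The patent does not specify the filter kernel or the interpolation method, so the 3x3 mean filter and the bilinear interpolation used here are assumptions:

```python
import numpy as np

def low_pass(img):
    """3x3 mean filter; a minimal stand-in for the patent's unspecified low-pass filter."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    # sum the nine shifted copies of the image, then average
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

def resize_bilinear(img, h_out, w_out):
    """Size standardization by bilinear interpolation (assumed method)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h_out)
    xs = np.linspace(0, w - 1, w_out)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    return a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx + c * wy * (1 - wx) + d * wy * wx
```

In practice a production system would likely use a library resize routine; this sketch only makes the two preprocessing operations concrete.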
The grayscale image is normalized using the following equation:

y = (x - p_min) / (p_max - p_min)

where y is the output pixel value, x is the input pixel value, p_max is the maximum pixel value in the grayscale image, and p_min is the minimum pixel value in the grayscale image.
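This min-max normalization can be implemented directly; the guard against constant images is an added assumption, since the formula is undefined when p_max equals p_min:

```python
import numpy as np

def normalize_gray(img):
    """Min-max normalization: y = (x - p_min) / (p_max - p_min)."""
    img = img.astype(float)
    p_min, p_max = img.min(), img.max()
    if p_max == p_min:          # constant image: avoid division by zero
        return np.zeros_like(img)
    return (img - p_min) / (p_max - p_min)
```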
As shown in figs. 1 and 2, the server extracts a lesion area image from the preprocessed near-infrared lesion image through the lesion area extraction module combined with the lesion area recognition model, and obtains a target lesion image from the lesion area image with the target area judgment module; the server then fades the target lesion image through the target image fading module to obtain a target lesion faded image.
1) The lesion area extraction module is combined with the lesion area recognition model to extract a lesion area image from the preprocessed near-infrared lesion image, and comprises the following steps:
inputting the preprocessed near-infrared lesion image into the trained lesion area recognition model to obtain a first lesion area image;
obtaining a second lesion area image based on an initial target area and an initial protection area of the preprocessed near-infrared lesion image;
and combining the first lesion area image and the second lesion area image to obtain the lesion area image in the preprocessed near-infrared lesion image.
(1) Inputting the preprocessed near-infrared lesion image into the trained lesion area recognition model to obtain the first lesion area image includes:
extracting multi-scale features of the image using the convolutional neural network, based on the residual network and the feature pyramid network in the lesion area recognition model;
based on the extracted multi-scale features, obtaining extraction boxes for all possible lesion areas in the image in combination with the region generation network in the lesion area recognition model;
and screening the extraction boxes with non-maximum suppression, and classifying the areas of the screened extraction boxes with the target-area detection convolutional neural network to obtain the first lesion area image.
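The screening step can be illustrated with a minimal greedy non-maximum suppression routine. The box format ([x1, y1, x2, y2]) and the IoU threshold are assumptions, as the patent leaves them unspecified:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop boxes overlapping it above iou_thresh, and repeat."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]      # indices, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection rectangle between the kept box and each remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (area[i] + area[rest] - inter)
        order = rest[iou <= iou_thresh]   # discard heavily overlapping boxes
    return keep
```

Library implementations (e.g. in detection frameworks) behave equivalently; this sketch only makes the screening criterion explicit.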
The loss function of the target-area detection convolutional neural network is:

L = L1 + L2 + L3

where L1 is the classification loss of the extraction boxes, L2 is the regression loss of the regression boxes, and L3 is the loss of the lesion area.
(2) Obtaining the second lesion area image based on the initial target area and the initial protection area of the preprocessed near-infrared lesion image includes:
determining, in the preprocessed near-infrared lesion image, an initial target area to be faded and an initial protection area to be protected (this step is manual annotation), and reducing the initial protection area to its overlap with the initial target area to obtain an initial reduction image;
adjusting the initial protection area and the initial protection area image using the initial reduction image to obtain a protection area and a protection area image;
and adjusting the initial target area using the protection area to obtain a lesion area, and covering the initial protection area image with the protection area image to obtain the second lesion area image.
With the above technical solution, the infrared image preprocessing module preprocesses the near-infrared lesion image acquired by the infrared image acquisition module, the lesion area recognition model construction module constructs the lesion area recognition model, and the lesion area extraction module extracts the lesion area image from the preprocessed near-infrared lesion image in combination with that model. The lesion area image can therefore be extracted accurately by combining the recognition result of the lesion area recognition model with the manual annotation result, which supports accurate identification of the subsequent lesion type.
2) The target area judgment module obtains the target lesion image from the lesion area image as follows:
and determining coordinates at four corners in the lesion region image, and generating a maximum circumscribed rectangular frame of the lesion region image by using the coordinates at the four corners, wherein the image in the maximum circumscribed rectangular frame is the target lesion image.
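Generating the circumscribed rectangular frame from the four corner coordinates amounts to taking the axis-aligned bounding box of those points. A minimal sketch, with corners assumed to be (x, y) tuples:

```python
def max_bounding_rect(corners):
    """Smallest axis-aligned rectangle containing the four corner points,
    returned as (x_min, y_min, x_max, y_max); the image inside this
    rectangle is the target lesion image."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return min(xs), min(ys), max(xs), max(ys)
```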
3) The target image fading module fades the target lesion image to obtain the target lesion faded image as follows:
determining a preset square area centred on a target pixel in the target lesion image, and setting a weight for each pixel in the preset square area according to its position relative to the target pixel;
and computing the weighted sum of the pixels' hue values with their corresponding weights to obtain a weighted hue value, replacing the hue value of the target pixel with the weighted hue value, and thereby fading the target lesion image to obtain the target lesion faded image.
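The fading step can be sketched as a position-weighted averaging of hue values over the preset square window. The Gaussian weighting and the window size are assumptions: the patent only requires that each weight depend on the pixel's position relative to the target pixel.

```python
import numpy as np

def fade(hue, k=1, sigma=1.0):
    """Replace each hue value with a distance-weighted average over the
    (2k+1)x(2k+1) window centred on it (Gaussian weights assumed)."""
    h, w = hue.shape
    yy, xx = np.mgrid[-k:k + 1, -k:k + 1]
    wts = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    wts /= wts.sum()                       # weights sum to one
    padded = np.pad(hue.astype(float), k, mode="edge")
    out = np.zeros((h, w))
    # accumulate the weighted, shifted copies of the image
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += wts[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

Because the weights sum to one, uniform areas are unchanged while sharp hue transitions at the lesion edge are smoothed, which is the stated purpose of the fading step.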
With the above technical solution, the target area judgment module obtains the target lesion image from the lesion area image, and the target image fading module fades the target lesion image to obtain the target lesion faded image. Fading naturally smooths the edges of the target lesion image, prevents large color-difference spans, and supports accurate identification of the subsequent lesion type.
As shown in figs. 1 and 2, the server constructs a lesion type recognition model through the lesion type recognition model construction module, and uses the lesion type output module to output the lesion type corresponding to the near-infrared lesion image based on the recognition result of the lesion type recognition model for the target lesion faded image.
The lesion type output module outputs the lesion type corresponding to the near-infrared lesion image as follows:
inputting the target lesion faded image into one convolutional neural network of a dual-stream convolutional neural network to obtain a local feature map;
inputting the preprocessed near-infrared lesion image corresponding to the target lesion faded image into the other convolutional neural network of the dual-stream convolutional neural network to obtain a global feature map;
and concatenating the local feature map and the global feature map, feeding the result into the fully connected layer of the dual-stream convolutional neural network as the final feature, and outputting the lesion type corresponding to the near-infrared lesion image.
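The concatenation-and-classification (late fusion) step of the dual-stream network can be illustrated as follows. The two convolutional feature extractors are omitted, and `W` and `b` stand for hypothetical trained fully-connected-layer parameters:

```python
import numpy as np

def fuse_and_classify(local_feat, global_feat, W, b):
    """Concatenate the flattened local and global feature maps and apply
    a fully connected layer; returns the index of the predicted lesion type."""
    fused = np.concatenate([np.ravel(local_feat), np.ravel(global_feat)])
    logits = W @ fused + b
    return int(np.argmax(logits))
```

The design choice here is late fusion: each stream specializes (lesion detail vs. whole-image context) and the classifier only combines them at the final layer.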
With the above technical solution, the target lesion faded image is input into one convolutional neural network of the dual-stream convolutional neural network to obtain a local feature map, and the preprocessed near-infrared lesion image corresponding to the target lesion faded image is input into the other convolutional neural network to obtain a global feature map. The two feature maps are concatenated and fed into the fully connected layer of the dual-stream convolutional neural network as the final feature, and the lesion type corresponding to the near-infrared lesion image is output. In this way the dual-stream convolutional neural network extracts multi-modal features of the near-infrared lesion image, and the lesion type is identified accurately on the basis of those features.
In this technical solution, a cooled infrared detector can be used to acquire the near-infrared lesion image; a cooled infrared detector improves the temperature resolution by an order of magnitude, so a more accurate near-infrared lesion image can be obtained.
The above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A superficial lesion detection system based on infrared thermal imaging, characterized in that: the system comprises a server and an infrared image acquisition module, wherein the server preprocesses a near-infrared lesion image acquired by the infrared image acquisition module through an infrared image preprocessing module, constructs a lesion area recognition model through a lesion area recognition model construction module, and trains the lesion area recognition model with a lesion area recognition model training module;
the server extracts a lesion area image from the preprocessed near-infrared lesion image through a lesion area extraction module combined with the lesion area recognition model, obtains a target lesion image from the lesion area image with a target area judgment module, and fades the target lesion image through a target image fading module to obtain a target lesion faded image;
the server constructs a lesion type recognition model through a lesion type recognition model construction module, and uses a lesion type output module to output the lesion type corresponding to the near-infrared lesion image based on the recognition result of the lesion type recognition model for the target lesion faded image.
2. The superficial lesion detection system based on infrared thermal imaging of claim 1, wherein: the lesion area extraction module is combined with the lesion area recognition model to extract a lesion area image from the preprocessed near-infrared lesion image, and comprises the following steps:
inputting the preprocessed near-infrared lesion image into the trained lesion area recognition model to obtain a first lesion area image;
obtaining a second lesion area image based on an initial target area and an initial protection area of the preprocessed near-infrared lesion image;
and combining the first lesion area image and the second lesion area image to obtain a lesion area image in the preprocessed near-infrared lesion image.
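The combining step in claim 2 can be read as a pixel-wise union of the two region masks. Below is a minimal sketch under that reading; the function name `combine_region_masks` and the nested-list binary-mask representation are illustrative assumptions, not taken from the patent:

```python
def combine_region_masks(mask_a, mask_b):
    """Pixel-wise union of two binary lesion-region masks.

    A pixel belongs to the combined lesion region if either the
    model-predicted mask (mask_a) or the target/protection-derived
    mask (mask_b) marks it as lesion.
    """
    return [[1 if (a or b) else 0 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]
```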
3. The infrared thermal imaging-based superficial lesion detection system of claim 2, wherein inputting the preprocessed near-infrared lesion image into the trained lesion region recognition model to obtain the first lesion region image comprises:
extracting multi-scale features from the image using a convolutional neural network based on a residual network (ResNet) and a feature pyramid network in the lesion region recognition model;
obtaining, based on the extracted multi-scale features and the region proposal network in the lesion region recognition model, extraction boxes for all possible lesion regions in the image;
and screening the extraction boxes using non-maximum suppression, and classifying the screened extraction-box regions with the convolutional neural network for detecting the target region to obtain the first lesion region image.
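The screening step in claim 3 is standard non-maximum suppression over scored extraction boxes. A minimal sketch follows, assuming the usual `(x1, y1, x2, y2)` box convention; the function names and the 0.5 IoU threshold are illustrative choices, not specified by the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop overlapping boxes, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```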
4. The infrared thermal imaging-based superficial lesion detection system of claim 3, wherein the loss function of the convolutional neural network for detecting the target region is:
L = L₁ + L₂ + L₃
where L₁ is the classification loss of the extraction boxes, L₂ is the classification loss of the regression boxes, and L₃ is the loss of the lesion region.
5. The infrared thermal imaging-based superficial lesion detection system of claim 2, wherein obtaining the second lesion region image based on the initial target region and the initial protection region of the preprocessed near-infrared lesion image comprises:
determining, in the preprocessed near-infrared lesion image, an initial target region to be faded and an initial protection region to be protected, and shrinking the initial protection region to its overlap with the initial target region to obtain an initial reduction image;
adjusting the initial protection region and the initial protection region image using the initial reduction image to obtain a protection region and a protection region image;
and adjusting the initial target region using the protection region to obtain the lesion region, and covering the initial protection region image with the protection region image to obtain the second lesion region image.
6. The infrared thermal imaging-based superficial lesion detection system of claim 2, wherein the target region judgment module acquires the target lesion image from the lesion region image by:
determining the coordinates of the four corners of the lesion region image, and generating the maximum circumscribed rectangular box of the lesion region image from those four corner coordinates, the image within the maximum circumscribed rectangular box being the target lesion image.
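The maximum circumscribed rectangle in claim 6 is simply the axis-aligned box spanning the extreme coordinates of the four corner points. A minimal sketch (the function name and the `(x_min, y_min, x_max, y_max)` return convention are illustrative assumptions):

```python
def max_bounding_box(corner_points):
    """Axis-aligned circumscribed rectangle of four (x, y) corner points.

    Returns (x_min, y_min, x_max, y_max); cropping the lesion region
    image to this box yields the target lesion image.
    """
    xs = [p[0] for p in corner_points]
    ys = [p[1] for p in corner_points]
    return (min(xs), min(ys), max(xs), max(ys))
```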
7. The infrared thermal imaging-based superficial lesion detection system of claim 6, wherein the target image fading module fades the target lesion image to obtain the faded target lesion image by:
determining a preset square region centred on a target pixel in the target lesion image, and setting a weight for each pixel in the preset square region according to its positional relation to the target pixel;
and computing the weighted sum of each pixel's hue value with its corresponding weight to obtain a weighted hue value, replacing the hue value of the target pixel with the weighted hue value, thereby fading the target lesion image to obtain the faded target lesion image.
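The fading step in claim 7 is a position-weighted smoothing of the hue channel. The claim does not specify the weighting scheme, so the sketch below assumes an inverse-Manhattan-distance weight, normalised over the window, purely for illustration; the function name and the 3×3 (radius-1) window are also assumptions:

```python
def fade_hue(hue, radius=1):
    """Replace each hue value with a weighted average over a square window.

    hue: 2-D list of hue values. The weight of each neighbour decays with
    its Manhattan distance from the target pixel (an assumed scheme; the
    patent only requires weights set by positional relation).
    """
    h, w = len(hue), len(hue[0])
    out = [row[:] for row in hue]
    for y in range(h):
        for x in range(w):
            total, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        weight = 1.0 / (1 + abs(dy) + abs(dx))
                        total += weight * hue[ny][nx]
                        wsum += weight
            out[y][x] = total / wsum
    return out
```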
8. The infrared thermal imaging-based superficial lesion detection system of claim 7, wherein the lesion type output module outputs the lesion type corresponding to the near-infrared lesion image, based on the result of recognizing the faded target lesion image with the lesion type recognition model, by:
inputting the faded target lesion image into one stream of a dual-stream convolutional neural network to obtain a local feature map;
inputting the preprocessed near-infrared lesion image corresponding to the faded target lesion image into the other stream of the dual-stream convolutional neural network to obtain a global feature map;
and concatenating the local feature map and the global feature map, feeding the result into the fully connected layer of the dual-stream convolutional neural network as the final feature, and outputting the lesion type corresponding to the near-infrared lesion image.
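The fusion step at the end of claim 8 — concatenate the two feature vectors and pass them through one fully connected layer — can be sketched in plain Python. The two convolutional streams themselves are omitted; all names, and treating the features as already-flattened vectors, are illustrative assumptions:

```python
def fuse_and_classify(local_feat, global_feat, fc_weights, fc_bias):
    """Concatenate local and global features, apply one FC layer,
    and return the index of the highest-scoring lesion type.

    fc_weights: one weight row per lesion type, each of length
    len(local_feat) + len(global_feat); fc_bias: one bias per type.
    """
    fused = list(local_feat) + list(global_feat)  # concatenation
    scores = [sum(w * f for w, f in zip(row, fused)) + b
              for row, b in zip(fc_weights, fc_bias)]
    return max(range(len(scores)), key=lambda i: scores[i])
```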
9. The superficial lesion detection system based on infrared thermal imaging of any one of claims 1-8, wherein the infrared image preprocessing module preprocesses the near-infrared lesion image acquired by the infrared image acquisition module by:
filtering noise from the near-infrared lesion image with a low-pass filter, and standardizing the size of the near-infrared lesion image through interpolation;
and acquiring the grayscale image corresponding to the size-standardized near-infrared lesion image, and normalizing the grayscale image.
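Two of the preprocessing steps in claim 9 can be sketched directly: size standardization by interpolation and grayscale conversion. The claim does not name an interpolation kernel, so nearest-neighbour is used here purely for illustration; the ITU-R BT.601 luminance weights in the grayscale step are likewise an assumption, as is every function name:

```python
def resize_nearest(img, out_h, out_w):
    """Standardize image size via nearest-neighbour interpolation
    (an assumed kernel; the patent only requires interpolation)."""
    h, w = len(img), len(img[0])
    return [[img[y * h // out_h][x * w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def to_grayscale(rgb):
    """Convert a 2-D list of (r, g, b) tuples to luminance values
    using the common BT.601 weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb]
```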
10. The superficial lesion detection system based on infrared thermal imaging of claim 9, wherein normalizing the grayscale image comprises:
normalizing the grayscale image using the following equation:
y = (x − p_min) / (p_max − p_min)
where y is the output pixel value, x is the input pixel value, p_max is the maximum pixel value in the grayscale image, and p_min is the minimum pixel value in the grayscale image.
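The min-max normalization described in claim 10 maps every pixel into [0, 1]. A minimal sketch (the function name and the zero-range guard are assumptions):

```python
def normalize_grayscale(gray):
    """Min-max normalize a 2-D grayscale image:
    y = (x - p_min) / (p_max - p_min)."""
    flat = [v for row in gray for v in row]
    p_min, p_max = min(flat), max(flat)
    scale = p_max - p_min
    if scale == 0:  # constant image: nothing to stretch
        return [[0.0 for _ in row] for row in gray]
    return [[(v - p_min) / scale for v in row] for row in gray]
```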
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211221996.XA CN115546147A (en) | 2022-10-08 | 2022-10-08 | Superficial lesion detection system based on infrared thermal imaging |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115546147A true CN115546147A (en) | 2022-12-30 |
Family
ID=84731419
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116798583A (en) * | 2023-06-28 | 2023-09-22 | 华东师范大学 | Pathological tissue macroscopic information acquisition and analysis system and analysis method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||