CN113205141B - Parathyroid gland identification method based on image fusion technology - Google Patents

Parathyroid gland identification method based on image fusion technology

Info

Publication number
CN113205141B
CN113205141B (application CN202110499036.9A)
Authority
CN
China
Prior art keywords
image
feature
layer
convolution
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110499036.9A
Other languages
Chinese (zh)
Other versions
CN113205141A (en)
Inventor
赵婉君
赵星
王宇
石一磊
朱精强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maide Intelligent Technology Wuxi Co ltd
Original Assignee
Maide Intelligent Technology Wuxi Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maide Intelligent Technology Wuxi Co ltd filed Critical Maide Intelligent Technology Wuxi Co ltd
Priority to CN202110499036.9A priority Critical patent/CN113205141B/en
Publication of CN113205141A publication Critical patent/CN113205141A/en
Application granted granted Critical
Publication of CN113205141B publication Critical patent/CN113205141B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4007Interpolation-based scaling, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/031Recognition of patterns in medical or anatomical images of internal organs

Abstract

The application discloses a parathyroid recognition method based on an image fusion technology, relating to the technical field of medical assistance. The method uses a trained parathyroid recognition model to perform feature extraction and fusion on a target fluorescence development image and a target live-action image of the thyroid tissue to be identified under the same visual field, so as to recognize the parathyroid gland. By exploiting the fact that tissues such as lymph nodes and fat are easy to distinguish on the live-action image, and by fusing the features of the fluorescence development image and the live-action image with a deep-learning image fusion technology, the method overcomes the misrecognition problem of the near-infrared autofluorescence development recognition method, achieves higher recognition and positioning accuracy for the parathyroid gland, and reduces the risk of damaging parathyroid tissue in neck operations such as thyroid surgery.

Description

Parathyroid gland identification method based on image fusion technology
Technical Field
The application relates to the technical field of medical assistance, in particular to a parathyroid recognition method based on an image fusion technology.
Background
Parathyroid glands are important endocrine glands in humans. A living parathyroid gland is generally oblate and elliptic in shape, yellow or brownish yellow in appearance, and about the size of a soybean. Typically, each person has four parathyroid glands symmetrically distributed at the middle and lower parts of the back of the left and right thyroid lobes, but the position and number of parathyroid glands vary greatly among individuals: 48%-62% of Chinese people have four parathyroid glands, 15% have only two, about 20% of upper parathyroid glands are asymmetric, and about 30% of lower parathyroid glands are asymmetric. The main function of the parathyroid gland is to secrete parathyroid hormone (PTH), whose main target organs are the kidney and bone and whose physiological function is to regulate calcium and phosphorus metabolism and maintain the calcium-phosphorus balance in the body. If the parathyroid glands are cut by mistake in neck operations such as thyroid surgery, parathyroid hormone secretion becomes insufficient, the blood calcium level falls and the blood phosphorus level rises, causing hypoparathyroidism in the patient, with symptoms such as numbness of the hands, feet and lips and tetany, which seriously affect the patient's postoperative quality of life. Erroneous removal of parathyroid glands during surgery has become one of the main causes of medical disputes in neck operations such as thyroid surgery. Postoperative hypoparathyroidism caused by damaged parathyroid glands is a persistent problem for thyroid surgeons; it is reported that in about 20%-30% of thyroidectomy operations healthy parathyroid glands are miscut, so that patients develop hypoparathyroidism after surgery. Because parathyroid glands are small, variable in position and number, and not easily distinguished from ectopic thyroid, ectopic thymus, surrounding fat and lymph node tissue, the risk of miscutting, injury, contusion or damage to the blood supply is greatly increased; in particular, the lower parathyroid glands often "mix" with the central-region lymph nodes and have a higher risk of being miscut or losing their blood supply during central-region lymph node dissection.
The accurate identification of parathyroid glands during surgery is a precondition for protecting them from being miscut or damaged. The conventional methods for identifying parathyroid glands during surgery mainly include the following, each of which has its own advantages and disadvantages in practical application:
1. and (5) visual recognition: the surgical field is observed visually or by means of a endoscope to find suspected parathyroid tissue. The method has the advantages of short time consumption and low cost, and has the defects of high subjectivity related to personal level of doctors and extremely high risk for the inexperienced doctors.
2. Staining recognition: parathyroid glands or their surrounding tissues are marked by staining with a specific dye so that the parathyroid glands can be distinguished. The dyes used for marking are divided into positive and negative dyes. A positive dye directly stains and marks parathyroid tissue, for example methylene blue; a negative dye stains and marks the tissue around the parathyroid glands so as to set off the parathyroid tissue, for example injected nano-carbon. Staining recognition has the advantages of being relatively noninvasive and fairly accurate, but its cost is high and the timing, site and dosage of the staining remain disputed; moreover, some staining agents have side effects on the human body, for example methylene blue may cause nausea, vomiting, fever, hypoxia, pain at the urethral orifice and other discomfort in patients.
3. Autofluorescence development identification: studies have shown that parathyroid glands emit near-infrared autofluorescence with a peak at 820-830 nm when irradiated with light at a wavelength of 785 nm. This autofluorescence is emitted by intrinsic fluorophores and differs from the fluorescence produced by fluorescent marker dyes, and this property can be used to distinguish parathyroid glands from other surrounding tissues. Identifying parathyroid glands by autofluorescence development has the advantages of high accuracy, simple operation, high speed and no trauma, and it avoids the possible side effects of fluorescent dyes and contrast agents, so it has far wider prospects for research and application than the other recognition methods. In actual use, however, it has been found that some lymph nodes and adipose tissue burned by the electrotome also produce autofluorescence and cause interference, so the accuracy of autofluorescence development identification is difficult to guarantee.
Disclosure of Invention
Aiming at the problems and the technical requirements, the inventor provides a parathyroid gland identification method based on an image fusion technology, and the technical scheme of the application is as follows:
a parathyroid recognition method based on an image fusion technology, the method comprising:
acquiring a plurality of groups of sample images, wherein each group of sample images respectively comprises a fluorescence development image and a real image of a corresponding thyroid tissue sample under the same time and the same visual field, the fluorescence development image is a thyroid tissue image acquired by using a fluorescence imaging instrument under the irradiation of near infrared rays of a target wave band, and the real image is a thyroid tissue image acquired by using a camera under the same visual field;
training based on each group of sample images to obtain a parathyroid recognition model, wherein the parathyroid recognition model comprises a first feature extraction unit, a second feature extraction unit and a feature fusion module, the first feature extraction unit comprises a plurality of convolution layers and is used for carrying out feature extraction on a fluorescence development image in the sample images, the second feature extraction unit comprises a plurality of convolution layers and is used for carrying out feature extraction on a live-action image in the sample images, and the feature fusion module fuses features extracted by the first feature extraction unit and the second feature extraction unit;
the method comprises the steps of obtaining a target fluorescent development image and a target live-action image of thyroid tissue to be identified under the same visual field, inputting a parathyroid identification model to identify the parathyroid, and carrying out feature extraction on the target fluorescent development image by a first feature extraction unit and carrying out feature extraction on the target live-action image by a second feature extraction unit.
The further technical scheme is that the first feature extraction unit performs feature extraction on the target fluorescent development image to obtain a first feature image with a first image size and a fifth feature image with a second image size, and the second feature extraction unit performs feature extraction on the target live-action image to obtain a second feature image with the first image size and a sixth feature image with the second image size, wherein the second image size is larger than the first image size;
the feature fusion module comprises a first feature splicing layer, a third feature extraction unit, a second feature splicing layer, a fourth feature extraction unit and a segmentation map extraction layer;
the first feature stitching layer stitches the first feature image and the second feature image according to the channel to obtain a third feature image with a first image size;
the third feature extraction unit performs feature extraction on the third feature map to obtain a fourth feature map with the second image size;
the second feature stitching layer stitches the fifth feature image, the sixth feature image and the fourth feature image according to the channels to obtain a seventh feature image with a second image size;
the fourth feature extraction unit performs feature extraction on the seventh feature image to obtain an eighth feature image with the same image size as the target fluorescence development image and the target live-action image;
the segmentation map extraction layer extracts the region where the parathyroid gland is located from the eighth feature map.
The third feature extraction unit comprises a first convolution layer, a first global pooling layer, a first multiplication layer and a first up-sampling layer, wherein the first convolution layer carries out convolution processing on the third feature map and then inputs the result to the first global pooling layer and the first multiplication layer, the first global pooling layer carries out global pooling processing on the output of the first convolution layer and then inputs the result to the first multiplication layer, and the first multiplication layer carries out multiplication processing on the output of the first convolution layer and the output of the first global pooling layer and then outputs the result to the first up-sampling layer for up-sampling to obtain the fourth feature map.
The further technical scheme is that the first global pooling layer averages the elements of each channel of the output of the first convolution layer to obtain a feature map with the size of 1×1, the feature map is input into a convolution operation with a 1×1 convolution kernel, and finally a sigmoid activation function is used for activation to obtain the output of the first global pooling layer.
The fourth feature extraction unit comprises a second convolution layer, a second global pooling layer, a second multiplication layer, a third convolution layer and a second upsampling layer, wherein the second convolution layer carries out convolution processing on the seventh feature map and then inputs the result to the second global pooling layer and the second multiplication layer, the second global pooling layer carries out global pooling processing on the output of the second convolution layer and then inputs the result to the second multiplication layer, the second multiplication layer carries out multiplication processing on the output of the second convolution layer and the output of the second global pooling layer and then outputs the result to the third convolution layer, and after the convolution operation of the third convolution layer is completed, its output is passed to the second upsampling layer for upsampling to obtain the eighth feature map.
The eighth feature map comprises a first channel and a second channel, the segmentation map extraction layer compares the element values of the eighth feature map according to the channels, if the element value of the first channel is larger than or equal to that of the second channel, the element value is set to be 0, otherwise, the element value is set to be 1, after all the element values are processed, the element values are converted into gray maps to obtain segmentation maps, and the region where parathyroid glands are located is extracted.
The further technical scheme is that for any one convolution layer in the first feature extraction unit and the second feature extraction unit, the convolution layer sequentially executes convolution operation, batch normalization operation and activation operation.
The method further comprises the following steps:
labeling the region where the parathyroid gland is located in each group of sample images, performing binarization processing on the labeled fluorescence development images and the real scene images, setting the gray value of the pixel point of the region labeled as the parathyroid gland in the images to 255, setting the gray value of the rest pixel points to 0, obtaining the label images corresponding to the sample images, and updating model parameters by using a loss function based on the label images corresponding to each group of sample images in the training process of the parathyroid gland identification model.
The further technical scheme is that an Adam optimizer is used in the training process of the parathyroid recognition model, and a DiceLoss plus cross entropy loss function is adopted as the loss function.
The further technical scheme is that a plurality of groups of sample images are obtained, including:
shooting thyroid tissue samples with a near infrared autofluorescence imaging instrument equipped with a camera, wherein the field of view of the camera coincides with that of the near infrared autofluorescence imaging instrument probe, and the camera is connected to a live-action imaging system; the camera and the imaging instrument probe collect images at the same moment, the live-action image is collected by the camera, and the fluorescence development image is collected by the imaging instrument probe.
The beneficial technical effects of the application are as follows:
The application discloses a parathyroid recognition method based on an image fusion technology. By exploiting the fact that tissues such as lymph nodes and fat are easy to distinguish on the live-action image, and by fusing the features of the fluorescence development image and the live-action image with a deep-learning image fusion technology to recognize the parathyroid gland, the method overcomes the misrecognition problem of the near-infrared autofluorescence development recognition method, achieves higher recognition and positioning accuracy for the parathyroid gland, and reduces the risk of damaging parathyroid tissue in neck operations such as thyroid surgery.
Drawings
FIG. 1 is a schematic diagram of a parathyroid recognition model according to the present application.
Detailed Description
The following describes the embodiments of the present application further with reference to the drawings.
The application discloses a parathyroid gland identification method based on an image fusion technology, which comprises the following steps:
Step S1: a plurality of groups of sample images are obtained, each group of sample images corresponds to one thyroid tissue sample, and each group of sample images comprises a fluorescence development image and a live-action image of the corresponding thyroid tissue sample acquired at the same time and under the same visual field.
Wherein the fluorescence development image is an image obtained by using a fluorescence imaging instrument under the irradiation of near infrared rays of a target wave band, and the target wave band in the application is a wave band with a wavelength of 785 nm. The live-action image is an image acquired under the same field of view using a camera under shadowless lamp light.
In this step, the acquired fluorescence development image and live-action image are two images of the same thyroid tissue sample taken at the same time and under the same visual field. The application uses a near-infrared autofluorescence imaging instrument equipped with a camera to shoot the thyroid tissue sample. The near-infrared autofluorescence imaging instrument is existing, mature equipment consisting mainly of an imaging instrument probe and a fluorescence imaging system: the probe acquires the images, and the fluorescence imaging system processes, displays and otherwise handles the images acquired by the probe. The camera can be added to an existing near-infrared autofluorescence imaging instrument; its field of view coincides with that of the probe, and it is connected to a live-action imaging system that processes and displays the images acquired by the camera. In use, the camera and the imaging instrument probe collect images at the same moment, the live-action image being collected by the camera and the fluorescence development image by the probe, and the images collected by the camera and the probe at the same moment are taken as one group of sample images. The live-action images acquired by the camera at different moments are stored in time order, and the fluorescence development images acquired by the probe at different moments are likewise stored in time order.
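For illustration only, the pairing of frames described above can be sketched as follows. The patent does not specify how frames are stored; the file-naming scheme (a shared acquisition timestamp embedded in each filename) and the use of Python are assumptions.

```python
from pathlib import Path


def pair_sample_images(fluor_dir: str, live_dir: str):
    """Group fluorescence and live-action frames captured at the same moment.

    Assumes files are named like 'fluor_<timestamp>.png' and 'live_<timestamp>.png'
    (hypothetical convention), where frames from the probe and the camera taken at
    the same moment share the same <timestamp>.
    """
    fluor = {p.stem.split("_", 1)[1]: p for p in Path(fluor_dir).glob("fluor_*.png")}
    live = {p.stem.split("_", 1)[1]: p for p in Path(live_dir).glob("live_*.png")}
    common = sorted(fluor.keys() & live.keys())   # timestamps present in both streams
    return [(fluor[t], live[t]) for t in common]  # each pair is one group of sample images
```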
After the sample image is acquired, a corresponding label image is required to be obtained through processing, namely, the region where the parathyroid gland is located in the sample image is required to be marked, usually, the marking is carried out by professional medical staff, and if the parathyroid gland does not exist in the image, no marking is carried out on the image. And (3) performing binarization processing on the marked fluorescent development image and the real image, setting the gray value of the pixel point of the area marked as the parathyroid gland in the image to 255 and setting the gray values of the other pixel points to 0, and obtaining a label image corresponding to the sample image.
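A minimal sketch of building such a label image is given below, assuming the clinician's annotation is available as polygon vertices (the patent does not fix an annotation format) and using NumPy/OpenCV for illustration.

```python
import numpy as np
import cv2


def make_label_image(image_shape, parathyroid_polygons):
    """Return the label image: gray value 255 inside annotated parathyroid regions, 0 elsewhere."""
    label = np.zeros(image_shape[:2], dtype=np.uint8)       # every pixel starts at gray value 0
    for poly in parathyroid_polygons:                        # an empty list leaves an all-zero label
        cv2.fillPoly(label, [np.asarray(poly, dtype=np.int32)], 255)
    return label
```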
Step S2, training based on each group of sample images to obtain a parathyroid recognition model, wherein the model structure of the parathyroid recognition model is shown in fig. 1, and the application introduces the structure of the parathyroid recognition model and the processing procedures of each layer in step S3.
In the training process of the parathyroid recognition model, the model parameters are updated by using a loss function based on the label image corresponding to each group of sample images; an Adam optimizer is used, and DiceLoss plus a cross entropy loss function is adopted as the loss function. The Adam optimizer formulas are as follows:
v_t = β_2·v_{t-1} + (1 − β_2)·g_t²
m_t = β_1·m_{t-1} + (1 − β_1)·g_t
m̂_t = m_t / (1 − β_1^t),  v̂_t = v_t / (1 − β_2^t)
θ_t = θ_{t-1} − η·m̂_t / (√v̂_t + ε)
wherein g_t is the gradient at time t; m_t is the exponential moving average of the gradient at time t, with initial value m_0 = 0; v_t is the exponential moving average of the squared gradient at time t, with initial value v_0 = 0; β_1 and β_2 are the exponential decay rates of the two moving averages; η represents the learning rate; ε = 10^{-8} avoids a zero divisor; θ_t represents the model parameters at time t and θ_{t-1} the model parameters at time t−1.
The DiceLoss loss function formula is DiceLoss = 1 − 2·Σ(y·y′) / (Σy + Σy′), and the cross entropy loss function formula is loss = −(y·log(y′) + (1 − y)·log(1 − y′)), where y′ represents the predicted value and y represents the label value.
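A minimal sketch of this training objective is shown below. The patent does not name an implementation framework; PyTorch, the smoothing constant eps, the learning rate and the two-channel logit layout are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def dice_plus_ce_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """logits: (N, 2, H, W) raw scores; target: (N, H, W) long tensor of 0/1 labels."""
    ce = F.cross_entropy(logits, target)              # cross entropy term
    prob = torch.softmax(logits, dim=1)[:, 1]         # predicted parathyroid probability
    tgt = target.float()
    inter = (prob * tgt).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + tgt.sum() + eps)  # Dice loss term
    return ce + dice


# Optimizer setup sketch; 'model' is a hypothetical parathyroid recognition network object.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
```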
Step S3: a target fluorescence development image and a target live-action image of the thyroid tissue to be identified under the same visual field are acquired and input into the parathyroid gland identification model. The method for acquiring the target fluorescence development image and the target live-action image is similar to the method for acquiring the fluorescence development image and the live-action image of the thyroid tissue sample in step S1, and the acquired target fluorescence development image and target live-action image are images of the thyroid tissue to be identified at the same time and under the same visual field.
The parathyroid recognition model comprises a first feature extraction unit, a second feature extraction unit and a feature fusion module; the first feature extraction unit comprises a plurality of convolution layers and performs feature extraction on its input image, and the second feature extraction unit likewise comprises a plurality of convolution layers and performs feature extraction on its input image. In the model training stage, the first feature extraction unit performs feature extraction on the labeled fluorescence development image in each group of sample images, and the second feature extraction unit performs feature extraction on the labeled live-action image in the same group of sample images. In the model use stage, the first feature extraction unit performs feature extraction on the target fluorescence development image, and the second feature extraction unit performs feature extraction on the target live-action image. In both the training and use stages, each pair of corresponding fluorescence development and live-action images has the same image size. The feature fusion module fuses the features extracted by the first feature extraction unit and the second feature extraction unit. Specifically:
1. The first feature extraction unit performs feature extraction on the target fluorescence development image to obtain a first feature map F1 with a first image size and a fifth feature map F5 with a second image size, the second image size being larger than the first image size. The first feature extraction unit applies successive convolution layers to the target fluorescence development image to progressively reduce the image size: the fifth feature map F5 of the second image size is extracted first, further convolution layers continue to reduce the image size, and finally the first feature map F1 of the first image size is extracted.
Each convolution layer in the first feature extraction unit sequentially performs a convolution operation, a batch normalization operation and an activation operation. The batch normalization operation is x′_i = γ·(x_i − mean(x_i)) / √(var(x_i) + ε) + β, where x_i represents the input and x′_i the processed output; mean(x_i) represents the mean of the batch data and var(x_i) its variance; γ and β respectively represent the learnable scaling and translation parameters of the model during training, with initial values 1 and 0; and ε is a small positive number set to avoid a zero divisor. The activation operation uses the Leaky ReLU activation function, x′_i = x_i for x_i ≥ 0 and x′_i = α·x_i for x_i < 0, where α is a small positive slope coefficient, x_i represents the input and x′_i the processed output.
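For illustration only, such a convolution layer could be expressed in PyTorch as follows; the 3×3 kernel size and the negative slope of 0.01 are assumptions, since the patent only fixes the convolution → batch normalization → Leaky ReLU order.

```python
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    """One convolution layer of the feature extraction units: conv -> batch norm -> Leaky ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),           # learnable gamma/beta initialised to 1 and 0
        nn.LeakyReLU(0.01, inplace=True), # assumed negative slope
    )
```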
More specifically, in one example the image size of the target fluorescence development image is 512×512, and the size of the feature map is gradually reduced in the first feature extraction unit by convolution operations with a step size of 2, so that a fifth feature map F5 with a second image size of 128×128 and a first feature map F1 with a first image size of 32×32 can be extracted.
2. The second feature extraction unit performs feature extraction on the target live-action image to obtain a second feature map F2 with the first image size and a sixth feature map F6 with the second image size. The second feature extraction unit is similar in structure to the first feature extraction unit, and each convolution layer is also similar in structure and operation to the convolution layers in the first feature extraction unit. Therefore, in the above example, the image size of the input target live-action image is 512×512, the size of the feature map is gradually reduced by the convolution operation with the step size of 2, and the sixth feature map F6 with the second image size 128×128 and the second feature map F2 with the first image size 32×32 can be extracted.
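A minimal sketch of one such feature extraction branch under the example sizes above (512×512 input, stride-2 convolutions down to 128×128 and then 32×32) follows; the channel widths and the 3-channel input are assumptions, and the same structure serves both the fluorescence and live-action branches.

```python
import torch.nn as nn


def _down(in_ch: int, out_ch: int) -> nn.Sequential:
    # stride-2 convolution -> batch normalization -> Leaky ReLU; halves the feature map size
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.01, inplace=True),
    )


class FeatureExtractor(nn.Module):
    """One branch; the first and second feature extraction units share this structure."""

    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.to_f5 = nn.Sequential(_down(in_ch, 32), _down(32, 64))   # 512 -> 256 -> 128
        self.to_f1 = nn.Sequential(_down(64, 128), _down(128, 256))   # 128 -> 64 -> 32

    def forward(self, x):
        f5 = self.to_f5(x)   # feature map at the second image size (e.g. 128x128)
        f1 = self.to_f1(f5)  # feature map at the first image size (e.g. 32x32)
        return f1, f5
```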
3. The feature fusion module comprises a first feature splicing layer, a third feature extraction unit, a second feature splicing layer, a fourth feature extraction unit and a segmentation map extraction layer:
(1) The first feature stitching layer stitches the first feature map F1 and the second feature map F2 according to the channel to obtain a third feature map F3 with a first image size.
(2) The third feature extraction unit performs feature extraction on the third feature map F3 to obtain a fourth feature map F4 having the second image size. Specific: the third feature extraction unit includes a first convolution layer, a first global pooling layer, a first multiplication layer, and a first upsampling layer, wherein:
the first convolution layer carries out convolution processing on the third feature map F3 and then inputs the third feature map F3 into the first global pooling layer and the first multiplication layer, in the application, the first convolution layer uses the convolution check of 1*1 to carry out convolution operation on the third feature map F3 to fuse information of each channel, the number of channels is reduced, and the feature map with the first image size after image fusion is input into the first global pooling layer and the first multiplication layer.
The first global pooling layer performs global pooling processing on the output of the first convolution layer and inputs the result to the first multiplication layer. In the application, the first global pooling layer performs a global average pooling operation, a convolution operation and an activation operation: it averages the elements of each channel of the output of the first convolution layer to obtain a feature map of size 1×1, inputs this feature map to a convolution operation with a 1×1 convolution kernel, and finally applies a sigmoid activation function to obtain the output of the first global pooling layer, whose image size is 1×1. The sigmoid activation function is x′_i = 1 / (1 + e^(−x_i)), where x_i represents the input and x′_i the processed output.
The first multiplication layer multiplies the output of the first convolution layer and the output of the first global pooling layer, that is, the element value of each channel of the output of the first global pooling layer is multiplied by every element on the corresponding channel of the output of the first convolution layer, and the result is output to the first upsampling layer for upsampling to obtain a fourth feature map F4 with the second image size. In one example, the first multiplication layer multiplies the output of the first convolution layer, which has the first image size 32×32, with the output of the first global pooling layer, which has the image size 1×1, to obtain a feature map that also has the first image size 32×32, and the first upsampling layer amplifies this feature map to a fourth feature map F4 with the second image size 128×128 using a bilinear interpolation algorithm.
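A sketch of this third feature extraction unit (1×1 convolution, global-pooling channel attention, channel-wise multiplication, bilinear upsampling from the first to the second image size) is given below; the channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThirdFeatureExtraction(nn.Module):
    """1x1 conv -> global pooling attention -> channel-wise multiply -> bilinear upsampling."""

    def __init__(self, in_ch: int, out_ch: int, scale: int = 4):
        super().__init__()
        self.conv1x1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)    # first convolution layer (channel fusion)
        self.pool_conv = nn.Conv2d(out_ch, out_ch, kernel_size=1) # 1x1 conv applied to the pooled 1x1 map
        self.scale = scale

    def forward(self, f3):
        x = self.conv1x1(f3)
        w = torch.sigmoid(self.pool_conv(F.adaptive_avg_pool2d(x, 1)))  # first global pooling layer -> (N, C, 1, 1)
        x = x * w                                                       # first multiplication layer (channel-wise)
        return F.interpolate(x, scale_factor=self.scale,                # first up-sampling layer: 32 -> 128
                             mode="bilinear", align_corners=False)


# Usage sketch (names hypothetical):
# f3 = torch.cat([f1_fluor, f1_live], dim=1)              # first feature stitching layer (channel concat)
# f4 = ThirdFeatureExtraction(in_ch=512, out_ch=128)(f3)  # 32x32 -> 128x128
```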
(3) The second feature stitching layer stitches the fifth feature map F5, the sixth feature map F6 and the fourth feature map F4 according to the channels to obtain a seventh feature map F7 with the second image size.
(4) The fourth feature extraction unit performs feature extraction on the seventh feature map to obtain an eighth feature map F8 having the same image size as the target fluorescent developed image and the target live-action image. Specific: the fourth feature extraction unit includes a second convolution layer, a second global pooling layer, a second multiplication layer, a third convolution layer, and a second upsampling layer, wherein:
the second convolution layer carries out convolution processing on the seventh feature map F7 and then inputs the feature map F7 into the second global pooling layer and the second multiplying layer, in the application, the second convolution layer uses the convolution check of 1*1 to carry out convolution operation on the seventh feature map F7 to fuse information of each channel, the number of channels is reduced, and the feature map with the second image size after image fusion is input into the second global pooling layer and the second multiplying layer.
The second global pooling layer performs global pooling processing on the output of the second convolution layer and inputs the processed output to the second multiplication layer, and the structure and the executed operation of the second global pooling layer are the same as those of the first global pooling layer, so that the application is not repeated.
The second multiplication layer multiplies the output of the second convolution layer and the output of the second global pooling layer and outputs the result to the third convolution layer, and the operation performed by the second multiplication layer is similar to that performed by the first multiplication layer, which is not repeated in the present application.
The third convolution layer performs a convolution operation on the output of the second multiplication layer to obtain an output having a channel number of 2 and a second image size.
After the convolution operation is completed, the output of the third convolution layer is passed to the second up-sampling layer for up-sampling to obtain the eighth feature map F8; the second up-sampling layer also uses a bilinear interpolation algorithm to amplify the feature map of the second image size to the same size as the original target live-action image.
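A corresponding sketch of the fourth feature extraction unit follows: the same 1×1-convolution plus global-pooling attention pattern, a third convolution producing two channels, and bilinear upsampling back to the input image size. The intermediate channel count and the 3×3 kernel of the third convolution are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FourthFeatureExtraction(nn.Module):
    """1x1 conv -> global pooling attention -> multiply -> conv to 2 channels -> bilinear upsampling."""

    def __init__(self, in_ch: int, mid_ch: int = 64, scale: int = 4):
        super().__init__()
        self.conv1x1 = nn.Conv2d(in_ch, mid_ch, kernel_size=1)        # second convolution layer
        self.pool_conv = nn.Conv2d(mid_ch, mid_ch, kernel_size=1)     # 1x1 conv inside the pooling branch
        self.out_conv = nn.Conv2d(mid_ch, 2, kernel_size=3, padding=1)  # third convolution layer -> 2 channels
        self.scale = scale

    def forward(self, f7):
        x = self.conv1x1(f7)
        w = torch.sigmoid(self.pool_conv(F.adaptive_avg_pool2d(x, 1)))  # second global pooling layer
        x = self.out_conv(x * w)                                        # second multiplication layer, then conv
        return F.interpolate(x, scale_factor=self.scale,                # second up-sampling layer: 128 -> 512
                             mode="bilinear", align_corners=False)
```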
(5) The segmentation map extraction layer extracts the region where the parathyroid gland is located from the eighth feature map. As described above, the eighth feature map comprises a first channel and a second channel. The segmentation map extraction layer compares the element values of the eighth feature map channel by channel: if the element value of the first channel is greater than or equal to that of the second channel, the element value is set to 0, otherwise it is set to 1. After all element values have been processed, they are converted into a gray map, namely each element is multiplied by 255, to obtain the segmentation map, and the region where the parathyroid gland is located is finally extracted; that is, the parathyroid gland in the tissue image of the thyroid tissue to be identified is identified. In clinical application, the segmentation map can be used to display the identified parathyroid gland on the target live-action image, which is convenient for clinical use.
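A short sketch of this segmentation map extraction and of overlaying the result on the live-action image is given below; the green tint used for the overlay is an assumption for illustration, as the patent does not specify a display color.

```python
import numpy as np


def extract_segmentation(f8: np.ndarray) -> np.ndarray:
    """f8: (2, H, W) eighth feature map -> (H, W) uint8 segmentation map with values 0 or 255."""
    mask = (f8[1] > f8[0]).astype(np.uint8)  # 1 where the second channel exceeds the first, else 0
    return mask * 255                        # convert to a gray map


def overlay_on_live_image(live_bgr: np.ndarray, seg: np.ndarray) -> np.ndarray:
    """Blend a green tint over the identified parathyroid region of the target live-action image."""
    out = live_bgr.copy()
    region = seg == 255
    out[region] = (0.5 * out[region] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
    return out
```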
The above is only a preferred embodiment of the present application, and the present application is not limited to the above examples. It is to be understood that other modifications and variations which may be directly derived or contemplated by those skilled in the art without departing from the spirit and concepts of the present application are deemed to be included within the scope of the present application.

Claims (9)

1. A parathyroid recognition method based on an image fusion technology, the method comprising:
acquiring a plurality of groups of sample images, wherein each group of sample images respectively comprises a fluorescence development image and a real image of a corresponding thyroid tissue sample under the same time and the same visual field, the fluorescence development image is a thyroid tissue image acquired by using a fluorescence imaging instrument under the irradiation of near infrared rays of a target wave band, and the real image is a thyroid tissue image acquired by using a camera under the same visual field;
training based on each group of sample images to obtain a parathyroid recognition model, wherein the parathyroid recognition model comprises a first feature extraction unit, a second feature extraction unit and a feature fusion module, the first feature extraction unit comprises a plurality of convolution layers and is used for carrying out feature extraction on fluorescence development images in the sample images, the second feature extraction unit comprises a plurality of convolution layers and is used for carrying out feature extraction on live-action images in the sample images, and the feature fusion module fuses features extracted by the first feature extraction unit and the second feature extraction unit;
acquiring a target fluorescent development image and a target live-action image of thyroid tissue to be identified under the same visual field, inputting the target fluorescent development image and the target live-action image into the parathyroid gland identification model to identify the parathyroid gland in the thyroid tissue image to be identified, performing feature extraction on the target fluorescent development image by the first feature extraction unit to obtain a first feature image with a first image size and a fifth feature image with a second image size, and performing feature extraction on the target live-action image by the second feature extraction unit to obtain a second feature image with the first image size and a sixth feature image with the second image size, wherein the second image size is larger than the first image size;
the feature fusion module comprises a first feature splicing layer, a third feature extraction unit, a second feature splicing layer, a fourth feature extraction unit and a segmentation map extraction layer;
the first feature stitching layer stitches the first feature image and the second feature image according to a channel to obtain a third feature image with the first image size;
the third feature extraction unit performs feature extraction on the third feature map to obtain a fourth feature map with the second image size;
the second feature stitching layer stitches the fifth feature image, the sixth feature image and the fourth feature image according to channels to obtain a seventh feature image with the second image size;
the fourth feature extraction unit performs feature extraction on the seventh feature map to obtain an eighth feature map with the same image size as the target fluorescent development image and the target live-action image;
and the segmentation map extraction layer extracts the region where the parathyroid gland is located from the eighth feature map.
2. The method of claim 1, wherein
the third feature extraction unit comprises a first convolution layer, a first global pooling layer, a first multiplication layer and a first up-sampling layer, the first convolution layer carries out convolution processing on the third feature map and then inputs the result to the first global pooling layer and the first multiplication layer, the first global pooling layer carries out global pooling processing on the output of the first convolution layer and then inputs the result to the first multiplication layer, and the first multiplication layer carries out multiplication processing on the output of the first convolution layer and the output of the first global pooling layer and then outputs the result to the first up-sampling layer for up-sampling to obtain the fourth feature map.
3. The method of claim 2, wherein the first global pooling layer averages the elements of each channel of the output of the first convolution layer to obtain a feature map of size 1×1 and inputs it to a convolution operation with a 1×1 convolution kernel, and finally a sigmoid activation function is used for activation to obtain the output of the first global pooling layer.
4. The method of claim 1, wherein
the fourth feature extraction unit comprises a second convolution layer, a second global pooling layer, a second multiplication layer, a third convolution layer and a second upsampling layer, wherein the second convolution layer carries out convolution processing on the seventh feature map and then inputs the result into the second global pooling layer and the second multiplication layer, the second global pooling layer carries out global pooling processing on the output of the second convolution layer and then inputs the result into the second multiplication layer, the second multiplication layer carries out multiplication processing on the output of the second convolution layer and the output of the second global pooling layer and then outputs the result to the third convolution layer, and after the convolution operation of the third convolution layer is completed, its output is passed to the second upsampling layer for upsampling to obtain the eighth feature map.
5. The method of claim 1, wherein
the eighth feature map comprises a first channel and a second channel, the segmentation map extraction layer compares the element values of the eighth feature map according to the channels, if the element value of the first channel is larger than or equal to that of the second channel, the element value is set to be 0, otherwise, the element value is set to be 1, after all the element values are processed, the element values are converted into gray maps to obtain segmentation maps, and the region where parathyroid glands are located is extracted.
6. The method of claim 1, wherein, for any one convolution layer in the first feature extraction unit and the second feature extraction unit, the convolution layer performs a convolution operation, a batch normalization operation and an activation operation in order.
7. The method according to any one of claims 1-6, further comprising:
labeling the region where the parathyroid gland is located in each group of sample images, performing binarization processing on the labeled fluorescence development images and the real scene images, setting the gray value of the pixel point of the region labeled as the parathyroid gland in the images to 255, setting the gray value of the rest pixel points to 0, and obtaining the label image corresponding to the sample images.
8. The method of claim 7, wherein
an Adam optimizer is used in the training process of the parathyroid recognition model, and a DiceLoss plus cross entropy loss function is adopted as the loss function.
9. The method of any one of claims 1-6, wherein the acquiring a plurality of sets of sample images comprises:
shooting thyroid tissue samples with a near infrared autofluorescence imaging instrument equipped with a camera, wherein the field of view of the camera coincides with that of the probe of the near infrared autofluorescence imaging instrument, and the camera is connected to a live-action imaging system; the camera and the imaging instrument probe collect images at the same moment, the live-action image is collected by the camera, and the fluorescence development image is collected by the imaging instrument probe.
CN202110499036.9A 2021-05-08 2021-05-08 Parathyroid gland identification method based on image fusion technology Active CN113205141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110499036.9A CN113205141B (en) 2021-05-08 2021-05-08 Parathyroid gland identification method based on image fusion technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110499036.9A CN113205141B (en) 2021-05-08 2021-05-08 Parathyroid gland identification method based on image fusion technology

Publications (2)

Publication Number Publication Date
CN113205141A CN113205141A (en) 2021-08-03
CN113205141B true CN113205141B (en) 2023-08-29

Family

ID=77030463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110499036.9A Active CN113205141B (en) 2021-05-08 2021-05-08 Parathyroid gland identification method based on image fusion technology

Country Status (1)

Country Link
CN (1) CN113205141B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113796850A (en) * 2021-09-27 2021-12-17 四川大学华西医院 Parathyroid MIBI image analysis system, computer device, and storage medium
CN115797617A (en) * 2022-12-05 2023-03-14 杭州显微智能科技有限公司 Parathyroid gland identification method and intelligent endoscope camera system device
CN116369959B (en) * 2023-06-05 2023-08-11 杭州医策科技有限公司 Parathyroid preoperative positioning method and device based on bimodal CT

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019200753A1 (en) * 2018-04-17 2019-10-24 平安科技(深圳)有限公司 Lesion detection method, device, computer apparatus and storage medium
CN111209810A (en) * 2018-12-26 2020-05-29 浙江大学 Bounding box segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time in visible light and infrared images
CN111681273A (en) * 2020-06-10 2020-09-18 创新奇智(青岛)科技有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN112037216A (en) * 2020-09-09 2020-12-04 南京诺源医疗器械有限公司 Image fusion method for medical fluorescence imaging system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101870837B1 (en) * 2017-11-17 2018-06-27 부경대학교 산학협력단 Parathyroid real-time imaging system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019200753A1 (en) * 2018-04-17 2019-10-24 平安科技(深圳)有限公司 Lesion detection method, device, computer apparatus and storage medium
CN111209810A (en) * 2018-12-26 2020-05-29 浙江大学 Bounding box segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time in visible light and infrared images
CN111681273A (en) * 2020-06-10 2020-09-18 创新奇智(青岛)科技有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN112037216A (en) * 2020-09-09 2020-12-04 南京诺源医疗器械有限公司 Image fusion method for medical fluorescence imaging system

Also Published As

Publication number Publication date
CN113205141A (en) 2021-08-03

Similar Documents

Publication Publication Date Title
CN113205141B (en) Parathyroid gland identification method based on image fusion technology
US20230394660A1 (en) Wound imaging and analysis
EP2817782B1 (en) Video endoscopic system
CN109948671B (en) Image classification method, device, storage medium and endoscopic imaging equipment
CN105286768B (en) Human health status tongue coating diagnosis device based on mobile phone platform
KR101784063B1 (en) Pen-type medical fluorescence image device and system which registers multiple fluorescent images using the same
KR20100136540A (en) Locating and analyzing perforator flaps for plastic and reconstructive surgery
KR20200116107A (en) Automated monitoring of medical imaging procedures
CN106455942B (en) Processing unit, endoscope apparatus and system, the method for work of image processing apparatus
EP3821790A1 (en) Medical image processing device, medical image processing system, medical image processing method, and program
CN108319977B (en) Cervical biopsy region identification method and device based on channel information multi-mode network
CN114004969A (en) Endoscope image focal zone detection method, device, equipment and storage medium
WO2023044376A1 (en) Methods and systems for generating simulated intraoperative imaging data of a subject
Jang et al. Development of the digital tongue inspection system with image analysis
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network
CN111150369A (en) Medical assistance apparatus, medical assistance detection apparatus and method
CN113693724B (en) Irradiation method, device and storage medium suitable for fluorescence image navigation operation
CN114757894A (en) Bone tumor focus analysis system
JP2023519489A (en) Supporting medical procedures using luminescence images processed with restricted information regions identified in corresponding auxiliary images
WO2021051222A1 (en) Endoscope system, mixed light source, video acquisition device and image processor
CN115762722B (en) Image review system based on artificial intelligence
Pan et al. Detection model of nasolaryngology lesions based on multi-scale contextual information fusion and cascading mixed attention
Millan-Arias et al. General Cephalometric Landmark Detection for Different Source of X-Ray Images
Saab et al. Contribution of hyperspectral imaging in interventional environment: application to orthopedic surgery
CN116071633A (en) Method and system for identifying and tracking nasal cavity neoplasms based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant