CN110974306B - System for identifying and locating pancreatic neuroendocrine tumors under ultrasound endoscopy - Google Patents

System for identifying and locating pancreatic neuroendocrine tumors under ultrasound endoscopy

Info

Publication number
CN110974306B
CN110974306B (application CN201911301799.7A)
Authority
CN
China
Prior art keywords
image
neuroendocrine tumor
pancreatic neuroendocrine
color band
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911301799.7A
Other languages
Chinese (zh)
Other versions
CN110974306A
Inventor
李真
戚庆庆
冯健
左秀丽
李延青
杨晓云
邵学军
季锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Medcare Digital Engineering Co ltd
Qilu Hospital of Shandong University
Original Assignee
Qingdao Medcare Digital Engineering Co ltd
Qilu Hospital of Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Medcare Digital Engineering Co ltd and Qilu Hospital of Shandong University
Priority to CN201911301799.7A
Publication of CN110974306A
Application granted
Publication of CN110974306B
Active legal status
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/46Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B8/461Displaying means of special interest
    • A61B8/463Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/12Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5223Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data

Abstract

The invention discloses a system for identifying and locating pancreatic neuroendocrine tumors (PNET) under an ultrasound endoscope, comprising: an image acquisition module for generating a training set, configured to label the pancreatic neuroendocrine tumor regions in the sample-set images with a multi-target labeling tool; an auxiliary diagnosis module configured to construct an auxiliary diagnosis model and, after the model is optimized on the training set, to identify pancreatic neuroendocrine tumor lesion regions in input preprocessed images; and a joint judgment module configured to display the output result in the form of a color band diagram, used to judge the accuracy of the lesion-region identification result. By intelligently and automatically identifying PNET under the ultrasound endoscope, the invention can accurately identify and locate PNET among the large number of images generated during an ultrasound endoscopy, improving the PNET detection rate and reducing missed diagnoses.

Description

System for identifying and locating pancreatic neuroendocrine tumors under ultrasound endoscopy
Technical Field
The invention relates to the technical field of intelligent auxiliary tumor diagnosis, and in particular to a system for identifying and locating pancreatic neuroendocrine tumors under an ultrasound endoscope.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
At present, the identification and localization of pancreatic neuroendocrine tumors (PNET) under an ultrasound endoscope still depend on the examining operator, who relies on personal experience to identify and locate PNET in the ultrasound image during scanning. However, because ultrasound-endoscopic images of PNET lesions are highly similar to the normal pancreas background, and the continuous scan produces a large number of images, identifying and locating PNET during the examination is difficult; even examiners with rich ultrasound-endoscopic experience misdiagnose or miss lesions to a certain extent.
Disclosure of Invention
In order to solve the above problems, the invention provides a system for identifying and locating pancreatic neuroendocrine tumors under an ultrasound endoscope, which can quickly and accurately identify and locate PNET during examination.
In some embodiments, the following technical scheme is adopted:
a system for endosonically identifying and locating a pancreatic neuroendocrine tumor, comprising:
an image acquisition module, connected to the endoscope host through an acquisition card, which acquires each frame of image information output by the endoscope host; single-frame endoscopic images with PNET lesions are selected to construct a sample set;
a training set production module configured to label the pancreatic neuroendocrine tumor regions in the sample-set images with a multi-target labeling tool, while generating labeled text information corresponding to each labeled position; the labeled regions and their corresponding labeled text form the training set;
an auxiliary diagnosis module configured to construct an auxiliary diagnosis model and, after the model is optimized on the training set, to identify pancreatic neuroendocrine tumor lesion regions in input preprocessed images;
and a joint judgment module configured to display the output result in the form of a color band diagram, used to judge the accuracy of the pancreatic neuroendocrine tumor lesion-region identification result.
Further, displaying the output result in the form of a color band diagram specifically comprises:
(1) setting an initial value of the color band diagram;
(2) judging whether the PNET output probability of the current frame image is greater than a set value; if so, increasing the color band value corresponding to the current image by the current probability value; if the PNET output probability is less than the set value, decreasing the color band value corresponding to the current image by the difference between the set value and the current output probability;
(3) repeating step (2) until all images have been judged, then connecting the color band values of the frames in input order to obtain the final color band diagram;
(4) judging the reliability of the current auxiliary diagnosis model's output through the color change of the color band diagram.
In other embodiments, the following technical solutions are adopted:
a terminal device comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer readable storage medium is used for storing a plurality of instructions adapted to be loaded by a processor and for performing the following process:
the method comprises the steps that an endoscope host is accessed through an acquisition card, and image information of each frame acquired by the endoscope host is acquired; selecting a single frame of endoscopic image with PNET lesion to construct a sample set;
marking the pancreatic neuroendocrine tumor region in the sample set image by using a multi-target marking tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
constructing an auxiliary diagnosis model, and performing optimization training on a training set, and then performing pancreatic neuroendocrine tumor lesion region identification on an input preprocessed image;
and displaying the output result in a color band diagram form, and judging the accuracy of the pancreatic neuroendocrine tumor lesion area identification result.
In other embodiments, the following technical solutions are adopted:
a computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the process of:
the method comprises the steps that an endoscope host is accessed through an acquisition card, and image information of each frame acquired by the endoscope host is acquired; selecting a single frame of endoscopic image with PNET lesion to construct a sample set;
marking the pancreatic neuroendocrine tumor region in the sample set image by using a multi-target marking tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
constructing an auxiliary diagnosis model, and performing optimization training on a training set, and then performing pancreatic neuroendocrine tumor lesion region identification on an input preprocessed image;
and displaying the output result in a color band diagram form, and judging the accuracy of the pancreatic neuroendocrine tumor lesion area identification result.
Compared with the prior art, the invention has the beneficial effects that:
the invention can visually judge the certainty factor of the current output result by observing whether the color change of the color band diagram is consistent; the reliability of the diagnosis result is improved.
Through the intelligent automatic identification of PNET under the ultrasonic endoscope, the PNET can be accurately identified and positioned in a large number of generated ultrasonic endoscope pictures in the ultrasonic endoscope inspection process, the detection rate of the PNET is improved, and missed diagnosis is reduced.
Drawings
FIG. 1 is a schematic diagram of the system operation process for identifying and locating pancreatic neuroendocrine tumors under ultrasound endoscopy.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one
In one or more embodiments, a system for identifying and locating a pancreatic neuroendocrine tumor under ultrasound endoscopy is disclosed, comprising:
an image acquisition module, connected to the endoscope host through an acquisition card, which acquires each frame of image information output by the endoscope host; single-frame endoscopic images with PNET lesions are manually selected to construct a sample set;
the image preprocessing module is configured to preprocess the acquired image information;
Since images with PNET lesions are captured one frame at a time under the ultrasound endoscope in clinical practice, the patient's private data must be removed from each image. To reduce the amount of calculation, the black border is removed and only the colored digestive tract area is retained.
Each frame undergoes black-border removal, scaling, and normalization: after the black-border algorithm, the redundant boundaries of the endoscopic image are removed and only the ROI (region of interest) is retained; all images are then scaled to 416x416 resolution with a bicubic interpolation algorithm.
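As a minimal sketch of this preprocessing, assuming a pure-Python image representation and a brightness cutoff that the patent does not specify (it only names the black-border algorithm, the 416x416 bicubic scaling, and normalization):

```python
def crop_black_border(img, thresh=10):
    # img: list of rows, each row a list of (r, g, b) pixel tuples.
    # thresh is an assumed brightness cutoff; the patent names a
    # black-border algorithm but gives no concrete threshold.
    h, w = len(img), len(img[0])
    is_bright = lambda px: max(px) > thresh
    rows = [y for y in range(h) if any(is_bright(img[y][x]) for x in range(w))]
    cols = [x for x in range(w) if any(is_bright(img[y][x]) for y in range(h))]
    if not rows:                      # fully black frame: nothing to crop
        return img
    return [row[cols[0]:cols[-1] + 1] for row in img[rows[0]:rows[-1] + 1]]

def normalize(img):
    # Scale pixel values to [0, 1]. In the full pipeline the cropped ROI
    # would also be resized to 416x416 with bicubic interpolation, e.g.
    # with OpenCV: cv2.resize(roi, (416, 416), interpolation=cv2.INTER_CUBIC)
    return [[tuple(c / 255.0 for c in px) for px in row] for row in img]
```

The crop keeps the smallest rectangle containing every non-black pixel, which matches the stated goal of retaining only the colored digestive tract area.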
A training set making module configured to label a pancreatic neuroendocrine tumor region in the sample set image using a multi-objective labeling tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
and performing PNET identification by using a target detection deep learning technology, marking a characteristic region of the image, and recording region fixed-point coordinates by using a picture frame in a disease region. The embodiment uses a Yolo target detection model, and an ultrasonic endoscopic image without PNET does not need to be marked.
The specific labeling method comprises the following steps:
Observe the PNET characteristic region of each image and draw a rectangular frame with the labeling tool: take the circumscribed rectangle of the lesion region as the center, and draw a rectangle that completely contains it and sits 10 pixels away from the circumscribed rectangle on the top, bottom, left, and right.
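The 10-pixel margin rule above can be sketched as follows (the function name and the clipping to the image bounds are illustrative assumptions, not from the patent):

```python
def annotation_box(lesion_box, img_w, img_h, margin=10):
    """Expand the lesion's circumscribed rectangle by `margin` pixels on
    every side, clipped to the image bounds.

    lesion_box: (x_min, y_min, x_max, y_max) of the circumscribed rectangle.
    Returns the rectangle actually drawn by the annotator.
    """
    x0, y0, x1, y1 = lesion_box
    return (max(0, x0 - margin), max(0, y0 - margin),
            min(img_w - 1, x1 + margin), min(img_h - 1, y1 + margin))
```

For example, a lesion rectangle (50, 60, 100, 120) in a 416x416 frame would be annotated as (40, 50, 110, 130).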
The auxiliary diagnosis module is configured to construct an auxiliary diagnosis model, and after optimization training is carried out on the auxiliary diagnosis model through a training set, pancreatic neuroendocrine tumor lesion region identification is carried out on an input preprocessed image;
in this embodiment, the auxiliary diagnosis model may adopt a YOLO v3 neural network model, which has the characteristics of high detection accuracy and high detection speed, and can meet the requirement of real-time detection of the digestive endoscopy.
To achieve better training, a dynamic learning rate is used, given by the formula:
learning_rate=base_lr*(1-epoch/train_epoch)*2;
where learning_rate is the current learning rate, base_lr is the initial learning rate, epoch is the current iteration number, and train_epoch is the total number of training iterations.
To avoid overfitting, the decrease of the loss function is observed in real time, and training is stopped promptly once the loss function has largely stopped changing.
The effect of model training is evaluated by a loss function, and the goal of training is to find its minimum via gradient descent. The learning rate sets how far the parameters are adjusted on each batch, and a dynamic learning rate speeds up training: at the start, a larger learning rate makes large per-batch adjustments so the loss falls quickly; near the minimum, adjustments that are too large can overshoot the lowest point, causing the loss to rise and oscillate around the minimum without settling on it, so the learning rate is dynamically reduced.
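The dynamic learning rate can be sketched directly from the formula as printed. Note the hedge in the comment: the trailing "*2" may be a transcription of a squared term, the more common polynomial-decay form, but the code follows the text as published:

```python
def dynamic_lr(base_lr, epoch, train_epoch):
    # Formula exactly as printed in the description:
    #   learning_rate = base_lr * (1 - epoch / train_epoch) * 2
    # The "*2" may originally have been "^2", i.e.
    # base_lr * (1 - epoch / train_epoch) ** 2 (polynomial decay);
    # the published text does not make this unambiguous.
    return base_lr * (1 - epoch / train_epoch) * 2
```

Either reading decays linearly (or quadratically) from the initial value to zero at the final epoch, matching the stated goal of shrinking the adjustment step near the loss minimum.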
In other embodiments, the auxiliary diagnostic model specifically works as follows:
In the first stage, an attention map and coarse bounding boxes are predicted from a reduced version of the complete picture to obtain the position and rough size of the pancreatic neuroendocrine tumor; this downsampled processing helps reduce inference time and capture context information.
Specifically, the original picture is reduced so that its long edge is 255 pixels. Three attention maps are predicted at the upsampling layers of an hourglass network, used respectively to predict small (smaller than 32), medium (between 32 and 96), and large (larger than 96) PNET characteristic regions; controlling the size range predicted at each scale simplifies subsequent cropping. Training uses a focal loss with alpha = 2, with the midpoint of each ground-truth bounding box set as a positive sample and the rest as negative samples. At test time, a PNET characteristic-region center is generated wherever the response exceeds the threshold t = 0.3.
The positions derived from the reduced full picture determine where further processing is required. If crops were taken directly from the zoomed-out picture, some PNET characteristic regions might be too small to detect accurately, so size information must be obtained on a high-resolution feature map from the start.
A (rough) center position is taken from the attention map, and a magnification factor is chosen according to the rough size of the PNET characteristic region (smaller targets are magnified more). The magnification factor is applied at each possible center position (x, y); the image is then mapped back to the original image, and a 255 x 255 region centered at (x, y) is taken as the cropping region.
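Mapping a coarse center found on the reduced picture back to a 255 x 255 cropping region on the original frame might look like the sketch below; the `scale` parameter (original-to-reduced ratio) and the clamping at image borders are assumptions the text does not spell out:

```python
def crop_region(center_scaled, scale, img_w, img_h, crop=255):
    """Map a coarse center (x, y) found on the downscaled picture back to
    the original frame and take a crop x crop window around it.

    scale: ratio of original size to reduced size (assumed helper value).
    Returns (x0, y0, x1, y1) of the cropping region in original coordinates.
    """
    cx = int(center_scaled[0] * scale)      # center back in original coords
    cy = int(center_scaled[1] * scale)
    half = crop // 2
    # Clamp so the window stays inside the frame where possible.
    x0 = min(max(cx - half, 0), max(img_w - crop, 0))
    y0 = min(max(cy - half, 0), max(img_h - crop, 0))
    return x0, y0, min(x0 + crop, img_w), min(y0 + crop, img_h)
```

For instance, a center (50, 50) on a picture reduced by a factor of 4 maps to (200, 200) on a 1000x800 original, yielding the window (73, 73, 328, 328).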
The positions derived from the predicted bounding boxes carry more size information about the PNET characteristic region; the resulting bounding-box size can be used to determine the zoom size.
Finally, the detection frame is generated through a corner detection mechanism: corner heat maps, embeddings, and offsets are predicted for the cropping region, and the coordinates are mapped back to the original image.
Redundant frames are then eliminated with the Soft NMS algorithm; detection frames touching the boundary of the cropping region can be removed manually.
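Soft NMS, the algorithm named here for eliminating redundant frames, decays the scores of boxes overlapping the current best box rather than discarding them outright. A Gaussian-decay sketch follows; `sigma` and the score threshold are conventional defaults, not values from the patent:

```python
import math

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS sketch. boxes: list of (x0, y0, x1, y1);
    returns the indices of kept boxes, highest-scoring first."""

    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        if inter == 0:
            return 0.0
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    idxs, scores, keep = list(range(len(boxes))), list(scores), []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:   # everything left is negligible
            break
        keep.append(best)
        idxs.remove(best)
        for i in idxs:                    # decay overlapping boxes' scores
            scores[i] *= math.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
    return keep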
And the identification result auditing module is configured to audit the identification result, re-label the pancreatic neuroendocrine tumor lesion area of the image with the identification error, and modify the labeled text information.
Because pancreatic cancer and other conditions have ultrasound-endoscopic features similar to PNET, after the PNET recognition model has been trained, images that are recognized incorrectly during clinical testing are collected and added to the training set as negative samples for retraining.
This allows the output accuracy of the auxiliary diagnosis model to be optimized continually.
And the joint judgment module is configured to display the output result in a color band diagram form and is used for judging the accuracy of the pancreatic neuroendocrine tumor lesion area identification result.
A single-frame inference by the neural network may be erroneous, but certainty increases significantly when inferences over consecutive frames consistently point to the same conclusion. The color change of the color band makes the certainty of the current AI inference intuitively visible.
Therefore, a color band diagram running from light to deep is designed, with each frame's output corresponding to one color band value. During clinical examination, the color band deepens whenever the PNET recognition model outputs a suspected PNET (network output probability greater than 50%), and lightens whenever it does not (output probability less than 50%).
Specifically, the method comprises the following steps:
1) set the initial color band value to 0;
2) judge whether the PNET output probability of the current frame image is greater than 50%;
3) if the PNET output probability is greater than 50%, add the probability value to the current color band value;
4) if the PNET output probability is less than 50%, subtract the difference between 50% and the current output probability from the current color band value, i.e.: current color band value = previous color band value - (50% - current PNET output probability).
Steps 2)-4) are repeated until all images have been judged; the color band values of the frames are then connected in input order to obtain the final color band diagram.
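Steps 1)-4) above can be sketched as follows (threshold and initial value as stated in the text; the function name is illustrative):

```python
def color_band(probs, threshold=0.5, init=0.0):
    """Build the color-band values, one per frame: the band deepens while
    consecutive frames keep suggesting PNET and lightens otherwise.

    probs: per-frame PNET output probabilities in input order.
    Returns the list of accumulated color band values.
    """
    band, value = [], init
    for p in probs:
        if p > threshold:
            value += p                     # suspected PNET: deepen the band
        else:
            value -= (threshold - p)       # below threshold: lighten the band
        band.append(value)
    return band
```

For example, probabilities [0.9, 0.8, 0.2] yield band values [0.9, 1.7, 1.4]: two confident detections deepen the band, and the low-probability frame lightens it by 0.3.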
Example two
In one or more embodiments, a terminal device is disclosed that includes a processor and a computer-readable storage medium, the processor to implement instructions; the computer readable storage medium is used for storing a plurality of instructions adapted to be loaded by a processor and to perform the process of figure 1:
the method comprises the steps that an endoscope host is accessed through an acquisition card, and image information of each frame acquired by the endoscope host is acquired; selecting a single frame of endoscopic image with PNET lesion to construct a sample set;
marking the pancreatic neuroendocrine tumor region in the sample set image by using a multi-target marking tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
constructing an auxiliary diagnosis model, and performing optimization training on a training set, and then performing pancreatic neuroendocrine tumor lesion region identification on an input preprocessed image;
and displaying the output result in a color band diagram form, and judging the accuracy of the pancreatic neuroendocrine tumor lesion area identification result.
In other embodiments, a computer-readable storage medium is disclosed having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the process of fig. 1:
the method comprises the steps that an endoscope host is accessed through an acquisition card, and image information of each frame acquired by the endoscope host is acquired; selecting a single frame of endoscopic image with PNET lesion to construct a sample set;
marking the pancreatic neuroendocrine tumor region in the sample set image by using a multi-target marking tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
constructing an auxiliary diagnosis model, and performing optimization training on a training set, and then performing pancreatic neuroendocrine tumor lesion region identification on an input preprocessed image;
and displaying the output result in a color band diagram form, and judging the accuracy of the pancreatic neuroendocrine tumor lesion area identification result.
The specific implementation of the above process corresponds to the working process of the corresponding functional modules in Embodiment One and is not repeated here.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the invention; those skilled in the art should understand that various modifications and variations can be made on the basis of the technical solution of the invention without inventive effort.

Claims (7)

1. A system for identifying and locating a pancreatic neuroendocrine tumor under ultrasound endoscopy, comprising:
the image acquisition module is connected to the endoscope host through an acquisition card to acquire image information of each frame acquired by the endoscope host; selecting a single-frame endoscopic image with pancreatic neuroendocrine tumor lesion to construct a sample set;
a training set making module configured to label a pancreatic neuroendocrine tumor region in the sample set image using a multi-objective labeling tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
the auxiliary diagnosis module is configured to construct an auxiliary diagnosis model and, after the model is optimized on the training set, to identify pancreatic neuroendocrine tumor lesion regions in input preprocessed images; the specific process comprises: predicting an attention map and coarse bounding boxes from a reduced version of the complete picture to obtain the position and rough size of the lesion region in the picture; performing target detection on the picture in which the lesion region has been identified; magnifying the lesion-region center position acquired from the attention map by a set multiple and mapping the image back to the original image; taking an image of a set size centered at each possible center position as a cropping region; generating detection frames through a corner detection mechanism by predicting corner heat maps, embeddings, and offsets for the cropping region, and finally mapping the coordinates back to the original image; eliminating redundant frames; and finally outputting bounding boxes marking the pancreatic neuroendocrine tumor lesion region;
the joint judgment module is configured to display the output result in the form of a color band diagram, used to judge the accuracy of the pancreatic neuroendocrine tumor lesion-region identification result; displaying the output result in the form of a color band diagram specifically comprises: step (1): setting an initial value of the color band diagram; step (2): judging whether the pancreatic neuroendocrine tumor output probability of the current frame image is greater than a set value; if so, increasing the color band value corresponding to the current image by the current probability value; if the output probability of the pancreatic neuroendocrine tumor is less than the set value, decreasing the color band value corresponding to the current image by the difference between the set value and the current output probability; step (3): repeating step (2) until all images have been judged, then connecting the color band values of the frames in input order to obtain the final color band diagram; step (4): judging the reliability of the current auxiliary diagnosis model's output through the color change of the color band diagram.
2. The system of claim 1, further comprising: the image preprocessing module is configured to preprocess the acquired image information; the image preprocessing module carries out image preprocessing, and the image preprocessing process comprises the following steps:
removing information contained in the image that relates to patient privacy;
removing black frames of the image;
and adjusting all image resolutions to be set resolution by adopting a bicubic interpolation scaling algorithm.
3. The system for identifying and locating a pancreatic neuroendocrine tumor under ultrasound endoscopy according to claim 1, further comprising an identification result auditing module configured to audit the identification results, re-label the pancreatic neuroendocrine tumor lesion regions of incorrectly identified images, and modify the labeled text information.
4. The system for endosonographically identifying and locating neuroendocrine tumors of the pancreas according to claim 1, wherein a dynamic learning rate is used in training the aided diagnosis model, specifically:
learning_rate=base_lr*(1-epoch/train_epoch)*2;
wherein learning_rate is the current learning rate, base_lr is the initial learning rate, epoch is the current iteration number, and train_epoch is the total number of training iterations.
5. The system of claim 1, wherein the auxiliary diagnosis model displays lesion regions whose lesion probability is greater than a set value, and stores the lesion-bearing images and corresponding label data for review.
6. A terminal device comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; a computer readable storage medium for storing a plurality of instructions adapted to be loaded by a processor and to perform the following process:
the method comprises the steps that an endoscope host is accessed through an acquisition card, and image information of each frame acquired by the endoscope host is acquired; selecting a single-frame endoscopic image with pancreatic neuroendocrine tumor lesion to construct a sample set;
marking the pancreatic neuroendocrine tumor region in the sample set image by using a multi-target marking tool; meanwhile, generating labeled text information corresponding to the labeled position; the marked region and the marked text information corresponding to the region form a training set;
constructing an auxiliary diagnosis model, performing optimization training on it with the training set, and then identifying pancreatic neuroendocrine tumor lesion regions in input preprocessed images; the specific process comprises: predicting an attention map and coarse bounding boxes from a reduced version of the complete picture to obtain the position and rough size of the lesion region in the picture; performing target detection on the picture in which the lesion region has been identified; magnifying the lesion-region center position acquired from the attention map by a set multiple and mapping the image back to the original image; taking an image of a set size centered at each possible center position as a cropping region; generating detection frames through a corner detection mechanism by predicting corner heat maps, embeddings, and offsets for the cropping region, and finally mapping the coordinates back to the original image; eliminating redundant frames; and finally outputting bounding boxes marking the pancreatic neuroendocrine tumor lesion region;
displaying the output result in the form of a color band diagram, used to judge the accuracy of the pancreatic neuroendocrine tumor lesion-region identification result; wherein displaying the output result in the form of a color band diagram specifically comprises: step (1): setting an initial value of the color band diagram; step (2): judging whether the pancreatic neuroendocrine tumor output probability of the current frame image is greater than a set value; if so, increasing the color band value corresponding to the current image by the current probability value; if the output probability of the pancreatic neuroendocrine tumor is less than the set value, decreasing the color band value corresponding to the current image by the difference between the set value and the current output probability; step (3): repeating step (2) until all images have been judged, then connecting the color band values of the frames in input order to obtain the final color band diagram; step (4): judging the reliability of the current auxiliary diagnosis model's output through the color change of the color band diagram.
7. A computer-readable storage medium having stored therein a plurality of instructions, wherein the instructions are adapted to be loaded by a processor of a terminal device and to perform the following process:
accessing the endoscope host through an acquisition card and acquiring each frame of image information collected by the endoscope host; selecting single-frame endoscopic images showing pancreatic neuroendocrine tumor lesions to construct a sample set;
marking the pancreatic neuroendocrine tumor regions in the sample set images with a multi-target labeling tool, and simultaneously generating labeled text information corresponding to the labeled positions; the labeled regions and their corresponding labeled text information form a training set;
constructing an auxiliary diagnosis model, performing optimization training on the training set, and then performing pancreatic neuroendocrine tumor lesion region identification on an input preprocessed image; the specific process comprises: predicting an attention map and a coarse bounding box from the downscaled complete picture to obtain the position and approximate size of the lesion region in the picture; performing target detection on the picture in which the lesion region has been identified; magnifying, by a set multiple, the lesion-region center position obtained from the attention map and mapping it back to the original image; selecting an image of a set size, centered on each candidate center position, as a cropping region; generating detection boxes with a corner detection mechanism, namely predicting corner heatmaps, embeddings and offsets for the cropping region and finally mapping the coordinates back to the original image; and eliminating redundant boxes, finally outputting a bounding box marking the pancreatic neuroendocrine tumor lesion region;
displaying the output result in the form of a color band diagram for judging the accuracy of the pancreatic neuroendocrine tumor lesion region identification result; wherein displaying the output result in the form of a color band diagram specifically comprises: step (1): setting an initial value of the color band diagram; step (2): judging whether the pancreatic neuroendocrine tumor output probability of the current frame image is greater than a set value; if so, increasing the color band value corresponding to the current image by the current probability value; when the pancreatic neuroendocrine tumor output probability is smaller than the set value, subtracting the difference between the set value and the current output probability from the color band value corresponding to the current image; step (3): repeating step (2) until all images have been judged, and connecting the color band values of each frame image sequentially in the image input order to obtain the final color band diagram; step (4): judging the reliability of the current output of the auxiliary diagnosis model through the color changes of the color band diagram.
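The per-frame color band update described in steps (1)–(3) above can be sketched as a simple running accumulation. This is a minimal illustrative interpretation, not the patented implementation: the initial value (`init`) and the set value (`threshold`) are hypothetical parameters, since the claim does not fix their numbers.

```python
def color_band(probs, init=0.0, threshold=0.5):
    """Return one color band value per frame, in input order.

    probs: per-frame pancreatic neuroendocrine tumor output probabilities.
    If a frame's probability exceeds the threshold, the band value rises
    by that probability; otherwise it falls by (threshold - probability).
    """
    band = []
    value = init
    for p in probs:
        if p > threshold:
            value += p                 # confident frame: band value rises
        else:
            value -= (threshold - p)   # weak frame: fall by the shortfall
        band.append(value)
    return band
```

Connecting the returned values in sequence yields the final color band diagram; a sustained rise suggests the model's identifications are consistent across frames, while a falling band flags low-reliability output.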
CN201911301799.7A 2019-12-17 2019-12-17 System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope Active CN110974306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911301799.7A CN110974306B (en) 2019-12-17 2019-12-17 System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911301799.7A CN110974306B (en) 2019-12-17 2019-12-17 System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope

Publications (2)

Publication Number Publication Date
CN110974306A CN110974306A (en) 2020-04-10
CN110974306B true CN110974306B (en) 2021-02-05

Family

ID=70094647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911301799.7A Active CN110974306B (en) 2019-12-17 2019-12-17 System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope

Country Status (1)

Country Link
CN (1) CN110974306B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7457571B2 (en) * 2020-05-19 2024-03-28 富士フイルムヘルスケア株式会社 Ultrasound diagnostic device and diagnostic support method
CN111862109B (en) * 2020-06-28 2024-02-23 国网山东省电力公司德州供电公司 System and device for multi-target acquisition, image recognition and automatic labeling of recognition results
CN112085113B (en) * 2020-09-14 2021-05-04 四川大学华西医院 Severe tumor image recognition system and method
CN114587416A (en) * 2022-03-10 2022-06-07 山东大学齐鲁医院 Gastrointestinal tract submucosal tumor diagnosis system based on deep learning multi-target detection
CN115187596B (en) * 2022-09-09 2023-02-10 中国医学科学院北京协和医院 Neural intelligent auxiliary recognition system for laparoscopic colorectal cancer surgery
CN116269749B (en) * 2023-03-06 2023-10-10 东莞市东部中心医院 Laparoscopic bladder cancer surgical system with improved reserved nerves

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005204958A (en) * 2004-01-23 2005-08-04 Pentax Corp Autofluorescently observable electronic endoscope apparatus and system
US9990472B2 (en) * 2015-03-23 2018-06-05 Ohio State Innovation Foundation System and method for segmentation and automated measurement of chronic wound images
CN107862694A (en) * 2017-12-19 2018-03-30 济南大象信息技术有限公司 A kind of hand-foot-and-mouth disease detecting system based on deep learning
CN109671053A (en) * 2018-11-15 2019-04-23 首都医科大学附属北京友谊医院 A kind of gastric cancer image identification system, device and its application
CN110335230A (en) * 2019-03-30 2019-10-15 复旦大学 A kind of endoscopic image lesion real-time detection method and device

Also Published As

Publication number Publication date
CN110974306A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110974306B (en) System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
DK2973397T3 (en) Tissue-object-based machine learning system for automated assessment of digital whole-slide glass
CN111445478B (en) Automatic intracranial aneurysm region detection system and detection method for CTA image
CN112257704A (en) Cervical fluid-based cell digital image classification method based on deep learning detection model
US11645753B2 (en) Deep learning-based multi-site, multi-primitive segmentation for nephropathology using renal biopsy whole slide images
CN110570350A (en) two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium
CN111340827A (en) Lung CT image data processing and analyzing method and system
CN108830149B (en) Target bacterium detection method and terminal equipment
CN109117890A (en) A kind of image classification method, device and storage medium
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
KR20230097646A (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method to improve gastro polyp and cancer detection rate
WO2021159778A1 (en) Image processing method and apparatus, smart microscope, readable storage medium and device
CN116703837B (en) MRI image-based rotator cuff injury intelligent identification method and device
CN113485555A (en) Medical image reading method, electronic equipment and storage medium
CN113160175A (en) Tumor lymphatic vessel infiltration detection method based on cascade network
CN116091522A (en) Medical image segmentation method, device, equipment and readable storage medium
CN116563305A (en) Segmentation method and device for abnormal region of blood vessel and electronic equipment
CN113469942B (en) CT image lesion detection method
CN114332858A (en) Focus detection method and device and focus detection model acquisition method
CN113255756A (en) Image fusion method and device, electronic equipment and storage medium
CN111612755A (en) Lung focus analysis method, device, electronic equipment and storage medium
Cui et al. Cobb Angle Measurement Method of Scoliosis Based on U-net Network
CN112734707A (en) Auxiliary detection method, system and device for 3D endoscope and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Zhen

Inventor after: Qi Qingqing

Inventor after: Feng Jian

Inventor after: Zuo Xiuli

Inventor after: Li Yanqing

Inventor after: Yang Xiaoyun

Inventor after: Shao Xuejun

Inventor after: Ji Rui

Inventor before: Li Zhen

Inventor before: Qi Qingqing

Inventor before: Feng Jian

Inventor before: Zuo Xiuli

Inventor before: Li Yanqing

Inventor before: Yang Xiaoyun

Inventor before: Shao Xuejun

Inventor before: Ji Rui

GR01 Patent grant