CN113222051A - Image labeling method based on small intestine focus characteristics


Info

Publication number: CN113222051A
Application number: CN202110580649.5A
Authority: CN (China)
Prior art keywords: focus, image, labeling, detection, marked
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 陈发青, 李汭恒, 赵殊一, 候文雨, 肖治国, 卢佳, 赵楠, 李念峰, 李东旭
Current Assignee: Changchun University (the listed assignees may be inaccurate)
Original Assignee: Changchun University
Priority date: 2021-05-26
Filing date: 2021-05-26
Publication date: 2021-08-06

Classifications

    • G06F18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting (G06F — Electric digital data processing)
    • G06T7/0012 — Image analysis: biomedical image inspection (G06T — Image data processing or generation, in general)
    • G06T2207/10068 — Image acquisition modality: endoscopic image
    • G06T2207/20081 — Special algorithmic details: training; learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06V2201/03 — Recognition of patterns in medical or anatomical images (G06V — Image or video recognition or understanding)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image annotation method comprising: determining at least one image block in an image to be annotated; determining feature information for each image block, the feature information uniquely representing the corresponding block; and, when target feature information matching reference feature information exists among the feature information, labeling the image block corresponding to the target feature information, thereby labeling the annotation object in the image, the reference feature information corresponding to the annotation object. Because the image blocks are labeled automatically, this annotation scheme overcomes the inefficiency of the manual labeling used in the prior art and lays a foundation for obtaining training samples efficiently.
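Purely as an illustration of the block-matching flow just described, the sketch below uses a grid of mean intensities as the "feature information"; that representation and all names (block_feature, label_matching_blocks, tol) are assumptions, since the abstract does not fix them:

    import numpy as np

    def block_feature(block: np.ndarray, grid: int = 4) -> np.ndarray:
        """Summarize an image block by a grid of mean intensities
        (one assumed stand-in for the block's feature information)."""
        h, w = block.shape[:2]
        return np.array([
            block[i * h // grid:(i + 1) * h // grid,
                  j * w // grid:(j + 1) * w // grid].mean()
            for i in range(grid) for j in range(grid)
        ])

    def label_matching_blocks(blocks, reference_feature, tol=5.0):
        """Return indices of blocks whose feature matches the reference
        feature information within tolerance `tol`."""
        return [i for i, blk in enumerate(blocks)
                if np.linalg.norm(block_feature(blk) - reference_feature) < tol]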

Description

Image labeling method based on small intestine focus characteristics
Technical Field
The invention relates to the technical field of computers, and in particular to an image labeling method based on the characteristics of small-intestine lesions.
Background
In image-processing research, the algorithm models proposed for automatic image labeling all depend on extracting visual features from the image, and traditional feature-extraction algorithms capture only various low-level visual features, which limits the expressive power of those features. In recent years, deep learning has made breakthrough progress, relying mainly on complex network structures and the support of massive data. Most applications, however, struggle to provide enough training samples, so the model often overfits and trains poorly. Existing unsupervised learning is not yet mature enough to be applied to deep learning for current medical image detection. Fine-tuning a pre-trained model in a semi-supervised manner and applying it to the labeling method improves both the labeling effect and the detection effect, an approach that has gained recognition in medical image detection.
Extracting the characteristics of intestinal lesions differs from extracting the features of ordinary image objects: the tissue surrounding a small-intestine lesion is itself part of the lesion's characteristics and must be included in the overall lesion annotation. This basic problem extends to the annotation of entire capsule-endoscopy images.
Disclosure of Invention
The invention aims to provide an image labeling method based on the characteristics of small-intestine lesions, so as to address the large labeling workload and low labeling speed in pathological-image annotation, to increase labeling speed, and to accelerate iterative model training.
The invention provides a labeling method of capsule-endoscope lesion images for deep learning, characterized by comprising the following steps:
(1) acquiring capsule-endoscope images for manual labeling, wherein the clinician, while reading the images, marks a lesion directly upon finding it;
(2) the manual labeling standard ensures, at the data level, that the deep-learning network obtains high-quality annotation data;
(3) expanding the labeled lesion region in the computer background to obtain the corresponding detection result;
(4) during computer image detection, if the confidence of a detection result exceeds 99%, the model's detection result is considered correct and the coordinates of the detection box are converted directly into annotation coordinates of a labeled image;
(5) during intelligent detection with the detection system, lesions detected by the computer are incorporated directly into the training image gallery to take part in upgrade training of the model; the expanded annotation is determined by the manual labeling standard.
Further, the manual labeling specifically comprises:
the doctor reviews the images captured by the capsule lens and marks each lesion on the image with a small box of length a and width b; data enhancement is then performed on the small lesion patches cropped from the pathological images.
Further, the doctor's direct labeling of the lesion specifically comprises:
after finding a lesion in an image captured by the capsule lens, the doctor marks it directly with the small labeling box.
Further, the deep-learning network is a convolutional neural network, which learns better visual representations by extracting multi-instance fused high-level visual features.
Further, the expansion of the labeled lesion region by the computer background specifically comprises:
from the length a and width b of the small lesion box marked by the doctor, define an adjustment parameter k and a constant m, where
k = a/b when a/b ≤ 1
k = b/a when a/b > 1
Let x and y be the length and width of the expanded labeling area:
when k ≥ (2 − m): x = a·m, y = b·m
when (m − 1) ≤ k < (2 − m): map k onto the interval [0, m − 1) via K = k − (m − 1), and then x = a·m·[(2 − m) + K], y = b·m·[(2 − m) + K]
when k < (m − 1) (i.e. k < 0.33 for m = 1.33): x = a, y = b
Analysis of the results of repeated trials gives m = 1.33.
Further, directly converting the coordinates of the detection box in the detection result into annotation coordinates of a labeled image specifically comprises:
the computer expands the detected lesion box into a doctor-style annotation box, so that the doctor can see the tissue surrounding the lesion while reading the images, which greatly improves diagnostic accuracy.
Further, the detection system specifically comprises:
fine-tuning training, in a semi-supervised manner, based on a pre-trained model, applied to the labeling method to improve both the labeling effect and the detection effect.
Further, the upgrade training of the model specifically comprises:
as the labeled lesion region is enlarged, detection accuracy first improves; beyond a certain degree of enlargement it begins to fall again. Concretely, expanding the length and width of the original lesion box in equal proportion by a factor of about 1.33 works best; if the lesion is elongated and irregular, the expansion should be reduced appropriately or omitted; and the annotation must not exceed the effective area of the image.
The embodiments of the invention integrate the needs of the user and of the deep-learning algorithm: model predictions reduce the pathologist's labeling workload, speed up annotation, and improve the segmentation effect.
The foregoing is only an overview of the technical solutions of the present invention; embodiments are described below so that the technical means of the invention can be understood more clearly and the above and other objects, features and advantages become more apparent.
Drawings
FIG. 1 is a schematic diagram of small-intestine lesion labeling in the labeling method of capsule-endoscope lesion images for deep learning according to an embodiment of the invention;
FIG. 2 shows the relationship between the small-intestine lesion labeling area and the training result in the labeling method of capsule-endoscope lesion images for deep learning according to an embodiment of the invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
To make the objects, content and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings. The invention provides a labeling method of capsule-endoscope lesion images for deep learning, characterized by comprising the following steps:
the method comprises the following steps: the capsule endoscope image is obtained for manual marking, when a clinician reads the film, the marking can be directly carried out when finding the focus, and the doctor can directly mark the focus;
the manual labeling specifically comprises:
and (4) looking up the image shot by the capsule lens, and marking the focus on the image by using a small frame with the length of a and the width of b. Performing data enhancement on the cut small pictures of the pathological images;
The doctor's direct labeling of the lesion specifically comprises:
after finding a lesion in an image captured by the capsule lens, the doctor marks it directly with the small labeling box.
Step two: the manual labeling standard ensures, at the data level, that the deep-learning network obtains high-quality annotation data;
the deep learning network comprises the following steps: a convolutional neural network. The convolutional neural network learns better visual features by extracting multi-instance fused high-level visual features.
Step three: the computer background expands the labeled lesion region to obtain the corresponding detection result;
the expansion of the labeled lesion region specifically comprises:
from the length a and width b of the small lesion box marked by the doctor, define an adjustment parameter k and a constant m, where
k = a/b when a/b ≤ 1
k = b/a when a/b > 1
Let x and y be the length and width of the expanded labeling area:
when k ≥ (2 − m): x = a·m, y = b·m
when (m − 1) ≤ k < (2 − m): map k onto the interval [0, m − 1) via K = k − (m − 1), and then x = a·m·[(2 − m) + K], y = b·m·[(2 − m) + K]
when k < (m − 1) (i.e. k < 0.33 for m = 1.33): x = a, y = b
Analysis of the results of repeated trials gives m = 1.33.
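Written out, the expansion rule amounts to the following sketch; the function name expand_label is illustrative, and the default m = 1.33 is taken from the trial analysis above:

    def expand_label(a: float, b: float, m: float = 1.33) -> tuple:
        """Expand a doctor-drawn lesion box of length a and width b;
        returns (x, y), the expanded length and width."""
        k = a / b if a / b <= 1 else b / a   # aspect ratio folded into (0, 1]
        if k >= 2 - m:                       # near-square box: full expansion
            return a * m, b * m
        if k >= m - 1:                       # intermediate box: graded expansion
            K = k - (m - 1)                  # map k onto [0, m - 1)
            scale = m * ((2 - m) + K)
            return a * scale, b * scale
        return a, b                          # elongated box: no expansion

For example, a 30×24 box gives k = 24/30 = 0.8 ≥ 2 − m = 0.67, so both sides are scaled by 1.33 and the expanded area is 39.9 × 31.92.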
Step four: during computer image detection, if the confidence of a detection result exceeds 99%, the model's detection result is considered correct and the coordinates of the detection box are converted directly into annotation coordinates of a labeled image;
this conversion specifically comprises:
the computer expands the detected lesion box into a doctor-style annotation box, so that the doctor can see the tissue surrounding the lesion while reading the images, which greatly improves diagnostic accuracy.
Step five: during intelligent detection with the detection system, lesions detected by the computer are incorporated directly into the training image gallery to take part in upgrade training of the model; the expanded annotation is determined by the manual labeling standard.
The detection system specifically comprises:
fine-tuning training, in a semi-supervised manner, based on a pre-trained model, applied to the labeling method to improve both the labeling effect and the detection effect.
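As an illustration of this semi-supervised setup, one could start from a detector pre-trained on a large generic dataset and replace its head for the lesion class; the patent names no specific architecture, so the Faster R-CNN choice below is purely an assumption:

    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    def build_finetune_model(num_classes: int = 2):
        """Assumed setup: COCO-pretrained Faster R-CNN, re-headed for two
        classes (background + lesion), then fine-tuned on the growing
        gallery of labeled capsule-endoscope images."""
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model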
The upgrade training of the model specifically comprises:
as the labeled lesion region is enlarged, detection accuracy first improves; beyond a certain degree of enlargement it begins to fall again. Concretely, expanding the length and width of the original lesion box in equal proportion by a factor of about 1.33 works best; if the lesion is elongated and irregular, the expansion should be reduced appropriately or omitted; and the annotation must not exceed the effective area of the image.
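The last constraint can be enforced with a simple clip, sketched here with hypothetical names (img_w and img_h stand for the effective image dimensions):

    def clamp_box(x0: float, y0: float, x1: float, y1: float,
                  img_w: float, img_h: float) -> tuple:
        """Clip an expanded labeling box to the effective image area."""
        return (max(0.0, x0), max(0.0, y0), min(img_w, x1), min(img_h, y1))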
The above description covers only preferred embodiments of the present invention; the protection scope is not limited to these embodiments, and all technical solutions within the idea of the invention fall within its protection scope. It should be noted that modifications and refinements that do not depart from the principle of the invention may occur to those skilled in the art and are likewise considered within its scope.

Claims (8)

1. A labeling method of capsule-endoscope lesion images for deep learning, characterized by comprising the following steps:
(1) acquiring capsule-endoscope images for manual labeling, wherein the clinician, while reading the images, marks a lesion directly upon finding it;
(2) the manual labeling standard ensures, at the data level, that the deep-learning network obtains high-quality annotation data;
(3) expanding the labeled lesion region in the computer background to obtain the corresponding detection result;
(4) during computer image detection, if the confidence of a detection result exceeds 99%, the model's detection result is considered correct and the coordinates of the detection box are converted directly into annotation coordinates of a labeled image;
(5) during intelligent detection with the detection system, lesions detected by the computer are incorporated directly into the training image gallery to take part in upgrade training of the model; the expanded annotation is determined by the manual labeling standard.
2. The method according to claim 1, wherein the manual labeling specifically comprises:
the doctor reviews the images captured by the capsule lens and marks each lesion on the image with a small box of length a and width b.
3. The method according to claim 1, wherein the doctor's direct labeling of the lesion specifically comprises:
after finding a lesion in an image captured by the capsule lens, the doctor marks it directly with the small labeling box.
4. The method according to claim 2, wherein the deep-learning network is a convolutional neural network;
the convolutional neural network learns better visual representations by extracting multi-instance fused high-level visual features.
5. The method according to claim 3, wherein the expansion of the labeled lesion region in the computer background specifically comprises:
from the length a and width b of the small lesion box marked by the doctor, defining an adjustment parameter k and a constant m, where
k = a/b when a/b ≤ 1
k = b/a when a/b > 1
letting x and y be the length and width of the expanded labeling area:
when k ≥ (2 − m): x = a·m, y = b·m
when (m − 1) ≤ k < (2 − m): mapping k onto the interval [0, m − 1) via K = k − (m − 1), and then x = a·m·[(2 − m) + K], y = b·m·[(2 − m) + K]
when k < (m − 1) (i.e. k < 0.33 for m = 1.33): x = a, y = b
wherein analysis of the results of repeated trials gives m = 1.33.
6. The method according to claim 4, wherein directly converting the coordinates of the detection box in the detection result into annotation coordinates of a labeled image specifically comprises:
the computer expands the detected lesion box into a doctor-style annotation box, so that the doctor can see the tissue surrounding the lesion while reading the images, which greatly improves diagnostic accuracy.
7. The method according to claim 5, wherein the detection system specifically comprises:
fine-tuning training, in a semi-supervised manner, based on a pre-trained model, applied to the labeling method to improve both the labeling effect and the detection effect.
8. The method according to claim 5, wherein the upgrade training of the model specifically comprises:
as the labeled lesion region is enlarged, detection accuracy first improves; beyond a certain degree of enlargement it begins to fall again;
concretely, expanding the length and width of the original lesion box in equal proportion by a factor of about 1.33 works best; if the lesion is elongated and irregular, the expansion should be reduced appropriately or omitted; and the annotation must not exceed the effective area of the image.
CN202110580649.5A (priority date 2021-05-26, filing date 2021-05-26) — Image labeling method based on small intestine focus characteristics — Pending — CN113222051A (en)

Priority Applications (1)

Application Number: CN202110580649.5A
Priority Date / Filing Date: 2021-05-26
Title: Image labeling method based on small intestine focus characteristics

Publications (1)

Publication Number: CN113222051A
Publication Date: 2021-08-06

Family ID: 77098932

Family Applications (1)

Application Number: CN202110580649.5A — Image labeling method based on small intestine focus characteristics (pending)

Country Status (1)

CN: CN113222051A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187512A (en) * 2022-06-10 2022-10-14 珠海市人民医院 Hepatocellular carcinoma great vessel invasion risk prediction method, system, device and medium
CN115187512B (en) * 2022-06-10 2024-01-30 珠海市人民医院 Method, system, device and medium for predicting invasion risk of large blood vessel of hepatocellular carcinoma
CN115578437A (en) * 2022-12-01 2023-01-06 武汉楚精灵医疗科技有限公司 Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium
CN115578437B (en) * 2022-12-01 2023-03-14 武汉楚精灵医疗科技有限公司 Intestinal body focus depth data acquisition method and device, electronic equipment and storage medium


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination