CN114037686B - Automatic detection system for pediatric intussusception based on deep learning - Google Patents

Automatic detection system for pediatric intussusception based on deep learning

Info

Publication number
CN114037686B
CN114037686B (application CN202111323780.XA)
Authority
CN
China
Prior art keywords
network
concentric circle
detection model
intussusception
children
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111323780.XA
Other languages
Chinese (zh)
Other versions
CN114037686A (en)
Inventor
李哲明
黄坚
沈忱
俞刚
李竞
黄寿奖
宋春泽
柴象飞
郭娜
左盼莉
钱宝鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202111323780.XA priority Critical patent/CN114037686B/en
Publication of CN114037686A publication Critical patent/CN114037686A/en
Application granted granted Critical
Publication of CN114037686B publication Critical patent/CN114037686B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0012 Biomedical image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06T 7/11 Region-based segmentation (segmentation; edge detection)
    • G06N 3/045 Combinations of networks (neural network architectures)
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/08 Learning methods (neural networks)
    • G06T 2207/10132 Ultrasound image (image acquisition modality)
    • G06T 2207/20081 Training; Learning (special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20132 Image cropping (image segmentation details)
    • G06T 2207/30028 Colon; Small intestine (biomedical image processing)
    • G06T 2207/30204 Marker (subject of image)

Abstract

The invention discloses an automatic detection system for pediatric intussusception based on deep learning. The system comprises a trained concentric circle detection model that detects the "concentric circle" sign in ultrasound images of a child's abdomen. The concentric circle detection model comprises a feature extraction network, a region proposal network (RPN), and an ROI Pooling layer. The feature extraction network adopts a VGG16 convolutional neural network with an added skip connection layer that combines the network's shallow and deep features. The model realizes automatic detection of the "concentric circle" sign of abdominal intussusception in pediatric ultrasound images, assists physicians in identification, reduces the reading time of manual evaluation, and speeds up diagnosis for pediatric intussusception patients.

Description

Automatic detection system for pediatric intussusception based on deep learning
Technical Field
The invention belongs to the field of medical artificial intelligence, and particularly relates to an automatic detection system for pediatric intussusception based on deep learning.
Background
Intussusception is a common pediatric surgical acute abdomen, occurring mainly in children under 2 years of age. It is the telescoping of one segment of intestine into an adjacent segment; early, timely diagnosis and active, correct treatment can prevent intestinal necrosis and relieve the child's suffering.
Ultrasound diagnosis is a non-invasive, painless examination that is readily accepted by children and their families. Typical ultrasound sonograms of pediatric intussusception show two signs: the cross section presents a "concentric circle" sign, and the longitudinal section presents a "sleeve" sign. Physicians therefore mostly judge whether a patient has intussusception by recognizing the "concentric circle" sign, but the growing volume of image data burdens their diagnosis and treatment work.
Computer vision technology is widely used for fast, intelligent image processing tasks such as image classification, object detection, and image retrieval; it simulates the human visual mechanism and offers high detection speed at low cost. In recent years, the application of deep learning in computer vision, and in particular its breakthrough progress in medical imaging, has changed the traditional mode of reading images that relies on physicians' manual interpretation: data-driven deep learning, combining imaging technology and medical image processing with computer analysis and calculation, enables a computer to assist in finding lesions and improving diagnostic accuracy. Deploying such a system in the cloud can further expand the coverage of high-level medical resources and raise the overall level of medical service.
For example, Chinese patent publication No. CN110634125A discloses a deep-learning-based method and system for identifying fetal ultrasound images. In the method, an ultrasound device performs detection and, according to a print-operation control instruction, sends the fetal ultrasound parameter information to a data terminal; the data terminal forwards it to a cloud server; the cloud server segments the ultrasound still image with a predetermined image segmentation model to obtain sub-images, feeds them into a predetermined image classification model to obtain a classification result, and sends the result to the master control device, which receives and outputs it.
Chinese patent publication No. CN110895968A discloses an automatic diagnosis system for artificial-intelligence medical images. The system acquires medical microscope images and the corresponding diagnostic data, labels the images to obtain labeled data, builds a training set and a test set from the diagnostic and labeled data, and trains deep learning models to obtain an optimal AI classification model and an optimal AI semantic segmentation model, realizing automatic diagnosis of medical microscope images of test samples.
However, the prior art contains no description of detecting the "concentric circle" sign in abdominal ultrasound images, and existing detection models struggle to achieve a good detection effect on it.
Disclosure of Invention
The invention provides an automatic detection system for pediatric intussusception based on deep learning. It automatically detects the "concentric circle" sign of abdominal intussusception in ultrasound images, assists physicians in identification, reduces the reading time of manual evaluation, and speeds up diagnosis for pediatric intussusception patients.
An automatic deep-learning-based detection system for intussusception in children comprises a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor, wherein a trained concentric circle detection model is stored in the computer memory; the concentric circle detection model is used for detecting concentric circles in pediatric abdominal ultrasound images.
The concentric circle detection model comprises a feature extraction network, a region proposal network (RPN), and an ROI Pooling layer. When executing the computer program, the computer processor performs the following steps:
the pediatric abdominal ultrasound image to be examined is scaled to a fixed size and input into the feature extraction network of the concentric circle detection model; the feature extraction network extracts the image's feature map, which is shared by the subsequent region proposal network (RPN) and the fully connected layers;
the RPN generates bounding box offsets, applies a first bounding box correction, and then computes all candidate boxes;
the ROI Pooling layer integrates the feature map from the feature extraction network with the ROI information from the RPN to obtain each candidate box's feature map, which it feeds into the subsequent fully connected layers and a Softmax network to judge the target category; meanwhile, a regression operation applies a second bounding box correction to obtain the final, accurate position of the detection box.
Furthermore, the feature extraction network adopts a VGG16 convolutional neural network. VGG16 consists of 5 groups of convolutions, each followed by a pooling layer, comprising 13 convolution layers, 13 activation layers, and 5 pooling layers in total. A skip connection layer is added between the third and fifth convolution groups of the network, combining its shallow and deep features.
The skip connection layer connects the features aggregated by the third and fifth convolution groups, so that shallow and deep features are combined; a minimal backbone sketch is given below.
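The following is a minimal PyTorch sketch of the skip-connected VGG16 backbone described above, not the patented implementation: the fusion details (a 1 × 1 projection, downsampling, and element-wise addition) are assumptions for illustration, since the text only states that the outputs of the third and fifth convolution groups are combined.

import torch
import torch.nn as nn
import torchvision

class VGG16WithSkip(nn.Module):
    def __init__(self):
        super().__init__()
        features = torchvision.models.vgg16(weights=None).features
        # VGG16 convolution groups; each group ends with a MaxPool2d layer.
        self.groups1_3 = features[:17]   # conv1-conv3 + pool3 -> 256 channels
        self.groups4_5 = features[17:30] # conv4-conv5 (pool5 omitted) -> 512 ch
        # Project shallow features to 512 channels and match the deep stride.
        self.proj = nn.Conv2d(256, 512, kernel_size=1)
        self.down = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        shallow = self.groups1_3(x)           # stride 8 w.r.t. the input
        deep = self.groups4_5(shallow)        # stride 16
        skip = self.down(self.proj(shallow))  # bring shallow to stride 16
        return deep + skip                    # fused feature map for the RPN

The fused map has the same shape as the plain conv5 output, so the RPN and ROI Pooling stages can consume it unchanged.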
Further, the RPN performs the first bounding box correction as follows:
the RPN slides a 3 × 3 convolution window over the feature map and, for each position, generates 9 anchors with different preset aspect ratios and sizes;
the initial anchors cover three areas, 128 × 128, 256 × 256, and 512 × 512, each with three aspect ratios, 1:1, 1:2, and 2:1; the RPN first judges whether an anchor covers the target and then applies the first coordinate correction to the anchors that do. An illustrative anchor-generation sketch is given below.
The RPN computes all candidate boxes as follows:
after the first bounding box correction, the RPN judges whether each anchor is foreground or background using the intersection-over-union (IoU); because multiple anchors may overlap the same target, non-maximum suppression is applied to keep the higher-scoring candidate boxes and discard the boxes that overlap them heavily (see the sketch below).
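The following is a standard non-maximum suppression routine in NumPy, shown as a generic illustration of the pruning step above rather than the patented implementation; the 0.7 overlap threshold is an assumed default.

import numpy as np

def nms(boxes, scores, iou_thresh=0.7):
    """boxes: (N, 4) as (x1, y1, x2, y2); returns indices of the kept boxes."""
    order = scores.argsort()[::-1]   # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the current best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep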
The concentric circle detection model is trained as follows:
(1) acquire abdominal ultrasound image data of pediatric intussusception patients from a hospital database and preprocess the image data;
(2) divide the preprocessed data into a training set, a validation set, and a test set; feed the training set into the concentric circle detection model and train it to detect regions containing the "concentric circle" sign; use the validation set to tune the model's hyper-parameters, with an optimizer updating the parameters, optimizing the network, and automatically adjusting the learning rate, to obtain the trained concentric circle detection model; use the test set to estimate the model's generalization ability after the learning process is completed;
(3) iteratively train the concentric circle detection model with a supervised training method until the model converges or a preset number of iterations is reached.
In step (1), the preprocessing is as follows:
crop out all examinee identification information and the peripheral area of the ultrasound image, so that the cropped image contains only the fan-shaped ultrasound region; meanwhile, because the data are retrospective, the ultrasound images contain markers annotated during the physician's examination, so a generative adversarial network is used to identify and remove the regions containing markers, preventing the network from learning to recognize the markers instead of the features of the "concentric circle" sign itself.
The process of identifying and removing marker-containing regions with the generative adversarial network is as follows:
design a generative adversarial network comprising a generator network G and a discriminator network D; the generator G receives random noise z and generates a picture from it, denoted G(z); during training, small regions without markers are randomly removed from each picture, and the adversarial network is trained to restore the removed regions using the surrounding context as clues; the discriminator D judges whether a picture contains a marker; after training is completed, the regions containing markers are identified and removed, and the trained network restores them, so that the restored regions no longer contain markers.
In step (3), the iterative training uses a preset total of 100,000 iterations, divided into 25 epochs of 4,000 steps each, with an initial learning rate of 0.0001.
Compared with the prior art, the invention has the following beneficial effects:
1. During model training, the image preprocessing stage uses a generative adversarial network to remove the physician's markers from the images, preventing the model from learning to recognize the markers rather than the features of the "concentric circle" sign itself.
2. The concentric circle detection model comprises a feature extraction network, a region proposal network (RPN), and an ROI Pooling layer, with a skip connection added in the feature extraction network. Its purpose is to combine lower- and higher-level semantic information and improve the network's semantic expressiveness, so that the model can better distinguish standard from non-standard "concentric circle" signs.
Drawings
FIG. 1 is a network architecture diagram of a concentric circle detection model according to the present invention;
FIG. 2 is the receiver operating characteristic (ROC) curve of the concentric circle detection model of the present invention on the test set.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples, which are intended to facilitate understanding of the invention without limiting it in any way.
An automatic detection system for pediatric intussusception based on deep learning comprises a computer memory, a computer processor, and a computer program stored in the computer memory and executable on the computer processor; a trained concentric circle detection model is stored in the computer memory and is used to detect concentric circles in ultrasound images of a child's abdomen.
The invention designs a model better suited to detecting the "concentric circle" sign in ultrasound images. To better distinguish standard from non-standard "concentric circle" signs, the method adds a skip connection in the convolutional neural network. The skip connection layer combines the network's shallow and deep features, enabling the detection model to detect regions where the background contrast is weak and the "concentric circle" sign is not obvious.
As shown in FIG. 1, the concentric circle detection model of the invention adds a skip connection to the Faster R-CNN network model, fusing shallow information with deep features to better mine the semantic features of an image and accomplish the target detection task for the "concentric circle" sign.
Specifically, the network structure of the concentric circle detection model consists of 3 stages: a backbone feature extraction network, a Region Proposal Network (RPN), and an ROI Pooling layer. The feature extraction network adopts a VGG16 convolutional neural network to extract the main features; the RPN generates candidate target boxes (anchors) on the original image, classifies them into foreground and background, and performs position regression only on the anchors that cover the target region.
The RPN proceeds as follows: it slides a window (a 3 × 3 convolution) over the feature map and generates 9 anchors with different preset aspect ratios for each position. The initial anchors cover three areas (128 × 128, 256 × 256, 512 × 512), each with three aspect ratios (1:1, 1:2, 2:1). The RPN first judges whether an anchor covers the target and then applies the first coordinate correction to the anchors that do.
For each point of the feature map inside the RPN, one 1 × 1 convolution layer outputs 9 × 2 = 18 values, because each point corresponds to 9 anchors and each anchor has a foreground score and a background score. Another 1 × 1 convolution layer outputs 9 × 4 = 36 values, because each of the 9 anchors at a point has 4 corrected coordinate values. A sketch of this head is given below.
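The following PyTorch sketch sizes the RPN head exactly as the text describes, with a 3 × 3 sliding convolution followed by two 1 × 1 convolutions emitting 18 objectness values and 36 box deltas per location; the 512-channel width matches VGG16's conv5 output, and the remaining details are assumptions.

import torch.nn as nn

class RPNHead(nn.Module):
    def __init__(self, in_channels=512, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.cls = nn.Conv2d(512, num_anchors * 2, kernel_size=1)  # 9*2 = 18
        self.reg = nn.Conv2d(512, num_anchors * 4, kernel_size=1)  # 9*4 = 36

    def forward(self, feature_map):
        h = self.relu(self.conv(feature_map))
        return self.cls(h), self.reg(h)  # objectness scores, box deltas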
The RPN decides whether each anchor is foreground or background using the Intersection-over-Union (IoU): if an anchor's IoU with a ground truth (GT) box is above 0.7, the anchor is treated as foreground (a positive sample); likewise, if its IoU with the ground truth is below 0.3, it is treated as background (a negative sample). When training the foreground/background classification of anchors, the balance of positive and negative samples must be ensured: when they are unbalanced, data augmentation is used to expand the under-represented class, and the classification is then trained with a cross-entropy loss function. This assignment rule is sketched below.
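The following sketch applies the stated rule (IoU > 0.7 with a ground truth box marks an anchor as foreground, IoU < 0.3 as background, anything in between is ignored); the extra line guaranteeing every GT box at least one positive anchor is a common convention assumed here, not taken from the patent.

import numpy as np

def label_anchors(iou_matrix, fg_thresh=0.7, bg_thresh=0.3):
    """iou_matrix: (num_anchors, num_gt) pairwise IoU values."""
    labels = np.full(iou_matrix.shape[0], -1)   # -1 = ignored during training
    best_iou = iou_matrix.max(axis=1)           # best overlap per anchor
    labels[best_iou < bg_thresh] = 0            # background (negative sample)
    labels[best_iou > fg_thresh] = 1            # foreground (positive sample)
    labels[iou_matrix.argmax(axis=0)] = 1       # assumed: one positive per GT
    return labels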
The coordinate correction of the anchor box is learned through 4 values (tx, ty, th, tw): the corrected box translates the anchor in the x and y directions (determined by tx and ty) and scales its length and width by certain factors (determined by th and tw). Smooth L1 loss is used here to train the network parameters that predict these four values. And because multiple anchors may cover and overlap the same target, non-maximum suppression keeps the higher-scoring candidate boxes and discards those that overlap them heavily, reducing the amount of computation. The parameterization is sketched below.
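The following sketch encodes the (tx, ty, tw, th) targets in the standard Faster R-CNN convention that the description matches and pairs them with a Smooth L1 loss; the exact normalization used in the patented system is not disclosed, so this is an illustrative assumption.

import torch
import torch.nn.functional as F

def encode_deltas(anchors, gt_boxes):
    """anchors, gt_boxes: (N, 4) tensors as (x1, y1, x2, y2)."""
    aw, ah = anchors[:, 2] - anchors[:, 0], anchors[:, 3] - anchors[:, 1]
    ax, ay = anchors[:, 0] + aw / 2, anchors[:, 1] + ah / 2
    gw, gh = gt_boxes[:, 2] - gt_boxes[:, 0], gt_boxes[:, 3] - gt_boxes[:, 1]
    gx, gy = gt_boxes[:, 0] + gw / 2, gt_boxes[:, 1] + gh / 2
    tx, ty = (gx - ax) / aw, (gy - ay) / ah          # center translation
    tw, th = torch.log(gw / aw), torch.log(gh / ah)  # log-scale size change
    return torch.stack([tx, ty, tw, th], dim=1)

def box_regression_loss(pred_deltas, target_deltas):
    # Smooth L1: quadratic near zero, linear for large residuals.
    return F.smooth_l1_loss(pred_deltas, target_deltas)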
Finally, the ROI Pooling stage applies the second bounding box correction to pinpoint the region where the target lies.
In the invention, the concentric circle detection model is trained and tested as follows:
1. Image preprocessing
Patient data were collected; all data are still images captured during the subjects' ultrasound examinations and stored in DICOM format. During preprocessing, all examinee identification information and the peripheral area of each ultrasound image are cropped out, so that the cropped image contains only the fan-shaped ultrasound region. Because the data are retrospective, the ultrasound images contain markers annotated during the physician's examination; to prevent the network from learning to recognize the markers rather than the features of the "concentric circle" sign itself, a generative adversarial network is used to identify and remove the marker-containing regions.
The generative adversarial network comprises a generator network G and a discriminator network D. The generator G receives random noise z and generates pictures from it, denoted G(z). During training, small regions without markers are randomly removed from each picture, and the network is trained to restore the removed regions using the surrounding context as clues. The discriminator D judges whether a picture contains a marker: its input is a picture x, and its output D(x) is the probability that x contains a marker, where an output of 1 means the picture certainly contains a marker and an output of 0 means it contains none. After training is completed, the regions containing markers are identified and removed, and the trained network restores them, so that the restored regions no longer contain markers.
The generator's goal is to restore the removed region and produce a marker-free picture convincing enough to fool the discriminator; the discriminator's goal is to tell the pictures produced by the generator apart from pictures containing markers. The generator and discriminator thus form a dynamic game. In the ideal case, the generator restores a region G(z) that no longer carries a marker, and since the discriminator D cannot determine whether the restored region contains a marker, D(G(z)) = 0.5. This yields a generative model G that restores removed regions without markers, achieving the goal of marker removal. A condensed sketch of one training step is given below.
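The following PyTorch sketch condenses the adversarial game above into one training step; the generator and discriminator bodies, the mask generation, and the reconstruction weighting are all assumptions for illustration (D is assumed to end in a sigmoid, and the labels follow the convention above that D outputs 1 for marker or generated content and 0 for clean content).

import torch
import torch.nn.functional as F

def gan_inpainting_step(G, D, opt_g, opt_d, images, masks):
    """images: clean, marker-free patches; masks: 1 inside removed regions."""
    holes = images * (1 - masks)              # erase a random unmarked region
    fakes = G(torch.cat([holes, masks], 1))   # G restores it from context
    # Discriminator step: clean originals -> 0, restored pictures -> 1.
    d_real = D(images)
    d_fake = D(fakes.detach())
    d_loss = F.binary_cross_entropy(d_real, torch.zeros_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool D and reconstruct the erased pixels.
    d_gen = D(fakes)
    g_loss = F.binary_cross_entropy(d_gen, torch.zeros_like(d_gen)) + \
             F.l1_loss(fakes * masks, images * masks)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()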
2. Data partitioning
70% of the dataset is used as the training set and fed into the concentric circle detection model, which is trained to detect regions containing the "concentric circle" sign; 20% of the dataset is used as the validation set to tune the model's hyper-parameters, with an optimizer updating the parameters, optimizing the network, and automatically adjusting the learning rate, yielding the trained detection network; the remaining 10% is used as the test set to estimate the model's generalization ability after the learning process is completed. A simple split sketch is given below.
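The following sketch reproduces the 70/20/10 proportions with scikit-learn's train_test_split applied twice; the helper name and the fixed seed are illustrative assumptions.

from sklearn.model_selection import train_test_split

def split_dataset(samples, seed=42):
    train, rest = train_test_split(samples, train_size=0.7, random_state=seed)
    # 2/3 of the remaining 30% gives 20% validation, leaving 10% for testing.
    val, test = train_test_split(rest, train_size=2 / 3, random_state=seed)
    return train, val, test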
3. Training phase
The model is trained with a supervised training method. To obtain the image labels, an experienced sonographer delineates the location of the "concentric circle" sign in each ultrasound image, and the labels are then verified by another expert to ensure their accuracy.
First, the ultrasound image is scaled to a fixed size and fed into the feature extraction network, a VGG16 convolutional neural network comprising 13 convolution layers, 13 activation layers, and 5 pooling layers, which extracts the image's feature map. The feature map is shared by the subsequent RPN and the fully connected layers. The RPN generates bounding box offsets, applies the first bounding box correction, and then computes all candidate boxes. The ROI Pooling layer integrates the feature map with the ROI information to obtain each candidate box's feature map, feeds it into the subsequent fully connected and Softmax networks to judge the target category, and applies a second bounding box correction by regression to obtain the final, accurate position of the detection box. The method trains on all ultrasound images for 100,000 iterations (25 epochs of 4,000 steps each) with an initial learning rate of 0.0001; the schedule is sketched below.
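The following sketch arranges the stated schedule (25 epochs × 4,000 steps = 100,000 iterations, initial learning rate 1e-4); the Adam optimizer and the ReduceLROnPlateau scheduler are assumptions standing in for the unspecified optimizer and automatic learning-rate adjustment, and the model is assumed to return its combined loss for a batch.

import torch

def train(model, train_loader, validate, epochs=25, steps_per_epoch=4000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1)
    for epoch in range(epochs):
        for _, batch in zip(range(steps_per_epoch), train_loader):
            loss = model(batch)        # combined RPN + detection-head loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step(validate(model))    # learning rate adjusts automatically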
4. Evaluation phase
The detection rate of the "concentric circle" sign is evaluated per image: if the concentric circle detection model draws a bounding box on the image and the box overlaps the true position of the "concentric circle" sign, the sign is judged to be correctly detected.
In the embodiment of the invention, detection performance is evaluated at different confidence levels, and 0.3 is finally chosen as the confidence threshold cut-off. The detection accuracy (ACC) is calculated by dividing the number of images whose "concentric circle" sign is correctly detected (TP) by the total number of images containing the sign. A false positive (FP) is counted when the model outputs a bounding box in a region that does not contain the "concentric circle" sign. The receiver operating characteristic (ROC) curve of the final intelligent "concentric circle" sign detection model on the test set is shown in FIG. 2. The counting rule is sketched below.
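The following sketch counts true and false positives under the per-image rule above, with the 0.3 confidence cut-off; the prediction and annotation data layout is an assumption for illustration.

def overlaps(a, b):
    """True if boxes a and b, each (x1, y1, x2, y2), intersect at all."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def evaluate(predictions, annotations, conf_thresh=0.3):
    """predictions: per-image lists of {"box", "score"} dicts; annotations:
    the annotated "concentric circle" box per image, or None if absent."""
    tp = fp = 0
    for boxes, gt in zip(predictions, annotations):
        kept = [b for b in boxes if b["score"] >= conf_thresh]
        if gt is not None:
            if any(overlaps(b["box"], gt) for b in kept):
                tp += 1   # sign correctly detected on this image
        elif kept:
            fp += 1       # a box drawn where no sign exists
    return tp, fp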
The embodiments described above illustrate the technical solutions and advantages of the invention. It should be understood that they are only specific embodiments and are not intended to limit the invention; any modifications, additions, and equivalent substitutions made within the scope of the principles of the invention fall within its protection scope.

Claims (8)

1. An automatic deep-learning-based detection system for intussusception in children, comprising a computer memory, a computer processor, and a computer program stored in said computer memory and executable on said computer processor, wherein: a trained concentric circle detection model is stored in the computer memory; the concentric circle detection model is used for detecting concentric circles in pediatric abdominal ultrasound images;
the concentric circle detection model comprises a feature extraction network, a region proposal network (RPN), and an ROI Pooling layer; the computer processor, when executing the computer program, performs the following steps:
the pediatric abdominal ultrasound image to be examined is scaled to a fixed size and input into the feature extraction network of the concentric circle detection model; the feature extraction network extracts the image's feature map, which is shared by the subsequent region proposal network (RPN) and the fully connected layers;
the RPN generates bounding box offsets, applies a first bounding box correction, and then computes all candidate boxes;
the ROI Pooling layer integrates the feature map from the feature extraction network with the ROI information from the RPN to obtain each candidate box's feature map, which it feeds into the subsequent fully connected layers and a Softmax network to judge the target category; meanwhile, a regression operation applies a second bounding box correction to obtain the final, accurate position of the detection box.
2. The system of claim 1, wherein the feature extraction network adopts a VGG16 convolutional neural network; VGG16 consists of 5 groups of convolutions, each followed by a pooling layer, comprising 13 convolution layers, 13 activation layers, and 5 pooling layers in total; a skip connection layer is added between the third and fifth convolution groups of the network, combining its shallow and deep features.
3. The system according to claim 1, wherein the region proposal network (RPN) performs the first bounding box correction as follows:
the RPN slides a 3 × 3 convolution window over the feature map and, for each position, generates 9 anchors with different preset aspect ratios and sizes;
the initial anchors cover three areas, 128 × 128, 256 × 256, and 512 × 512, each with three aspect ratios, 1:1, 1:2, and 2:1; the RPN first judges whether an anchor covers the target and then applies the first coordinate correction to the anchors that do.
4. The automatic deep-learning-based detection system for pediatric intussusception of claim 3, wherein the RPN computes all candidate boxes as follows:
after the first bounding box correction, the RPN judges whether each anchor is foreground or background using the intersection-over-union (IoU); because multiple anchors may overlap the same target, non-maximum suppression is applied to keep the higher-scoring candidate boxes and discard the boxes that overlap them heavily.
5. The automatic deep-learning-based detection system for pediatric intussusception of claim 1, wherein the concentric circle detection model is trained as follows:
(1) acquire abdominal ultrasound image data of pediatric intussusception patients from a hospital database and preprocess the image data;
(2) divide the preprocessed data into a training set, a validation set, and a test set; feed the training set into the concentric circle detection model and train it to detect regions containing the "concentric circle" sign; use the validation set to tune the model's hyper-parameters, with an optimizer updating the parameters, optimizing the network, and automatically adjusting the learning rate, to obtain the trained concentric circle detection model; use the test set to estimate the model's generalization ability after the learning process is completed;
(3) iteratively train the concentric circle detection model with a supervised training method until the model converges or a preset number of iterations is reached.
6. The automatic deep-learning-based detection system for pediatric intussusception of claim 5, wherein in step (1) the preprocessing is as follows:
crop out all examinee identification information and the peripheral area of the ultrasound image, so that the cropped image contains only the fan-shaped ultrasound region; meanwhile, because the data are retrospective, the ultrasound images contain markers annotated during the physician's examination, so a generative adversarial network is used to identify and remove the regions containing markers, preventing the network from learning to recognize the markers instead of the features of the "concentric circle" sign itself.
7. The system of claim 6, wherein the identification and removal of marker-containing regions with the generative adversarial network is performed as follows:
design a generative adversarial network comprising a generator network G and a discriminator network D; the generator G receives random noise z and generates a picture from it, denoted G(z); during training, small regions without markers are randomly removed from each picture, and the adversarial network is trained to restore the removed regions using the surrounding context as clues; the discriminator D judges whether a picture contains a marker; after training is completed, the regions containing markers are identified and removed, and the trained network restores them, so that the restored regions no longer contain markers.
8. The automatic deep-learning-based detection system for pediatric intussusception of claim 5, wherein in step (3) the iterative training uses a preset total of 100,000 iterations, divided into 25 epochs of 4,000 steps each, with an initial learning rate of 0.0001.
CN202111323780.XA 2021-11-09 2021-11-09 Automatic detection system for pediatric intussusception based on deep learning Active CN114037686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111323780.XA CN114037686B (en) 2021-11-09 2021-11-09 Automatic detection system for pediatric intussusception based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111323780.XA CN114037686B (en) 2021-11-09 2021-11-09 Automatic detection system for pediatric intussusception based on deep learning

Publications (2)

Publication Number Publication Date
CN114037686A CN114037686A (en) 2022-02-11
CN114037686B (en) 2022-05-17

Family

ID=80143703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111323780.XA Active CN114037686B (en) 2021-11-09 2021-11-09 Automatic detection system for pediatric intussusception based on deep learning

Country Status (1)

Country Link
CN (1) CN114037686B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913169B (en) * 2022-06-10 2023-03-24 Zhejiang University Neonatal necrotizing enterocolitis screening system


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003235983A8 (en) * 2002-05-13 2003-11-11 Magnolia Medical Technologies System and method for analysis of medical image data
CN110175993A (en) * 2019-05-27 2019-08-27 西安交通大学医学院第一附属医院 A kind of Faster R-CNN pulmonary tuberculosis sign detection system and method based on FPN
WO2021099278A1 (en) * 2019-11-21 2021-05-27 Koninklijke Philips N.V. Point-of-care ultrasound (pocus) scan assistance and associated devices, systems, and methods
WO2021175644A1 (en) * 2020-03-05 2021-09-10 Koninklijke Philips N.V. Multi-modal medical image registration and associated devices, systems, and methods
CN111985376A (en) * 2020-08-13 2020-11-24 湖北富瑞尔科技有限公司 Remote sensing image ship contour extraction method based on deep learning
CN113392775A (en) * 2021-06-17 2021-09-14 广西大学 Sugarcane seedling automatic identification and counting method based on deep neural network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Experience of Pentavalent Human-bovine Reassortant Rotavirus Vaccine Among Healthy Infants in Taiwan"; Chien-Chih Chang et al.; Journal of the Formosan Medical Association; 2009-05-15; full text *
"RSNA 2018: Pediatric Imaging"; Tian Zhiyao et al.; Radiologic Practice (《放射学实践》); 2019-04-20; full text *
"Construction and Application of a System Matching Pediatric Outpatient Vein Grading with Nurse Competency Levels"; Xu Jianying et al.; Journal of Nursing Science (《护理学杂志》); 2019-08-25; full text *
"Automatic Diagnosis of Diabetic Fundus Lesions Based on the R-FCN Algorithm"; Wang Jialiang et al.; Computer Engineering and Applications (《计算机工程与应用》); 2019-02-26 (No. 4); full text *
"Diagnostic Analysis of Abdominal Ultrasound Combined with Superficial Ultrasound for Pediatric Intussusception"; Wu Jing et al.; Journal of Wannan Medical College (《皖南医学院学报》); 2017-12-15 (No. 6); full text *
"Application of High-Frequency Ultrasound Probes in the Diagnosis of Pediatric Abdominal Diseases"; Liu Yan'an; Heilongjiang Medical Journal (《黑龙江医学》); 2017-05-15 (No. 5); full text *

Also Published As

Publication number Publication date
CN114037686A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
Wang et al. Prior-attention residual learning for more discriminative COVID-19 screening in CT images
Goel et al. Dilated CNN for abnormality detection in wireless capsule endoscopy images
Li et al. Attention-guided convolutional neural network for detecting pneumonia on chest x-rays
CN112365464B (en) GAN-based medical image lesion area weak supervision positioning method
CN111968091B (en) Method for detecting and classifying lesion areas in clinical image
CN110838114B (en) Pulmonary nodule detection method, device and computer storage medium
WO2021021329A1 (en) System and method for interpretation of multiple medical images using deep learning
KR102531400B1 (en) Artificial intelligence-based colonoscopy diagnosis supporting system and method
Farzaneh et al. Automated subdural hematoma segmentation for traumatic brain injured (TBI) patients
CN114037686B (en) Children intussusception automatic check out system based on degree of depth learning
US20230005140A1 (en) Automated detection of tumors based on image processing
CN111833321A (en) Window-adjusting optimization-enhanced intracranial hemorrhage detection model and construction method thereof
US20200175340A1 (en) Method and system for evaluating quality of medical image dataset for machine learning
CN111401102B (en) Deep learning model training method and device, electronic equipment and storage medium
CN111493805A (en) State detection device, method, system and readable storage medium
Habib Fusion of deep convolutional neural network with PCA and logistic regression for diagnosis of pediatric pneumonia on chest X-rays
CN112741651A (en) Method and system for processing ultrasonic image of endoscope
CN115170492A (en) Intelligent prediction and evaluation system for postoperative vision of cataract patient based on AI (artificial intelligence) technology
CN113222989A (en) Image grading method and device, storage medium and electronic equipment
CN113298773A (en) Heart view identification and left ventricle detection device and system based on deep learning
CN115908224A (en) Training method of target detection model, target detection method and training device
Asliyan et al. Automatic brain tumor segmentation with K-means, fuzzy C-means, self-organizing map and Otsu methods
Paul et al. Computer-Aided Diagnosis Using Hybrid Technique for Fastened and Accurate Analysis of Tuberculosis Detection with Adaboost and Learning Vector Quantization
Gallo et al. Boosted wireless capsule endoscopy frames classification
Yousuf et al. Analysis of tuberculosis detection techniques using chest x-rays: A review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant