CN117671284A - Intelligent extraction system for invasive placenta implantation image features AI - Google Patents


Info

Publication number: CN117671284A
Application number: CN202311667705.4A
Authority: CN (China)
Prior art keywords: placenta, image, features, images, feature
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN117671284B
Inventor: 王志坚 (Wang Zhijian)
Current Assignee and Original Assignee: Guangzhou Kesong Medical Intelligent Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Filing: application CN202311667705.4A filed by Guangzhou Kesong Medical Intelligent Technology Co., Ltd. (the priority date is an assumption and is not a legal conclusion)
Publications: CN117671284A (application publication); CN117671284B (grant publication)

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an intelligent extraction system for invasive placenta implantation image features AI, relating to the field of image processing. The system comprises an acquisition module, a historical case database, a feature extraction module and a prediction module. B-ultrasound and magnetic resonance imaging (MRI) data are respectively input into an image segmentation network based on the DoubleU-Net network; a training set and a test set based on the B-ultrasound and MRI images are respectively constructed; an image feature recognition model is generated based on the training set and the test set; placenta implantation image features are recognized through the image feature recognition model; and the prediction module, which is connected with the feature extraction module, generates a prediction result according to the image features.

Description

Intelligent extraction system for invasive placenta implantation image features AI
Technical Field
The invention relates to the field of image processing, in particular to an intelligent extraction system for invasive placenta implantation image features AI.
Background
Placenta implantation refers to placental villi penetrating into part of the myometrium of the uterine wall, arising in early pregnancy. It is one of the serious complications of obstetrics and can lead to massive bleeding, shock, uterine perforation, secondary infection and even death of the puerpera, so prenatal color-ultrasound screening for placenta implantation is necessary for puerperae with high-risk factors. Because the villi invade the uterine myometrium, the implanted part of the placenta cannot strip normally during the third stage of labor, which can cause massive bleeding, uterine penetration, maternal shock, further infection and even death. Placenta implantation is often accompanied by placenta previa, which can be classified into four types: low-lying placenta, marginal placenta previa, complete placenta previa and central placenta previa.
Placenta implantation, driven by high caesarean-section rates, presents an extremely severe challenge to obstetricians and a heavy burden to families and society. If implanted and penetrating placenta implantation cannot be identified in time, a primary hospital's failure to refer promptly, combined with blind surgery, may lead to maternal deaths that are difficult to avoid, while a general hospital's failure to prepare adequate blood supplies and to organize multi-disciplinary joint diagnosis may lead to fatal hemorrhage and hysterectomy. Therefore, how to accurately evaluate the implantation type and predict the placental and villous growth process before operation, so as to finally reduce the rate of fatal post-partum bleeding, is a practical problem urgently awaiting clinical solution.
Therefore, it is necessary to provide a new intelligent extraction system for invasive placenta implantation image features AI to solve the above technical problems.
Disclosure of Invention
In order to solve the technical problems of accurately evaluating the implantation type and predicting the implantation depth and range of the placenta before operation, and of making an operation plan in advance so as to finally reduce the rates of fatal post-partum bleeding and hysterectomy, the invention provides an intelligent extraction system for invasive placenta implantation image features AI.
The intelligent extraction system for invasive placenta implantation image features AI comprises: an acquisition module, which is used for acquiring B-ultrasound image data and MRI image data at the placenta of a patient;
a historical case database for storing historical samples of B-ultrasound image data and MRI image data of invasive placenta implants, wherein the historical samples include a plurality of invasive placenta implant B-ultrasound images and MRI images;
the feature extraction module, which is used for respectively inputting the historical samples and the B-ultrasound and MRI image data acquired by the acquisition module into an image segmentation network based on the DoubleU-Net network, constructing a training set and a test set, generating an image feature recognition model based on the training set and the test set, recognizing the placenta implantation image features through the image feature recognition model, comparing the acquired B-ultrasound and MRI image data of the patient's placenta with the historical samples, predicting the implantation range and depth of the placenta, and marking the predicted position of the placenta implantation, wherein the training set is based on the B-ultrasound and MRI image data of patients and the test set is based on the B-ultrasound and MRI image data of the patient's placenta acquired by the acquisition module;
the specific steps for predicting the placenta implantation according to the image characteristics comprise:
s1, building layers for placenta images in a test set, copying n layers to obtain n+1 placenta images with different layers, synchronously overturning the n+1 placenta images by 90 degrees in any combination of left and right, horizontal and vertical directions to obtain a 6 (n+1) Zhang Tezheng image, and merging the obtained feature images;
s2, overlapping the feature map obtained in the step S1 with placenta images in the training set and respectively in different map layers, hiding the feature map, only displaying the placenta images in the training set, and carrying out convolution operation on the placenta images in the training set to obtain image enclosing region features of invasive placenta implantation, wherein the image enclosing region features comprise feature coordinates, lengths and confidence degrees;
s3, displaying the feature map, performing convolution operation on the feature map after combining the map layers to obtain image enclosing region features of the feature map, comparing the feature map with the image enclosing region features of placenta images in a training set, and marking the overlapped part to obtain preliminary region prediction features;
s4, equidistantly expanding the surrounding area features of the feature map by at least 1 pixel distance, and marking the overlapped part to obtain the prediction features of the identified area;
and S5, confirming a prediction result of the placenta implantation range and depth according to the preliminary region prediction features and the identification region prediction features, and marking the placenta implantation prediction position.
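As an illustration of step S1, the following is a minimal sketch in Python/NumPy (an assumed implementation; the patent gives no code, and the exact six flip/rotation combinations, as well as reading "merging" as pixel-wise averaging of square images, are interpretations of the description above):

    import numpy as np

    def augment_layers(placenta_image: np.ndarray, n: int) -> list[np.ndarray]:
        # Step S1 sketch: copy the image onto n extra layers (n+1 in total),
        # then derive six synchronized flip/rotation variants per layer,
        # giving 6*(n+1) feature images.
        layers = [placenta_image.copy() for _ in range(n + 1)]
        feature_images = []
        for layer in layers:
            turned = np.rot90(layer)        # 90-degree turn
            feature_images.extend([
                layer,                      # original layer
                np.fliplr(layer),           # left-right flip
                np.flipud(layer),           # vertical flip
                turned,                     # turned copy
                np.fliplr(turned),          # turned + left-right flip
                np.flipud(turned),          # turned + vertical flip
            ])
        return feature_images               # len == 6 * (n + 1)

    def merge_feature_images(feature_images: list[np.ndarray]) -> np.ndarray:
        # "Merging the obtained feature images" is read here as averaging
        # same-sized (square) images; the patent does not state the operator.
        return np.mean(np.stack(feature_images), axis=0)

With n = 3, as recommended in Example 3 below, augment_layers yields the 24 feature images described there.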
Further, after the layers are merged, the edges that the feature images had before merging are redrawn by applying a deep learning algorithm again, so that the pre-merge images and the new merged image appear more real and natural in the joint transition area.
Further, the B-ultrasound image data and MRI image data mainly include the placenta location, placenta thickness, retro-placental hypoechoic band, vascular features of the placenta-uterus interface, placenta pits (lacunae) and placenta area-to-volume ratio.
Further, the DoubleU-Net network comprises an encoder and an adaptive pooling layer, wherein the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units, and the second feature extraction units at the head and tail ends are respectively connected with the first feature extraction unit and the adaptive pooling layer.
Further, the first feature extraction unit comprises two cascaded convolution blocks, and the convolution blocks comprise a convolution layer, a batch normalization layer and an activation function layer which are cascaded in sequence.
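For illustration, a convolution block and first feature extraction unit of the kind just described might be sketched in PyTorch as follows; the channel counts, kernel size and ReLU activation are assumptions, since the patent fixes none of them:

    import torch.nn as nn

    class ConvBlock(nn.Sequential):
        # Convolution layer, batch normalization layer and activation
        # function layer, cascaded in sequence as the text describes.
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

    class FirstFeatureExtractionUnit(nn.Sequential):
        # Two cascaded convolution blocks form the first unit.
        def __init__(self, in_ch: int = 1, out_ch: int = 64):
            super().__init__(ConvBlock(in_ch, out_ch), ConvBlock(out_ch, out_ch))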
Further, gray scale normalization of the image is required before confirming the image bounding region features, and iteration is performed starting from 0.
Further, for the image with normalized gray scale, histogram equalization processing is performed on each region in the image, gray scale gradients are obtained for a plurality of local regions of the image by an encoder, and these gradient values are combined into image feature values.
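A hedged sketch of that per-region pipeline follows; the 4x4 grid, the CDF-based equalization and the use of the mean gradient magnitude per region are all assumptions the text does not fix:

    import numpy as np

    def region_feature_values(image: np.ndarray, grid: int = 4) -> np.ndarray:
        # Split the gray-normalized image into local regions, histogram-
        # equalize each region, then combine the per-region mean gradient
        # magnitudes into one image feature vector.
        h, w = image.shape
        feats = []
        for i in range(grid):
            for j in range(grid):
                region = image[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
                # Histogram equalization via the cumulative distribution function.
                hist, _ = np.histogram(region, bins=256, range=(0, 255))
                cdf = hist.cumsum().astype(np.float64)
                cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
                equalized = cdf[region.astype(np.uint8)]
                # Gray-scale gradient of the equalized region.
                gy, gx = np.gradient(equalized)
                feats.append(np.hypot(gx, gy).mean())
        return np.asarray(feats)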
Compared with the related art, the intelligent extraction system for the invasive placenta implantation image feature AI has the following beneficial effects:
1. According to the invention, the B-ultrasound image data and the MRI image data are respectively input into an image segmentation network based on the DoubleU-Net network, a training set and a test set are constructed, an image feature recognition model is generated based on the training set and the test set, and the placenta implantation image features are recognized through the image feature recognition model. Subjective error factors of people, environment and instruments are thereby eliminated, and the method is realized by computer machine learning, saving time and labor.
2. According to the invention, layers are built for the placenta images in the test set and n copies are made to obtain n+1 placenta images on different layers; the n+1 placenta images are synchronously turned 90 degrees and flipped in arbitrary combinations of the left-right, horizontal and vertical directions to obtain 6(n+1) feature images, and the obtained feature images are then merged to obtain feature information of more dimensions. This further improves feature-point judgment and prediction, can improve the accuracy of placenta implantation prediction, and allows the growth of the placenta and villi to be predicted and their predicted positions marked.
3. The invention expands the enclosing-region features of the feature map equidistantly by at least 1 pixel and marks the overlapping part to obtain the identified-region prediction features, and then confirms the prediction result from the preliminary and identified-region prediction features. This avoids the reduction in diagnostic accuracy caused by deviations in the measured indices, reserves a safety margin, and improves the safety measures.
Drawings
FIG. 1 is a flow diagram of generating a prediction result from image features according to the invention;
fig. 2 is a system block diagram of the intelligent extraction system for invasive placenta implantation image features AI provided by the present invention;
fig. 3 is a block diagram of the DoubleU-Net network according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and embodiments.
Referring to fig. 1, fig. 2 and fig. 3 in combination: fig. 1 is a flow diagram of generating a prediction result from image features according to the present invention; fig. 2 is a system block diagram of the intelligent extraction system for invasive placenta implantation image features AI provided by the present invention; fig. 3 is a block diagram of the DoubleU-Net network according to the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, procedures, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, procedures, steps, operations, elements, components and/or groups thereof. The terms relating to prediction are defined to cover a wide variety of methods from the fields of statistics, machine learning, artificial intelligence and data mining that may be used to determine a predictive model; they include any method suitable for predicting an outcome, covering not only methods that perform complex analyses on multiple markers but also methods that directly compare the expression of a single marker or tag with control tissue or with a predetermined threshold. These are discussed further in the detailed description.
In a specific implementation process, as shown in figs. 1-3, the intelligent extraction system for invasive placenta implantation image features AI comprises: an acquisition module, which is used for acquiring ultrasound image data at the placenta of a patient;
a historical case database for storing a historical sample of B-ultrasound image data of an invasive placental implantation, wherein the historical sample comprises a plurality of invasive placental implantation growth process images;
the feature extraction module, which is used for respectively inputting the historical samples and the B-ultrasound and MRI image data acquired by the acquisition module into an image segmentation network based on the DoubleU-Net network, constructing a training set and a test set, generating an image feature recognition model based on the training set and the test set, recognizing the placenta implantation image features through the image feature recognition model, comparing the acquired B-ultrasound and MRI image data of the patient's placenta with the historical samples, predicting the growth of the placenta and the villi, and marking their predicted positions, wherein the training set is based on the B-ultrasound and MRI image data of patients and the test set is based on the B-ultrasound and MRI image data of the patient's placenta acquired by the acquisition module;
the specific steps of predicting the placenta implantation range and depth according to the image features comprise:
s1, building layers for placenta images in a test set, copying 1 placenta image to obtain 2 placenta images with different layers, synchronously turning over the 2 placenta images by 90 degrees in a turning manner of arbitrary combination in the left-right direction, the horizontal direction and the vertical direction to obtain 12 feature images, and merging the obtained feature images;
s2, overlapping the feature map obtained in the step S1 with placenta images in the training set and respectively in different map layers, hiding the feature map, only displaying the placenta images in the training set, and carrying out convolution operation on the placenta images in the training set to obtain image enclosing region features of invasive placenta implantation, wherein the image enclosing region features comprise feature coordinates, lengths and confidence degrees;
s3, displaying the feature map, performing convolution operation on the feature map after combining the map layers to obtain image enclosing region features of the feature map, comparing the feature map with the image enclosing region features of placenta images in a training set, and marking the overlapped part to obtain preliminary region prediction features;
s4, equidistantly expanding the surrounding area features of the feature map by at least 1 pixel distance, and marking the overlapped part to obtain the prediction features of the identified area;
and S5, confirming a prediction result of the placenta implantation range and depth according to the preliminary region prediction features and the identification region prediction features, and marking the placenta implantation prediction position.
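Step S4's equidistant expansion by at least one pixel corresponds to a morphological dilation of the identified region mask. A minimal sketch follows (scipy.ndimage is an assumed choice, and treating "marking the overlapped part" as a logical AND of masks is an interpretation):

    import numpy as np
    from scipy.ndimage import binary_dilation

    def expand_and_mark_overlap(region_mask: np.ndarray,
                                reference_mask: np.ndarray,
                                pixels: int = 1) -> np.ndarray:
        # Expand the feature map's enclosing-region mask equidistantly by
        # at least `pixels` pixels, then mark where the expanded mask
        # overlaps the training-set region mask, yielding the
        # identified-region prediction features of step S4.
        expanded = binary_dilation(region_mask, iterations=pixels)
        return np.logical_and(expanded, reference_mask)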
The B-ultrasound image data and the MRI image data mainly comprise the placenta position, placenta thickness, retro-placental hypoechoic band, vascular features of the placenta-uterus interface, placenta pits and placenta area-to-volume ratio.
The DoubleU-Net network comprises an encoder and an adaptive pooling layer; the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units, the second feature extraction units at the head and tail ends are respectively connected with the first feature extraction unit and the adaptive pooling layer, the first feature extraction unit comprises two cascaded convolution blocks, and each convolution block comprises a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence.
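Assembled end to end, the encoder just described might look as follows in PyTorch. This is a sketch under stated assumptions: the channel widths, the max-pooling downsampling inside the second units and the adaptive average pooling output size are not specified by the patent:

    import torch.nn as nn

    class ConvBlock(nn.Sequential):
        # Convolution -> batch normalization -> activation, as described.
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

    class SecondFeatureExtractionUnit(nn.Sequential):
        # Assumed form of a cascaded second unit: downsample, then two
        # convolution blocks.
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__(nn.MaxPool2d(2),
                             ConvBlock(in_ch, out_ch),
                             ConvBlock(out_ch, out_ch))

    class Encoder(nn.Sequential):
        # First feature extraction unit at the head, cascaded second units,
        # and the adaptive pooling layer at the tail.
        def __init__(self):
            super().__init__(
                ConvBlock(1, 64), ConvBlock(64, 64),   # first unit (two blocks)
                SecondFeatureExtractionUnit(64, 128),
                SecondFeatureExtractionUnit(128, 256),
                nn.AdaptiveAvgPool2d(1),
            )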
Before the image enclosing-region features are confirmed, the image needs to undergo gray-level normalization, iterating from 0. For the gray-normalized image, histogram equalization is performed on each region in the image, gray-level gradients are obtained for a plurality of local regions of the image by the encoder, and these gradient values form the image feature values. From the normalized histogram, gray levels 0 to i are assumed to be foreground pixels, whose number is ω0 and whose average gray level is u0; the number of background points is ω1, with average gray level u1. The current between-class variance is calculated as τ^2 = ω0 × ω1 × (u0 − u1)^2, and the iteration stops after the calculation is completed for i equal to 255.
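That iteration is the classical Otsu between-class-variance search. A NumPy sketch of it (an interpretation of the description above, using pixel proportions rather than raw counts for ω0 and ω1, not the patent's verbatim code) is:

    import numpy as np

    def otsu_threshold(image: np.ndarray) -> int:
        # For each candidate threshold i from 0 to 255, treat gray levels
        # 0..i as foreground (weight w0, mean u0) and the rest as background
        # (weight w1, mean u1); compute tau^2 = w0*w1*(u0 - u1)**2 and keep
        # the i that maximizes it, stopping after i == 255.
        hist, _ = np.histogram(image, bins=256, range=(0, 255))
        p = hist / max(hist.sum(), 1)          # normalized histogram
        levels = np.arange(256)
        best_i, best_var = 0, -1.0
        for i in range(256):
            w0 = p[: i + 1].sum()
            w1 = 1.0 - w0
            if w0 <= 0.0 or w1 <= 0.0:
                continue
            u0 = (levels[: i + 1] * p[: i + 1]).sum() / w0
            u1 = (levels[i + 1:] * p[i + 1:]).sum() / w1
            var = w0 * w1 * (u0 - u1) ** 2
            if var > best_var:
                best_i, best_var = i, var
        return best_i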
Example 2
The specific steps for generating the prediction result according to the image characteristics comprise:
s1, building layers for placenta images in a test set, copying 2 layers to obtain placenta images of 3 different layers, synchronously turning over the placenta images by 90 degrees in a turning manner of arbitrary combination in the left-right direction, the horizontal direction and the vertical direction to obtain 18 feature images, and merging the obtained feature images;
s2, overlapping the feature map obtained in the step S1 with placenta images in the training set and respectively in different map layers, hiding the feature map, only displaying the placenta images in the training set, and carrying out convolution operation on the placenta images in the training set to obtain image enclosing region features of invasive placenta implantation, wherein the image enclosing region features comprise feature coordinates, lengths and confidence degrees;
s3, displaying the feature map, performing convolution operation on the feature map after combining the map layers to obtain image enclosing region features of the feature map, comparing the feature map with the image enclosing region features of placenta images in a training set, and marking the overlapped part to obtain preliminary region prediction features;
s4, equidistantly expanding the surrounding area features of the feature map by at least 1 pixel distance, and marking the overlapped part to obtain the prediction features of the identified area;
s5, confirming a prediction result according to the preliminary region prediction feature and the identification region prediction feature.
Example 3
The specific steps for generating the prediction result according to the image characteristics comprise:
s1, building layers for placenta images in a test set, copying 3 layers to obtain placenta images of 4 different layers, synchronously turning the placenta images by 90 degrees in a turning mode of arbitrary combination in the left-right direction, the horizontal direction and the vertical direction to obtain 24 feature images, and merging the obtained feature images;
s2, overlapping the feature map obtained in the step S1 with placenta images in the training set and respectively in different map layers, hiding the feature map, only displaying the placenta images in the training set, and carrying out convolution operation on the placenta images in the training set to obtain image enclosing region features of invasive placenta implantation, wherein the image enclosing region features comprise feature coordinates, lengths and confidence degrees;
s3, displaying the feature map, performing convolution operation on the feature map after combining the map layers to obtain image enclosing region features of the feature map, comparing the feature map with the image enclosing region features of placenta images in a training set, and marking the overlapped part to obtain preliminary region prediction features;
s4, equidistantly expanding the surrounding area features of the feature map by at least 1 pixel distance, and marking the overlapped part to obtain the prediction features of the identified area;
s5, confirming a prediction result according to the preliminary region prediction feature and the identification region prediction feature.
The more layers that are built for the placenta images in the test set and the more copies that are made, the more feature images are obtained; to avoid confusion of feature points, the optimal number of copies is 3.
The circuits and control involved in the present invention are all of the prior art, and are not described in detail herein.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the respective embodiments or some parts of the embodiments.
While the fundamental and principal features of the invention and advantages of the invention have been shown and described, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments, but may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Furthermore, it should be understood that although the present specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted only for clarity; the specification should be taken as a whole, and the technical solutions in the embodiments may be suitably combined to form other embodiments that will be apparent to those skilled in the art.

Claims (7)

1. The intelligent extraction system for invasive placenta implantation image features AI is characterized by comprising: an acquisition module, which is used for acquiring B-ultrasound image data and MRI image data at the placenta of a patient;
a historical case database for storing historical samples of B-ultrasound image data and MRI image data of an invasive placenta implant, wherein the historical samples include a plurality of invasive placenta implant images;
the feature extraction module, which is used for respectively inputting the historical samples and the B-ultrasound and MRI image data acquired by the acquisition module into an image segmentation network based on the DoubleU-Net network, constructing a training set and a test set, generating an image feature recognition model based on the training set and the test set, recognizing the placenta implantation image features through the image feature recognition model, comparing the acquired B-ultrasound and MRI image data of the patient's placenta with the historical samples, predicting the implantation range and depth of the placenta, and marking the predicted position of the placenta implantation, wherein the training set is based on the B-ultrasound and MRI image data of patients and the test set is based on the B-ultrasound and MRI image data of the patient's placenta acquired by the acquisition module;
the specific steps of predicting the placenta implantation range and depth according to the image features include:
s1, building layers for placenta images in a test set, copying n layers to obtain n+1 placenta images with different layers, synchronously overturning the n+1 placenta images by 90 degrees in any combination of left and right, horizontal and vertical directions to obtain a 6 (n+1) Zhang Tezheng image, and merging the obtained feature images;
s2, overlapping the feature map obtained in the step S1 with placenta images in the training set and respectively in different map layers, hiding the feature map, only displaying the placenta images in the training set, and carrying out convolution operation on the placenta images in the training set to obtain image enclosing region features of invasive placenta implantation, wherein the image enclosing region features comprise feature coordinates, lengths and confidence degrees;
s3, displaying the feature map, performing convolution operation on the feature map after combining the map layers to obtain image enclosing region features of the feature map, comparing the feature map with the image enclosing region features of placenta images in a training set, and marking the overlapped part to obtain preliminary region prediction features;
s4, equidistantly expanding the surrounding area features of the feature map by at least 1 pixel distance, and marking the overlapped part to obtain the prediction features of the identified area;
s5, confirming a prediction result of the placenta implantation range and depth according to the preliminary region prediction features and the identification region prediction features, and marking the prediction position of placenta implantation.
2. The intelligent extraction system for invasive placenta implantation image features AI according to claim 1, wherein after the layers are merged, the edges that the feature map had before merging are redrawn by applying a deep learning algorithm again, so that the pre-merge images and the new merged image appear more real and natural in the joint transition area.
3. The intelligent extraction system for invasive placenta implantation image features AI according to claim 2, wherein the B-ultrasound image data and MRI image data mainly comprise the placenta location, placenta thickness, retro-placental hypoechoic band, vascular features of the placenta-uterus interface, placenta pits and placenta area-to-volume ratio.
4. The intelligent extraction system for invasive placenta implantation image features AI of claim 3, wherein the DoubleU-Net network comprises an encoder and an adaptive pooling layer, the encoder comprising a first feature extraction unit and a plurality of cascaded second feature extraction units, the second feature extraction units at the head and tail ends being respectively connected with the first feature extraction unit and the adaptive pooling layer.
5. The intelligent extraction system of invasive placenta implantation image features AI of claim 4, wherein the first feature extraction unit comprises two cascaded convolution blocks, the convolution blocks comprising a convolution layer, a batch normalization layer and an activation function layer, which are cascaded in sequence.
6. The intelligent extraction system of invasive placental implantation image features AI of claim 5, wherein gray scale normalization of the image is required before confirming the image bounding region features, and iterating from 0.
7. The intelligent extraction system for invasive placental implantation image feature AI according to claim 6, wherein the histogram equalization processing is performed on each region in the image for the image after gray scale normalization, gray scale gradients are obtained for a plurality of local regions of the image by the encoder, and these gradient values are combined into image feature values.
CN202311667705.4A 2023-12-06 2023-12-06 Intelligent extraction system for invasive placenta implantation image features AI Active CN117671284B (en)

Priority Applications (1)

Application number: CN202311667705.4A; priority date: 2023-12-06; filing date: 2023-12-06; title: Intelligent extraction system for invasive placenta implantation image features AI (granted as CN117671284B)


Publications (2)

CN117671284A (application publication): 2024-03-08
CN117671284B (grant publication): 2024-04-30

Family

ID=90086033

Family Applications (1)

Application number: CN202311667705.4A; status: Active; priority/filing date: 2023-12-06; title: Intelligent extraction system for invasive placenta implantation image features AI (CN117671284B)

Country Status (1)

Country Link
CN (1) CN117671284B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190120920A1 (en) * 2016-04-21 2019-04-25 Koninklijke Philips N.V. Modification of mri pulse sequence parameters using a historical database
US20180190377A1 (en) * 2016-12-30 2018-07-05 Dirk Schneemann, LLC Modeling and learning character traits and medical condition based on 3d facial features
CN109903271A (en) * 2019-01-29 2019-06-18 福州大学 Placenta implantation B ultrasonic image feature extraction and verification method
US20220225888A1 (en) * 2019-05-17 2022-07-21 Koninklijke Philips N.V. Automated field of view alignment for magnetic resonance imaging
CN110969614A (en) * 2019-12-11 2020-04-07 中国科学院自动化研究所 Brain age prediction method and system based on three-dimensional convolutional neural network
CN112085113A (en) * 2020-09-14 2020-12-15 四川大学华西医院 Severe tumor image recognition system and method
CN113160256A (en) * 2021-03-09 2021-07-23 宁波大学 MR image placenta segmentation method based on a multi-task generative adversarial model
US20230297646A1 (en) * 2022-03-18 2023-09-21 Change Healthcare Holdings, Llc System and methods for classifying magnetic resonance imaging (mri) image characteristics
CN115496771A (en) * 2022-09-22 2022-12-20 安徽医科大学 Brain tumor segmentation method based on brain three-dimensional MRI image design
CN116363081A (en) * 2023-03-16 2023-06-30 北京大学 Placenta implantation MRI sign detection classification method and device based on deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JORDINA TORRENTS-BARRENA: "Deep Q-CapsNet Reinforcement Learning Framework for Intrauterine Cavity Segmentation in TTTS Fetal Surgery Planning", IEEE Transactions on Medical Imaging, 14 April 2020 (2020-04-14) *
LI Li; Mulati Hamiti: "A review of medical image data classification methods" (医学影像数据分类方法研究综述), Chinese Journal of Medical Physics (中国医学物理学杂志), No. 06, 15 November 2011 (2011-11-15)
WANG Yingchao: "A preliminary study on building a placenta implantation diagnosis model by machine learning on magnetic resonance image features" (基于磁共振图像特征机器学习构建胎盘植入诊断模型的初步研究), Magnetic Resonance Imaging (《磁共振成像》), 20 August 2023 (2023-08-20)

Also Published As

CN117671284B (en): 2024-04-30

Similar Documents

Publication Publication Date Title
CN109166133A (en) Soft tissue organs image partition method based on critical point detection and deep learning
US8768436B2 (en) Coronary artery angiography image processing method to detect occlusion and degree effect of blood vessel occlusion to an organ
EP3295373A1 (en) A system and method for surgical guidance and intra-operative pathology through endo-microscopic tissue differentiation
US20200187896A1 (en) Apparatus and method for assessing uterine parameters
CN112263217B (en) Improved convolutional neural network-based non-melanoma skin cancer pathological image lesion area detection method
CN114299072B (en) Artificial intelligence-based anatomy variation identification prompting method and system
CN111415361B (en) Method and device for estimating brain age of fetus and detecting abnormality based on deep learning
CN110751187B (en) Training method of abnormal area image generation network and related product
CN102132320A (en) Method and device for image processing, particularly for medical image processing
CN111951276A (en) Image segmentation method and device, computer equipment and storage medium
CN111968130B (en) Brain contrast image processing method, device, medium and electronic equipment
CN111145137B (en) Vein and artery identification method based on neural network
CN117671284B (en) Intelligent extraction system for invasive placenta implantation image features AI
CN109903271B (en) Placenta implantation B ultrasonic image feature extraction and verification method
CN116747017A (en) Cerebral hemorrhage operation planning system and method
CN111144163B (en) Vein and artery identification system based on neural network
KR20220001985A (en) Apparatus and method for diagnosing local tumor progression using deep neural networks in diagnostic images
WO2020087732A1 (en) Neural network-based method and system for vein and artery identification
Matsopoulos et al. MITIS: a WWW-based medical system for managing and processing gynecological–obstetrical–radiological data
CN113160256B (en) MR image placenta segmentation method based on a multi-task generative adversarial model
CN109567861A (en) Ultrasonic imaging method and relevant device
US20210319210A1 (en) Region specification apparatus, region specification method, and region specification program
CN117078711A (en) Medical image segmentation method, system, electronic device and storage medium
CN113298773A (en) Heart view identification and left ventricle detection device and system based on deep learning
CN113870980A (en) Visual obstetrical image examination processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant