CN117671284B - Intelligent extraction system for invasive placenta implantation image features AI - Google Patents


Info

Publication number
CN117671284B
CN117671284B (application CN202311667705.4A)
Authority
CN
China
Prior art keywords
placenta
image
features
images
implantation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311667705.4A
Other languages
Chinese (zh)
Other versions
CN117671284A (en)
Inventor
王志坚 (Wang Zhijian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kesong Medical Intelligent Technology Co ltd
Original Assignee
Guangzhou Kesong Medical Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kesong Medical Intelligent Technology Co., Ltd.
Priority claimed from CN202311667705.4A
Publication of CN117671284A
Application granted
Publication of CN117671284B
Legal status: Active

Classifications

    • A61B 8/0866 — Diagnosis using ultrasonic waves; detecting organic movements or changes involving foetal, pre-natal or peri-natal diagnosis
    • A61B 5/0035 — Features of imaging apparatus adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A61B 5/055 — Detecting, measuring or recording for diagnosis involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B 5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 8/4416 — Constructional features of the ultrasonic diagnostic device related to combined acquisition of different diagnostic modalities
    • A61B 8/5261 — Image processing for diagnosis combining image data from different diagnostic modalities, e.g. ultrasound and X-ray
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Neural network learning methods
    • G06V 10/26 — Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/50 — Feature extraction by operations within image blocks or by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V 10/751 — Template matching; comparing pixel values or feature values having positional relevance
    • G06V 10/82 — Image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Evolutionary Computation (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Fuzzy Systems (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Gynecology & Obstetrics (AREA)
  • Pregnancy & Childbirth (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an AI-based intelligent extraction system for invasive placenta implantation image features, relating to the field of image processing. The system comprises an acquisition module, a historical case database, a feature extraction module and a prediction module. B-ultrasound image data and magnetic resonance imaging (MRI) data are each input into an image segmentation network based on the DoubleU-Net architecture; a training set and a test set are constructed from the B-ultrasound and MRI images; an image feature recognition model is generated from the training and test sets; and placenta implantation image features are recognized by the model. The prediction module, connected to the feature extraction module, generates a prediction result from the image features. Because the placenta implantation image features are recognized by the model, subjective error introduced by operators, environment and instruments is eliminated, and since the extraction is performed by computer machine learning, time and labor are saved.

Description

AI-based intelligent extraction system for invasive placenta implantation image features
Technical Field
The invention relates to the field of image processing, and in particular to an AI-based intelligent extraction system for invasive placenta implantation image features.
Background
Placenta implantation (placenta accreta) occurs when placental villi penetrate into the myometrium of the uterine wall, beginning in early pregnancy. It is one of the serious complications of obstetrics and can lead to massive hemorrhage, shock, uterine perforation, secondary infection and even maternal death. Because the implanted portion of the placenta cannot separate normally during the third stage of labor, prenatal color ultrasound screening for placenta implantation is essential for women with high-risk factors. Placenta implantation is often accompanied by placenta previa, which can be classified into four types: low-lying placenta, marginal placenta, complete placenta and central placenta.
Owing to the high cesarean section rate, placenta implantation poses an extremely severe challenge to obstetricians and a heavy burden on families and society. If placenta increta and placenta percreta are not identified in time, a primary hospital that fails to refer the patient promptly and operates blindly may cause an otherwise avoidable maternal death, while a general hospital that fails to prepare an adequate blood supply and organize multidisciplinary consultation may face fatal hemorrhage and hysterectomy. Therefore, how to accurately evaluate the implantation type and predict placental and villous growth before surgery, so as to ultimately reduce the rate of fatal postpartum hemorrhage, is an urgent practical problem in current clinical work.
Therefore, it is necessary to provide a new AI-based intelligent extraction system for invasive placenta implantation image features to solve the above technical problems.
Disclosure of Invention
To solve the technical problem of accurately evaluating the implantation type and predicting the implantation depth and range of the placenta before surgery, so that a surgical plan can be made in advance and the rates of fatal postpartum hemorrhage and hysterectomy ultimately reduced, the invention provides an AI-based intelligent extraction system for invasive placenta implantation image features.
The AI-based intelligent extraction system for invasive placenta implantation image features comprises: an acquisition module, used for acquiring B-ultrasound image data and MRI image data of the placenta of a patient;
a historical case database, used for storing historical samples of B-ultrasound and MRI image data of invasive placenta implantation, wherein the historical samples comprise a plurality of invasive-placenta-implantation B-ultrasound and MRI images;
a feature extraction module, used for inputting the historical samples together with the B-ultrasound and MRI image data acquired by the acquisition module into an image segmentation network based on the DoubleU-Net architecture, constructing a training set and a test set, generating an image feature recognition model from the training and test sets, recognizing placenta implantation image features through the model, comparing the acquired B-ultrasound and MRI data of the patient's placenta with the historical samples, predicting the implantation range and depth of the placenta, and marking the predicted implantation position, wherein the training set is based on B-ultrasound and MRI image data of patients, and the test set is based on the B-ultrasound and MRI image data of the placenta of the patient acquired by the acquisition module;
The specific steps of predicting placenta implantation from the image features comprise:
S1, establishing layers for a placenta image in the test set and copying it n times to obtain n+1 placenta images on different layers; synchronously flipping the n+1 images by any combination of left-right, horizontal and vertical 90-degree flips to obtain 6(n+1) feature images, and merging the obtained feature images;
S2, overlaying the feature images obtained in S1 with the placenta images in the training set, each on its own layer; hiding the feature images so that only the training-set placenta images are displayed, and performing a convolution operation on the training-set placenta images to obtain the image bounding-region features of the invasive placenta implantation, wherein the bounding-region features comprise feature coordinates, lengths and confidence levels;
S3, displaying the feature images, performing a convolution operation on the merged feature layers to obtain the bounding-region features of the feature images, comparing them with the bounding-region features of the training-set placenta images, and marking the overlapping portion to obtain preliminary region prediction features;
S4, expanding the bounding-region features of the feature images equidistantly by at least 1 pixel, and marking the overlapping portion to obtain identified-region prediction features;
S5, confirming the prediction result for the placenta implantation range and depth from the preliminary region prediction features and the identified-region prediction features, and marking the predicted implantation position.
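The "6(n+1) feature images" count in step S1 suggests six orientation variants per layer. A minimal numpy sketch of that augmentation, assuming the six variants are the original, its left-right and up-down flips, and the 90-degree rotations of those three (the helper names are illustrative, not from the patent):

```python
import numpy as np

def six_variants(img):
    """Six orientation variants of one placenta image layer: the original,
    its horizontal and vertical flips, and 90-degree rotations of those
    three (one plausible reading of the "6(n+1) feature images")."""
    base = [img, np.fliplr(img), np.flipud(img)]
    return base + [np.rot90(v) for v in base]

def build_feature_stack(image, n_copies):
    """Copy the image into n_copies extra layers (n+1 layers total) and
    apply the six flips/rotations to each, yielding 6*(n+1) feature images."""
    layers = [image.copy() for _ in range(n_copies + 1)]
    feats = []
    for layer in layers:
        feats.extend(six_variants(layer))
    return feats
```

With n = 1 this yields 12 feature images and with n = 3 it yields 24, matching the counts given in the first and third embodiments below.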
Further, after the layers are merged, the edges of the pre-merge feature images are redrawn by applying a deep learning algorithm again, so that the pre-merge images and the new merged image transition more naturally at the seams.
Further, the B-ultrasound and MRI image data mainly describe the placental position, placental thickness, retroplacental hypoechoic zone, vascular features of the placenta-uterus interface, placental lacunae, and the placental area-to-volume ratio.
Further, the DoubleU-Net network comprises an encoder and an adaptive pooling layer; the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units, and the second feature extraction units at the head and tail ends are connected to the first feature extraction unit and the adaptive pooling layer, respectively.
Further, the first feature extraction unit comprises two cascaded convolution blocks, and each convolution block comprises a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence.
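The convolution block just described (convolution, then batch normalization, then activation) can be sketched in plain numpy; this is an illustrative single-channel stand-in, not the patent's actual DoubleU-Net implementation:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid-mode single-channel 2-D convolution via sliding windows."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def conv_block(x, kernel, eps=1e-5):
    """One convolution block: convolution, batch-norm-style normalization
    of the feature map, then a ReLU activation."""
    y = conv2d(x, kernel)
    y = (y - y.mean()) / np.sqrt(y.var() + eps)  # normalization layer
    return np.maximum(y, 0.0)                    # activation layer (ReLU)

def first_feature_extraction_unit(x, k1, k2):
    """Two cascaded convolution blocks, as in the first feature
    extraction unit of the encoder."""
    return conv_block(conv_block(x, k1), k2)
```

Each 3x3 convolution shrinks a feature map by 2 pixels per axis in valid mode, so an 8x8 input leaves the two-block unit as a 4x4 map of non-negative activations.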
Further, gray-scale normalization of the image is required before the image bounding-region features are confirmed, with the iteration starting from 0.
Further, for the gray-scale-normalized image, histogram equalization is performed on each region, gray-level gradients are obtained by the encoder for a plurality of local regions, and these gradient values are combined into image feature values.
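A minimal sketch of the two operations above, histogram equalization and per-region gradient features, assuming 8-bit grayscale input and a simple mean-gradient-magnitude summary per block (the block size and helper names are illustrative):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization of an 8-bit grayscale region: map each
    gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def local_gradient_features(img, block=4):
    """Mean gradient magnitude per local block, concatenated into a
    feature vector (a simple stand-in for combining the encoder's
    per-region gray-level gradients into image feature values)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    feats = [mag[i:i + block, j:j + block].mean()
             for i in range(0, h, block)
             for j in range(0, w, block)]
    return np.array(feats)
```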
Compared with the related art, the AI-based intelligent extraction system for invasive placenta implantation image features has the following beneficial effects:
1. According to the invention, B-ultrasound and MRI image data are each input into an image segmentation network based on the DoubleU-Net architecture, a training set and a test set are constructed, an image feature recognition model is generated from them, and the placenta implantation image features are recognized by the model. Subjective error introduced by operators, environment and instruments is thereby eliminated, and because the extraction is performed by computer machine learning, time and labor are saved.
2. According to the invention, layers are established for the placenta images in the test set and n copies are made, giving n+1 placenta images on different layers; the n+1 images are synchronously flipped by any combination of left-right, horizontal and vertical 90-degree flips to give 6(n+1) feature images, which are then merged. This yields feature information in more dimensions, further improving feature-point judgment and prediction, so that the accuracy of placenta implantation prediction is improved, the growth of the placenta and villi is predicted, and their predicted positions are marked.
3. The invention expands the bounding-region features of the feature images equidistantly by at least 1 pixel and marks the overlapping portion to obtain the identified-region prediction features, then confirms the prediction result from the preliminary and identified-region prediction features. This avoids the loss of diagnostic accuracy that deviations in the measured indices would cause, reserves a safety margin, and improves the safety measures.
Drawings
FIG. 1 is a block flow diagram of a prediction result generated by image features provided by the invention;
fig. 2 is a system block diagram of an intelligent extraction system for invasive placenta implantation image features AI provided by the present invention;
fig. 3 is a block diagram of the DoubleU-Net network according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and embodiments.
Referring to fig. 1, fig. 2 and fig. 3 in combination: fig. 1 is a block flow diagram of generating a prediction result from image features according to the present invention; fig. 2 is a system block diagram of the AI-based intelligent extraction system for invasive placenta implantation image features according to the present invention; fig. 3 is a block diagram of the DoubleU-Net network according to the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, procedures, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, procedures, steps, operations, elements, components and/or groups thereof. The term "predictive model" is defined to cover a wide variety of methods from statistics, machine learning, artificial intelligence and data mining that may be used to determine a prediction; it includes not only methods that perform complex analyses on multiple markers, but also methods that directly compare the expression of a single marker or tag with control tissue or with a predetermined threshold. These are discussed further in the detailed description.
In a specific implementation, as shown in figs. 1-3, the AI-based intelligent extraction system for invasive placenta implantation image features comprises: an acquisition module, used for acquiring ultrasound image data at the placenta of a patient;
a historical case database, used for storing historical samples of B-ultrasound image data of invasive placenta implantation, wherein the historical samples comprise a plurality of images of the invasive-placenta-implantation growth process;
a feature extraction module, used for inputting the historical samples together with the B-ultrasound and MRI image data acquired by the acquisition module into an image segmentation network based on the DoubleU-Net architecture, constructing a training set and a test set, generating an image feature recognition model from the training and test sets, recognizing placenta implantation image features through the model, comparing the acquired B-ultrasound and MRI data of the patient's placenta with the historical samples, predicting the growth of the placenta and villi, and marking their predicted positions, wherein the training set is based on B-ultrasound and MRI image data of patients, and the test set is based on the B-ultrasound and MRI image data of the placenta of the patient acquired by the acquisition module;
The specific steps of predicting the placenta implantation range and depth from the image features comprise:
S1, establishing layers for a placenta image in the test set and copying it once to obtain 2 placenta images on different layers; synchronously flipping the 2 images by any combination of left-right, horizontal and vertical 90-degree flips to obtain 12 feature images, and merging the obtained feature images;
S2, overlaying the feature images obtained in S1 with the placenta images in the training set, each on its own layer; hiding the feature images so that only the training-set placenta images are displayed, and performing a convolution operation on the training-set placenta images to obtain the image bounding-region features of the invasive placenta implantation, wherein the bounding-region features comprise feature coordinates, lengths and confidence levels;
S3, displaying the feature images, performing a convolution operation on the merged feature layers to obtain the bounding-region features of the feature images, comparing them with the bounding-region features of the training-set placenta images, and marking the overlapping portion to obtain preliminary region prediction features;
S4, expanding the bounding-region features of the feature images equidistantly by at least 1 pixel, and marking the overlapping portion to obtain identified-region prediction features;
S5, confirming the prediction result for the placenta implantation range and depth from the preliminary region prediction features and the identified-region prediction features, and marking the predicted implantation position.
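The equidistant 1-pixel expansion of step S4 behaves like binary morphological dilation of a region mask, followed by marking the overlap with the reference region. A minimal numpy sketch under that reading (the function names are mine, not the patent's):

```python
import numpy as np

def dilate_mask(mask, pixels=1):
    """Expand a binary region mask outward by `pixels` in every direction
    (equidistant expansion), via repeated 3x3 binary dilation."""
    m = mask.astype(bool)
    for _ in range(pixels):
        padded = np.pad(m, 1)
        grown = np.zeros_like(m)
        for di in (0, 1, 2):           # OR together the 3x3 neighborhood shifts
            for dj in (0, 1, 2):
                grown |= padded[di:di + m.shape[0], dj:dj + m.shape[1]]
        m = grown
    return m

def identified_region_features(pred_mask, ref_mask, pixels=1):
    """Dilate the predicted bounding-region mask, then mark its overlap
    with the reference (training-set) region mask."""
    return dilate_mask(pred_mask, pixels) & ref_mask.astype(bool)
```

A single foreground pixel grows into a 3x3 block after one dilation step, which is the "at least 1 pixel" safety margin the description refers to.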
The B-ultrasound and MRI image data mainly comprise the placental position, placental thickness, retroplacental hypoechoic zone, vascular features of the placenta-uterus interface, placental lacunae, and the placental area-to-volume ratio.
The DoubleU-Net network comprises an encoder and an adaptive pooling layer; the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units, and the second feature extraction units at the head and tail ends are connected to the first feature extraction unit and the adaptive pooling layer, respectively. The first feature extraction unit comprises two cascaded convolution blocks, and each convolution block comprises a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence.
Before the image bounding-region features are confirmed, the image is gray-scale normalized and an iteration is run starting from i = 0. For the gray-scale-normalized image, histogram equalization is performed on each region, gray-level gradients are obtained by the encoder for a plurality of local regions, and these gradient values are combined into image feature values. From the normalized histogram, gray levels 0 to i are taken as foreground pixels, whose fraction of the pixel count is ω0 with mean gray level u0, while the background fraction is ω1 with mean gray level u1. The current between-class variance is computed as σ² = ω0·ω1·(u0 − u1)², and the iteration stops when i reaches 255.
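The iteration just described matches Otsu's thresholding method (maximizing the between-class variance σ² = ω0·ω1·(u0 − u1)² over candidate thresholds i = 0..255). Assuming that reading, a minimal sketch:

```python
import numpy as np

def otsu_threshold(img):
    """Iterate candidate thresholds i = 0..255; foreground is levels 0..i
    with pixel fraction w0 and mean u0, background has fraction w1 and
    mean u1.  Return the i maximizing w0 * w1 * (u0 - u1)**2."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()          # normalized histogram
    levels = np.arange(256)
    best_i, best_var = 0, -1.0
    for i in range(256):
        w0 = p[:i + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:     # skip empty classes
            continue
        u0 = (levels[:i + 1] * p[:i + 1]).sum() / w0
        u1 = (levels[i + 1:] * p[i + 1:]).sum() / w1
        var = w0 * w1 * (u0 - u1) ** 2
        if var > best_var:
            best_i, best_var = i, var
    return best_i
```

On a bimodal image the returned threshold separates the two gray-level clusters, which is what the foreground/background split above relies on.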
Example two
The specific steps for generating the prediction result according to the image characteristics comprise:
S1, establishing layers for a placenta image in the test set and copying it twice to obtain placenta images on 3 different layers; synchronously flipping them by any combination of left-right, horizontal and vertical 90-degree flips to obtain 18 feature images, and merging the obtained feature images;
S2, overlaying the feature images obtained in S1 with the placenta images in the training set, each on its own layer; hiding the feature images so that only the training-set placenta images are displayed, and performing a convolution operation on the training-set placenta images to obtain the image bounding-region features of the invasive placenta implantation, wherein the bounding-region features comprise feature coordinates, lengths and confidence levels;
S3, displaying the feature images, performing a convolution operation on the merged feature layers to obtain the bounding-region features of the feature images, comparing them with the bounding-region features of the training-set placenta images, and marking the overlapping portion to obtain preliminary region prediction features;
S4, expanding the bounding-region features of the feature images equidistantly by at least 1 pixel, and marking the overlapping portion to obtain identified-region prediction features;
S5, confirming the prediction result from the preliminary region prediction features and the identified-region prediction features.
Example III
The specific steps for generating the prediction result according to the image characteristics comprise:
S1, establishing layers for a placenta image in the test set and copying it three times to obtain placenta images on 4 different layers; synchronously flipping them by any combination of left-right, horizontal and vertical 90-degree flips to obtain 24 feature images, and merging the obtained feature images;
S2, overlaying the feature images obtained in S1 with the placenta images in the training set, each on its own layer; hiding the feature images so that only the training-set placenta images are displayed, and performing a convolution operation on the training-set placenta images to obtain the image bounding-region features of the invasive placenta implantation, wherein the bounding-region features comprise feature coordinates, lengths and confidence levels;
S3, displaying the feature images, performing a convolution operation on the merged feature layers to obtain the bounding-region features of the feature images, comparing them with the bounding-region features of the training-set placenta images, and marking the overlapping portion to obtain preliminary region prediction features;
S4, expanding the bounding-region features of the feature images equidistantly by at least 1 pixel, and marking the overlapping portion to obtain identified-region prediction features;
S5, confirming the prediction result from the preliminary region prediction features and the identified-region prediction features.
The more layers are established for, and copies made of, the placenta images in the test set, the more feature images are obtained; to avoid confusion of feature points, the optimal number of copies is 3.
The circuits and controls involved in the invention are all prior art and are not described in detail here.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the respective embodiments or some parts of the embodiments.
While the fundamental and principal features and advantages of the invention have been shown and described, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments and may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single technical solution; this manner of description is adopted merely for clarity, and the specification should be read as a whole, since the technical solutions of the individual embodiments may be combined as appropriate to form other embodiments apparent to those skilled in the art.

Claims (7)

1. An intelligent extraction system for invasive placenta implantation image features AI, characterized by comprising: an acquisition module for acquiring B-ultrasound image data and MRI image data of the placenta of a patient;
a historical case database for storing historical samples of B-ultrasound image data and MRI image data of an invasive placenta implant, wherein the historical samples include a plurality of invasive placenta implant images;
a feature extraction module for respectively inputting the historical samples and the B-ultrasound image data and MRI image data acquired by the acquisition module into an image segmentation network based on a DoubleU-Net network, constructing a training set and a test set, generating an image feature recognition model based on the training set and the test set, recognizing placenta implantation image features through the image feature recognition model, comparing the acquired B-ultrasound image and MRI image data of the patient's placenta with the historical samples, predicting the implantation range and depth of the placenta, and marking the predicted position of the placenta implantation, wherein the training set is based on the historical samples of B-ultrasound image and MRI image data, and the test set is based on the B-ultrasound image data and MRI image data of the patient's placenta acquired by the acquisition module;
The specific steps of predicting the placenta implantation range and depth according to the image features include:
S1, creating layers for a placenta image in the test set and copying it onto n layers to obtain n+1 placenta image layers; synchronously flipping the n+1 placenta images through any combination of left-right flips, up-down flips and 90-degree rotations to obtain 6(n+1) feature maps, and merging the obtained feature maps;
S2, overlaying the feature maps obtained in step S1 on the placenta images in the training set, each on a separate layer; hiding the feature maps so that only the training-set placenta images are displayed, and performing a convolution operation on the training-set placenta images to obtain image bounding-region features of invasive placenta implantation, wherein the bounding-region features comprise feature coordinates, lengths and confidence levels;
S3, displaying the feature maps, performing a convolution operation on the feature maps after the layers are merged to obtain the bounding-region features of the feature maps, comparing these with the bounding-region features of the training-set placenta images, and marking the overlapping portion to obtain preliminary region prediction features;
S4, expanding the bounding-region features of the feature maps equidistantly by at least 1 pixel, and marking the overlapping portion to obtain identified-region prediction features;
S5, confirming the prediction result of the placenta implantation range and depth from the preliminary region prediction features and the identified-region prediction features, and marking the predicted position of the placenta implantation.
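The region comparison in steps S3 and S4 can be illustrated with plain rectangle arithmetic. This is a hedged sketch under the assumption that a bounding-region feature reduces to an (x, y, width, height) box; the claim's confidence values and the convolution that produces the boxes are omitted.

```python
def box_overlap(a, b):
    """Intersection rectangle of two (x, y, w, h) boxes, or None if the
    boxes do not overlap (the 'marked overlapping portion' of step S3)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def expand(box, d=1):
    """Step S4: grow a box equidistantly by d pixels on every side before
    re-marking the overlap, loosening the preliminary prediction."""
    x, y, w, h = box
    return (x - d, y - d, w + 2 * d, h + 2 * d)
```

For example, the overlap of a feature-map box with a training-set box gives the preliminary region, and re-intersecting after `expand` gives the identified region of step S4.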
2. The intelligent extraction system for invasive placenta implantation image features AI according to claim 1, wherein, for the feature map obtained after merging the layers, the edges that existed before merging are redrawn by a further pass of the deep learning algorithm, so that the transition region between the pre-merge images and the new merged image appears more real and natural.
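Claim 2 leaves the edge-redrawing network unspecified. As a stand-in for intuition only, a classical linear feathering across the seam achieves the same goal of a natural transition; this is explicitly not the patented deep learning approach, just an assumed baseline.

```python
import numpy as np

def feather_merge(left, right, overlap):
    """Blend two horizontally adjacent image strips over `overlap` columns
    with a linear alpha ramp so the seam transition is smooth."""
    alpha = np.linspace(1.0, 0.0, overlap)  # weight ramp for `left`
    blend = left[:, -overlap:] * alpha + right[:, :overlap] * (1 - alpha)
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])
```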
3. The intelligent extraction system for invasive placenta implantation image features AI according to claim 2, wherein the B-ultrasound image data and MRI image data mainly comprise the placenta location, placenta thickness, retroplacental hypoechoic band, vascular features of the placenta-uterus interface, placental lacunae, and the placenta area-to-volume ratio.
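The feature set of claim 3 can be held in a simple record type. The field names and units below are illustrative assumptions; the claim lists the quantities (location, thickness, retroplacental hypoechoic band, interface vascularity, lacunae/"placenta pit", area-to-volume ratio) but not a schema.

```python
from dataclasses import dataclass

@dataclass
class PlacentaImageFeatures:
    """Illustrative container for the claim-3 B-ultrasound/MRI feature set."""
    placenta_location: str                    # e.g. "anterior low-lying"
    placenta_thickness_mm: float
    retroplacental_hypoechoic_band: bool      # band present or absent
    interface_vascularity: float              # placenta-uterus interface score
    placental_lacunae_count: int
    area_to_volume_ratio: float
```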
4. The intelligent extraction system for invasive placenta implantation image features AI according to claim 3, wherein the DoubleU-Net network comprises an encoder and an adaptive pooling layer, the encoder comprising a first feature extraction unit and a plurality of cascaded second feature extraction units, the second feature extraction units at the head and tail ends being connected to the first feature extraction unit and the adaptive pooling layer, respectively.
5. The intelligent extraction system for invasive placenta implantation image features AI according to claim 4, wherein the first feature extraction unit comprises two cascaded convolution blocks, each convolution block comprising a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence.
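The convolution block of claim 5 can be sketched in plain numpy to make the convolution -> batch norm -> activation order concrete. This is a single-channel, inference-style toy (no learned scale/shift, single kernel), not the network's actual layers.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid-mode 2-D cross-correlation with a single kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize to zero mean / unit variance (learned affine omitted)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def conv_block(x, kernel):
    """Claim-5 convolution block: convolution -> batch norm -> ReLU."""
    return np.maximum(batch_norm(conv2d(x, kernel)), 0.0)
```

The first feature extraction unit would then be two such blocks applied in cascade.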
6. The intelligent extraction system for invasive placenta implantation image features AI according to claim 5, wherein gray-scale normalization of the image is required before the image bounding-region features are confirmed, with the gray scale iterated from 0.
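A minimal reading of claim 6's normalization is a min-max rescale whose gray scale starts at 0. This interpretation of "iterating from 0" is an assumption; the claim does not define the normalization formula.

```python
import numpy as np

def gray_normalize(image, levels=256):
    """Min-max rescale pixel intensities to [0, levels-1], anchoring the
    gray scale at 0.  A constant image maps to all zeros."""
    image = image.astype(float)
    span = image.max() - image.min()
    if span == 0:
        return np.zeros_like(image)
    return (image - image.min()) / span * (levels - 1)
```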
7. The intelligent extraction system for invasive placenta implantation image features AI according to claim 6, wherein, for the image after gray-scale normalization, histogram equalization is performed on each region of the image, gray-scale gradients are obtained for a plurality of local regions of the image by the encoder, and these gradient values are combined into image feature values.
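Claim 7's per-region equalization and gradient pooling can be sketched as follows. The grid partition and the use of mean gradient magnitude per region are assumptions; the claim says only that local gradients are combined into feature values (and that the encoder, not plain numpy, produces them).

```python
import numpy as np

def equalize(region, levels=256):
    """Histogram-equalize one integer-valued gray region via its CDF."""
    hist = np.bincount(region.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (levels - 1)
    return cdf[region].astype(np.uint8)

def gradient_features(image, grid=2):
    """Split the image into grid x grid local regions and collect the mean
    gradient magnitude of each region into one feature vector."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    feats = [mag[i * h // grid:(i + 1) * h // grid,
                 j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    return np.array(feats)
```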
CN202311667705.4A 2023-12-06 2023-12-06 Intelligent extraction system for invasive placenta implantation image features AI Active CN117671284B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311667705.4A CN117671284B (en) 2023-12-06 2023-12-06 Intelligent extraction system for invasive placenta implantation image features AI


Publications (2)

Publication Number Publication Date
CN117671284A CN117671284A (en) 2024-03-08
CN117671284B true CN117671284B (en) 2024-04-30

Family

ID=90086033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311667705.4A Active CN117671284B (en) 2023-12-06 2023-12-06 Intelligent extraction system for invasive placenta implantation image features AI

Country Status (1)

Country Link
CN (1) CN117671284B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903271A (en) * 2019-01-29 2019-06-18 福州大学 Placenta implantation B ultrasonic image feature extraction and verification method
CN110969614A (en) * 2019-12-11 2020-04-07 中国科学院自动化研究所 Brain age prediction method and system based on three-dimensional convolutional neural network
CN112085113A (en) * 2020-09-14 2020-12-15 四川大学华西医院 Severe tumor image recognition system and method
CN113160256A (en) * 2021-03-09 2021-07-23 宁波大学 MR image placenta segmentation method for multitask generation confrontation model
CN115496771A (en) * 2022-09-22 2022-12-20 安徽医科大学 Brain tumor segmentation method based on brain three-dimensional MRI image design
CN116363081A (en) * 2023-03-16 2023-06-30 北京大学 Placenta implantation MRI sign detection classification method and device based on deep neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2734059C2 (en) * 2016-04-21 2020-10-12 Конинклейке Филипс Н.В. Modification of pulse sequence parameters for magnetic resonance imaging
WO2018126275A1 (en) * 2016-12-30 2018-07-05 Dirk Schneemann, LLC Modeling and learning character traits and medical condition based on 3d facial features
US12078703B2 (en) * 2019-05-17 2024-09-03 Koninklijke Philips N.V. Automated field of view alignment for magnetic resonance imaging
US20230297646A1 (en) * 2022-03-18 2023-09-21 Change Healthcare Holdings, Llc System and methods for classifying magnetic resonance imaging (mri) image characteristics


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Deep Q-CapsNet Reinforcement Learning Framework for Intrauterine Cavity Segmentation in TTTS Fetal Surgery Planning";Jordina Torrents-Barrena;《 IEEE Transactions on Medical Imaging 》;20200414;全文 *
医学影像数据分类方法研究综述;李莉;木拉提・哈米提;;中国医学物理学杂志;20111115(第06期);全文 *
王颖超." 基于磁共振图像特征机器学习构建胎盘植入诊断模型的初步研究".《磁共振成像》.2023,全文. *


Similar Documents

Publication Publication Date Title
KR102014359B1 (en) Method and apparatus for providing camera location using surgical video
US11380084B2 (en) System and method for surgical guidance and intra-operative pathology through endo-microscopic tissue differentiation
US20100130878A1 (en) Systems, apparatus and processes for automated blood flow assessment of vasculature
CN111210401B (en) Automatic aortic detection and quantification from medical images
US20200187896A1 (en) Apparatus and method for assessing uterine parameters
CN111415361B (en) Method and device for estimating brain age of fetus and detecting abnormality based on deep learning
CN110751187B (en) Training method of abnormal area image generation network and related product
CN111951276A (en) Image segmentation method and device, computer equipment and storage medium
CN113516623A (en) Puncture path checking method and device, computer equipment and readable storage medium
CN113298773A (en) Heart view identification and left ventricle detection device and system based on deep learning
CN117078711A (en) Medical image segmentation method, system, electronic device and storage medium
CN109903271B (en) Placenta implantation B ultrasonic image feature extraction and verification method
CN111145137B (en) Vein and artery identification method based on neural network
CN117671284B (en) Intelligent extraction system for invasive placenta implantation image features AI
CN113160256B (en) MR image placenta segmentation method for multitasking countermeasure model
CN113487665B (en) Method, device, equipment and medium for measuring cavity gap
Singhal et al. Deep learning based junctional zone quantification using 3D transvaginal ultrasound in assisted reproduction
CN118476860A (en) Left auricle plugging simulation method, system and application
CN116747017A (en) Cerebral hemorrhage operation planning system and method
CN111144163B (en) Vein and artery identification system based on neural network
KR102213412B1 (en) Method, apparatus and program for generating a pneumoperitoneum model
WO2020087732A1 (en) Neural network-based method and system for vein and artery identification
CN109567861A (en) Ultrasonic imaging method and relevant device
US20210319210A1 (en) Region specification apparatus, region specification method, and region specification program
CN116205930A (en) Intracranial hemorrhage area automatic segmentation method based on multi-layer CT image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant