CN115578456A - Simulated pneumoperitoneum image generation method and device and computer equipment - Google Patents

Simulated pneumoperitoneum image generation method and device and computer equipment

Info

Publication number
CN115578456A
CN115578456A (application CN202211308290.7A)
Authority
CN
China
Prior art keywords
pneumoperitoneum
region
simulated
image
abdominal cavity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211308290.7A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202211308290.7A
Publication of CN115578456A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to a simulated pneumoperitoneum image generation method, apparatus, computer device, storage medium and computer program product. The simulated pneumoperitoneum image generation method comprises the following steps: acquiring an abdominal cavity image; determining a target area according to the abdominal cavity image; performing simulated pneumoperitoneum fitting on the abdominal cavity image according to the target area to determine a simulated pneumoperitoneum area; and determining a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image. The method automatically delineates the target area from the abdominal cavity image, saving manual annotation time and enhancing the degree of automation of simulated pneumoperitoneum image generation. Because simulated pneumoperitoneum fitting obtains the deformed, post-pneumoperitoneum abdominal cavity state directly on the image, it greatly reduces image generation time compared with generating the abdominal cavity state image through numerical simulation.

Description

Simulated pneumoperitoneum image generation method and device and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for generating a simulated pneumoperitoneum image.
Background
Pneumoperitoneum, the basis of laparoscopic surgery, has long been a focus of laparoscopic research. In laparoscopic surgery, a medical gas such as carbon dioxide is injected into the abdominal cavity through an endoscope port, inflating and enlarging the cavity to form the basic environment for the operation. A laparoscope is then inserted through the endoscope port to observe the abdominal environment; the surgical site is located by means of preoperative CT scanning and external probing, and the positions of the surgical operating ports are determined in combination with the surgeon's experience. Finally, the surgeon opens the operating ports with a scalpel, after which the abdominal environment can be viewed and the operation performed through the abdominal images returned by the endoscope.
In order to allow a doctor to probe the patient's body before a laparoscopic surgery formally begins, so that sufficient surgical preparation can be made according to the probing result, pneumoperitoneum simulation processing needs to be performed on the human abdominal cavity to build a simulated pneumoperitoneum abdominal environment.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a simulated pneumoperitoneum image generation method, apparatus, computer device, storage medium, and computer program product capable of quickly generating a simulated pneumoperitoneum image.
In a first aspect, the present application provides a simulated pneumoperitoneum image generation method, including:
acquiring an abdominal cavity image;
determining a target area according to the abdominal cavity image;
according to the target area, performing simulated pneumoperitoneum fitting on the abdominal cavity image to determine a simulated pneumoperitoneum area;
and determining a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image.
In one embodiment, the determining a target region according to the abdominal cavity image includes:
performing target recognition on the abdominal cavity image by adopting at least one deep neural network to determine an abdominal muscle area;
performing image segmentation on the abdominal cavity image to determine an abdominal wall area;
and fusing the abdominal muscle area and the abdominal wall area to obtain the target area.
In one embodiment, the performing target recognition on the abdominal cavity image by using at least one deep neural network to determine an abdominal muscle region includes:
inputting the abdominal cavity image into at least one deep neural network, and performing target identification to obtain a corresponding prediction region;
and determining the abdominal muscle area according to the prediction area and the preset weight corresponding to each deep neural network.
In one embodiment, the determining the abdominal muscle region according to the prediction region and the preset weight corresponding to each deep neural network includes:
determining a prediction region probability value of each pixel point in the abdominal cavity image according to the prediction region corresponding to each deep neural network and the corresponding preset weight;
and according to the probability value of the prediction region of each pixel point, the abdominal muscle region is segmented from the abdominal cavity image.
In one embodiment, the performing image segmentation on the abdominal cavity image to determine an abdominal wall region includes:
determining the outer contour of the abdominal wall area from the abdominal cavity image by adopting a threshold analysis method;
dividing an inner contour of the abdominal wall region according to the outer contour by adopting a dynamic contour algorithm;
optimizing the inner contour by adopting a scatter-point contour algorithm;
and determining the abdominal wall area according to the outer contour and the optimized inner contour.
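The threshold-analysis stage of the abdominal wall segmentation above can be sketched on a toy CT-like slice. This is a minimal illustration under assumed Hounsfield-unit values: pixels above an assumed air/tissue threshold are taken as body, and the outer contour is the set of body pixels touching a non-body 4-neighbor. The dynamic-contour and scatter-point refinement stages are not reproduced.

```python
import numpy as np

AIR_HU = -1000.0
TISSUE_THRESHOLD = -300.0   # assumed HU cutoff separating air from tissue

def body_mask(slice_hu: np.ndarray) -> np.ndarray:
    # Simple threshold analysis: anything denser than air is "body".
    return slice_hu > TISSUE_THRESHOLD

def outer_contour(mask: np.ndarray) -> np.ndarray:
    """Body pixels with at least one 4-neighbor outside the body."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

slice_hu = np.full((5, 5), AIR_HU)
slice_hu[1:4, 1:4] = 40.0    # a 3x3 block of soft tissue
contour = outer_contour(body_mask(slice_hu))
```

On this toy slice, the contour is the eight border pixels of the tissue block; in practice the threshold would be tuned to the imaging modality.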
In one embodiment, before the fusing the abdominal muscle region and the abdominal wall region to obtain the target region, the method further includes:
optimizing the abdominal muscle region;
the fusing the abdominal muscle area and the abdominal wall area to obtain the target area comprises:
and fusing the optimized abdominal muscle area and the abdominal wall area to obtain the target area.
In one embodiment, the optimizing the abdominal muscle region includes:
acquiring a first sub-region in the abdominal cavity image;
acquiring the relative position of any pixel point in the abdominal muscle region and any pixel point in the first sub-region;
and removing the pixel points of which the relative positions do not accord with the preset conditions in the abdominal muscle area.
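The relative-position cleanup above can be sketched as follows. The concrete preset condition is not specified in this embodiment, so the condition used here, a maximum Euclidean distance to the nearest pixel of the first sub-region, is an assumed example only.

```python
import math

def filter_by_relative_position(muscle_pixels, subregion_pixels, max_dist=4.0):
    """Keep muscle pixels whose nearest sub-region pixel is within max_dist.

    max_dist stands in for the embodiment's unspecified preset condition.
    """
    kept = []
    for (r, c) in muscle_pixels:
        nearest = min(math.hypot(r - sr, c - sc)
                      for (sr, sc) in subregion_pixels)
        if nearest <= max_dist:      # assumed preset condition
            kept.append((r, c))
    return kept

muscle = [(0, 0), (2, 2), (10, 10)]     # candidate abdominal-muscle pixels
subregion = [(1, 1), (2, 3)]            # first sub-region (e.g. a landmark)
cleaned = filter_by_relative_position(muscle, subregion)
```

Here the outlying pixel (10, 10) is removed because it lies too far from the first sub-region.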
In one embodiment, before the acquiring the first sub-region in the abdominal cavity image, the method includes:
identifying the abdominal cavity image and determining a characteristic part;
and taking the area where the characteristic part is positioned as a first subregion.
In one embodiment, the optimizing the abdominal muscle region includes:
acquiring pixel points in a preset range in the abdominal cavity image;
and removing the pixel points in the abdominal muscle area which are overlapped with the pixel points in the preset range.
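The overlap-removal step above reduces to a set difference between the muscle-region pixel set and the pixels of the preset range; the concrete range below is an assumption for illustration.

```python
# Abdominal-muscle pixels as (row, col) coordinates.
muscle_region = {(3, 4), (3, 5), (4, 4), (8, 9)}

# Assumed preset range: a rectangular block of the image.
preset_range = {(r, c) for r in range(0, 6) for c in range(0, 6)}

# Remove muscle pixels that coincide with pixels in the preset range.
optimized = muscle_region - preset_range
```

Only the pixel outside the preset range survives the cleanup.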
In one embodiment, the performing a simulated pneumoperitoneum fit on the abdominal cavity image according to the target region to determine a simulated pneumoperitoneum region includes:
determining a deformation range according to the target area and the abdominal cavity image;
deforming the inner contour of the target area in the deformation range by adopting a deformation fitting algorithm;
and determining the deformed outer contour on the basis of the deformed inner contour according to the corresponding relation between the pixel points of the inner contour and the pixel points of the outer contour in the target area, so as to obtain the simulated pneumoperitoneum area.
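The two-stage deformation above can be sketched as follows. A simple cosine-free sine bulge stands in for the deformation fitting algorithm (which the embodiment does not specify), and each outer-contour point is moved by the same displacement as its index-corresponding inner point, so the wall thickness is preserved.

```python
import math

def deform(inner, outer, amplitude=3.0):
    """inner/outer: lists of (x, y) points with index-wise correspondence.

    The sine bulge is an assumed stand-in for the deformation fitting
    algorithm; displacement is zero at both contour endpoints.
    """
    deformed_inner, deformed_outer = [], []
    n = len(inner)
    for i, ((ix, iy), (ox, oy)) in enumerate(zip(inner, outer)):
        bulge = amplitude * math.sin(math.pi * i / (n - 1))
        deformed_inner.append((ix, iy - bulge))
        # Same displacement applied via the inner/outer correspondence.
        deformed_outer.append((ox, oy - bulge))
    return deformed_inner, deformed_outer

inner = [(x, 10.0) for x in range(5)]   # flat inner contour
outer = [(x, 12.0) for x in range(5)]   # flat outer contour, 2 px away
d_in, d_out = deform(inner, outer)
```

After deformation the midpoint has moved outward by the full amplitude while the inner-to-outer spacing stays constant everywhere.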
In one embodiment, the determining a deformation range according to the target region and the abdominal cavity image includes:
acquiring a second sub-region in the abdominal cavity image;
and determining the deformation range according to the positions of the pixel points in the target region and the positions of the pixel points in the second sub-region.
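One way to realize the deformation-range computation above is sketched below. The row-band formulation, taking the range as the rows between the target region and a second sub-region (e.g. organs the deformation must not enter), is an assumption for illustration, not the patent's specific rule.

```python
def deformation_range(target_pixels, subregion_pixels):
    """Rows through which the contour may deform.

    Assumed rule: from the first row of the target region up to, but not
    including, the first row of the second sub-region.
    """
    top = min(r for r, _ in target_pixels)
    bottom = min(r for r, _ in subregion_pixels)
    return top, bottom - 1

target = [(2, 0), (3, 1), (4, 2)]    # target-region pixel positions
organs = [(9, 0), (10, 3)]           # second sub-region pixel positions
rng = deformation_range(target, organs)
```

The resulting band stops one row short of the second sub-region, so fitting never pushes the wall into it.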
In one embodiment, before determining a simulated pneumoperitoneum image according to the target region, the simulated pneumoperitoneum region, and the abdominal cavity image, the method further includes:
receiving an adjusting instruction;
adjusting the target area and/or the simulated pneumoperitoneum area according to the adjustment instruction;
determining a simulated pneumoperitoneum image according to the target region, the simulated pneumoperitoneum region and the abdominal cavity image, including:
and determining the simulated pneumoperitoneum image according to the adjusted target area and/or the adjusted simulated pneumoperitoneum area.
In one embodiment, the determining a simulated pneumoperitoneum image from the target region, the simulated pneumoperitoneum region, and the abdominal cavity image comprises:
and replacing the target area in the abdominal cavity image with the simulated pneumoperitoneum area to obtain the simulated pneumoperitoneum image.
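The replacement step above can be sketched on toy 1-D "images": pixels of the target area are removed by setting them to an assumed removal value of -1000 (matching the preset value mentioned in a later embodiment), and the simulated pneumoperitoneum area's pixel values are then written in.

```python
import numpy as np

AIR_HU = -1000.0   # assumed removal value for "emptied" pixels

def replace_region(image, target_mask, pneumo_mask, pneumo_values):
    out = image.astype(float).copy()
    out[target_mask] = AIR_HU          # remove the original target area
    out[pneumo_mask] = pneumo_values   # paste the simulated area
    return out

image = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
target = np.array([False, True, True, False, False])
pneumo = np.array([False, False, True, True, False])
result = replace_region(image, target, pneumo, np.array([25.0, 35.0]))
```

Pixels covered only by the target mask are emptied, while pixels in the simulated area (including the shifted portion) take the new values.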
In a second aspect, the present application further provides a simulated pneumoperitoneum image generation apparatus, including:
the acquisition module is used for acquiring an abdominal cavity image;
the first determination module is used for determining a target area according to the abdominal cavity image;
the second determination module is used for performing simulated pneumoperitoneum fitting on the abdominal cavity image according to the target area to determine a simulated pneumoperitoneum area;
and the third determining module is used for determining a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image.
In a third aspect, the application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the simulated pneumoperitoneum image generation method of any of the above embodiments when the processor executes the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the simulated pneumoperitoneum image generation method according to any of the above embodiments.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the simulated pneumoperitoneum image generation method of any of the above embodiments.
According to the simulated pneumoperitoneum image generation method, apparatus, computer device, storage medium and computer program product described above, the target area is automatically delineated from the abdominal cavity image, which saves manual annotation time and enhances the degree of automation of the simulated pneumoperitoneum image generation process. Moreover, because simulated pneumoperitoneum fitting obtains the deformed, post-pneumoperitoneum abdominal cavity state directly on the image, it greatly reduces image generation time compared with generating the abdominal cavity state image through numerical simulation.
Drawings
FIG. 1 is a diagram of an exemplary application environment for a simulated pneumoperitoneum image generation method;
FIG. 2 is a diagram of an application environment of a simulated pneumoperitoneum image generation method in an embodiment;
FIG. 3 is a schematic flow chart diagram illustrating a simulated pneumoperitoneum image generation method in one embodiment;
FIG. 4 is a schematic flow chart illustrating the target region determination step in the simulated pneumoperitoneum image generation method in one embodiment;
FIG. 5 is a schematic view of a process of a target area determination step in the simulated pneumoperitoneum image generation method in an embodiment;
FIG. 6 is a schematic diagram showing a process of the abdominal muscle region determination step in the simulated pneumoperitoneum image generation method according to an embodiment;
fig. 7 is a schematic processing diagram of the abdominal wall region determination step in the simulated pneumoperitoneum image generation method in one embodiment;
FIG. 8 is a schematic flow chart diagram illustrating the target region determination step in the simulated pneumoperitoneum image generation method in one embodiment;
FIG. 9 is a schematic flow chart diagram illustrating the simulated pneumoperitoneum region determination step in the simulated pneumoperitoneum image generation method in one embodiment;
FIG. 10 is a schematic flow chart illustrating the step of obtaining the deformed inner contour of the target region in the simulated pneumoperitoneum image generation method according to an embodiment;
FIG. 11 is a flowchart illustrating an outer contour obtaining step after deformation of a target area in the simulated pneumoperitoneum image generation method according to an embodiment;
FIG. 12 is a schematic flow chart diagram illustrating a simulated pneumoperitoneum image generation method in one embodiment;
FIG. 13 is a schematic diagram showing the structure of a simulated pneumoperitoneum image generating apparatus in one embodiment;
FIG. 14 is a block diagram of a first determining module in the simulated pneumoperitoneum image generating apparatus according to an embodiment;
FIG. 15 is a block diagram that illustrates a first determination module of the apparatus for generating a simulated pneumoperitoneum image in an exemplary embodiment;
FIG. 16 is a block diagram illustrating a second determining module of the simulated pneumoperitoneum image generating apparatus according to an embodiment;
FIG. 17 is a schematic diagram showing the structure of a simulated pneumoperitoneum image generation apparatus in one embodiment;
FIG. 18 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The simulated pneumoperitoneum image generation method provided by the embodiment of the application can be applied to the application environment shown in fig. 1-2. The terminal 102 communicates with the server 104 and the medical imaging device 106 through a network.
For example, the simulated pneumoperitoneum image generation method is applied to the terminal 102. The terminal 102 first obtains an abdominal cavity image through the medical imaging device 106; determines a target area according to the abdominal cavity image; performs simulated pneumoperitoneum fitting on the abdominal cavity image according to the target area to determine a simulated pneumoperitoneum area; and finally determines a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image. The terminal 102 then sends the simulated pneumoperitoneum image to the server 104, and the server 104 stores it in a data storage system. The terminal 102 may be, but is not limited to, a desktop computer. The medical imaging device 106 includes, but is not limited to, various imaging devices such as a computed tomography (CT) device, which scans cross-sections of a part of the human body one by one using a precisely collimated X-ray beam together with a highly sensitive detector, and from whose scans a precise three-dimensional position image of, for example, a tumor can be reconstructed; and a magnetic resonance apparatus, a tomographic imaging modality that obtains electromagnetic signals from the human body using the magnetic resonance phenomenon and reconstructs an image of the body.
For another example, the simulated pneumoperitoneum image generation method is applied to the server 104, and the server 104 first obtains an abdominal cavity image from the medical imaging device 106 through the terminal 102; determining a target area according to the abdominal cavity image, performing simulated pneumoperitoneum fitting on the abdominal cavity image according to the target area, and determining a simulated pneumoperitoneum area; finally, a simulated pneumoperitoneum image is determined according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image, and then the server 104 stores the simulated pneumoperitoneum image in the data storage system. It will be appreciated that the data storage system may be a stand-alone storage device, or the data storage system may be located on a server, or the data storage system may be located on another terminal.
In one embodiment, a simulated pneumoperitoneum image generation method is provided, which is exemplified by the application of the simulated pneumoperitoneum image generation method to a terminal. As shown in fig. 3, the simulated pneumoperitoneum image generation method includes:
step 202, obtaining an abdominal cavity image.
The abdominal cavity image is an image showing the actual abdominal cavity environment of the human body. It may be a computed tomography (CT) image or a magnetic resonance imaging (MRI) image, and is used to assist medical staff in viewing the actual condition of the human abdominal cavity.
As an example, the terminal in this embodiment acquires a CT image of the abdominal cavity of the human body through a CT device as an abdominal cavity image.
And step 204, determining a target area according to the abdominal cavity image.
Pneumoperitoneum refers to a treatment by which a medical gas such as carbon dioxide is injected through an endoscope port to inflate and enlarge the abdominal cavity, forming the basic environment for endoscopic surgery. The target area refers to the area of the abdominal cavity that can change when pneumoperitoneum occurs in the human body, and usually consists of the abdominal muscles and the abdominal wall. When pneumoperitoneum is applied to the human abdominal cavity, the abdominal muscles and the abdominal wall inflate and enlarge.
In this embodiment, the terminal locates a region that can be deformed due to pneumoperitoneum on the abdominal cavity image, and determines the region as a target region.
And step 206, performing simulated pneumoperitoneum fitting on the abdominal cavity image according to the target area, and determining a simulated pneumoperitoneum area.
Simulated pneumoperitoneum fitting refers to simulating pneumoperitoneum on the human abdominal cavity; the simulated pneumoperitoneum area is the post-pneumoperitoneum area obtained by simulating pneumoperitoneum on the area of the abdominal cavity that can change due to pneumoperitoneum.
In this embodiment, the terminal performs simulated pneumoperitoneum fitting on the target region of the abdominal cavity image to obtain an image of the target region after pneumoperitoneum occurs, and the image is used as the simulated pneumoperitoneum region.
And step 208, determining a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image.
The simulated pneumoperitoneum image refers to an image in which a region that can be changed due to pneumoperitoneum is replaced with a region after pneumoperitoneum change.
Specifically, the terminal removes all pixel points contained in the target area from the abdominal cavity image, and replaces all pixel points contained in the simulated pneumoperitoneum area to determine the simulated pneumoperitoneum image.
The simulated pneumoperitoneum image may be a 2D image, and as an example, the simulated pneumoperitoneum image is a planar image obtained after simulated pneumoperitoneum fitting is performed on the basis of each abdominal cavity image.
As an example, after the simulated pneumoperitoneum region in the abdominal cavity image is obtained in step 206, the target region in the abdominal cavity image is removed and replaced with the simulated pneumoperitoneum region to obtain a simulated image corresponding to the current abdominal cavity image. The simulated images corresponding to all planar abdominal cavity images (i.e., each tomographic slice) of the human abdominal cavity are then merged, and the pixel points of the virtual pneumoperitoneum region in each simulated image are processed by linear interpolation, connected-domain analysis, threshold analysis and the like to obtain a 3D image of the human abdominal cavity after virtual pneumoperitoneum simulation. During acquisition, a time stamp can be attached to each abdominal cavity image in acquisition order, and during splicing of the simulated images corresponding to all planar abdominal cavity images, the images are re-spliced according to the chronological order of the time stamps.
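The slice-merging step can be sketched as follows. Per-slice simulated images are ordered by their acquisition timestamps and stacked into a volume, with one linearly interpolated slice inserted between each adjacent pair as a simple stand-in for the interpolation and connected-domain/threshold post-processing; the timestamp scheme and midpoint interpolation are assumptions.

```python
import numpy as np

def merge_slices(stamped_slices):
    """stamped_slices: list of (timestamp, 2-D array) pairs."""
    # Re-splice slices in chronological order of their timestamps.
    ordered = [s for _, s in sorted(stamped_slices, key=lambda t: t[0])]
    volume = [ordered[0]]
    for prev, nxt in zip(ordered, ordered[1:]):
        # Linear interpolation midway between adjacent slices.
        volume.append((prev + nxt) / 2.0)
        volume.append(nxt)
    return np.stack(volume)

a = np.zeros((2, 2))
b = np.full((2, 2), 4.0)
vol = merge_slices([(2, b), (1, a)])   # deliberately out-of-order timestamps
```

The out-of-order input is re-spliced by timestamp, and the middle slice of the 3-slice volume is the interpolated average.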
It should be understood that removing the target region in the abdominal cavity image, as mentioned in this embodiment, refers to adjusting the pixel values of all pixel points contained in the target region to a preset value, which may be, for example, -1000.
In this embodiment, the terminal replaces the target region in the abdominal cavity image with the simulated pneumoperitoneum region to obtain a simulated pneumoperitoneum image.
In the simulated pneumoperitoneum image generation method, the terminal acquires, through the image acquisition device, an abdominal cavity image showing the actual human abdominal cavity environment; determines the area of the abdominal cavity image that can change during pneumoperitoneum as the target area; performs simulated pneumoperitoneum fitting on the target area and takes the resulting post-pneumoperitoneum shape as the simulated pneumoperitoneum area; and then removes all pixel points contained in the target area from the abdominal cavity image and replaces them with all pixel points contained in the simulated pneumoperitoneum area to obtain the simulated pneumoperitoneum image. Because the terminal automatically delineates the target area from the abdominal cavity image, manual annotation time is saved and the degree of automation of simulated pneumoperitoneum image generation is enhanced; the deformed, post-pneumoperitoneum abdominal cavity state is acquired directly on the 2D planar image by the simulated pneumoperitoneum fitting method. In addition, a 3D simulated pneumoperitoneum image can be generated from the 2D pneumoperitoneum-deformed simulated images, realizing full automation of the processing flow.
As shown in fig. 4, in some optional embodiments, the step 204 of determining the target region according to the abdominal cavity image includes: step 2042, performing target recognition on the abdominal cavity image by adopting at least one deep neural network to determine an abdominal muscle area; step 2044, performing image segmentation on the abdominal cavity image to determine an abdominal wall area; and step 2046, fusing the abdominal muscle area and the abdominal wall area to obtain the target area.
As shown in fig. 5, which is a schematic processing flow diagram for determining the target area in this embodiment, the deep neural network may adopt any one of a feedforward neural network, a long short-term memory network, a generative adversarial network, a recurrent neural network or a convolutional neural network. The at least one deep neural network adopted in this embodiment may use different types of neural networks, or may use a plurality of neural networks of the same type trained on different sample data; for example, the at least one deep neural network may consist of convolutional neural networks trained respectively on male, female, southern, northern, adult and juvenile abdominal cavity sample images.
The terminal inputs the abdominal cavity image into the at least one deep neural network simultaneously to obtain at least one region prediction result, and determines the abdominal muscle region from these results. With this arrangement, the present embodiment determines the abdominal muscle region by collective judgment, thereby improving the recognition accuracy of the abdominal muscle region.
Further, image segmentation processing is performed on the abdominal cavity image to determine an abdominal wall region.
Furthermore, the abdominal wall area is adjacent to the abdominal muscle area, and as an example, the two areas are fused by means of connected component analysis or threshold analysis to obtain the target area.
In this embodiment, the terminal performs region identification on the same abdominal cavity image using at least one deep neural network to obtain a plurality of region prediction results, and determines the abdominal muscle region by referring to these results, so that the abdominal muscle region can be located more accurately. The abdominal wall region is then segmented from the abdominal cavity image by an image segmentation method, and finally the abdominal muscle region and the abdominal wall region are fused to obtain the target region. Determining the position of the abdominal muscle region by collective judgment, combining the prediction results of multiple deep neural networks, makes the region localization more accurate.
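The fusion step can be sketched as a union of the two pixel masks followed by a connected-component check that the adjacent abdominal-muscle and abdominal-wall masks have merged into one region. A simple 4-neighbor flood fill stands in for the connected-domain analysis mentioned above.

```python
def connected_components(pixels):
    """Group (row, col) pixels into 4-connected components via flood fill."""
    pixels = set(pixels)
    comps = []
    while pixels:
        stack = [pixels.pop()]
        comp = set(stack)
        while stack:
            r, c = stack.pop()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in pixels:
                    pixels.remove(nb)
                    comp.add(nb)
                    stack.append(nb)
        comps.append(comp)
    return comps

muscle = {(0, 0), (0, 1)}        # abdominal-muscle mask
wall = {(1, 1), (1, 2)}          # abdominal-wall mask, adjacent to it
target = muscle | wall           # fused target region
components = connected_components(target)
```

Because the two masks touch, the fused target region forms a single connected component.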
In some optional embodiments, the step 2042 of performing target recognition on the abdominal cavity image by using at least one deep neural network to determine the abdominal muscle region includes: inputting the abdominal cavity image into at least one deep neural network, and performing target identification to obtain a corresponding prediction region; and determining the abdominal muscle area according to the prediction area and the preset weight corresponding to each deep neural network.
As shown in fig. 6, the terminal inputs the abdominal cavity image into a plurality of deep neural networks at the same time, and obtains a prediction region corresponding to each pixel point in the abdominal cavity image, where the prediction region may be an abdominal muscle region or a non-abdominal muscle region.
Further, for each pixel point in the abdominal cavity image, the prediction regions output by the plurality of deep neural networks are obtained; the final prediction region of each pixel point is determined according to the preset weight of each deep neural network, and all pixel points whose prediction region is the abdominal muscle region are grouped together as the abdominal muscle region, completing its determination.
In some optional embodiments, determining the abdominal muscle region according to the prediction region and the preset weight corresponding to each deep neural network includes: determining the probability value of the prediction region of each pixel point in the abdominal cavity image according to the prediction region corresponding to each deep neural network and the corresponding preset weight; and according to the probability value of the predicted region of each pixel point, segmenting the abdominal muscle region from the abdominal cavity image.
Specifically, the terminal assigns a corresponding weight to each of the at least one deep neural networks in advance. When the abdominal cavity image is input into all the deep neural networks in step 204, a plurality of corresponding region prediction results are obtained, each of which includes, for every pixel point in the abdominal cavity image, a prediction region and its corresponding prediction probability. The final prediction region probability value of each pixel point is then calculated from the weights of the deep neural networks corresponding to the region prediction results, using the following formula:

p = Σ_{i=1}^{N} w_i · p_i

where p represents the prediction region probability value of the current pixel point in the abdominal cavity image; N denotes the total number of deep neural networks; p_i is the prediction region probability value of the current pixel point output by the i-th deep neural network; and w_i is the weight of the i-th deep neural network.
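Assuming each network outputs a per-pixel probability map as a NumPy array, the weighted fusion above can be sketched as follows (the function name `fuse_predictions` and the assumption that the weights sum to 1 are illustrative, not part of the patent):

```python
import numpy as np

def fuse_predictions(prob_maps, weights):
    # Weighted fusion p = sum_i w_i * p_i of the per-pixel probability maps
    # output by the deep neural networks (weights assumed to sum to 1).
    fused = np.zeros_like(prob_maps[0], dtype=float)
    for p_i, w_i in zip(prob_maps, weights):
        fused += w_i * p_i
    return fused

# Three 1x1 "probability maps", one per network, matching the worked
# example with weights 0.1, 0.5, and 0.4:
maps = [np.array([[0.50]]), np.array([[0.05]]), np.array([[0.70]])]
p = fuse_predictions(maps, [0.1, 0.5, 0.4])
# p[0, 0] is 0.1*0.50 + 0.5*0.05 + 0.4*0.70 = 0.355
```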
Further, it is judged whether the final prediction region probability value of the abdominal muscle region for each pixel point reaches a preset probability threshold; if it does, the pixel point is considered to belong to the abdominal muscle region. For example, the probability threshold may be 50%, meaning that if the prediction region probability value of the abdominal muscle region is greater than 50%, the pixel point is considered to belong to the abdominal muscle region.
As an example, suppose there are three deep neural networks, where the weight of the first is 0.1, the weight of the second is 0.5, and the weight of the third is 0.4. For a pixel point in the abdominal cavity image, the region prediction result output by the first network gives a 50% probability that the pixel point belongs to the abdominal muscle region, the second network gives 5%, and the third network gives 70%. Using the above formula, the probability that the current pixel point belongs to the abdominal muscle region is p = 0.1 × 50% + 0.5 × 5% + 0.4 × 70% = 35.5%.
As an example, the abdominal cavity image may be input into all the deep neural networks to obtain a prediction region probability vector P = {P_1, P_2, P_3, P_4} for one pixel point, with P_1 = 0.3, P_2 = 0.4, P_3 = 0.1, and P_4 = 0.2, where P_1 represents the probability that the current pixel point belongs to the abdominal muscle region, P_2 the probability that it belongs to the liver region, P_3 the probability that it belongs to the skeleton region, and P_4 the probability that it belongs to other regions. If the probability threshold preset by the terminal is 50%, P_1 does not reach the threshold and the current pixel point is not considered to belong to the abdominal muscle region; if the preset threshold is 20%, P_1 reaches the threshold and the current pixel point is considered to belong to the abdominal muscle region.
Specifically, the region prediction result of the deep neural network may divide the abdominal cavity image into only an abdominal muscle region and other regions; when the combined probability of a pixel point, after the region prediction results output by all the deep neural networks are synthesized, is smaller than the preset probability threshold, that pixel point is determined to belong to the other regions.
In the embodiment, the terminal fuses the output results of the multiple deep neural networks, collective judgment is performed through the multiple regional prediction results, and the prediction region corresponding to each pixel point in the abdominal cavity image is determined, so that the regional segmentation precision of the terminal is improved, and the division of the abdominal muscle region is more accurate.
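The thresholding of the fused probability map can be sketched as below. Whether the comparison is strict or not is a judgment call (the text says both "reaches" and "greater than"); `>=` is used here, and the function name is an illustrative assumption:

```python
import numpy as np

def segment_muscle(fused_prob, threshold=0.5):
    # Pixel points whose fused abdominal-muscle probability reaches the
    # preset threshold are labelled as abdominal muscle region (True);
    # all others as non-muscle (False).
    return fused_prob >= threshold

# A 2x2 fused probability map; only 0.8 and 0.5 reach the 50% threshold.
mask = segment_muscle(np.array([[0.355, 0.8], [0.2, 0.5]]))
```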
As shown in fig. 7, in some optional embodiments, the image segmentation is performed on the abdominal cavity image to determine an abdominal wall region in step 2044, which includes: determining the outer contour of the abdominal wall region from the abdominal cavity image by adopting a threshold analysis method; dividing an inner contour of the abdominal wall area according to the outer contour by adopting a dynamic contour algorithm; optimizing the inner contour by adopting a scatter-point contour algorithm; and determining the abdominal wall area according to the outer contour and the optimized inner contour.
Specifically, based on the difference in gray value between the pixel points of the outer contour of the abdominal wall region to be extracted and the background pixel points in the abdominal cavity image, the terminal sets a threshold value that divides the gray levels into several classes, thereby separating the outer contour from the background.
Further, starting from the outer contour, which is a continuous closed curve, the terminal defines an energy function using information such as the gray scale and gradient of the abdominal cavity image pixels, so that the contour moves in the direction of decreasing edge energy until the edge energy reaches a minimum, yielding the inner contour of the abdominal wall region.
Further, on the basis of the inner contour of the abdominal wall region, for every operation point contained in the inner contour, a circle of radius R is rolled around the point, and the optimized inner contour is generated according to the principle of the scatter-point contour algorithm.
And further, the terminal marks out the outer contour and the optimized inner contour in the abdominal cavity image, and sets all pixel points between the outer contour and the optimized inner contour as an abdominal wall area.
In this embodiment, the terminal separates the background in the abdominal cavity image from the human abdominal cavity as a whole by threshold analysis and takes the periphery of the abdominal cavity as the outer contour. On the basis of the outer contour curve, it then segments the inner contour curve using a dynamic contour algorithm and optimizes that curve using a scatter-point contour algorithm. The set of pixel points between the outer contour curve and the optimized inner contour curve is taken as the abdominal wall region.
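The threshold-analysis step can be sketched in NumPy: threshold the gray values to separate the body from the background, then keep the body pixels that touch the background as the outer contour. The function name and the 4-neighbour rule are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def outer_contour(image, threshold):
    # Threshold analysis: pixels at or above the threshold are body,
    # the rest background.
    body = image >= threshold
    padded = np.pad(body, 1, constant_values=False)
    # A body pixel is on the outer contour if any of its 4-neighbours
    # is background.
    up    = padded[:-2, 1:-1]
    down  = padded[2:,  1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    return body & ~(up & down & left & right)

# Toy 4x4 "image": a 2x2 body block surrounded by background, so every
# body pixel borders the background and lies on the contour.
img = np.array([[0, 0, 0, 0],
                [0, 9, 9, 0],
                [0, 9, 9, 0],
                [0, 0, 0, 0]])
c = outer_contour(img, threshold=5)
```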
As shown in fig. 8, in some optional embodiments, step 2046 further includes: step 2045, the abdominal muscle region is optimized.
Step 2046 includes: and fusing the optimized abdominal muscle area and the abdominal wall area to obtain a target area.
In this embodiment, the terminal optimizes the abdominal muscle region obtained in step 2042 again to prevent an obvious error from occurring in a pixel point included in the abdominal muscle region.
In some alternative embodiments, step 2045 includes: acquiring a first sub-region in an abdominal cavity image; acquiring the relative position of any pixel point in the abdominal muscle area and any pixel point in the first sub-area; and removing the pixel points of which the relative positions in the abdominal muscle area do not accord with the preset condition.
As an example, when the abdominal cavity image is a CT image, it includes images of the spine and ribs of the human body. In step 2045, for each pixel point in the abdominal muscle region, the relative position between that pixel point and any pixel point in the skeletal region is obtained.
The preset condition may be that the pixel point of the abdominal muscle region is located in a preset direction of the pixel point of the first sub-region, and the preset direction may be a direction away from the central point of the abdominal cavity image.
And when the relative position indicates that the pixel points in the abdominal muscle region are not located in the preset direction of the pixel points in the first sub-region, removing the pixel points in the abdominal muscle region which do not accord with the preset condition.
In this embodiment, based on common anatomical knowledge, the terminal removes abdominal-muscle-region pixel points located on the inner side of the human skeleton region, thereby optimizing the abdominal muscle region.
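The pruning rule can be illustrated with a toy sketch in polar coordinates about the image centre: a muscle pixel is kept only if it lies farther from the centre than every bone pixel in roughly the same direction (i.e. it is "away from the centre" relative to the skeleton). The function, the angular tolerance, and the point-list representation are all illustrative assumptions:

```python
import math

def prune_muscle(muscle_pts, bone_pts, center, ang_tol=0.2):
    cx, cy = center

    def polar(p):
        dx, dy = p[0] - cx, p[1] - cy
        return math.hypot(dx, dy), math.atan2(dy, dx)

    kept = []
    for m in muscle_pts:
        r_m, a_m = polar(m)
        # Bone pixels lying in roughly the same direction from the centre
        # (angle wrap-around at +/-pi is ignored in this toy sketch).
        same_dir = [polar(b)[0] for b in bone_pts
                    if abs(polar(b)[1] - a_m) < ang_tol]
        # Keep the muscle pixel only if it is farther from the centre
        # than every such bone pixel.
        if all(r_m > r_b for r_b in same_dir):
            kept.append(m)
    return kept

center = (0.0, 0.0)
bones = [(5.0, 0.0)]
muscles = [(8.0, 0.0), (3.0, 0.0), (0.0, 6.0)]
kept = prune_muscle(muscles, bones, center)
# (8, 0) lies outside the bone and is kept; (3, 0) lies inside and is
# removed; (0, 6) has no bone in its direction and is kept.
```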
In some optional embodiments, before acquiring the first sub-region in the abdominal cavity image, the method includes: identifying the abdominal cavity image and determining characteristic parts; and taking the area where the characteristic part is positioned as a first subregion.
Specifically, the terminal identifies the abdominal cavity image and determines the characteristic part using, for example, a deep neural network, which may be any of a feed-forward neural network, a long short-term memory network, a generative adversarial network, a recurrent neural network, or a convolutional neural network. The terminal inputs the abdominal cavity image into the deep neural network to obtain the recognition result of the characteristic part, and takes the set of all pixel points contained in the characteristic part as the first sub-region.
For another example, the terminal selects all the pixel points with the gray values smaller than the preset gray threshold value as the feature portion according to the gray values of all the pixel points in the abdominal cavity image, and takes the set of all the pixel points with the gray values smaller than the preset gray threshold value as the first sub-region.
In this embodiment, the terminal divides the characteristic region from the abdominal cavity image to serve as a first sub-region for serving as a standard for optimizing the abdominal muscle region, so that the abdominal muscle region is more accurately located.
In some alternative embodiments, step 2045 comprises: acquiring pixel points in a preset range in the abdominal cavity image; and removing the pixel points which coincide with the pixel points in the preset range in the abdominal muscle area.
Specifically, the preset range is, for example, the half of the abdominal cavity image on the side where the bone-region pixel points are more concentrated, which represents the side and back of the human body; it is used to remove errors caused by the muscles of the human back.
In this embodiment, the terminal sets the side of the abdominal cavity image where the spine is located as the preset range, and removes the abdominal-muscle-region pixel points falling within it, so as to prevent errors caused by muscle pixel points on the side and back of the human body.
As shown in fig. 9, in some alternative embodiments, step 206 includes: step 2062, determining a deformation range according to the target area and the abdominal cavity image; step 2064, deforming the inner contour of the target area in the deformation range by adopting a deformation fitting algorithm; step 2066, determining the deformed outer contour on the basis of the deformed inner contour according to the corresponding relation between the pixel points of the inner contour and the pixel points of the outer contour in the target area, and obtaining the simulated pneumoperitoneum area.
The terminal determines, from the abdominal cavity image and the positional relationship between the abdominal muscle region and the abdominal wall region, the abdominal cavity range that can deform with pneumoperitoneum, and then deforms the inner contour of the target region within that deformation range by deformation fitting; linear interpolation and spline curve interpolation may be used in the fitting process.
As shown in fig. 10, in deforming the inner contour of the target region within the deformation range, a preset deformation mode may be applied to the inner contour. For example, a coordinate system may be established with the central point of the abdominal cavity image as the origin, the vertical direction as the y-axis, and the horizontal direction as the x-axis. The preset deformation mode in the terminal then includes, for example, a correspondence between the coordinate position before deformation and the coordinate position after deformation of any point on the inner contour curve of the target region; this correspondence may be embodied by a mapping formula whose specific content can be changed according to user settings.
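Since the mapping formula is left to user settings, the following is a purely hypothetical example of such a mapping: an inner-contour point keeps its abscissa and rises by an amount that decays toward the sides of the abdomen. Both the function and the Gaussian falloff are assumptions for illustration only:

```python
import math

def preset_deformation(x, y, lift=1.5):
    # Hypothetical mapping (x, y) -> (x', y'): the abscissa is unchanged
    # and the ordinate rises by lift * exp(-x^2 / 50), so the lift is
    # largest at the midline (x = 0) and decays toward the sides.
    return x, y + lift * math.exp(-x * x / 50.0)

# At the midline the full lift of 1.5 is applied.
x1, y1 = preset_deformation(0.0, 0.0)
```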
As shown in fig. 11, after the human abdominal cavity deforms, the total area of the target region should remain constant. Therefore, after the inner contour of the target region is deformed, the outer contour should change accordingly so that the total number of pixel points between the inner and outer contours, that is, the area of the target region in the abdominal cavity image, is unchanged.
For example, a coordinate system may be established with the central point of the abdominal cavity image as the origin, the vertical direction as the y-axis, and the horizontal direction as the x-axis, and the coordinate positions of all pixel points on the inner contour curve before deformation, on the outer contour curve before deformation, and on the inner contour curve after deformation may be obtained. First, the initial area A0 of the target region between the inner and outer contours before deformation is calculated from the coordinate positions of all points on the two contour curves before deformation. Then, for a pixel point i on the inner contour curve before deformation, the coordinate position of the pixel point with the same abscissa on the deformed inner contour curve is taken as the deformed position of pixel point i, and the lifting distance Δ_i y of pixel point i in the y-axis direction before and after deformation is calculated. Next, each pixel point j on the outer contour curve before deformation is tentatively lifted by the distance Δ_i y in the ordinate direction to obtain a lifted outer contour curve, and the area A1 of the target region between the deformed inner contour and the lifted outer contour is calculated from the coordinate positions of all pixel points on those two curves. The ratio r of the initial area A0 to the area A1 is then obtained, and finally pixel point j is actually lifted by r·Δ_i y in the ordinate direction, giving the coordinate positions of all pixel points on the outer contour curve of the finally deformed target region. In this way, the area of the target region between the inner and outer contours remains constant before and after deformation.
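Under the reading that r = A0/A1, the area-preserving step can be sketched in one dimension, treating each contour as one y-value per x column so that the area between contours is a simple column sum. The column representation and the function name are assumptions:

```python
import numpy as np

def area_preserving_lift(inner0, outer0, inner1):
    # inner0, outer0: per-column y-coordinates of the inner and outer
    # contour before deformation; inner1: the deformed inner contour.
    a0 = np.sum(outer0 - inner0)   # initial area A0 between the contours
    dy = inner1 - inner0           # per-column lift of the inner contour
    trial = outer0 + dy            # tentatively lift the outer contour by dy
    a1 = np.sum(trial - inner1)    # trial area A1
    r = a0 / a1                    # correction ratio r = A0 / A1
    return outer0 + r * dy         # actual lift is r * dy per column

inner0 = np.array([0.0, 0.0, 0.0])
outer0 = np.array([2.0, 2.0, 2.0])
inner1 = np.array([1.0, 1.0, 1.0])  # inner contour lifted uniformly by 1
outer1 = area_preserving_lift(inner0, outer0, inner1)
# In this symmetric case r = 1, so the outer contour also lifts by 1 and
# the area between the contours stays at 6.
```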
In this embodiment, the terminal determines a deformation range capable of deforming from the abdominal cavity image, determines the deformed inner contour of the target region in the deformation range by using a deformation fitting algorithm, and then determines the deformed outer contour of the target region on the basis of the deformed inner contour according to a principle that the total number of pixels of the target region is unchanged. By means of the setting, the curves of the inner contour and the outer contour of the deformed target area can be obtained, and therefore the simulated pneumoperitoneum area after simulated pneumoperitoneum fitting is obtained.
In some optional embodiments, the step 2062 of determining the deformation range according to the target region and the abdominal cavity image comprises: acquiring a second sub-region in the abdominal cavity image; and determining the deformation range according to the positions of the pixel points in the target region and the positions of the pixel points in the second sub-region.
As an example, since the human skeleton does not deform, and the abdominal muscle portions connected to it do not deform either, the range of the human abdominal cavity that can change with pneumoperitoneum should be the abdominal portion not surrounded by the ribs. The second sub-region may, for example, be the human skeleton region; that is, the first and second sub-regions may be the same region. The identification and positioning of the second sub-region follows the acquisition process of the first sub-region and is not described again here.
Further, the terminal determines, for the current abdominal cavity image, a deformation range that is not surrounded by human bones.
In this embodiment, the terminal takes an abdominal cavity portion not surrounded by the human bone as a deformation range according to the position of the human bone.
As shown in fig. 12, in some optional embodiments, before step 208, further comprising: step 207, receiving an adjustment instruction; and adjusting the target area and/or the simulated pneumoperitoneum area according to the adjusting instruction.
Step 208 includes: and determining a simulated pneumoperitoneum image according to the adjusted target area and/or the adjusted simulated pneumoperitoneum area.
In this embodiment, the medical staff may view the target region segmented in step 204 and the simulated pneumoperitoneum region determined in step 206 through the interactive device, and may re-segment and/or delineate regions on the abdominal cavity image in an interactive or semi-automatic manner, or modify the target region and/or simulated pneumoperitoneum region obtained by the terminal's automatic algorithm through interactive delineation. The terminal gives the modifications made by the medical staff through the interactive device the highest priority, and uses the modified target region and/or simulated pneumoperitoneum region as the standard for generating the simulated pneumoperitoneum image.
In some alternative embodiments, step 208, determining a simulated pneumoperitoneum image from the target region, the simulated pneumoperitoneum region, and the abdominal cavity image, includes: and replacing the target area in the abdominal cavity image by the simulated pneumoperitoneum area to obtain a simulated pneumoperitoneum image.
Specifically, the terminal removes all pixel points contained in a target area in the abdominal cavity image, and adds all pixel points contained in the simulated pneumoperitoneum area to the abdominal cavity image from which the target area is removed to obtain the simulated pneumoperitoneum image.
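The replacement step can be sketched with boolean masks over the image array; the mask names, fill value, and pneumoperitoneum value below are illustrative placeholders:

```python
import numpy as np

def compose_pneumoperitoneum(image, target_mask, pneumo_mask,
                             fill_value=0, pneumo_value=255):
    out = image.copy()
    out[target_mask] = fill_value    # remove the original target region
    out[pneumo_mask] = pneumo_value  # add the simulated pneumoperitoneum region
    return out

# 2x2 toy image: the top-left pixel is the target region to remove and
# the top-right pixel is the simulated pneumoperitoneum region to add.
img = np.array([[1, 2], [3, 4]])
target = np.array([[True, False], [False, False]])
pneumo = np.array([[False, True], [False, False]])
result = compose_pneumoperitoneum(img, target, pneumo)
```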
Further, the terminal may also present the final simulated pneumoperitoneum image through the interactive device, for example, the simulated pneumoperitoneum area may also be highlighted in the presented simulated pneumoperitoneum image.
In the simulated pneumoperitoneum image generation method, a plurality of deep neural networks are used to identify the abdominal cavity image, and the abdominal muscle region is determined by referring to the plurality of region prediction results simultaneously; this collective judgment makes the region positioning more accurate. Further, the pixel points in the abdominal muscle region are optimized one by one according to the positions of the human skeleton or of the pixel points themselves, eliminating errors caused by muscle pixel points from other regions in the abdominal cavity image, so that the abdominal muscle region is positioned more accurately. The deformation range of the human abdominal cavity under pneumoperitoneum is then determined according to the position of the human skeleton; the inner contour of the target region is deformed using a deformation mode preset by the user, and the curve of the deformed outer contour is determined from the curve of the deformed inner contour on the basis that the area of the target region is unchanged, thereby determining the simulated pneumoperitoneum region. Before the simulated pneumoperitoneum image is determined, region adjustment instructions issued by the user through the interactive interface are given priority, ensuring that the simulated pneumoperitoneum image is generated according to the user's region division. Finally, the pixel points contained in the final target region are removed from the abdominal cavity image and replaced with the pixel points of the simulated pneumoperitoneum region to obtain a planar simulated pneumoperitoneum image, and a three-dimensional 3D simulated pneumoperitoneum image can be generated from a plurality of such planar simulated pneumoperitoneum images. This simulated pneumoperitoneum image generation method makes the operation of generating a virtual pneumoperitoneum simpler, reduces human error, and improves reliability.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a simulated pneumoperitoneum image generation apparatus for implementing the simulated pneumoperitoneum image generation method mentioned above. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the simulated pneumoperitoneum image generation apparatus provided below can be referred to the limitations on the simulated pneumoperitoneum image generation method in the foregoing, and details are not described here again.
In one embodiment, as shown in fig. 13, there is provided a simulated pneumoperitoneum image generation apparatus 1300 including: an obtaining module 1302, a first determining module 1304, a second determining module 1306, and a third determining module 1308, wherein: the obtaining module 1302 is configured to obtain an abdominal cavity image; the first determining module 1304 is used for determining a target region according to the abdominal cavity image; the second determining module 1306 is configured to perform simulated pneumoperitoneum fitting on the abdominal cavity image according to the target region, and determine a simulated pneumoperitoneum region; the third determining module 1308 is configured to determine a simulated pneumoperitoneum image according to the target region, the simulated pneumoperitoneum region, and the abdominal cavity image.
As shown in fig. 14, in some alternative embodiments, the first determining module 1304 includes: an identifying unit 13042, configured to perform target identification on the abdominal cavity image by using at least one deep neural network to determine an abdominal muscle region; a segmentation unit 13044 configured to perform image segmentation on the abdominal cavity image to determine an abdominal wall region; a fusion unit 13046 is used for fusing the abdominal muscle region and the abdominal wall region to obtain the target region.
In some optional embodiments, the identifying unit 13042 is configured to: inputting the abdominal cavity image into at least one deep neural network, and performing target identification to obtain a corresponding prediction region; and determining the abdominal muscle area according to the prediction area and the preset weight corresponding to each deep neural network.
In some optional embodiments, the identifying unit 13042 is further configured to: determining the probability value of the prediction region of each pixel point in the abdominal cavity image according to the prediction region corresponding to each deep neural network and the corresponding preset weight; and according to the probability value of the prediction region of each pixel point, segmenting the abdominal muscle region from the abdominal cavity image.
In some optional embodiments, the segmentation unit 13044 is configured to: determining the outer contour of the abdominal wall region from the abdominal cavity image by adopting a threshold analysis method; dividing an inner contour of the abdominal wall area according to the outer contour by adopting a dynamic contour algorithm; optimizing the inner contour by adopting a scatter-point contour algorithm; and determining an abdominal wall area according to the outer contour and the optimized inner contour.
As shown in fig. 15, in some optional embodiments, the first determining module 1304 further comprises: an optimization unit 13045 for optimizing the abdominal muscle region; the third determination module 1308 is further configured to: and fusing the optimized abdominal muscle area and the abdominal wall area to obtain a target area.
In some optional embodiments, the optimizing unit 13045 is configured to: acquiring a first sub-region in an abdominal cavity image; acquiring the relative position of any pixel point in the abdominal muscle region and any pixel point in the first sub-region; and removing the pixel points of which the relative positions in the abdominal muscle region do not accord with the preset conditions.
In some optional embodiments, the optimizing unit 13045 is further configured to: identifying the abdominal cavity image and determining a characteristic part; and taking the area where the characteristic part is positioned as a first subarea.
In some optional embodiments, the optimizing unit 13045 is configured to: acquiring pixel points in a preset range in the abdominal cavity image; and removing the pixel points which are overlapped with the pixel points in the preset range in the abdominal muscle area.
As shown in fig. 16, in some optional embodiments, the second determining module 1306 includes: a first determining unit 13062 for determining a deformation range according to the target region and the abdominal cavity image; a deforming unit 13064, configured to deform the inner contour of the target area in the deformation range by using a deformation fitting algorithm; a second determining unit 13066 is configured to determine the deformed outer contour based on the deformed inner contour according to the corresponding relationship between the pixel points of the inner contour and the pixel points of the outer contour in the target area, so as to obtain the simulated pneumoperitoneum area.
In some optional embodiments, the first determining unit 13062 is configured to: acquiring a second sub-region in the abdominal cavity image; and determining the deformation range according to the positions of the pixel points in the target region and the positions of the pixel points in the second sub-region.
As shown in fig. 17, in some alternative embodiments, the simulated pneumoperitoneum image generation apparatus 1300 further includes: a receiving module 1307 configured to receive an adjustment instruction, and adjust the target area and/or the simulated pneumoperitoneum area according to the adjustment instruction; the third determining module 1308 is further configured to: and determining a simulated pneumoperitoneum image according to the adjusted target area and/or the adjusted simulated pneumoperitoneum area.
In some optional embodiments, the third determining module 1308 is further configured to: and replacing the target area in the abdominal cavity image by the simulated pneumoperitoneum area to obtain a simulated pneumoperitoneum image.
The modules in the simulated pneumoperitoneum image generation device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 18. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected by a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a simulated pneumoperitoneum image generation method. The display unit of the computer device is used for forming a visible picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 18 is merely a block diagram of part of the structure associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or arrange its components differently.
In one embodiment, a computer readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described simulated pneumoperitoneum image generation method.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of the simulated pneumoperitoneum image generation method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
For brevity, not all possible combinations of the technical features in the above embodiments are described; nevertheless, any combination of these technical features should be considered within the scope of the present disclosure as long as it involves no contradiction.
The above embodiments express only several implementations of the present application, and their description, while specific and detailed, should not be construed as limiting the scope of the application. It should be noted that persons skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (17)

1. A simulated pneumoperitoneum image generation method, comprising:
acquiring an abdominal cavity image;
determining a target area according to the abdominal cavity image;
according to the target area, performing simulated pneumoperitoneum fitting on the abdominal cavity image to determine a simulated pneumoperitoneum area;
and determining a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image.
2. The method of claim 1, wherein the determining a target region from the abdominal cavity image comprises:
performing target recognition on the abdominal cavity image by adopting at least one deep neural network to determine an abdominal muscle area;
performing image segmentation on the abdominal cavity image to determine an abdominal wall area;
and fusing the abdominal muscle area and the abdominal wall area to obtain the target area.
3. The method of claim 2, wherein the performing target recognition on the abdominal cavity image using at least one deep neural network to determine the abdominal muscle region comprises:
inputting the abdominal cavity image into at least one deep neural network, and performing target identification to obtain a corresponding prediction region;
and determining the abdominal muscle area according to the prediction area and the preset weight corresponding to each deep neural network.
4. The method of claim 3, wherein said determining the abdominal muscle region according to the predicted region and the preset weight corresponding to each of the deep neural networks comprises:
determining a prediction region probability value of each pixel point in the abdominal cavity image according to the prediction region corresponding to each deep neural network and the corresponding preset weight;
and according to the probability value of the predicted region of each pixel point, segmenting the abdominal muscle region from the abdominal cavity image.
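The weighted per-pixel fusion in claims 3 and 4 can be sketched as follows. This is an illustrative assumption of how the preset weights might combine the networks' prediction regions; the function name, the weight values, and the 0.5 segmentation threshold are not specified in the patent.

```python
import numpy as np

def fuse_predictions(pred_masks, weights, threshold=0.5):
    """Fuse binary prediction masks from several deep neural networks into
    one abdominal-muscle mask via a weighted per-pixel probability vote.
    `threshold` is an assumed cut-off, not a value given in the patent."""
    prob = np.zeros_like(pred_masks[0], dtype=float)
    for mask, w in zip(pred_masks, weights):
        prob += w * mask.astype(float)   # weighted vote for each pixel
    return prob >= threshold             # segment the muscle region

# Two toy 2x2 predictions: the networks agree only on the top-left pixel.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 1]])
fused = fuse_predictions([a, b], weights=[0.6, 0.4])
```

With these toy weights, a pixel is kept when the stronger network predicts it (0.6) or both networks agree; a prediction from the weaker network alone (0.4) falls below the threshold.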
5. The method of claim 2, wherein the performing image segmentation on the abdominal cavity image to determine an abdominal wall region comprises:
determining the outer contour of the abdominal wall region from the abdominal cavity image by adopting a threshold analysis method;
dividing an inner contour of the abdominal wall region according to the outer contour by adopting a dynamic contour algorithm;
optimizing the inner contour by adopting a scatter-point contour algorithm;
and determining the abdominal wall area according to the outer contour and the optimized inner contour.
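A minimal sketch of the threshold-analysis step in claim 5 is shown below. Only the outer-contour extraction is illustrated; the dynamic-contour (active contour) and scatter-point refinement of the inner contour are not reproduced, and the threshold value is an assumption.

```python
import numpy as np
from scipy import ndimage

def outer_contour(image, threshold=0.2):
    """Separate the body from the background with a fixed threshold and
    return the one-pixel boundary of the resulting mask as the outer
    contour of the abdominal wall region (illustrative only)."""
    body = image > threshold               # threshold analysis
    eroded = ndimage.binary_erosion(body)  # peel off one pixel layer
    return body & ~eroded                  # boundary pixels only

# Toy 5x5 "scan": a 3x3 bright block surrounded by background.
image = np.zeros((5, 5))
image[1:4, 1:4] = 1.0
contour = outer_contour(image)
```

The contour mask marks the eight border pixels of the 3x3 block, since erosion leaves only its center pixel.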
6. The method of any one of claims 2-5, wherein prior to said fusing said abdominal muscle region and said abdominal wall region to obtain said target region, further comprising:
optimizing the abdominal muscle region;
the fusing the abdominal muscle area and the abdominal wall area to obtain the target area comprises:
and fusing the optimized abdominal muscle area and the abdominal wall area to obtain the target area.
7. The method of claim 6, wherein the optimizing the abdominal muscle region comprises:
acquiring a first sub-region in the abdominal cavity image;
acquiring the relative position of any pixel point in the abdominal muscle region and any pixel point in the first sub-region;
and removing the pixel points of which the relative positions in the abdominal muscle area do not accord with preset conditions.
8. The method of claim 7, wherein prior to said acquiring the first sub-region in the abdominal cavity image, the method further comprises:
identifying the abdominal cavity image and determining a characteristic part;
and taking the area where the characteristic part is positioned as a first subregion.
9. The method of claim 6, wherein the optimizing the abdominal muscle region comprises:
acquiring pixel points in a preset range in the abdominal cavity image;
and removing the pixel points which are overlapped with the pixel points in the preset range in the abdominal muscle area.
10. The method of claim 1, wherein said performing simulated pneumoperitoneum fitting on the abdominal cavity image according to the target region to determine a simulated pneumoperitoneum region comprises:
determining a deformation range according to the target area and the abdominal cavity image;
deforming the inner contour of the target area in the deformation range by adopting a deformation fitting algorithm;
and determining the deformed outer contour on the basis of the deformed inner contour according to the corresponding relation between the pixel points of the inner contour and the pixel points of the outer contour in the target area to obtain the simulated pneumoperitoneum area.
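One way to read claim 10 is sketched below: the inner contour is displaced within the deformation range, and the pixel-correspondence between inner and outer contour carries the same displacement to the outer contour. The vertical-lift displacement, array representation, and clipping to a scalar limit are all illustrative assumptions rather than the patent's deformation fitting algorithm.

```python
import numpy as np

def simulate_pneumoperitoneum(inner, outer, lift, limit):
    """Lift each inner-contour sample by a per-point displacement, clipped
    to the deformation range `limit`, and move the corresponding
    outer-contour sample by the same offset so wall thickness is kept.
    Contours are given as arrays of heights (a hypothetical encoding)."""
    disp = np.clip(lift, 0.0, limit)   # confine deformation to the range
    return inner + disp, outer + disp  # inner/outer pixel correspondence

inner = np.array([2.0, 3.0, 2.0])
outer = inner + 1.0                    # wall thickness of 1 everywhere
lift = np.array([1.0, 5.0, 1.0])       # fitted lift, middle exceeds range
new_inner, new_outer = simulate_pneumoperitoneum(inner, outer, lift, 2.0)
```

The middle point's requested lift of 5.0 is clipped to the range limit 2.0, and the outer contour follows the inner one, so the wall thickness stays 1.0 at every sample.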
11. The method of claim 10, wherein the determining a deformation range from the target region and the abdominal cavity image comprises:
acquiring a second sub-region in the abdominal cavity image;
and determining the deformation range according to the positions of the pixel points in the target region and the positions of the pixel points in the second sub-region.
12. The method of claim 1, wherein prior to determining a simulated pneumoperitoneum image from the target region, the simulated pneumoperitoneum region, and the abdominal cavity image, further comprising:
receiving an adjusting instruction;
adjusting the target area and/or the simulated pneumoperitoneum area according to the adjustment instruction;
determining a simulated pneumoperitoneum image according to the target region, the simulated pneumoperitoneum region and the abdominal cavity image, including:
and determining the simulated pneumoperitoneum image according to the adjusted target area and/or the adjusted simulated pneumoperitoneum area.
13. The method of claim 1, wherein determining a simulated pneumoperitoneum image from the target region, the simulated pneumoperitoneum region, and the abdominal cavity image comprises:
and replacing the target area in the abdominal cavity image with the simulated pneumoperitoneum area to obtain the simulated pneumoperitoneum image.
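The region replacement in claim 13 amounts to masked compositing; a sketch under the assumption that the target region is a boolean mask and the simulated pneumoperitoneum region is an image of the same shape:

```python
import numpy as np

def compose_simulated_image(abdominal, target_mask, pneumo_region):
    """Replace pixels of the target region in the abdominal cavity image
    with the simulated pneumoperitoneum values; all other pixels keep
    their original intensities. Names are illustrative."""
    out = abdominal.copy()
    out[target_mask] = pneumo_region[target_mask]
    return out

abdominal = np.ones((2, 2))
target_mask = np.array([[True, False], [False, True]])
pneumo_region = np.full((2, 2), 9.0)
result = compose_simulated_image(abdominal, target_mask, pneumo_region)
```

Only the two masked pixels take the simulated value 9.0; the rest of the image is untouched.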
14. A simulated pneumoperitoneum image generation apparatus, comprising:
the acquisition module is used for acquiring an abdominal cavity image;
the first determining module is used for determining a target area according to the abdominal cavity image;
the second determination module is used for performing simulated pneumoperitoneum fitting on the abdominal cavity image according to the target area to determine a simulated pneumoperitoneum area;
and the third determining module is used for determining a simulated pneumoperitoneum image according to the target area, the simulated pneumoperitoneum area and the abdominal cavity image.
15. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the simulated pneumoperitoneum image generation method of any one of claims 1 to 13.
16. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 13.
17. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 13.
CN202211308290.7A 2022-10-25 2022-10-25 Simulated pneumoperitoneum image generation method and device and computer equipment Pending CN115578456A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211308290.7A CN115578456A (en) 2022-10-25 2022-10-25 Simulated pneumoperitoneum image generation method and device and computer equipment


Publications (1)

Publication Number Publication Date
CN115578456A true CN115578456A (en) 2023-01-06

Family

ID=84586085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211308290.7A Pending CN115578456A (en) 2022-10-25 2022-10-25 Simulated pneumoperitoneum image generation method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN115578456A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination