CN114494317A - Biological tissue edge extraction method based on laparoscope and electronic equipment - Google Patents

Biological tissue edge extraction method based on laparoscope and electronic equipment

Info

Publication number
CN114494317A
CN114494317A (publication) · CN202210111521.9A (application)
Authority
CN
China
Prior art keywords
image
edge
biological tissue
feature
laparoscopic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210111521.9A
Other languages
Chinese (zh)
Inventor
李其花
吴乙荣
李南哲
段小明
郭元甫
李和意
陈永健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Medical Equipment Co Ltd
Original Assignee
Qingdao Hisense Medical Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Medical Equipment Co Ltd filed Critical Qingdao Hisense Medical Equipment Co Ltd
Priority to CN202210111521.9A
Publication of CN114494317A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133: Distances to prototypes
    • G06F 18/24137: Distances to cluster centroïds
    • G06F 18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a laparoscope-based biological tissue edge extraction method and an electronic device, which address the problem that the related art cannot rapidly and accurately extract the edge of biological tissue in a laparoscopic image. In the method, the grayscale image and the edge image of the laparoscopic image of the biological tissue are input to the edge segmentation network together, which gives the network guidance for extracting the biological tissue edge in the laparoscopic image, so that the edge can be extracted more quickly and accurately. Meanwhile, the features in a plurality of specified feature maps of the expansion path module are further extracted by the feature extraction module and the classification recognition module of the edge segmentation network, so that the output edge segmentation result for the biological tissue is obtained more quickly and accurately. By taking the grayscale image and the edge image as input, the edge of the biological tissue in the laparoscopic image is extracted quickly and accurately, which lays a foundation for accurate positioning of the biological tissue in the laparoscopic image, reduces surgical risk and improves user experience.

Description

Biological tissue edge extraction method based on laparoscope and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method for extracting an edge of a biological tissue based on a laparoscope and an electronic device.
Background
Laparoscopic surgery is a minimally invasive abdominal surgery; owing to its advantages of a small incision, little bleeding and quick recovery, minimally invasive surgery has become a general trend and goal of surgical development. However, laparoscopic tissue tumor resection remains controversial, and the root of the controversy is that it is more difficult for the surgeon to locate the lesion site and the distribution of the surrounding vascular tissue under the laparoscope than in conventional open surgery. Therefore, registering the laparoscopic image with the biological tissue model extracted from the preoperative three-dimensional image, so as to accurately locate the lesion position in the laparoscopic image, becomes critical.
In the related art, the process of registering the laparoscopic image with the biological tissue model extracted from the preoperative three-dimensional image comprises the following key steps: first, accurately extracting the edge of the biological tissue in the laparoscopic image; then three-dimensionally reconstructing the biological tissue based on the preoperative three-dimensional image; and finally registering the three-dimensional model of the biological tissue with the biological tissue edge of the laparoscopic image. Accurate extraction of the biological tissue edge in the laparoscopic image is a key problem affecting the automation, stability and accuracy of the whole analysis process, and the quality of its result directly affects the subsequent analysis. At present, however, extraction of the biological tissue edge in the laparoscopic image cannot be achieved fully automatically, quickly and accurately, so how to quickly and accurately extract the biological tissue edge in the laparoscopic image is a problem to be solved urgently.
Disclosure of Invention
The application aims to provide a laparoscope-based biological tissue edge extraction method and an electronic device, which are used for solving the problem that the related art cannot rapidly and accurately extract the biological tissue edge in a laparoscopic image.
In a first aspect, the present application provides a laparoscope-based biological tissue edge extraction method, comprising:
acquiring a laparoscopic image of a biological tissue;
converting the laparoscope image into a gray image, and extracting edge information of the gray image by adopting an edge detection method to obtain an edge image;
inputting the gray level image and the edge image to an edge segmentation network to obtain an edge segmentation result output by the edge segmentation network and aiming at the biological tissue;
the edge segmentation network comprises a compression path module, an expansion path module, a feature extraction module and a classification identification module, wherein the feature extraction module is used for respectively carrying out feature extraction operation on a plurality of specified feature graphs of the expansion path module to obtain feature sub-graphs, carrying out feature fusion on the extracted feature sub-graphs to obtain fusion features, and the fusion features are used for the classification identification module to obtain the edge segmentation result.
In a possible implementation manner, the specified feature map includes a feature map for increasing the image size in the extended path module and a feature map output by the extended path module.
In a possible implementation manner, there are n specified feature maps, and the feature extraction module includes n convolutional layers, n-1 up-sampling layers and a connection layer; each convolutional layer corresponds to one up-sampling layer, except the convolutional layer corresponding to the feature map output by the extended path module, and each specified feature map corresponds to one convolutional layer;
respectively carrying out feature extraction operation on the plurality of specified feature graphs to obtain feature sub-graphs, and carrying out feature fusion on the plurality of extracted feature sub-graphs to obtain fusion features, wherein the method comprises the following steps:
extracting each specified feature map by using its corresponding convolutional layer to obtain an intermediate feature corresponding to each specified feature map, wherein the intermediate feature output by the convolutional layer corresponding to the feature map output by the extended path module has a specified size and is one of the plurality of feature sub-graphs;
up-sampling the intermediate features output by the other convolutional layers with the corresponding up-sampling layers to obtain the feature sub-graphs output by the respective up-sampling layers, wherein the feature sub-graphs output by the up-sampling layers all have the same size, namely the specified size;
and splicing the feature sub-graphs using the connection layer to obtain the fusion feature.
In a possible implementation, the compression path module and the expansion path module are respectively an encoder and a decoder of a U-Net network (a semantic segmentation network) or one of its derivative networks.
In a possible implementation manner, the edge segmentation result is a mask image, and after the edge segmentation result for the biological tissue is obtained, the method further includes:
performing edge detection on the edge segmentation result to obtain the edge position of the biological tissue;
and removing pixels belonging to the image edge in the edge position in the laparoscopic image to obtain the final edge of the biological tissue in the laparoscopic image.
In one possible implementation, the loss function used to train the edge segmentation network includes categorical cross entropy and a Dice coefficient.
In a possible implementation manner, before the edge detection method is used to extract the edge information of the grayscale image to obtain the edge image, the method further includes:
and resampling the image data of the gray-scale image, and resampling the gray-scale image to a specified size.
In one possible embodiment, training the edge segmentation network includes:
acquiring a plurality of laparoscopic image samples of a biological tissue and label data for each laparoscopic image sample, the label data indicating a contour of the biological tissue in the respective laparoscopic image sample;
converting each laparoscopic image sample into a grayscale image, resampling each frame of grayscale image and its corresponding label data to a specified size, and extracting the edge contour of each frame of grayscale image to obtain an edge sample image for each grayscale image;
constructing a training sample based on the gray level image and the corresponding label data and edge sample image;
training the edge segmentation network based on training samples.
In one possible embodiment, converting the laparoscopic image to a grayscale image comprises:
scaling the pixel values of the R, G and B channels of each pixel point in the laparoscopic image by the corresponding proportions to obtain the grayscale contributions of the R, G and B channels;
and summing the grayscale contributions of the R, G and B channels of each pixel point in the laparoscopic image to obtain the grayscale pixel value of each pixel point in the laparoscopic image.
In a second aspect, the present application provides an electronic device comprising a processor and a memory:
the memory for storing a computer program executable by the processor;
the processor is coupled to the memory and configured to execute the instructions to implement the laparoscope-based biological tissue edge extraction method as described in any of the first aspects above.
In a third aspect, the present application provides a computer-readable storage medium storing instructions which, when executed by an electronic device, enable the electronic device to perform the laparoscope-based biological tissue edge extraction method according to any one of the first aspect above.
In a fourth aspect, the present application provides a computer program product comprising a computer program:
the computer program when executed by a processor implements the laparoscope-based biological tissue edge extraction method as defined in any of the above first aspects.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the embodiment of the application acquires a laparoscope image of biological tissues; then, converting the laparoscope image into a gray image, and extracting edge information of the gray image by adopting an edge detection method to obtain an edge image; inputting the gray level image and the edge image into an edge segmentation network to obtain an edge segmentation result output by the edge segmentation network and aiming at the biological tissue; the edge segmentation network comprises a compression path module, an expansion path module, a feature extraction module and a classification recognition module, wherein the feature extraction module is used for respectively carrying out feature extraction operation on a plurality of specified feature graphs of the expansion path module to obtain feature sub-graphs, carrying out feature fusion on the extracted feature sub-graphs to obtain fusion features, and supplying the fusion features to the classification recognition module to obtain an edge segmentation result.
Therefore, the gray-scale image and the edge image are simultaneously input to the edge segmentation network, so that the direction can be provided for extracting the edge of the biological tissue in the laparoscopic image, the workload of extracting the edge of the biological tissue can be reduced, and the edge of the biological tissue in the laparoscopic image can be extracted more quickly and accurately. Meanwhile, by training the edge segmentation network, the features in the plurality of specified feature maps of the extended path module are further extracted by using the feature extraction module and the classification recognition module, an edge segmentation result can be more accurately obtained, so that the edges of the biological tissues in the laparoscopic image can be rapidly and accurately extracted, a foundation is laid for accurate positioning of the distribution of the biological tissue focus and the vessels around the focus in the laparoscopic image, the requirements of clinical application are met, the risk of biological tissue tumor resection operation under the laparoscope can be reduced, and the user experience is improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a laparoscopic-based method for extracting an edge of a biological tissue according to an embodiment of the present disclosure;
FIG. 2 is a schematic view of various laparoscopic images provided in accordance with an embodiment of the present application;
FIG. 3 is a schematic diagram of a grayscale image and an edge image of a laparoscopic image provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an edge segmentation network according to an embodiment of the present application;
fig. 5 is a schematic diagram of another edge-splitting network according to an embodiment of the present application;
fig. 6 is a diagram illustrating correspondence between each module of the edge-segmentation network shown in fig. 4 and the edge-segmentation network shown in fig. 5 according to an embodiment of the present disclosure;
fig. 7 is a schematic flow chart illustrating a process of performing feature extraction to obtain a feature sub-graph and performing feature fusion on the feature sub-graph to obtain a fusion feature according to the embodiment of the present application;
fig. 8 is a schematic flowchart of training an edge segmentation network according to an embodiment of the present disclosure;
FIG. 9 is a diagram illustrating the result of edge segmentation of biological tissue in a laparoscopic image according to an embodiment of the present application;
FIG. 10 is a schematic view of an edge of a biological tissue in a laparoscopic image provided with an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, unless otherwise specified, "/" indicates an "or" relationship; for example, A/B may indicate A or B. "And/or" in the text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality" means two or more.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the features, and in the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
Compared with the traditional open surgery, the laparoscopic surgery is a minimally invasive abdominal surgery, and because of the advantages of small incision, less bleeding, quick recovery and the like, the minimally invasive surgery becomes the general trend and pursuit target of surgical development.
At present, laparoscopic resection of biological tissue tumors remains somewhat controversial. Proponents of laparoscopic resection argue that laparoscopic surgery reduces surgical trauma, reduces the severe intra-abdominal adhesion caused by conventional open surgery, and is beneficial to the patient's recovery; opponents argue that laparoscopic surgery lacks the tactile feedback of the hand, makes the tumor position difficult to judge, and increases the operative risk. For example, tumors located in liver segments I, IV, VII and VIII are difficult to expose under the laparoscope; tumors in these liver segments are close to the main vascular system of the biological tissue, and peripheral vessels are easily damaged during the operation, causing bleeding. The source of the dispute is that, compared with open surgery, the laparoscopic surgeon finds it more difficult to locate the lesion of the biological tissue and the distribution of the surrounding vascular tissue due to the lack of tactile feedback.
Accurate positioning of the lesion position in the laparoscopic image is the key to reducing the risk of laparoscopic biological tissue tumor resection and improving the success rate of the operation. To accurately locate the lesion position in the laparoscopic image, the laparoscopic image needs to be registered with a biological tissue model extracted from the preoperative three-dimensional image, and the key steps of this process are: first, accurately extracting the edge of the biological tissue in the laparoscopic image; then three-dimensionally reconstructing the biological tissue based on the preoperative image; and finally registering the three-dimensional model of the biological tissue with the biological tissue edge of the laparoscopic image. Accurate extraction of the biological tissue edge in the laparoscopic image is one of the core steps; it is a key problem affecting the automation, stability and accuracy of the whole analysis process, the quality of its result directly affects the subsequent analysis, and it is of great significance for clinical diagnosis. However, extraction of the biological tissue edge in the laparoscopic image cannot yet be achieved rapidly and accurately, so how to quickly and accurately extract the biological tissue edge in the laparoscopic image is an urgent problem to be solved.
The preoperative three-dimensional image includes, but is not limited to, a CT (Computed Tomography) image and an MR image (Magnetic Resonance image).
In view of the above, the present application provides a method for extracting an edge of a biological tissue based on a laparoscope and an electronic device, so as to solve the problem that the related art cannot rapidly and accurately extract an edge of a biological tissue in a laparoscopic image.
The inventive concept of the present application can be summarized as follows: the embodiment of the application acquires a laparoscope image of biological tissues; then, converting the laparoscope image into a gray image, and extracting edge information of the gray image by adopting an edge detection method to obtain an edge image; inputting the gray level image and the edge image into an edge segmentation network to obtain an edge segmentation result output by the edge segmentation network and aiming at the biological tissue; the edge segmentation network comprises a compression path module, an expansion path module, a feature extraction module and a classification recognition module, wherein the feature extraction module is used for respectively carrying out feature extraction operation on a plurality of specified feature graphs of the expansion path module to obtain feature sub-graphs, carrying out feature fusion on the extracted feature sub-graphs to obtain fusion features, and supplying the fusion features to the classification recognition module to obtain an edge segmentation result.
Therefore, the gray-scale image and the edge image are simultaneously input to the edge segmentation network, so that the direction can be provided for extracting the edge of the biological tissue in the laparoscopic image, the workload of extracting the edge of the biological tissue can be reduced, and the edge of the biological tissue in the laparoscopic image can be extracted more quickly and accurately. Meanwhile, by training the edge segmentation network, the features in the plurality of specified feature maps of the extended path module are further extracted by using the feature extraction module and the classification recognition module, an edge segmentation result can be more accurately obtained, so that the edges of the biological tissues in the laparoscopic image can be rapidly and accurately extracted, a foundation is laid for accurate positioning of the distribution of the biological tissue focus and the vessels around the focus in the laparoscopic image, the requirements of clinical application are met, the risk of biological tissue tumor resection operation under the laparoscope can be reduced, and the user experience is improved.
It should be noted that the biological tissue in the embodiments of the present application includes, but is not limited to, organs such as liver, gallbladder, spleen, pancreas, stomach, small intestine, colon and rectum, and kidney.
After the main inventive concept of the embodiments of the present application is introduced, a method for extracting an edge of a biological tissue based on a laparoscope according to the embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 1, a schematic flow chart of a laparoscopic-based biological tissue edge extraction method according to an embodiment of the present application is shown. As shown in fig. 1, the method comprises the steps of:
in step 101, a laparoscopic image of a biological tissue is acquired.
In step 102, the laparoscope image is converted into a gray image, and an edge detection method is used to extract edge information of the gray image to obtain an edge image.
In one possible embodiment, since laparoscopic images may come from different hospitals and from different machines and technicians, single-frame images in laparoscopic video differ considerably in resolution and color, as shown in fig. 2. Therefore, to reduce the influence of the laparoscopic image data on the output of the edge segmentation network, the embodiment of the application first preprocesses the laparoscopic image and converts it into a grayscale image. Specifically, the pixel values of the R (Red), G (Green) and B (Blue) channels of each pixel point in the laparoscopic image are scaled by their corresponding proportions; the scaled R, G and B values of each pixel point are then summed to obtain the grayscale pixel value of that pixel point in the laparoscopic image.
Illustratively, the laparoscopic image may be converted to a grayscale image according to equation (1):
value_gray = value_R × 0.299 + value_G × 0.587 + value_B × 0.114    (1)
where value_gray is the grayscale image pixel value; value_R is the color image R channel pixel value; value_G is the color image G channel pixel value; value_B is the color image B channel pixel value; and 0.299, 0.587 and 0.114 are the proportions relating the R, G and B channel pixel values of the color image to the grayscale image pixel value.
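As an illustrative sketch (not part of the original filing), and assuming the laparoscopic frame has been loaded as an H × W × 3 RGB NumPy array, equation (1) can be applied as follows; all function and variable names here are assumptions for illustration.

```python
import numpy as np

def rgb_to_gray(img_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB laparoscopic frame to grayscale per equation (1)."""
    r = img_rgb[..., 0].astype(np.float32)
    g = img_rgb[..., 1].astype(np.float32)
    b = img_rgb[..., 2].astype(np.float32)
    gray = r * 0.299 + g * 0.587 + b * 0.114  # weighted sum of the R, G, B channels
    return np.clip(gray, 0, 255).astype(np.uint8)
```

OpenCV's cv2.cvtColor with COLOR_RGB2GRAY uses the same weights, so it could serve as a drop-in alternative.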
In a possible implementation manner, the resolution of single-frame images in laparoscopic video varies considerably; therefore, to reduce the influence of resolution on the edge segmentation result, in the embodiment of the application the image data of the grayscale image is resampled to a specified size before the edge information of the grayscale image is extracted with the edge detection method to obtain the edge image, as shown in diagram a of fig. 3. Specifically, the grayscale image data is normalized and then resampled using an interpolation algorithm, so that the resolution of the grayscale image is unified to a specified value, yielding the resampled grayscale image (img_gray); for example, the grayscale image can be resampled to 800 × 600.
Interpolation algorithms include nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation; the most appropriate interpolation algorithm can be selected according to the actual situation to resample the grayscale image data, which is not limited in the embodiments of the application.
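A minimal sketch of the normalization and resampling step, assuming OpenCV; the 800 × 600 target follows the example above, while the choice of min-max normalization and bilinear interpolation is an assumption made for illustration.

```python
import cv2
import numpy as np

def resample_gray(img_gray: np.ndarray, size=(800, 600)) -> np.ndarray:
    """Normalize the grayscale frame and resample it to the specified resolution."""
    # Min-max normalization to the full 8-bit range (illustrative choice).
    norm = cv2.normalize(img_gray, None, 0, 255, cv2.NORM_MINMAX)
    # cv2.resize expects (width, height); bilinear interpolation is used here,
    # but nearest-neighbor or bicubic interpolation could be chosen instead.
    return cv2.resize(norm, size, interpolation=cv2.INTER_LINEAR)
```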
In a possible implementation manner, because of the large differences between data samples, the embodiment of the application extracts the edge information of the grayscale image from the resampled image using a conventional edge detection algorithm to obtain the edge image (img_edge), as shown in diagram b of fig. 3.
Conventional edge detection algorithms include, but are not limited to, the Sobel operator, the Canny operator and the Roberts operator, which is not limited in this application.
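For example, the edge image img_edge could be produced with the Canny operator as in the sketch below (OpenCV assumed; the blur kernel and hysteresis thresholds are illustrative values, not values taken from the application).

```python
import cv2
import numpy as np

def extract_edges(img_gray: np.ndarray) -> np.ndarray:
    """Extract edge information from the resampled grayscale image (img_gray)."""
    # Light blurring suppresses speckle noise before edge detection.
    blurred = cv2.GaussianBlur(img_gray, (5, 5), 0)
    # Canny hysteresis thresholds are illustrative and would be tuned in practice.
    img_edge = cv2.Canny(blurred, 50, 150)
    return img_edge
```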
Therefore, the gray level image and the edge image corresponding to the laparoscope image are obtained through processing the laparoscope image.
In step 103, the grayscale image and the edge image are input to the edge segmentation network, and the edge segmentation result for the biological tissue output by the edge segmentation network is obtained.
As shown in fig. 4, the edge segmentation network includes a compression path module, an expansion path module, a feature extraction module and a classification recognition module, where the feature extraction module is configured to perform feature extraction on a plurality of specified feature maps of the expansion path module to obtain feature sub-maps, perform feature fusion on the extracted plurality of feature sub-maps to obtain fusion features, and the fusion features are provided for the classification recognition module to obtain an edge segmentation result.
In a possible implementation manner, among deep learning networks, U-Net and its variants perform best on segmentation tasks; therefore, in this embodiment, to make full use of the feature information of the top-level network, U-Net is selected as the base network and modified to obtain the edge segmentation network shown in fig. 5, where the compression path module and the extended path module are respectively an encoder and a decoder of the U-Net network or one of its derivative networks.
In a possible implementation manner, the specified feature map in the edge segmentation network in the embodiment of the present application includes a feature map for increasing the image size in the extended path module and a feature map output by the extended path module.
In a possible implementation manner, in the embodiment of the present application, there are n designated feature maps, and the feature extraction module includes n convolutional layers, n-1 up-sampling layers and a connection layer; each convolutional layer corresponds to one up-sampling layer, except for the convolutional layer corresponding to the feature map output by the extended path module, and each designated feature map corresponds to one convolutional layer.
Exemplarily, fig. 6 shows the correspondence between the modules of the edge segmentation network of fig. 4 and the edge segmentation network shown in fig. 5, as provided in the embodiment of the present application. The designated feature maps of the extended path module are the feature maps fed to the 3 × 3 conv (convolution) arrows of the feature extraction module in fig. 6; it can be seen that there are 4 designated feature maps. In fig. 6 the convolution layers of the feature extraction module are represented by the 3 × 3 conv arrows, the up-sampling layers by the Up-sampling arrows, and the connection layer by Concatenation; it can be seen that the feature extraction module has 4 convolution layers and 3 up-sampling layers. The last designated feature map of the extended path module is extracted by its corresponding convolution layer and directly yields a feature sub-graph, so no up-sampling layer is needed for it.
In a possible implementation manner, in the embodiment of the present application, the performing feature extraction operation on the multiple specified feature graphs respectively to obtain feature sub-graphs, and performing feature fusion on the multiple extracted feature sub-graphs to obtain fusion features specifically includes the steps shown in fig. 7:
in step 701, extracting each designated feature map by using a corresponding convolution layer respectively to obtain intermediate features corresponding to each designated feature map respectively; the intermediate characteristic output by the convolution layer corresponding to the characteristic graph output by the extended path module is a specified size and is one of a plurality of characteristic subgraphs;
in step 702, an upsampling layer is used to upsample the intermediate features output by the corresponding convolutional layer to obtain feature subgraphs output by each upsampling layer, wherein the feature subgraphs output by each upsampling layer have the same size and are of a specified size;
in step 703, the connection layer is used to splice the feature subgraphs to obtain a fusion feature.
Exemplarily, as shown in fig. 5, performing the feature extraction operation on the plurality of designated feature maps to obtain feature sub-graphs, and performing feature fusion on the extracted feature sub-graphs to obtain the fusion feature, specifically includes: first, the 4 designated feature maps in fig. 5 are each extracted with a 3 × 3 convolutional layer to obtain the intermediate feature corresponding to each designated feature map, namely a 100 × 75 × 16 feature map, a 200 × 150 × 16 feature map, a 400 × 300 × 16 feature map and an 800 × 600 × 16 feature map; the intermediate feature output by the convolutional layer corresponding to the feature map output by the extended path module is the 800 × 600 × 16 feature map. Then, Up-sampling layers are used to up-sample the 100 × 75 × 16, 200 × 150 × 16 and 400 × 300 × 16 feature maps to obtain feature sub-graphs of the same size as the 800 × 600 × 16 feature map output by the convolutional layer corresponding to the feature map output by the extended path module. Finally, the connection layer splices the feature sub-graphs to obtain the fusion feature, namely the four 800 × 600 × 16 feature maps output in fig. 5 are spliced to form the 64-channel fusion feature.
Common convolution kernel sizes in convolutional layers are 1 × 1, 3 × 3, 5 × 5 and 7 × 7, and 11 × 11 is occasionally seen. For feature extraction, a 3 × 3 convolution is generally selected; other convolution kernels may be selected according to the actual situation, which is not limited in this application.
Therefore, aliasing influence caused by nearest neighbor interpolation can be reduced, the contribution of semantics to edge segmentation is improved, and the accuracy of the edge segmentation network can be improved.
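A minimal PyTorch sketch consistent with the feature extraction module described above is given below; only the 16-channel 3 × 3 convolutions, the up-sampling to a common 800 × 600 size and the 64-channel concatenation follow fig. 5, while the input channel counts, module name and all other names are illustrative assumptions, not the applicant's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionHead(nn.Module):
    """Project each designated feature map to 16 channels, up-sample to a common
    size and concatenate, in the spirit of the feature extraction module of fig. 5."""

    def __init__(self, in_channels=(128, 64, 32, 16), out_channels=16,
                 target_size=(600, 800)):
        super().__init__()
        # One 3 x 3 convolution per designated feature map (input channels assumed).
        self.convs = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=3, padding=1) for c in in_channels]
        )
        self.target_size = target_size  # (H, W), i.e. 800 x 600 in (W, H) terms

    def forward(self, feature_maps):
        subs = []
        for conv, fmap in zip(self.convs, feature_maps):
            x = conv(fmap)  # 3 x 3 convolution to 16 channels
            if tuple(x.shape[-2:]) != self.target_size:
                # Nearest-neighbour up-sampling to the specified size.
                x = F.interpolate(x, size=self.target_size, mode="nearest")
            subs.append(x)
        # Splice (concatenate) the feature sub-graphs: 4 x 16 = 64 channels.
        return torch.cat(subs, dim=1)
```

The 64-channel output would then be passed to the classification and recognition module to produce the edge segmentation result.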
In a possible implementation manner, before the edge segmentation network is used, it needs to be trained so that it becomes stable and can produce accurate edge segmentation results. In this embodiment, training the edge segmentation network may be specifically implemented as the steps shown in fig. 8:
in step 801, a plurality of laparoscopic image samples of a biological tissue and tag data for each laparoscopic sample image are acquired, the tag data indicating an outline of the biological tissue in the respective laparoscopic image sample.
For example, a plurality of laparoscopic image samples of a biological tissue and the label data of each laparoscopic image sample may be obtained by first acquiring a laparoscopic surgery video of a subject, sampling it into a series of color images (RGB images, Red Green Blue images) in JPG (Joint Photographic Experts Group) format, and then having a clinical expert accurately annotate the generated JPG images with the free open-source image annotation tool VIA (VGG Image Annotator), so as to label the biological tissue region in each JPG image and obtain the corresponding label data.
In step 802, each laparoscopic image sample is converted into a grayscale image, each frame of grayscale image and its corresponding label data are resampled to a specified size, and the edge contour of each frame of grayscale image is extracted to obtain an edge sample image for each grayscale image.
In step 803, a training sample is constructed based on the grayscale image and the corresponding label data and edge sample image.
In step 804, an edge segmentation network is trained based on the training samples.
For example, the grayscale images of the plurality of laparoscopic image samples and the corresponding label data and edge sample images may be divided into a training set and a test set, wherein the training set is used for training the edge segmentation network, and the test set is used for testing the performance of the edge segmentation network. And then, in order to improve the performance stability of the edge segmentation network, a K-fold cross validation method is selected for training the edge segmentation network.
For example, if there are 100 laparoscopic image samples, the grayscale images corresponding to the 100 laparoscopic image samples, together with the corresponding label data and edge sample images, can be divided into K subsets; each subset in turn serves as the test set, with the remaining K-1 subsets as the training set, so that K models are obtained. The K models are each evaluated on the validation set, and finally the loss functions of the K models are averaged to obtain the final loss function. For example, the 100 laparoscopic image samples can be divided into 5 groups of 20 samples each: the grayscale images of 4 groups (80 samples) and their label data and edge sample images serve as the training set, the grayscale images of 1 group (20 samples) and their label data and edge sample images serve as the test set, and finally the loss functions of the 5 models are averaged to obtain the final loss function.
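A minimal sketch of the 5-fold split described above, assuming scikit-learn; the integer sample identifiers and the shuffling seed are illustrative assumptions.

```python
from sklearn.model_selection import KFold

sample_ids = list(range(100))  # e.g. 100 laparoscopic image samples
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(sample_ids)):
    # 80 samples (4 groups) train this fold's model, the remaining 20 evaluate it.
    print(f"fold {fold}: {len(train_idx)} training samples, {len(test_idx)} test samples")
# One model is trained per fold; the final loss is the mean of the K fold losses.
```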
In a possible implementation manner, to increase the generalization ability of the edge segmentation network, 2D data augmentation may be performed on the training data during training in the embodiment of the present application, including rotating the training data within a certain angular range, applying grayscale stretching changes, and flipping.
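A hedged sketch of such 2D augmentation applied to an image/label pair (NumPy and OpenCV assumed; the rotation range, stretch factors and flip probability are illustrative values, not values from the application).

```python
import random
import cv2
import numpy as np

def augment(img: np.ndarray, mask: np.ndarray):
    """Random rotation, grayscale stretch and flip for a grayscale image and its label mask."""
    h, w = img.shape[:2]
    # Small-angle rotation, applied identically to image and label.
    angle = random.uniform(-15, 15)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(img, rot, (w, h))
    mask = cv2.warpAffine(mask, rot, (w, h), flags=cv2.INTER_NEAREST)
    # Grayscale stretch (illustrative contrast factor range).
    img = np.clip(img.astype(np.float32) * random.uniform(0.8, 1.2), 0, 255).astype(np.uint8)
    # Random horizontal flip.
    if random.random() < 0.5:
        img, mask = cv2.flip(img, 1), cv2.flip(mask, 1)
    return img, mask
```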
In a possible implementation manner, to make the edge segmentation network more accurate, the loss function used for training the edge segmentation network in the embodiment of the present application includes categorical cross entropy and a Dice coefficient, and the loss function of the embodiment of the present application can be calculated according to equation (2):
[Equation (2), which combines the categorical cross entropy L_Cross-Entropy and the Dice coefficient L_Dice, is given as an image in the original filing.]
where L_Cross-Entropy is the categorical cross entropy; L_Dice is the Dice coefficient; and y_i and ŷ_i denote, respectively, the ground-truth value of pixel i in the label data and the prediction for pixel i output by the edge segmentation network.
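Since the exact form of equation (2) is only given as an image, the sketch below shows one common way to combine the two terms (PyTorch, binary edge mask); the simple additive weighting, the epsilon smoothing and all names are assumptions made for illustration.

```python
import torch

def combined_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """pred: predicted edge probabilities in [0, 1]; target: ground-truth mask (0/1)."""
    pred = pred.clamp(eps, 1 - eps)
    # Cross entropy over all pixels i: -(y_i * log(ŷ_i) + (1 - y_i) * log(1 - ŷ_i)).
    l_ce = -(target * pred.log() + (1 - target) * (1 - pred).log()).mean()
    # Dice term: 1 - 2|X ∩ Y| / (|X| + |Y|), smoothed by eps.
    inter = (pred * target).sum()
    l_dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return l_ce + l_dice
```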
In addition, the optimizer of the edge segmentation network in the embodiment of the present application is Ranger. Ranger is a synergistic combination of RAdam (Rectified Adam) and Lookahead, enjoys the advantages of both, and is among the best-performing deep learning optimizers available so far. In use, the initial learning rate of the edge segmentation network is set to 1 × 10⁻⁴, training is carried out with a learning-rate warm-up optimization strategy, and after multiple rounds of training the preset learning rate is adjusted according to the training results before training continues.
The initial value of the learning rate may be set according to an actual use condition, which is not limited in the embodiment of the present application.
Thus, by optimizing the edge segmentation network and inputting the grayscale image and the edge image to it, the edge segmentation result for the biological tissue output by the edge segmentation network can be obtained as shown in diagram a of fig. 9; diagram a of fig. 9 and diagram a of fig. 3 can then be combined into the display diagram shown in diagram b of fig. 9, which expresses the accuracy of the edge segmentation result more intuitively.
In a possible implementation manner, in the embodiment of the present application, the edge segmentation result that the classification and recognition module obtains from the fusion feature produced by the feature extraction module is a mask image. Therefore, after the edge segmentation result for the biological tissue is obtained, the embodiment of the present application further performs edge detection on the edge segmentation result to obtain the edge position of the biological tissue, and removes, in the laparoscopic image, the pixels of the edge position that belong to the image border, so as to obtain the final edge of the biological tissue in the laparoscopic image.
Specifically, an object contour detection algorithm may be used to obtain the biological tissue edge from the display diagram of the edge segmentation result shown in diagram b of fig. 9, as shown in diagram a of fig. 10. However, the laparoscopic image has an image border, and manual labeling of the image border carries an error of several pixels, so the edge in diagram a of fig. 10 is not the true biological tissue edge: the points inside the solid-line box of diagram a of fig. 10 approximate the image border of the laparoscopic image. In the embodiment of the present application, these points inside the solid-line box of fig. 10 therefore need to be removed. An algorithm that erodes inward from the image border may be used to remove the pixel information inside the solid-line box, i.e. the pixels of the edge position that belong to the image border are removed from the laparoscopic image, yielding the final edge of the biological tissue in the laparoscopic image as shown in diagram b of fig. 10. In this way, the biological tissue edge in the laparoscopic image can be extracted more accurately.
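A hedged sketch of this post-processing (OpenCV assumed; the few-pixel border margin and the simple border zeroing used to stand in for the inward erosion are illustrative assumptions).

```python
import cv2
import numpy as np

def tissue_edge_from_mask(mask: np.ndarray, border: int = 5) -> np.ndarray:
    """Extract the tissue contour from the binary segmentation mask and drop
    contour pixels that lie on the image border of the laparoscopic frame."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    edge = np.zeros(mask.shape[:2], dtype=np.uint8)
    cv2.drawContours(edge, contours, -1, 255, 1)   # object contour detection result
    # Erode inward from the image border: discard edge pixels within a few pixels
    # of the frame, which correspond to the image border rather than tissue.
    edge[:border, :] = 0
    edge[-border:, :] = 0
    edge[:, :border] = 0
    edge[:, -border:] = 0
    return edge
```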
Based on the foregoing description, the embodiment of the present application acquires a laparoscopic image of a biological tissue; then converts the laparoscopic image into a grayscale image and extracts the edge information of the grayscale image with an edge detection method to obtain an edge image; and inputs the grayscale image and the edge image into an edge segmentation network to obtain the edge segmentation result for the biological tissue output by the edge segmentation network. The edge segmentation network comprises a compression path module, an expansion path module, a feature extraction module and a classification recognition module, wherein the feature extraction module is used for respectively performing feature extraction operations on a plurality of specified feature maps of the expansion path module to obtain feature sub-graphs and fusing the extracted feature sub-graphs to obtain fusion features, which are supplied to the classification recognition module to obtain the edge segmentation result.
Therefore, the gray-scale image and the edge image are simultaneously input to the edge segmentation network, so that the direction can be provided for extracting the edge of the biological tissue in the laparoscopic image, the workload of extracting the edge of the biological tissue can be reduced, and the edge of the biological tissue in the laparoscopic image can be extracted more quickly and accurately. Meanwhile, by training the edge segmentation network, the features in the plurality of specified feature maps of the extended path module are further extracted by using the feature extraction module and the classification recognition module, an edge segmentation result can be more accurately obtained, so that the edges of the biological tissues in the laparoscopic image can be rapidly and accurately extracted, a foundation is laid for accurate positioning of the distribution of the biological tissue focus and the vessels around the focus in the laparoscopic image, the requirements of clinical application are met, the risk of biological tissue tumor resection operation under the laparoscope can be reduced, and the user experience is improved.
An electronic device 110 according to this embodiment of the present application is described below with reference to fig. 11. The electronic device 110 shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 11, the electronic device 110 is represented in the form of a general electronic device. The components of the electronic device 110 may include, but are not limited to: the at least one processor 111, the at least one memory 112, and a bus 113 that couples various system components including the memory 112 and the processor 111.
Bus 113 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 112 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)1121 and/or cache memory 1122, and may further include Read Only Memory (ROM) 1123.
Memory 112 may also include a program/utility 1125 having a set (at least one) of program modules 1124, such program modules 1124 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Electronic device 110 may also communicate with one or more external devices 114 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with electronic device 110, and/or with any devices (e.g., router, modem, etc.) that enable electronic device 110 to communicate with one or more other electronic devices. Such communication may be through an input/output (I/O) interface 115. Also, the electronic device 110 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 116. As shown, the network adapter 116 communicates with other modules for the electronic device 110 over the bus 113. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 110, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 112 comprising instructions, executable by the processor 111 to perform the above-described laparoscopic-based biological tissue edge extraction method is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program which when executed by the processor 111 implements any of the laparoscope-based biological tissue edge extraction methods as provided herein.
In an exemplary embodiment, various aspects of a laparoscope-based biological tissue edge extraction method provided by the present application may also be embodied in the form of a program product comprising program code for causing a computer device to perform the steps of the laparoscope-based biological tissue edge extraction method according to various exemplary embodiments of the present application described above in the present specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for the laparoscope-based biological tissue edge extraction method of the embodiments of the present application may employ a portable compact disk read-only memory (CD-ROM) and include program code, and may be run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A laparoscope-based biological tissue edge extraction method, comprising:
acquiring a laparoscopic image of a biological tissue;
converting the laparoscopic image into a grayscale image, and extracting edge information of the grayscale image with an edge detection method to obtain an edge image;
inputting the grayscale image and the edge image into an edge segmentation network to obtain an edge segmentation result for the biological tissue output by the edge segmentation network;
wherein the edge segmentation network comprises a compression path module, an expansion path module, a feature extraction module and a classification and recognition module; the feature extraction module is configured to perform a feature extraction operation on each of a plurality of specified feature maps of the expansion path module to obtain feature sub-maps, and to fuse the extracted feature sub-maps to obtain a fused feature, the fused feature being used by the classification and recognition module to obtain the edge segmentation result.
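By way of illustration only, and not as part of the claims, the following Python sketch shows one possible reading of the method of claim 1. The U-Net-style network loaded as a PyTorch model, the Canny operator as the edge detection method, and the 512x512 input size are all assumptions introduced for this example.

    # Illustrative only: the PyTorch "model", the Canny edge detector and the
    # 512x512 input size are assumptions, not features fixed by the claim.
    import cv2
    import numpy as np
    import torch

    def extract_tissue_edge(laparoscopic_bgr, model, size=(512, 512)):
        # Convert the laparoscopic image into a grayscale image.
        gray = cv2.cvtColor(laparoscopic_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, size)
        # Extract edge information from the grayscale image to obtain the edge image.
        edge = cv2.Canny(gray, 50, 150)
        # Stack the grayscale image and the edge image as a two-channel network input.
        x = np.stack([gray, edge]).astype(np.float32) / 255.0
        x = torch.from_numpy(x).unsqueeze(0)      # shape: (1, 2, H, W)
        with torch.no_grad():
            logits = model(x)                     # shape: (1, num_classes, H, W)
        # The highest-scoring class per pixel is taken as the edge segmentation result.
        return logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)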
2. The method according to claim 1, wherein the specified feature maps comprise the feature maps used for increasing the image size in the expansion path module and the feature map output by the expansion path module.
3. The method according to claim 2, wherein there are n specified feature maps, and the feature extraction module comprises n convolutional layers, n-1 upsampling layers and a concatenation layer; each specified feature map corresponds to one convolutional layer, and each convolutional layer corresponds to one upsampling layer except the convolutional layer corresponding to the feature map output by the expansion path module;
wherein performing the feature extraction operation on the plurality of specified feature maps to obtain the feature sub-maps, and fusing the extracted feature sub-maps to obtain the fused feature, comprises:
extracting each specified feature map with its corresponding convolutional layer to obtain an intermediate feature for each specified feature map, wherein the intermediate feature output by the convolutional layer corresponding to the feature map output by the expansion path module already has a specified size and is one of the feature sub-maps;
upsampling, with each upsampling layer, the intermediate feature output by its corresponding convolutional layer to obtain the feature sub-map output by that upsampling layer, wherein the feature sub-maps output by the upsampling layers all have the same size, namely the specified size; and
concatenating the feature sub-maps with the concatenation layer to obtain the fused feature.
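As a non-limiting illustration of claim 3, the sketch below implements a feature extraction module in PyTorch: one convolutional layer per specified feature map, upsampling of every intermediate feature not yet at the specified size, and a concatenation layer that splices the feature sub-maps into the fused feature. The 1x1 kernels, the bilinear upsampling mode and the channel counts are assumptions made for the example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureExtractionModule(nn.Module):
        def __init__(self, in_channels_list, out_channels=16, target_size=(512, 512)):
            super().__init__()
            self.target_size = target_size
            # One convolutional layer per specified feature map.
            self.convs = nn.ModuleList(
                [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels_list]
            )

        def forward(self, specified_feature_maps):
            feature_sub_maps = []
            for conv, fmap in zip(self.convs, specified_feature_maps):
                mid = conv(fmap)
                # The map already at the specified size needs no upsampling layer;
                # every other intermediate feature is upsampled to that size.
                if mid.shape[-2:] != self.target_size:
                    mid = F.interpolate(mid, size=self.target_size,
                                        mode="bilinear", align_corners=False)
                feature_sub_maps.append(mid)
            # Concatenation layer: splice the feature sub-maps into the fused feature.
            return torch.cat(feature_sub_maps, dim=1)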
4. The method according to any one of claims 1-3, wherein the edge segmentation result is a mask image, and after obtaining the edge segmentation result for the biological tissue, the method further comprises:
performing edge detection on the edge segmentation result to obtain edge positions of the biological tissue; and
removing, from the edge positions, pixels that belong to the border of the laparoscopic image to obtain the final edge of the biological tissue in the laparoscopic image.
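An illustrative post-processing sketch for claim 4 follows; applying the Canny operator to the binary mask image and treating a one-pixel frame as the image border are assumptions made for this example.

    import cv2
    import numpy as np

    def final_tissue_edge(mask_image, border=1):
        # Edge detection on the edge segmentation result (mask image).
        edge = cv2.Canny((mask_image > 0).astype(np.uint8) * 255, 100, 200)
        h, w = edge.shape
        # Remove edge pixels that lie on the border of the laparoscopic image itself.
        edge[:border, :] = 0
        edge[h - border:, :] = 0
        edge[:, :border] = 0
        edge[:, w - border:] = 0
        return edge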
5. The method of claim 1, wherein a loss function used to train the edge segmentation network comprises a categorical cross-entropy and a Dice coefficient.
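One common way to combine the two terms of claim 5 is sketched below; the equal weighting of the cross-entropy and Dice terms and the smoothing constant are assumptions, not values taken from the application.

    import torch
    import torch.nn.functional as F

    def ce_dice_loss(logits, target, smooth=1.0):
        # Categorical cross-entropy over the per-pixel class scores.
        ce = F.cross_entropy(logits, target)
        # Soft Dice coefficient computed from the softmax probabilities.
        probs = torch.softmax(logits, dim=1)
        one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
        intersection = (probs * one_hot).sum(dim=(2, 3))
        union = probs.sum(dim=(2, 3)) + one_hot.sum(dim=(2, 3))
        dice = ((2.0 * intersection + smooth) / (union + smooth)).mean()
        return ce + (1.0 - dice)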
6. The method according to any one of claims 1-3, wherein before extracting the edge information of the grayscale image with the edge detection method to obtain the edge image, the method further comprises:
resampling the grayscale image to a specified size.
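A minimal sketch of the resampling step of claim 6, assuming OpenCV bilinear interpolation and a 512x512 specified size:

    import cv2

    def resample_to_specified_size(gray, size=(512, 512)):
        # Resample the grayscale image to the specified size.
        return cv2.resize(gray, size, interpolation=cv2.INTER_LINEAR)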
7. The method of claim 6, wherein training the edge segmentation network comprises:
acquiring a plurality of laparoscopic image samples of a biological tissue and label data for each laparoscopic image sample, the label data indicating a contour of the biological tissue in the respective laparoscopic image sample;
converting each laparoscopic image sample into a grayscale image, resampling each frame of grayscale image and its corresponding label data to the specified size, and extracting the edge contour of each frame of grayscale image to obtain an edge sample image for each grayscale image;
constructing training samples based on the grayscale images and their corresponding label data and edge sample images; and
training the edge segmentation network based on the training samples.
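The following sketch illustrates how a single training sample of claim 7 might be prepared; the binary mask label format, the Canny edge extraction and the 512x512 size are assumptions made for the example.

    import cv2
    import numpy as np

    def build_training_sample(laparoscopic_bgr, label_mask, size=(512, 512)):
        gray = cv2.cvtColor(laparoscopic_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, size, interpolation=cv2.INTER_LINEAR)
        # The label data (tissue contour) is resampled to the same specified size.
        label = cv2.resize(label_mask, size, interpolation=cv2.INTER_NEAREST)
        # Edge sample image extracted from the grayscale image.
        edge = cv2.Canny(gray, 50, 150)
        x = np.stack([gray, edge]).astype(np.float32) / 255.0   # network input
        return x, label.astype(np.int64)                         # input, target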
8. The method of claim 1, wherein converting the laparoscopic image to a grayscale image comprises:
scaling the pixel value of each of the R, G and B channels of each pixel in the laparoscopic image by a corresponding proportion to obtain weighted R, G and B channel values; and
summing the weighted R, G and B channel values of each pixel in the laparoscopic image to obtain the grayscale value of that pixel in the grayscale image.
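Claim 8 describes a weighted-channel grayscale conversion; the sketch below uses the standard ITU-R BT.601 proportions as an illustrative assumption, since the claim does not fix the weights.

    import numpy as np

    def to_grayscale(laparoscopic_rgb):
        r = laparoscopic_rgb[..., 0].astype(np.float32)
        g = laparoscopic_rgb[..., 1].astype(np.float32)
        b = laparoscopic_rgb[..., 2].astype(np.float32)
        # Scale each channel by its proportion, then sum to obtain the grayscale value.
        return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)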
9. An electronic device, comprising a processor and a memory, wherein:
the memory is configured to store a computer program executable by the processor; and
the processor is coupled to the memory and configured to execute the computer program to implement the laparoscope-based biological tissue edge extraction method of any one of claims 1-8.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by an electronic device, enable the electronic device to perform the laparoscope-based biological tissue edge extraction method of any one of claims 1-8.
CN202210111521.9A 2022-01-26 2022-01-26 Biological tissue edge extraction method based on laparoscope and electronic equipment Pending CN114494317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210111521.9A CN114494317A (en) 2022-01-26 2022-01-26 Biological tissue edge extraction method based on laparoscope and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210111521.9A CN114494317A (en) 2022-01-26 2022-01-26 Biological tissue edge extraction method based on laparoscope and electronic equipment

Publications (1)

Publication Number Publication Date
CN114494317A true CN114494317A (en) 2022-05-13

Family

ID=81478260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210111521.9A Pending CN114494317A (en) 2022-01-26 2022-01-26 Biological tissue edge extraction method based on laparoscope and electronic equipment

Country Status (1)

Country Link
CN (1) CN114494317A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363111A (en) * 2023-04-06 2023-06-30 哈尔滨市科佳通用机电股份有限公司 Method for identifying clamping fault of guide rod of railway wagon manual brake



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination