CN114581668A - Segmentation model construction and contour recognition method and device and computer equipment


Info

Publication number
CN114581668A
Authority
CN
China
Prior art keywords
branch, segmentation, result, image, fusion
Prior art date
Legal status
Pending
Application number
CN202210221162.2A
Other languages
Chinese (zh)
Inventor
左廷涛
李新泰
戴辰晨
陈景春
许小强
Current Assignee
Lepu Medical Technology Beijing Co Ltd
Original Assignee
Lepu Medical Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Lepu Medical Technology Beijing Co Ltd
Priority to CN202210221162.2A
Publication of CN114581668A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a segmentation model construction and contour recognition method, a device and computer equipment, wherein the method comprises: acquiring a sample image, the sample images comprising positive sample images marked with target areas and unmarked negative sample images; inputting the negative sample image into a shared encoder of a preset image segmentation model and performing encoding processing to obtain an encoding result; inputting the encoding result into a segmentation branch decoder and an edge detection branch decoder respectively to obtain a segmentation branch result and an edge detection result; inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder to obtain a fusion segmentation result; obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result; obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result; and adjusting the parameters of the preset image segmentation model based on the loss function to obtain the multi-branch fusion segmentation model.

Description

Segmentation model construction and contour recognition method and device and computer equipment
Technical Field
The invention relates to the technical field of ultrasound image recognition, and in particular to a segmentation model construction and contour recognition method, a segmentation model construction and contour recognition device, and computer equipment.
Background
Cardiovascular disease is one of the major diseases threatening human health. It is mainly characterized by severe atherosclerosis of the coronary arteries, in which narrowing, blockage or thrombosis of the coronary arteries results in myocardial ischemia, hypoxia or myocardial infarction. Intravascular ultrasound (IVUS) is one of the most effective imaging methods for diagnosing cardiovascular disease: through interventional catheter technology and ultrasound imaging technology, it can detect structures inside blood vessels and display pathological changes within them. Effective analysis of IVUS images can better help physicians determine diagnostic results and make diagnosis and treatment plans.
One of the most important steps in IVUS image analysis is delineation of the lumen and the media-adventitia boundary. Clinically common approaches include manual delineation, virtual histology imaging techniques, and digital image processing techniques. IVUS image segmentation methods based on digital image processing mainly comprise traditional segmentation methods and deep-learning-based methods. However, owing to ultrasonic speckle noise and the presence of various artifacts, lesions and surrounding structures, traditional IVUS image segmentation algorithms are susceptible to interference, struggle to guarantee the accuracy of automatic segmentation of the lumen and media-adventitia regions, and are time-consuming. Deep-learning-based IVUS image segmentation algorithms can achieve fast automatic segmentation of the intima and media-adventitia regions through the training of a neural network and have a certain robustness to ultrasonic noise. However, with a limited amount of data, a single-task IVUS image segmentation model can hardly learn sufficient image features. For IVUS images with surrounding structure interference, vessel bifurcation and similar conditions, the accuracy of the segmentation result is difficult to guarantee, and in particular the localization of pixels in edge areas is not accurate enough.
Disclosure of Invention
Therefore, the technical problem to be solved by the present invention is to overcome the low accuracy of conventional digital-image-processing-based IVUS image segmentation methods under various interference factors, and thereby provide a segmentation model construction and contour recognition method, apparatus and computer device.
According to a first aspect, the embodiment of the invention discloses a method for constructing a multi-branch fusion segmentation model, which comprises the following steps: acquiring a sample image, the sample images comprising positive sample images marked with target areas and unmarked negative sample images; inputting the negative sample image as input data into a shared encoder of a preset image segmentation model and performing encoding processing to obtain an encoding result; inputting the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result; inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result; obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result; obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result; and adjusting parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model.
Optionally, the multi-branch fusion segmentation model includes an n-level shared encoder and a multi-branch decoder, the multi-branch decoder includes an n-level segmentation branch decoder, an n-level edge detection branch decoder and an (n-1)-level fusion branch decoder, and inputting the encoding result into the segmentation branch decoder and the edge detection branch decoder of the preset image segmentation model respectively to obtain the segmentation branch result and the edge detection result includes: inputting the sample image into the first-level shared encoder to obtain first encoding output data; inputting the first encoding output data into the second-level shared encoder, the first-level segmentation branch decoder and the first-level edge detection branch decoder to obtain first segmentation data and first edge detection data; inputting the output data of the (m-1)-th level shared encoder into the m-th level shared encoder to obtain the output data of the m-th level shared encoder, where 2 ≤ m ≤ n; inputting the output data of the m-th level shared encoder and the output data of the (m-1)-th level segmentation branch decoder into the m-th level segmentation branch decoder to obtain the output data of the m-th level segmentation branch decoder, where 2 ≤ m ≤ n; inputting the output data of the m-th level shared encoder and the output data of the (m-1)-th level edge detection branch decoder into the m-th level edge detection branch decoder to obtain the output data of the m-th level edge detection branch decoder, where 2 ≤ m ≤ n; and obtaining the segmentation branch result based on the first segmentation data and the output data of the m-th level segmentation branch decoder, and obtaining the edge detection result based on the first edge detection data and the output data of the m-th level edge detection branch decoder.
Optionally, inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain the fusion segmentation result includes: inputting the output data of the first-level segmentation branch decoder and the output data of the first-level edge detection branch decoder into the first-level fusion branch decoder to obtain the output data of the first-level fusion branch decoder; inputting the output data of the m-th level segmentation branch decoder, the output data of the m-th level edge detection branch decoder and the output data of the (m-1)-th level fusion branch decoder into the m-th level fusion branch decoder to obtain the output data of the m-th level fusion branch decoder; and obtaining the fusion segmentation result based on the output data of the first-level fusion branch decoder and the output data of the m-th level fusion branch decoder.
Optionally, obtaining the loss function of the preset image segmentation model based on the positive sample image and the output result includes: calculating a segmentation branch loss function based on the positive sample image and the segmentation branch result; calculating an edge detection loss function based on the positive sample image and the edge detection result; calculating a fusion segmentation loss function based on the positive sample image and the fusion segmentation result; and obtaining the loss function based on the segmentation branch loss function, the edge detection loss function and the fusion segmentation loss function.
Optionally, acquiring the sample image includes: preprocessing the sample image, labeling the preprocessed sample image, and taking the labeled sample image as the positive sample image.
Optionally, acquiring the sample image further includes: performing augmentation processing on the sample image to obtain an augmented sample image.
According to a second aspect, an embodiment of the present invention further discloses an ultrasound imaging contour recognition method, including: acquiring an original image; inputting the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image; the preset multi-branch fusion segmentation model is obtained by using the construction method of the multi-branch fusion segmentation model according to the first aspect or any optional embodiment of the first aspect; and extracting the contour based on the image segmentation result to obtain the contour recognition result of each region of the original image.
Optionally, performing contour extraction based on the image segmentation result to obtain the contour recognition result of each region of the original image includes: extracting contours based on the image segmentation result to obtain a first image result; performing Gaussian blur processing on the first image result to obtain a second image result; performing binarization processing on the second image result to obtain a third image result; and performing region-of-interest selection processing on the third image result to obtain the final contour recognition result of each region.
Optionally, performing region-of-interest selection processing on the third image result to obtain the final contour recognition result of each region includes: determining, based on the third image result, the maximum connected domain with a gray value of 255 that contains the image center point of the third image result as the region of interest; obtaining a contour point set based on the set of intersections of the region of interest with n rays emitted from the image center of the third image result at n uniformly spaced angles; and sequentially connecting the curve fitting results through the contour point set to obtain the final contour recognition result of each region.
According to a third aspect, the embodiment of the present invention further discloses a device for constructing a multi-branch fusion segmentation model, including: a first acquisition module for acquiring a sample image, the sample images comprising positive sample images marked with target areas and unmarked negative sample images; an encoding module for inputting the negative sample image as input data into a shared encoder of a preset image segmentation model for encoding to obtain an encoding result; a decoding module for inputting the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result, and for inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result; an output module for obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result; a loss function module for obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result; and an adjusting module for adjusting the parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model.
According to a fourth aspect, an embodiment of the present invention further discloses an ultrasound imaging contour recognition apparatus, including: the second acquisition module is used for acquiring an original image; the segmentation module is used for inputting the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image; the preset multi-branch fusion segmentation model is obtained by using the construction method of the multi-branch fusion segmentation model according to the first aspect or any optional embodiment of the first aspect; and the contour extraction module is used for extracting contours based on the image segmentation result to obtain the contour recognition result of each region of the original image.
According to a fifth aspect, an embodiment of the present invention further discloses a computer device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to cause the at least one processor to perform the steps of the method for constructing a multi-branch fusion segmentation model according to the first aspect or any one of the optional embodiments of the first aspect or the method for ultrasound imaging contour recognition according to the second aspect or any one of the optional embodiments of the second aspect.
According to a sixth aspect, the present invention further discloses a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for constructing a multi-branch fusion segmentation model according to the first aspect or any one of the optional embodiments of the first aspect, or the steps of the method for identifying an ultrasound imaging contour according to the second aspect or any one of the optional embodiments of the second aspect.
The technical scheme of the invention has the following advantages:
the invention provides a segmentation model construction and contour recognition method, a segmentation model construction and contour recognition device and computer equipment, wherein the model construction method comprises the following steps: acquiring a sample image, the sample images comprising positive sample images marked with target areas and unmarked negative sample images; inputting the negative sample image as input data into a shared encoder of a preset image segmentation model and performing encoding processing to obtain an encoding result; inputting the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result; inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result; obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result; obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result; and adjusting parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model. After the shared encoder, the sample image passes through the segmentation branch decoder, the edge detection branch decoder and the fusion branch decoder, so that different segmentation results are obtained and the accuracy of model recognition is improved. The edge detection branch decoder enhances the learning and generalization ability, and the fusion branch decoder makes full use of the image features learned by each branch task, further improving the accuracy of network segmentation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating a specific example of a method for constructing a multi-branch fusion segmentation model according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific example of an ultrasound imaging contour recognition method in an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a specific example of an apparatus for constructing a multi-branch fusion segmentation model according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of a specific example of an ultrasound imaging contour recognition apparatus in an embodiment of the present invention;
FIG. 5 is a diagram showing a specific example of a computer device according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a specific example of a method for constructing a multi-branch fusion segmentation model according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a specific example of a method for constructing a multi-branch fusion segmentation model according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a specific example of an ultrasound imaging contour recognition method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of another specific example of the ultrasound imaging contour recognition method in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention discloses a construction method of a multi-branch fusion segmentation model, which comprises the following steps:
step 101: acquiring a sample image; the sample images include a positive sample image labeled with a target region and an unlabeled negative sample image. Illustratively, the sample image may be an IVUS image obtained using intravascular ultrasound techniques, which may be used for analysis of cardiovascular disease. The target region is the region of the intima and the media-adventitia of the blood vessel in the sample image, the positive sample image is the sample image after the artificial labeling, and the negative sample image is the sample image without the artificial labeling.
Step 102: inputting the negative sample image as input data into a shared encoder of a preset image segmentation model and performing encoding processing to obtain an encoding result.
Illustratively, a sample image without manual labeling is used as input data and input into the shared encoder of the image segmentation model, and encoding yields the corresponding encoding result. The preset image segmentation model is shown in fig. 7: it comprises the input image, a shared encoder, a decoder and the segmentation output, where the shared encoder contains n encoder layers and the decoder comprises a segmentation branch decoder (n layers), an edge detection branch decoder (n layers) and a fusion branch decoder (n-1 layers). In the embodiment of the invention, considering the size of an ultrasound image and its heavy noise, a shared encoder with 4 layers is constructed, which filters noise information and prevents overfitting while avoiding the loss of too much detail information. Each of the 4 encoder layers comprises a convolution block and a downsampling block. As shown in fig. 6, the convolution block is composed of two identical subunits, each comprising a convolution layer, a batch normalization layer, a Dropout layer and an activation function layer connected in sequence. The downsampling block is formed by a convolution layer with stride 2 and an activation function layer connected in sequence; the convolution kernel size of the convolution layers is 3 × 3, and the LeakyReLU function is uniformly used as the activation function. The embodiment of the invention does not limit the number of layers of the shared encoder, the convolution kernel size in each encoder layer or the type of activation function, which can be determined by a person skilled in the art according to actual needs.
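For reference, the following is a minimal PyTorch sketch of one shared-encoder layer as described above (a two-subunit convolution block followed by a stride-2 downsampling block); the class names, channel arguments and dropout rate are illustrative assumptions rather than values taken from the patent:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two identical subunits: Conv3x3 -> BatchNorm -> Dropout -> LeakyReLU."""
    def __init__(self, in_ch: int, out_ch: int, p_drop: float = 0.1):
        super().__init__()
        def subunit(ci: int, co: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(ci, co, kernel_size=3, padding=1),
                nn.BatchNorm2d(co),
                nn.Dropout2d(p_drop),
                nn.LeakyReLU(inplace=True),
            )
        self.block = nn.Sequential(subunit(in_ch, out_ch), subunit(out_ch, out_ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

class EncoderLayer(nn.Module):
    """One of the 4 shared-encoder layers: convolution block + downsampling block."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = ConvBlock(in_ch, out_ch)
        # downsampling block: stride-2 3x3 convolution followed by LeakyReLU
        self.down = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor):
        skip = self.conv(x)  # feature map reused by the same-level decoders
        return self.down(skip), skip
```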
Step 103: inputting the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result; and inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result.
Illustratively, after passing through the shared encoder, the output result of the shared encoder is decoded as the input data of the segmentation branch decoder and the edge detection branch decoder. The segmentation branch decoder and the edge detection branch decoder have the same structure and the same number of layers and are symmetrical to the structure of the shared encoder; the output data of each level of the shared encoder is used as input data of the corresponding level of the segmentation branch decoder and the edge detection branch decoder. The segmentation and edge detection branch decoders each consist of 4 sequentially connected decoder layers, each decoder layer being formed by an upsampling block and a convolution block connected in series. The upsampling block is formed by a 2× upsampling layer and a convolution layer connected in sequence, and the convolution block has the same structure as the convolution block in the shared encoder. Same-level encoder features and decoder features are connected by skip connections, and the convolution kernel size of the convolution layers is 3 × 3. Compared with a single large convolution kernel, the series connection of several small convolution kernels enlarges the receptive field while keeping fewer parameters and a smaller amount of computation, and introduces more activations to improve the nonlinear fitting capability of the network. The encoder uses downsampling, the decoder uses upsampling, and feature maps in the downsampling and upsampling paths are connected by skip connections, so that each layer of feature maps extracted by the network can be used more fully.
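A hedged sketch of one such decoder layer, reusing the ConvBlock from the encoder sketch above; the bilinear upsampling mode and the channel arguments are assumptions:

```python
import torch
import torch.nn as nn

class DecoderLayer(nn.Module):
    """Upsampling block (2x upsampling + 3x3 convolution) followed by a
    convolution block applied to the concatenation with the same-level
    encoder feature map (skip connection)."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        )
        self.conv = ConvBlock(out_ch + skip_ch, out_ch)  # ConvBlock from the encoder sketch

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))  # skip connection
```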
The decoding output results of the segmentation branch decoder and the edge detection branch decoder are used as input data of the fusion branch decoder for fusion decoding. The fusion branch decoder has one layer fewer than the shared encoder, the segmentation branch decoder and the edge detection branch decoder; it is formed by 3 sequentially connected decoder modules, each fusion module consisting of a convolution block and an upsampling layer. The convolution block is formed by two identical subunits connected in sequence, each comprising a convolution layer, a batch normalization layer, a Dropout layer and an activation function layer connected in sequence, reducing the likelihood of overfitting. The convolution kernels in the convolution layers are all 3 × 3, and the LeakyReLU function is uniformly used as the activation function in the embodiment of the invention.
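A sketch of one fusion-branch module under the same assumptions; the channel bookkeeping (the caller must make in_ch equal the summed channels of the concatenated inputs) is illustrative:

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Convolution block followed by an upsampling layer; consumes the
    same-level segmentation and edge features and, from the second module
    on, the previous fusion output."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = ConvBlock(in_ch, out_ch)  # ConvBlock from the encoder sketch
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, seg_feat, edge_feat, prev=None):
        feats = [seg_feat, edge_feat] if prev is None else [seg_feat, edge_feat, prev]
        return self.up(self.conv(torch.cat(feats, dim=1)))
```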
Step 104: obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result. Illustratively, after passing through the segmentation branch decoder, the edge detection branch decoder and the fusion branch decoder, three segmentation results corresponding to the sample image are obtained, and these three segmentation results are used as the output result of the image segmentation model.
Step 105: obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result. Illustratively, according to each segmentation result in the above step 104, the loss functions of the segmentation branch decoder, the edge detection branch decoder and the fusion branch decoder are respectively calculated against the labeled regions in the positive sample image, and the loss function of the image segmentation model is a linear combination of the three branch loss functions.
Step 106: adjusting parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model. Illustratively, the weights and biases in the image segmentation model are updated according to the loss function calculated in the above step 105; iteration is repeated until the loss function no longer decreases, the weights and biases of the final image segmentation model are retained, and the model corresponding to the minimum loss function is used as the final trained model.
The invention provides a construction method of a multi-branch fusion segmentation model, which comprises the following steps: acquiring a sample image, the sample images comprising positive sample images marked with target areas and unmarked negative sample images; inputting the negative sample image as input data into a shared encoder of a preset image segmentation model and performing encoding processing to obtain an encoding result; inputting the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result; inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result; obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result; obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result; and adjusting parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model. After the shared encoder, the sample image passes through the segmentation branch decoder, the edge detection branch decoder and the fusion branch decoder, so that different segmentation results are obtained and the accuracy of model recognition is improved. The edge detection branch decoder enhances the learning and generalization ability, and the fusion branch decoder makes full use of the image features learned by each branch task, further improving the accuracy of network segmentation.
As an optional embodiment of the present invention, the multi-branch fusion segmentation model includes an n-level shared encoder and a multi-branch decoder, the multi-branch decoder includes an n-level segmentation branch decoder, an n-level edge detection branch decoder and an (n-1)-level fusion branch decoder, and the process in step 103 of inputting the encoding result into the segmentation branch decoder and the edge detection branch decoder of the preset image segmentation model respectively to obtain the segmentation branch result and the edge detection result mainly includes: inputting the sample image into the first-level shared encoder to obtain first encoding output data; inputting the first encoding output data into the second-level shared encoder, the first-level segmentation branch decoder and the first-level edge detection branch decoder to obtain first segmentation data and first edge detection data; inputting the output data of the (m-1)-th level shared encoder into the m-th level shared encoder to obtain the output data of the m-th level shared encoder, where 2 ≤ m ≤ n; inputting the output data of the m-th level shared encoder and the output data of the (m-1)-th level segmentation branch decoder into the m-th level segmentation branch decoder to obtain the output data of the m-th level segmentation branch decoder, where 2 ≤ m ≤ n; inputting the output data of the m-th level shared encoder and the output data of the (m-1)-th level edge detection branch decoder into the m-th level edge detection branch decoder to obtain the output data of the m-th level edge detection branch decoder, where 2 ≤ m ≤ n; and obtaining the segmentation branch result based on the first segmentation data and the output data of the m-th level segmentation branch decoder, and obtaining the edge detection result based on the first edge detection data and the output data of the m-th level edge detection branch decoder.
As an optional embodiment of the present invention, in step 103, the process of inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result includes: inputting the output data of the first-level segmentation branch decoder and the output data of the first-level edge detection branch decoder into the first-level fusion branch decoder to obtain the output data of the first-level fusion branch decoder; inputting the output data of the m-th level segmentation branch decoder, the output data of the m-th level edge detection branch decoder and the output data of the (m-1)-th level fusion branch decoder into the m-th level fusion branch decoder to obtain the output data of the m-th level fusion branch decoder; and obtaining the fusion segmentation result based on the output data of the first-level fusion branch decoder and the output data of the m-th level fusion branch decoder.
Illustratively, fig. 7 is a schematic diagram of a network model with a 4-level shared encoder, a 4-level segmentation branch decoder, a 4-level edge detection branch decoder and a 3-level fusion branch decoder. The output of the first-level shared encoder is input to the first-level segmentation branch decoder and the first-level edge detection branch decoder respectively, and so on: the outputs of the second, third and fourth-level shared encoders are input to the second, third and fourth-level segmentation branch decoders and edge detection branch decoders respectively. In addition to the output data of the corresponding-level shared encoder, the input data of each level of the segmentation branch decoder and the edge detection branch decoder (beyond the first level) also includes the output data of the previous level of the same branch.
In the fusion branch decoder, the input of the first-level fusion branch decoder is the output data of the first-level segmentation branch decoder and the first-level edge detection branch decoder; except for the first-level fusion branch decoder, each level of the fusion branch decoder takes as input, in addition to the output data of the corresponding-level segmentation branch decoder and edge detection branch decoder, the output data of the previous-level fusion branch decoder. By fusing the features of the segmentation branch decoder and the edge detection branch decoder, the features learned by the segmentation branch task and the edge detection branch task can be fully utilized.
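The inter-branch wiring described above can be summarized by the following shape-agnostic sketch, in which each list entry is one level's module (placeholders for the concrete layers, not the patent's exact interfaces):

```python
def multi_branch_forward(x, encoders, seg_decoders, edge_decoders, fus_decoders):
    # n-level shared encoder: each level feeds the next level and its
    # same-level branch decoders
    enc_outs = []
    for enc in encoders:
        x = enc(x)
        enc_outs.append(x)

    # segmentation and edge branches: level m consumes the m-th encoder
    # output and the (m-1)-th output of the same branch
    seg_outs = [seg_decoders[0](enc_outs[0])]
    edge_outs = [edge_decoders[0](enc_outs[0])]
    for m in range(1, len(encoders)):
        seg_outs.append(seg_decoders[m](enc_outs[m], seg_outs[m - 1]))
        edge_outs.append(edge_decoders[m](enc_outs[m], edge_outs[m - 1]))

    # fusion branch (n-1 levels): level m also consumes the previous fusion output
    fus = fus_decoders[0](seg_outs[0], edge_outs[0])
    for m in range(1, len(fus_decoders)):
        fus = fus_decoders[m](seg_outs[m], edge_outs[m], fus)

    return seg_outs[-1], edge_outs[-1], fus
```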
As an alternative embodiment of the present invention, step 105 includes: calculating a segmentation branch loss function based on the positive sample image and the segmentation branch result; calculating an edge detection loss function based on the positive sample image and the edge detection result; calculating a fusion segmentation loss function based on the positive sample image and the fusion segmentation result; and obtaining the loss function based on the segmentation branch loss function, the edge detection loss function and the fusion segmentation loss function.
Illustratively, the positive sample image is a sample image in which the intima and media-adventitia of the intravascular ultrasound image are accurately labeled. By comparing the recognition result of each decoder with the positive sample image, the segmentation branch loss function $L_{Seg}$, the edge detection branch loss function $L_{Edge}$ and the fusion branch loss function $L_{Fus}$ are calculated respectively. The total loss function of the segmentation model is:

$$L = \alpha L_{Seg} + \beta L_{Edge} + L_{Fus}$$

where $\alpha$ and $\beta$ are hyper-parameters set according to the training situation; the segmentation branch and the edge detection branch are auxiliary branches, and $\alpha, \beta \in [0, 1]$.
$$L_{Seg} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C} y_{n}^{(c)} \log \hat{y}_{n}^{(c)}$$

where N is the number of pixels, C is the number of segmentation label classes, $y_{n}^{(c)}$ is the probability that the n-th pixel in the manual labeling map belongs to the c-th label, and $\hat{y}_{n}^{(c)}$ is the predicted probability that the n-th pixel in the network prediction result belongs to the c-th label.
$$L_{Edge} = -\frac{1}{N}\sum_{n=1}^{N}\left[ y^{(n)} \log \hat{y}^{(n)} + \left(1 - y^{(n)}\right) \log\left(1 - \hat{y}^{(n)}\right) \right]$$

where N is the number of pixels, $y^{(n)} \in \{0, 1\}$ indicates whether the n-th pixel in the manual labeling map is an edge pixel, and $\hat{y}^{(n)}$ is the predicted probability that the n-th pixel in the network prediction result is an edge pixel.
The loss function is back-propagated, and the weights and biases of the network model are updated. The iteration is repeated until the loss function no longer decreases. The embodiment of the present invention does not limit the linear combination mode of the loss functions, which can be determined by a person skilled in the art according to actual needs.
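A hedged PyTorch sketch of this training step, assuming multi-class cross-entropy for the segmentation and fusion branches and binary cross-entropy for the edge branch per the loss definitions above; the optimizer choice (Adam), its defaults and the early-stopping patience are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def total_loss(seg_logits, edge_logits, fus_logits, seg_gt, edge_gt,
               alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    # seg_gt: per-pixel class indices; edge_gt: per-pixel edge probabilities
    l_seg = F.cross_entropy(seg_logits, seg_gt)                        # L_Seg
    l_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_gt)  # L_Edge
    l_fus = F.cross_entropy(fus_logits, seg_gt)                        # L_Fus
    return alpha * l_seg + beta * l_edge + l_fus

def train(model, loader, epochs: int = 200, patience: int = 10):
    opt = torch.optim.Adam(model.parameters())
    best, stall = float("inf"), 0
    for _ in range(epochs):
        running = 0.0
        for img, seg_gt, edge_gt in loader:
            seg, edge, fus = model(img)
            loss = total_loss(seg, edge, fus, seg_gt, edge_gt)
            opt.zero_grad()
            loss.backward()        # back-propagate the loss
            opt.step()             # update weights and biases
            running += loss.item()
        if running < best:         # keep the parameters with the minimum loss
            best, stall = running, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            stall += 1
            if stall >= patience:  # loss no longer decreasing
                break
```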
When the loss function reaches its minimum, several initial network models are obtained. The output result of the fusion branch is taken as the initial segmentation result of the intima and media-adventitia of the sample image; with the Jaccard similarity coefficient, the Hausdorff distance and the percentage area difference as segmentation evaluation indexes, the initial segmentation result output by the fusion branch of each network model is evaluated, and the network model with the best comprehensive evaluation is taken as the network model applied in the system.
Wherein the Jaccard similarity coefficient formula is as follows:
$$JSC = \frac{\left| R_{pred} \cap R_{true} \right|}{\left| R_{pred} \cup R_{true} \right|}$$

where $R_{pred}$ denotes the vascular structure region in the prediction result and $R_{true}$ denotes the vascular structure region in the manual labeling map.
The Hausdorff distance formula is as follows:
$$HD = \max\left\{ \max_{a \in C_{pred}} \min_{b \in C_{true}} d(a, b),\ \max_{b \in C_{true}} \min_{a \in C_{pred}} d(a, b) \right\}$$

where $C_{pred}$ denotes the edge of the vascular structure region in the prediction result, $C_{true}$ denotes the edge of the vascular structure region in the manual labeling map, a and b are points on $C_{pred}$ and $C_{true}$ respectively, and d(a, b) denotes the Euclidean distance between a and b.
The percentage area difference PAD formula is as follows:
$$PAD = \frac{\left| A_{pred} - A_{true} \right|}{A_{true}}$$

where $A_{pred}$ denotes the area of the vascular structure region in the prediction result and $A_{true}$ denotes the area of the vascular structure region in the manual labeling map.
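The three evaluation indexes can be computed as in the following sketch, assuming boolean region masks and (k, 2) arrays of edge points as inputs; the SciPy helper is one possible implementation choice:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Jaccard similarity coefficient between two boolean region masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(inter) / float(union)

def hausdorff(pred_edge: np.ndarray, true_edge: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (k, 2) edge point sets."""
    return max(directed_hausdorff(pred_edge, true_edge)[0],
               directed_hausdorff(true_edge, pred_edge)[0])

def pad(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Percentage area difference relative to the manually labeled area."""
    a_pred, a_true = float(pred_mask.sum()), float(true_mask.sum())
    return abs(a_pred - a_true) / a_true
```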
As an optional embodiment of the present invention, in step 101, acquiring the sample image includes: preprocessing the sample image, labeling the preprocessed sample image, and taking the labeled sample image as the positive sample image.
For example, since a sample image directly acquired clinically is limited by the acquisition procedure, the acquisition instrument and the like, it may not be directly usable for training the model. In this case the sample image may be preprocessed; the preprocessing may include screening, unifying the resolution and normalizing the gray values, and the resolution and gray value range may be set according to specific requirements and hardware limitations. The uniform resolution used in this embodiment is 256 × 256, and the gray value range is [0, 1]. The embodiment of the invention does not limit the type of the sample image or the preprocessing mode, which can be determined by a person skilled in the art according to actual needs.
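A minimal preprocessing sketch matching the stated settings (256 × 256 resolution, gray values in [0, 1]); the use of OpenCV and bilinear interpolation is an assumption:

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Load an IVUS frame, unify its resolution and normalize its gray values."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (256, 256), interpolation=cv2.INTER_LINEAR)
    return img.astype(np.float32) / 255.0  # gray value range [0, 1]
```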
As an optional embodiment of the present invention, in step 101, acquiring the sample image further includes: performing augmentation processing on the sample image to obtain an augmented sample image. Illustratively, after the sample images are preprocessed and manually labeled, the sample images are augmented to expand the number of samples, improve segmentation accuracy and avoid overfitting during training. The augmentation method may be to rotate the sample image uniformly at certain angle intervals and to flip it vertically and horizontally. The embodiment of the present invention does not limit the specific augmentation mode, which can be determined by a person skilled in the art according to actual needs.
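An augmentation sketch under the stated scheme (uniform-interval rotations plus vertical and horizontal flips); the 30-degree step is an illustrative assumption:

```python
import cv2
import numpy as np

def augment(img: np.ndarray, mask: np.ndarray, step_deg: int = 30):
    """Return the original image/label pair plus flipped and rotated copies."""
    h, w = img.shape[:2]
    out = [(img, mask),
           (np.flipud(img), np.flipud(mask)),   # up-down flip
           (np.fliplr(img), np.fliplr(mask))]   # left-right flip
    for angle in range(step_deg, 360, step_deg):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append((cv2.warpAffine(img, M, (w, h)),
                    cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)))
    return out
```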
The embodiment of the invention discloses an ultrasound imaging contour recognition method, which comprises the following steps:
step 201: an original image is acquired. Illustratively, the raw image is a clinically obtained intravascular ultrasound image, and a specific raw image is shown in fig. 8.
Step 202: inputting the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image; the preset multi-branch fusion segmentation model is obtained by using the construction method of the multi-branch fusion segmentation model according to any embodiment. Illustratively, the acquired original image is input into the multi-branch fusion segmentation model trained in the method embodiment, and a segmentation result corresponding to the original image is obtained.
Step 203: extracting the contour based on the image segmentation result to obtain the contour recognition result of each region of the original image. For example, the segmentation result obtained in step 202 is actually a pixel-wise predicted probability map: the intima and media-adventitia regions may not be unique, the edges may be unclear, and there is no specific contour line. It is therefore necessary to explicitly extract the corresponding contours from the segmentation result to obtain clearer and more accurate contour recognition results for each region.
The invention provides an ultrasound imaging contour recognition method, which comprises: acquiring an original image; inputting the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image, the preset multi-branch fusion segmentation model being obtained by using the construction method of the multi-branch fusion segmentation model in the above embodiment; and extracting the contour based on the image segmentation result to obtain the contour recognition result of each region of the original image. The preset multi-branch fusion segmentation model recognizes the segmentation result of the sample image, improving recognition accuracy; contour extraction is then performed on the basis of the recognition result, so that a clearer and more accurate contour recognition result of each region is obtained on the basis of the segmentation result.
As an optional embodiment of the present invention, the step 203 includes: extracting contours based on the image segmentation result to obtain a first image result; performing Gaussian blur processing on the first image result to obtain a second image result; carrying out binarization processing on the second image result to obtain a third image result; and carrying out region-of-interest selection processing on the third image result to obtain final contour recognition results of all regions.
Illustratively, to address the problems that regions are not unique and edges are not obvious in the segmentation result obtained by the multi-branch fusion segmentation model, Gaussian blur is adopted to remove isolated pixels and regions, and the filter kernel size may be 7 × 7. The binarization method determines a definite region edge according to an automatically calculated or manually set threshold applied to the preliminary segmentation result of the intima and media-adventitia, setting the gray value to 0 or 255, where gray value 255 represents the intima or media-adventitia region and gray value 0 represents the background. The embodiment of the invention does not limit the filter kernel size or the binarization threshold, which can be determined by a person skilled in the art according to actual needs.
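A sketch of this post-processing chain (7 × 7 Gaussian blur followed by thresholding to a 0/255 map); the 0.5 default threshold on the probability map is an assumption, since the patent allows the threshold to be automatically calculated or manually set:

```python
import cv2
import numpy as np

def binarize(prob_map: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Blur the per-pixel probability map, then threshold it to 0 / 255."""
    blurred = cv2.GaussianBlur(prob_map.astype(np.float32), (7, 7), 0)
    return np.where(blurred >= thresh, 255, 0).astype(np.uint8)
```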
As an optional embodiment of the present invention, in step 203, performing region-of-interest selection processing on the third image result to obtain the final contour recognition result of each region includes: determining, based on the third image result, the maximum connected domain with the preset gray value that contains the image center point of the third image result as the region of interest; obtaining a contour point set based on the set of intersection points of the region of interest with a plurality of rays emitted from the image center of the third image result at a plurality of uniformly spaced angles; and sequentially connecting the curve fitting results through the contour point set to obtain the final contour recognition result of each region.
Illustratively, after a definite region edge is obtained, the region edge needs to be marked and divided. The corresponding method takes as the region of interest (ROI) the maximum connected domain with gray value 255 that contains the image center point, which solves the problem that the intima and media-adventitia regions are not unique and yields smooth intima and media-adventitia contours that conform to human visual habits. Here n is set to 36 according to the resolution of the image, and fig. 9 shows the image segmentation result after labeling is completed.
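A hedged sketch of the ROI selection and ray-based contour extraction: keep the 255-valued connected component containing the image center (assumed to lie inside the target region), cast n = 36 uniformly spaced rays from the center, take the outermost ROI point on each ray, and close the contour with a fitted curve; the periodic SciPy spline is an illustrative choice for the curve fitting:

```python
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

def extract_contour(binary: np.ndarray, n_rays: int = 36) -> np.ndarray:
    h, w = binary.shape
    cy, cx = h // 2, w // 2
    # region of interest: the connected component of value 255 that contains
    # the image center point (assumed to be inside the vessel region)
    _, labels = cv2.connectedComponents((binary == 255).astype(np.uint8))
    roi = labels == labels[cy, cx]
    pts = []
    for k in range(n_rays):
        theta = 2 * np.pi * k / n_rays
        r, last = 0, (cx, cy)
        while True:  # walk outward; keep the last point still inside the ROI
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if not (0 <= x < w and 0 <= y < h) or (r > 0 and not roi[y, x]):
                break
            last = (x, y)
            r += 1
        pts.append(last)
    pts = np.asarray(pts, dtype=np.float64)
    # closed (periodic) spline through the 36 contour points
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=True)
    xs, ys = splev(np.linspace(0, 1, 360), tck)
    return np.stack([xs, ys], axis=1)
```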
The embodiment of the invention also discloses a device for constructing the multi-branch fusion segmentation model. As shown in fig. 3, the device comprises:
a first obtaining module 301, configured to obtain a sample image; the sample images include a positive sample image labeled with a target region and an unlabeled negative sample image. For example, the details are the contents of step 101 in the above method embodiment, and are not described here again.
And the encoding module 302 is configured to input the negative sample image as input data to a shared encoder of a preset image segmentation model, and perform encoding processing to obtain an encoding result. For example, the details are given in the above-mentioned step 102 of the method embodiment, and are not described herein again.
A decoding module 303, configured to input the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result, and to input the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result. For example, the details are the contents of step 103 in the above method embodiment, and are not described here again.
An output module 304, configured to obtain an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result, and the fusion segmentation result. For example, the details are given in the above-mentioned step 104 of the method embodiment, and are not described herein again.
A loss function module 305, configured to obtain a loss function of the preset image segmentation model based on the positive sample image and the output result. For example, the details are given in the above step 105 of the method embodiment, and are not described herein again.
An adjusting module 306, configured to adjust parameters of the preset image segmentation model based on the loss function, so as to obtain a multi-branch fusion segmentation model. For example, the details are given in the above step 106 of the method embodiment, and are not described herein again.
The device for constructing the multi-branch fusion segmentation model provided by the invention comprises: a first acquisition module 301 for acquiring a sample image, the sample images comprising positive sample images marked with target areas and unmarked negative sample images; an encoding module 302, configured to input the negative sample image as input data into a shared encoder of a preset image segmentation model and perform encoding processing to obtain an encoding result; a decoding module 303, configured to input the encoding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result, and to input the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result; an output module 304, configured to obtain an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result; a loss function module 305, configured to obtain a loss function of the preset image segmentation model based on the positive sample image and the output result; and an adjusting module 306, configured to adjust parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model. After the shared encoder, the sample image passes through the segmentation branch decoder, the edge detection branch decoder and the fusion branch decoder, so that different segmentation results are obtained and the accuracy of model recognition is improved. The edge detection branch decoder enhances the learning and generalization ability, and the fusion branch decoder makes full use of the image features learned by each branch task, further improving the accuracy of network segmentation.
As an optional embodiment of the present invention, the multi-branch fusion segmentation model includes an n-level shared encoder and a multi-branch decoder, the multi-branch decoder includes an n-level segmentation branch decoder, an n-level edge detection branch decoder and an (n-1)-level fusion branch decoder, and the decoding module 303 inputting the encoding result into the segmentation branch decoder and the edge detection branch decoder of the preset image segmentation model respectively to obtain the segmentation branch result and the edge detection result includes: inputting the sample image into the first-level shared encoder to obtain first encoding output data; inputting the first encoding output data into the second-level shared encoder, the first-level segmentation branch decoder and the first-level edge detection branch decoder to obtain first segmentation data and first edge detection data; inputting the output data of the (m-1)-th level shared encoder into the m-th level shared encoder to obtain the output data of the m-th level shared encoder, where 2 ≤ m ≤ n; inputting the output data of the m-th level shared encoder and the output data of the (m-1)-th level segmentation branch decoder into the m-th level segmentation branch decoder to obtain the output data of the m-th level segmentation branch decoder, where 2 ≤ m ≤ n; inputting the output data of the m-th level shared encoder and the output data of the (m-1)-th level edge detection branch decoder into the m-th level edge detection branch decoder to obtain the output data of the m-th level edge detection branch decoder, where 2 ≤ m ≤ n; and obtaining the segmentation branch result based on the first segmentation data and the output data of the m-th level segmentation branch decoder, and obtaining the edge detection result based on the first edge detection data and the output data of the m-th level edge detection branch decoder. Illustratively, the details are given above in the context of step 103 of the method embodiment.
As an optional implementation manner of the present invention, in the decoding module 303, inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result includes: inputting the output data of the first-level segmentation branch decoder and the output data of the first-level edge detection branch decoder into the first-level fusion branch decoder to obtain the output data of the first-level fusion branch decoder; inputting the output data of the m-th level segmentation branch decoder, the output data of the m-th level edge detection branch decoder and the output data of the (m-1)-th level fusion branch decoder into the m-th level fusion branch decoder to obtain the output data of the m-th level fusion branch decoder; and obtaining the fusion segmentation result based on the output data of the first-level fusion branch decoder and the output data of the m-th level fusion branch decoder. Illustratively, the details are given above in the context of step 103 of the method embodiment.
As an alternative embodiment of the present invention, the loss function module 305 includes: a first loss function submodule for calculating a segmentation branch loss function based on the positive sample image and the segmentation branch result; a second loss function submodule for calculating an edge detection loss function based on the positive sample image and the edge detection result; a third loss function submodule for calculating a fusion segmentation loss function based on the positive sample image and the fusion segmentation result; and a fourth loss function submodule for obtaining the loss function based on the segmentation branch loss function, the edge detection loss function and the fusion segmentation loss function. Illustratively, the details are given above in the context of step 105 of the method embodiment.
As an optional embodiment of the present invention, the first obtaining module 301 includes a preprocessing module configured to preprocess the sample image, label the preprocessed sample image, and use the labeled sample image as the positive sample image. For example, the details are given in the above step 101 of the method embodiment, and are not described herein again.
As an optional embodiment of the present invention, the first obtaining module 301 further includes an augmentation module configured to perform augmentation processing on the sample image to obtain an augmented sample image. For example, the details are given in the above step 101 of the method embodiment, and are not described herein again.
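As an illustration of such augmentation, a minimal OpenCV sketch follows; the specific transforms (a random horizontal flip and a small random rotation) and their parameters are assumptions, since the patent does not enumerate the augmentation operations.

```python
import cv2
import numpy as np

def augment(image, mask, rng=None):
    """Random horizontal flip and small rotation, applied identically to the
    image and its label mask (nearest-neighbour interpolation for the mask)."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:
        image, mask = cv2.flip(image, 1), cv2.flip(mask, 1)
    angle = rng.uniform(-10.0, 10.0)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))
    mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
    return image, mask
```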
The embodiment of the invention also discloses an ultrasonic imaging contour recognition device, as shown in fig. 4, the device comprises:
a second obtaining module 401, configured to obtain an original image. For example, the details are given in the above step 201 of the method embodiment, and are not described herein again.
A segmentation module 402, configured to input the original image into a preset multi-branch fusion segmentation model, so as to obtain an image segmentation result of the original image; the preset multi-branch fusion segmentation model is obtained by using the construction method of the multi-branch fusion segmentation model described in the above embodiment. For example, the details are given in the above step 202 of the method embodiment, and are not described herein again.
And a contour extraction module 403, configured to perform contour extraction based on the image segmentation result, so as to obtain a contour recognition result for each region of the original image. For example, the details are given in the above step 203 of the method embodiment, and are not described herein again.
The ultrasonic imaging contour recognition device comprises: a second obtaining module 401, configured to obtain an original image; a segmentation module 402, configured to input the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image, where the preset multi-branch fusion segmentation model is obtained by the construction method of the multi-branch fusion segmentation model in the above embodiment; and a contour extraction module 403, configured to perform contour extraction based on the image segmentation result to obtain a contour recognition result for each region of the original image. The preset multi-branch fusion segmentation model improves the accuracy with which the segmentation result of the image is recognized, and performing contour extraction on top of that segmentation result further yields a clearer and more accurate contour recognition result for each region.
As an optional embodiment of the present invention, the contour extraction module 403 includes: a first contour extraction module, configured to extract contours based on the image segmentation result to obtain a first image result; a Gaussian blur module, configured to perform Gaussian blur processing on the first image result to obtain a second image result; a binarization module, configured to perform binarization processing on the second image result to obtain a third image result; and a region selection module, configured to perform region-of-interest selection on the third image result to obtain a final contour recognition result for each region. For example, the details are given in the above step 203 of the method embodiment, and are not described herein again.
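A minimal OpenCV sketch of the blur-then-binarize stage is given below; the kernel size and threshold are assumed values, not taken from the patent.

```python
import cv2

def refine_segmentation(seg_mask, blur_ksize=5, thresh=127):
    """Gaussian blur followed by binarization, as in the Gaussian blur and
    binarization modules; seg_mask is an 8-bit single-channel image."""
    blurred = cv2.GaussianBlur(seg_mask, (blur_ksize, blur_ksize), 0)
    _, binary = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
    return binary
```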
As an optional embodiment of the present invention, in the contour extraction module 403, the region selection module includes: a region determining module, configured to determine, based on the third image result, the maximum connected region that has a grayscale value of 255 and contains the image center point of the third image result, as the region of interest; a contour point set module, configured to obtain a contour point set from the intersections between the region of interest and n rays emitted from the image center of the third image result at n uniformly spaced angles; and a connecting module, configured to sequentially connect the curve-fitting results through the contour point set to obtain the final contour recognition result for each region. For example, the details are given in the above step 203 of the method embodiment, and are not described herein again.
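The region selection step might be sketched as follows: isolate the 255-valued connected component containing the image center, then march n uniformly spaced rays outward from the center and keep the last in-region point on each ray as a contour point. The one-pixel step size, the default of 36 rays, and the use of cv2.connectedComponents are assumptions; the final curve fitting through the point set is omitted.

```python
import cv2
import numpy as np

def centered_roi(binary):
    """Keep only the 255-valued connected component containing the image
    center; assumes the center pixel lies inside the segmented region."""
    _, labels = cv2.connectedComponents(binary)
    center_label = labels[binary.shape[0] // 2, binary.shape[1] // 2]
    return np.where(labels == center_label, 255, 0).astype(np.uint8)

def ray_contour_points(roi, n_rays=36):
    """Last in-region point along each of n uniformly spaced rays."""
    h, w = roi.shape
    cy, cx = h // 2, w // 2
    points = []
    for k in range(n_rays):
        theta = 2 * np.pi * k / n_rays
        last = (cx, cy)
        for r in range(1, min(h, w) // 2):
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if not (0 <= x < w and 0 <= y < h) or roi[y, x] != 255:
                break
            last = (x, y)
        points.append(last)
    return np.array(points)
```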
An embodiment of the present invention further provides a computer device. As shown in fig. 5, the computer device may include a processor 501 and a memory 502, where the processor 501 and the memory 502 may be connected by a bus or in another manner; connection by a bus is taken as an example in fig. 5.
The processor 501 may be a central processing unit (CPU). The processor 501 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 502, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the construction method of the multi-branch fusion segmentation model and the ultrasound imaging contour recognition method in the embodiments of the present invention. The processor 501 executes various functional applications and data processing by running the non-transitory software programs, instructions, and modules stored in the memory 502, that is, implements the construction method of the multi-branch fusion segmentation model and the ultrasound imaging contour recognition method in the above method embodiments.
The memory 502 may include a storage program area and a storage data area, where the storage program area may store an operating system and an application program required by at least one function, and the storage data area may store data created by the processor 501, and the like. Further, the memory 502 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, which may be connected to the processor 501 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 502 and, when executed by the processor 501, perform the construction method of the multi-branch fusion segmentation model and the ultrasound imaging contour recognition method in the embodiments shown in fig. 1 or fig. 2.
The details of the computer device can be understood by referring to the corresponding related descriptions and effects in the embodiments shown in fig. 1 or fig. 2, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage medium may also comprise a combination of memories of the kinds described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (13)

1. A construction method of a multi-branch fusion segmentation model is characterized by comprising the following steps:
acquiring a sample image; the sample images comprise a positive sample image marked with a target area and an unmarked negative sample image;
inputting the negative sample image serving as input data into a shared encoder of a preset image segmentation model, and performing encoding processing to obtain an encoding result;
inputting the coding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result; inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result;
obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result;
obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result;
and adjusting parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model.
2. The method of claim 1, wherein the multi-branch fusion segmentation model comprises an n-level shared encoder and a multi-branch decoder, the multi-branch decoder comprising an n-level segmentation branch decoder, an n-level edge detection branch decoder, and an (n-1)-level fusion branch decoder,
and inputting the coding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model respectively to obtain a segmentation branch result and an edge detection result comprises the following steps:
inputting the sample image into a first-level shared encoder to obtain first encoding output data;
inputting the first encoding output data into a second-level shared encoder, a first-level segmentation branch decoder and a first-level edge detection branch decoder to obtain first segmentation data and first edge detection data;
inputting the output data of the (m-1)-th-level shared encoder into the m-th-level shared encoder to obtain the output data of the m-th-level shared encoder, wherein 2 ≤ m ≤ n;
inputting the output data of the m-th-level shared encoder and the output data of the (m-1)-th-level segmentation branch decoder into the m-th-level segmentation branch decoder to obtain the output data of the m-th-level segmentation branch decoder, wherein 2 ≤ m ≤ n;
inputting the output data of the m-th-level shared encoder and the output data of the (m-1)-th-level edge detection branch decoder into the m-th-level edge detection branch decoder to obtain the output data of the m-th-level edge detection branch decoder, wherein 2 ≤ m ≤ n;
and obtaining the segmentation branch result based on the first segmentation data and the output data of the m-th-level segmentation branch decoders, and obtaining the edge detection result based on the first edge detection data and the output data of the m-th-level edge detection branch decoders.
3. The method according to claim 2, wherein inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into the fusion branch decoder of the preset image segmentation model to obtain the fusion segmentation result comprises:
inputting the output data of the first-level segmentation branch decoder and the output data of the first-level edge detection branch decoder into the first-level fusion branch decoder to obtain the output data of the first-level fusion branch decoder;
inputting the output data of the m-th-level segmentation branch decoder, the output data of the m-th-level edge detection branch decoder, and the output data of the (m-1)-th-level fusion branch decoder into the m-th-level fusion branch decoder to obtain the output data of the m-th-level fusion branch decoder;
and obtaining the fusion segmentation result based on the output data of the first-level fusion branch decoder and the output data of the m-th-level fusion branch decoders.
4. The method of claim 1, wherein obtaining the loss function of the preset image segmentation model based on the positive sample image and the output result comprises:
calculating a segmentation branch loss function based on the positive sample image and the segmentation branch result;
calculating an edge detection loss function based on the positive sample image and the edge detection result;
calculating a fusion segmentation loss function based on the positive sample image and the fusion segmentation result;
and obtaining the loss function based on the segmentation branch loss function, the edge detection loss function and the fusion segmentation loss function.
5. The method of claim 1, wherein said obtaining a sample image comprises:
preprocessing the sample image, labeling the preprocessed sample image, and taking the labeled sample image as the positive sample image.
6. The method of claim 5, wherein the obtaining a sample image further comprises: performing augmentation processing on the sample image to obtain an augmented sample image.
7. An ultrasonic imaging contour recognition method is characterized by comprising the following steps:
acquiring an original image;
inputting the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image; the preset multi-branch fusion segmentation model is obtained by using the construction method of the multi-branch fusion segmentation model according to any one of claims 1 to 6;
and extracting the contour based on the image segmentation result to obtain the contour recognition result of each region of the original image.
8. The method according to claim 7, wherein the performing contour extraction based on the image segmentation result to obtain the contour recognition result of each region of the original image comprises:
extracting contours based on the image segmentation result to obtain a first image result;
performing Gaussian blur processing on the first image result to obtain a second image result;
carrying out binarization processing on the second image result to obtain a third image result;
and carrying out region-of-interest selection processing on the third image result to obtain a final contour recognition result of each region.
9. The method according to claim 8, wherein performing region-of-interest selection processing on the third image result to obtain a final contour recognition result of each region comprises:
determining, based on the third image result, the maximum connected domain that has a preset gray value and contains the image center point of the third image result, as the region of interest;
obtaining a contour point set based on the set of intersection points between the region of interest and a plurality of rays emitted from the image center of the third image result at a plurality of uniformly spaced angles;
and sequentially connecting the curve-fitting results through the contour point set to obtain the final contour recognition result of each region.
10. A construction device of a multi-branch fusion segmentation model is characterized by comprising:
the first acquisition module is used for acquiring a sample image; the sample images comprise positive sample images marked with target areas and unmarked negative sample images;
the coding module is used for inputting the negative sample image serving as input data to a shared encoder of a preset image segmentation model for coding to obtain a coding result;
the decoding module is used for respectively inputting the coding result into a segmentation branch decoder and an edge detection branch decoder of the preset image segmentation model to obtain a segmentation branch result and an edge detection result; and inputting the decoding results of the segmentation branch decoder and the edge detection branch decoder into a fusion branch decoder of the preset image segmentation model to obtain a fusion segmentation result;
the output module is used for obtaining an output result of the preset image segmentation model based on the segmentation branch result, the edge detection result and the fusion segmentation result;
the loss function module is used for obtaining a loss function of the preset image segmentation model based on the positive sample image and the output result;
and the adjusting module is used for adjusting the parameters of the preset image segmentation model based on the loss function to obtain a multi-branch fusion segmentation model.
11. An ultrasonic imaging profile recognition apparatus, comprising:
the second acquisition module is used for acquiring an original image;
the segmentation module is used for inputting the original image into a preset multi-branch fusion segmentation model to obtain an image segmentation result of the original image; the preset multi-branch fusion segmentation model is obtained by using the construction method of the multi-branch fusion segmentation model according to any one of claims 1 to 6;
and the contour extraction module is used for extracting contours based on the image segmentation result to obtain the contour identification result of each region of the original image.
12. A computer device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the method of constructing a multi-branch fusion segmentation model according to any one of claims 1 to 6 or the method of ultrasound imaging contour recognition according to any one of claims 7 to 9.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for constructing a multi-branch fusion segmentation model according to any one of claims 1 to 6 or the method for ultrasound imaging contour recognition according to any one of claims 7 to 9.
CN202210221162.2A 2022-03-08 2022-03-08 Segmentation model construction and contour recognition method and device and computer equipment Pending CN114581668A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210221162.2A CN114581668A (en) 2022-03-08 2022-03-08 Segmentation model construction and contour recognition method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210221162.2A CN114581668A (en) 2022-03-08 2022-03-08 Segmentation model construction and contour recognition method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN114581668A true CN114581668A (en) 2022-06-03

Family

ID=81773087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210221162.2A Pending CN114581668A (en) 2022-03-08 2022-03-08 Segmentation model construction and contour recognition method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN114581668A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115631122A (en) * 2022-11-07 2023-01-20 Beijing Zhuohe Technology Co., Ltd. Image optimization method and device for edge image algorithm
CN117315263A (en) * 2023-11-28 2023-12-29 Hangzhou Shenhao Technology Co., Ltd. Target contour segmentation device, training method, segmentation method and electronic equipment
CN117315263B (en) * 2023-11-28 2024-03-22 Hangzhou Shenhao Technology Co., Ltd. Target contour device, training method, segmentation method, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination