Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, a craniocerebral ultrasound image parallel segmentation method is provided. This embodiment is illustrated by applying the method to a server; it will be understood that the method may also be applied to a terminal, or to a system including a terminal and a server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
Step 102, acquiring a target craniocerebral ultrasonic image.
Here, the target craniocerebral ultrasonic image is a craniocerebral ultrasonic image of the craniocerebral structural tissue to be segmented. The target craniocerebral ultrasonic image may be a fetal craniocerebral ultrasonic image, such as a normal fetal craniocerebral ultrasonic image, and may specifically be a fetal craniocerebral cut-plane image. In this embodiment, the fetal craniocerebral cut-plane images include a craniofacial cut-plane image, a transparent septum cavity horizontal cut-plane image, a cerebellum horizontal cut-plane image, a thalamus cut-plane image, a lateral ventricle horizontal cut-plane image, and the like.
Specifically, the server acquires a target craniocerebral ultrasonic image through an ultrasonic image acquisition device. The ultrasonic image acquisition device acquires a target craniocerebral ultrasonic image and sends the acquired target craniocerebral ultrasonic image to the server.
In one embodiment, the server receives an initial craniocerebral ultrasonic image acquired and transmitted by the ultrasonic image acquisition device, and performs preprocessing on the received initial craniocerebral ultrasonic image to obtain a corresponding target craniocerebral ultrasonic image. Wherein the preprocessing operation includes one or more of image size scaling, image normalization processing, and image enhancement operations.
In one embodiment, the server acquires an initial craniocerebral ultrasonic image, scales the initial craniocerebral ultrasonic image according to preset scaling conditions, normalizes the scaled image, and performs a random enhancement operation on the normalized image to obtain the target craniocerebral ultrasonic image. The preset scaling conditions scale the short side of the initial craniocerebral ultrasonic image to 500 pixels and scale the long side in an equal-ratio manner, i.e., scaled long side = (original long side / original short side) × 500. The server normalizes the scaled initial craniocerebral ultrasonic image with a linear function to obtain the normalized image. The random enhancement operation includes one or more of random horizontal flipping, random vertical flipping, and randomly increasing the saturation, contrast and brightness of the image.
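As a non-limiting illustration, the preprocessing described above may be sketched as follows; the use of OpenCV and the specific augmentation ranges are assumptions for illustration, not limitations of the present application:

```python
# Hypothetical preprocessing sketch: short side scaled to 500 px,
# long side scaled in equal ratio, linear normalization, random augmentation.
import random
import numpy as np
import cv2

def preprocess(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    scale = 500.0 / min(h, w)
    # Equal-ratio scaling: scaled long side = (long side / short side) * 500.
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    image = cv2.resize(image, (new_w, new_h))
    # Linear (min-max) normalization to [0, 1].
    image = image.astype(np.float32)
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    # Random enhancement: horizontal/vertical flips and brightness/contrast jitter
    # (jitter ranges below are illustrative assumptions).
    if random.random() < 0.5:
        image = image[:, ::-1]
    if random.random() < 0.5:
        image = image[::-1, :]
    if random.random() < 0.5:
        alpha = random.uniform(0.8, 1.2)   # contrast
        beta = random.uniform(-0.1, 0.1)   # brightness
        image = np.clip(alpha * image + beta, 0.0, 1.0)
    return np.ascontiguousarray(image)
```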
Step 104, inputting the target craniocerebral ultrasonic image into a parallelized feature extraction model to obtain a target feature map.
Here, the feature extraction model is a model trained on a first training sample set obtained in advance, which extracts features from a craniocerebral ultrasonic image to obtain a corresponding feature map. The target feature map is the feature map obtained by extracting features from the target craniocerebral ultrasonic image, and may specifically be a multi-channel feature pixel matrix, such as a 256-channel feature pixel matrix. A parallelized feature extraction model is one that executes its internal operations in parallel, so as to improve the processing efficiency of the feature extraction model.
Specifically, the server inputs the target craniocerebral ultrasonic image into a trained feature extraction model to perform feature extraction, and a corresponding target feature map is obtained. It can be understood that the server performs feature extraction on the target craniocerebral ultrasonic image in parallel through the feature extraction model to obtain a corresponding target feature map.
In one embodiment, the server inputs the target craniocerebral ultrasonic image into the trained feature extraction model for feature extraction, and takes the feature maps output by a plurality of feature layers of the feature extraction model as target feature maps; that is, a plurality of target feature maps are extracted from the target craniocerebral ultrasonic image through the feature extraction model.
In one embodiment, the machine learning algorithm involved in training the feature extraction model includes, but is not limited to, ResNet-101 (a convolutional neural network). Taking ResNet-101 as an example, the feature layers in the network may be divided into a plurality of groups, and the feature map output by one feature layer in each group may be extracted as a target feature map. For example, the network may be divided into 6 groups P1, P2, P3, P4, P5 and P6, and one target feature map may be extracted from each of P2, P3, P4 and P5. How the network is divided into groups, and how a target feature map is selected from each group, is not specifically limited herein; for example, the network may be divided uniformly or in equal proportions, and the feature map output by the last feature layer in each group may be used as the target feature map.
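A minimal sketch of such multi-scale feature extraction, assuming a torchvision ResNet-101 backbone and taking the last feature map of each residual stage (an illustrative stand-in for the P2 to P5 grouping above):

```python
# Sketch: extracting multiple target feature maps from a ResNet-101 backbone,
# one per group of feature layers. The grouping shown here is an assumption.
import torch
import torchvision

backbone = torchvision.models.resnet101(weights=None)

def extract_target_feature_maps(x: torch.Tensor) -> list[torch.Tensor]:
    x = backbone.conv1(x)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    feats = []
    for stage in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
        x = stage(x)
        feats.append(x)  # one target feature map per group of feature layers
    return feats

maps = extract_target_feature_maps(torch.randn(1, 3, 500, 667))
print([tuple(f.shape) for f in maps])
```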
Step 106, inputting the target feature map into a parallelized structure tissue extraction model to obtain target structure tissue candidate frames.
Here, the structure tissue extraction model is a model trained on a second training sample set obtained in advance, which determines target structure tissue candidate frames from a target feature map. The target structure tissue candidate frames are candidate frames that can be used to locate the corresponding craniocerebral structural tissue in the target feature map. A parallelized structure tissue extraction model is one that executes its internal operations in parallel, so as to improve the processing efficiency of the structure tissue extraction model.
Specifically, the server inputs the target feature map into the trained structure tissue extraction model, locates each craniocerebral structural tissue in the target feature map through the model, and determines the target structure tissue candidate frame corresponding to each craniocerebral structural tissue, so that a target structure tissue map including the craniocerebral structural tissue can be extracted from the target feature map based on the candidate frame. It can be appreciated that the server locates the craniocerebral structural tissues in the target feature map in parallel through the structure tissue extraction model, and determines the target structure tissue candidate frame corresponding to each.
In one embodiment, for each pixel point in the target feature map, the server generates a first preset number of first candidate frames with different sizes, extracts second feature subgraphs corresponding to each first candidate frame from the target feature map, inputs each second feature subgraph into a trained structure tissue extraction model, and determines a target structure tissue candidate frame corresponding to the target feature map according to each second feature subgraph through the structure tissue extraction model.
In one embodiment, the server predicts probability values of the craniocerebral structure tissues in each second feature subgraph and second candidate frames corresponding to each second feature subgraph in parallel through the structure tissue extraction model, and screens target structure tissue candidate frames from the second candidate frames according to the probability values and the second candidate frames corresponding to each second feature subgraph. In this way, by predicting the probability value of each second feature subgraph and the second candidate frame in parallel, the prediction efficiency can be improved, and the determination efficiency of the target structure organization candidate frame can be improved.
In one embodiment, there are a plurality of target feature maps corresponding to the target craniocerebral ultrasound image, and a corresponding target structure tissue candidate frame is determined for each target feature map according to the above manner. It can be understood that the server may input the multiple target feature maps into the structure tissue extraction model at the same time, and extract the target structure tissue candidate boxes corresponding to the target feature maps in parallel through the structure tissue extraction model.
Step 108, extracting the target structure tissue map from the target feature map based on the target structure tissue candidate frames.
Here, the target structure tissue map is a feature sub-graph, extracted from the corresponding target feature map based on a target structure tissue candidate frame, that includes the target craniocerebral structural tissue.
Specifically, the server extracts, from the target feature map, a target structure tissue map matching each target structure tissue candidate frame according to the position of that candidate frame in the target feature map.
In one embodiment, when the target craniocerebral ultrasonic image corresponds to one target feature map, the server extracts the matched target structure tissue maps from the target feature map according to each target structure tissue candidate frame corresponding to that map. When the target craniocerebral ultrasonic image corresponds to a plurality of target feature maps, the server selects target structure tissue candidate frames from those corresponding to each target feature map respectively, and extracts the matched target structure tissue maps from the corresponding target feature maps according to each selected candidate frame.
In one embodiment, the server takes the feature sub-graphs extracted from the corresponding target feature maps according to the target structure tissue candidate frames as initial structure tissue maps, and uniformly scales each initial structure tissue map into a target structure tissue map of a preset size.
Step 110, classifying the target craniocerebral structure tissue corresponding to the target structure tissue map through the structure tissue classification model to obtain a corresponding target structure tissue class.
Here, the structure tissue classification model is a model trained on a third training sample set obtained in advance, which classifies the target craniocerebral structural tissue corresponding to a target structure tissue map to obtain the corresponding target structure tissue class. The target structure tissue class refers to the structure tissue class to which the target craniocerebral structural tissue belongs. The structure tissue classes include 25 classes such as the lateral fissure, thalamus, choroid plexus, cavum septi pellucidi, third ventricle, brain midline, tentorium cerebelli, fornix column, transparent forehead, posterior horn of the lateral ventricle, insula, cerebellar vermis, cerebellomedullary cistern, corpus callosum, midbrain aqueduct, superior temporal sulcus, inferior temporal sulcus, parieto-occipital sulcus, inferior frontal sulcus, falx cerebri, anterior horn of the lateral ventricle, midbrain, posterior fossa and cranial halo.
Specifically, the server flattens each extracted target structure tissue map to obtain a corresponding structure tissue vector, and inputs each structure tissue vector into the trained structure tissue classification model. Through the structure tissue classification model, the target craniocerebral structural tissue corresponding to each target structure tissue map is classified according to the corresponding structure tissue vector, yielding the target structure tissue class of each target craniocerebral structural tissue.
In one embodiment, the target structure tissue candidate frames are regular rectangles while the craniocerebral structural tissues in the target craniocerebral ultrasonic image are irregularly shaped, so a target structure tissue map extracted from the corresponding target feature map based on a target structure tissue candidate frame generally includes one or more craniocerebral structural tissues. The most dominant craniocerebral structural tissue in each target structure tissue map is therefore determined as the target craniocerebral structural tissue corresponding to that map. Further, the target structure tissue class to which this target craniocerebral structural tissue belongs may be determined as the target structure tissue class corresponding to the target structure tissue map.
In one embodiment, each target feature map is a 256-channel feature pixel matrix, so each target structure tissue map extracted from it in the above manner is also a 256-channel feature pixel matrix; when the size of each target structure tissue map is 14×14, each target structure tissue map is a 256×14×14 feature pixel matrix. The server flattens each target structure tissue map to obtain a 256×14×14-dimensional structure tissue vector.
In one embodiment, the server inputs the structure tissue vector corresponding to each target structure tissue map into a trained structure tissue position regression model. Through the structure tissue position regression model, the target craniocerebral structural tissue corresponding to each target structure tissue map is located according to the corresponding structure tissue vector, yielding four coordinate values for the target craniocerebral structural tissue. A structure tissue labeling frame corresponding to each target craniocerebral structural tissue is then determined from its four coordinate values, so that the corresponding target craniocerebral structural tissue can be located in the target craniocerebral ultrasonic image according to the structure tissue labeling frame.
In one embodiment, the network structure of the structure tissue classification model includes: an input layer that is a fully-connected layer of 256×14×14 neurons; a second fully-connected layer of 1024 neurons; and a third fully-connected layer of 25 neurons, whose output is passed through a softmax function to compute a 25-dimensional probability vector over the structure tissue classes. The class corresponding to the maximum probability value in the probability vector is determined as the target structure tissue class of the target craniocerebral structural tissue.
In one embodiment, the network structure of the structure tissue position regression model includes: an input layer that is a fully-connected layer of 256×14×14 neurons; a second fully-connected layer of 1024 neurons; and a third fully-connected layer of 4 neurons, which outputs the four coordinate values locating the target craniocerebral structural tissue in the target structure tissue map.
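A minimal sketch of the two heads described above, assuming ReLU activations between the fully-connected layers (the activation function is not specified herein):

```python
# Sketch of the classification and position regression heads operating on
# flattened 256*14*14-dimensional structure tissue vectors.
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, num_classes: int = 25):
        super().__init__()
        self.fc1 = nn.Linear(256 * 14 * 14, 1024)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # Outputs a 25-dimensional probability vector per structure tissue map.
        return torch.softmax(self.fc2(torch.relu(self.fc1(v))), dim=-1)

class RegressionHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(256 * 14 * 14, 1024)
        self.fc2 = nn.Linear(1024, 4)  # four coordinate values of the labeling frame

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        return self.fc2(torch.relu(self.fc1(v)))

v = torch.randn(8, 256 * 14 * 14)   # 8 flattened target structure tissue maps
probs, boxes = ClassificationHead()(v), RegressionHead()(v)
cls = probs.argmax(dim=-1)          # class with the maximum probability value
```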
Step 112, segmenting the target craniocerebral structural tissue from the target structure tissue map according to the target structure tissue class through the structure tissue segmentation model, to obtain a structure tissue segmentation sub-graph.
Here, the structure tissue segmentation model is a model trained on a fourth training sample set obtained in advance, which segments the corresponding target craniocerebral structural tissue from a target structure tissue map. The structure tissue segmentation sub-graph is a structure tissue map in which the target craniocerebral structural tissue is separated from the remaining background, where the remaining background includes the background outside the craniocerebral structural tissues in the target craniocerebral ultrasonic image and/or the craniocerebral structural tissues other than the target craniocerebral structural tissue.
Specifically, for each target structure tissue map corresponding to a target craniocerebral ultrasonic image, the server inputs the target structure tissue map into a trained structure tissue segmentation model, obtains a structure tissue feature map corresponding to the target structure tissue map through the structure tissue segmentation model, and segments a target craniocerebral structure tissue from the target structure tissue map according to a target structure tissue category and the structure tissue feature map corresponding to the target structure tissue map to obtain a corresponding structure tissue segmentation subgraph.
In one embodiment, the server inputs each target structure tissue map into the structure tissue segmentation model for pixel-by-pixel classification, determines the structure tissue class to which each pixel point in the target structure tissue map belongs, and segments the target structure tissue map according to the determined structure tissue classes to obtain the corresponding structure tissue segmentation sub-graph.
Step 114, obtaining a craniocerebral structure tissue segmentation map corresponding to the target craniocerebral ultrasonic image from the structure tissue segmentation sub-graphs.
Specifically, for each structure tissue segmentation sub-graph corresponding to the target craniocerebral ultrasonic image, the server determines the scaling of the sub-graph relative to the target craniocerebral ultrasonic image according to their respective image sizes, and inversely scales the sub-graph by the determined factor to obtain the corresponding target structure segmentation sub-graph. The server then maps each target structure segmentation sub-graph back to its corresponding position in the target craniocerebral ultrasonic image to obtain the corresponding craniocerebral structure tissue segmentation map. Inverse scaling means that if a structure tissue segmentation sub-graph is reduced by a preset multiple relative to the target craniocerebral ultrasonic image, the corresponding target structure segmentation sub-graph is obtained by enlarging the sub-graph by that preset multiple.
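A minimal sketch of mapping one structure tissue segmentation sub-graph back into a full-size craniocerebral structure tissue segmentation map; the box coordinates, class identifier and use of OpenCV are illustrative assumptions:

```python
# Sketch: inverse-scale a sub-graph mask to its labeling-frame size, then
# paste it at the corresponding position of the full-size segmentation map.
import numpy as np
import cv2

def paste_subgraph(seg_map: np.ndarray, sub_mask: np.ndarray,
                   box: tuple[int, int, int, int], class_id: int) -> None:
    x1, y1, x2, y2 = box                 # position in the target ultrasound image
    w, h = x2 - x1, y2 - y1
    resized = cv2.resize(sub_mask.astype(np.uint8), (w, h),
                         interpolation=cv2.INTER_NEAREST)
    region = seg_map[y1:y2, x1:x2]
    region[resized > 0] = class_id       # label foreground pixels with the class

seg_map = np.zeros((750, 1000), dtype=np.uint8)  # same size as the ultrasound image
paste_subgraph(seg_map, np.ones((56, 56)), (100, 120, 212, 232), class_id=3)
```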
It can be understood that each craniocerebral structural tissue segmented in the craniocerebral structure tissue segmentation map is a fine structure in the target craniocerebral ultrasound image. Through this parallelized segmentation method for the fine structures of a normal fetal craniocerebral ultrasonic image, all fine structures can be accurately segmented from the target craniocerebral ultrasonic image; that is, each craniocerebral structural tissue in the target craniocerebral ultrasonic image can be accurately segmented.
In one embodiment, the server obtains the craniocerebral structure tissue segmentation map corresponding to the target craniocerebral ultrasound image by mapping each structure tissue segmentation sub-graph corresponding to the target craniocerebral ultrasound image back to the original image size.
In one embodiment, the server determines a growth parameter for each craniocerebral structure tissue in the target craniocerebral ultrasound image from the craniocerebral structure tissue segmentation map. Wherein, the growth parameters of the craniocerebral structure tissue comprise the area and perimeter of the craniocerebral structure tissue. Specifically, the server determines a structural tissue contour of each craniocerebral structural tissue from the target craniocerebral ultrasonic image according to the craniocerebral structural tissue segmentation map, and measures growth parameters of the corresponding craniocerebral structural tissue based on the determined structural tissue contour. Therefore, the automatic measurement of the growth parameters of the craniocerebral structure tissue is performed based on the craniocerebral structure tissue segmentation map with higher accuracy, and the measurement accuracy can be improved.
According to the above craniocerebral ultrasound image parallel segmentation method, feature extraction is performed on the target craniocerebral ultrasonic image through the parallelized feature extraction model to obtain the target feature map. The parallelized structure tissue extraction model locates target structure tissue candidate frames in the target feature map, and target structure tissue maps are extracted from the target feature map according to these candidate frames. The target craniocerebral structural tissue in each target structure tissue map is classified through the structure tissue classification model to obtain the target structure tissue class, and the target craniocerebral structural tissue is segmented from the target structure tissue map according to that class through the structure tissue segmentation model, yielding the corresponding structure tissue segmentation sub-graph. The structure tissue segmentation sub-graphs are then mapped into a craniocerebral structure tissue segmentation map of the same size as the target craniocerebral ultrasonic image, completing the segmentation of the target craniocerebral ultrasonic image. In this way, by extracting features of the craniocerebral structural tissues, locating their positions based on those features, identifying the target structure tissue classes based on those positions, and segmenting each target craniocerebral structural tissue according to its class, an accurate craniocerebral structure tissue segmentation map is obtained; the influence of structural tissue size on the segmentation result is reduced, and the segmentation accuracy of the craniocerebral ultrasound image is improved. Further, since feature extraction, structural tissue locating, classification and segmentation in the segmentation flow are all executed by models, the segmentation accuracy can be further improved.
In one embodiment, step 104 includes: extracting features of the target craniocerebral ultrasonic image through a feature extraction model to obtain a corresponding target feature image; for each convolution layer involved in the feature extraction model, convolving according to the following parallelized convolution operation; the parallelized convolution operation includes: splitting a feature map to be convolved into a plurality of first feature subgraphs; carrying out convolution operation on the plurality of first feature subgraphs in parallel to obtain a plurality of corresponding feature values; and obtaining a characteristic diagram after convolution according to the plurality of characteristic values.
Specifically, the server inputs the target craniocerebral ultrasonic image into the trained feature extraction model, which performs feature extraction on the target craniocerebral ultrasonic image to obtain the corresponding target feature map. The feature extraction model includes one or more convolution layers. During feature extraction, the convolution operation of every convolution layer in the feature extraction model is executed in the parallelized convolution manner, which includes: splitting the feature map to be convolved in the current convolution layer into a plurality of first feature sub-graphs; convolving the first feature sub-graphs with the corresponding convolution kernels in parallel to obtain the feature value corresponding to each first feature sub-graph, i.e., a plurality of feature values; and combining the feature values according to the positions of their first feature sub-graphs in the feature map to be convolved, to obtain the convolved feature map.
In one embodiment, the server splits the feature map to be convolved into a target number of first feature sub-graphs. The target number can be determined dynamically from the mapping relationship between the feature map size and the convolution kernel size: M = ((H − F + 2 × padding) / stride + 1) × ((W − F + 2 × padding) / stride + 1), where M is the target number, H and W are the height and width of the feature map to be convolved, F is the convolution kernel size, and padding denotes the number of rows and columns of zeros used to pad the feature map on all four sides. The padding can be determined dynamically, for example 0, 1 or 2, and the stride can be customized, for example 1 or 2.
In one embodiment, the server starts a target number of GPU blocks and performs the convolution operations on the target number of first feature sub-graphs in parallel through these GPU blocks to obtain the corresponding feature values; that is, each GPU block is assigned one first feature sub-graph, and the target number of GPU blocks convolve their assigned first feature sub-graphs in parallel to obtain the target number of feature values. The server reads the feature values obtained in parallel from the shared memory of the blocks and reassembles them into a feature pixel matrix of height (H − F + 2 × padding) / stride + 1 and width (W − F + 2 × padding) / stride + 1, which serves as the convolved feature map.
FIG. 2 is a schematic diagram of a parallelized convolution operation in one embodiment. As shown in FIG. 2, the feature map to be convolved is a 3×3 feature pixel matrix and the convolution kernel is a 2×2 pixel matrix, i.e., the kernel size F is 2. Assuming the padding is 0 and the stride is 1, the feature map to be convolved can be split into 4 first feature sub-graphs according to the mapping relationship. The 4 first feature sub-graphs share the convolution kernel and are convolved with it in parallel to obtain 4 feature values, which are reassembled into a 2×2 feature pixel matrix; this 2×2 feature pixel matrix is the convolved feature map.
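The example of FIG. 2 may be sketched as follows, with a thread pool standing in for the GPU blocks:

```python
# Sketch of the parallelized convolution: the feature map is split into
# M first feature sub-graphs that share one kernel; each sub-graph is
# convolved independently and the feature values are reassembled.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def parallel_conv(x: np.ndarray, k: np.ndarray,
                  stride: int = 1, padding: int = 0) -> np.ndarray:
    if padding:
        x = np.pad(x, padding)
    H, W = x.shape
    F = k.shape[0]
    out_h = (H - F) // stride + 1
    out_w = (W - F) // stride + 1
    # Split into M = out_h * out_w first feature sub-graphs.
    subgraphs = [x[i * stride:i * stride + F, j * stride:j * stride + F]
                 for i in range(out_h) for j in range(out_w)]
    with ThreadPoolExecutor() as pool:        # one worker per sub-graph
        values = list(pool.map(lambda s: float((s * k).sum()), subgraphs))
    return np.array(values).reshape(out_h, out_w)

x = np.arange(9, dtype=np.float32).reshape(3, 3)   # 3x3 feature map, as in FIG. 2
k = np.ones((2, 2), dtype=np.float32)              # 2x2 convolution kernel
print(parallel_conv(x, k))                         # 2x2 convolved feature map
```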
In the above embodiment, the feature extraction efficiency of the feature extraction model can be improved by the parallelized convolution operation, so that the segmentation efficiency of the craniocerebral ultrasonic image can be improved.
In one embodiment, step 106 includes: generating, for each pixel point in the target feature map, a first preset number of first candidate frames of different sizes; extracting the second feature sub-graph corresponding to each first candidate frame from the target feature map; predicting in parallel, through the structure tissue extraction model, the probability value that each second feature sub-graph includes craniocerebral structural tissue and the second candidate frame corresponding to each second feature sub-graph; selecting a second preset number of initial structure tissue candidate frames from the second candidate frames according to the probability values; and screening the target structure tissue candidate frames from the initial structure tissue candidate frames according to the intersection-over-union between the initial structure tissue candidate frames.
Here, the first preset number may be customized, such as 15, and the second preset number may be customized, such as 2000. The intersection-over-union (IoU) between initial structure tissue candidate frames refers to the IoU between any two initial structure tissue candidate frames.
Specifically, for each pixel point in the target feature map, the server generates a first preset number of first candidate frames of different sizes according to the preset candidate frame areas and preset candidate frame aspect ratios, obtaining the first preset number of first candidate frames for each pixel point. After generating the first candidate frames, the server extracts the second feature sub-graph corresponding to each first candidate frame from the target feature map, and inputs the second feature sub-graphs extracted for all pixel points into the trained structure tissue extraction model. Through the structure tissue extraction model, the server predicts in parallel the probability value that each second feature sub-graph includes craniocerebral structural tissue and the second candidate frame corresponding to each second feature sub-graph, and associates each probability value with the corresponding second candidate frame; that is, the probability value of a second feature sub-graph is taken as the probability value of its second candidate frame. The second candidate frames are then sorted in descending order of probability value, and the top second preset number of frames are taken as initial structure tissue candidate frames. The IoU between any two initial structure tissue candidate frames is computed iteratively; whenever a computed IoU is greater than or equal to an IoU threshold, the frame with the smaller probability value of the two is deleted, until the IoU between any two remaining frames is below the threshold. The remaining initial structure tissue candidate frames are determined as the screened target structure tissue candidate frames. The IoU threshold may be customized, such as 0.7.
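A minimal sketch of this screening procedure, which is a form of non-maximum suppression over the second candidate frames:

```python
# Sketch: sort second candidate frames by probability, keep the top
# second-preset-number, then suppress overlaps by IoU (threshold 0.7).
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def screen_candidates(boxes: np.ndarray, probs: np.ndarray,
                      top_n: int = 2000, iou_thresh: float = 0.7) -> np.ndarray:
    order = np.argsort(-probs)[:top_n]   # initial structure tissue candidate frames
    keep = []
    for i in order:
        # A frame survives only if its IoU with every already-kept,
        # higher-probability frame is below the threshold.
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return boxes[keep]                   # target structure tissue candidate frames
```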
In one embodiment, the server pre-configures a plurality of preset candidate frame areas and a plurality of preset candidate frame aspect ratios, combines each preset area with each preset aspect ratio to determine a corresponding candidate frame size, and generates, for each pixel point in the target feature map, a first candidate frame of each such size. The product of the number of preset candidate frame areas and the number of preset candidate frame aspect ratios equals the first preset number. For example, if there are 5 preset candidate frame areas and 3 preset aspect ratios, the first preset number is 5 × 3 = 15. The preset candidate frame areas and aspect ratios, and their numbers, can be customized; for example, the preset candidate frame areas may include 32, 64, 128, 256 and 512, and the preset aspect ratios may include 0.5, 1 and 2.
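A minimal sketch of generating the first preset number of first candidate frames for one pixel point; whether the preset values 32 to 512 denote areas or side lengths is not specified herein, and they are treated as side lengths below:

```python
# Sketch: 5 preset sizes x 3 aspect ratios = 15 first candidate frames
# per pixel point, centered on the pixel.
import numpy as np

def first_candidate_frames(cx: float, cy: float,
                           sizes=(32, 64, 128, 256, 512),
                           ratios=(0.5, 1.0, 2.0)) -> np.ndarray:
    frames = []
    for s in sizes:
        for r in ratios:
            w = s * np.sqrt(r)   # aspect ratio r = w / h, area s*s preserved
            h = s / np.sqrt(r)
            frames.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(frames)      # shape (15, 4): first preset number = 5 * 3

print(first_candidate_frames(10.0, 10.0).shape)
```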
In one embodiment, when the target craniocerebral ultrasonic image corresponds to a plurality of target feature maps, the structure tissue extraction model predicts, in parallel, a probability value and a second candidate frame for every second feature sub-graph of each target feature map, and the target structure tissue candidate frames of each target feature map are screened from its second candidate frames according to the corresponding probability values and second candidate frames in the manner described above.
In the above embodiment, the second feature sub-graphs are extracted from the target feature map according to the dynamically generated first candidate frames, and the probability that each second feature sub-graph includes craniocerebral structural tissue is predicted in parallel by the structure tissue extraction model. Each second candidate frame is obtained by correcting the corresponding first candidate frame according to the feature values in the corresponding second feature sub-graph, so the target structure tissue candidate frames screened from the second candidate frames can accurately locate the corresponding target craniocerebral structural tissue in the target feature map, improving the segmentation accuracy of the craniocerebral ultrasound image. Moreover, since the probability value and second candidate frame of every second feature sub-graph are predicted in parallel by the structure tissue extraction model, the prediction efficiency, and hence the segmentation efficiency of the craniocerebral ultrasound image, can be improved.
In one embodiment, predicting in parallel, through the structure tissue extraction model, the probability value that each second feature sub-graph includes craniocerebral structural tissue and the second candidate frame corresponding to each second feature sub-graph includes: starting a third preset number of first threads, and predicting in parallel, through the first threads, the probability values that the corresponding second feature sub-graphs include craniocerebral structural tissue; and starting a fourth preset number of second threads, predicting in parallel, through the second threads, the coordinate values of the corresponding second feature sub-graphs, and determining the corresponding second candidate frames from the coordinate values of each second feature sub-graph.
Here, the third preset number and the fourth preset number are each determined by the first preset number, the number of target feature maps corresponding to the target craniocerebral ultrasonic image, and the width and height of each target feature map, i.e., by the number of target feature maps and the number of second feature sub-graphs corresponding to each target feature map.
In one embodiment, the region of a second feature sub-graph in which craniocerebral structural tissue exists is referred to as foreground, and the region in which no craniocerebral structural tissue exists is referred to as background. If only the probability value that each second feature sub-graph includes craniocerebral structural tissue is predicted, i.e., only the foreground probability of each second feature sub-graph, then the third preset number = first preset number × H0 × W0 × T0, so that each first thread predicts in parallel the probability that its second feature sub-graph includes craniocerebral structural tissue. If the foreground and background probabilities of each second feature sub-graph must both be predicted, then the third preset number = 2 × first preset number × H0 × W0 × T0, so that each first thread predicts in parallel either the foreground or the background probability of its second feature sub-graph. Further, since the second candidate frame of each second feature sub-graph is uniquely determined by four coordinate values, the fourth preset number = 4 × first preset number × H0 × W0 × T0, so that each second thread predicts in parallel one coordinate value of its second feature sub-graph. Here H0 and W0 are the height and width of a target feature map, T0 is the number of target feature maps, and it is assumed that all target feature maps share the same height and width.
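A worked example of these formulas for the four-feature-map case discussed below (H0 and W0 are illustrative sizes):

```python
# Worked example: T0 = 4 target feature maps, first preset number 15,
# foreground and background probabilities predicted separately.
first_preset = 15
H0, W0, T0 = 32, 42, 4

third_preset = 2 * first_preset * H0 * W0 * T0   # first threads (fg + bg scores)
fourth_preset = 4 * first_preset * H0 * W0 * T0  # second threads (4 coords per frame)
per_block_first = third_preset // T0             # threads per GPU block (one map each)
per_block_second = fourth_preset // T0
print(third_preset, fourth_preset, per_block_first, per_block_second)
```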
Specifically, the server dynamically determines the third and fourth preset numbers according to the first preset number, the number of target feature maps corresponding to the target craniocerebral ultrasonic image, and the width and height of each target feature map. The server starts the third preset number of first threads, assigns a corresponding second feature sub-graph to each first thread, and predicts in parallel, through these first threads, the probability values that the corresponding second feature sub-graphs include craniocerebral structural tissue. Correspondingly, the server starts the fourth preset number of second threads, assigns a corresponding second feature sub-graph to each second thread, and predicts in parallel, through these second threads, the coordinate values of the corresponding second feature sub-graphs. Each second thread predicts one coordinate value, so the four coordinate values of each second feature sub-graph are obtained through four second threads, and the second candidate frame of each second feature sub-graph is determined from its four coordinate values. It can be understood that the probability prediction by the first threads and the coordinate prediction by the second threads may run in parallel with each other, further improving concurrency and thus the screening efficiency of the target structure tissue candidate frames.
In one embodiment, the server assigns each second feature sub-graph to two first threads: one first thread predicts the probability value that the second feature sub-graph is foreground while, in parallel, the other first thread predicts the probability value that it is background, which improves the prediction efficiency of the probability values.
In one embodiment, the server starts two GPU kernels, denoted kernel0 and kernel1, through the CPU; kernel0 predicts whether each second feature sub-graph includes craniocerebral structural tissue, and kernel1 predicts the location of the craniocerebral structural tissue in each second feature sub-graph. Further, the server divides kernel0 and kernel1 into as many blocks as there are target feature maps corresponding to the target craniocerebral ultrasonic image, so that each block corresponds to one target feature map, and determines the number of first or second threads to start in each block in the manner described above. It can be appreciated that setting the number of target feature maps to 1 in the mapping relationship that dynamically determines the third or fourth preset number gives the number of first or second threads to start in each block.
Taking P2, P3, P4 and P5 as an example, i.e., taking the number of target feature maps as 4, denote the four target feature maps as the first to fourth target feature maps, and assume that the foreground and background probabilities of each second feature sub-graph must both be predicted and that the first preset number is 15. The server then divides kernel0 into 4 blocks, denoted block0, block1, block2 and block3, and starts 2×15×H0×W0 first threads in block0 to predict in parallel the foreground or background probability of each second feature sub-graph of the first target feature map, 2×15×H0×W0 first threads in block1 for the second target feature map, 2×15×H0×W0 first threads in block2 for the third target feature map, and 2×15×H0×W0 first threads in block3 for the fourth target feature map.
Correspondingly, the server divides kernel1 into 4 blocks, denoted block4, block5, block6 and block7, and starts 4×15×H0×W0 second threads in each of block4 to block7 to predict in parallel the coordinate values of each second feature sub-graph of the first to fourth target feature maps, respectively. It can be understood that the four second threads corresponding to each second feature sub-graph predict the coordinate values x1, y1, x2 and y2 of that sub-graph, so that the corresponding second candidate frame can be determined based on x1, y1, x2 and y2.
Further, for each target feature map, the server reads, through the CPU, the foreground or background probability values of each second feature sub-graph and the corresponding second candidate frames from the shared memory of the GPU blocks, screens the initial structure tissue candidate frames from the second candidate frames according to the foreground probability values, and screens the target structure tissue candidate frames from the initial structure tissue candidate frames according to the IoU.
In the above embodiment, starting multiple threads to predict in parallel the probability value that each second feature sub-graph includes craniocerebral structural tissue and the corresponding second candidate frames improves the prediction efficiency, and thus the screening efficiency of the target structure tissue candidate frames.
In one embodiment, step 108 includes: selecting, from the target structure tissue candidate frames, those whose candidate frame areas satisfy preset candidate frame selection conditions; extracting the initial structure tissue maps corresponding to the selected target structure tissue candidate frames from the target feature map; and resetting each initial structure tissue map to a target structure tissue map of a preset size.
Here, the candidate frame area refers to the area of a target structure tissue candidate frame, and is determined by the width and height of the frame. A preset candidate frame selection condition is the basis for further selecting target structure tissue candidate frames for each target feature map, and may specifically be selecting the frames whose areas fall within a corresponding preset area range. It can be understood that when the target craniocerebral ultrasonic image corresponds to a plurality of target feature maps, different target feature maps correspond to different preset candidate frame selection conditions, i.e., different preset area ranges. The preset size may be customized, such as 14×14, and is not specifically limited herein.
Specifically, the server compares the candidate frame area of each target structure tissue candidate frame of each target feature map against the preset candidate frame selection condition of that target feature map, and selects the frames whose areas satisfy the condition. The server then extracts the matching initial structure tissue map from the corresponding target feature map for each selected frame, and resets each extracted initial structure tissue map to a target structure tissue map of the preset size.
For example, suppose the target craniocerebral ultrasound image corresponds to the first to fourth target feature maps. According to the preset candidate frame selection condition of each target feature map, the server extracts from the first target feature map the initial structure tissue maps matching target structure tissue candidate frames whose areas are greater than or equal to 32 and less than 64; from the second target feature map, those with areas greater than or equal to 64 and less than 128; from the third target feature map, those with areas greater than or equal to 128 and less than 512; and from the fourth target feature map, those with areas greater than or equal to 512. Further, the server divides each initial structure tissue map into a 14×14 grid by rows and columns and keeps the largest feature value within each grid cell (i.e., max pooling), obtaining a target structure tissue map of size 14×14. It is understood that the feature value of a pixel point is its pixel value. In this way, matching target structure tissue maps are extracted based on target structure tissue candidate frames of different scales, which facilitates segmenting the craniocerebral structural tissues based on target structure tissue maps of different scales and improves the segmentation accuracy.
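A minimal sketch of this resetting operation as grid-wise max pooling; the non-uniform grid boundaries are an implementation assumption:

```python
# Sketch: reset an initial structure tissue map of arbitrary size to a
# fixed 14x14 target structure tissue map by gridding it row- and
# column-wise and keeping the maximum feature value in each grid cell.
import numpy as np

def reset_to_grid(roi: np.ndarray, out: int = 14) -> np.ndarray:
    C, H, W = roi.shape                       # e.g. a 256-channel initial map
    pooled = np.empty((C, out, out), dtype=roi.dtype)
    ys = np.linspace(0, H, out + 1).astype(int)
    xs = np.linspace(0, W, out + 1).astype(int)
    for i in range(out):
        for j in range(out):
            cell = roi[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            pooled[:, i, j] = cell.max(axis=(1, 2))   # max pooling per cell
    return pooled

print(reset_to_grid(np.random.rand(256, 37, 51)).shape)   # (256, 14, 14)
```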
In the above embodiment, the target structure tissue candidate frames whose areas satisfy the corresponding selection conditions are selected for each target feature map, and the initial structure tissue maps extracted from the corresponding target feature maps based on the selected frames are reset to target structure tissue maps of a uniform size, so that the corresponding craniocerebral structural tissues can be accurately classified and located based on target structure tissue maps of consistent size.
In one embodiment, step 112 includes: inputting the target structure tissue map into the structure tissue segmentation model to obtain a structure tissue feature map for each structure tissue class; screening the matching structure tissue feature map from these structure tissue feature maps according to the target structure tissue class; and binary-classifying the screened structure tissue feature map to obtain the structure tissue segmentation sub-graph.
Specifically, the server inputs each target structure tissue map into the structure tissue segmentation model to obtain the structure tissue feature maps corresponding to that map, and screens the matching structure tissue feature map from them according to the target structure tissue class corresponding to the target structure tissue map. Further, the server binarizes the screened structure tissue feature map and performs a binary classification of the regions in the corresponding target structure tissue map according to the binarized feature map; that is, the target craniocerebral structural tissue, as foreground, is separated from the remaining background in the target structure tissue map, yielding the structure tissue segmentation sub-graph corresponding to that target structure tissue map.
In one embodiment, the server binarizes the structure tissue feature map by determining the pixel points whose feature values are greater than or equal to a binarization threshold as foreground and those below the threshold as background, obtaining the binarized structure tissue feature map. The binarization threshold may be customized, such as 0.5.
In one embodiment, the network structure of the structure tissue segmentation model includes: a first layer that is the input layer, a 256×14×14 pixel point matrix; a second layer that is a convolution layer with kernel size 3×3, 256 kernels, stride 1 and padding 1 (SAME padding), outputting a 256×14×14 pixel point matrix; a third layer that is a convolution layer with kernel size 3×3, 256 kernels, stride 1 and padding 1 (SAME padding), outputting a 256×14×14 pixel point matrix; a fourth layer that is a transposed convolution layer with kernel size 2×2, 256 kernels and stride 2, outputting a 256×28×28 pixel point matrix; a fifth layer that is a transposed convolution layer with kernel size 2×2, 256 kernels and stride 2, outputting a 256×56×56 pixel point matrix; a sixth layer that is a transposed convolution layer with kernel size 2×2, 256 kernels and stride 2, outputting a 256×56×56 pixel point matrix; and a seventh layer that is a convolution layer with kernel size 3×3, 25 kernels, stride 1 and padding 1 (SAME padding), outputting a 25×56×56 pixel point matrix. The 25×56×56 pixel point matrix comprises 25 feature pixel matrices of size 56×56; each feature pixel matrix is a structure tissue feature map corresponding to one structure tissue class, so a structure tissue feature map is obtained for each of the 25 structure tissue classes.
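A minimal PyTorch sketch of a mask head of this shape. To keep the spatial sizes consistent with the stated 25×56×56 output, only two stride-2 transposed convolutions (14 to 28 to 56) are used here, with ReLU activations assumed; this is an illustrative reconstruction rather than the exact network described above:

```python
# Sketch of a per-class segmentation head: 256x14x14 in, 25x56x56 out.
import torch
import torch.nn as nn

mask_head = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),  # SAME padding
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2),    # 14 -> 28
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2),    # 28 -> 56
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 25, kernel_size=3, stride=1, padding=1),   # one 56x56 map per class
)

x = torch.randn(1, 256, 14, 14)     # a target structure tissue map
feature_maps = mask_head(x)         # (1, 25, 56, 56)
# Screen the map of the predicted class (class 7 here is illustrative)
# and binarize it at the 0.5 threshold described above.
mask = torch.sigmoid(feature_maps[0, 7]) >= 0.5
```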
In one embodiment, inside the structure tissue segmentation model, softmax can be used to compute the probability that each pixel point in the target structure tissue map belongs to each craniocerebral structural tissue, and the structure tissue class with the maximum probability value is taken as the final segmentation result for that pixel point.
In the above embodiment, the structure tissue feature maps of the target structure tissue map under each structure tissue class are obtained through the structure tissue segmentation model, and the target craniocerebral structural tissue is segmented according to the structure tissue feature map matching the target structure tissue class, yielding the corresponding structure tissue segmentation sub-graph and improving the segmentation accuracy.
In one embodiment, the above-mentioned brain ultrasound image parallel segmentation method further includes: determining a structural tissue contour corresponding to each craniocerebral structural tissue from the craniocerebral structural tissue segmentation map; and respectively calculating the area and the perimeter of the corresponding craniocerebral structural tissues according to the outline of each structural tissue.
The craniocerebral structure tissue segmentation map is a structure tissue segmentation map obtained after craniocerebral structure tissue segmentation is carried out on the target craniocerebral ultrasonic image. The structural tissue outline refers to the outline of each craniocerebral structural tissue in the target craniocerebral ultrasonic image, and the corresponding craniocerebral structural tissue can be accurately positioned in the target craniocerebral ultrasonic image based on the structural tissue outline.
Specifically, each brain structure tissue in the target brain ultrasound image has been segmented in the brain structure tissue segmentation map, and thus, the structure tissue contour corresponding to each brain structure tissue in the target brain ultrasound image can be determined based on the brain structure tissue segmentation map. The server can calculate the area and perimeter of each craniocerebral structural tissue based on the structural tissue outline of the craniocerebral structural tissue.
In one embodiment, the server can determine the edge pixel points of the corresponding craniocerebral structural tissue, and the coordinates of each edge pixel point, based on the structure tissue contour. For each craniocerebral structural tissue, the server selects one edge pixel point on the structure tissue contour as a vertex, connects the vertex with every other edge pixel point on the contour to obtain a plurality of line segments, divides the craniocerebral structural tissue into a plurality of triangles by these line segments, calculates the area of each triangle from the coordinates of its edge pixel points, and sums the triangle areas to obtain the area of the craniocerebral structural tissue. Correspondingly, for each craniocerebral structural tissue, the server calculates the distance between every two adjacent edge pixel points on the structure tissue contour and sums all such distances to obtain the perimeter of the craniocerebral structural tissue.
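A minimal sketch of both measurements; the fan triangulation from a fixed vertex described above reduces to the standard shoelace formula used here:

```python
# Sketch: area via the shoelace formula (signed triangle areas from one
# vertex, summed) and perimeter via adjacent edge pixel point distances.
import numpy as np

def area_and_perimeter(contour: np.ndarray) -> tuple[float, float]:
    """contour: (N, 2) array of edge pixel point coordinates, in order."""
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: distances between adjacent edge pixel points, wrapping around.
    diffs = contour - np.roll(contour, -1, axis=0)
    perimeter = np.sqrt((diffs ** 2).sum(axis=1)).sum()
    return float(area), float(perimeter)

square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
print(area_and_perimeter(square))   # (100.0, 40.0)
```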
In one embodiment, the server calculates the area and perimeter of each craniocerebral structural tissue from its structure tissue outline using existing area and perimeter calculation algorithms, which are not particularly limited herein.
In the above embodiment, based on the craniocerebral structure tissue segmentation result, the growth parameters such as the area and the perimeter of the craniocerebral structure tissue can be automatically measured, so that the development condition of the fetus can be quantitatively analyzed based on the growth parameters.
In one embodiment, the craniocerebral ultrasound image parallel segmentation method provided by the application was used to locate and segment the craniocerebral structural tissues in 430 craniocerebral ultrasound images, and the obtained Average Precision (AP) is shown in the following table.
Therefore, based on the brain ultrasonic image parallel segmentation method provided by the application, brain structure tissues can be accurately positioned and segmented from the brain ultrasonic image.
FIG. 3 is a schematic diagram of the result of locating craniocerebral structural tissue in a craniocerebral ultrasound image based on the craniocerebral ultrasound image parallel segmentation method in one embodiment. As shown in FIG. 3, the method provided by the application can accurately locate each craniocerebral structural tissue in the craniocerebral ultrasound image together with its structure tissue category: the position of a craniocerebral structural tissue is marked by the structure tissue labeling frame shown in FIG. 3, and its structure tissue category is determined as the structure tissue category corresponding to that labeling frame. It can be understood that fig. 3 only labels, by way of example, the cranial halo, cerebellar hemisphere, lateral cerebral fissure, and brain parenchyma; the other craniocerebral structural tissues are marked only by structure tissue labeling frames, and their structure tissue categories are not labeled one by one.
Fig. 4 is a schematic diagram of a brain ultrasound image segmentation in one embodiment. After an initial craniocerebral ultrasonic image is acquired by a server, performing image preprocessing on the initial craniocerebral ultrasonic image to obtain a target craniocerebral ultrasonic image, inputting the target craniocerebral ultrasonic image into a feature extraction model to perform feature extraction to obtain a target feature image, inputting the target feature image into a structure tissue extraction model to obtain a target structure tissue candidate frame, extracting an initial structure tissue image from the target feature image based on the target structure tissue candidate frame, scaling the initial structure tissue image into a target structure tissue image with a preset size, classifying and positioning target craniocerebral structure tissues corresponding to the target structure tissue image to obtain a target structure tissue type and a structure tissue labeling frame corresponding to the target craniocerebral structure tissue, segmenting the target craniocerebral structure tissue from the target structure tissue image based on the target structure tissue type to obtain a structure tissue segmentation sub-image, obtaining a craniocerebral structure tissue segmentation image corresponding to the initial craniocerebral ultrasonic image based on each structure tissue segmentation sub-image, and measuring the area and the circumference of the craniocerebral structure tissue based on the craniocerebral structure tissue segmentation image.
In the above embodiment, automatic segmentation and measurement of craniocerebral structural tissue in the craniocerebral ultrasonic image are realized, which solves the problems that craniocerebral structural tissue is difficult to identify manually, that precise segmentation of craniocerebral structures cannot meet clinical real-time requirements, and that growth parameters of craniocerebral structural tissue cannot be measured automatically. The method can help a sonographer identify complex tissue structures in the cranium and automatically measure growth parameters while meeting clinical real-time requirements.
Fig. 5 is a schematic diagram of the effect of a brain structure tissue segmentation map obtained by segmentation based on a brain ultrasound image parallel segmentation method in one embodiment. As shown in fig. 5, the brain ultrasonic image parallel segmentation method provided by the application can accurately segment brain structure tissues from the brain ultrasonic image.
FIG. 6 is a schematic representation of the measurement of area and perimeter of craniocerebral structural tissue according to one embodiment. As shown in fig. 6, the area and perimeter of each craniocerebral structural tissue in the craniocerebral ultrasound image can be accurately measured based on the craniocerebral structure tissue segmentation map obtained by the craniocerebral ultrasound image parallel segmentation method provided by the application. The areas of the brain parenchyma, cerebellar hemisphere, and lateral cerebral fissure shown in fig. 6 are 9.25 cm², 1.25 cm², and 1.04 cm², respectively, and their perimeters are 14.25 cm, 4.1 cm, and 4.44 cm, respectively. The measurements shown in fig. 6 are given by way of example only and do not show the area and perimeter of every craniocerebral structural tissue.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a brain ultrasound image parallel segmentation apparatus 700 comprising: an acquisition module 701, a feature extraction module 702, a candidate frame extraction module 703, a structure tissue extraction module 704, a structure tissue classification module 705, a structure tissue segmentation module 706, and an ultrasound image segmentation module 707, wherein:
An acquisition module 701, configured to acquire a target craniocerebral ultrasound image; the feature extraction module 702 is used for inputting the target craniocerebral ultrasonic image into the parallelized feature extraction model to obtain a target feature map; the candidate frame extraction module 703 is configured to input the target feature map into a parallelized structure tissue extraction model to obtain a target structure tissue candidate frame; a structure tissue extraction module 704, configured to extract a target structure tissue map from the target feature map based on the target structure tissue candidate frame; the structure tissue classification module 705 is configured to classify, by using a structure tissue classification model, a target craniocerebral structure tissue corresponding to the target structure tissue map to obtain a corresponding target structure tissue class; the structure tissue segmentation module 706 is configured to segment the target craniocerebral structure tissue from the target structure tissue map according to the target structure tissue class through the structure tissue segmentation model, to obtain a structure tissue segmentation sub-graph; the ultrasonic image segmentation module 707 is configured to obtain a craniocerebral structure tissue segmentation map corresponding to the target craniocerebral ultrasonic image according to the structure tissue segmentation subgraph.
In one embodiment, the feature extraction module 702 is further configured to perform feature extraction on the target craniocerebral ultrasonic image through the feature extraction model to obtain the corresponding target feature map, where each convolution layer involved in the feature extraction model convolves according to the following parallelized convolution operation: splitting the feature map to be convolved into a plurality of first feature subgraphs; performing convolution operations on the plurality of first feature subgraphs in parallel to obtain a plurality of corresponding feature values; and obtaining the convolved feature map from the plurality of feature values.
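A minimal single-channel illustration of this split-convolve-reassemble scheme follows, using a thread pool as the parallel executor; the executor choice, stride 1, and absence of padding are assumptions made for brevity.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_conv2d(feature_map, kernel):
    """Sketch of the parallelized convolution described above: the feature map
    to be convolved is split into kernel-sized first feature subgraphs (one per
    output position), each subgraph is convolved in parallel to give one
    feature value, and the values are reassembled into the convolved map."""
    kh, kw = kernel.shape
    h, w = feature_map.shape
    oh, ow = h - kh + 1, w - kw + 1

    # Split: one first feature subgraph per output position.
    subgraphs = [feature_map[i:i + kh, j:j + kw]
                 for i in range(oh) for j in range(ow)]

    # Convolve the subgraphs in parallel; each yields a single feature value.
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(lambda s: float(np.sum(s * kernel)), subgraphs))

    # Reassemble the feature values into the convolved feature map.
    return np.array(values).reshape(oh, ow)

fm = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3))
print(parallel_conv2d(fm, k))  # 3x3 convolved feature map
```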
In one embodiment, the candidate frame extraction module 703 is further configured to generate, for each pixel point in the target feature map, a first preset number of first candidate frames of different sizes; extract the second feature subgraph corresponding to each first candidate frame from the target feature map; predict, in parallel through the structure tissue extraction model, the probability value that each second feature subgraph contains craniocerebral structural tissue; select a second preset number of initial structure tissue candidate frames from the second candidate frames according to the probability values; and select target structure tissue candidate frames from the initial structure tissue candidate frames according to the intersection-over-union (IoU) between the initial structure tissue candidate frames.
In one embodiment, the candidate frame extraction module 703 is further configured to start a third preset number of first threads and predict, in parallel through the first threads, the probability values that the corresponding second feature subgraphs contain craniocerebral structural tissue; and to start a fourth preset number of second threads, predict, in parallel through the second threads, the coordinate values of the corresponding second feature subgraphs, and determine the corresponding second candidate frames according to the coordinate values of each second feature subgraph.
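The probability-ranked selection followed by intersection-over-union filtering described in the two paragraphs above corresponds to a standard non-maximum-suppression step. A sketch follows; top_k and the IoU threshold are illustrative values, not the application's preset numbers.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def select_target_boxes(boxes, probs, top_k=3, iou_threshold=0.5):
    """Keep the top_k highest-probability initial candidate frames, then drop
    any frame whose IoU with an already-kept frame exceeds the threshold."""
    ranked = sorted(zip(boxes, probs), key=lambda bp: bp[1], reverse=True)[:top_k]
    kept = []
    for box, _ in ranked:
        if all(iou(box, k) <= iou_threshold for k in kept):
            kept.append(box)
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
probs = [0.9, 0.85, 0.8]
print(select_target_boxes(boxes, probs))  # heavily overlapping second box suppressed
```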
In one embodiment, the structure tissue extraction module 704 is further configured to select, from the target structure tissue candidate frames, those whose candidate frame area meets a preset candidate frame selection condition; extract the initial structure tissue map corresponding to each selected target structure tissue candidate frame from the target feature map; and reset the initial structure tissue map to a target structure tissue map of a preset size.
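A sketch of the area filtering and preset-size resetting follows. The minimum-area condition, nearest-neighbour resampling, and 14×14 preset size are illustrative assumptions; the application does not fix the selection condition or resizing method here.

```python
import numpy as np

def extract_target_structure_maps(feature_map, boxes, min_area=16, size=(14, 14)):
    """Keep candidate frames whose area meets a preset condition, crop the
    initial structure tissue map from the feature map, and reset it to the
    preset size by nearest-neighbour resampling (a simple stand-in for
    RoI-style resizing)."""
    targets = []
    for (x1, y1, x2, y2) in boxes:
        if (x2 - x1) * (y2 - y1) < min_area:
            continue  # candidate frame area does not meet the selection condition
        roi = feature_map[:, y1:y2, x1:x2]                    # initial structure tissue map
        ys = np.linspace(0, roi.shape[1] - 1, size[0]).round().astype(int)
        xs = np.linspace(0, roi.shape[2] - 1, size[1]).round().astype(int)
        targets.append(roi[:, ys][:, :, xs])                  # target structure tissue map
    return targets

fm = np.random.rand(256, 32, 32)
maps = extract_target_structure_maps(fm, [(0, 0, 8, 8), (0, 0, 2, 2)])
print(len(maps), maps[0].shape)  # 1 (256, 14, 14): the small frame is filtered out
```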
In one embodiment, the structure tissue segmentation module 706 is further configured to input the target structure tissue map into the structure tissue segmentation model to obtain a structure tissue feature map corresponding to each structure tissue category; screen the matching structure tissue feature map from the structure tissue feature maps according to the target structure tissue category; and perform binary classification on the screened structure tissue feature map to obtain the structure tissue segmentation sub-map.
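A sketch of this screening and binary classification step follows, assuming the 25-category output of the segmentation model described earlier; the sigmoid and the 0.5 threshold are conventional assumptions rather than values given in the source.

```python
import numpy as np

def structure_tissue_submap(feature_maps, target_category, threshold=0.5):
    """From the per-category structure tissue feature maps (categories x H x W),
    pick the map matching the target structure tissue category and binarize it
    pixel by pixel to obtain the structure tissue segmentation sub-map."""
    selected = feature_maps[target_category]  # screened structure tissue feature map
    probs = 1.0 / (1.0 + np.exp(-selected))   # sigmoid: two-class probability per pixel
    return probs > threshold                  # binary segmentation sub-map

masks = np.random.randn(25, 56, 56)
print(structure_tissue_submap(masks, target_category=7).shape)  # (56, 56)
```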
In one embodiment, the above-mentioned brain ultrasound image parallel segmentation device 700 further includes: a measurement module;
the measuring module is used for determining a structure tissue outline corresponding to each craniocerebral structure tissue from the craniocerebral structure tissue segmentation map; and respectively calculating the area and the perimeter of the corresponding craniocerebral structural tissues according to the outline of each structural tissue.
For specific limitations on the craniocerebral ultrasound image parallel segmentation apparatus, reference may be made to the above limitations on the craniocerebral ultrasound image parallel segmentation method, which are not repeated here. Each module in the above craniocerebral ultrasound image parallel segmentation apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing the trained feature extraction model, the structure tissue classification model and the structure tissue segmentation model. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for parallel segmentation of craniocerebral ultrasound images.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of a portion of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, or the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory. By way of illustration, and not limitation, RAM can be in various forms such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.