CN112102230A - Ultrasonic tangent plane identification method, system, computer equipment and storage medium - Google Patents

Ultrasonic tangent plane identification method, system, computer equipment and storage medium

Info

Publication number
CN112102230A
Authority
CN
China
Prior art keywords
image
ultrasonic image
identified
information
tangent plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010725339.3A
Other languages
Chinese (zh)
Inventor
Li Kenli (李肯立)
Li Hongqi (李红旗)
Li Shengli (李胜利)
Xing Xiang (邢翔)
Zhu Ningbo (朱宁波)
Wen Huaxuan (文华轩)
Tan Guanghua (谭光华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Maternity And Child Healthcare Hospital
Hunan University
Original Assignee
Shenzhen Maternity And Child Healthcare Hospital
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Maternity And Child Healthcare Hospital and Hunan University
Priority to CN202010725339.3A
Publication of CN112102230A
Legal status: Pending

Classifications

    • G06T 7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06N 3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/10132: Image acquisition modality; ultrasound image
    • G06T 2207/30044: Subject of image; biomedical image processing; fetus; embryo

Abstract

The application relates to an ultrasonic section identification method, system, computer device, and storage medium. The method comprises the following steps: acquiring an ultrasound image to be identified; performing feature extraction on the ultrasound image to obtain corresponding image features; identifying all foreground targets in the ultrasound image according to the image features, and obtaining positioning information for each foreground target contained in the image; and identifying all section targets in the ultrasound image according to the positioning information and the image features, and obtaining first-category information and position information for each section target contained in the image. With this method, multiple section targets contained in a single ultrasound image can be effectively identified.

Description

Ultrasonic tangent plane identification method, system, computer equipment and storage medium
Technical Field
The present application relates to the technical field of prenatal ultrasound examination, and in particular, to an ultrasound section identification method, system, computer device, and storage medium.
Background
At present, ultrasound examination of the fetus in mid-to-late pregnancy is the examination of choice for prenatal diagnosis and birth-defect screening. Automatically selecting the standard fetal ultrasound sections used for diagnosis can greatly reduce the sonographer's workload, allowing the sonographer to concentrate on the examination itself without pausing to capture sections manually, thereby improving the efficiency of the ultrasound examination.
However, acquiring standard fetal ultrasound sections presents the following difficulties. First, in mid-to-late pregnancy the fetus is large and noticeably curled inside the mother, with the legs held close to the abdomen, so sections of the lower limbs often appear while capturing sections of the fetal abdomen, and vice versa. Second, when capturing coronal and sagittal sections of the fetal chest, sections of the kidneys often appear as well, and vice versa. Third, when capturing a coronal section of the fetal face, sections of the umbilical cord and placenta, or sections of one limb, sections of other limbs frequently appear alongside, and vice versa; depending on the operator's technique and habits, several section targets therefore often appear simultaneously in a single image. For tissues whose section targets are small, the presence of multiple section targets in one image easily draws attention away from the intended focus, a phenomenon that deserves particular emphasis.
Existing intelligent methods for automatically identifying and sorting standard fetal ultrasound sections mainly adopt classification algorithms, which consider the whole image at once and assign a single category to each image. Such methods have the following problem: when multiple section targets appear in a single image, the competing sections prevent the classifier from making the expected judgment, so a single ultrasound image containing multiple section targets cannot be effectively identified.
Disclosure of Invention
In view of the above, it is necessary to provide an ultrasound section identification method, system, computer device, and storage medium capable of effectively identifying the multiple section targets contained in a single ultrasound image.
An ultrasonic section identification method, the method comprising:
acquiring an ultrasound image to be identified;
performing feature extraction on the ultrasound image to be identified to obtain corresponding image features;
identifying all foreground targets in the ultrasound image to be identified according to the image features, and obtaining positioning information for each foreground target contained in the ultrasound image;
and identifying all section targets in the ultrasound image to be identified according to the positioning information and the image features, and obtaining first-category information and position information for each section target contained in the ultrasound image.
An ultrasonic section identification system, the system comprising:
an acquisition module, configured to acquire an ultrasound image to be identified;
a feature extraction module, configured to perform feature extraction on the ultrasound image to be identified to obtain corresponding image features;
a foreground identification module, configured to identify all foreground targets in the ultrasound image to be identified according to the image features, and to obtain positioning information for each foreground target contained in the ultrasound image;
and a section identification module, configured to identify all section targets in the ultrasound image to be identified according to the positioning information and the image features, and to obtain first-category information and position information for each section target contained in the ultrasound image.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
acquiring an ultrasound image to be identified;
performing feature extraction on the ultrasound image to be identified to obtain corresponding image features;
identifying all foreground targets in the ultrasound image to be identified according to the image features, and obtaining positioning information for each foreground target contained in the ultrasound image;
and identifying all section targets in the ultrasound image to be identified according to the positioning information and the image features, and obtaining first-category information and position information for each section target contained in the ultrasound image.
A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the following steps:
acquiring an ultrasound image to be identified;
performing feature extraction on the ultrasound image to be identified to obtain corresponding image features;
identifying all foreground targets in the ultrasound image to be identified according to the image features, and obtaining positioning information for each foreground target contained in the ultrasound image;
and identifying all section targets in the ultrasound image to be identified according to the positioning information and the image features, and obtaining first-category information and position information for each section target contained in the ultrasound image.
With the above ultrasonic section identification method, system, computer device, and storage medium, all foreground targets contained in an ultrasound image are identified first and the positioning information of each foreground target is obtained; then, according to this positioning information, all section targets contained in the ultrasound image are further identified within the corresponding regions. Multiple section targets in one ultrasound image can thus be effectively identified, background interference is eliminated, and section identification accuracy is improved.
Drawings
FIG. 1 is a schematic flow chart of a method for identifying an ultrasonic section in one embodiment;
FIG. 2 is a diagram illustrating the structure of a section identification model according to an embodiment;
FIG. 3 is a diagram illustrating a network header in one embodiment;
FIG. 4 is a diagram illustrating an output result of the section identification model in one embodiment;
FIG. 5 is a block diagram of an ultrasonic section identification system in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an ultrasonic section identification method is provided, which includes the following steps S102 to S108.
S102: acquire an ultrasound image to be identified.
The ultrasound image to be identified is a fetal ultrasound image, which can be captured by ultrasound equipment. Different detection positions yield ultrasound images containing different section types, where a section type indicates the approximate fetal part shown, such as the thalamus, abdomen, femur, thorax, face, hands, feet, umbilical cord, or placenta.
S104: perform feature extraction on the ultrasound image to be identified to obtain corresponding image features.
The image features describe the image information contained in the ultrasound image to be identified. Different image information corresponds to different features, so the image features can subsequently be used to classify that information, for example to distinguish foreground from background and to distinguish among different sections.
S106: identify all foreground targets in the ultrasound image to be identified according to the image features, and obtain positioning information for each foreground target contained in the ultrasound image.
A foreground target can be understood as a region of interest (ROI); every section contained in the ultrasound image to be identified can be considered a foreground target. The positioning information of a foreground target specifically includes its position and size. Concretely, the foreground targets identified in the ultrasound image are marked with first detection frames, one frame per foreground target: the coordinates of a frame's center point give the foreground target's position, and the frame's width and height give its size.
S108: identify all section targets in the ultrasound image to be identified according to the positioning information and the image features, and obtain first-category information and position information for each section target contained in the ultrasound image.
The first-category information of a section target is the result of coarsely classifying it and indicates the approximate part shown by the section; it may specifically include, but is not limited to: a thalamus section, an abdomen section, a femur section, a thorax section, a face section, a hand section, a foot section, an umbilical cord section, and a placenta section. The position information of a section target specifically includes its position and size. Concretely, the section targets identified in the ultrasound image are marked with second detection frames, one frame per section target: the coordinates of a frame's center point give the section target's position, and the frame's width and height give its size.
With this ultrasound section identification method, all foreground targets contained in the ultrasound image are identified first and the positioning information of each foreground target is obtained; then, according to this positioning information, all section targets contained in the image are further identified within the corresponding regions. Multiple section targets in a single ultrasound image can thus be effectively identified, background interference is eliminated, and section identification accuracy is improved.
In an embodiment, identifying all section targets in the ultrasound image to be identified according to the positioning information and the image features, and obtaining the first-category information and position information of each section target contained in the image, may specifically include the following steps: cropping the features of the corresponding regions out of the image features according to the positioning information to obtain foreground features; and identifying all section targets in the ultrasound image according to the foreground features, obtaining the first-category information and position information of each section target contained in the image.
The foreground features are the features of the foreground targets. The image features corresponding to the ultrasound image to be identified can be understood as comprising foreground features and background features, and the features of the corresponding regions can be cropped out of the image features as foreground features according to the positioning information of the identified foreground targets.
When the ultrasound image to be identified contains multiple sections, the foreground features describe multiple pieces of section information, and different sections correspond to different features; the foreground features can therefore be used for section classification, so all section targets in the ultrasound image can be identified from them, yielding the first-category information and position information of each section target contained in the image.
In one embodiment, after the position information of each section target contained in the ultrasound image to be identified has been obtained, the method may further include the following steps: cropping the features of the corresponding regions out of the image features according to the position information to obtain section features; and performing a second classification of each section target contained in the ultrasound image according to the section features, obtaining second-category information for each section target contained in the ultrasound image.
The section features are the features of the section targets, and the features of the corresponding regions can be cropped out of the image features as section features according to the position information of the identified section targets. When the ultrasound image contains multiple sections, the section features describe multiple pieces of section information, and different sections correspond to different features; the section features can therefore be used for section classification, so all section targets in the ultrasound image can be identified from them, yielding the second-category information of each section target contained in the image.
The second-category information of a section target is the result of finely classifying it; it indicates not only the approximate part shown by the section but also whether the section is a standard section usable for diagnosis. It may specifically include, but is not limited to: a thalamus standard section, a thalamus non-standard section, an abdomen non-standard section, a femur non-standard section, a thorax standard section, a face non-standard section, a hand non-standard section, a foot non-standard section, an umbilical cord non-standard section, a placenta non-standard section, and so on.
In one embodiment, the ultrasound image to be identified may be input into a trained section identification model, which outputs the position and category of every section contained in the image. Specifically, the section identification model is a deep convolutional neural network; as shown in fig. 2, it comprises a backbone network, a feature interaction network, a foreground-background classification sub-network, and a network head connected in sequence, where the network head comprises three parallel branches: a localization sub-network, a first classification sub-network (also called the coarse classification sub-network), and a second classification sub-network (also called the fine classification sub-network).
Specifically, feature extraction is performed on the ultrasound image to be identified by the backbone network to obtain the extracted features; the extracted features are fused by the feature interaction network to obtain the image features corresponding to the ultrasound image; all foreground targets in the ultrasound image are identified from the image features by the foreground-background classification sub-network, yielding the positioning information of each foreground target contained in the image; a first classification of all section targets in the ultrasound image is performed from the foreground features by the first classification sub-network, yielding the first-category information of each section target; all section targets in the ultrasound image are located from the foreground features by the localization sub-network, yielding the position information of each section target; and a second classification of each section target is performed from the section features by the second classification sub-network, yielding the second-category information of each section target contained in the ultrasound image.
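For illustration only, the flow just described can be summarized in a short sketch. The following is a minimal PyTorch-style composition of the stages, assuming component modules like those detailed in the remainder of this description; all class, attribute, and method names here are illustrative assumptions rather than the patent's reference implementation.

```python
import torch

class SectionIdentificationModel(torch.nn.Module):
    """Illustrative composition of the networks described in this embodiment."""

    def __init__(self, backbone, feature_interaction, fg_bg_head, roi_head, fine_head):
        super().__init__()
        self.backbone = backbone                        # ResNet-50 producing C2..C5
        self.feature_interaction = feature_interaction  # fuses C2..C5 into P2..P6
        self.fg_bg_head = fg_bg_head                    # foreground/background classification
        self.roi_head = roi_head                        # localization + coarse classification
        self.fine_head = fine_head                      # fine classification (standard / non-standard)

    def forward(self, image):
        c_feats = self.backbone(image)                  # extracted features C2..C5
        p_feats = self.feature_interaction(c_feats)     # fused features P2..P6
        rois = self.fg_bg_head(p_feats)                 # positions/sizes of foreground regions
        boxes, coarse = self.roi_head(p_feats, rois)    # refined positions + coarse categories
        fine = self.fine_head(p_feats, boxes)           # fine categories for the refined boxes
        return boxes, coarse, fine
```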
For the backbone network, the input is the ultrasound image to be identified and the output is the extracted features. The specific structure of the backbone network is as follows:
The first layer is the input layer, which accepts a 736 × 960 × 3 pixel matrix. After the ultrasound image to be identified is acquired, it is processed as follows: keeping the aspect ratio unchanged, the image is scaled so that its long side does not exceed 960 pixels and its short side does not exceed 736 pixels, and the short dimensions are filled with 0 to reach 960 pixels wide and 736 pixels high, meeting the input layer's size requirement.
The second layer is the feature extraction layer, which uses the feature extraction network ResNet-50 and takes the outputs of stages 2, 3, 4, and 5 of ResNet-50 as the extracted features C2, C3, C4, and C5, whose sizes are 184 × 240 × 256, 92 × 120 × 512, 46 × 60 × 1024, and 23 × 30 × 2048 respectively.
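A minimal sketch of obtaining C2 to C5, assuming a PyTorch environment; torchvision names ResNet-50's stages 2 to 5 as layer1 to layer4, and the printed shapes follow the channels-first convention.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# Stages 2-5 of ResNet-50 correspond to layer1-layer4 in torchvision naming.
backbone = create_feature_extractor(
    resnet50(weights=None),
    return_nodes={"layer1": "C2", "layer2": "C3", "layer3": "C4", "layer4": "C5"},
)

x = torch.randn(1, 3, 736, 960)      # padded input produced by the first layer
feats = backbone(x)
for name, t in feats.items():
    print(name, tuple(t.shape))      # C2: (1, 256, 184, 240) ... C5: (1, 2048, 23, 30)
```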
For the feature interaction network, the inputs are the features C2, C3, C4, and C5 output by the backbone network, and the outputs are the fused features P2, P3, P4, P5, and P6 at 5 scales. The specific structure of the feature interaction network is as follows:
the first layer is a ConvBnRelu type convolutional layer based on the feature layer C5, with a convolutional kernel size of 1 × 1, a number of output channels of 256, a step size of 1, a VALID pattern fill, and an output matrix of 23 × 30 × 256, denoted as M5.
The second layer is a ConvBnRelu type convolutional layer based on the feature layer C4, with a convolutional kernel size of 1 x 1, output channels of 256, step size of 1, VALID pattern filling, and output matrix of 46 x 60 x 256, denoted as M4.
The third layer is a ConvBnRelu type convolution layer based on the feature layer C3, with convolution kernel size 1 x 1, output channel 256, step size 1, VALID pattern filling, output matrix 92 x 120 x 256, denoted M3.
The fourth layer is a ConvBnRelu type convolution layer based on the feature layer C2, with convolution kernel size 1 x 1, output channel 256, step 1, VALID pattern fill, output matrix 184 x 240 x 256, denoted M2.
The fifth layer was a2 x 2 upsampled layer based on M5 with an output matrix of 46 x 60 x 256.
The sixth layer is an ADD layer, adding the fifth layer output to M4, with an output matrix of 46 x 60 x 256, denoted as a 4.
The seventh layer is a2 x 2 upsampled layer based on a4, and the output matrix is 92 x 120 x 256.
The eighth layer is the ADD layer, adding the outputs of the seventh layer to M3, with an output matrix of 92 × 120 × 256, denoted as A3.
The ninth tier is a2 x 2 upsampled tier based on a3, with an output matrix of 184 x 240 x 256.
The tenth layer is the ADD layer, and the output of the ninth layer is added to M2, with an output matrix 184 × 240 × 256, denoted as a 2.
The eleventh layer is a ConvBnRelu type convolution layer based on a2 with convolution kernel size 3 x 3, output channel 256, SAME fill pattern, output matrix size 184 x 240 x 256, denoted P2.
The twelfth layer is a ConvBnRelu type convolution layer based on a3, with a convolution kernel size of 3 × 3, output channels of 256, SAME fill pattern, and output matrix size of 92 × 120 × 256, denoted as P3.
The thirteenth layer is a ConvBnRelu type convolution layer based on a4, with a convolution kernel size of 3 × 3, output channels of 256, SAME fill pattern, output matrix size of 46 × 60 × 256, and is denoted as P4.
The fourteenth layer is a ConvBnRelu type convolution layer based on M5, with convolution kernel size 3 × 3, output channel 256, SAME fill pattern, output matrix size 23 × 30 × 256, denoted P5.
The fifteenth layer is a MaxPooling layer based on P5, with pooling size 1 × 1 and step size 2; the output matrix is 12 × 15 × 256, denoted P6.
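Taken together, the fifteen layers describe an FPN-style fusion. A compact PyTorch sketch follows, assuming ConvBnRelu means Conv2d + BatchNorm2d + ReLU and that the 2 × 2 upsampling is nearest-neighbor (the interpolation mode is not specified above); module and variable names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(in_ch, out_ch, k):
    pad = 0 if k == 1 else k // 2        # VALID for 1x1, SAME for 3x3
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=pad),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class FeatureInteractionNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.lat5 = conv_bn_relu(2048, 256, 1)   # layers 1-4: 1x1 lateral convolutions
        self.lat4 = conv_bn_relu(1024, 256, 1)
        self.lat3 = conv_bn_relu(512, 256, 1)
        self.lat2 = conv_bn_relu(256, 256, 1)
        self.out2 = conv_bn_relu(256, 256, 3)    # layers 11-14: 3x3 output convolutions
        self.out3 = conv_bn_relu(256, 256, 3)
        self.out4 = conv_bn_relu(256, 256, 3)
        self.out5 = conv_bn_relu(256, 256, 3)

    def forward(self, c2, c3, c4, c5):
        m5 = self.lat5(c5)
        a4 = self.lat4(c4) + F.interpolate(m5, scale_factor=2)  # layers 5-6
        a3 = self.lat3(c3) + F.interpolate(a4, scale_factor=2)  # layers 7-8
        a2 = self.lat2(c2) + F.interpolate(a3, scale_factor=2)  # layers 9-10
        p2, p3, p4, p5 = self.out2(a2), self.out3(a3), self.out4(a4), self.out5(m5)
        p6 = F.max_pool2d(p5, kernel_size=1, stride=2)           # layer 15
        return p2, p3, p4, p5, p6
```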
For the foreground-background classification sub-network, the inputs are the features P2, P3, P4, P5, and P6 output by the feature interaction network, and the outputs are the positions and sizes of the regions classified as foreground; a region classified as foreground is also called a region of interest. The specific structure of the foreground-background classification sub-network is as follows:
the first layer was a ConvBnRelu type convolution layer based on P2, P3, P4, P5, P6, respectively, with a convolution kernel size of 3 x 3, output channels of 512, SAME pattern filling, output matrices of 184 x 240 x 512, 92 x 120 x 512, 46 x 60 x 512, 23 x 30 x 512, 12 x 15 x 256, respectively, these outputs collectively denoted as S.
The second layer is a ConvBnSigmoid convolution layer on an S basis with convolution kernel size 1 x 1, output channels 18, and output matrices 184 x 240 x 18, 92 x 120 x 18, 46 x 60 x 18, 23 x 30 x 18, 12 x 15 x 18, respectively.
The third layer is a Reshape layer and the outputs of the second layer are all reshaped into a matrix with a depth of 2, with the output matrices being 397440 x 2, 99360 x 2, 24840 x 2, 6210 x 2, 1620 x 2, respectively.
The fourth layer is a localization layer, all the outputs of the third layer are connected along the first dimension, the output matrix is 529470 × 2, and the result is the classification result of the foreground and the background.
The fifth layer is a ConvBn type convolution layer on an S basis with convolution kernel size 1 x 1, output channel 36, output matrix 184 x 240 x 36, 92 x 120 x 36, 46 x 60 x 36, 23 x 30 x 36, 12 x 15 x 36, respectively.
The sixth layer is a Reshape layer and the outputs of the fifth layer are all reshaped into matrices of depth 4, with the output matrices being 397440 × 4, 99360 × 4, 24840 × 4, 6210 × 4, 1620 × 4, respectively.
The seventh layer is the localization layer, all the outputs of the sixth layer are connected along the first dimension, the output matrix is 529470 × 4, and the result is the approximate positioning result of the region where the foreground and the background are located.
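A per-level sketch of this sub-network, assuming the 18 = 9 × 2 score channels and 36 = 9 × 4 regression channels correspond to 9 anchors per location; names are illustrative.

```python
import torch
import torch.nn as nn

class ForegroundBackgroundHead(nn.Module):
    """Applied to each of P2..P6; outputs are reshaped and concatenated."""

    def __init__(self, anchors_per_loc=9):
        super().__init__()
        self.shared = nn.Sequential(                 # first layer: 3x3 ConvBnRelu, 512 channels
            nn.Conv2d(256, 512, 3, padding=1),
            nn.BatchNorm2d(512), nn.ReLU(inplace=True))
        self.cls = nn.Sequential(                    # second layer: 1x1 ConvBnSigmoid, 18 channels
            nn.Conv2d(512, anchors_per_loc * 2, 1),
            nn.BatchNorm2d(anchors_per_loc * 2), nn.Sigmoid())
        self.reg = nn.Sequential(                    # fifth layer: 1x1 ConvBn, 36 channels
            nn.Conv2d(512, anchors_per_loc * 4, 1),
            nn.BatchNorm2d(anchors_per_loc * 4))

    def forward(self, pyramid):
        scores, deltas = [], []
        for p in pyramid:                            # P2..P6
            s = self.shared(p)
            scores.append(self.cls(s).permute(0, 2, 3, 1).reshape(-1, 2))  # reshape layers
            deltas.append(self.reg(s).permute(0, 2, 3, 1).reshape(-1, 4))
        return torch.cat(scores, 0), torch.cat(deltas, 0)                  # concat: 529470 x 2 / x 4
```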
For the network head, as shown in fig. 3, it consists of three parallel sub-networks: the localization sub-network, the coarse classification sub-network, and the fine classification sub-network. The inputs of the localization sub-network and the coarse classification sub-network are the features P2, P3, P4, P5, and P6 output by the feature interaction network and the approximate positioning results of the regions of interest produced by the foreground-background classification sub-network. The head crops the features of the corresponding regions out of P2, P3, P4, P5, and P6 according to the position of each region of interest, and the cropped features feed the coarse classification sub-network and the localization sub-network. The specific structure of the localization sub-network and the coarse classification sub-network is as follows:
the first layer was a ROIPooling layer, 7 × 7 in size, and each region of interest feature was truncated and sorted into a 7 × 256 matrix.
The second layer is a Flatten layer, and the characteristics of each region arranged in the first layer are unfolded into vectors of 12544.
The third layer is a DenseRelu layer based on the output result of the second layer, the number of the neurons is 1024, and the output matrix of each characteristic region is a 1024 vector.
The fourth layer is a DenseRelu layer based on the output result of the third layer, the number of the neurons is 1024, and the output matrix of each characteristic region is a 1024 vector. The first to fourth layers are network layers that are shared by the positioning sub-network and the large classification sub-network.
And the fifth layer is a Dense layer based on the output result of the fourth layer, the number of the neurons is 4 x N, the output matrix of each characteristic region is a vector of 4 x N, the result represents the fine position of each interested region, and N represents the number of classes related to the large classification.
The sixth layer is a DenseSoft max layer based on the output result of the fourth layer, the number of neurons is N, the output matrix of each characteristic region is a vector of N, and the result represents a large classification category to which each region of interest belongs.
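A sketch of these six layers, using torchvision's roi_pool and a single feature map for simplicity (the description crops from all of P2 to P6); N is the number of coarse categories, and all names are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class LocalizationAndCoarseHead(nn.Module):
    """Shared FC trunk (layers 1-4) with a box branch and a coarse-class branch."""

    def __init__(self, num_coarse_classes):                       # N in the text
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),                                         # layer 2: 7*7*256 -> 12544
            nn.Linear(7 * 7 * 256, 1024), nn.ReLU(inplace=True),  # layer 3
            nn.Linear(1024, 1024), nn.ReLU(inplace=True))         # layer 4
        self.box = nn.Linear(1024, 4 * num_coarse_classes)        # layer 5: refined positions
        self.cls = nn.Linear(1024, num_coarse_classes)            # layer 6: coarse category

    def forward(self, feature_map, rois, spatial_scale):
        # rois: (K, 5) tensor of (batch_idx, x1, y1, x2, y2) from the fg/bg sub-network
        pooled = roi_pool(feature_map, rois, output_size=(7, 7),
                          spatial_scale=spatial_scale)            # layer 1: ROIPooling
        h = self.fc(pooled)
        return self.box(h), torch.softmax(self.cls(h), dim=1)
```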
In the inference phase, the inputs of the fine classification sub-network are the features P2, P3, P4, P5, and P6 output by the feature interaction network and the refined positions of the regions of interest produced by the localization sub-network from those features and the approximate foreground positioning results. The network crops the features of the corresponding regions out of P2, P3, P4, P5, and P6 according to the position of each region of interest, and the cropped features feed the fine classification sub-network. The specific structure of the fine classification sub-network is as follows:
the first layer was a ROIPooling layer, 8 × 8 in size, and each region of interest feature was truncated and sorted into a matrix of 8 × 256.
The second layer is a ConvBnRelu type convolutional layer with convolution kernel size 1 x 1, output channel 256, VALID fill pattern, and output matrix 8 x 256.
The third layer is a ConvBnRelu type convolution layer with convolution kernel size 3 x 3, output channels 256, SAME fill pattern, and output matrix 8 x 256.
The fourth layer is a ConvBnRelu type convolutional layer with convolution kernel size of 1 x 1, output channel 512, VALID fill pattern, and output matrix 8 x 512.
The fifth layer was a MaxPooling layer with a pooling size of 8 x 8 and an output matrix of 1 x 512.
The sixth layer is a ConvBnRelu layer with convolution kernel size 1 x 1, output channels 64, and output matrix 1 x 64.
The seventh layer is a ConvBnRelu layer, with convolution kernel size 1 x 1, output channel 512, and output matrix 1 x 512.
The eighth layer is a dot-packed layer, and the fourth layer output and the seventh layer output are subjected to matrix multiplication, and the output matrix is 8 × 512.
The ninth layer was a MaxPooling layer with a pooling size of 2 x 2 and an output matrix of 4 x 512.
The tenth layer is the Flatten layer, which expands the output of the ninth layer into a vector of 8192.
The eleventh layer is a Dense layer, the number of output channels is H, the output matrix is a vector with the length of H, and the result represents the fine classification category corresponding to the intercepted area feature, wherein H is the number of categories related to the fine classification.
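A sketch of this branch, reading the eighth dot-product layer as a broadcast channel-wise multiplication in which the 1 × 1 × 512 vector re-weights the 8 × 8 × 512 map, the reading consistent with the stated shapes; this interpretation and all names are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

def cbr(in_ch, out_ch, k, pad=0):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=pad),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class FineClassificationHead(nn.Module):
    def __init__(self, num_fine_classes):          # H in the text
        super().__init__()
        self.trunk = nn.Sequential(                # layers 2-4
            cbr(256, 256, 1), cbr(256, 256, 3, pad=1), cbr(256, 512, 1))
        self.gate = nn.Sequential(                 # layers 5-7: squeeze-and-reweight branch
            nn.MaxPool2d(8),                       # 8x8x512 -> 1x1x512
            cbr(512, 64, 1), cbr(64, 512, 1))
        self.classifier = nn.Sequential(           # layers 9-11
            nn.MaxPool2d(2), nn.Flatten(), nn.Linear(4 * 4 * 512, num_fine_classes))

    def forward(self, feature_map, rois, spatial_scale):
        x = roi_pool(feature_map, rois, output_size=(8, 8),
                     spatial_scale=spatial_scale)  # layer 1: 8x8x256 per region
        t = self.trunk(x)                          # 8x8x512
        w = self.gate(t)                           # 1x1x512 channel weights
        y = t * w                                  # layer 8: broadcast "dot-product"
        return self.classifier(y)                  # fine-category logits (length H)
```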
Fig. 4 shows, on the right, the output of the section identification model of one embodiment. Compared with the output on the left, where only one section category is predicted for a single ultrasound image, the section identification model of the above embodiments can predict multiple section categories for a single ultrasound image, thereby effectively identifying the multiple section targets it contains.
In an embodiment, the training process of the above section identification model includes the following steps: acquiring sample ultrasound images and corresponding annotation information, where the annotation information comprises the position information and category information annotated for each section target in the sample ultrasound image; inputting the sample ultrasound image into the section identification model to be trained to obtain identification information corresponding to the sample ultrasound image, where the identification information comprises the position information and category information of each section target identified in the sample ultrasound image; and adjusting the parameters of each network in the model to be trained based on the identification information and the annotation information until the end-of-training condition is met, obtaining the trained section identification model.
Obtaining the sample ultrasound images specifically comprises the following steps: acquiring initial sample ultrasound images; selecting a preset proportion of the initial sample ultrasound images for image enhancement to obtain corresponding enhanced sample ultrasound images; and determining the sample ultrasound images from the initial sample ultrasound images and the enhanced sample ultrasound images.
Specifically, the initial sample ultrasound images (i.e., the data set) are about 160,000 fetal ultrasound section images obtained from ultrasound equipment made by mainstream manufacturers on the market (including Samsung, Siemens, Kelly, etc.), covering the 42 sections required for prenatal fetal ultrasound examination with roughly 4,000 images per section. The fetal ultrasound section images are randomly divided into 3 parts: 80% as the training set, 10% as the validation set, and 10% as the test set.
The annotation information for the data set can be produced by experts, yielding an annotated data set. Concretely, every section contained in an initial sample ultrasound image is marked with a rectangular frame; the annotated position information comprises the frame's center coordinates, width, and height, and the annotated category information indicates whether the frame belongs to foreground or background and which coarse and fine categories it belongs to. A clustering algorithm is run over the annotated data set to obtain the 5 sizes that best represent the key regions of all sections (e.g., 64 × 64, 128 × 128, 256 × 256, 512 × 512, 704 × 704); combining these with aspect ratios of 0.5, 1.0, and 2.0 yields the anchors used in the neural network, which can be understood as follows: for each pixel in each sample ultrasound image, 15 frames of different sizes are laid down centered on that pixel.
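The anchor construction can be illustrated as follows; how the 15 shapes are distributed across the pyramid levels is not specified above, and the ratio convention (here h/w with area preserved) is an assumption.

```python
import numpy as np

def generate_anchor_shapes(base_sizes=(64, 128, 256, 512, 704),
                           aspect_ratios=(0.5, 1.0, 2.0)):
    """15 (w, h) anchor shapes: 5 clustered sizes x 3 aspect ratios."""
    shapes = []
    for s in base_sizes:
        for r in aspect_ratios:
            w = s / np.sqrt(r)       # ratio r = h / w, area s*s preserved
            h = s * np.sqrt(r)
            shapes.append((w, h))
    return np.array(shapes)          # shape (15, 2)

def anchors_at(cx, cy, shapes):
    """Anchor boxes (x1, y1, x2, y2) centered on pixel (cx, cy)."""
    return np.stack([cx - shapes[:, 0] / 2, cy - shapes[:, 1] / 2,
                     cx + shapes[:, 0] / 2, cy + shapes[:, 1] / 2], axis=1)
```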
The annotated data set is preprocessed as follows. Each initial sample ultrasound image is scaled, keeping its aspect ratio unchanged, so that the long side does not exceed 960 pixels and the short side does not exceed 736 pixels, and the short dimensions are filled with 0 to reach 960 pixels wide and 736 pixels high. A preset proportion (below 50%) of the scaled initial sample ultrasound images is then selected at random, and one or more of the following enhancement operations are randomly applied to each selected image: flipping the image up-down, flipping it left-right, rotating it by a random angle between -20 and +20 degrees about the image center, randomly perturbing the brightness by 5 to 10 pixel values, and enhancing the contrast by a factor of 0.8 to 1.2; this yields the corresponding enhanced sample ultrasound images. All scaled initial sample ultrasound images and all enhanced sample ultrasound images are used together as the sample ultrasound images for training, which improves the robustness of the model. Finally, the pixel mean and variance of all sample ultrasound images are computed, and every sample ultrasound image is standardized with this mean and variance to obtain the final preprocessed sample ultrasound images for model training.
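The enhancement operations above might look as follows in NumPy/OpenCV; the per-operation probabilities are assumptions, and in practice the annotated detection frames must be transformed alongside the flipped or rotated image.

```python
import random
import cv2
import numpy as np

def augment(image, p=0.5):
    """Randomly apply one or more of the enhancement operations described above."""
    if random.random() < p:
        image = cv2.flip(image, 0)                 # flip up-down
    if random.random() < p:
        image = cv2.flip(image, 1)                 # flip left-right
    if random.random() < p:                        # rotate -20..+20 degrees about the center
        h, w = image.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-20, 20), 1.0)
        image = cv2.warpAffine(image, m, (w, h))
    if random.random() < p:                        # perturb brightness by 5-10 pixel values
        delta = random.choice([-1, 1]) * random.randint(5, 10)
        image = np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)
    if random.random() < p:                        # contrast factor 0.8x - 1.2x
        image = np.clip(image.astype(np.float32) * random.uniform(0.8, 1.2),
                        0, 255).astype(np.uint8)
    return image
```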
The sample ultrasound image is input into the section identification model to be trained to obtain the corresponding identification information, which comprises the information of each detection frame. Each detection frame corresponds to one identified section target; the position information of the section target comprises the center coordinates, width, and height of the corresponding frame, and the category information of the section target comprises whether the frame belongs to foreground or background, together with its coarse and fine classification categories.
The loss function used in the section identification model comprises 4 parts, which compute the losses of the foreground-background classification sub-network, the localization sub-network, the coarse classification sub-network, and the fine classification sub-network respectively.
The loss function of the foreground-background classification sub-network (binary cross-entropy) is as follows:
$$L_{fg/bg} = -\frac{1}{m}\sum_{i=1}^{m}\Big[y_i\log h_w(x_i) + (1-y_i)\log\big(1-h_w(x_i)\big)\Big]$$
where $m$ denotes the number of samples, $y_i$ denotes the label indicating whether the $i$-th sample is foreground, $x_i$ denotes the $i$-th sample, and $h_w$ denotes the mapping function from a sample to its foreground/background class.
The loss function of the localization sub-network (Smooth L1) is as follows:
$$L_{loc}(t^u, v) = \sum_{i \in \{x, y, w, h\}} \mathrm{smooth}_{L_1}\big(t_i^u - v_i\big)$$
$$\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5\,x^2 & \text{if } |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}$$
where $t^u$ denotes the predicted bounding-box regression values and $v$ denotes the corresponding ground-truth values from the annotation information.
The loss function of the coarse classification sub-network (categorical cross-entropy) is as follows:
$$L_{cls} = -\sum_{i} y_i \log\frac{e^{Z_i}}{\sum_{j} e^{Z_j}}$$
where $i$ denotes the $i$-th class and $Z_i$ denotes the logit of the target for the $i$-th class.
The loss function of the fine classification sub-network (Arc-Softmax) is as follows:
$$L_{arc} = -\frac{1}{N}\sum_{j=1}^{N}\log\frac{e^{\,s\cos(\theta_{y_j}+m)}}{e^{\,s\cos(\theta_{y_j}+m)} + \sum_{k\neq y_j} e^{\,s\cos\theta_k}}$$
where $m$ denotes the additive angular margin (0.35 may be used), $s$ denotes the feature scale (36.0 may be used), and $\theta_j$ denotes the angle between the embedded feature of the $j$-th target region and the corresponding class center.
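For concreteness, an Arc-Softmax penalty of this form can be sketched as follows, assuming normalized embeddings and learnable class centers; this is an illustrative reading, not the patent's implementation.

```python
import torch
import torch.nn.functional as F

def arc_softmax_loss(embeddings, class_centers, labels, s=36.0, m=0.35):
    """Cosine similarity to class centers, additive angular margin m on the
    true class, scaled by s, followed by cross-entropy."""
    emb = F.normalize(embeddings, dim=1)           # (B, D)
    ctr = F.normalize(class_centers, dim=1)        # (H, D)
    cos = emb @ ctr.t()                            # cos(theta), shape (B, H)
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, cos.size(1)).bool()
    logits = torch.where(target, torch.cos(theta + m), cos) * s
    return F.cross_entropy(logits, labels)
```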
The output of each sub-network and the corresponding annotation information are fed into that sub-network's loss function to obtain a loss value, which is optimized with the SGD algorithm so as to progressively update the parameters of each sub-network. During optimization, the learning rate lr is first raised gradually from 0.00005 to 0.001 through a warm-up phase; thereafter it is decayed once every 10,000 iterations with a decay weight of 0.004. Training ends when the model converges or a preset number of iterations is reached; the training process comprises 300 epochs of 10,000 iterations each.
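The schedule can be sketched as follows; the warm-up length and the exact decay rule are not specified above, so the linear warm-up and inverse-time decay below are assumptions.

```python
def learning_rate(it, warmup_end=10_000, start_lr=5e-5, base_lr=1e-3,
                  decay_every=10_000, decay_rate=0.004):
    """Linear warm-up from 0.00005 to 0.001, then one decay per 10,000
    iterations using lr = base_lr / (1 + decay_rate * n_decays)."""
    if it < warmup_end:
        return start_lr + (base_lr - start_lr) * it / warmup_end
    n_decays = (it - warmup_end) // decay_every
    return base_lr / (1.0 + decay_rate * n_decays)
```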
The trained section identification model is then verified with the validation set: an ultrasound image is input into the trained model, which automatically identifies it and outputs the section category together with a judgment of whether the section is standard. The coarse classification accuracy, fine classification accuracy, average standard-section precision, and average standard-section recall of the trained model on the validation set are shown in Table 1 below:
TABLE 1
Coarse classification accuracy: 99.26%
Fine classification accuracy: 98.79%
Standard-section precision: 99.50%
Standard-section recall: 94.32%
As the table shows, recall on standard sections remains very high while standard-section precision exceeds 99%.
With this embodiment, standard sections can be identified automatically from ultrasound images without manual intervention, which effectively reduces or even eliminates the sonographer's need to freeze the frame manually to capture relevant sections during an examination, improving the sonographer's working efficiency. Moreover, because the scheme accounts for the single-image multi-section situation that can arise during prenatal fetal ultrasound examination, it applies broadly to standard-section selection in all prenatal fetal ultrasound examinations and meets, in one stop, the selection requirements of every standard section involved. This avoids the complexity of manually grouping data and the loss of cross-group knowledge, which effectively increases the richness and completeness of the data available for model training and learning and improves overall identification accuracy. In addition, a one-stop solution means lower computing-resource overhead, reducing the hardware cost of actually deploying the scheme.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be performed at different moments, and need not proceed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, an ultrasonic section identification system 500 is provided, comprising: an obtaining module 510, a feature extraction module 520, a foreground identification module 530, and a section identification module 540, wherein:
an obtaining module 510, configured to obtain an ultrasound image to be identified.
The feature extraction module 520 is configured to perform feature extraction on the ultrasound image to be identified to obtain corresponding image features.
The foreground identification module 530 is configured to identify all foreground targets in the ultrasound image to be identified according to the image features, and to obtain positioning information for each foreground target contained in the ultrasound image.
The section identification module 540 is configured to identify all section targets in the ultrasound image to be identified according to the positioning information and the image features, and to obtain first-category information and position information for each section target contained in the ultrasound image.
In one embodiment, the section identification module 540 includes a first cropping unit and a first identification unit. The first cropping unit is configured to crop the features of the corresponding regions out of the image features according to the positioning information to obtain foreground features. The first identification unit is configured to identify all section targets in the ultrasound image to be identified according to the foreground features, and to obtain first-category information and position information for each section target contained in the ultrasound image.
In one embodiment, the feature extraction module 520 is specifically configured to: perform feature extraction on the ultrasound image to be identified through the backbone network in the trained section identification model to obtain extracted features; and fuse the extracted features through the feature interaction network in the section identification model to obtain the image features corresponding to the ultrasound image.
In one embodiment, the foreground identification module 530 is specifically configured to: identify all foreground targets in the ultrasound image to be identified according to the image features through the foreground-background classification sub-network in the section identification model, and obtain positioning information for each foreground target contained in the ultrasound image.
In one embodiment, the first identification unit is specifically configured to: perform a first classification of all section targets in the ultrasound image to be identified through the first classification sub-network in the section identification model according to the foreground features, obtaining first-category information for each section target contained in the ultrasound image; and locate all section targets in the ultrasound image through the localization sub-network in the section identification model according to the foreground features, obtaining position information for each section target contained in the ultrasound image.
In one embodiment, the section identification module 540 further includes a second cropping unit and a second identification unit. The second cropping unit is configured to crop the features of the corresponding regions out of the image features according to the position information to obtain section features. The second identification unit is configured to perform a second classification of each section target contained in the ultrasound image to be identified according to the section features, obtaining second-category information for each section target.
In one embodiment, the second identification unit is specifically configured to: perform a second classification of each section target contained in the ultrasound image to be identified through the second classification sub-network in the section identification model according to the section features, obtaining second-category information for each section target contained in the ultrasound image.
In one embodiment, the system further comprises a training module for training the section identification model. The section identification model comprises a backbone network, a feature interaction network, a foreground-background classification sub-network, and a network head connected in sequence, where the network head comprises a localization sub-network, a first classification sub-network, and a second classification sub-network in parallel. The training module is specifically configured to: acquire sample ultrasound images and corresponding annotation information, where the annotation information comprises the position information and category information annotated for each section target in the sample ultrasound image; input the sample ultrasound image into the section identification model to be trained to obtain identification information corresponding to the sample ultrasound image, where the identification information comprises the position information and category information of each section target identified in the sample ultrasound image; and adjust the parameters of each network in the model to be trained based on the identification information and the annotation information until the end-of-training condition is met, obtaining the trained section identification model.
In one embodiment, when acquiring the sample ultrasound images, the training module is specifically configured to: acquire initial sample ultrasound images; select a preset proportion of the initial sample ultrasound images for image enhancement, obtaining corresponding enhanced sample ultrasound images; and determine the sample ultrasound images from the initial sample ultrasound images and the enhanced sample ultrasound images.
For the specific definition of the ultrasonic section identification system, reference may be made to the definition of the ultrasonic section identification method above, which is not repeated here. The modules of the ultrasonic section identification system can be realized wholly or partially in software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, the processor of a computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an ultrasonic section identification method.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program is executed by a processor to implement an ultrasonic section identification method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the configurations shown in fig. 6 or 7 are merely block diagrams of some configurations relevant to the present disclosure, and do not constitute a limitation on the computing devices to which the present disclosure may be applied, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above-described method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It should be understood that the terms "first", "second", etc. in the above-described embodiments are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An ultrasonic section identification method is characterized by comprising the following steps:
acquiring an ultrasonic image to be identified;
performing feature extraction on the ultrasonic image to be identified to obtain corresponding image features;
identifying all foreground targets in the ultrasonic image to be identified according to the image features, and obtaining positioning information of each foreground target contained in the ultrasonic image to be identified;
and identifying all section targets in the ultrasonic image to be identified according to the positioning information and the image features, and obtaining first category information and position information of each section target contained in the ultrasonic image to be identified.
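For orientation only, the flow of claim 1 can be sketched in code. This is a minimal PyTorch-style sketch under assumed shapes and names: `SectionIdentifier`, the layer sizes, and the five-category head are illustrative, not taken from the patent.

```python
# Minimal sketch of the claimed flow; all module names, layer sizes, and the
# five-category head are illustrative assumptions, not taken from the patent.
import torch
import torch.nn as nn

class SectionIdentifier(nn.Module):
    def __init__(self, num_categories: int = 5):
        super().__init__()
        self.features = nn.Sequential(               # stand-in feature extractor
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.foreground = nn.Conv2d(32, 1, 1)        # foreground/background scores
        self.category = nn.Conv2d(32, num_categories, 1)  # first-category logits
        self.position = nn.Conv2d(32, 4, 1)          # per-location box offsets

    def forward(self, image):
        f = self.features(image)                     # image features
        return self.foreground(f), self.category(f), self.position(f)

model = SectionIdentifier()
fg, cat, pos = model(torch.rand(1, 1, 256, 256))     # one grayscale ultrasound frame
print(fg.shape, cat.shape, pos.shape)                # (1,1,64,64) (1,5,64,64) (1,4,64,64)
```

In this sketch the foreground map plays the role of the positioning information, while the category and position maps together yield the first category information and position information of each section target.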
2. The method of claim 1, wherein identifying all section targets in the ultrasonic image to be identified according to the positioning information and the image features, and obtaining the first category information and position information of each section target contained in the ultrasonic image to be identified, comprises:
cropping features of the corresponding region from the image features according to the positioning information to obtain foreground features;
and identifying all section targets in the ultrasonic image to be identified according to the foreground features, and obtaining the first category information and position information of each section target contained in the ultrasonic image to be identified.
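The cropping step of claim 2 maps naturally onto region-of-interest pooling. Below is a minimal sketch using `torchvision.ops.roi_align`; the choice of RoIAlign, the 256×256 image size, and the stride-4 feature map are assumptions for illustration, not requirements of the claim.

```python
# Cropping region features from a shared feature map, as in claim 2.
# RoIAlign is one common realization; the claim itself does not mandate it.
import torch
from torchvision.ops import roi_align

feature_map = torch.rand(1, 32, 64, 64)          # output of the feature extractor
# Positioning info for two foreground targets, (x1, y1, x2, y2) in image
# coordinates; a 256x256 input is assumed, so the feature stride is 4.
boxes = [torch.tensor([[32.0, 32.0, 96.0, 96.0],
                       [128.0, 64.0, 224.0, 160.0]])]
foreground_feats = roi_align(feature_map, boxes, output_size=(7, 7),
                             spatial_scale=64 / 256)
print(foreground_feats.shape)                    # torch.Size([2, 32, 7, 7])
```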
3. The method of claim 2, comprising at least one of:
the first item:
performing feature extraction on the ultrasonic image to be identified to obtain corresponding image features comprises:
extracting features from the ultrasonic image to be identified through a backbone network in a trained section identification model, to obtain extracted features;
and fusing the extracted features through a feature interaction network in the section identification model, to obtain the image features corresponding to the ultrasonic image to be identified;
The second item:
identifying all foreground targets in the ultrasonic image to be identified according to the image features and obtaining the positioning information of each foreground target contained in the ultrasonic image to be identified comprises:
identifying all foreground targets in the ultrasonic image to be identified according to the image features through a foreground-background classification sub-network in the section identification model, and obtaining the positioning information of each foreground target contained in the ultrasonic image to be identified;
the third item:
identifying all section targets in the ultrasonic image to be identified according to the foreground features and obtaining the first category information and position information of each section target contained in the ultrasonic image to be identified comprises:
performing first classification on all section targets in the ultrasonic image to be identified according to the foreground features through a first classification sub-network in the section identification model, to obtain the first category information of each section target contained in the ultrasonic image to be identified;
and positioning all section targets in the ultrasonic image to be identified according to the foreground features through a positioning sub-network in the section identification model, to obtain the position information of each section target contained in the ultrasonic image to be identified.
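Claim 3's third item describes two sub-networks reading the same foreground features in parallel. A minimal sketch of such a head follows; the linear layers, the 32×7×7 feature size, and the five first categories are illustrative assumptions.

```python
# Sketch of the parallel head in claim 3: a first-classification branch and a
# positioning branch both read the same cropped foreground features.
import torch
import torch.nn as nn

class ParallelHead(nn.Module):
    def __init__(self, in_dim: int = 32 * 7 * 7, num_first_categories: int = 5):
        super().__init__()
        self.first_cls = nn.Linear(in_dim, num_first_categories)  # first category
        self.locate = nn.Linear(in_dim, 4)                        # box refinement

    def forward(self, foreground_feats):       # (N, 32, 7, 7) cropped features
        flat = foreground_feats.flatten(1)
        return self.first_cls(flat), self.locate(flat)

head = ParallelHead()
first_logits, positions = head(torch.rand(2, 32, 7, 7))
print(first_logits.shape, positions.shape)     # torch.Size([2, 5]) torch.Size([2, 4])
```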
4. The method of any one of claims 1 to 3, further comprising:
cropping features of the corresponding region from the image features according to the position information to obtain section features;
and performing second classification on each section target contained in the ultrasonic image to be identified according to the section features, to obtain second category information of each section target contained in the ultrasonic image to be identified.
5. The method of claim 4, wherein performing second classification on each section target contained in the ultrasonic image to be identified according to the section features, to obtain the second category information of each section target contained in the ultrasonic image to be identified, comprises:
performing second classification on each section target contained in the ultrasonic image to be identified according to the section features through a second classification sub-network in the section identification model, to obtain the second category information of each section target contained in the ultrasonic image to be identified.
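Claims 4 and 5 re-crop the shared feature map at the predicted section positions and classify again. A minimal sketch, reusing the RoIAlign idea above; treating the second category as binary (e.g. standard vs. non-standard section) is an assumption made here for illustration, not something the claims specify.

```python
# Sketch of claims 4-5: re-crop the shared feature map at the predicted section
# positions, then apply a second classification sub-network. The binary second
# category is an illustrative assumption.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

second_cls = nn.Linear(32 * 7 * 7, 2)                  # second-category logits
feature_map = torch.rand(1, 32, 64, 64)                # shared image features
predicted_boxes = [torch.tensor([[40.0, 40.0, 120.0, 120.0]])]  # position info
section_feats = roi_align(feature_map, predicted_boxes, output_size=(7, 7),
                          spatial_scale=64 / 256)      # crop section features
second_logits = second_cls(section_feats.flatten(1))
print(second_logits.shape)                             # torch.Size([1, 2])
```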
6. The method of claim 5, wherein the section identification model comprises a backbone network, a feature interaction network, a foreground-background classification sub-network, and a network head connected in sequence, the network head comprising a positioning sub-network, a first classification sub-network, and a second classification sub-network in parallel;
the training process of the section identification model comprises:
acquiring sample ultrasonic images and corresponding annotation information, the annotation information comprising: position information and category information of each section target annotated in the sample ultrasonic images;
inputting the sample ultrasonic images into a section identification model to be trained to obtain recognition information corresponding to the sample ultrasonic images, the recognition information comprising: position information and category information of each section target identified from the sample ultrasonic images;
and adjusting the parameters of each network in the section identification model to be trained based on the recognition information and the annotation information until a model training end condition is met, to obtain the trained section identification model.
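The training process of claim 6 reduces to a standard supervised loop: compare recognition information with annotation information and adjust parameters until an end condition is met. A minimal sketch follows; the stand-in model, the cross-entropy loss, the batch shapes, and the fixed step count are all assumptions, not the patented training recipe.

```python
# Minimal sketch of the training loop in claim 6: compare recognition info with
# annotation info and adjust all network parameters.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(256 * 256, 5))   # stand-in model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for step in range(100):                        # stand-in for the end condition
    samples = torch.rand(8, 1, 256, 256)       # sample ultrasonic images
    labels = torch.randint(0, 5, (8,))         # annotated category info
    logits = model(samples)                    # recognition info
    loss = criterion(logits, labels)           # recognition vs. annotation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```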
7. The method of claim 6, wherein acquiring sample ultrasonic images comprises:
acquiring initial sample ultrasonic images;
selecting a preset proportion of sample ultrasonic images from the initial sample ultrasonic images for image enhancement, to obtain corresponding enhanced sample ultrasonic images;
and determining the sample ultrasonic images from the initial sample ultrasonic images and the enhanced sample ultrasonic images.
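Claim 7 enhances only a preset proportion of the initial samples and pools the originals with the enhanced copies. A minimal sketch; the 30% proportion and the flip/noise transforms are illustrative assumptions only.

```python
# Sketch of claim 7: enhance a preset proportion of the initial samples and
# pool the originals with the enhanced copies.
import random
import torch

def build_sample_set(initial_images, proportion=0.3):
    k = int(len(initial_images) * proportion)      # preset proportion
    enhanced = []
    for i in random.sample(range(len(initial_images)), k):
        img = torch.flip(initial_images[i], dims=[-1])   # horizontal flip
        img = img + 0.01 * torch.randn_like(img)         # mild additive noise
        enhanced.append(img)
    return initial_images + enhanced               # originals + enhanced copies

samples = build_sample_set([torch.rand(1, 256, 256) for _ in range(10)])
print(len(samples))                                # 13
```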
8. An ultrasonic section identification system, the system comprising:
the acquisition module is used for acquiring an ultrasonic image to be identified;
the feature extraction module is used for performing feature extraction on the ultrasonic image to be identified to obtain corresponding image features;
the foreground identification module is used for identifying all foreground targets in the ultrasonic image to be identified according to the image features, to obtain positioning information of each foreground target contained in the ultrasonic image to be identified;
and the section identification module is used for identifying all section targets in the ultrasonic image to be identified according to the positioning information and the image features, to obtain first category information and position information of each section target contained in the ultrasonic image to be identified.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010725339.3A 2020-07-24 2020-07-24 Ultrasonic tangent plane identification method, system, computer equipment and storage medium Pending CN112102230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010725339.3A CN112102230A (en) 2020-07-24 2020-07-24 Ultrasonic tangent plane identification method, system, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010725339.3A CN112102230A (en) 2020-07-24 2020-07-24 Ultrasonic tangent plane identification method, system, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112102230A true CN112102230A (en) 2020-12-18

Family

ID=73749476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010725339.3A Pending CN112102230A (en) 2020-07-24 2020-07-24 Ultrasonic tangent plane identification method, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112102230A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017193251A1 (en) * 2016-05-09 2017-11-16 深圳迈瑞生物医疗电子股份有限公司 Method and system for recognizing region of interest profile in ultrasound image
CN108701354A (en) * 2016-05-09 2018-10-23 深圳迈瑞生物医疗电子股份有限公司 Identify the method and system of area-of-interest profile in ultrasonoscopy
CN109063740A (en) * 2018-07-05 2018-12-21 高镜尧 The detection model of ultrasonic image common-denominator target constructs and detection method, device
WO2020038240A1 (en) * 2018-08-23 2020-02-27 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer-readable storage medium and computer device
CN110555836A (en) * 2019-09-05 2019-12-10 李肯立 Automatic identification method and system for standard fetal section in ultrasonic image
CN110464380A (en) * 2019-09-12 2019-11-19 李肯立 A kind of method that the ultrasound cross-section image of the late pregnancy period fetus of centering carries out quality control
CN111223092A (en) * 2020-02-28 2020-06-02 长沙大端信息科技有限公司 Automatic quality control system and detection method for ultrasonic sectional images of fetus
CN111326256A (en) * 2020-02-28 2020-06-23 李胜利 Intelligent identification self-training learning system and examination method for fetus ultrasonic standard section image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Jian; ZHAO Haitong; YANG Yuzhi; XU Zhiyang; RU Tong: "Automatic identification of second-trimester fetal ultrasound screening sections based on deep learning", China Medical Devices, no. 05, 10 May 2020 (2020-05-10) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614123A (en) * 2020-12-29 2021-04-06 深圳开立生物医疗科技股份有限公司 Ultrasonic image identification method and related device
CN113393456A (en) * 2021-07-13 2021-09-14 湖南大学 Automatic quality control method of early pregnancy fetus standard section based on multiple tasks
CN113658145A (en) * 2021-08-20 2021-11-16 合肥合滨智能机器人有限公司 Liver ultrasonic standard tangent plane identification method and device, electronic equipment and storage medium
CN115082487A (en) * 2022-08-23 2022-09-20 深圳华声医疗技术股份有限公司 Ultrasonic image section quality evaluation method and device, ultrasonic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112102230A (en) Ultrasonic tangent plane identification method, system, computer equipment and storage medium
CN107895367B (en) Bone age identification method and system and electronic equipment
CN110689038A (en) Training method and device of neural network model and medical image processing system
Omonigho et al. Breast cancer: tumor detection in mammogram images using modified AlexNet deep convolution neural network
CN108062749B (en) Identification method and device for levator ani fissure hole and electronic equipment
CN109584209B (en) Vascular wall plaque recognition apparatus, system, method, and storage medium
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN110276411A (en) Image classification method, device, equipment, storage medium and medical treatment electronic equipment
CN113728335A (en) Method and system for classification and visualization of 3D images
CN110555836A (en) Automatic identification method and system for standard fetal section in ultrasonic image
CN111862044A (en) Ultrasonic image processing method and device, computer equipment and storage medium
US20220335600A1 (en) Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN109146891B (en) Hippocampus segmentation method and device applied to MRI and electronic equipment
CN108664986B (en) Based on lpNorm regularized multi-task learning image classification method and system
WO2020168648A1 (en) Image segmentation method and device, and computer-readable storage medium
CN109523546A (en) A kind of method and device of Lung neoplasm analysis
CN111340775A (en) Parallel method and device for acquiring ultrasonic standard tangent plane and computer equipment
CN110738702B (en) Three-dimensional ultrasonic image processing method, device, equipment and storage medium
CN109671072A (en) Cervical cancer tissues pathological image diagnostic method based on spotted arrays condition random field
CN113096080B (en) Image analysis method and system
CN112036298A (en) Cell detection method based on double-segment block convolutional neural network
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
KR20210079132A (en) Classification method of prostate cancer using support vector machine
CN112614570B (en) Sample set labeling method, pathological image classification method, classification model construction method and device
JP7404535B2 (en) Conduit characteristic acquisition method based on computer vision, intelligent microscope, conduit tissue characteristic acquisition device, computer program, and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination