CN110598675B - Ultrasonic fetal posture identification method, storage medium and electronic equipment - Google Patents

Ultrasonic fetal posture identification method, storage medium and electronic equipment

Info

Publication number
CN110598675B
CN110598675B (application CN201910907068.0A / CN201910907068A)
Authority
CN
China
Prior art keywords
fetal
posture
volume data
ultrasonic
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910907068.0A
Other languages
Chinese (zh)
Other versions
CN110598675A (en)
Inventor
杨鑫
高睿
史文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Duying Medical Technology Co ltd
Original Assignee
Shenzhen Duying Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Duying Medical Technology Co ltd filed Critical Shenzhen Duying Medical Technology Co ltd
Priority to CN201910907068.0A priority Critical patent/CN110598675B/en
Publication of CN110598675A publication Critical patent/CN110598675A/en
Application granted granted Critical
Publication of CN110598675B publication Critical patent/CN110598675B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • G06T2207/101363D ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an ultrasonic fetal posture identification method, a storage medium and electronic equipment. The identification method comprises: acquiring ultrasound volume data of a fetus to be identified; inputting the ultrasound volume data to be identified into a trained posture recognition model, wherein the posture recognition model is trained on a training sample set, the training sample set comprises a plurality of groups of training samples, and each group of training samples comprises ultrasound volume data and fetal posture information; and recognizing, through the posture recognition model, the posture information corresponding to the ultrasound volume data to be identified, and determining the fetal posture according to the posture information. The trained posture recognition model extracts fetal-posture-related information from the ultrasound volume data, and the fetal posture is estimated from that information, so the fetal posture can be determined automatically, accurately, robustly and quickly from the input ultrasound volume data, helping doctors analyse prenatal fetal ultrasound images more efficiently and accurately.

Description

Ultrasonic fetal posture identification method, storage medium and electronic equipment
Technical Field
The present invention relates to the field of ultrasound technologies, and in particular, to a method for identifying an ultrasound fetal posture, a storage medium, and an electronic device.
Background
The fetal posture in prenatal ultrasound refers to the geometric configuration of the fetal body in a prenatal fetal ultrasound image. Accurate estimation of the fetal posture can support many clinical diagnosis tasks, such as parameter measurement, standard-section detection, motor function examination and developmental monitoring. However, because three-dimensional ultrasound images are difficult to read, postures vary widely between fetuses, and three-dimensional data is high-dimensional, it is difficult for a doctor to determine the fetal posture in a prenatal three-dimensional ultrasound image with existing tools.
Thus, the prior art has yet to be improved and enhanced.
Disclosure of Invention
The invention provides an ultrasonic fetal posture identification method, a storage medium and an electronic device, aiming at the defects of the prior art.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a method of ultrasonic fetal gesture recognition, comprising:
acquiring ultrasound volume data of a fetus to be identified;
inputting the ultrasonic volume data to be recognized into a trained gesture recognition model, wherein the gesture recognition model is obtained by training based on a training sample set, the training sample set comprises a plurality of groups of training samples, and each group of training samples comprises ultrasonic volume data and fetal gesture information;
and recognizing the posture information corresponding to the ultrasonic volume data to be recognized through the posture recognition model, and determining the fetal posture according to the posture information.
In the method for identifying the ultrasonic fetal posture, the training process of the posture recognition model specifically comprises:
the preset model generating, according to the ultrasound volume data in the training sample set, generated posture information corresponding to the ultrasound volume data;
the preset model correcting its model parameters according to the generated posture information and the fetal posture information corresponding to the ultrasound volume data, and continuing to execute the step of generating the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, until the training of the preset model satisfies a preset condition, so as to obtain the trained posture recognition model.
In the method for identifying the ultrasonic fetal posture, the fetal posture information in a training sample comprises at least one thermodynamic diagram (heatmap) generated from key points representing the fetal posture, each thermodynamic diagram corresponding to the position information of one key point of the fetal posture.
In the method for identifying the ultrasonic fetal posture, the fetal posture information in a training sample comprises a plurality of segmentation maps, each segmentation map corresponds to one fetal part, and the fetal parts corresponding to different segmentation maps are different from each other.
In the method for identifying the ultrasonic fetal posture, before the preset model generates the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, the method comprises:
performing data enhancement processing on the training sample set, and taking the training sample set after the data enhancement processing as the training sample set.
In the method for identifying the ultrasonic fetal posture, the posture recognition model is trained on a training sample set whose labels are thermodynamic diagrams carrying key points; the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified, wherein the posture information comprises a plurality of thermodynamic diagrams;
acquiring the key point corresponding to each thermodynamic diagram, and determining the fetal posture according to all the acquired key points.
In the method for identifying the ultrasonic fetal posture, the posture recognition model is trained on a training sample set whose labels are segmentation maps of fetal parts; the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified, wherein the posture information comprises a plurality of segmentation maps;
acquiring the position information of the fetal part corresponding to each segmentation map, and determining the fetal posture according to all the acquired position information.
In the method for identifying the ultrasonic fetal posture, the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified;
post-processing the posture information, and determining the fetal posture according to the post-processed posture information.
A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in any of the above methods for identifying an ultrasonic fetal posture.
An electronic device, comprising: a processor, a memory and a communication bus; the memory stores a computer-readable program executable by the processor;
the communication bus realizes connection and communication between the processor and the memory;
when executing the computer-readable program, the processor implements the steps in any of the above methods for identifying an ultrasonic fetal posture.
Beneficial effects: compared with the prior art, the invention provides an ultrasonic fetal posture identification method, a storage medium and electronic equipment. The identification method comprises: acquiring ultrasound volume data of a fetus to be identified; inputting the ultrasound volume data to be identified into a trained posture recognition model, wherein the posture recognition model is trained on a training sample set, the training sample set comprises a plurality of groups of training samples, and each group of training samples comprises ultrasound volume data and fetal posture information; and recognizing, through the posture recognition model, the posture information corresponding to the ultrasound volume data to be identified, and determining the fetal posture according to the posture information. The trained posture recognition model extracts fetal-posture-related information from the ultrasound volume data, and the fetal posture is estimated from that information, so the fetal posture can be determined automatically, accurately, robustly and quickly from the input ultrasound volume data, helping a doctor identify the fetal posture efficiently and accurately and use the fetal posture information in other clinical tasks.
Drawings
Fig. 1 is a flowchart of an ultrasonic fetal posture identification method provided by the present invention.
Fig. 2 is a schematic diagram of key points of the fetal posture provided by the present invention.
Fig. 3 is another schematic diagram of key points of the fetal posture provided by the present invention.
Fig. 4 is a schematic diagram of a view angle of the median sagittal plane of the fetus identified according to the fetal posture information provided by the present invention.
Fig. 5 is a schematic diagram of another view of the mid-sagittal plane of the fetus identified according to the fetal posture information provided by the present invention.
Fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
The invention provides an ultrasonic fetal posture identification method, a storage medium and an electronic device. In order to make the purpose, technical solutions and effects of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the invention and do not limit it.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention will be further explained by the description of the embodiments with reference to the drawings.
The present embodiment provides a method for identifying an ultrasonic fetal posture; as shown in fig. 1, the method comprises:
and S10, acquiring ultrasonic volume data to be identified of the fetus.
Specifically, the ultrasound volume data may be obtained by scanning with a probe array, or may be sent by an external device, where the ultrasound volume data includes ultrasound scan data and a spatial position parameter corresponding to the ultrasound scan data. Furthermore, after acquiring the ultrasound volume data, ultrasound volume imaging may be performed according to the ultrasound volume data to obtain an ultrasound image.
S20, inputting the ultrasound volume data to be identified into a trained posture recognition model, wherein the posture recognition model is trained on a training sample set, the training sample set comprises a plurality of groups of training samples, and each group of training samples comprises ultrasound volume data and fetal posture information.
Specifically, the posture recognition model is trained in advance and is used for recognizing, from ultrasound volume data, the fetal posture information corresponding to those data; it is obtained by training on a preset training sample set. The input of the posture recognition model is ultrasound volume data and its output is fetal posture information; that is, after ultrasound volume data are input into the posture recognition model, the model outputs the fetal posture information corresponding to those data.
Further, in an implementation of this embodiment, after the ultrasound volume data to be identified are acquired, they may be preprocessed before being input into the trained posture recognition model, so as to improve the accuracy with which the posture recognition model recognizes them. The preprocessing may be normalization, i.e. normalization of the ultrasound volume data to be identified. In this embodiment the ultrasound volume data to be identified may be an ultrasound volume image, so normalizing the data amounts to normalizing the ultrasound volume image. Of course, in practical applications the preprocessing may also include other operations, such as denoising.
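The patent does not fix a particular normalization formula; the sketch below illustrates one common choice (intensity rescaling to [0, 1] followed by zero-mean/unit-variance standardization) for a 3D ultrasound volume stored as a NumPy array. The function name and the epsilon guard are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def preprocess_volume(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize a 3D ultrasound volume before feeding it to the model.

    Only one plausible preprocessing scheme; the patent merely states that
    normalization (and optionally denoising) is applied.
    """
    vol = volume.astype(np.float32)
    # Rescale intensities to [0, 1].
    vol = (vol - vol.min()) / (vol.max() - vol.min() + eps)
    # Standardize to zero mean and unit variance.
    return (vol - vol.mean()) / (vol.std() + eps)
```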
Further, in an implementation of this embodiment, the training process of the posture recognition model specifically comprises:
M10, the preset model generating, according to the ultrasound volume data in the training sample set, generated posture information corresponding to the ultrasound volume data;
M20, the preset model correcting its model parameters according to the generated posture information and the fetal posture information corresponding to the ultrasound volume data, and continuing to execute the step of generating the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, until the training of the preset model satisfies a preset condition, so as to obtain the trained posture recognition model.
Specifically, the training sample set comprises a plurality of groups of training samples, and each group comprises ultrasound volume data and fetal posture information, where the fetal posture information describes the posture of the fetus in that ultrasound volume data. Meanwhile, for each group of training samples in the set, its ultrasound volume data differ from the ultrasound volume data of every other group. In addition, the fetal posture information in every group of training samples is of the same type; for example, the fetal posture information in each group is a set of thermodynamic diagrams (heatmaps) carrying key points, and the thermodynamic diagrams in each group correspond to the ultrasound volume data of that group; as another example, the fetal posture information in each group is a set of segmentation maps of fetal parts, and the segmentation maps in each group correspond to the ultrasound volume data of that group.
Further, in an implementation of this embodiment, the fetal posture information in a training sample comprises at least one thermodynamic diagram generated from key points representing the fetal posture, each thermodynamic diagram corresponding to the position information of one key point. The key points can be expressed as coordinates, that is, the fetal posture information is the coordinate information of several key points from which the fetal posture can be determined, and this coordinate information may be labelled manually by experts. For example, if the fetal posture information comprises the coordinate information of n key points, the ultrasound volume data carry n labels and correspond to n thermodynamic diagrams, each thermodynamic diagram corresponding to one key point. The thermodynamic diagram for each key point has the same size as the ultrasound volume data and is generated by a multivariate Gaussian distribution centred on that key point. It is worth noting that when the fetal posture information consists of thermodynamic diagrams, the fetal posture information fed to the preset model during training is the thermodynamic diagrams corresponding to the key points. Meanwhile, the number and types of key points contained in the fetal posture information are the same for every group of training samples; only the thermodynamic diagrams corresponding to each key point may differ. For example, the fetal posture information in each training sample includes the left and right shoulder joints, left and right elbow joints, left and right wrist joints, left and right knee joints, left and right hip joints, left and right ankle joints, the head, the neck, and so on.
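The paragraph above states that each key point is converted into a thermodynamic diagram (heatmap) of the same size as the ultrasound volume, generated by a Gaussian distribution centred on the key point. The following is a minimal NumPy sketch of that construction; the isotropic sigma and the toy volume size are illustrative assumptions.

```python
import numpy as np

def keypoint_to_heatmap(shape, keypoint, sigma=3.0):
    """Build a 3D Gaussian heatmap, the same size as the volume,
    centred on a key point given as (z, y, x) voxel coordinates.
    The isotropic sigma is an illustrative assumption."""
    zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
    dist2 = (zz - keypoint[0]) ** 2 + (yy - keypoint[1]) ** 2 + (xx - keypoint[2]) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2)).astype(np.float32)

# One heatmap per annotated key point: n labelled key points -> n heatmaps.
heatmaps = [keypoint_to_heatmap((64, 64, 64), kp) for kp in [(10, 20, 30), (40, 32, 16)]]
```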
Further, in an implementation of this embodiment, the fetal posture information in a training sample comprises a plurality of segmentation maps; each segmentation map corresponds to one fetal part, and the fetal parts corresponding to different segmentation maps are different from each other. The segmentation maps may be obtained by segmenting each body part of the fetus, where the fetal parts may include the head, body, legs, arms, hands and so on, and the segmentation of each body part may be labelled manually by experts. For example, if n parts of the ultrasound volume data are segmented, the fetal posture information comprises n+1 segmentation maps: one for each segmented part and one for the background of the ultrasound volume data. It is worth noting that when the fetal posture information consists of segmentation maps, the fetal posture information fed to the preset model during training is those segmentation maps. Meanwhile, the number of segmentation maps contained in the fetal posture information is the same for every group of training samples, and the segmentation maps of each group cover the same fetal body parts. For example, the fetal posture information in each training sample includes the left and right hands, left and right feet, left and right legs, left and right arms, head, neck, and so on.
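If the expert annotation is stored as a single label volume in which each voxel holds an integer part identifier (0 for background), the n+1 segmentation maps described above can be derived as binary masks, one per part plus one for the background. This label-volume encoding is an assumption for illustration; the patent only states that one segmentation map exists per part plus one for the background.

```python
import numpy as np

def label_volume_to_segmentation_maps(labels: np.ndarray, num_parts: int):
    """Split an integer label volume (0 = background, 1..num_parts = fetal parts)
    into num_parts + 1 binary segmentation maps."""
    return [(labels == part_id).astype(np.float32) for part_id in range(num_parts + 1)]

# Example: 5 annotated parts (head, body, legs, arms, hands) -> 6 maps including background.
maps = label_volume_to_segmentation_maps(np.zeros((64, 64, 64), dtype=np.int32), num_parts=5)
```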
Further, in an implementation of this embodiment, the fetal posture information in a training sample comprises the three-dimensional space coordinates of several key points; that is, the fetal posture information is the three-dimensional coordinate information of several key points from which the fetal posture can be determined, and this coordinate information may be labelled manually by experts. For example, if the fetal posture information comprises the three-dimensional coordinate information of n key points, the ultrasound volume data carry n labels and, as shown in fig. 2 and fig. 3, correspond to n key points. It is worth noting that when the fetal posture information consists of the coordinates of several key points, the fetal posture information fed to the preset model during training is the coordinates of each key point. Meanwhile, the number and types of key points contained in the fetal posture information are the same for every group of training samples; only the coordinates of each key point may differ. For example, the fetal posture information in each training sample includes the left and right shoulder joints, left and right elbow joints, left and right wrist joints, left and right knee joints, left and right hip joints, left and right ankle joints, the head, the neck, and so on.
Further, in an implementation of this embodiment, the fetal posture information in a training sample comprises a plurality of three-dimensional target bounding boxes; each bounding box corresponds to one fetal part, and the parts corresponding to different bounding boxes are different from each other. The three-dimensional bounding boxes can be obtained from the positions of the fetal body parts, where the fetal parts may include the head, body, legs, arms, hands and so on, and the bounding box of each body part may be labelled manually by experts. For example, if n body parts are detected in the ultrasound volume data, the fetal posture information comprises n bounding boxes. It is worth noting that when the fetal posture information consists of target bounding boxes, the fetal posture information fed to the preset model during training is those bounding boxes. Meanwhile, the number of bounding boxes contained in the fetal posture information is the same for every group of training samples, and the bounding boxes of each group cover the same fetal body parts.
Further, the preset model may be a deep learning model; the deep learning model is a discriminative model and may be trained with the back-propagation algorithm on the training sample set. In addition, when the model parameters of the preset model are corrected according to the generated posture information and the fetal posture information corresponding to the ultrasound volume data, stochastic gradient descent may be used. In an implementation of this embodiment, the deep learning model is a three-dimensional fully convolutional neural network that uses skip connections and residual modules; the skip connections and residual modules speed up training and allow the network to be made deeper, improving its expressive capacity.
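The patent names a three-dimensional fully convolutional network with skip connections and residual modules but does not give the exact architecture. The toy-scale sketch below shows only those two named ingredients, a 3D residual block and one encoder-decoder skip connection; the channel widths, the depth, the output-channel count and the use of concatenation for the skip connection are assumptions.

```python
import torch
from torch import nn

class ResidualBlock3D(nn.Module):
    """3D residual module: two convolutions plus an identity shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

class TinyFCN3D(nn.Module):
    """Toy 3D fully convolutional network with one skip connection."""
    def __init__(self, in_ch=1, out_ch=16):  # out_ch = number of heatmaps / segmentation maps (assumed)
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, 16, 3, padding=1), ResidualBlock3D(16))
        self.down = nn.Sequential(nn.MaxPool3d(2), ResidualBlock3D(16))
        self.up = nn.Upsample(scale_factor=2, mode='trilinear', align_corners=False)
        self.head = nn.Conv3d(32, out_ch, 1)  # 32 = concatenated skip + decoder channels

    def forward(self, x):
        skip = self.enc(x)                                  # encoder features kept for the skip path
        deep = self.up(self.down(skip))                     # downsample, process, upsample back
        return self.head(torch.cat([skip, deep], dim=1))    # skip connection by concatenation
```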
Further, in an implementation of this embodiment, before the preset model generates the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, the training sample set may be preprocessed so as to improve the recognition accuracy of the posture recognition model. The preprocessing may be normalization, i.e. normalization of the ultrasound volume data in the training samples; since the ultrasound volume data may be ultrasound volume images, this amounts to normalizing the ultrasound volume images. Of course, in practical applications the preprocessing may also include other operations, such as denoising, graying and the like.
Further, in an implementation of this embodiment, before the preset model generates the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, data enhancement processing may be performed on the training sample set. Correspondingly, before the preset model generates the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, the method comprises:
performing data enhancement processing on the training sample set, and taking the training sample set after the data enhancement processing as the training sample set.
Specifically, data enhancement processing means applying transformations such as rotation, scaling and mirroring to the training samples in the training sample set. That is to say, each group of training samples is randomly rotated, scaled or mirrored to obtain processed training samples, and the processed samples are added to the training sample set. This increases the diversity of the training samples, weakens overfitting and improves the recognition accuracy of the posture recognition model, in particular alleviating the model's left-right confusion when recognizing arms and legs. It should be noted that when the training process includes a preprocessing step, the preprocessing step may be performed before the data enhancement step: the training samples are first preprocessed, and the preprocessed training sample set is then subjected to data enhancement.
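A minimal sketch of the three augmentations named above (rotation, scaling, mirroring) applied to a volume, assuming scipy.ndimage is available. The angle and scale ranges are assumptions; in a real pipeline the same transform must also be applied to the heatmaps or segmentation maps, and mirroring must swap the left/right part labels, which is omitted here for brevity.

```python
import numpy as np
from scipy import ndimage

def augment_volume(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly rotate, scale or mirror a 3D volume (labels must be transformed identically)."""
    choice = rng.integers(3)
    if choice == 0:                                    # rotation about a random pair of axes
        angle = rng.uniform(-15, 15)                   # assumed angle range
        axes = tuple(rng.choice(3, size=2, replace=False))
        return ndimage.rotate(volume, angle, axes=axes, reshape=False, order=1)
    if choice == 1:                                    # isotropic scaling
        factor = rng.uniform(0.9, 1.1)                 # assumed scale range
        return ndimage.zoom(volume, factor, order=1)
    flip_axis = int(rng.integers(3))                   # mirroring; remember to swap left/right labels
    return np.flip(volume, axis=flip_axis).copy()
```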
S30, recognizing, through the posture recognition model, the posture information corresponding to the ultrasound volume data to be identified, and determining the fetal posture according to the posture information.
Specifically, the posture information is the posture information recognized by the posture recognition model for the ultrasound volume data to be identified, and its form matches the fetal posture information of the training samples used to train the model: when the fetal posture information in the training samples consists of thermodynamic diagrams of several key points, the recognized posture information is a set of thermodynamic diagrams of key points; when the fetal posture information in the training samples consists of several segmentation maps, the recognized posture information is a set of segmentation maps.
Further, in an implementation of this embodiment, when the posture recognition model is trained on a training sample set whose labels are thermodynamic diagrams carrying key points, the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
S31, recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified, wherein the posture information comprises a plurality of thermodynamic diagrams;
S32, acquiring the key point corresponding to each thermodynamic diagram, and determining the fetal posture according to all the acquired key points.
Specifically, each of the thermodynamic diagrams carries one key point, and the coordinate information of that key point can be determined from the diagram (the point of maximum value of a thermodynamic diagram is its key point), so the coordinate information of the several key points can be obtained from the several thermodynamic diagrams and the fetal posture determined from that coordinate information, as shown for example in fig. 4 and fig. 5. When acquiring the coordinate information of each key point, the thermodynamic diagrams are placed in the same three-dimensional coordinate system and the coordinates of the key point carried by each diagram are acquired. Of course, it should be noted that once the coordinate information of all key points has been obtained, an existing method may be used to determine the fetal posture from it, which is not repeated here; such methods of determining the fetal posture from the coordinate information of all key points fall within the scope of protection of the present application.
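Under the reading above, the key point of each predicted thermodynamic diagram is the voxel of maximum response; a minimal sketch of that extraction:

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps):
    """Return one (z, y, x) coordinate per heatmap: the voxel where the heatmap peaks."""
    return [np.unravel_index(np.argmax(h), h.shape) for h in heatmaps]

# The resulting list of key-point coordinates (in the shared volume coordinate system)
# is then passed to the downstream fetal-posture determination step.
```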
Further, in an implementation of this embodiment, when the posture recognition model is trained on a training sample set whose labels are segmentation maps of fetal parts, the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
S31a, recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified, wherein the posture information comprises a plurality of segmentation maps;
S32a, acquiring the position information of the fetal part corresponding to each segmentation map, and determining the fetal posture according to all the acquired position information.
Specifically, each of the segmentation maps carries one fetal body part, and after the segmentation maps have been acquired, the fetal posture can be determined from the position information of the fetal part corresponding to each map. After the segmentation maps are acquired, they can be placed in the same three-dimensional coordinate system and the position information of the fetal body part carried by each map obtained. Of course, it should be noted that once the position information of all fetal body parts has been obtained, an existing method may be used to determine the fetal posture from it, which is not repeated here; such methods of determining the fetal posture from the position information of all fetal body parts fall within the scope of protection of the present application.
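One plausible way to turn each predicted segmentation map into the position information used above is to take the centroid (centre of mass) of the segmented region in the shared coordinate system; this particular choice is an assumption, since the patent does not specify how a position is summarised from a map.

```python
import numpy as np
from scipy import ndimage

def segmentation_maps_to_positions(seg_maps, threshold=0.5):
    """Compute one (z, y, x) centroid per fetal-part segmentation map (background map excluded)."""
    positions = []
    for prob_map in seg_maps:
        mask = prob_map > threshold
        if mask.any():
            positions.append(ndimage.center_of_mass(mask.astype(np.float32)))
        else:
            positions.append(None)   # part not detected in this volume
    return positions
```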
Further, in an implementation of this embodiment, in order to improve the accuracy of fetal posture recognition, the posture information may be post-processed after it has been acquired, and the fetal posture is then determined from the post-processed posture information. Correspondingly, the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
S31b, recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified;
S32b, post-processing the posture information, and determining the fetal posture according to the post-processed posture information.
Specifically, the post-processing includes correcting the left and right arms and/or the left and right legs of the fetal posture; post-processing the posture information removes noisy and biased posture information and thereby improves the accuracy of the fetal posture recognized from it.
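The patent does not detail how the left/right correction is carried out; one simple heuristic, sketched below purely as an illustration and not as the patent's prescribed method, is to check whether a candidate left/right key-point pair sits closer to its own side's reference joints (for example, an elbow relative to the shoulders) and to swap the pair otherwise.

```python
import numpy as np

def fix_left_right_pair(left_pt, right_pt, left_ref, right_ref):
    """Illustrative left/right correction heuristic (not the patent's prescribed method):
    a candidate 'left' key point (e.g. left elbow) is expected to lie closer to its left-side
    reference joint (e.g. left shoulder) than to the right-side one; otherwise the pair is
    assumed to be mislabelled and is swapped."""
    left_pt, right_pt = np.asarray(left_pt, float), np.asarray(right_pt, float)
    left_ref, right_ref = np.asarray(left_ref, float), np.asarray(right_ref, float)
    keep = np.linalg.norm(left_pt - left_ref) + np.linalg.norm(right_pt - right_ref)
    swap = np.linalg.norm(left_pt - right_ref) + np.linalg.norm(right_pt - left_ref)
    return (right_pt, left_pt) if swap < keep else (left_pt, right_pt)

# Usage (hypothetical key points): correct a possibly swapped elbow pair
# using the shoulder key points as left/right references.
# left_elbow, right_elbow = fix_left_right_pair(left_elbow, right_elbow,
#                                               left_shoulder, right_shoulder)
```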
Based on the above recognition method of the ultrasonic fetal posture, the present embodiment provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the recognition method of the ultrasonic fetal posture according to the above embodiment.
Based on the above method for identifying the ultrasonic fetal posture, the present invention further provides an electronic device which, as shown in fig. 6, includes at least one processor 20, a display screen 21 and a memory 22, and may further include a communication interface 23 and a bus 24. The processor 20, the display screen 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in the form of software functional units and, when sold or used as independent products, stored in a computer-readable storage medium.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional applications and data processing, i.e. implements the methods in the above embodiments, by running software programs, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 22 may include high-speed random access memory and may also include non-volatile memory, for example any of a variety of media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk; it may also be a transient storage medium.
In addition, the specific processes loaded and executed by the storage medium and by the instruction processors in the electronic device are described in detail in the method above and are not repeated here.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A method for identifying an ultrasonic fetal posture, characterized by comprising the following steps:
acquiring ultrasound volume data of a fetus to be identified;
inputting the ultrasound volume data to be identified into a trained posture recognition model, wherein the posture recognition model is trained on a training sample set, the training sample set comprises a plurality of groups of training samples, and each group of training samples comprises ultrasound volume data and fetal posture information;
recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified, and determining the fetal posture according to the posture information;
wherein the posture recognition model is trained on a training sample set whose labels are segmentation maps of fetal parts, the fetal posture information in a training sample comprises a plurality of segmentation maps, each segmentation map corresponds to one fetal part, and the fetal parts corresponding to different segmentation maps are different from each other; the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified, wherein the posture information comprises a plurality of segmentation maps;
acquiring the position information of the fetal part corresponding to each segmentation map, and determining the fetal posture according to all the acquired position information;
wherein, after the plurality of segmentation maps are acquired, all the segmentation maps are placed in the same three-dimensional coordinate system, and the position information of the fetal body part carried by each segmentation map is acquired respectively.
2. The method for identifying the ultrasonic fetal posture as claimed in claim 1, wherein the training process of the posture recognition model specifically comprises:
the preset model generating, according to the ultrasound volume data in the training sample set, generated posture information corresponding to the ultrasound volume data;
the preset model correcting its model parameters according to the generated posture information and the fetal posture information corresponding to the ultrasound volume data, and continuing to execute the step of generating the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, until the training of the preset model satisfies a preset condition, so as to obtain the trained posture recognition model.
3. The method for identifying the ultrasonic fetal posture as claimed in claim 2, wherein before the preset model generates the generated posture information corresponding to the ultrasound volume data according to the ultrasound volume data in the training sample set, the method comprises:
performing data enhancement processing on the training sample set, and taking the training sample set after the data enhancement processing as the training sample set.
4. The method for identifying the ultrasonic fetal posture as claimed in claim 1, wherein the recognizing, through the posture recognition model, of the posture information corresponding to the ultrasound volume data to be identified and the determining of the fetal posture according to the posture information specifically comprise:
recognizing, through the posture recognition model, posture information corresponding to the ultrasound volume data to be identified;
post-processing the posture information, and determining the fetal posture according to the post-processed posture information, wherein the post-processing comprises correcting the left and right arms and/or the left and right legs of the fetal posture.
5. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the method for identifying an ultrasonic fetal posture as claimed in any one of claims 1 to 4.
6. An electronic device, comprising: a processor, a memory and a communication bus; the memory stores a computer-readable program executable by the processor;
the communication bus realizes connection and communication between the processor and the memory;
when executing the computer-readable program, the processor implements the steps in the method for identifying an ultrasonic fetal posture as claimed in any one of claims 1 to 4.
CN201910907068.0A 2019-09-24 2019-09-24 Ultrasonic fetal posture identification method, storage medium and electronic equipment Active CN110598675B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910907068.0A CN110598675B (en) 2019-09-24 2019-09-24 Ultrasonic fetal posture identification method, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910907068.0A CN110598675B (en) 2019-09-24 2019-09-24 Ultrasonic fetal posture identification method, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110598675A CN110598675A (en) 2019-12-20
CN110598675B true CN110598675B (en) 2022-10-11

Family

ID=68863007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910907068.0A Active CN110598675B (en) 2019-09-24 2019-09-24 Ultrasonic fetal posture identification method, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110598675B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489129A (en) * 2020-12-18 2021-03-12 深圳市优必选科技股份有限公司 Pose recognition model training method and device, pose recognition method and terminal equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766874A (en) * 2017-09-07 2018-03-06 沈燕红 A kind of measuring method and measuring system of ultrasound volume biological parameter
CN108717531A (en) * 2018-05-21 2018-10-30 西安电子科技大学 Estimation method of human posture based on Faster R-CNN
CN109685023A (en) * 2018-12-27 2019-04-26 深圳开立生物医疗科技股份有限公司 A kind of facial critical point detection method and relevant apparatus of ultrasound image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170273663A1 (en) * 2016-03-24 2017-09-28 Elwha Llc Image processing for an ultrasonic fetal imaging device
CN109069119B (en) * 2016-04-26 2021-10-22 皇家飞利浦有限公司 3D image synthesis for ultrasound fetal imaging
CN107424145A (en) * 2017-06-08 2017-12-01 广州中国科学院软件应用技术研究所 The dividing method of nuclear magnetic resonance image based on three-dimensional full convolutional neural networks
CN109918975B (en) * 2017-12-13 2022-10-21 腾讯科技(深圳)有限公司 Augmented reality processing method, object identification method and terminal
CN108154104B (en) * 2017-12-21 2021-10-15 北京工业大学 Human body posture estimation method based on depth image super-pixel combined features
CN109063301B (en) * 2018-07-24 2023-06-16 杭州师范大学 Single image indoor object attitude estimation method based on thermodynamic diagram
CN109671086A (en) * 2018-12-19 2019-04-23 深圳大学 A kind of fetus head full-automatic partition method based on three-D ultrasonic
CN109727240B (en) * 2018-12-27 2021-01-19 深圳开立生物医疗科技股份有限公司 Method and related device for stripping shielding tissues of three-dimensional ultrasonic image


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FetusMap: Fetal Pose Estimation in 3D Ultrasound; Xin Yang et al.; MICCAI 2019: Medical Image Computing and Computer Assisted Intervention; 2019-10-10; 281-289 *
Real-Time Deep Pose Estimation With Geodesic Loss for Image-to-Template Rigid Registration; Seyed Sadegh Mohseni Salehi et al.; IEEE Transactions on Medical Imaging; 2018-08-21; vol. 38, no. 2; 470-481 *
Weakly Supervised Localisation for Fetal Ultrasound Images; Nicolas Toussaint et al.; Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; 2018-09-20; 192-200 *
Human pose estimation based on a tree-structured graphical model (基于树形图结构模型的人体姿态估计); 韩贵金; 西安邮电大学学报; 2013-03-31; vol. 18, no. 3; 83-86 *
Research on human joint point localization based on depth images (基于深度图像的人体关节点定位的方法研究); 吕洁; 中国优秀博硕士学位论文全文数据库(硕士)信息科技辑; 2014-07-15; no. 7, 2014; I138-777 *

Also Published As

Publication number Publication date
CN110598675A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
US20200210702A1 (en) Apparatus and method for image processing to calculate likelihood of image of target object detected from input image
JP4985516B2 (en) Information processing apparatus, information processing method, and computer program
JP7046553B2 (en) Superposition method of magnetic tracking system equipped with an image pickup device
CN110652317B (en) Automatic positioning method for standard tangent plane in prenatal fetal ultrasound volume image
US20150015602A1 (en) System and method for selective determination of point clouds
JP6331517B2 (en) Image processing apparatus, system, image processing method, and image processing program
US20150297313A1 (en) Markerless tracking of robotic surgical tools
CN111091562B (en) Method and system for measuring size of digestive tract lesion
JP6387831B2 (en) Feature point position detection apparatus, feature point position detection method, and feature point position detection program
CN111582186A (en) Object edge identification method, device, system and medium based on vision and touch
CN105103164A (en) View classification-based model initialization
CN112102294A (en) Training method and device for generating countermeasure network, and image registration method and device
CN113706473A (en) Method for determining long and short axes of lesion region in ultrasonic image and ultrasonic equipment
CN110598675B (en) Ultrasonic fetal posture identification method, storage medium and electronic equipment
CN110991292A (en) Action identification comparison method and system, computer storage medium and electronic device
JP2020098588A (en) Curvilinear object segmentation with noise priors
CN116869652B (en) Surgical robot based on ultrasonic image and electronic skin and positioning method thereof
Mehryar et al. Automatic landmark detection for 3d face image processing
Wang et al. Enhanced extended-field-of-view ultrasound for musculoskeletal tissues using parallel computing
CN104361601A (en) Probability graphic model image segmentation method based on flag fusion
WO2014106747A1 (en) Methods and apparatus for image processing
CN114723659A (en) Acupuncture point detection effect determining method and device and electronic equipment
CN113744234A (en) Multi-modal brain image registration method based on GAN
US20210201512A1 (en) Method and apparatus for registering live medical image with anatomical model
JP2019039864A (en) Hand recognition method, hand recognition program and information processing device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant