CN112614123A - Ultrasonic image identification method and related device


Info

Publication number
CN112614123A
CN112614123A (application number CN202011599556.9A)
Authority
CN
China
Prior art keywords
tissue
identification
recognition
detection frame
network
Prior art date
Legal status
Pending
Application number
CN202011599556.9A
Other languages
Chinese (zh)
Inventor
黄子殷
Current Assignee
Sonoscape Medical Corp
Original Assignee
Sonoscape Medical Corp
Priority date
Filing date
Publication date
Application filed by Sonoscape Medical Corp filed Critical Sonoscape Medical Corp
Priority to CN202011599556.9A
Publication of CN112614123A
Legal status: Pending

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis > G06T 7/0002 Inspection of images, e.g. flaw detection > G06T 7/0012 Biomedical image inspection
    • G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection > G06T 7/12 Edge-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10132 Ultrasound image
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20081 Training; Learning
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The application discloses an ultrasound image identification method comprising: acquiring an ultrasound image to be identified; and performing tissue identification on it with a tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain an identification result. The identification result comprises result data marking a tissue structure segmentation boundary; the tissue identification model is trained on training data annotated with tissue detection frames and tissue structure segmentation boundaries. Because the model marks segmentation boundaries on the image to be identified, medical staff can quickly determine tissue structures from those boundaries, improving the speed of ultrasound examination. The application also discloses an ultrasound image identification apparatus, a computing device and a computer-readable storage medium with the same beneficial effects.

Description

Ultrasonic image identification method and related device
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to an ultrasound image recognition method, an ultrasound image recognition apparatus, a computing device, and a computer-readable storage medium.
Background
With the development of medical detection technology, ultrasound is commonly used in the medical field to examine and diagnose diseases. As ultrasonic waves propagate through the human body, an ultrasound image reflecting how the various organs and tissues in the body reflect and attenuate the waves is displayed on the oscilloscope screen, showing conditions inside the body. Ultrasonic waves are highly directional; when they propagate through the human body and encounter tissues and organs of different densities, phenomena such as reflection, refraction and absorption occur. From the distance, intensity and degree of the echoes displayed on the oscilloscope screen, and from whether attenuation is obvious, the motion of some internal organs can be shown, and whether a tissue or organ contains liquid or gas, or is solid tissue, can be accurately identified. Ultrasound images therefore conveniently display the condition of tissues in the body for examination.
In the prior art, when medical staff perform an ultrasound examination, they generally screen for an ultrasound section image that meets the requirements by manually clicking or pressing keys, relying on their own clinical experience, and must themselves judge and identify the tissues and organs in the section image before labeling or measuring them. This approach requires the medical staff to judge on their own and repeatedly confirm or adjust manually, which is inefficient and may introduce subjective errors that reduce the accuracy of the examination process.
Therefore, how to improve the efficiency of locating tissue structures in ultrasound images is a key concern for those skilled in the art.
Disclosure of Invention
The application aims to provide an ultrasound image identification method, an ultrasound image identification apparatus, a computing device and a computer-readable storage medium. Tissue identification is performed on an ultrasound image to be identified by a tissue identification model to obtain an identification result marked with tissue detection frames and structure segmentation boundaries, so that medical staff can quickly determine the positions of tissues and their structures from the frames and boundaries, improving the speed of ultrasound examination.
In order to solve the above technical problem, the present application provides an ultrasound image identification method, including:
acquiring an ultrasonic image to be identified;
performing tissue identification on the ultrasound image to be identified with a tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain an identification result;
wherein the identification result comprises result data marking a tissue structure segmentation boundary; the tissue identification model is trained on training data annotated with tissue detection frames and tissue structure segmentation boundaries.
Optionally, performing tissue identification on the ultrasound image to be identified with the tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain the identification result includes:
extracting features from the ultrasound image to be identified with a feature extraction network of the tissue identification model to obtain target features;
detecting the target features with a detection network of the tissue identification model to obtain tissue detection frame identification information, the information including a tissue detection frame that frames the corresponding tissue;
and segmenting the target features, based on the tissue detection frame identification information, with a segmentation network of the tissue identification model to obtain the identification result.
Optionally, the segmenting of the target features by the segmentation network of the tissue identification model based on the tissue detection frame identification information to obtain the identification result includes:
segmenting, with the segmentation network, the target features within the range of the tissue detection frame corresponding to the identification information to obtain the identification result.
Optionally, the extracting of features from the ultrasound image to be identified by the feature extraction network of the tissue identification model to obtain the target features includes:
extracting multiple low-level features from the ultrasound image to be identified with the feature extraction network, and combining the low-level features to obtain high-level semantic features that serve as the target features.
Optionally, after the detection network of the tissue identification model detects the target features and obtains the tissue detection frame identification information, the method further includes:
classifying the target features, based on the tissue detection frame identification information, with a classification network of the tissue identification model to obtain a tissue classification result.
Optionally, after performing tissue identification on the ultrasound image to be identified with the tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain the identification result, the method further includes:
automatically measuring the identified tissue according to the identification result and outputting measurement data; wherein the measurement data comprise measurement values and/or measurement lines.
Optionally, the tissue recognition model is a deep learning neural network model; wherein the detection network comprises convolutional layers.
Optionally, before the tissue identification model is trained on the training data, the method further includes:
applying image transformation processing to original training images to obtain the training data; the image transformation processing comprises any one, or a combination of any several, of affine transformation, image stitching, mosaic blurring and Gaussian blurring.
The present application further provides an ultrasound image recognition apparatus, including:
the image acquisition module is used for acquiring an ultrasonic image to be identified;
the image identification module is used for performing tissue identification on the ultrasound image to be identified with the tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain an identification result; wherein the identification result comprises result data marking a tissue structure segmentation boundary; the tissue identification model is trained on training data annotated with tissue detection frames and tissue structure segmentation boundaries.
The present application further provides a computing device comprising:
a memory for storing a computer program;
a processor for implementing the steps of the ultrasound image identification method as described above when executing the computer program.
The present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the ultrasound image identification method described above.
The application provides an ultrasound image identification method comprising: acquiring an ultrasound image to be identified; and performing tissue identification on it with a tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain an identification result; wherein the identification result comprises result data marking a tissue structure segmentation boundary, and the tissue identification model is trained on training data annotated with tissue detection frames and tissue structure segmentation boundaries.
Tissue identification of the ultrasound image by the tissue identification model yields the identification result automatically. Because the model's training data are annotated with tissue detection frames and tissue structure segmentation boundaries, the identification result contains the segmentation boundaries, so medical staff can quickly determine tissue structures from them. This improves the speed of ultrasound examination while avoiding subjective errors, improving examination accuracy.
The application also provides an ultrasound image identification device, a computing device and a computer readable storage medium, which have the above beneficial effects and are not described herein again.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an ultrasound image identification method according to an embodiment of the present application;
FIG. 2 is a flow chart of an identification process of the ultrasound image identification method according to the embodiment of the present application;
fig. 3 is a schematic structural diagram of an ultrasound image recognition apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The core of the application is to provide an ultrasound image identification method, an ultrasound image identification apparatus, a computing device and a computer-readable storage medium. Tissue identification is performed on an ultrasound image to be identified by a tissue identification model to obtain an identification result marked with tissue structure segmentation boundaries, so that medical staff can quickly determine tissue structures from those boundaries, improving the speed of ultrasound examination.
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application; all other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the present application.
In the related art, medical staff directly inspect ultrasound images manually to screen for different tissue conditions. Manual examination, however, first requires the medical staff to determine the tissues and structures in the ultrasound image: the position of a tissue must be determined, and the corresponding structures then located within it. This process not only consumes a lot of time but can also introduce subjective errors, reducing the accuracy of the examination.
Therefore, the ultrasound image identification method provided by the application performs tissue identification on an ultrasound image to be identified with a tissue identification model to obtain an identification result, achieving automatic identification.
The following describes an ultrasound image identification method provided by the present application, by way of an example.
Referring to fig. 1, fig. 1 is a flowchart illustrating an ultrasound image identification method according to an embodiment of the present disclosure.
In this embodiment, the method may include:
s101, obtaining an ultrasonic image to be identified;
this step is intended to acquire an ultrasound image to be identified.
The ultrasound image to be identified may be acquired from an ultrasound system in real time, so that it is identified in real time during operation and the corresponding identification result is displayed, providing quick and convenient assistance to medical staff.
Alternatively, stored ultrasound images to be identified may be acquired from the ultrasound system's database, so that they can be identified in batches, improving the efficiency and speed with which medical staff examine diseases by reference to ultrasound images.
As can be seen, the manner of obtaining the ultrasound image to be identified in this step is not unique, and is not specifically limited herein.
S102, performing tissue identification on the ultrasound image to be identified with the tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain an identification result; wherein the identification result comprises result data marking the tissue structure segmentation boundary; the tissue identification model is trained on training data annotated with tissue detection frames and tissue structure segmentation boundaries.
This step therefore performs tissue identification on the ultrasound image to be identified with the tissue identification model, based on the tissue detection frame identification information exchanged during the identification process, to obtain the identification result.
The identification result comprises result data marking the tissue structure segmentation boundaries: during identification, a segmentation boundary is drawn for each tissue structure of interest in the image, so the position of a structure is determined directly from its boundary in the result.
That is, medical staff can determine a tissue structure directly from the segmentation boundary drawn on the ultrasound image and then carry out the corresponding diagnostic analysis. This avoids manual identification of the image, improves the speed of ultrasound examination, and likewise avoids subjective errors. It also spares medical staff the tedious repeated freezing and adjustment otherwise needed to find a satisfactory tissue structure while scanning, and thus helps prevent misdiagnosis and missed diagnosis caused by, for example, physician fatigue.
The tissue identification model may comprise two recognition networks, a detection network and a segmentation network. The detection network identifies the extent of a tissue in the ultrasound image and frames it with a tissue detection frame; the segmentation network identifies the structures in the image according to the detection frame information and marks them with tissue structure segmentation boundaries. The detection network passes the tissue detection frame identification information to the segmentation network so that the segmentation network can identify structures according to it.
The identification in this step is therefore performed according to the tissue detection frame identification information exchanged during the process: the detection frame information obtained by the detection network is shared with the segmentation network, which receives it and completes the tissue identification. Furthermore, the detection network and the segmentation network in the model may be interconnected so that information flows between them during processing, realizing a feedback mechanism that improves the identification accuracy of both.
Correspondingly, the training data of the tissue identification model are data annotated with tissue detection frames and tissue structure segmentation boundaries. That is, deep learning model training is performed on training data annotated (together or separately) with both the tissue detection frames and the tissue structure segmentation boundaries to obtain the tissue identification model.
To further improve the accuracy and identification effect of the tissue identification model, its feature extraction network may be connected to both the detection network and the segmentation network. The feature extraction network extracts high-level semantic features from the ultrasound image to improve the data processing of the detection and segmentation networks. Specifically, it may first extract low-level features such as edges, contours, gray level and contrast, and then combine all the low-level features to obtain the high-level semantic features.
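As a rough illustration of the low-level cues named above (edges, gray level, contrast) and of combining them into a single representation, here is a hedged numpy sketch. A real model learns this combination; the fixed weights and every function name here are hypothetical stand-ins:

```python
import numpy as np

def low_level_features(image):
    """Extract simple low-level cues: edge strength, gray level, and
    local contrast. Returns an (H, W, 3) feature stack."""
    img = image.astype(np.float64)
    # Edge strength from finite-difference gradients.
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy)
    # Local contrast as deviation from the global mean gray level.
    contrast = np.abs(img - img.mean())
    return np.stack([edges, img, contrast], axis=-1)

def combine_features(stack):
    """Toy 'combination' step: a fixed weighted sum of the low-level
    channels, standing in for the learned fusion a network would do."""
    weights = np.array([0.5, 0.3, 0.2])
    return stack @ weights
```

The output of `combine_features` is a single map per image; in the patent's setting the analogous output would be the high-level semantic features consumed by the detection and segmentation networks.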
It should be noted that, in a specific application environment, the detection network described above locates and crops the tissue or organ of interest to medical staff during a pathological examination, and the segmentation network segments specific structures within the diagnosed tissue or organ.
Further, in a practical application environment, after each tissue structure in the ultrasound image has been determined, its size or other important measurement items need to be measured in order to diagnose it. To spare medical staff manual measurement and improve the degree of automation, this embodiment may further include:
automatically measuring the identified tissue according to the identification result, and outputting measurement data; wherein the measurement data comprises measurement values and/or measurement lines.
Thus, on the basis of the identification result, the identified tissue can be measured automatically and the measurement data output, sparing medical staff manual measurement and improving efficiency. The measurement data include measurement values and/or measurement lines; when both are present they correspond to each other, e.g. the measurement value is the length of the corresponding measurement line. The automatic measurement may be performed by a preset script, by a trained measurement model, or by any automatic measurement method provided in the prior art, and is not specifically limited here.
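A minimal sketch of such an automatic measurement, assuming the identification result provides a binary segmentation mask and a known pixel spacing (both assumptions; the patent does not fix a measurement algorithm). The caliper line here is simply the widest horizontal run of the structure:

```python
import numpy as np

def auto_measure(mask, mm_per_pixel=1.0):
    """Derive a measurement line and value from a binary segmentation mask.

    Returns the endpoints of a horizontal caliper line across the widest
    row of the structure, plus its length in millimetres."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    best = None
    for y in np.unique(ys):
        row_x = xs[ys == y]
        width = row_x.max() - row_x.min()
        if best is None or width > best[0]:
            best = (width, y, row_x.min(), row_x.max())
    width, y, x0, x1 = best
    line = ((int(x0), int(y)), (int(x1), int(y)))  # measurement line endpoints
    value = float(width) * mm_per_pixel            # measurement value
    return {"line": line, "value_mm": value}
```

A real system would choose the measurement (e.g. a diameter along the structure's principal axis) per tissue type; the dictionary shape of the output is likewise illustrative.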
Further, in order to increase the number of training samples so as to enhance the recognition effect of the tissue recognition model, the embodiment may further include:
applying image transformation processing to original training images to obtain the training data; the image transformation processing comprises any one, or a combination of any several, of affine transformation, image stitching, mosaic blurring and Gaussian blurring.
In this alternative, image processing is applied to the original training images mainly to increase the number of training samples. The processing includes, but is not limited to, any one or combination of affine transformation, image stitching, mosaic blurring and Gaussian blurring; affine transformation of an image may include operations such as translation, scaling, flipping and rotation.
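Two of the listed transformations can be sketched in a few lines of numpy. This is an illustrative stand-in, not the patent's pipeline: one affine example (a horizontal flip) and a mosaic blur; a real pipeline would also cover translation, scaling, rotation, image stitching and Gaussian blurring:

```python
import numpy as np

def flip_horizontal(img):
    """Affine example: mirror the image left-right."""
    return img[:, ::-1]

def mosaic_blur(img, block=2):
    """Mosaic (pixelation): replace each block x block tile by its mean."""
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y+block, x:x+block] = out[y:y+block, x:x+block].mean()
    return out

def augment(img):
    """Generate extra training samples from one original image."""
    return [img, flip_horizontal(img), mosaic_blur(img)]
```

Each original image thus yields several training samples, which is the stated purpose of the transformation step.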
Further, in order to improve the accuracy of tissue identification in the disease examination environment, the embodiment may further include:
ultrasound image data for a plurality of time periods and/or a plurality of regions is acquired as training data.
In this alternative, the training data contain richer information: ultrasound image data covering multiple time periods and/or multiple regions are acquired as training data. The multiple time periods span a given condition from beginning to end; for example, when the acquired training data are obstetric data, training data covering multiple gestational weeks are collected.
Furthermore, the identification result usually marks multiple structures in the ultrasound image with tissue structure segmentation boundaries. In some application scenarios the structures are of different types, and requiring medical staff to recognize each type themselves reduces examination efficiency. Therefore, this embodiment may also attach corresponding tissue structure classification information to each type of structure.
Therefore, optionally, this embodiment may further include:
step 1, when performing tissue identification on the ultrasound image to be identified with the tissue identification model, obtaining the tissue classification result produced by classifying the image during the identification process;
and step 2, adding the corresponding tissue structure classification information to each tissue structure segmentation boundary in the identification result according to the tissue classification result.
While identifying the structures in the ultrasound image to be identified, the tissue identification model first classifies each tissue structure and then marks its segmentation boundary. In this alternative, the tissue classification result from the classification step can therefore be obtained directly, and the corresponding tissue structure classification information added to each segmentation boundary in the identification result accordingly. For example, in an obstetric examination the segmentation boundaries delineate fetal tissues and organs such as the gastric bubble, the nuchal translucency (NT) and the placenta; with this alternative, classification labels such as "placenta", "nuchal translucency" and "gastric bubble" can be added to the respective boundaries, helping medical staff quickly determine the type of each tissue structure.
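Attaching the classification information to each boundary is essentially a join between the segmentation output and the classification output. A small sketch, with a hypothetical class table drawn from the obstetric example in the text:

```python
# Hypothetical class-id-to-name table for the obstetric example; the real
# model's label set and output format are not specified by the patent.
CLASS_NAMES = {0: "placenta", 1: "nuchal translucency", 2: "gastric bubble"}

def label_boundaries(boundaries, class_results):
    """Attach the tissue-structure class name to each segmentation boundary.

    boundaries: list of point lists, one per segmented structure
    class_results: list of class ids, aligned with `boundaries`
    """
    return [
        {"boundary": pts, "label": CLASS_NAMES[cid]}
        for pts, cid in zip(boundaries, class_results)
    ]
```

The labelled boundaries can then be rendered next to each structure so the examiner reads the type directly off the image.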
In addition, the tissue identification model of this embodiment may further include a regression branch network, which may specifically comprise convolutional layers and fully connected layers to locate the identified tissue structures.
In summary, in this embodiment tissue identification is performed on the ultrasound image to be identified by the tissue identification model to obtain the identification result, achieving automatic identification. Because the model is trained on data annotated with tissue detection frames and tissue structure segmentation boundaries, the identification result contains the segmentation boundaries, so medical staff can quickly determine tissue structures from them. This improves the speed of ultrasound examination, avoids subjective errors, and improves examination accuracy. The identification method of this embodiment is suitable for obstetric ultrasound image screening.
The ultrasound image identification method provided by the present application is further described below by another embodiment.
In this embodiment, the tissue identification model includes a feature extraction network, a detection network and a segmentation network; the segmentation network may be a fully convolutional network. The feature extraction network is connected to both the detection network and the segmentation network, and the detection network is connected to the segmentation network. The feature extraction network mainly extracts high-level semantic features from the input data to improve feature quality. On this basis, the detection and segmentation networks each process the received high-level semantic features and exchange information during processing: the segmentation network uses the detection frame information produced by the detection network to adjust its segmentation, which narrows the identification range and makes segmentation faster and more accurate.
The feature extraction network comprises convolutional layers and a feature pyramid network. The convolutional layers extract different features from the input image data: the first convolutional layer may only extract low-level features such as edges, lines, and corners, while deeper layers iteratively extract more complex features from those low-level features. The feature pyramid network fuses features from different layers, exploiting the high resolution of low-level features and the rich semantic information of high-level features, thereby improving the quality of feature extraction.
In a specific implementation, the feature extraction network may use ResNet50 together with an FPN (the feature pyramid network mentioned above) as the backbone network to obtain more accurate target features. ResNet (Residual Network) is widely used in object classification and as the backbone of computer vision tasks; typical variants include ResNet50 and ResNet101. Feature extraction can therefore be implemented with ResNet50.
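The top-down fusion performed by the feature pyramid network can be sketched in a few lines. This is a minimal NumPy illustration, not the patent's implementation: `upsample2x` and `fpn_fuse` are hypothetical names, nearest-neighbour upsampling stands in for learned upsampling, and a real FPN would also apply 1x1 lateral convolutions to align channel counts.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of an (H, W, C) feature map.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fpn_fuse(low_level, high_level):
    """Fuse a high-resolution low-level feature with an upsampled
    high-level (semantically richer) feature, FPN-style."""
    return low_level + upsample2x(high_level)

low = np.ones((32, 32, 8))         # high resolution, weak semantics
high = np.full((16, 16, 8), 2.0)   # low resolution, strong semantics
fused = fpn_fuse(low, high)
print(fused.shape)  # (32, 32, 8)
```

The fused map keeps the spatial resolution of the low-level feature while inheriting the semantics of the high-level one, which is the property the paragraph above relies on.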
Referring to fig. 2, fig. 2 is a flowchart illustrating an identification process of an ultrasound image identification method according to an embodiment of the present disclosure.
In this embodiment, the identification process may include:
s201, performing feature extraction on an ultrasonic image to be identified by using a feature extraction network of the tissue identification model to acquire target features;
s202, detecting and identifying the target characteristics by using a detection network of the tissue identification model to obtain identification information of a tissue detection frame; wherein the tissue detection frame identification information includes a tissue detection frame for framing a range of the corresponding tissue;
s203, carrying out segmentation identification on the target features based on the identification information of the tissue detection frame by utilizing a segmentation network of the tissue identification model to obtain an identification result.
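The three-step flow S201–S203 can be sketched with stand-in networks. This is an illustrative NumPy sketch only: the real networks are learned convolutional models, and `extract_features`, `detect`, and `segment` here are toy placeholders that merely mirror the data flow (features, then detection frame information, then a box-restricted mask).

```python
import numpy as np

rng = np.random.default_rng(0)

# S201: stand-in feature extractor (a real model would use conv layers).
def extract_features(image):
    # Downsample by 4 and add a channel axis: (H, W) -> (H/4, W/4, C).
    return np.stack([image[::4, ::4]] * 8, axis=-1)

# S202: stand-in detection head returning tissue detection frame info.
def detect(features):
    # (x, y, w, h) in feature-map coordinates, plus class id and score.
    return {"box": (10, 8, 24, 16), "class_id": 2, "score": 0.91}

# S203: stand-in segmentation head, restricted to the detected frame.
def segment(features, box_info):
    x, y, w, h = box_info["box"]
    mask = np.zeros(features.shape[:2], dtype=bool)
    roi = features[y:y + h, x:x + w, 0]
    mask[y:y + h, x:x + w] = roi > roi.mean()   # toy thresholding
    return mask

image = rng.random((256, 256))
feats = extract_features(image)    # S201
box_info = detect(feats)           # S202
mask = segment(feats, box_info)    # S203
print(feats.shape, mask.shape)
```

Note that `segment` receives the detection frame information from `detect`, mirroring the information interaction between the detection and segmentation networks described below.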
As can be seen, in this identification process, S203 uses the segmentation network of the tissue identification model to perform segmentation identification on the target features based on the tissue detection frame identification information.
Further, during identification the detection network sends the tissue detection frame identification information to the segmentation network. This identification information includes attributes such as the position and size of the detection frame, allowing the segmentation network to narrow the range of segmentation identification within the high-level semantic features before performing the segmentation identification processing. This not only reduces the amount of data the segmentation network must process, making segmentation identification faster, but also excludes erroneous features, improving the accuracy of the segmentation network.
The data interaction between the detection network and the segmentation network can be embodied in the structural design of the tissue identification model. It may be implemented with a tissue detection frame information transmission structure, or with any design method provided in the prior art, and is not specifically limited herein.
The structure of the feature extraction network may be any one of the feature extraction networks provided in the prior art, and is not specifically limited herein.
Further, in order to improve the efficiency and accuracy of segmentation identification, S203 in this embodiment may include:
and carrying out segmentation identification on the target characteristics in the range of the tissue detection frame corresponding to the identification information of the tissue detection frame by utilizing a segmentation network to obtain an identification result.
In this alternative, the tissue detection frame narrows the range of segmentation identification performed by the segmentation network, reducing the data volume and excluding erroneous regions, which improves both the efficiency and the accuracy of segmentation identification.
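The box-restricted segmentation of this alternative can be illustrated as follows. The thresholding stands in for the real segmentation head; the point of the sketch is the crop-and-paste-back pattern that limits processing to the tissue detection frame.

```python
import numpy as np

def segment_within_box(feature_map, box):
    """Run (toy) segmentation only inside the tissue detection frame,
    then paste the result back into a full-size mask."""
    x, y, w, h = box
    roi = feature_map[y:y + h, x:x + w]
    roi_mask = roi > roi.mean()            # stand-in for the real seg head
    full_mask = np.zeros(feature_map.shape, dtype=bool)
    full_mask[y:y + h, x:x + w] = roi_mask
    return full_mask

feat = np.arange(64 * 64, dtype=float).reshape(64, 64)
mask = segment_within_box(feat, (8, 8, 16, 16))
print(mask.shape)
```

Only the 16x16 region inside the frame is processed instead of the full 64x64 map, which is the data-volume reduction described above; everything outside the frame is guaranteed background.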
Further, to improve the effect of feature extraction, S201 in this embodiment may include:
performing bottom layer feature extraction on an ultrasonic image to be identified by adopting a feature extraction network of the tissue identification model to obtain a plurality of bottom layer features; and combining the plurality of bottom-layer features to obtain a high-layer semantic feature serving as a target feature.
The feature extraction network extracts high-level semantic features from the ultrasound image to improve the data processing performed by the detection network and the segmentation network. Specifically, it can extract low-level features such as edges, contours, gray scale, and contrast, and then combine all the low-level features in series to obtain the high-level semantic features used as the target features, preserving their effectiveness.
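The series combination of low-level features can be sketched as a channel-wise stack. The edge and contrast operators below are crude hand-crafted stand-ins for what the convolutional layers actually learn; the function names are illustrative.

```python
import numpy as np

def edge_feature(img):
    # Horizontal + vertical finite differences as a crude edge map.
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def contrast_feature(img, k=8):
    # Local contrast: deviation from the k x k block mean.
    h, w = img.shape
    blocks = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.abs(img - blocks.repeat(k, axis=0).repeat(k, axis=1))

def combine_features(img):
    # Series (channel-wise) combination of low-level features.
    return np.stack([img, edge_feature(img), contrast_feature(img)], axis=-1)

img = np.random.default_rng(1).random((64, 64))
target = combine_features(img)
print(target.shape)  # (64, 64, 3)
```

Each low-level feature becomes one channel of the combined target feature, which downstream networks can then consume jointly.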
Furthermore, the identification result corresponding to the ultrasound image to be identified generally marks multiple structures via tissue structure segmentation boundaries. In some application scenarios these structures are of different types, and requiring medical personnel to identify each type manually reduces examination efficiency. Therefore, to improve examination efficiency, S203 in this embodiment may further include:
and classifying and identifying the target characteristics by utilizing a classification network of the tissue identification model based on the identification information of the tissue detection frame to obtain a tissue classification and identification result.
The classification network may include convolutional layers and fully connected layers. Because the segmentation network and the classification network share the feature extraction network, the correlation between the segmentation and classification tasks can be exploited, improving the performance of the tissue identification model and the efficiency of classification and segmentation.
Correspondingly, training data labeled with classification labels is also required to be included in the training process so as to train the classification network.
In this alternative, the tissue classification identification result can be obtained directly from the classification network. For example, in obstetric examinations, tissue structure segmentation boundaries delineate fetal tissues or organs such as the gastric bubble, the nuchal translucency (NT), and the placenta. The classification network can then attach a corresponding tissue classification identification result to each tissue structure segmentation boundary, i.e., structure classification information such as "placenta", "nuchal translucency", and "gastric bubble", making it easy for medical personnel to quickly determine the type of each tissue structure.
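A classification head over an ROI feature can be sketched as global average pooling followed by one fully connected layer and a softmax. The class names follow the obstetric example above; the random weights are placeholders for trained parameters.

```python
import numpy as np

CLASSES = ["placenta", "nuchal translucency", "gastric bubble"]

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_roi(roi_feature, weights, bias):
    """Pool the ROI feature, apply one fully connected layer, softmax."""
    pooled = roi_feature.mean(axis=(0, 1))   # global average pooling
    logits = weights @ pooled + bias         # fully connected layer
    probs = softmax(logits)
    return CLASSES[int(np.argmax(probs))], probs

rng = np.random.default_rng(2)
roi = rng.random((16, 16, 8))                # feature inside one detection frame
W = rng.standard_normal((3, 8))              # placeholder trained weights
b = np.zeros(3)
label, probs = classify_roi(roi, W, b)
print(label)
```

In the described model the pooled feature would come from the shared feature extraction network, which is what lets the classification and segmentation tasks reinforce each other.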
In this embodiment, the initial model corresponding to the tissue identification model may be trained with the training data to obtain the tissue identification model as the target model.
After the training data is determined, any model training method provided in the prior art may be used; this is not specifically limited herein. Training the initial model thus yields the tissue identification model used to identify the ultrasound image to be identified. The information interaction between the detection network and the segmentation network improves the accuracy and identification effect of the tissue identification model.
As can be seen, this embodiment performs information interaction between the detection network and the segmentation network, improving detection efficiency and accuracy, and thereby obtains the identification result and achieves automatic identification.
The ultrasound image identification apparatus provided by the embodiments of the present application is introduced below; the ultrasound image identification apparatus described below and the ultrasound image identification method described above may be referred to in correspondence with each other.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an ultrasound image recognition apparatus according to an embodiment of the present disclosure.
In this embodiment, the apparatus may include:
an image obtaining module 100, configured to obtain an ultrasound image to be identified;
the image identification module 200 is configured to perform tissue identification on the ultrasound image to be identified by using a tissue identification model, according to the tissue detection frame identification information exchanged during the identification process, to obtain an identification result; wherein the identification result is result data marked with a tissue detection frame and a tissue structure segmentation boundary; the tissue identification model is trained on training data marked with tissue detection frames and tissue structure segmentation boundaries.
Optionally, the image recognition module 200 may include:
the characteristic extraction unit is used for extracting the characteristics of the ultrasonic image to be identified by utilizing the characteristic extraction network of the tissue identification model to acquire target characteristics;
the tissue detection unit is used for detecting and identifying the target characteristics by using a detection network of the tissue identification model to obtain identification information of a tissue detection frame; wherein the tissue detection frame identification information includes a tissue detection frame for framing a range of the corresponding tissue;
and the structure segmentation unit is used for carrying out segmentation identification on the target characteristics based on the identification information of the tissue detection frame by utilizing a segmentation network of the tissue identification model to obtain an identification result.
Optionally, the structure segmentation unit is specifically configured to perform segmentation and identification on the target feature within the range of the tissue detection frame corresponding to the tissue detection frame identification information by using a segmentation network, so as to obtain an identification result.
Optionally, the feature extraction unit is specifically configured to perform bottom-layer feature extraction on the ultrasound image to be identified by using a feature extraction network of the tissue identification model to obtain a plurality of bottom-layer features; and combining the plurality of bottom-layer features to obtain a high-layer semantic feature serving as a target feature.
Optionally, the tissue detection unit may further include:
and the classification unit is used for classifying and identifying the target characteristics based on the identification information of the tissue detection frame by utilizing a classification network of the tissue identification model to obtain a tissue classification and identification result.
Optionally, the apparatus may further include:
the automatic measurement module is used for automatically measuring the identified tissue according to the identification result and outputting measurement data; wherein the measurement data comprises measurement values and/or measurement lines.
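The automatic measurement module's output (a measurement value plus a measurement line) can be illustrated on a binary segmentation mask. The function name and the pixel-spacing value are hypothetical; a real device would use the calibrated spacing of the ultrasound image.

```python
import numpy as np

def measure_vertical_extent(mask, pixel_spacing_mm=0.1):
    """Toy automatic measurement: the longest vertical extent of the
    segmented structure, returned as a value in mm plus a measurement
    line (column, top row, bottom row)."""
    best_mm, best_line = 0.0, None
    for c in np.where(mask.any(axis=0))[0]:
        rows = np.where(mask[:, c])[0]
        extent_mm = (rows.max() - rows.min()) * pixel_spacing_mm
        if extent_mm > best_mm:
            best_mm = extent_mm
            best_line = (int(c), int(rows.min()), int(rows.max()))
    return best_mm, best_line

mask = np.zeros((64, 64), dtype=bool)
mask[20:41, 30:35] = True              # a structure spanning rows 20..40
value_mm, line = measure_vertical_extent(mask)
print(value_mm, line)
```

The returned tuple corresponds to the measurement data named in the module description: `value_mm` is the measurement value and `line` the endpoints of the measurement line to be drawn on the image.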
Optionally, the tissue recognition model is a deep learning neural network model; wherein the detection network comprises convolutional layers.
Optionally, the feature extraction network may include a convolutional layer and a feature pyramid network.
Optionally, the apparatus may further include:
the image processing module is used for carrying out image transformation processing on the original training image to obtain training data; the image transformation processing comprises any one or combination of any more of affine transformation, image splicing, mosaic blurring and Gaussian blurring.
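Two of the listed transformations, mosaic blurring and Gaussian blurring, can be sketched in NumPy as follows; affine transformation and image splicing are omitted for brevity, and the block size, kernel radius, and sigma are illustrative choices.

```python
import numpy as np

def mosaic_blur(img, k=4):
    # Replace each k x k block by its mean (pixelation).
    h, w = img.shape
    blocks = img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    return blocks.mean(axis=(1, 3)).repeat(k, axis=0).repeat(k, axis=1)

def gaussian_blur(img, sigma=1.0, radius=2):
    # Separable Gaussian filtering with reflective padding.
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x**2 / (2 * sigma**2))
    kern /= kern.sum()
    pad = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, "valid"), 0, rows)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
augmented = [mosaic_blur(img), gaussian_blur(img)]
print([a.shape for a in augmented])
```

Applying such transformations to original training images multiplies the amount and variety of training data without new annotation effort, which is the purpose of the image processing module described above.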
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure.
An embodiment of the present application further provides a computing device, including:
a memory 11 for storing a computer program;
a processor 12 for implementing the steps of the ultrasound image identification method according to the above embodiments when executing the computer program.
Specifically, the computing device may be an ultrasound diagnostic device.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the ultrasound image identification method according to the above embodiments.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above provides an ultrasound image identification method, an ultrasound image identification apparatus, a computing device and a computer readable storage medium. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.

Claims (11)

1. An ultrasound image recognition method, comprising:
acquiring an ultrasonic image to be identified;
performing tissue identification on the ultrasound image to be identified by using a tissue identification model according to the tissue detection frame identification information exchanged during the identification process, to obtain an identification result;
wherein the recognition result comprises result data identifying a tissue structure segmentation boundary; the tissue recognition model is obtained by training according to training data, and the training data is marked with a tissue detection frame and a tissue structure segmentation boundary.
2. The ultrasound image identification method according to claim 1, wherein the performing tissue identification on the ultrasound image to be identified by using a tissue identification model according to the tissue detection frame identification information exchanged during the identification process to obtain an identification result comprises:
carrying out feature extraction on the ultrasonic image to be identified by using a feature extraction network of the tissue identification model to obtain target features;
detecting and identifying the target characteristics by using a detection network of the tissue identification model to obtain identification information of the tissue detection frame; wherein the tissue detection frame identification information includes a tissue detection frame that frames a range of a corresponding tissue;
and carrying out segmentation identification on the target features based on the identification information of the tissue detection frame by utilizing a segmentation network of the tissue identification model to obtain the identification result.
3. The method according to claim 2, wherein the performing segmentation recognition on the target feature based on the tissue detection frame recognition information by using the segmentation network of the tissue recognition model to obtain the recognition result comprises:
and carrying out segmentation identification on the target features in the range of the tissue detection frame corresponding to the identification information of the tissue detection frame by using the segmentation network to obtain the identification result.
4. The method according to claim 2, wherein the extracting the features of the ultrasound image to be identified by using the feature extraction network of the tissue identification model to obtain the target features comprises:
performing bottom layer feature extraction on the ultrasonic image to be identified by adopting a feature extraction network of the tissue identification model to obtain a plurality of bottom layer features; and combining the bottom layer characteristics to obtain the high-level semantic characteristics serving as the target characteristics.
5. The method according to claim 2, wherein after the detecting and identifying the target feature by using the detection network of the tissue identification model to obtain the identification information of the tissue detection frame, the method further comprises:
and carrying out classification and identification on the target features based on the identification information of the tissue detection frame by utilizing the classification network of the tissue identification model to obtain a tissue classification and identification result.
6. The method according to any one of claims 1 to 5, further comprising, after the performing tissue identification on the ultrasound image to be identified by using a tissue identification model according to the tissue detection frame identification information exchanged during the identification process to obtain the identification result:
automatically measuring the identified tissue according to the identification result, and outputting measurement data; wherein the measurement data comprises measurement values and/or measurement lines.
7. The ultrasound image recognition method of claim 2, wherein the tissue recognition model is a deep learning neural network model; wherein the detection network comprises convolutional layers.
8. The ultrasound image recognition method of claim 1, further comprising, before the step of training the tissue recognition model according to the training data:
carrying out image transformation processing on an original training image to obtain the training data; the image transformation processing comprises any one or combination of any more of affine transformation, image splicing, mosaic blurring and Gaussian blurring.
9. An ultrasound image recognition apparatus, comprising:
the image acquisition module is used for acquiring an ultrasonic image to be identified;
the image identification module is used for performing tissue identification on the ultrasound image to be identified according to the tissue detection frame identification information exchanged during the identification process by using the tissue identification model, to obtain an identification result; wherein the identification result comprises result data identifying a tissue structure segmentation boundary; the tissue identification model is obtained by training according to training data, and the training data is marked with a tissue detection frame and a tissue structure segmentation boundary.
10. A computing device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method of ultrasound image identification as claimed in any one of claims 1 to 8 when executing said computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method for ultrasound image recognition according to any one of claims 1 to 8.
CN202011599556.9A 2020-12-29 2020-12-29 Ultrasonic image identification method and related device Pending CN112614123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011599556.9A CN112614123A (en) 2020-12-29 2020-12-29 Ultrasonic image identification method and related device


Publications (1)

Publication Number Publication Date
CN112614123A true CN112614123A (en) 2021-04-06

Family

ID=75249111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011599556.9A Pending CN112614123A (en) 2020-12-29 2020-12-29 Ultrasonic image identification method and related device

Country Status (1)

Country Link
CN (1) CN112614123A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972751A (en) * 2022-05-11 2022-08-30 平安科技(深圳)有限公司 Medical image recognition method, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408398A (en) * 2014-10-21 2015-03-11 无锡海斯凯尔医学技术有限公司 Liver boundary identification method and system
CN107451615A (en) * 2017-08-01 2017-12-08 广东工业大学 Thyroid papillary carcinoma Ultrasound Image Recognition Method and system based on Faster RCNN
CN107767386A (en) * 2017-10-12 2018-03-06 深圳开立生物医疗科技股份有限公司 Ultrasonoscopy processing method and processing device
CN108364293A (en) * 2018-04-10 2018-08-03 复旦大学附属肿瘤医院 A kind of on-line training thyroid tumors Ultrasound Image Recognition Method and its device
CN111242956A (en) * 2020-01-09 2020-06-05 西北工业大学 U-Net-based ultrasonic fetal heart and fetal lung deep learning joint segmentation method
CN111862044A (en) * 2020-07-21 2020-10-30 长沙大端信息科技有限公司 Ultrasonic image processing method and device, computer equipment and storage medium
CN111951268A (en) * 2020-08-11 2020-11-17 长沙大端信息科技有限公司 Parallel segmentation method and device for brain ultrasonic images
US20200372635A1 (en) * 2017-08-03 2020-11-26 Nucleai Ltd Systems and methods for analysis of tissue images
CN112102230A (en) * 2020-07-24 2020-12-18 湖南大学 Ultrasonic tangent plane identification method, system, computer equipment and storage medium



Similar Documents

Publication Publication Date Title
CN109670510B (en) Deep learning-based gastroscope biopsy pathological data screening system
CN108665456B (en) Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence
CN112070119B (en) Ultrasonic section image quality control method, device and computer equipment
CN110458883B (en) Medical image processing system, method, device and equipment
US11284855B2 (en) Ultrasound needle positioning system and ultrasound needle positioning method utilizing convolutional neural networks
KR102498686B1 (en) Systems and methods for analyzing electronic images for quality control
CN109614995A (en) The system and method for pancreatic duct and pancreas structure is identified under a kind of endoscopic ultrasonography
CN113576508A (en) Cerebral hemorrhage auxiliary diagnosis system based on neural network
CN107543788A (en) A kind of urine erythrocyte abnormal rate detection method and system
CN109460717A (en) Alimentary canal Laser scanning confocal microscope lesion image-recognizing method and device
CN112750142A (en) Ultrasonic image segmentation system and method based on side window attention mechanism
Zeng et al. Machine Learning-Based Medical Imaging Detection and Diagnostic Assistance
CN114972266A (en) Lymphoma ultrasonic image semantic segmentation method based on self-attention mechanism and stable learning
CN112614123A (en) Ultrasonic image identification method and related device
US20240046473A1 (en) Transformation of histochemically stained images into synthetic immunohistochemistry (ihc) images
CN116563216B (en) Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition
CN112998755A (en) Method for automatic measurement of anatomical structures and ultrasound imaging system
CN116168029A (en) Method, device and medium for evaluating rib fracture
CN116452523A (en) Ultrasonic image quality quantitative evaluation method
Vivek et al. CNN Models and Machine Learning Classifiers for Analysis of Goiter Disease
CN112200726B (en) Urinary sediment visible component detection method and system based on lensless microscopic imaging
CN111696085B (en) Rapid ultrasonic evaluation method and equipment for lung impact injury condition on site
Haja et al. Advancing glaucoma detection with convolutional neural networks: a paradigm shift in ophthalmology
CN117153343B (en) Placenta multiscale analysis system
Sutarno et al. FetalNet: Low-light fetal echocardiography enhancement and dense convolutional network classifier for improving heart defect prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination