CN111862044A - Ultrasonic image processing method and device, computer equipment and storage medium - Google Patents

Ultrasonic image processing method and device, computer equipment and storage medium

Info

Publication number
CN111862044A
Authority
CN
China
Prior art keywords
network
image
target
target tissue
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010704590.1A
Other languages
Chinese (zh)
Other versions
CN111862044B (English)
Inventor
李肯立
李胜利
伍湘琼
谭光华
文华轩
朱宁波
陈志伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lanxiang Zhiying Technology Co ltd
Original Assignee
Changsha Datang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Datang Information Technology Co ltd
Priority to CN202010704590.1A (granted as patent CN111862044B)
Publication of CN111862044A
Application granted
Publication of CN111862044B
Legal status: Active
Anticipated expiration

Classifications

    (All under G: Physics; G06: Computing; Calculating or Counting.)
    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G06F 18/00 Pattern recognition; G06F 18/20 Analysing; G06F 18/24 Classification techniques)
    • G06F 18/253: Fusion techniques of extracted features (G06F 18/25 Fusion techniques)
    • G06N 3/045: Combinations of networks (G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (G06N 3/02 Neural networks)
    • G06T 7/10: Segmentation; Edge detection (G06T 7/00 Image analysis)
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching (G06V 10/00 Arrangements for image or video recognition or understanding; G06V 10/70 using pattern recognition or machine learning; G06V 10/74 Image or video pattern matching; G06V 10/75 Organisation of the matching processes)
    • G06T 2207/10132: Ultrasound image (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081: Training; Learning (G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN] (G06T 2207/20 Special algorithmic details)
    • G06T 2207/20221: Image fusion; Image merging (G06T 2207/20212 Image combination)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The application relates to an ultrasound image processing method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring an ultrasound image dataset; preprocessing the ultrasound image dataset to obtain a preprocessed image dataset; inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps; inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of the target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network; and inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues. The method enables real-time, automatic identification, segmentation, tracking and size measurement of tissues and organs (such as the thyroid and the main neck tissues around it), effectively assisting doctors in accurate screening and diagnosis of those tissues and organs.

Description

Ultrasonic image processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an ultrasound image processing method, an ultrasound image processing apparatus, a computer device, and a storage medium.
Background
With the development of medical technology, ultrasound examination, being radiation-free, low in cost, and capable of real-time dynamic imaging, has been widely applied to disease screening and diagnosis. Taking thyroid disease as an example, ultrasound has become the first-choice imaging method for its evaluation. During ultrasound diagnosis, a sonographer first acquires an ultrasound scanning video of the thyroid, obtains thyroid sonograms, identifies the important tissue structures of the thyroid gland and the neck, checks whether the size of the thyroid gland and its internal echoes are abnormal, and then makes an analysis and diagnosis.
However, the above ultrasound diagnosis scheme has serious problems. First, identification and measurement of the thyroid gland and the tissue around the neck depend on the doctor's clinical experience and knowledge of anatomical structures; this is time-consuming and labor-intensive, the diagnosis is slow to obtain, and because the result depends on personal experience it lacks objectivity, consistency and repeatability. Second, ultrasound images are easily affected by speckle noise and echo disturbance, making the image blurred and uneven; for example, the tissue structure around the thyroid is complex and the boundary between the thyroid and the surrounding tissue is poorly defined, which may affect diagnostic accuracy. Third, the acquisition of ultrasound images is highly operator-dependent, and images acquired for the same patient may vary from hospital to hospital, machine to machine, and doctor to doctor. This variability also affects the diagnosis of thyroid disorders.
In summary, existing ultrasound diagnosis schemes cannot effectively assist doctors in accurately screening and diagnosing tissues and organs (such as the thyroid and the main neck tissues around it) from their ultrasound images.
Disclosure of Invention
In view of the above, there is a need to provide an ultrasound image processing method, apparatus, computer device and storage medium that can assist a doctor in accurate screening and diagnosis of a tissue or organ (such as the thyroid gland and the main neck tissues around it) from its ultrasound image data.
A method of ultrasound image processing, the method comprising:
acquiring an ultrasound image dataset;
preprocessing the ultrasound image dataset to obtain a preprocessed image dataset;
inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps;
inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of a target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network;
and inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues.
In one embodiment, inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues includes:
acquiring the category and position of the target tissue in the current frame image;
predicting the position of the target tissue in the next frame image through Kalman filtering based on the category and position of the target tissue in the current frame image (a prediction sketch follows this list);
judging whether the target tissues in the preceding and following frame images are the same target tissue;
and assigning the same identification data to the same target tissue, and counting the category data and number of the target tissues.
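A minimal constant-velocity sketch of that prediction step, in Python; the state layout and the noise covariance are assumptions, since the patent does not spell out the filter's matrices:

    import numpy as np

    def kalman_predict(x: np.ndarray, P: np.ndarray, dt: float = 1.0):
        # State: (cx, cy, w, h, vcx, vcy, vw, vh), box centre, size, velocities.
        F = np.eye(8)
        for i in range(4):
            F[i, i + 4] = dt              # position/size += velocity * dt
        Q = np.eye(8) * 1e-2              # assumed process-noise covariance
        x_pred = F @ x                    # predicted box state for the next frame
        P_pred = F @ P @ F.T + Q          # predicted uncertainty
        return x_pred, P_pred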
In one embodiment, judging whether the target tissues in the preceding and following frame images are the same target tissue comprises:
calculating the Mahalanobis distance, the cosine distance and the intersection-over-union ratio between the target tissues in the preceding and following frame images, and judging on that basis whether they are the same target tissue.
In one embodiment, inputting the preprocessed image dataset into a preset image processing network to obtain the corresponding feature maps includes:
inputting the preprocessed image dataset into a preset feature extraction network and extracting corresponding initial feature maps;
inputting the initial feature maps into a preset region proposal network to obtain proposal windows corresponding to the initial feature maps;
and mapping the proposal windows onto the initial feature map of the last layer of the preset region proposal network, and processing through a ROIAlign layer to obtain feature maps of fixed size.
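As an illustration of the ROIAlign step, a hedged sketch using torchvision's roi_align; the patent describes only the layer's effect, so the feature size, stride and box below are assumptions:

    import torch
    from torchvision.ops import roi_align

    feat = torch.randn(1, 256, 75, 100)                    # one feature level
    # One proposal window as (batch_index, x1, y1, x2, y2) in image coordinates.
    boxes = torch.tensor([[0, 80.0, 60.0, 400.0, 300.0]])
    pooled = roi_align(feat, boxes, output_size=(7, 7), spatial_scale=1 / 8)
    # pooled has shape 1 x 256 x 7 x 7: a fixed-size feature map per proposal.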
In one embodiment, inputting the preprocessed image dataset into a preset feature extraction network and extracting the corresponding initial feature maps includes:
inputting the preprocessed image dataset into the backbone network ResNet-50 and extracting corresponding feature data;
and performing feature fusion on the extracted feature data with a feature pyramid network to obtain the corresponding initial feature maps.
In one embodiment, determining the size of the target tissue in each image through the preset segmentation network comprises:
acquiring the number of pixels per unit length and the segmentation mask corresponding to the feature map output by the preset segmentation network;
calculating the number of pixels contained in the target tissue in each image according to the segmentation mask corresponding to the feature map and the number of pixels per unit length;
and determining the size of the target tissue in each image according to the number of pixels contained in the target tissue in each image and preset pixel proportion data.
In one embodiment, preprocessing the ultrasound image dataset to obtain a preprocessed image dataset comprises:
sequentially performing scaling, normalization and random enhancement on the ultrasound image dataset to obtain the preprocessed image dataset.
An ultrasound image processing apparatus, the apparatus comprising:
a dataset acquisition module for acquiring an ultrasound image dataset;
a data preprocessing module for preprocessing the ultrasound image dataset to obtain a preprocessed image dataset;
a feature map acquisition module for inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps;
a feature map processing module for inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of a target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network;
and a target tracking module for inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an ultrasound image dataset;
preprocessing the ultrasound image dataset to obtain a preprocessed image dataset;
inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps;
inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of a target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network;
and inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the following steps:
acquiring an ultrasound image dataset;
preprocessing the ultrasound image dataset to obtain a preprocessed image dataset;
inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps;
inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of a target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network;
and inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues.
With the ultrasound image processing method and apparatus, computer device and storage medium, the ultrasound image dataset is processed to obtain fixed-size feature maps; the feature maps are then input into a preset target detection network and a preset segmentation network respectively, so that the category and position of the target tissue in each image can be identified and the size of the target tissue measured; and the category and position of the target tissue in each image are input into a trained target tracking network for target tracking, yielding the category data and number of the target tissues. The whole scheme enables real-time, automatic identification, segmentation, tracking and size measurement of tissues and organs (such as the thyroid and the main neck tissues around it), speeds up result processing, ensures data accuracy, and can effectively assist doctors in accurate screening and diagnosis of such tissues and organs.
Drawings
FIG. 1 is a diagram illustrating an exemplary embodiment of a method for processing an ultrasound image;
FIG. 2 is a flow chart illustrating a method for processing an ultrasound image according to an embodiment;
FIG. 3 is a flow chart illustrating a method for processing an ultrasound image according to another embodiment;
FIG. 4 is a flowchart illustrating the steps of obtaining category data and quantity of a target organization in one embodiment;
FIG. 5(a) is a cross-sectional view of the isthmus of the thyroid gland, FIG. 5(b) is a cross-sectional view of the right thyroid lobe, FIG. 5(c) is a cross-sectional view of the right thyroid lobe, FIG. 5(d) is a cross-sectional view of the left thyroid lobe, and FIG. 5(e) is a cross-sectional view of the left thyroid lobe;
FIGS. 6 and 7 are schematic diagrams of recognition, segmentation and capture results obtained after thyroid video frames are input to a deep convolutional neural network;
FIG. 8 is a block diagram showing the structure of an ultrasound image processing apparatus according to an embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The ultrasound image processing method provided by the application can be applied in the environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. Specifically, a user may upload the neck ultrasound image dataset to be processed to the server 104 through the terminal 102 and trigger an image processing message on the terminal 102, which is sent to the server 104. In response to the message, the server 104 acquires the ultrasound image dataset, preprocesses it to obtain a preprocessed image dataset, inputs the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps, inputs the feature maps into a preset target detection network and a preset segmentation network respectively, identifies the category and position of the target tissue in each image through the preset target detection network, determines the size of the target tissue in each image through the preset segmentation network, and inputs the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues. The terminal 102 may be, but is not limited to, a medical device, personal computer, notebook computer, smart phone, tablet computer or portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, an ultrasound image processing method is provided, which is exemplified by the application of the method to the server in fig. 1, and includes the following steps:
step 202, an ultrasound image data set is acquired.
In practical applications, the ultrasound image data set may be a continuous thyroid ultrasound video data set acquired from an ultrasound device, which contains thyroid ultrasound image frames. Taking the thyroid ultrasound video data set as an example, the data set may be one that has been selected and accurately labeled by a sonographer based on clinical experience.
Step 204, preprocessing the ultrasonic image data set to obtain a preprocessed image data set.
The acquired dataset requires further image preprocessing to ensure that the images are normalized and suitable for processing. Specifically, the image preprocessing may include image scaling and normalization.
As shown in FIG. 3, in another embodiment, step 204 includes step 224: sequentially performing scaling, normalization and random enhancement on the ultrasound image dataset to obtain the preprocessed image dataset. Specifically, each frame in the acquired dataset is scaled to 800 × 600 pixels, the scaled image is normalized with a linear function to obtain a normalized image, and each normalized image undergoes a random enhancement operation, yielding a randomly enhanced image dataset.
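A minimal preprocessing sketch in Python (OpenCV/NumPy); min-max scaling is assumed as the linear normalization, and the flip and brightness jitter stand in for the unspecified random enhancement:

    import cv2
    import numpy as np

    def preprocess_frame(frame: np.ndarray) -> np.ndarray:
        # Scale every frame to the 800 x 600 size stated in the embodiment.
        img = cv2.resize(frame, (800, 600)).astype(np.float32)
        # Linear (min-max) normalization to [0, 1].
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        # Random enhancement: horizontal flip and brightness jitter as examples.
        if np.random.rand() < 0.5:
            img = np.fliplr(img)
        img = np.clip(img * np.random.uniform(0.9, 1.1), 0.0, 1.0)
        return img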
Step 206, inputting the preprocessed image data set to a preset image processing network to obtain a corresponding feature map.
In this embodiment, the preset image processing network may include a feature extraction network and a region proposal network (RPN). Specifically, the preprocessed image dataset may be input into a trained feature extraction network to obtain corresponding initial feature maps, and the initial feature maps are then passed to the region proposal network for further processing to obtain feature maps of fixed size.
As shown in FIG. 3, in one embodiment, step 206 comprises:
step 226: inputting the preprocessed image dataset into a preset feature extraction network and extracting corresponding initial feature maps; inputting the initial feature maps into a preset region proposal network to obtain proposal windows corresponding to the initial feature maps; mapping the proposal windows onto the initial feature map of the last layer of the preset region proposal network; and obtaining fixed-size feature maps through ROIAlign layer processing.
In this embodiment, the preset feature extraction network may include the backbone network ResNet-50 and a Feature Pyramid Network (FPN). The preprocessed image dataset may be input into the trained backbone ResNet-50 and feature pyramid network to obtain corresponding initial feature maps; the initial feature maps are passed to the region proposal network to obtain corresponding region proposals, with N proposal windows generated per map. The proposal windows are mapped onto the initial feature map of the last network layer, and each ROI is converted into a fixed-size feature map by a ROIAlign layer. For the backbone ResNet-50, the structure may be as follows: the first layer is the input layer, a 600 × 800 × 3 pixel matrix; the second layer is the feature extraction layer, which uses the public feature extraction network ResNet-50 and takes the output matrices of its conv3.x, conv4.x and conv5.x stages as the extracted features C3, C4 and C5, of sizes 75 × 100 × 512, 38 × 50 × 1024 and 19 × 25 × 2048 respectively. The feature pyramid network fuses the features C3, C4 and C5 from the backbone and outputs fused features P3, P4, P5, P6 and P7 at 5 scales, i.e. the initial feature maps.
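A hedged sketch of the backbone taps using torchvision: the conv3.x/conv4.x/conv5.x stages correspond to layer2/layer3/layer4 in torchvision's ResNet-50; the random weights and input are placeholders:

    import torch
    from torchvision.models import resnet50
    from torchvision.models.feature_extraction import create_feature_extractor

    backbone = resnet50(weights=None)    # untrained placeholder weights
    extractor = create_feature_extractor(
        backbone, return_nodes={"layer2": "C3", "layer3": "C4", "layer4": "C5"})

    x = torch.randn(1, 3, 600, 800)      # one 600 x 800 input frame
    feats = extractor(x)
    # feats["C3"]: 1 x 512 x 75 x 100, feats["C4"]: 1 x 1024 x 38 x 50,
    # feats["C5"]: 1 x 2048 x 19 x 25, matching the sizes listed above.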
The network structure of the feature pyramid network is as follows: the first layer is a convolution layer on feature C5 with a 1 × 1 kernel, 256 channels and stride 1, using SAME padding; its output matrix is 19 × 25 × 256;
the second layer is a convolution layer with a 3 × 3 kernel, 256 channels and stride 1, using SAME padding; its output matrix P5 is 19 × 25 × 256;
the third layer is a convolution layer on feature C4 with a 1 × 1 kernel, 256 channels and stride 1, using SAME padding; its output matrix, denoted P4_, is 38 × 50 × 256;
the fourth layer is an upsampling layer, which upsamples P5 into an output matrix P5_upsample of size 38 × 50 × 256;
the fifth layer is an add layer that adds P5_upsample and P4_; its output matrix is 38 × 50 × 256;
the sixth layer is a convolution layer with a 3 × 3 kernel, 256 channels and stride 1, using SAME padding; its output matrix P4 is 38 × 50 × 256;
the seventh layer is a convolution layer on feature C3 with a 1 × 1 kernel, 256 channels and stride 1, using SAME padding; its output matrix P3_ is 75 × 100 × 256;
the eighth layer is an upsampling layer that upsamples P4 to 75 × 100; its output matrix P4_upsample is 75 × 100 × 256;
the ninth layer is an add layer that adds P4_upsample and P3_; its output matrix is 75 × 100 × 256;
the tenth layer is a convolution layer with a 3 × 3 kernel, 256 channels and stride 1, using SAME padding; its output matrix P3 is 75 × 100 × 256;
the eleventh layer is a convolution layer on C5 with a 3 × 3 kernel, 256 channels and stride 2, using SAME padding; its output matrix P6 is 10 × 13 × 256;
the twelfth layer is a convolution layer on P6 with a 3 × 3 kernel, 256 channels and stride 2, using SAME padding; its output matrix P7 is 5 × 7 × 256.
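The fusion just listed can be sketched in PyTorch as follows. This is a hedged sketch, not the patent's code: lateral 1 × 1 convolutions to 256 channels, nearest-neighbour top-down upsampling with element-wise addition, 3 × 3 smoothing convolutions, and strided 3 × 3 convolutions for P6 and P7; computing P7 from a ReLU of P6 is an assumption borrowed from RetinaNet-style FPNs.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FPN(nn.Module):
        def __init__(self, c3_ch=512, c4_ch=1024, c5_ch=2048, out_ch=256):
            super().__init__()
            self.lat5 = nn.Conv2d(c5_ch, out_ch, 1)   # 1x1 lateral convs
            self.lat4 = nn.Conv2d(c4_ch, out_ch, 1)
            self.lat3 = nn.Conv2d(c3_ch, out_ch, 1)
            self.smooth5 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
            self.smooth4 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
            self.smooth3 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
            self.p6 = nn.Conv2d(c5_ch, out_ch, 3, stride=2, padding=1)
            self.p7 = nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1)

        def forward(self, c3, c4, c5):
            m5 = self.lat5(c5)
            m4 = self.lat4(c4) + F.interpolate(m5, size=c4.shape[-2:])
            m3 = self.lat3(c3) + F.interpolate(m4, size=c3.shape[-2:])
            p5, p4, p3 = self.smooth5(m5), self.smooth4(m4), self.smooth3(m3)
            p6 = self.p6(c5)                  # stride-2 conv on C5
            p7 = self.p7(F.relu(p6))          # assumed ReLU between P6 and P7
            return p3, p4, p5, p6, p7

For 600 × 800 inputs this yields P3 at 75 × 100, P4 at 38 × 50, P5 at 19 × 25, P6 at 10 × 13 and P7 at 5 × 7, each with 256 channels.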
The output matrices P3, P4, P5, P6 and P7 of the feature pyramid network are then used as the input of the region proposal network (RPN); the features of each output matrix enter the RPN processing layer separately. Inside the RPN, a 3 × 3 sliding window traverses the whole input feature map, anchors of preset sizes are generated at the centre of each window position, and the result, after a fully connected layer, is fed to two branches. Branch one is the preliminary target-box regression branch: a convolution layer with a 1 × 1 kernel and 36 channels, stride 1, VALID padding, which yields the rough position of the target box. Branch two is the foreground/background classification branch: the previous layer's output is put into a convolution layer with a 1 × 1 kernel and 18 channels, stride 1, VALID padding; the output matrix is reshaped and passed through a softmax activation layer to judge whether the content framed by the target box is background. The outputs of the two branches are fed into a proposal layer, where the candidate boxes are first sorted by foreground score, redundant candidates are deleted by Non-Maximum Suppression (NMS), and a region-of-interest alignment layer (ROIAlign) then produces a series of equally sized region proposal boxes and their features, i.e. feature maps of the fixed size 7 × 7. In this embodiment, since the output of the region proposal network is not of fixed size, the ROIAlign processing quickly yields fixed-size feature maps.
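A hedged sketch of such an RPN head; the anchor count of 9 per position is an assumption consistent with the 36-channel (4 × 9) regression branch and the 18-channel (2 × 9) classification branch:

    import torch
    import torch.nn as nn

    class RPNHead(nn.Module):
        def __init__(self, in_ch: int = 256, num_anchors: int = 9):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, in_ch, 3, padding=1)   # 3x3 sliding window
            self.bbox = nn.Conv2d(in_ch, num_anchors * 4, 1)    # rough box offsets
            self.score = nn.Conv2d(in_ch, num_anchors * 2, 1)   # fg/bg logits

        def forward(self, feat):
            h = torch.relu(self.conv(feat))
            deltas = self.bbox(h)                    # N x 36 x H x W
            logits = self.score(h)                   # N x 18 x H x W
            n, _, hh, ww = logits.shape
            # Reshape so softmax runs over the 2 (fg, bg) scores per anchor.
            probs = logits.view(n, 2, -1).softmax(dim=1).view(n, -1, hh, ww)
            return deltas, probs

Ranking by foreground score, NMS pruning and the fixed 7 × 7 pooling would then use, for example, torchvision.ops.nms and torchvision.ops.roi_align.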
Step 208: inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of the target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network.
In a specific implementation, the preset target detection network may include a classification subnet and a localization subnet. The classification subnet identifies the category of the target tissue in each image, i.e. the category of the target box; the localization subnet locates the precise coordinates of the target tissue in each image, giving its position. Taking a thyroid ultrasound video dataset as an example, the target tissue is the thyroid and the main neck structures around it. The input of both subnets is the series of equally sized region proposal boxes and their features. The first to fourth layers of the classification subnet and the localization subnet are identical: 3 × 3 convolution kernels, 256 channels, stride 1, SAME padding. For the localization subnet, the fifth layer is a fully connected layer whose output is the precise coordinates of the target box; for the classification subnet, the fifth layer is a fully connected layer whose output is the category of the target box. The preset segmentation network, which may also be called the segmentation subnet, takes the output matrix of the region proposal network as input and outputs the segmentation mask corresponding to the feature map. The mask is a binary black-and-white picture: areas with target tissue (thyroid and cervical structures) are white, and areas without target tissue are black. The number of pixels contained in the target tissue can be counted from the segmentation mask, and the size of the target tissue is then measured. Taking the thyroid as the target tissue, measuring its size makes it possible to judge whether it is swollen and thus whether the patient has a goitre.
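A hedged sketch of the two detection subnets: four 3 × 3, 256-channel convolutions over each 7 × 7 proposal feature, then one fully connected layer per branch. The class count and the sharing of the four convolutions between branches are assumptions:

    import torch
    import torch.nn as nn

    class DetectionHead(nn.Module):
        def __init__(self, roi: int = 7, num_classes: int = 10):
            super().__init__()
            layers = []
            for _ in range(4):                       # four shared 3x3 convs
                layers += [nn.Conv2d(256, 256, 3, padding=1), nn.ReLU()]
            self.shared = nn.Sequential(*layers)
            self.cls_fc = nn.Linear(256 * roi * roi, num_classes)  # class branch
            self.loc_fc = nn.Linear(256 * roi * roi, 4)            # box branch

        def forward(self, roi_feats):                # N x 256 x 7 x 7 per proposal
            h = self.shared(roi_feats).flatten(1)
            return self.cls_fc(h), self.loc_fc(h)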
In one embodiment, determining the size of the target tissue in each image through the preset segmentation network comprises:
step 228, acquiring the number of pixels per unit length and the segmentation mask corresponding to the feature map output by the preset segmentation network;
step 248, calculating the number of pixels contained in the target tissue in each image according to the segmentation mask corresponding to the feature map and the number of pixels per unit length;
step 268, determining the size of the target tissue in each image according to the number of pixels contained in the target tissue in each image and preset pixel proportion data.
In a specific implementation, the input of the preset segmentation network is the output matrix of the region proposal network. Its first four layers match the region-of-interest alignment layers of the classification and localization subnets, and their outputs are fed into four mask full-convolution layers with identical parameters: 3 × 3 kernels, 256 channels, stride 1, SAME padding. The resulting output is fed into a convolution layer with a 2 × 2 kernel, 256 channels and stride 2, using VALID padding; the output matrix of that layer is fed into a convolution layer with a 1 × 1 kernel, num_cls channels and stride 1, using VALID padding, to obtain the final segmentation mask, where num_cls is the number of categories in the dataset. For measuring target tissues such as the thyroid and neck structures, the number of pixels per unit length can be obtained by recognizing the scale bar in the image; the number of pixels belonging to the target tissue is then counted from the segmentation mask, and the sizes of the segmented thyroid and neck structures follow from the ratio. Specifically, the contour point coordinates of the thyroid and neck structures may be obtained from the segmentation subnet; when fitting these structures, binarization yields a binary image, and the perimeter and enclosed area of the contour curve are computed from the fitted contour point coordinates. The perimeter is the number of points in the fitted contour set, and the area is the number of pixels at positions enclosed by the contour. The number of pixels per centimetre is computed from the scale bar on the right side of the ultrasound image, the perimeter and area of the thyroid and neck structures are converted from pixel units into centimetres, and the size of the structures is thereby obtained.
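A hedged sketch of the measurement step with OpenCV; px_per_cm stands in for the pixels-per-centimetre value read off the image's scale bar, whose detection is not detailed here:

    import cv2
    import numpy as np

    def measure_tissue(mask: np.ndarray, px_per_cm: float):
        # Binarize the mask (white = target tissue, black = background).
        binary = (mask > 0).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        if not contours:
            return 0.0, 0.0
        c = max(contours, key=cv2.contourArea)        # largest connected region
        perimeter_cm = cv2.arcLength(c, closed=True) / px_per_cm
        area_cm2 = cv2.contourArea(c) / (px_per_cm ** 2)
        return perimeter_cm, area_cm2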
Step 210: inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues.
After the category and position of the target tissue in each image have been identified, each target tissue needs to be tracked to ensure the accuracy of the identification. Specifically, the category and position of the target tissue output by the localization and classification subnets can be input into a trained target tracking network, targets in consecutive frames are captured and matched, anomalies are checked for, and statistics are produced such as the categories of target tissue contained in the image dataset, the number corresponding to each category, and the total number of target tissues contained in the image dataset.
In this ultrasound image processing method, the ultrasound image dataset is processed to obtain fixed-size feature maps; the feature maps are input into a preset target detection network and a preset segmentation network respectively, so that the category and position of the target tissue in each image can be identified and its size measured; and the category and position of the target tissue in each image are input into a trained target tracking network for target tracking, giving the category data and number of the target tissues. The whole scheme enables real-time, automatic identification, segmentation, tracking and size measurement of tissues and organs, speeds up result processing, ensures data accuracy, and can effectively assist doctors in accurate screening and diagnosis of the thyroid and the main cervical tissues around it.
As shown in fig. 4, in one embodiment, inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues includes:
step 212, acquiring the category and position of the target tissue in the current frame image;
step 214, predicting the position of the target tissue in the next frame image through Kalman filtering based on the category and position of the target tissue in the current frame image;
step 216, judging whether the target tissues in the preceding and following frame images are the same target tissue;
step 218, assigning the same identification data to the same target tissue, and counting the category data and number of the target tissues.
In a specific implementation, the category and position of the target tissue in the current frame image can be taken from the per-image results obtained above. The position of the target tissue in the next frame image is then predicted by Kalman filtering, the next frame is detected by the target detector, and it is judged whether the target tissues in the two frames are the same target tissue. If so, the same identification data, i.e. the same ID (Identity), is assigned to that target tissue, and the categories of target tissue, the number per category and the total number of target tissues in the image dataset are counted on the basis of the dataset after ID assignment. In another embodiment, step 216 includes: calculating the Mahalanobis distance, the cosine distance and the intersection-over-union ratio between the target tissues in the preceding and following frame images, and judging on that basis whether they are the same target tissue. Specifically, the Mahalanobis distance, cosine distance and Intersection over Union (IoU) between the predicted target tissue and the actual detection box are calculated; a VGG16 network is introduced to extract features within the target box and the degree of appearance-feature matching is computed; the target boxes are then tracked and matched with the Hungarian algorithm. The cosine distance measures the similarity between the VGG16 features of the target boxes in the two frames: high similarity indicates the same target tissue. The intersection-over-union ratio combines the two target boxes of the two frames and measures their overlap: high overlap indicates that the boxes frame the same target tissue. Through this processing, the same ID is assigned to detections associated with the same target tissue. In this embodiment, the cosine-distance and IoU computations compensate for the shortcomings of the Mahalanobis distance and improve the accuracy of target capture, tracking and matching.
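A hedged sketch of the association step: motion (Mahalanobis), appearance (cosine distance over L2-normalized VGG16 features) and overlap (IoU) are combined into a single cost matrix and solved with the Hungarian algorithm via SciPy. The weighting of the three terms is an assumption; the patent does not give it.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        # Boxes as (x1, y1, x2, y2).
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def associate(maha, feats_trk, feats_det, boxes_trk, boxes_det):
        # maha: precomputed Mahalanobis distances, shape (tracks, detections).
        # feats_*: L2-normalized VGG16 appearance vectors, one row per box.
        cos = 1.0 - feats_trk @ feats_det.T
        ious = np.array([[iou(t, d) for d in boxes_det] for t in boxes_trk])
        cost = 0.3 * maha + 0.5 * cos + 0.2 * (1.0 - ious)   # assumed weights
        rows, cols = linear_sum_assignment(cost)              # Hungarian matching
        return list(zip(rows, cols))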
Specifically, the application discloses a method for automatically identifying, segmenting, capturing and measuring tissue structures in ultrasound images, comprising: acquiring a dataset, preprocessing it to obtain a preprocessed dataset, and inputting the preprocessed dataset into a trained deep convolutional neural network to obtain the category, position and size of the tissue structures contained in each ultrasound image. The deep convolutional neural network comprises, connected in sequence, the backbone network ResNet-50, the feature pyramid network FPN, the region proposal network RPN, a classification subnet, a localization subnet, a segmentation subnet and a tracking subnet. The invention can assist the sonographer in completing disease screening and diagnosis, accelerate screening, reduce the sonographer's workload and improve the consistency of ultrasound diagnosis.
Specifically, the deep convolutional neural network used in the present application is trained through the following steps (a hedged sketch of this training loop follows the list):
(1) acquiring a data set, sending it to a sonographer, and obtaining the data set labeled by the sonographer;
specifically, taking as an example 200 thyroid ultrasound videos obtained from the ultrasound devices of the main manufacturers on the market (including Baisheng, Siemens, Philips, Mindray, etc.), the videos are parsed into continuous video frames to obtain a thyroid ultrasound image frame set, which is then randomly divided into three parts: 80% as the training set (Train set), 10% as the validation set (Validation set) and 10% as the test set (Test set). The sonographer-labeled thyroid ultrasound image frames may be as shown in fig. 5(a), fig. 5(b), fig. 5(c), fig. 5(d) and fig. 5(e).
(2) Preprocessing the labeled data set to obtain a preprocessed data set;
specifically, the preprocessing process includes random cropping, random flipping, random transformation of image saturation and brightness, and the like.
(3) applying the K-means clustering algorithm to the data set labeled in step (1) to obtain the 3 ratio values that best represent the length and width of the key targets in the data set, used as the anchor ratios in the deep convolutional neural network;
(4) inputting a batch of data (for example 16 images) from the training-set part of the preprocessed data set obtained in step (2) into the deep convolutional neural network to obtain an inference output, and feeding the inference output together with the sonographer-labeled data set from step (1) into the loss function L_all of the deep convolutional neural network to obtain the loss value;
(5) using the loss value obtained in step (4), optimizing the loss function L_all of the deep convolutional neural network according to the Adam algorithm, thereby gradually updating the parameters of the network;
specifically, during optimization the learning rate lr is 0.001, the momentum ξ is 0.9, and the weight decay ψ is 0.004.
(6) repeating steps (4) and (5) in sequence for the remaining batches of the training-set part of the preprocessed data set obtained in step (2) until the iteration count is reached, thereby obtaining the trained deep convolutional neural network;
specifically, the training process in step (6) may comprise 120 cycles, with 100 iterations per cycle.
(7) verifying the trained deep convolutional neural network with the test-set part of the preprocessed data set obtained in step (2).
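A hedged Python sketch of steps (3) to (6): k-means over the labeled boxes' aspect ratios for the 3 anchor ratios, then Adam updates with the stated hyper-parameters (learning rate 0.001, momentum term 0.9, weight decay 0.004) over 120 cycles of 100 iterations. Here model, loss_all and loader are placeholders, and mapping the stated "impulse" to Adam's beta1 is an assumption.

    import numpy as np
    import torch
    from sklearn.cluster import KMeans

    def anchor_ratios(boxes_wh: np.ndarray, k: int = 3) -> np.ndarray:
        # boxes_wh: (N, 2) array of labeled box widths and heights.
        ratios = (boxes_wh[:, 0] / boxes_wh[:, 1]).reshape(-1, 1)
        centers = KMeans(n_clusters=k, n_init=10).fit(ratios).cluster_centers_
        return np.sort(centers.ravel())   # 3 representative aspect ratios

    def train(model, loss_all, loader, cycles=120, iters=100):
        opt = torch.optim.Adam(model.parameters(), lr=0.001,
                               betas=(0.9, 0.999), weight_decay=0.004)
        for _ in range(cycles):
            for i, (images, labels) in enumerate(loader):
                if i >= iters:            # 100 iterations per cycle
                    break
                opt.zero_grad()
                loss = loss_all(model(images), labels)  # L_all from step (4)
                loss.backward()
                opt.step()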
By inputting video frames of a thyroid ultrasound examination into the trained convolutional neural network, the categories and positions of the thyroid and neck structures are output automatically, together with the sizes measured by segmentation; schematic diagrams of the recognition, segmentation and capture results obtained after thyroid video frames are input into the deep convolutional neural network are shown in figs. 6 and 7.
It should be understood that although the steps in the flow charts of figs. 2 to 4 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 2 to 4 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be performed at different moments, and need not proceed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided an ultrasound image processing apparatus including: a dataset acquisition module 510, a data pre-processing module 520, a feature map acquisition module 530, a feature map processing module 540, and a target tracking module 550, wherein:
a dataset acquisition module 510 for acquiring an ultrasound image dataset.
The data preprocessing module 520 is configured to preprocess the ultrasound image data set to obtain a preprocessed image data set.
The feature map obtaining module 530 is configured to input the preprocessed image data set to a preset image processing network, so as to obtain a corresponding feature map.
The feature map processing module 540 is configured to input the feature map into a preset target detection network and a preset segmentation network, identify the type and the position of the target tissue in each image through the preset target detection network, and determine the size of the target tissue in each image through the preset segmentation network.
And a target tracking module 550, configured to input the category and the position of the target tissue in each image into a trained target tracking network for target tracking, so as to obtain category data and number of the target tissue.
In one embodiment, the target tracking module 550 is further configured to acquire the category and position of the target tissue in the current frame image, predict the position of the target tissue in the next frame image through Kalman filtering based on the category and position of the target tissue in the current frame image, judge whether the target tissues in the preceding and following frame images are the same target tissue, assign the same identification data to the same target tissue, and count the category data and number of the target tissues.
In one embodiment, the target tracking module 550 is further configured to calculate the Mahalanobis distance, cosine distance and intersection-over-union ratio between the target tissues in the preceding and following frame images, and judge whether they are the same target tissue.
In one embodiment, the feature map obtaining module 530 is further configured to input the preprocessed image dataset into a preset feature extraction network, extract corresponding initial feature maps, input the initial feature maps into a preset region proposal network to obtain proposal windows corresponding to the initial feature maps, map the proposal windows onto the initial feature map of the last layer of the preset region proposal network, and obtain fixed-size feature maps through ROIAlign layer processing.
In one embodiment, the feature map obtaining module 530 is further configured to input the preprocessed image dataset into the backbone network ResNet-50, extract corresponding feature data, and perform feature fusion on the extracted feature data with a feature pyramid network to obtain the corresponding initial feature maps.
In one embodiment, the feature map processing module 540 is further configured to acquire the number of pixels per unit length and the segmentation mask corresponding to the feature map output by the preset segmentation network, calculate the number of pixels contained in the target tissue in each image according to the segmentation mask corresponding to the feature map and the number of pixels per unit length, and determine the size of the target tissue in each image according to the number of pixels contained in the target tissue in each image and preset pixel proportion data.
In one embodiment, the data preprocessing module 520 is further configured to sequentially perform scaling, normalization, and random enhancement on the ultrasound image data set to obtain a preprocessed image data set.
For specific definition of the ultrasound image processing apparatus, reference may be made to the above definition of the ultrasound image processing method, which is not described herein again. The modules in the ultrasound image processing apparatus can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the ultrasound image data set and data of the category, position and size of the target tissue. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an ultrasound image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the ultrasound image processing method when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the ultrasound image processing method described above.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of ultrasound image processing, the method comprising:
acquiring an ultrasound image dataset;
preprocessing the ultrasound image dataset to obtain a preprocessed image dataset;
inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps;
inputting the feature maps into a preset target detection network and a preset segmentation network respectively, identifying the category and position of a target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network;
and inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues.
2. The method of claim 1, wherein inputting the category and position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and number of the target tissues comprises:
acquiring the category and position of the target tissue in a current frame image;
predicting the position of the target tissue in the next frame image through Kalman filtering based on the category and position of the target tissue in the current frame image;
judging whether the target tissues in the preceding and following frame images are the same target tissue;
and assigning the same identification data to the same target tissue, and counting the category data and number of the target tissues.
3. The method according to claim 2, wherein judging whether the target tissues in the preceding and following frame images are the same target tissue comprises:
calculating the Mahalanobis distance, the cosine distance and the intersection-over-union ratio between the target tissues in the preceding and following frame images, and judging on that basis whether they are the same target tissue.
4. The method of claim 1, wherein inputting the preprocessed image dataset into a preset image processing network to obtain corresponding feature maps comprises:
inputting the preprocessed image dataset into a preset feature extraction network, and extracting corresponding initial feature maps;
inputting the initial feature maps into a preset region proposal network to obtain proposal windows corresponding to the initial feature maps;
and mapping the proposal windows onto the initial feature map of the last layer of the preset region proposal network, and obtaining fixed-size feature maps through ROIAlign layer processing.
5. The method of claim 4, wherein inputting the preprocessed image data set into the preset feature extraction network and extracting a corresponding initial feature map comprises:
inputting the preprocessed image data set into a ResNet-50 backbone network, and extracting the corresponding feature data;
and performing feature fusion on the extracted feature data using a feature pyramid network layer to obtain the corresponding initial feature map.
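Recent torchvision releases (0.13+) bundle exactly this pairing, which makes for a compact sketch of claim 5; the input resolution and the choice of this particular library are assumptions.

```python
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-50 trunk whose stage outputs are fused by a feature pyramid network.
# weights=None keeps the sketch offline; a real system would load trained weights.
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)

frames = torch.randn(2, 3, 512, 512)   # toy batch of preprocessed images
pyramid = backbone(frames)             # OrderedDict of fused feature maps
for level, fmap in pyramid.items():
    print(level, tuple(fmap.shape))    # '0': (2, 256, 128, 128) ... 'pool'
```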
6. The method of claim 1, wherein determining the size of the target tissue in each image through the predetermined segmentation network comprises:
acquiring the number of pixels per unit length and the segmentation mask corresponding to the feature map output by the preset segmentation network;
calculating the number of pixels contained in the target tissue in each image according to the segmentation mask corresponding to the feature map and the number of pixels per unit length;
and determining the size of the target tissue in each image according to the number of pixels contained in the target tissue in each image and the preset pixel proportion data.
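The claim-6 computation reduces to counting mask pixels and scaling by the machine's pixels-per-unit-length calibration; the millimetre units in this sketch are an assumption.

```python
import numpy as np

def tissue_size_mm2(mask: np.ndarray, pixels_per_mm: float) -> float:
    """Turn a binary segmentation mask into a physical area.

    pixels_per_mm is the claim's 'number of pixels per unit length', read
    from the ultrasound machine's depth/scale setting.
    """
    pixel_count = int(mask.astype(bool).sum())    # pixels inside the tissue
    area_per_pixel = (1.0 / pixels_per_mm) ** 2   # mm^2 covered by one pixel
    return pixel_count * area_per_pixel

mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:200, 150:250] = 1                        # toy 100x100-pixel region
print(tissue_size_mm2(mask, pixels_per_mm=10.0))  # 100.0 mm^2
```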
7. The method of any of claims 1 to 6, wherein preprocessing the ultrasound image data set to obtain a preprocessed image data set comprises:
sequentially performing scaling processing, normalization processing and random enhancement processing on the ultrasound image data set to obtain the preprocessed image data set.
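A minimal sketch of the three claim-7 steps in order, using NumPy and OpenCV; the target resolution, the normalization scheme, and the particular random enhancements are assumptions, since the claim fixes only the sequence of operations.

```python
import numpy as np
import cv2  # OpenCV

def preprocess_frame(img: np.ndarray, size=(512, 512)) -> np.ndarray:
    """Scale, normalize, then randomly enhance one ultrasound frame."""
    # 1. Scaling to a fixed input resolution.
    img = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
    # 2. Normalization to zero mean / unit variance.
    img = img.astype(np.float32)
    img = (img - img.mean()) / (img.std() + 1e-8)
    # 3. Random enhancement: horizontal flip and brightness jitter (assumed).
    if np.random.rand() < 0.5:
        img = np.fliplr(img).copy()
    return img * np.random.uniform(0.9, 1.1)

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # toy grayscale frame
print(preprocess_frame(frame).shape)                        # (512, 512)
```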
8. An ultrasound image processing apparatus, characterized in that the apparatus comprises:
a dataset acquisition module for acquiring an ultrasound image dataset;
the data preprocessing module is used for preprocessing the ultrasonic image data set to obtain a preprocessed image data set;
the feature map acquisition module is used for inputting the preprocessed image data set into a preset image processing network to obtain a corresponding feature map;
the feature map processing module is used for respectively inputting the feature maps into a preset target detection network and a preset segmentation network, identifying the category and the position of a target tissue in each image through the preset target detection network, and determining the size of the target tissue in each image through the preset segmentation network;
and the target tracking module is used for inputting the category and the position of the target tissue in each image into a trained target tracking network for target tracking to obtain the category data and the number of the target tissues.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010704590.1A 2020-07-21 2020-07-21 Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium Active CN111862044B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010704590.1A CN111862044B (en) 2020-07-21 2020-07-21 Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010704590.1A CN111862044B (en) 2020-07-21 2020-07-21 Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111862044A true CN111862044A (en) 2020-10-30
CN111862044B CN111862044B (en) 2024-06-18

Family

ID=73000818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010704590.1A Active CN111862044B (en) 2020-07-21 2020-07-21 Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111862044B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180276821A1 (en) * 2015-12-03 2018-09-27 Sun Yat-Sen University Method for Automatically Recognizing Liver Tumor Types in Ultrasound Images
CN107169998A (en) * 2017-06-09 2017-09-15 西南交通大学 A kind of real-time tracking and quantitative analysis method based on hepatic ultrasound contrast enhancement image
CN108520518A (en) * 2018-04-10 2018-09-11 复旦大学附属肿瘤医院 A kind of thyroid tumors Ultrasound Image Recognition Method and its device
CN109241967A (en) * 2018-09-04 2019-01-18 青岛大学附属医院 Thyroid ultrasound automatic image recognition system, computer equipment, storage medium based on deep neural network
CN109816690A (en) * 2018-12-25 2019-05-28 北京飞搜科技有限公司 Multi-target tracking method and system based on depth characteristic
US20200210761A1 (en) * 2018-12-28 2020-07-02 Shanghai United Imaging Intelligence Co., Ltd. System and method for classification determination
CN109948712A (en) * 2019-03-20 2019-06-28 天津工业大学 A kind of nanoparticle size measurement method based on improved Mask R-CNN
CN110021014A (en) * 2019-03-29 2019-07-16 无锡祥生医疗科技股份有限公司 Nerve fiber recognition methods, system and storage medium neural network based
CN110490892A (en) * 2019-07-03 2019-11-22 中山大学 A kind of Thyroid ultrasound image tubercle automatic positioning recognition methods based on USFaster R-CNN
CN110599448A (en) * 2019-07-31 2019-12-20 浙江工业大学 Migratory learning lung lesion tissue detection system based on MaskScoring R-CNN network
CN110648327A (en) * 2019-09-29 2020-01-03 无锡祥生医疗科技股份有限公司 Method and equipment for automatically tracking ultrasonic image video based on artificial intelligence
CN111311578A (en) * 2020-02-17 2020-06-19 腾讯科技(深圳)有限公司 Object classification method and device based on artificial intelligence and medical imaging equipment
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LAIFA MA, ET AL.: "A Novel Deep Learning Framework for Automatic Recognition of Thyroid Gland and Tissues of Neck in Ultrasound Image", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 32, no. 9, pages 6113 - 6124 *
OLFA MOUSSA, ET AL.: "Thyroid nodules classification and diagnosis in ultrasound images using fine-tuning deep convolutional neural network", INT J IMAGING SYST TECHNOL., pages 1 - 11 *
XIANGQIONG WU, ET AL.: "CacheTrack-YOLO: Real-Time Detection and Tracking for Thyroid Nodules and Surrounding Tissues in Ultrasound Videos", IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, vol. 25, no. 10, pages 3812 - 3823, XP011881863, DOI: 10.1109/JBHI.2021.3084962 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381777A (en) * 2020-11-09 2021-02-19 深圳开立生物医疗科技股份有限公司 Image processing method and device, electronic equipment and storage medium
CN112614123A (en) * 2020-12-29 2021-04-06 深圳开立生物医疗科技股份有限公司 Ultrasonic image identification method and related device
CN112651960A (en) * 2020-12-31 2021-04-13 上海联影智能医疗科技有限公司 Image processing method, device, equipment and storage medium
CN112906545A (en) * 2021-02-07 2021-06-04 广东省科学院智能制造研究所 Real-time action recognition method and system for multi-person scene
CN113257392B (en) * 2021-04-20 2024-04-16 哈尔滨晓芯科技有限公司 Automatic preprocessing method for universal external data of ultrasonic machine
CN113257392A (en) * 2021-04-20 2021-08-13 哈尔滨晓芯科技有限公司 Automatic preprocessing method for universal external data of ultrasonic machine
CN113570567A (en) * 2021-07-23 2021-10-29 无锡祥生医疗科技股份有限公司 Method and device for monitoring target tissue in ultrasonic image and storage medium
CN113570594A (en) * 2021-08-11 2021-10-29 无锡祥生医疗科技股份有限公司 Method and device for monitoring target tissue in ultrasonic image and storage medium
CN114842238A (en) * 2022-04-01 2022-08-02 苏州视尚医疗科技有限公司 Embedded mammary gland ultrasonic image identification method
CN114842238B (en) * 2022-04-01 2024-04-16 苏州视尚医疗科技有限公司 Identification method of embedded breast ultrasonic image
WO2024093099A1 (en) * 2022-11-01 2024-05-10 上海杏脉信息科技有限公司 Thyroid ultrasound image processing method and apparatus, medium and electronic device
CN116128874A (en) * 2023-04-04 2023-05-16 中国医学科学院北京协和医院 Carotid plaque ultrasonic real-time identification method and device based on 5G
CN116128874B (en) * 2023-04-04 2023-06-16 中国医学科学院北京协和医院 Carotid plaque ultrasonic real-time identification method and device based on 5G

Also Published As

Publication number Publication date
CN111862044B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
CN111325739A (en) Method and device for detecting lung focus and training method of image detection model
CN111524137A (en) Cell identification counting method and device based on image identification and computer equipment
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN112365973B (en) Pulmonary nodule auxiliary diagnosis system based on countermeasure network and fast R-CNN
CN112102230B (en) Ultrasonic section identification method, system, computer device and storage medium
CN113077419A (en) Information processing method and device for hip joint CT image recognition
CN114332132A (en) Image segmentation method and device and computer equipment
CN111951215A (en) Image detection method and device and computer readable storage medium
CN115082487B (en) Ultrasonic image section quality evaluation method and device, ultrasonic equipment and storage medium
CN110738702B (en) Three-dimensional ultrasonic image processing method, device, equipment and storage medium
CN115797929A (en) Small farmland image segmentation method and device based on double-attention machine system
CN114693671B (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN113256670A (en) Image processing method and device, and network model training method and device
CN112001983A (en) Method and device for generating occlusion image, computer equipment and storage medium
CN113160199B (en) Image recognition method and device, computer equipment and storage medium
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
CN118015190A (en) Autonomous construction method and device of digital twin model
CN117036305A (en) Image processing method, system and storage medium for throat examination
CN116486304A (en) Key frame extraction method based on ultrasonic video and related equipment
CN113537407B (en) Image data evaluation processing method and device based on machine learning
CN113222985B (en) Image processing method, image processing device, computer equipment and medium
CN111325732B (en) Face residue detection method and related equipment
CN114565626A (en) Lung CT image segmentation algorithm based on PSPNet improvement
CN109934870B (en) Target detection method, device, equipment, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211116

Address after: 510515 No. 1023-1063, shatai South Road, Guangzhou, Guangdong

Applicant after: SOUTHERN MEDICAL University

Applicant after: HUNAN University

Address before: 410000 room 515a266, block BCD, Lugu business center, No. 199 Lulong Road, Changsha high tech Development Zone, Changsha, Hunan (cluster registration)

Applicant before: Changsha Datang Information Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20230505

Address after: 518000, 6th Floor, Building A3, Nanshan Zhiyuan, No. 1001 Xueyuan Avenue, Changyuan Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Province

Applicant after: Shenzhen Lanxiang Zhiying Technology Co.,Ltd.

Address before: No.1023-1063, shatai South Road, Guangzhou, Guangdong 510515

Applicant before: SOUTHERN MEDICAL University

Applicant before: HUNAN University

GR01 Patent grant